SQL Server Functions

Starting with the 2012 release, SQL Server ships with a robust set of functions that can be used to make code perform faster with fewer lines of code. These functions can be used in ETL processes to provide better error handling. An example of this is the TRY_PARSE function, which allows you to check whether a value can be converted to a given data type.
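For example, TRY_PARSE returns NULL instead of raising an error when a value cannot be converted, which lets an ETL step route bad rows rather than fail:

/* TRY_PARSE returns NULL when the conversion fails */
SELECT TRY_PARSE('2012-06-13' AS datetime2) AS ParsedDate,   -- converts successfully
       TRY_PARSE('not a date' AS datetime2) AS FailedParse   -- returns NULL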

Another example is the FIRST_VALUE() and LAST_VALUE() functions, which work against a set of data. These functions are very useful when looking for things like month-over-month averages and when doing calculations. The table below contains a list of the functions supported starting with 2012, along with some examples of how to use them.
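As a minimal sketch of these two window functions, assuming a hypothetical dbo.MonthlySales table with SaleMonth and Amount columns:

/* First and last monthly amounts across the full window (dbo.MonthlySales is hypothetical) */
SELECT SaleMonth,
       Amount,
       FIRST_VALUE(Amount) OVER (ORDER BY SaleMonth
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS FirstMonthAmount,
       LAST_VALUE(Amount) OVER (ORDER BY SaleMonth
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastMonthAmount
FROM dbo.MonthlySales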

Continue reading “SQL Server Functions”

Import File Logging Application

In this post, I will use T-SQL and C# to create an application that logs file metadata into a SQL Server table.

Often when processing data into and between data systems, you need a way to determine whether all the data in a file was added to your tables. I like to store file information in the database and use it to true up the data in my data systems. When a file is stored on a file system, it is stored with metadata that can provide very useful information, including the file name, creation time, last modified time, file path, and file size; you can also derive a row count. Collecting this information requires you to create a table inside the database to store the file metadata. You will also need to create an application to pull the metadata from the file system directories.

In the code below I used C# to create an application that accepts a directory path string. This path is used to search all subdirectories in the path and log details about the files contained in those folders into a table. The C# code below contains three methods that work together to log the file information into a table.

The ProcessFile method is used to get the file name, creation time, last access time, file size in bytes, and line count.

These data points are then passed to the LogToDB method. This method creates a SQL Connection to the database and executes a query to insert the data into a table named FileLogDetail.

To configure this to run, you must first create a table to store the file information. Execute the code below in SQL Server Management Studio.

CREATE TABLE [dbo].[FileLogDetail](
  [FileLogDetailID] [int] IDENTITY(1,1) NOT NULL,
  [FilePath] [varchar](5000) NULL,
  [FileName] [varchar](200) NULL,
  [FileCreateTime] [datetime] NULL,
  [FileLastAccessed] [datetime] NULL,
  [FileSize] [bigint] NULL,
  [LineCount] [int] NULL,
  [JobName] [varchar](100) NULL,
  [FileLogTime] [datetime] NULL DEFAULT (getutcdate())
) ON [PRIMARY]




After the table is created, you will need to create a C# console application. The code below is the source code needed.

Note: You will need to change the LogToDB method to connect to your SQL Server database.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Collections;
using System.Data.SqlClient;
using System.Data;


namespace DBAToolBelt
{
    class Program
    {
        static void Main(string[] args)
        {
            //string inputDir = args[0];          // optionally take the path from the command line
            string inputDir = @"C:\DataIn\";
            try
            {
                ProcessDirectory(inputDir);
            }
            catch (Exception ex)
            {
                // Write the failure to the console before rethrowing
                Console.WriteLine("Error processing directory: " + ex.Message);
                throw;
            }
        }
        // Process all files in the directory passed in, recurse on any directories 
        // that are found, and process the files they contain.
        public static void ProcessDirectory(string targetDirectory)
        {
            // Process the list of files found in the directory.
            string[] fileEntries = Directory.GetFiles(targetDirectory);
            foreach (string fileName in fileEntries)
                ProcessFile(fileName);

            // Recurse into subdirectories of this directory.
            string[] subdirectoryEntries = Directory.GetDirectories(targetDirectory);
            foreach (string subdirectory in subdirectoryEntries)
                ProcessDirectory(subdirectory);
        }

        // Insert logic for processing found files here.
        public static void ProcessFile(string path)
        {
            FileInfo info = new FileInfo(path);
            DateTime createTime = info.CreationTime;
            DateTime lastAccessTime = info.LastAccessTime;
            long fileSize = info.Length;                     // size in bytes
            string fileName = info.Name;
            int lineCount = File.ReadAllLines(path).Length;  // row count used to true up loads

            LogToDB(path, fileName, createTime, lastAccessTime, fileSize, lineCount);

            Console.WriteLine("FilePath: " + path + "|CreateTime: " + createTime + "|LastAccessTime: " + lastAccessTime
                                + "|FileSize: " + fileSize + "|FileName: " + fileName + "|LineCount: " + lineCount);

        }

        public static void LogToDB(string path, string fName, DateTime createT, DateTime lastAccessT, long fSize, int lCount)
        {
            // Parameterized insert: string concatenation breaks on file names containing quotes
            string query = "insert into FileLogDetail(FilePath,FileName,FileCreateTime,FileLastAccessed,FileSize,LineCount) "
                         + "values(@FilePath,@FileName,@FileCreateTime,@FileLastAccessed,@FileSize,@LineCount)";
            using (SqlConnection connection = new SqlConnection("server=localhost;" +   //Server Name 
                                                            "Trusted_Connection=yes;" +
                                                            "database=DBATools;" +      //Database Name
                                                            "connection timeout=30"))
            {
                using (SqlCommand command = new SqlCommand(query, connection))
                {
                    command.Parameters.AddWithValue("@FilePath", path);
                    command.Parameters.AddWithValue("@FileName", fName);
                    command.Parameters.AddWithValue("@FileCreateTime", createT);
                    command.Parameters.AddWithValue("@FileLastAccessed", lastAccessT);
                    command.Parameters.AddWithValue("@FileSize", fSize);
                    command.Parameters.AddWithValue("@LineCount", lCount);
                    command.Connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }

    }
}
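Once the application has run, the logged line counts can be trued up against the rows actually loaded from each file. The query below is a minimal sketch of that check; dbo.StagingOrders and its SourceFileName column are hypothetical stand-ins for whatever target table your load process fills:

/* Compare the logged line count for each file to the rows actually loaded */
SELECT f.[FileName],
       f.LineCount,
       COUNT(s.SourceFileName) AS LoadedRows,
       f.LineCount - COUNT(s.SourceFileName) AS MissingRows
FROM dbo.FileLogDetail f
LEFT JOIN dbo.StagingOrders s ON s.SourceFileName = f.[FileName]
GROUP BY f.[FileName], f.LineCount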

Conclusion

I created this solution as a way to get detailed information about the files on my file system. It is not the only way to log information about files, but it is one way, and I have found this code useful.


SQL Server Agent Monthly Job Run Time Averages

Sometimes, as an ETL developer or database administrator, you will need to gain insight into SQL Agent job execution times. This insight can be used to proactively monitor the processing times of the various jobs running within your data environment.

Information about job execution times is stored in the msdb database in the sysjobhistory table. This table has the start time and run duration, which I have used to create a report that shows the average job start and end times by month for all jobs running on an instance of SQL Server.
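For context, sysjobhistory stores run_date, run_time, and run_duration as integers (YYYYMMDD for the date, HHMMSS-style encodings for the times), which is why the procedure below reformats them before averaging. You can see the raw encoding with a quick query:

/* run_time 93015 means 09:30:15; run_duration 4500 means 00:45:00 */
SELECT TOP 10 run_date, run_time, run_duration
FROM msdb.dbo.sysjobhistory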

/**************************************************************************
Author: Tywan Terrell
Object type: Stored Procedure
Description: This procedure was created to find the average start and end 
       times for SQL Agent jobs. This procedure takes one parameter,
       @reporttype.
       
       @reporttype = 1 -> Monthly average by job.
       @reporttype = 2 -> Monthly average by step.

       Example Call
       exec MonthlyJobAvgTimes @reporttype = 1
       go
       exec MonthlyJobAvgTimes @reporttype = 2

***************************************************************************/
go
Create Procedure MonthlyJobAvgTimes( @reporttype int)
as
begin

if @reporttype = 1 
begin
IF OBJECT_ID('tempdb..#MONTHLYJOBAVGTIME') IS NOT NULL
    DROP TABLE #MONTHLYJOBAVGTIME

;WITH JOBHISTORY AS
(
 SELECT sj.name,
        sh.run_date,
        sh.step_name,
    CAST(CONVERT(VARCHAR(10),RUN_DATE)AS DATE) AS 'FORMATEDDATE',
        STUFF(STUFF(RIGHT(REPLICATE('0', 6) +  CAST(sh.run_time as varchar(6)), 6), 3, 0, ':'), 6, 0, ':') 'run_time',
        STUFF(STUFF(STUFF(RIGHT(REPLICATE('0', 8) + CAST(sh.run_duration as varchar(8)), 8), 3, 0, ':'), 6, 0, ':'), 9, 0, ':') 'run_duration (DD:HH:MM:SS)  '
    
FROM msdb.dbo.sysjobs sj
JOIN msdb.dbo.sysjobhistory sh ON sj.job_id = sh.job_id
)
, JOBHISTORYSECONDS AS (

SELECT *, SUBSTRING(CONVERT (VARCHAR,FORMATEDDATE),6,2) AS MONTHNUMBER	, substring(cast (FORMATEDDATE as varchar),1,8) +'01 00:00:00'	as FOD
    ,( SUBSTRING(RUN_TIME,1,2) *3600)+ (SUBSTRING(RUN_TIME,4,2)* 60) +SUBSTRING(RUN_TIME,7,2) AS 'RUN_START_TIME(SEC)'
    , (SUBSTRING([run_duration (DD:HH:MM:SS)  ],1,2)*3600*24)+ (SUBSTRING([run_duration (DD:HH:MM:SS)  ],4,2)*3600)+(SUBSTRING([run_duration (DD:HH:MM:SS)  ],7,2)*60)+SUBSTRING([run_duration (DD:HH:MM:SS)  ],10,2) AS [run_duration_SEC ]
FROM JOBHISTORY  
)
, MONTHLYJOBAVGTIME AS (

SELECT 
name,MONTHNUMBER, FOD ,AVG([RUN_START_TIME(SEC)])  AS M_AVG_START_SEC,AVG([run_duration_SEC ]) AS M_AVG_DURATION
FROM JOBHISTORYSECONDS JS
GROUP BY 
JS.name,JS.MONTHNUMBER,FOD
)

SELECT 'MONTHLY-AVERAGE' AS AVG_TYPE,*, DATEADD(S,M_AVG_START_SEC, FOD )AS M_AVG_START_TIME, DATEADD(S,M_AVG_DURATION, DATEADD(S,M_AVG_START_SEC, FOD )) AS M_AVG_END_TIME
INTO #MONTHLYJOBAVGTIME
FROM 
  MONTHLYJOBAVGTIME



SELECT name, AvgDesc, [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr, [5] AS May, [6] AS Jun, [7] AS Jul, [8] AS Aug, [9] AS Sept, [10] AS Oct, [11] AS Nov, [12] AS [Dec] 
FROM 
(SELECT name ,'StartTime' as AvgDesc,  CAST(MONTHNUMBER AS INT) AS MONTHNUMBER , (M_AVG_START_TIME)as M_AVG_START_TIME
   FROM #MONTHLYJOBAVGTIME) AS SOURCEPIVOT
PIVOT
(
  MAX(M_AVG_START_TIME)
  FOR MONTHNUMBER IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12] )
) AS PIVOTTABLE
union all 
--Endtime Pivot 
SELECT name, AvgDesc, [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr, [5] AS May, [6] AS Jun, [7] AS Jul, [8] AS Aug, [9] AS Sept, [10] AS Oct, [11] AS Nov, [12] AS [Dec] 
FROM 
(SELECT name ,'EndTime' as AvgDesc,  CAST(MONTHNUMBER AS INT) AS MONTHNUMBER ,  M_AVG_END_TIME FROM #MONTHLYJOBAVGTIME) AS SOURCEPIVOT
PIVOT
(
  MAX(M_AVG_END_TIME)
  FOR MONTHNUMBER IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12] )
) AS PIVOTTABLE
order by 1
end

if @reporttype = 2
begin
--

IF OBJECT_ID('tempdb..#MONTHLYJOB_STEPAVGTIME') IS NOT NULL
    DROP TABLE #MONTHLYJOB_STEPAVGTIME


;WITH JOBHISTORY AS
(
 SELECT sj.name,
        sh.run_date,
        sh.step_name,
    CAST(CONVERT(VARCHAR(10),RUN_DATE)AS DATE) AS 'FORMATEDDATE',
        STUFF(STUFF(RIGHT(REPLICATE('0', 6) +  CAST(sh.run_time as varchar(6)), 6), 3, 0, ':'), 6, 0, ':') 'run_time',
        STUFF(STUFF(STUFF(RIGHT(REPLICATE('0', 8) + CAST(sh.run_duration as varchar(8)), 8), 3, 0, ':'), 6, 0, ':'), 9, 0, ':') 'run_duration (DD:HH:MM:SS)  '
    
FROM msdb.dbo.sysjobs sj
  
inner JOIN msdb.dbo.sysjobhistory sh ON sj.job_id = sh.job_id
)
, JOBHISTORYSECONDS AS (

SELECT *, SUBSTRING(CONVERT (VARCHAR,FORMATEDDATE),6,2) AS MONTHNUMBER	, substring(cast (FORMATEDDATE as varchar),1,8) +'01 00:00:00'	as FOD	
    ,( SUBSTRING(RUN_TIME,1,2) *3600)+ (SUBSTRING(RUN_TIME,4,2)* 60) +SUBSTRING(RUN_TIME,7,2) AS 'RUN_START_TIME(SEC)'
    , (SUBSTRING([run_duration (DD:HH:MM:SS)  ],1,2)*3600*24)+ (SUBSTRING([run_duration (DD:HH:MM:SS)  ],4,2)*3600)+(SUBSTRING([run_duration (DD:HH:MM:SS)  ],7,2)*60)+SUBSTRING([run_duration (DD:HH:MM:SS)  ],10,2) AS [run_duration_SEC ]
FROM JOBHISTORY  
)
, MONTHLYJOB_STEPAVGTIME AS (

SELECT 
  name,step_name,FOD,MONTHNUMBER,AVG([RUN_START_TIME(SEC)])  AS M_AVG_START_SEC,AVG([run_duration_SEC ]) AS M_AVG_DURATION
FROM JOBHISTORYSECONDS JS
GROUP BY 
JS.name,step_name,FOD,JS.MONTHNUMBER
)

SELECT 'MONTHLY STEP AVERAGE' AS AVG_TYPE,*, DATEADD(S,M_AVG_START_SEC, FOD ) AS M_AVG_START_TIME, DATEADD(S,M_AVG_DURATION, DATEADD(S,M_AVG_START_SEC, FOD )) AS M_AVG_END_TIME
      into #MONTHLYJOB_STEPAVGTIME
FROM 
  MONTHLYJOB_STEPAVGTIME




SELECT name, step_name, TimeType, [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr, [5] AS May, [6] AS Jun, [7] AS Jul, [8] AS Aug, [9] AS Sept, [10] AS Oct, [11] AS Nov, [12] AS [Dec] 
FROM 
(SELECT name ,step_name,'StartTime' as TimeType,  CAST(MONTHNUMBER AS INT) AS MONTHNUMBER , replace(M_AVG_START_TIME,'1900-01-01','')as M_AVG_START_TIME
   FROM #MONTHLYJOB_STEPAVGTIME) AS SOURCEPIVOT
PIVOT
(
  MAX(M_AVG_START_TIME)
  FOR MONTHNUMBER IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12] )
) AS PIVOTTABLE
union all 
--Endtime Pivot 
SELECT name, step_name, TimeType, [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr, [5] AS May, [6] AS Jun, [7] AS Jul, [8] AS Aug, [9] AS Sept, [10] AS Oct, [11] AS Nov, [12] AS [Dec] 
FROM 
(SELECT name ,step_name,'EndTime' as TimeType,  CAST(MONTHNUMBER AS INT) AS MONTHNUMBER ,  M_AVG_END_TIME FROM #MONTHLYJOB_STEPAVGTIME) AS SOURCEPIVOT
PIVOT
(
  MAX(M_AVG_END_TIME)
  FOR MONTHNUMBER IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
) AS PIVOTTABLE
order by name,step_name,TimeType desc 
end 
end
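After creating the procedure, run it for the report type you need; these calls mirror the example calls in the header comment:

go
exec MonthlyJobAvgTimes @reporttype = 1
go
exec MonthlyJobAvgTimes @reporttype = 2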

results_monthlyaverage

I typically schedule this to run on the first day of the month to help me better understand how job processing times are trending month over month. I hope you find this code as helpful as I have.

Understanding T-SQL SELECT Statement

One of the most fundamental aspects of working with SQL Server is querying data stored inside a database. To query data from a database you use the SQL SELECT statement. There are two important things to understand about the SELECT statement. The first is syntax: the rules and structure of the SQL statement that allow it to be sent to the database and have results returned. The second is the “logical processing order” of SELECT statements. The order in which the SQL Server engine processes the clauses of a SELECT statement is different from the order in which they appear when you write a syntactically correct SELECT statement in SQL Server Management Studio.

SELECT statements can be simple but can become very complex based on the needs and storage of the data inside the database. You can find more information about the SELECT syntax on the Microsoft MSDN site.

Below you will find the basic syntax, along with the order in which each clause is logically processed:

T-SQL SYNTAX                                       LOGICAL PROCESSING
SELECT <column1, column2,…>                        FROM
[FROM <Table Name>]                                WHERE
[WHERE <Filter Conditions>]                        GROUP BY
[GROUP BY <Columns used in Grouping>]              HAVING
[HAVING <Filter Conditions on results frame>]      SELECT
[ORDER BY <Columns to be used for ordering>]       ORDER BY

The SELECT keyword is followed by the list of columns you want returned. This is followed by the FROM clause, where you provide the name of the table from which the SELECT list of columns is returned. This can be followed by a WHERE predicate that allows you to filter the rows returned by the SELECT based on certain conditions. The GROUP BY clause is used when an aggregate function, such as MAX, MIN, or SUM, appears in the SELECT statement. The HAVING clause is used as an additional layer of filtering based on conditions that are not known until the FROM, WHERE, and GROUP BY clauses have been processed by the SQL engine; the HAVING clause works on what is known as a results partition. Finally, the ORDER BY clause provides sorting of the result set, in either ascending or descending order.

Although the SELECT keyword comes first when writing a SQL statement, it is not the first clause processed by the SQL engine. In fact, the FROM clause is processed first, followed by the WHERE, GROUP BY, and HAVING clauses. Once these clauses have been processed, the SELECT list is processed, and finally the ORDER BY. Understanding the syntax of SQL Server and the logical processing of queries will help you write efficient queries that provide the results you need for a particular solution.

Below are some basic select examples:

/*  Select   */
Select ProductID,Name, ListPrice 
From 
  Production.Product

figure_1

/*  Select With Where Clause   */
Select ProductID,Name, ListPrice  
From 
	Production.Product
Where  ListPrice >0

figure_2

/*Select to get a count of product by price*/
Select ListPrice,count(ProductID) as [NumOfProduct]
From 
	Production.Product 
Where  ListPrice >0
Group by ListPrice 
Order by [NumOfProduct] desc 

figure_3

/*Select to get a count of product by price where the listprice 
  Has more than 5 products with same price*/
Select ListPrice,count(ProductID) as [NumOfProduct]
From 
	Production.Product 
Where  ListPrice >0
Group by ListPrice 

Having count(ProductID) > 5
Order by [NumOfProduct] desc 

figure_4

/* Select to get a count of product by price where the listprice 
Has more than 5 products with same price*/
Select ListPrice,count(ProductID) as [NumOfProduct]
From 
	Production.Product 
Where  ListPrice >0
Group by ListPrice 
Having [NumOfProduct] > 5
Order by [NumOfProduct] desc 

figure_5

The above query fails with the error message “Invalid column name ‘NumOfProduct’.” This error occurs because the HAVING clause is processed before the SELECT. When this query executes, the NumOfProduct column has not yet materialized, so the SQL engine is unaware that it exists and throws an error. This can be fixed by referencing the expression in the HAVING clause instead of the column alias.
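For reference, the corrected version repeats the aggregate expression in the HAVING clause; this is the same query shown above in figure_4, and it runs without error:

/* Corrected: reference count(ProductID) rather than the alias */
Select ListPrice,count(ProductID) as [NumOfProduct]
From 
	Production.Product 
Where  ListPrice >0
Group by ListPrice 
Having count(ProductID) > 5
Order by [NumOfProduct] desc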

Don’t forget to leave a comment.

Inserting Data Into SQL Server

Loading data is one of the most important aspects of working with any database system. A database is a container that stores information, and developers and applications need to ensure that the data is stored correctly and with the correct associations.

If you think about a typical logon screen, you enter your user name and password in order to gain access to an application or system. Imagine if you went to log on to Facebook or into your email and your username and password were not stored correctly in the database. Or worse, imagine if your bank account balance was not stored properly and you were given an account balance of $1 instead of your actual, much larger balance. We can all agree this would not be a good thing, hence the reason that loading data is a critical aspect of databases.

Data stored in a database typically comes from one of two sources: user input or a system application. System applications usually create data behind the scenes, while user input is data collected directly from users. When data is generated by the system, it is typically done programmatically, using code developed specifically for the data to be loaded into the database.

In this post I want to discuss two of the simplest ways that data can be loaded into SQL Server using T-SQL. After reading through this post and reviewing the examples, you will be able to load data into SQL Server. For the purpose of this post we will keep the examples simple and focus on the T-SQL INSERT statement. We are going to look at the different ways you can use the INSERT statement to load data into a database table.

Let's suppose you have a table in your database that contains a list of attributes (columns). The table name is “dbo.Users” and the columns in this table are UserID, FirstName, LastName, EmailAddress, and DateOfBirth. In the examples below we will review the different ways we can populate data into this table using T-SQL.

Example #1

In this first example we will create a table called “dbo.Users” and use the INSERT keyword with the VALUES keyword to load data into the table.

/* Create Table dbo.Users */

Create table dbo.Users
( UserID INT Identity(1,1),
  FirstName NVARCHAR(50),
  LastName NVARCHAR(50),
  EmailAddress NVARCHAR(100),
  DateOfBirth date 
)

/* Basic one row Insert  */
INSERT dbo.Users ( FirstName, LastName, EmailAddress, DateOfBirth)
VALUES ('Tywan', 'Terrell', 'tywan.terrell@live.com', '06/13/1985')

/* You can use the same syntax to insert multiple rows into the table by
   separating the row value lists with commas, as shown below */
INSERT dbo.Users ( FirstName, LastName, EmailAddress, DateOfBirth)
VALUES ('Tywan', 'Terrell', 'tywan.terrell1@live.com', '05/13/1988'),
        ('Mike', 'Terrell', 'Mike.terrell@live.com', '04/24/1971'),
        ('Tim', 'Terrell', 'Tim.terrell@live.com', '03/15/1961')

/* The Select code below will show the data that was inserted from 
   the above Insert Statement you will notice that even though 
   the UserID column was not included in the INSERT Statement it 
   has a value in the database */
   
Select * from Users

Example #2

In this next example we will look at loading data into “dbo.Users” from existing tables in a database. In the code below we use a SQL statement to join the Person and EmailAddress tables to get the first name, last name, and email address, and then load the results of the SELECT into the “dbo.Users” table.

INSERT INTO [DBO].[Users]([FirstName], [LastName], EmailAddress)
SELECT DISTINCT P.FirstName, p.LastName, e.EmailAddress 
FROM 
   [AdventureWorks2012].Person.Person P
   inner join [AdventureWorks2012].Person.EmailAddress E on p.BusinessEntityID = e.BusinessEntityID

In this post we looked at two different ways to load data into SQL Server tables. There are many more ways to load data into tables; these are two of the simplest. In future posts we will look at more advanced ways of loading data into SQL Server tables.

T-SQL Tuesday

t_sql_tuesday_logo

Andy (@SQLBEK) posted the article T-SQL Tuesday 84: Growing New Speakers, encouraging people interested in becoming a speaker to blog on a topic they would want to present on. This got me thinking about a topic that would be ideal for me to get started on the road to becoming a speaker and a contributor to the community overall.

Being fairly new to blogging, I wanted to start with posts that cover beginner-level topics, and the thought of a blog series on T-SQL fundamentals popped into my head.

Why a T-SQL fundamental series?

Understanding the foundations and basic concepts is key to mastering anything, and SQL Server is no different. My experience over the years working with SQL Server has shown me that some database designs and applications have issues and bugs. We know that no application is without bugs; I am referring to the ones inflicted by developers due to a lack of understanding of the fundamental concepts of database and query design. In my career, I have come across database applications that needed to be redesigned due to issues like this. Bad design ultimately leads to bad data, and bad data can't be trusted. Understanding the fundamentals in this series will help beginners be successful in their journey.

What to expect from the series

This series is an opportunity for me to share what I have learned over the years with a bottom-up approach. The intended audience is individuals who are just starting out with SQL Server. The series will be broken up into three blog articles, as outlined below.

Introduction to SQL Server 2016

1. T-SQL history
2. Understanding data types
3. Referential integrity
4. Creating database objects

Inserting, Updating and Deleting Data

1. Inserting Data
2. Updating Data
3. Delete Data

Getting started with queries

1. Logical Query Processing
2. Basic Select Statements
3. Advanced Queries

This outlines the topics that will be covered at a high level. Each subtopic will be broken down into theory and example scripts. The reader will be able to complete the series and come away with a basic understanding of how to work with data in SQL Server.

Andy (@SQLBEK), thank you for giving me the added inspiration to get this blog started, and even more for offering to help people like myself. Your feedback on this post can and will be very valuable to me as I move forward and get more involved with the community.

Are you ready to learn SQL Server?

Here’s how I got started…

I wanted to share some of the resources that I have used to learn SQL Server over the years. I know that when we go to learn something new, finding good resources can be overwhelming and time consuming, so I have decided to share a few resources I use. I have been learning SQL Server since 2009 and have spent the last 5 years working on various database applications. So, I have read a lot of articles and used a lot of free information to develop my skill set. These are some of the resources that I would recommend for anyone interested in learning SQL Server. They are broken down into three categories.

Blogs

I follow several blogs; they have become a trusted resource for learning and for staying current on SQL Server technology. They also provide a way to stay current on SQL Server community news and events. Blogs post content that includes articles, videos, scripts, and freebies that you can use to learn SQL Server or to become better at your existing SQL Server job duties. Some of my favorite blogs include:

Pinal Dave – His blog is SQL Authority; he has been posting about SQL Server and other technologies for 10 years. I first discovered his blog in 2006, and the information there is accurate and very useful. If you're just starting out, you can find a lot of useful information to get started. I encourage anyone interested in learning SQL Server to check out this blog.

Brent Ozar – I recently discovered his blog, Brent Ozar Unlimited. I have found so much content that I currently have a backlog of articles and training I am working through. Brent is a SQL Server Master and is very active on his blog, providing a wide range of really good information and training about SQL Server. His current focus is in the area of performance tuning, and his expertise and ability to communicate complex topics have made me a daily visitor to his blog.

Courses & Books

I am more of an applied learner, so the best way for me to learn is by doing. Working with databases is like most other things in life: the more you practice doing it right, the better you will become at getting it right. Over my learning journey I have read books, taken online courses, and done a lot of practicing. I have compiled a few resources that I have found to be extremely helpful for learning SQL Server. I know there are a lot of resources out there, but these are, in my opinion, the best if you are looking to get started or validate your current understanding.

Microsoft SQL Server 2012 T-SQL Fundamentals was one of the first T-SQL books I read, and I loved it. The author conveys the fundamentals in a way that is easy to follow and understand. If you complete this book and do the exercises, you will start to develop your skills and master the fundamentals very quickly.

Training Kit (Exam 70-461): Querying Microsoft SQL Server 2012 (MCSA) is a good resource for understanding how to use SQL Server to extract data. The book focuses on the querying aspects of SQL Server 2012/2014. It also covers all areas of the certification exam, should you want to become certified on the product.

Microsoft Virtual Academy has a variety of courses in video format for pretty much every software product Microsoft produces. I would recommend going to this site and checking out some of the videos. It lets you create a learning plan, so you can find the courses you want to take, add them to your plan, and learn at your own pace. The Jump Start courses are great if you have never used the product before; Administering Microsoft SQL Server 2012 Jump Start is a great place to start.

Community

SQL Server has a large community that is very active. The community is there promoting SQL Server, helping solve problems, and increasing adoption of the product. If you're looking for a place to learn about what's going on in the field, there is a community event happening somewhere to keep you informed. Although some of these events cost money, there are a lot of free events that you can attend. If you want to know more or want to get involved, you can join the community. I have provided links to some of the communities below.

SQL PASS was founded to promote the SQL Server product and to train and grow the user community. PASS is a not-for-profit organization that hosts numerous events and has user groups active all around the world. SQL Saturday is a free event held on Saturdays in various cities across the world. When you register to attend a SQL Saturday, you can pick from a variety of sessions offered by speakers from across the PASS community, who are industry experts. I will be attending some of these events in the future, so if you are interested in attending let me know.

Chicago SQL Server User Group is a user group that typically meets once a month and has a guest speaker present on a SQL Server topic. They typically provide a free lunch and networking time after the presentation. The meeting location is typically the Microsoft office in downtown Chicago.

24 Hours of PASS is a great virtual event where PASS hosts 24 hours of SQL Server sessions in one day. The sessions run around the clock; you can join when you have time, and if you don't, you can view the recordings at your own convenience.

I hope you find this information informative and helpful. I am passionate about data and SQL Server technology and hope the content I provide on this blog helps you. If you have questions of any kind, leave a comment or send me an email.

Hello world!

Welcome to my blog!!!

Today is the day that I officially start bringing my years of technical experience and love for learning SQL Server to the world. This blog will be a one-stop learning shop for all things Microsoft SQL Server 2012 and higher. I will be covering fundamentals as well as advanced topics.

These topics will include querying SQL Server, SQL Server Integration Services, tutorials, tricks, big data, and so much more…

My goal is to have the site be very interactive, so I encourage everyone to ask questions and engage with the content. I will respond to readers' questions posted on the topics covered, or any random questions you may be pondering.

I really enjoy working with SQL Server and want to help anyone who wants to learn it or has questions about it. I look forward to the road ahead.

SQL IS FUN SO LET'S HAVE SOME!!!!!