August 20, 2014

SQL In the City Event



SQL in the City is a unique event organized by Red Gate for database developers and administrators. The event features some of the world’s top SQL Server speakers, who share their wisdom about database development, administration, delivery and best practices. You can also have one-to-ones with Red Gate developers, and enjoy networking opportunities with the SQL Server community. As if that wasn’t tempting enough, you can learn more about the Red Gate tools that can resolve the SQL Server issues you face every day, direct from the developers, and try out these tools in drop-in hands-on labs.

SQL in the City 2014 will be held in London in October and Seattle in November.

Who are the Speakers?

SQL Server Most Valuable Professionals (MVPs) Steve Jones and Grant Fritchey will be talking about best practices for database administration, development, and delivery. 10 additional speakers join the London line-up, including SQL Server MVP Ike Ellis and Visual Studio ALM MVP Brian Randell.
In Seattle, 15 additional speakers will present, including Microsoft Certified Masters Gail Shaw and Bob Pusateri.

For the full list of speakers, you can visit their website.


 

August 17, 2014

List Empty Tables in SQL Server



I was recently doing a clean up of my website database. I remembered creating some tables in my database but never adding any rows to them.

Here’s a simple query to list all empty tables in your SQL Server database. It uses a Dynamic Management View called sys.dm_db_partition_stats, which returns page and row-count information for every partition in the current database.

;WITH EmptyRows AS
(
   SELECT SUM(row_count) AS [TotalRows],
          OBJECT_NAME(OBJECT_ID) AS TableName
   FROM sys.dm_db_partition_stats
   WHERE index_id = 0 OR index_id = 1
   GROUP BY OBJECT_ID
)
SELECT * FROM EmptyRows
WHERE [TotalRows] = 0


OUTPUT
  
image

Note that the results from this view are only approximate. If there are any active transactions inserting or deleting rows, the count may not include them.


 

July 07, 2014

Temporary Table Variables in SQL Server



In a previous article, we explored Temporary Tables in SQL Server. We can also declare Temporary Table Variables which, like temporary tables, allow us to store temporary data. However, there are some differences between the two, which we will see shortly.

Let’s first create a simple Temporary Table variable and insert, select and update the data from the same. The script is as shown below –

temp-table-variable[2]

Execute the above script as a single batch and you will see the desired output. The table variable is declared with the ‘@’ sign.
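In case the screenshot above doesn’t render, here is a minimal sketch of such a script (the table and column names are illustrative, not the article’s originals):

```sql
-- Declare a table variable, then insert, update and select from it.
-- Must be executed as a single batch.
DECLARE @Employee TABLE (ID INT, Name NVARCHAR(50));

INSERT INTO @Employee VALUES (1, 'John'), (2, 'Jane');

UPDATE @Employee SET Name = 'Janet' WHERE ID = 2;

SELECT * FROM @Employee;
```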

Now try running the queries one by one. First declare the table variable. Then try to insert the data in a separate batch, and you will receive an error as shown below –

err4[2]

The lifetime of a table variable is only the batch in which it is declared, so it has a more limited scope than a temporary table.

There are other limitations you should understand while working with table variables compared to temporary tables. Some of them are described below –
  • You cannot use the SELECT * INTO statement with table variables.
  • You cannot use the sp_help stored procedure to see the metadata of table variables, like we did for temporary tables in our previous article. However, if you want to see the metadata for a table variable, you can make use of sys.tables and sys.columns in the context of the tempdb database.
  • When you are creating table variables using dynamic SQL, all the statements must be declared within the dynamic statement’s scope. Unlike temporary tables, where we can declare first and then use dynamic SQL to update, insert or select the data from the table, table variables only work within the scope of the dynamic SQL batch.
  • If you create a table variable with an identity column, you cannot insert the identity column value explicitly by using “SET IDENTITY_INSERT <TableName/TempTableName> ON/OFF”.
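As a quick sketch of the metadata point above (run as one batch; the variable name is illustrative), you can peek at a table variable’s metadata in tempdb like this:

```sql
-- Table variables are materialized in tempdb with system-generated
-- names; the most recently created objects appear first here.
DECLARE @Person TABLE (ID INT, Name NVARCHAR(50));

SELECT TOP (5) t.name AS table_name, c.name AS column_name, ty.name AS data_type
FROM tempdb.sys.tables t
INNER JOIN tempdb.sys.columns c ON t.object_id = c.object_id
INNER JOIN tempdb.sys.types ty ON c.user_type_id = ty.user_type_id
ORDER BY t.create_date DESC;
```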


 

June 24, 2014

Error Handling in SQL Server with THROW



Continuing our series on SQL Server 2012, today we will talk about THROW. In versions prior to SQL Server 2012, we used RAISERROR to generate error messages dynamically or from the sys.messages catalog.

Consider the following example

SELECT ROUND(800.0, -3)

On executing this statement, you get the following error:

image

because the value does not fit into the decimal data type.

You can use RAISERROR to raise a message

BEGIN TRY
    SELECT ROUND(800.0, -3)
END TRY
BEGIN CATCH
    DECLARE @ErrorMsg nvarchar(1000), @Severity int
    SELECT @ErrorMsg = ERROR_MESSAGE(),
           @Severity = ERROR_SEVERITY()
    RAISERROR (@ErrorMsg, @Severity, 1)
END CATCH


Note: The old RAISERROR syntax that specified the error number and message inline (RAISERROR 50005 ‘Exception Occurred’) is deprecated. The newer syntax RAISERROR(50005, 10, 1) lets you specify the message id, severity and state. For new applications, use THROW.
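To illustrate the message-id form (a sketch; message number 50005 and its text are just examples), you can first register the message in sys.messages with sp_addmessage and then raise it by id:

```sql
-- Register a custom message (ids 50001 and above are for user messages),
-- then raise it by message id, severity and state.
EXEC sp_addmessage @msgnum = 50005, @severity = 10,
    @msgtext = N'Exception Occurred';
-- (pass @replace = 'replace' to update a message that already exists)

RAISERROR (50005, 10, 1);
```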

However in SQL Server 2012, there’s a better way to do this without much effort – THROW. Consider the following code

BEGIN TRY
    SELECT ROUND(800.0, -3)
END TRY
BEGIN CATCH
    THROW
END CATCH


image

As you can see, with just one word THROW, we were able to handle the error with grace and get a result too.


 

June 07, 2014

Does my SQL Server Database Support Compression?



Some time back, Praveen Dabade wrote a nice article on Compressed Tables and Indexes in SQL Server, where he explained how ROW and PAGE compression is supported for tables and indexes. However, did you know that compression is an Enterprise Edition feature?

How do you determine which Enterprise Edition features are enabled on your database? Well, you can use the sys.dm_db_persisted_sku_features DMV to find out.

Learn more about Dynamic Management Views (DMVs) here

Here’s the query for the same

SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;


Running this query will list all edition-specific features that are enabled in the current database. Some of the database-changing features restricted to the SQL Server Enterprise or Developer editions are Compression, Partitioning and Change Data Capture.

This DMV is also useful in situations where you are planning to move a database from a higher to a lower edition, e.g. from Enterprise to Standard. Running the query will tell you if there are any Enterprise Edition features enabled that may not work when you move to the lower edition.

The DMV will return no rows if no features restricted to a particular edition are used by the database.


 

May 26, 2014

Activity Monitor in SQL Server



Activity Monitor is a handy tool in SQL Server to quickly see performance hot spots in the server, network and database activity. One of the most commonly performed tasks in Activity Monitor is to kill a stubborn connection that will not release its resources. In SQL Server 2008, this tool has undergone a major facelift from the 2005 version and helps in tracking your SQL server performance even better than before.

To view Activity Monitor in SQL Server 2008, right-click the instance name and choose Activity Monitor or click the Activity Monitor icon on the standard toolbar.

image

In SQL Server 2005, Activity Monitor can be viewed in SSMS by connecting to the server with Object Explorer, expanding ‘Management’, and then double-clicking ‘Activity Monitor’.

Once opened, the Activity Monitor Dashboard View contains four graphs which can help identify any abnormal activity in the database. The graphs include % Processor Time (SQL Server), Waiting Tasks, Database I/O and Batch Requests. The default graph refresh rate is 10 seconds, but you can change that easily by right-clicking any of the four graphs and selecting the appropriate refresh interval.

This dashboard is very helpful for getting a quick performance snapshot without the need to use another monitoring tool for the same purpose.

SQL Activity Monitor

This dashboard also contains expandable/collapsible panes to view detailed information about Processes, Resources, I/O and Expensive Queries. Just click on the expand button to the right of each pane to view the detailed information. Here’s a quick overview of the different graphs and what they show.
Processor Time - The percentage of elapsed time that the processor spends to execute non-idle threads for the instance across all CPUs. The Processes pane gives information on what processes are running, what resources are being utilized etc. You can even right-click a process and choose Trace Process in SQL Server Profiler. Presto!
 
Waiting Tasks - The number of tasks that are waiting for processor, I/O, or memory resources. The Resource Waits pane details the processes waiting for other resources on the server. It shows the latest information from several DMVs, such as sys.dm_os_wait_stats.
 
Database I/O – Information on data and log files for system and user databases. Provides the transfer rate in MB/Sec, of data from memory to disk, disk to memory, or disk to disk. The pane contains information to quickly detect a contention in disk I/O.
 
Batch Requests/sec - The number of SQL Server batches that are received by the instance. The Expensive Queries pane lets you find and tune expensive queries. You can even right-click any query and view its graphical execution plan. Also see Find the Most Time Consuming Code in your SQL Server Database.

Note: To view Activity Monitor in SQL Server, a user must have the VIEW SERVER STATE permission. Also make sure you close the Activity Monitor tool in SSMS when it is not required.
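The Resource Waits pane summarizes data you can also query directly. A sketch:

```sql
-- Top wait types by accumulated wait time since the instance started
SELECT TOP (10) wait_type, waiting_tasks_count,
       wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```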


 

May 12, 2014

Using SUM OVER in SQL Server to Get % Sales



In some of our previous posts, Aggregates without GroupBy in SQL Server and SQL Server 2012 - Running total with SUM function, we have seen how to use SUM and OVER to calculate running totals and counts without using a GROUP BY clause.

Today we will use the SUM and OVER functions together to calculate the percentage (%) of sales.
Consider the following table:

Sum-Over-Sample

To calculate what % of the quantity sold each Product Code accounts for, use the following code with SUM and OVER –

SELECT [ProductCode], SUM(QTYSOLD) AS [Sold],
       100.0 * SUM(QTYSOLD) / SUM(SUM(QTYSOLD)) OVER () AS [%]
FROM SomeTable
GROUP BY [ProductCode]


The code is pretty simple to understand. We calculate the total items sold per product with SUM(QTYSOLD), while SUM(SUM(QTYSOLD)) OVER () gives the grand total across all products; dividing the two and multiplying by 100.0 yields the % each Product Code has sold.

OUTPUT

image


 

May 04, 2014

SQL Server Stored Procedures - Back to Basics



In this article, we will see how to create Stored Procedures in SQL Server. We will also explore Stored Procedures with INPUT/OUTPUT Parameters and Stored Procedures with Transactions and Cursors.
A Stored Procedure is a group of T-SQL statements compiled into a single execution plan. It offers various benefits like –
  • Modularity – Stored Procedures in SQL Server offer modularity, which allows us to divide the program/business logic into a number of groups that can be recombined and reused as per our requirements.
  • Easy Maintenance – The required business logic and rules can be enforced, and changed at any point of time, as stored procedures are a single point of control.
  • Reusability – Once you write a stored procedure, you can reuse it over and over again in any application.
  • Improved Performance – Stored procedures are compiled T-SQL blocks whose execution plans can be cached and reused.
Likewise, there are a number of benefits we can achieve at the database level by writing stored procedures in SQL Server. Stored procedures can be written with or without parameters, to change the output and execute the business logic based on conditional statements.

There are some limitations to stored procedures too, which I have briefly outlined at the end of this article.
Let’s try writing some stored procedures and explore their features. Open SQL Server Management Studio (SSMS) and open a New Query window. For this demonstration I will be using the Northwind database.
Here are the tables we will make use of for querying the data in our stored procedures –

USE Northwind
GO
--Tables to be used during Stored Procedures
SELECT * FROM Customers
SELECT * FROM Employees
SELECT * FROM Orders
SELECT * FROM [Order Details]
SELECT * FROM Products
GO

A Simple Stored Procedure

We will start by creating a stored procedure which will fetch all the order details with product name and supplier details. Let’s write this code in our SSMS Query window –

CREATE PROCEDURE FetchAllOrderDetails
AS
BEGIN
    SELECT O.OrderID,MONTH(O.OrderDate) Order_Month,
    P.ProductName,P.UnitPrice,P.UnitsInStock,
    S.CompanyName FROM Orders O
    INNER JOIN [Order Details] OD
    ON O.OrderID=OD.OrderID
    INNER JOIN Products P
    ON OD.ProductID=P.ProductID
    INNER JOIN Suppliers S
    ON P.SupplierID=S.SupplierID
END


EXEC FetchAllOrderDetails

The output of the above stored procedure is as follows –

stored-proc-output

Stored Procedure with a Parameter

Now we will write another stored procedure to fetch the product details and category details of the products purchased by the customer. We will input a customer ID to our stored procedure.

CREATE PROCEDURE CustomerProductDetails
(
    @p_CustomerID NVARCHAR(10)
)
AS
BEGIN
    SELECT CAT.CategoryName,CAT.[Description],
    P.ProductName,P.UnitPrice,P.UnitsInStock
    FROM Customers C INNER JOIN Orders O
    ON C.CustomerID=O.CustomerID
    INNER JOIN [Order Details] OD
    ON O.OrderID=OD.OrderID
    INNER JOIN Products P
    ON OD.ProductID=P.ProductID
    INNER JOIN Categories CAT
    ON P.CategoryID=CAT.CategoryID
    WHERE C.CustomerID=@p_CustomerID
END


EXEC CustomerProductDetails 'ALFKI'

The output of above stored procedure is as shown below –

sql-sp-result

Stored Procedure with Two Parameters

Let’s write one more stored procedure, with two input parameters, which will fetch the details of the employee(s) who processed the maximum orders in a given month and year. The code is shown below –

CREATE PROCEDURE EmployeeOfTheMonth
(
    @p_Year INT,
    @p_Month NVARCHAR(10)
)
AS
BEGIN
    SELECT * FROM Employees WHERE EmployeeID IN
    (
        SELECT EmployeeID FROM
        (
            SELECT TOP (1) WITH TIES EmployeeID, COUNT(OrderID) TotalOrders -- WITH TIES returns all employees tied for the maximum
            FROM Orders
            WHERE YEAR(OrderDate)=@p_Year
            AND DATENAME(MONTH,OrderDate)=@p_Month
            GROUP BY EmployeeID
            ORDER BY TotalOrders DESC
        ) AS EmployeeOfTheMonth
    )
END


EXEC EmployeeOfTheMonth 1997,'June'

The output of above stored procedure is as shown below –

stored-proc-params

Stored Procedure with INPUT and OUTPUT parameter

We will now vary the stored procedure we just wrote, this time with an INPUT and two OUTPUT parameters. We will fetch the details of the products supplied by a given supplier ID, and return the supplier’s Contact Name and Company Name. Let’s write the below code in our query pad –

CREATE PROCEDURE FetchSupplierProducts
(
    @p_SupplierID INT,
    @p_SupplierName NVARCHAR(30) OUTPUT,
    @p_CompanyName NVARCHAR(30) OUTPUT
)
AS
BEGIN
    SELECT P.ProductID,P.ProductName,P.UnitPrice FROM Products P INNER JOIN Suppliers S
    ON P.SupplierID=S.SupplierID WHERE S.SupplierID=@p_SupplierID
    SELECT @p_SupplierName=ContactName,@p_CompanyName=CompanyName FROM Suppliers
    WHERE SupplierID=@p_SupplierID
END


To test the stored procedure, write the following code –

DECLARE @v_ContactName NVARCHAR(30)
DECLARE @v_CompanyName NVARCHAR(30)
EXEC FetchSupplierProducts 1,@v_ContactName OUTPUT,@v_CompanyName OUTPUT
SELECT @v_CompanyName CompanyName,@v_ContactName SupplierName


The output of the above stored procedure is as shown below –

product-details

Stored Procedure with Transactions and Try-Catch Block

For the next demonstration, we will create three tables and add some dummy data to them. Write the following code to create the three tables and insert some data –

CREATE TABLE Dept
(
    DEPTNO INT PRIMARY KEY,
    DNAME VARCHAR(20),
    LOC VARCHAR(20)
)

CREATE TABLE Emp
(
    EMPID INT PRIMARY KEY,
    ENAME VARCHAR(20),
    JOB VARCHAR(20),
    MGRNO INT,
    SAL DECIMAL(8,2),
    DEPTNO INT REFERENCES DEPT(DEPTNO)
)

CREATE TABLE UpdatedSalTable
(
    EMPID INT PRIMARY KEY,
    ENAME VARCHAR(20),
    JOB VARCHAR(20),
    MGRNO INT,
    SAL DECIMAL(8,2),
    DEPTNO INT REFERENCES DEPT(DEPTNO)
)

INSERT INTO Dept VALUES(10,'SALES','NORTH')
INSERT INTO Dept VALUES(20,'ACCOUNTS','SOUTH')
INSERT INTO Dept VALUES(30,'PRODUCTION','WEST')
INSERT INTO Dept VALUES(40,'TRAVEL','EAST')

INSERT INTO Emp VALUES(1008,'IIII','VP',NULL,1200,10)
INSERT INTO Emp VALUES(1000,'AAAA','MANAGER',1008,3200,10)
INSERT INTO Emp VALUES(1001,'BBBB','Sales Rept',1000,2200,10)
INSERT INTO Emp VALUES(1002,'CCCC','Account Mgr',1008,4200,20)
INSERT INTO Emp VALUES(1003,'DDDD','Analyst',1002,5000,20)
INSERT INTO Emp VALUES(1004,'EEEE','Analyst',1002,5000,20)
INSERT INTO Emp VALUES(1005,'FFFF','Field Manager',1008,7200,30)
INSERT INTO Emp VALUES(1006,'GGGG','Prod Eng',1005,3200,30)
INSERT INTO Emp VALUES(1007,'HHHH','Site Eng',1005,4200,30)
SELECT * FROM Dept
SELECT * FROM Emp
GO


Now create a stored procedure which implements a transaction with error handling using a TRY-CATCH block. The stored procedure will update the salary of an employee if the location of the employee’s department is ‘SOUTH’, and commit the transaction. It will also store the updated employee’s record in a separate table. Let’s write the following code in our query pad –

CREATE PROCEDURE UpdateEmployeeSalary
(
    @p_EmployeeID INT
)
AS
BEGIN
    DECLARE @v_Location NVARCHAR(10)
    DECLARE @v_DeptID INT
    DECLARE @UpdateSal NVARCHAR(32)='SalaryUpdateTran' -- transaction names are limited to 32 characters
    SELECT @v_DeptID = DEPTNO FROM Emp WHERE EMPID=@p_EmployeeID
    SELECT @v_Location=LOC FROM Dept WHERE DEPTNO=@v_DeptID
    BEGIN TRY
        BEGIN TRAN @UpdateSal
            IF(UPPER(@v_Location)='SOUTH')
                BEGIN
                    UPDATE Emp SET SAL=SAL+1000 WHERE EMPID=@p_EmployeeID
                    INSERT UpdatedSalTable
                    SELECT * FROM EMP WHERE EMPID=@p_EmployeeID                   
                END
            ELSE
                BEGIN
                    PRINT 'NO UPDATES'
                END
        COMMIT TRAN @UpdateSal
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE(),ERROR_NUMBER(),ERROR_SEVERITY()
        ROLLBACK TRAN @UpdateSal
    END CATCH
END


Let’s test our stored procedure by writing the following code –

EXEC UpdateEmployeeSalary 1002
SELECT * FROM UpdatedSalTable


The output of updated employee is as shown below –

update-statements

Stored Procedure using a Cursor

The next stored procedure we will write makes use of a CURSOR to modify rows one by one. The stored procedure fetches each employee and checks if the salary of the employee is greater than the manager’s salary. If it is, the job of the employee is updated to Manager. Wish that happened in real life too eh! Let’s write the following code in our query pad –

CREATE PROCEDURE UpdateJobOfWorker
AS
BEGIN
DECLARE @UpdateSal NVARCHAR(32)='SalaryUpdateTran' -- transaction names are limited to 32 characters
BEGIN TRY
        BEGIN TRAN @UpdateSal
            DECLARE @ENO INT
            DECLARE complex_cursor CURSOR FOR
                SELECT WORKER.EMPID
                FROM dbo.EMP AS WORKER
                WHERE SAL>
                     (SELECT SAL
                      FROM dbo.EMP AS MANAGER
                      WHERE WORKER.MGRNO = MANAGER.EMPID)
            OPEN complex_cursor;
            FETCH NEXT FROM complex_cursor INTO @ENO;
            WHILE (@@FETCH_STATUS=0)
                BEGIN
                    SELECT @ENO
                    UPDATE dbo.EMP
                    SET JOB = 'MANAGER'
                    WHERE EMPID=@ENO;
                    FETCH NEXT FROM complex_cursor INTO @ENO;
                END
            CLOSE complex_cursor;
            DEALLOCATE complex_cursor;
        COMMIT TRAN @UpdateSal
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE(),ERROR_NUMBER(),ERROR_SEVERITY()
        ROLLBACK TRAN @UpdateSal
        CLOSE complex_cursor;
        DEALLOCATE complex_cursor;
    END CATCH
END


Test the stored procedure  –

EXEC UpdateJobOfWorker

Some Stored Procedure Limitations

Two limitations come to mind. One is that using too many stored procedures can put extra load on the server, since it increases the amount of processing the server has to do. The other is that SQL Server stored procedures do not follow the ANSI-99 standard, so porting them to a different database requires some rework.
A general rule of thumb while using stored procedures is to not use them merely to implement business logic or to perform simple CRUD operations.
And that’s it. In this article, we saw the benefits of writing stored procedures through various examples: stored procedures without and with INPUT/OUTPUT parameters, a stored procedure with transactions, and a stored procedure using a CURSOR. Hope you enjoyed reading the article!


 

April 12, 2014

Memory Optimized Tables and Transactions in SQL Server 2014



In one of our previous articles “SQL Server 2014 – InMemory Table, Indexes and Stored Procedures”, we saw how to create in-memory tables and indexes. We also saw how to use stored procedures to interact with these tables.

In-memory tables are capable of storing multiple versions of a row. When you perform transactions on these tables, internally they use snapshot-based transaction isolation. With memory-optimized tables, we always assume that transactions will optimistically commit their operations.

As we don’t have locks, we don’t have a blocking mechanism in memory-optimized tables. Given these points, the question arises: what if there are conflicts? For example, say we have two transactions. The first transaction writes a row, and the same row is being written by another transaction as well. The conflict is detected, and the second transaction will always fail.

In the scenario of two transactions, you can read the old version of the row while a transaction is modifying it. But if an old version of the row is not available, the read simply fails.

There are a number of isolation levels supported by transactions, as described below –

· Snapshot Isolation – In this isolation level, reads see a consistent snapshot taken at the start of the transaction, and writes are always consistent.

· Repeatable Read – In this isolation level, we have a guarantee that the rows we read during the transaction are unchanged at the end of the transaction; other transactions will not be able to change them.

· Serializable – In this isolation level, the transaction is validated before commit as if it had scanned the entire table again; if a new matching row has been inserted in the meantime, the transaction fails.

Consider a few guidelines while working with Memory Optimized Table transactions –
  • Always declare which isolation level you want, as described above.
  • In case of transaction failure, write logic in your transactions for handling conflicts and validation failures.
  • Try to avoid long-running transactions.
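For memory-optimized tables, the isolation level is typically declared per statement with a table hint. A minimal sketch (the Employees table is from this article’s demo; the key value is illustrative):

```sql
-- Request REPEATABLE READ semantics for this statement via a table hint
BEGIN TRAN
SELECT City
FROM Employees WITH (REPEATABLEREAD)
WHERE EmployeeID = 88569   -- key value is illustrative
COMMIT TRAN
```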
In my previous article, I showed how to create a memory-optimized table, indexes and a stored procedure. I am using the same “Employees” table for demoing memory-optimized tables and transactions.

Let’s open SQL Server Management Studio and write the below script.

emlployee-script

In the script, query pad 1 contains T-SQL commands which create a table with the name Test and insert two rows into it. We then start a transaction which updates the row in the Test table that has ID = 1, without committing it. In the second query pad, we try to access the row which is under the transaction; you will not see the row until the first transaction is either committed or rolled back.
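The screenshot may not render here, so this is a sketch of what query pad 1’s script amounts to (column definitions and values are illustrative):

```sql
-- Query pad 1: a regular (disk-based) table
CREATE TABLE Test (ID INT PRIMARY KEY, Name NVARCHAR(50));
INSERT INTO Test VALUES (1, 'First'), (2, 'Second');

BEGIN TRAN
UPDATE Test SET Name = 'Updated' WHERE ID = 1
-- do not commit yet

-- Query pad 2: this read blocks until the transaction
-- above is committed or rolled back
-- SELECT * FROM Test WHERE ID = 1
```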

Try committing the transaction and you will see the updated row in your second query pad as shown below –
sql-result1

Now let’s compare the same behavior with the memory-optimized table we created in the previous article. Let’s write the following script in query pad 1 –

recordbeforeupd

Now update the above row and change the city from London to New York but do not commit it as shown below –

BEGIN TRAN
UPDATE Employees WITH (SNAPSHOT)
    SET City='New York'
    WHERE EmployeeID = 88569
-- COMMIT TRAN  (do not run the commit yet; keep the transaction open)


Now let’s try accessing the updated row from our second query pad and see the output. You will see the older version of the row, as shown below –

recordbeforeupd

Commit the transaction and you will see the updated row as shown below –

afterupdate

Let’s update the city from our first query pad without committing it, and then try to update the same row from our second query pad; then we will see the difference –

script2
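A sketch of the two competing transactions (the EmployeeID and city values are illustrative):

```sql
-- Query pad 1: update without committing
BEGIN TRAN
UPDATE Employees WITH (SNAPSHOT)
    SET City = 'London'
    WHERE EmployeeID = 88569

-- Query pad 2, while the transaction above is still open:
-- BEGIN TRAN
-- UPDATE Employees WITH (SNAPSHOT)
--     SET City = 'Paris'
--     WHERE EmployeeID = 88569
-- The second UPDATE fails immediately with a write-write conflict
```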

The output of the second query pad transaction execution is as shown below. The error says – The current transaction attempted to update a record that has been updated since this transaction…

error1

The error clearly says that the current transaction cannot commit, because the row it attempted to update has been modified by another transaction since this transaction started.

So in this situation, the second transaction will always fail. We can, however, write retry logic which validates the row and then modifies it.
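A sketch of such retry logic, assuming the conflict surfaces as error 41302 (the write-write conflict error whose message we saw above):

```sql
DECLARE @Retries INT = 3;
WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        UPDATE Employees WITH (SNAPSHOT)
            SET City = 'New York'
            WHERE EmployeeID = 88569   -- key value is illustrative
        COMMIT TRAN
        SET @Retries = 0;              -- success: exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        SET @Retries -= 1;
        IF ERROR_NUMBER() <> 41302 OR @Retries = 0
            THROW;                     -- not retryable, or out of retries
    END CATCH
END
```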

Summary – In this article, we saw how to implement transactions with memory-optimized tables in SQL Server 2014.


 

March 28, 2014

SQL Server 2014 - InMemory Tables, Indexes and Stored Procedures



In this article, we will explore how to create the memory-optimized tables, indexes and stored procedures introduced in Microsoft SQL Server 2014, which as of this writing is in CTP 1.

To enhance the performance of OLTP, SQL Server 2014 has introduced a number of new features. One of them is in-memory tables, indexes and stored procedures. In-memory tables and indexes are memory-optimized structures which allow rows to be accessed using memory pointers. Simply put, using memory pointers, we can now access the rows more efficiently.

When you create indexes on in-memory tables, they do not occupy disk space; the indexes exist only in memory. The in-memory data structures in SQL Server 2014 are lock-free data structures. So when you implement a transaction and two users try to modify the same row, the second transaction always fails.
The in-memory structures do not have separate tools or a separate query language. The feature is fully integrated into SQL Server, and thus you will use the same tools and language [T-SQL] for querying the data.

For the demonstrations in this article, I am using SQL Server 2014 CTP 1 and the Windows 8 OS. I have already created a database with the name “PurchaseOrderDB”. Let’s take a look at the following script –

CREATE TABLE Employees
(
    EmployeeID INT
        NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH(BUCKET_COUNT = 100000),
    EmployeeName NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxEmpName] HASH WITH (BUCKET_COUNT = 100000),
    City NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxCity] HASH WITH(BUCKET_COUNT = 100000),
    Country NVARCHAR(200) NOT NULL,
    Region NVARCHAR(10) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


In the above CREATE TABLE script, you might have noticed some new keywords. Let’s first understand these new keywords before we proceed –

1. MEMORY_OPTIMIZED = ON – this keyword makes the table a memory-optimized table. Also note that this keyword can be used only with the CREATE TABLE statement.

2. DURABILITY = SCHEMA_AND_DATA – this option says that both the schema and the data are durable/persisted. If the server crashes for some reason, you will still get the schema and data back. This is an optional setting, and the default is the value we are using in our CREATE TABLE statement – SCHEMA_AND_DATA. The other option is SCHEMA_ONLY.

3. NONCLUSTERED HASH WITH – the HASH index is a new index type which you can use only with in-memory tables. HASH indexes are used for point lookups. With a HASH index, BUCKET_COUNT is mandatory; it specifies how many buckets you want in the hash table.

4. Other indexes, known as secondary indexes (excluding the primary key index), can now be specified within CREATE TABLE. As you can see in the above script, we have indexes specified on the EmployeeName and City columns.

Note – A HASH index can be specified only within CREATE TABLE.

When you execute the above CREATE TABLE statement, internally SQL Server generates code for the table, compiles this code into a DLL, and loads the DLL into memory.

To access the tables and indexes in our PurchaseOrderDB database, we can create natively compiled stored procedures (SPs). In these SPs, we can write our business logic and the Data Manipulation Language (DML) statements as per our domain requirements. Let’s take a look at the syntax for creating a natively compiled stored procedure –

native-stored-procedure

The CREATE PROCEDURE script contains some new keywords. Let’s first understand them –

1. NATIVE_COMPILATION – This keyword marks the stored procedure as a natively compiled stored procedure.

2. SCHEMABINDING – This keyword is required, so that the tables accessed inside the SP cannot be dropped accidentally.

3. EXECUTE AS OWNER – Specifies the context in which the SP executes, that is, under which user this stored procedure will run. The possible values are OWNER, SELF or a specific user.

4. BEGIN ATOMIC – a required block in native stored procedures. The ATOMIC block automatically commits or rolls back the transaction. If there is already a transaction, it creates a SAVEPOINT; if there is no transaction, it starts a new one.

5. The WITH block – This block sets the session settings, which are fixed at SP creation time. The two required options are TRANSACTION ISOLATION LEVEL and LANGUAGE. You can set other options as well.
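Since the syntax above is shown as a screenshot, here is a minimal skeleton combining those keywords (the procedure name and body are illustrative; a full working example follows in Step 6):

```sql
CREATE PROCEDURE dbo.GetEmployees
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
          LANGUAGE = N'us_english')
    -- body: statements against memory-optimized tables,
    -- referenced with two-part names
    SELECT EmployeeID, EmployeeName FROM dbo.Employees
END
```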

Now start SQL Server 2014, if you haven’t already, and create a database with the name “PurchaseOrderDB”. Once the database is ready, write the following code in your query window –

Step 1 – Write the USE database command first as shown below –

USE PurchaseOrderDB
GO


Step 2 – Now alter the database using the following commands –

ALTER DATABASE PurchaseOrderDB ADD FILEGROUP [PODB_FG]
CONTAINS MEMORY_OPTIMIZED_DATA
GO

ALTER DATABASE PurchaseOrderDB ADD FILE
( NAME = N'PODB_FG', FILENAME = N'C:\PODBDATA')
TO FILEGROUP [PODB_FG]


The question is, why did we alter the database as shown above? The reason is that if you directly execute the CREATE TABLE script shown at the beginning of this article, you will receive the following error –

The Memory_Optimized_DATA filegroup does not exist or is empty. Memory optimized tables cannot be created for database until it has one Memory_Optimized_DATA filegroup that is not empty

The error clearly tells you that a memory-optimized table needs a separate filegroup. So we configure our database with the new filegroup, and give the path of a directory which SQL Server will use to store the DLLs created for the Data Definition Language (DDL) statements, as shown below –

sql-2014-dll

Step 3 – Now let’s write the CREATE TABLE script as shown below –

CREATE TABLE Employees
(
    EmployeeID INT
        NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH(BUCKET_COUNT = 100000),
    EmployeeName NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxEmpName] HASH WITH (BUCKET_COUNT = 100000),
    City NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxCity] HASH WITH(BUCKET_COUNT = 100000),
    Country NVARCHAR(200) NOT NULL,
    Region NVARCHAR(10) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


If you observe the code, we have also included COLLATE Latin1_General_100_BIN2. This is because string columns in a memory-optimized table must use a BIN2 collation.

sql-server-nonbin

For more information, please refer to the following MSDN link, which lists the limitations, restrictions and workarounds [if available] – http://msdn.microsoft.com/en-us/library/dn169381(v=sql.15).aspx

Once your table is created, you can query the table using our everyday T-SQL statements.

Step 4 – Write the following SELECT statement to query Employees table –

SELECT * FROM Employees

Step 5 – Let’s insert some records into our Employees table by using the following script –

BEGIN
    DECLARE @EmpID INT=1
    WHILE(@EmpID<100000)
        BEGIN
            INSERT INTO Employees VALUES(@EmpID,'DNCEmp','New York','USA','EAST')
            SET @EmpID+=1
        END
END


Now execute the query given in Step 4 and you will see the following result –

insert-employees

You can also use an ORDER BY clause to get all the rows in order, as shown below –

SELECT * FROM Employees ORDER BY EmployeeID

Step 6 – Write the memory optimized stored procedure as shown below –

CREATE PROCEDURE InsertEmployee
(
    @EmployeeID INT,
    @EmployeeName NVARCHAR(200),
    @City NVARCHAR(200),
    @Country NVARCHAR(200)
) WITH NATIVE_COMPILATION,
       SCHEMABINDING,
       EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH
    (
        TRANSACTION ISOLATION LEVEL = SNAPSHOT,
        LANGUAGE = 'us_english'
    )
    SET @EmployeeID = 1
    WHILE(@EmployeeID<100000)
        BEGIN
            INSERT INTO dbo.Employees VALUES(@EmployeeID,@EmployeeName,@City,@Country,'EAST')
            SET @EmployeeID+=1
        END
END


Execute the SP as shown below –

exec InsertEmployee 1,'DNC','London','UK'

Now see the result in your table by repeating Step 4

Abstract – In this article, we saw how to create in-memory or memory-optimized tables, indexes and stored procedures in SQL Server 2014. To access the rows, we used T-SQL statements, and to perform a DML [Insert] operation, we used a natively compiled stored procedure.


 
 

Copyright © 2009-2014 All Rights Reserved for SQLServerCurry.com by Suprotim Agarwal | Terms and Conditions