April 12, 2014

Memory Optimized Tables and Transactions in SQL Server 2014

0 comments


In one of our previous articles “SQL Server 2014 – InMemory Table, Indexes and Stored Procedures”, we saw how to create in-memory tables and indexes. We also saw how to use stored procedures to interact with these tables.

In-memory tables can store multiple versions of each row. When you perform transactions on these tables, the engine internally uses snapshot-based transaction isolation. With memory-optimized tables, the engine optimistically assumes that transactions will commit.

Since there are no locks, there is no blocking mechanism in memory-optimized tables. Given these points, a question naturally arises: what happens when there are conflicts? For example, suppose we have two transactions, and the first transaction writes a row that the second transaction is also writing. Such conflicts are detected, and the second transaction always fails.

In the scenario of two transactions, you can still read the old version of a row while another transaction is modifying it. But if no version of the row is available, the transaction simply fails.

Transactions on memory-optimized tables support a number of isolation levels, as described below –

· Snapshot Isolation – In this isolation level, reads are consistent as of the start of the transaction, and writes are always consistent.

· Repeatable Read – In this isolation level, we have a guarantee that the rows read at the start of the transaction are unchanged at the end of the transaction. Other transactions will not be able to change them.

· Serializable – In this isolation level, the transaction fails at commit time if a new row has been inserted into a range the transaction scanned, as if the transaction had re-scanned the entire table before commit to check for new rows.

Consider a few guidelines while working with memory-optimized table transactions –
  • Always declare which isolation level you want, as described above.
  • Write logic in your transactions to handle conflicts and validation failures, retrying when a transaction fails.
  • Avoid long-running transactions.
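The first guideline can be followed with a per-statement table hint, which is how the scripts later in this article declare their isolation level. A minimal sketch against the “Employees” table (the column value being updated is illustrative):

```sql
BEGIN TRAN
    -- The SNAPSHOT hint declares the isolation level for this statement
    UPDATE Employees WITH (SNAPSHOT)
        SET Region = 'WEST'
        WHERE EmployeeID = 1
COMMIT TRAN
```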
In my previous article, I showed how to create a memory-optimized table, indexes and a stored procedure. I am using the same “Employees” table for demonstrating memory-optimized tables and transactions.

Let’s open SQL Server Management Studio and write the below script.

[Screenshot: employee script]

In the script, query pad 1 contains a T-SQL command which creates a disk-based table named Test. We insert two rows into the table and then start a transaction which updates the row in the Test table with ID=1, without committing it. In the second query pad, we try to access the row which is under the transaction; you will not see the row until the first transaction is either committed or rolled back.
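The script in the screenshot is not reproduced here; a rough reconstruction from the description above (the Test table's column list is an assumption):

```sql
-- Query pad 1: a regular (disk-based) table
CREATE TABLE Test (ID INT PRIMARY KEY, Name VARCHAR(50))
INSERT INTO Test VALUES (1, 'Row One'), (2, 'Row Two')

BEGIN TRAN
    UPDATE Test SET Name = 'Updated Row' WHERE ID = 1
-- No COMMIT yet

-- Query pad 2: this SELECT blocks until the transaction above
-- is committed or rolled back
SELECT * FROM Test WHERE ID = 1
```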

Try committing the transaction and you will see the updated row in your second query pad as shown below –
sql-result1

Now let’s compare the same with Memory Optimized table which we have created in the previous article. Let’s write the following script in Query pad 1 –

recordbeforeupd

Now update the above row and change the city from London to New York but do not commit it as shown below –

BEGIN TRAN
UPDATE Employees WITH (SNAPSHOT)
    SET City='New York' 
    WHERE EmployeeID = 88569
COMMIT TRAN


Now let’s try accessing the updated row from our second query pad and see the output. You will see the older version of row as shown below –

recordbeforeupd

Commit the transaction and you will see the updated row as shown below –

afterupdate

Let’s update the city from our first query pad without committing it. And then try to update the same row from our second query pad and then we will see the difference –

script2

The output of the second query pad transaction execution is as shown below. The error says – The current transaction attempted to update a record that has been updated since this transaction…

error1

The above error clearly says that the current transaction cannot commit, because the row it attempted to update has been modified by another transaction since this transaction started.

So, in this situation, the second transaction will always fail. We can also write a retry logic which can validate the row and then modify it.
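Such retry logic can be sketched as follows, reusing the update from this article's example. Error 41302 is the update-conflict error shown above; the retry count of 3 is arbitrary:

```sql
DECLARE @retry INT = 3;
WHILE (@retry > 0)
BEGIN
    BEGIN TRY
        BEGIN TRAN
            UPDATE Employees WITH (SNAPSHOT)
                SET City = 'New York'
                WHERE EmployeeID = 88569
        COMMIT TRAN
        SET @retry = 0;  -- success, exit the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        SET @retry -= 1;
        IF (ERROR_NUMBER() <> 41302 OR @retry = 0)
            THROW;  -- not a write conflict, or retries exhausted
    END CATCH
END
```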

Summary – In this article we have seen how to implement a transaction with memory optimized tables in SQL Server 2014.


 
  Feedback:

March 28, 2014

SQL Server 2014 - InMemory Tables, Indexes and Stored Procedures

1 comments


In this article, we will explore how to create memory-optimized tables, indexes and stored procedures introduced in Microsoft SQL Server 2014. As of this writing, SQL Server 2014 is in CTP1.

To enhance the performance of OLTP workloads, SQL Server 2014 has introduced a number of new features. One of them is in-memory tables, indexes and stored procedures. In-memory tables and indexes are memory-optimized structures which allow rows to be accessed using memory pointers. Simply put, using memory pointers, we can now access rows more efficiently.

When you create indexes on in-memory tables, they do not occupy disk space; the indexes exist only in memory. The in-memory data structures in SQL Server 2014 are lock-free data structures, so when two users try to modify the same row in concurrent transactions, the second transaction always fails.
The in-memory structures do not have separate tools or a separate query language. This feature is fully integrated into SQL Server, so you will use the same tools and language [T-SQL] for querying the data.

For demonstrations in this article, I am using SQL Server 2014 CTP1 and the Windows 8 OS. I have already created a database with the name “PurchaseOrderDB”. Let’s take a look at the following script –

CREATE TABLE Employees
(
    EmployeeID INT
        NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH(BUCKET_COUNT = 100000),
    EmployeeName NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxEmpName] HASH WITH (BUCKET_COUNT = 100000),
    City NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxCity] HASH WITH(BUCKET_COUNT = 100000),
    Country NVARCHAR(200) NOT NULL,
    Region NVARCHAR(10) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


In the above CREATE TABLE script, you might have noticed some new keywords. Let’s first understand these new keywords before we proceed –

1. MEMORY_OPTIMIZED = ON – by using this keyword, you are creating memory optimized tables. Also note that this keyword can be used only with the CREATE TABLE statement.

2. DURABILITY = SCHEMA_AND_DATA – this option specifies that both the schema and the data are durable/persisted. If for some reason the server crashes, you will still get the schema and data back. This is an optional setting, and its default value is the one we are using in our CREATE TABLE statement – SCHEMA_AND_DATA. The other option is SCHEMA_ONLY.

3. NONCLUSTERED HASH WITH – the HASH index is a new index type which you can use only with in-memory tables. Hash indexes are used for point lookups. With a hash index, BUCKET_COUNT is mandatory; it specifies how many buckets you want in the hash table.

4. Other indexes, known as secondary indexes (excluding the primary key index), can now be specified within the CREATE TABLE statement. As you can see in the above script, we have indexes specified on the EmployeeName and City columns.

Note – The HASH INDEX can be specified only with CREATE TABLE.

When you write the above CREATE TABLE statement, internally SQL Server will generate the code for the table and then compile this code into a DLL. Once the DLL is ready, it will load it into the memory.

To access the tables and indexes from our PurchaseOrderDB database, we can create natively compiled stored procedures(SP’s). In these SP’s, we can write our business logic and the Data Manipulation Language (DML) statements as per our domain requirements. Let’s take a look at the syntax for creating a natively compiled stored procedure –

native-stored-procedure

The create procedure script contains some new keywords. Let’s first understand these new keywords –

1. NATIVE_COMPILATION – This keyword is used for making Stored Procedures as native stored procedures.

2. SCHEMABINDING – This keyword is mandatory; it ensures that a table accessed inside the SP cannot be dropped accidentally.

3. EXECUTE AS OWNER – specifies the context under which the SP gets executed, that is, which user this stored procedure will run as. The possible values are owner, self or a specific user.

4. BEGIN ATOMIC – a required block in native stored procedures. The ATOMIC block automatically commits or rolls back the transaction. If there is already a transaction, it creates a SAVEPOINT; if there is no transaction, it starts a new one.

5. The WITH block – This block sets the various session settings, which are fixed at SP creation time. The two required options are TRANSACTION ISOLATION LEVEL and LANGUAGE. You can set other options as well.

Now let’s start SQL Server 2014 if you haven’t already and create a database with the name “PurchaseOrderDB”. Once the database is ready, write the following code in your query window –

Step 1 – Write the USE database command first as shown below –

USE PurchaseOrderDB
GO


Step 2 – Now alter the database using the following commands –

ALTER DATABASE PurchaseOrderDB ADD  FILEGROUP [PODB_fg]
CONTAINS MEMORY_OPTIMIZED_DATA

GO
ALTER DATABASE PurchaseOrderDB ADD FILE
( NAME = N'PODB_FG', FILENAME = N'C:\PODBDATA')
TO FILEGROUP [PODB_FG]


The question is why did we alter the database as shown above? The reason is when you directly execute the script shown for CREATE TABLE in the beginning of this article, you will receive the following error –

The Memory_Optimized_DATA filegroup does not exist or is empty. Memory optimized tables cannot be created for database until it has one Memory_Optimized_DATA filegroup that is not empty

The error clearly tells you that a memory-optimized table needs a separate filegroup. So we are configuring our database with the new filegroup and giving the path of a directory which SQL Server will use to store the DLLs that are created for Data Definition Language (DDL) statements, as shown below –

sql-2014-dll

Step 3 – Now let’s write the CREATE TABLE script as shown below –

CREATE TABLE Employees
(
    EmployeeID INT
        NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH(BUCKET_COUNT = 100000),
    EmployeeName NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxEmpName] HASH WITH (BUCKET_COUNT = 100000),
    City NVARCHAR(200)
        COLLATE Latin1_General_100_BIN2 NOT NULL INDEX [IdxCity] HASH WITH(BUCKET_COUNT = 100000),
    Country NVARCHAR(200) NOT NULL,
    Region NVARCHAR(10) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)


If you observe the code, we have also included COLLATE Latin1_General_100_BIN2. This is because indexed string columns in a memory-optimized table must use a BIN2 collation.

sql-server-nonbin

For more information, please refer the following MSDN link which lists the limitations, restrictions and workaround [if available] – http://msdn.microsoft.com/en-us/library/dn169381(v=sql.15).aspx

Once your table is created, you can query the table using our everyday T-SQL statements.

Step 4 – Write the following SELECT statement to query Employees table –

SELECT * FROM Employees

Step 5 – Let’s insert some records into our Employees table by using the following script –

BEGIN
    DECLARE @EmpID INT=1
    WHILE(@EmpID<100000)
        BEGIN
            INSERT INTO Employees VALUES(@EmpID,'DNCEmp','New York','USA','EAST')
            SET @EmpID+=1
        END
END


Now execute the query given in Step 4 and you will see the following result –

insert-employees

You can also use order by clause to get all the rows in order as shown below –

SELECT * FROM Employees ORDER BY EmployeeID

Step 6 – Write the memory optimized stored procedure as shown below –

CREATE PROCEDURE InsertEmployee
(
    @EmployeeID INT,
    @EmployeeName NVARCHAR(200),
    @City NVARCHAR(200),
    @Country NVARCHAR(200)
) WITH NATIVE_COMPILATION,
       SCHEMABINDING,
       EXECUTE AS OWNER
AS
BEGIN ATOMIC
    WITH
    (
        TRANSACTION ISOLATION LEVEL = SNAPSHOT,
        LANGUAGE = 'us_english'
    )
    SET @EmployeeID = 1
    WHILE(@EmployeeID<100000)
        BEGIN
            INSERT INTO dbo.Employees VALUES(@EmployeeID,@EmployeeName,@City,@Country,'EAST')
            SET @EmployeeID+=1
        END
END


Execute the SP as shown below –

exec InsertEmployee 1,'DNC','London','UK'

Now see the result in your table by repeating Step 4

Abstract – In this article, we have seen how to create an in-memory or memory-optimized table, indexes and stored procedures in SQL Server 2014. To access the rows, we used T-SQL statements, and to perform a DML [INSERT] operation, we used a natively compiled stored procedure.


 

March 20, 2014

User Defined Functions in SQL Server

0 comments


User Defined Functions play an important role in SQL Server. User defined functions can implement complex logic, accept parameters and return data. Many times we have to write complex logic which cannot be expressed in a single query. In such scenarios, UDFs play an important role. For example, we can call a user defined function in a WHERE clause, or use a user defined function in a JOIN [where the UDF returns a result set].

SQL Server supports two types of User Defined Functions as mentioned below –

- Scalar Functions – functions which return a scalar/single value.
- Table Valued Functions – functions which return a row set of the SQL Server table datatype. Table valued functions can be written as –
  • Inline table-valued functions
  • Multi-statement table-valued functions
We will explore these functions today. I am using SQL Server 2012 for this demonstration, although you can use SQL Server 2005, 2008, 2008 R2 as well.

I have preconfigured Northwind database on my SQL Server instance. We will be using the following tables for creating different User Defined Functions –
  • Customers
  • Employees
  • Orders
  • Order Details
  • Products
Let’s start querying the above table. Open a new Query window and write the following commands –

tablequeries

Scalar Function

We will now create a scalar function, which returns the number of orders placed by a given customer. Write the following code in your query pad –

scalar1

The above function returns an integer value. To test this function, we will write some code as shown below –

scalartest1
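The function in the screenshots is not reproduced here; a sketch of a scalar function that returns the number of orders placed by a customer, with an assumed function name, could look like this:

```sql
CREATE FUNCTION dbo.GetOrderCount (@CustomerID NCHAR(5))
RETURNS INT
AS
BEGIN
    DECLARE @Count INT
    SELECT @Count = COUNT(*) FROM dbo.Orders WHERE CustomerID = @CustomerID
    RETURN @Count
END
GO

-- Scalar functions are invoked with their schema name
SELECT dbo.GetOrderCount('ALFKI') AS OrderCount
```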

Let us see another example which will fetch the number of orders processed by an employee for a given year. Write the following function in our query pad –

scalar2

We will test this function with different years for an employee as shown below –

scalartest2

Table Valued Functions

Now let’s try an inline table valued function. Inline table valued functions can return a row set using the SQL Server table datatype. You cannot perform additional logic in inline table valued functions. We will fetch the details of the products purchased by a customer as shown below –

tvf1

To test this example we will use a select statement as shown below –

tvftest1
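A sketch of such an inline table valued function, with an assumed name and column list:

```sql
CREATE FUNCTION dbo.GetCustomerProducts (@CustomerID NCHAR(5))
RETURNS TABLE
AS
RETURN
(
    SELECT DISTINCT p.ProductID, p.ProductName
    FROM dbo.Orders o
    JOIN dbo.[Order Details] od ON o.OrderID = od.OrderID
    JOIN dbo.Products p ON od.ProductID = p.ProductID
    WHERE o.CustomerID = @CustomerID
)
GO

-- A table valued function is queried like a table
SELECT * FROM dbo.GetCustomerProducts('ALFKI')
```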

Another example of the Inline Table Valued Function is as shown below –

tvf2

To test this function, we will use different years as shown below –

tvftest2
[Screenshot: query output]

We will now see a Multi-Statement Table Valued Function. This function can be used to perform additional logic within the function. The code is as shown below –

[Screenshot: multi-statement table valued function script]

To use the Multi-Statement Table Valued function, use this code –

tvfmultistatementtest
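A sketch of a multi-statement table valued function, with an assumed name and an assumed table variable definition:

```sql
CREATE FUNCTION dbo.GetOrderSummary (@Year INT)
RETURNS @Summary TABLE
(
    CustomerID NCHAR(5),
    OrderCount INT
)
AS
BEGIN
    -- Additional logic can run here before the rows are returned
    INSERT INTO @Summary
        SELECT CustomerID, COUNT(*)
        FROM dbo.Orders
        WHERE YEAR(OrderDate) = @Year
        GROUP BY CustomerID
    RETURN
END
GO

SELECT * FROM dbo.GetOrderSummary(1997)
```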

There are a couple of limitations you must consider before creating user defined functions. Some of them are listed below –
  • You cannot modify the state of the database using UDFs
  • Unlike stored procedures, a UDF can return only one single result set
  • UDFs do not support TRY-CATCH, @@ERROR or the RAISERROR function
Summary – User-defined functions are routines which perform calculations, receive one or more parameters and return either a scalar value or a result set. In this article, we saw how to create User Defined Functions. We also saw how to use Scalar functions and Table Valued Functions [Inline Table Valued Functions and Multi-Statement Table Valued Functions].

Download the source code of this article (Github)


 

March 01, 2014

Throw Statement in SQL Server 2012

0 comments


In SQL Server 2012, Microsoft introduced the THROW statement to throw an error/exception during the execution of a T-SQL program, which transfers execution control to a CATCH block.
Hypothetical Scenario - Customers in a Northwind database can place an order for a specific product or for multiple products. In case, the asking Order Quantity is more than the available stock for a specific product, a user defined exception should be raised and a notification sent to stock department to refill the stock for the required quantity for the order which needs to be processed.

To implement the above scenario, we will create a Stored Procedure, which will take the details of Customers using CustomerID, the Product using ProductID and required quantity. We will then check the available stock against the Quantity and accordingly we will throw the user defined exception, if the required quantity is more than the available quantity.

Earlier, we were using RAISERROR() function to throw the user defined error/exception in SQL Server. Let's see how we were using RAISERROR() statement to throw the error/exception.

For this demonstration, I am going to use the Northwind database; the scenario is also built against the same database.

CREATE PROCEDURE PlaceOrder
(
    @p_CustomerID VARCHAR(10),
    @p_ProductID INT,
    @p_RequiredQuantity INT
)
AS
BEGIN
    DECLARE @v_ProductName VARCHAR(50)
    DECLARE @v_UnitsInStack INT
    DECLARE @v_CompanyName VARCHAR(20)
    SELECT @v_ProductName=ProductName,@v_UnitsInStack=UnitsInStock FROM Products WHERE ProductID=@p_ProductID
    SELECT @v_CompanyName=CompanyName FROM Customers WHERE CustomerID=@p_CustomerID
    BEGIN TRY
        IF(@p_RequiredQuantity>@v_UnitsInStack)
        BEGIN
            PRINT 'Required Quantity is more than the available stock. Please refill the stock for the Order placed by the company - ' + @v_CompanyName;
            RAISERROR('Required Quantity is more than the available stock.',16,1);
        END
    END TRY
    BEGIN CATCH
        DECLARE @v_ErrorMessage VARCHAR(500) = ERROR_MESSAGE();
        RAISERROR(@v_ErrorMessage,16,1);
    END CATCH
END

Now let's use the THROW statement to throw the user defined exception instead of RAISERROR() function.

CREATE PROCEDURE PlaceOrder
(
    @p_CustomerID VARCHAR(10),
    @p_ProductID INT,
    @p_RequiredQuantity INT
)
AS
BEGIN
    DECLARE @v_ProductName VARCHAR(50)
    DECLARE @v_UnitsInStack INT
    DECLARE @v_CompanyName VARCHAR(20)
    SELECT @v_ProductName=ProductName,@v_UnitsInStack=UnitsInStock FROM Products WHERE ProductID=@p_ProductID
    SELECT @v_CompanyName=CompanyName FROM Customers WHERE CustomerID=@p_CustomerID
    BEGIN TRY
        IF(@p_RequiredQuantity>@v_UnitsInStack)
        BEGIN
            PRINT 'Required Quantity is more than the available stock. Please refill the stock for the Order placed by the company - ' + @v_CompanyName;
            THROW 54600,'Required Quantity is more than the available stock.',12
        END
    END TRY
    BEGIN CATCH
            THROW
    END CATCH
END

In the above stored procedure, we are throwing a user defined exception using the new THROW statement in SQL Server 2012.

THROW uses the following parameters -

THROW Error_number, message, state
If you want to re-throw the exception, then in a CATCH block you can use THROW without any parameters. This will re-throw the exception that was caught. Also note that the statement before a THROW must end with a semicolon.

The output of the RAISERROR() function is as below -

raiserroroutput

throw


 

February 22, 2014

Group Results and Rank Them in SQL Server using NTILE

0 comments


The SQL Server NTILE() function divides the result set into a specified number of evenly sized groups (approximately even, since the division may not be exact) and assigns a ranking value to each group.

The NTILE() function is similar to other ranking functions like the RANK() and DENSE_RANK(), except that NTILE takes one parameter of type int/bigint that specifies the number of groups into which each partition must be divided.

Let us see an example. We will query the Products table of the Northwind database and divide it into 5 groups ordered by the UnitsInStock:

NTILE SQL Server
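The query in the screenshot can be sketched as follows (the selected column list is an assumption):

```sql
SELECT ProductID, ProductName, UnitsInStock,
       NTILE(5) OVER (ORDER BY UnitsInStock) AS StockGroup
FROM dbo.Products
```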

OUTPUT

NTILE SQL Server Demo

On running the query, the results are divided into 5 groups, but because the total number of rows (48) is not divisible by the number of groups (5), NTILE puts 10 rows in each of the first three groups, and the remaining two groups have 9 rows each.


 

February 02, 2014

SQL Server Nested Sub Queries and Correlated Sub Queries

1 comments


In this article, we will see how to write SQL Server sub queries, including nested sub queries, multi-level sub queries and correlated sub queries. A sub query, also called an inner query, is a query nested within various statements like DML [INSERT/UPDATE/DELETE] and DQL [SELECT] statements.

In some cases, you can use either Sub Queries or JOINs. But there are some requirements which can be solved easily using only sub queries. When you write sub queries, you can use –

1. Nested Sub Query [Single or Multi-Level] – a query which gets executed only once and provides its output to the outer query is called a nested sub query.

2. Correlated Sub Query – a sub query/inner query which gets executed multiple times, once for each candidate row of the outer query, is called a correlated sub query. In a correlated sub query, you reference a column of the outer query inside the sub query.

Nested Sub Query [Single or Multi-Level]

For demonstrating the sub queries, we will make use of Northwind database. We will consider the following tables –

northwind-query
We will first try out some simple sub queries. Let’s write our first sub query as shown below –

subquery1
In the above query, we have to fetch all the orders for a given customer’s phone number. The relationship between the Customers and Orders table is based on CustomerID column. Hence, we are first finding the CustomerID for a given phone number and passing this CustomerID as an input to the outer query, which fetches all the orders.
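The screenshot's query can be sketched as follows; the phone number shown is illustrative:

```sql
SELECT * FROM dbo.Orders
WHERE CustomerID = (SELECT CustomerID
                    FROM dbo.Customers
                    WHERE Phone = '030-0074321')
```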

subquery2

In this query shown above, we are fetching all the products which are available under Category ‘Seafood’.

subquery3

In the above query, we are fetching all the products which are supplied by a company called ‘Pavlova, Ltd.’.

Subquery returning more than one value

Sometimes, a sub query may return more than one value. In that situation, we cannot use the regular comparison operators, as shown in this query –

subquery4

The above query will throw you an exception Subquery returned more than 1 value as shown below –

sql-error

When your sub query returns more than one value, then we can use some special operators for the comparison. These special operators are as listed below –

1. IN / NOT IN – This operator takes the output of the inner query after it executes, which can be zero or more values, and sends it to the outer query. The outer query then fetches all the matching [IN operator] or non-matching [NOT IN operator] rows.

2. ANY – [>ANY or <ANY] – The >ANY operator takes the list of values produced by inner query and fetches all the values which are greater than the minimum value of the list. The <ANY operator takes the list of values produced by the inner query and fetches all the rows which are less than the maximum value of the list.
  • For example -: >ANY(100,200,300), the ANY operator will fetch all the values greater than 100.
  • For example -: <ANY(100,200,300), the ANY operator will fetch all the values lesser than 300.
3. ALL – [>ALL or <ALL] – The >ALL operator takes the list of values produced by inner query and fetches all the values which are greater than the maximum of the list. The <ALL operator takes the list of values produced by the inner query and fetches all the rows which are less than the minimum value of the list.
  • For example -: >ALL(100,200,300), the ALL operator will fetch all the values greater than 300.
  • For example -: <ALL(100,200,300), the ALL operator will fetch all the values lesser than 100.
4. EXISTS – The EXISTS keyword produces a Boolean value [TRUE/FALSE]. This EXISTS checks the existence of the rows returned by the sub query.

Let’s see the queries for all the above operators as shown below –

query-in-operator

In the above queries, the first query returns all the orders placed by customers who belong to the city of London. The second query is the opposite of the first; it fetches the orders placed by all customers excluding those who belong to London.
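Those two queries can be sketched as:

```sql
-- Orders placed by customers in London
SELECT * FROM dbo.Orders
WHERE CustomerID IN (SELECT CustomerID FROM dbo.Customers WHERE City = 'London')

-- Orders placed by all customers except those in London
SELECT * FROM dbo.Orders
WHERE CustomerID NOT IN (SELECT CustomerID FROM dbo.Customers WHERE City = 'London')
```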

We will take a look at second operator ANY. The queries are as shown below –

anysome

The output of the ANY/SOME and ALL operators is shown below, along with the result of the sub/inner query –

anyallqueryoutput

ANY/SOME are the same. You can make use of any one of them.

We will now take a look at EXISTS operator. The query is as shown below –

sql-exists

The above query will fetch all the customers if there are more than 30 orders shipped in London city.

Correlated Sub Query

In a correlated sub query, the inner query gets executed multiple times for the outer query. In correlated sub query, the inner query depends on the outer query for its values. Let’s take an example of the same –

correlated-subquery
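A correlated sub query of this kind, matching each order's ShipCity to its employee's City, can be sketched as:

```sql
SELECT * FROM dbo.Orders o
WHERE o.ShipCity = (SELECT e.City
                    FROM dbo.Employees e
                    WHERE e.EmployeeID = o.EmployeeID)
```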

In the above query, we are selecting all the orders where the employee’s city and order’s ship city are same.
You can also do nesting of sub queries at multi-level. For example let’s take a look at the following query –

multisubquery

This query selects all the orders placed for the product Chai. The Orders table is not directly connected to the Products table. Hence, we first find the ProductID for the product name ‘Chai’. Then we find the OrderIDs placed for the Chai product from the [Order Details] table. Finally, we fetch those orders from the Orders table.
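A multi-level nested sub query along those lines can be sketched as:

```sql
SELECT * FROM dbo.Orders
WHERE OrderID IN
(
    SELECT OrderID FROM dbo.[Order Details]
    WHERE ProductID = (SELECT ProductID
                       FROM dbo.Products
                       WHERE ProductName = 'Chai')
)
```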

You can use Sub Queries during INSERT, UPDATE and DELETE statements as well. Let’s take a look at UPDATE and DELETE Sub query examples as shown below –

updelsubq

The first query deletes all the customers whose orders shipped in London city. The second query updates all the products and sets discontinued column value false for the suppliers who belong to London city.
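Those two statements can be sketched as follows (note that in a stock Northwind database, foreign key constraints may prevent the DELETE from succeeding):

```sql
-- Delete the customers whose orders were shipped to London
DELETE FROM dbo.Customers
WHERE CustomerID IN (SELECT CustomerID FROM dbo.Orders WHERE ShipCity = 'London')

-- Set Discontinued to false for products from London-based suppliers
UPDATE dbo.Products SET Discontinued = 0
WHERE SupplierID IN (SELECT SupplierID FROM dbo.Suppliers WHERE City = 'London')
```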

While working with sub queries, do remember some points as mentioned below –
  • You cannot use DISTINCT keyword in sub query which contains GROUP BY clause.
  • You cannot use Order By clause unless you use TOP in a sub query.
  • You cannot use COMPUTE and INTO clause in sub query.
Summary – In this article we saw how to write SQL Server sub queries, simple sub query, multi-level nested sub query, correlated sub query and sub queries with DML statements.

Download the source code of this article (Github)


 

January 16, 2014

Create Database in SQL Server With Different Configuration Options

0 comments


In this article, we will see how to create a database in SQL Server. We will also explore various options while creating a database.

Creating SQL Server databases can be achieved in two different ways, as explained below -

1. You can make use of SQL Server Management Studio (SSMS) and Create Database wizard to create the database.

2. You can create the database programmatically by writing scripts.

If you are using the database wizard for creating the database, you will have limited options while creating your database. Most developers and administrators prefer to create the databases programmatically by writing scripts. These scripts can then be easily deployed on other SQL Server instances where you want to create the same database with the same configurations.

In this article, we will explore some configuration options as described below -
  1. Create database using database wizard
  2. Create database using script
  3. Create database with files and file groups
  4. Configuring File Stream during database creation
  5. Configuring recovery model during database creation
  6. Creating database with Collation option

Create database using database wizard

To start with, we will opt for SQL Server Management Studio [SSMS]. Once you open SSMS, in an Object Explorer window, we will right click the Databases folder and create a new database as shown below -

create-db-wiz

This step will show you a New Database window. Fill the required details as shown below -

newdb-window

You can give a database name and set the owner of the database. Also look at the database files. Check out the initial size of the database. In my case, it is 5 MB. The question is where is this size coming from?

Creating simple database using script

In SQL Server, the "Model" [system database] acts as a template for all new databases. Whatever settings Model has are applied to your new database. For example, let's add a simple table to our Model database and then create a PurchaseOrderDatabase. The table script is as shown below -

model-table

Note: All these scripts shown in this article can be downloaded from here

After creation of the table, create a database as shown in below script -

crdbscrpt

After creating the database, let's see how to select all the available databases under a given SQL Server instance. You can find the same under a view available under master database - sys.databases. The output of the sys.databases is as shown below -

sel-databases

You can now expand the PurchaseOrderDatabase using Object Explorer and check the Tables folder. You will find the "TestTable" we created under the Model database.

obexp1

This table got copied into our PurchaseOrderDatabase, as Model works as a template for the custom databases. You can also make use of 'sp_helpdb' stored procedure as shown below -

sphelp

The result of sp_helpdb stored procedure gives the output shown as above. It shows the information about the database as well as the files created for the database for both PurchaseOrderDatabase as well as Model database. So that’s about the Model database!

Create database with files and file groups

While writing the Create Database script in our query, we did not mention any other options. We will explore some options that can be used while creating the database. Let's first drop the PurchaseOrderDatabase so that we can create the same database with different options.

We will create the database using data files and file group options. Let's write the script as shown below to create a database PurchaseOrderDatabase -

db1-script

When you create a database this way, you provide at least two files - a primary data file and a transaction log file. You also have to specify at least one filegroup while creating the database. In the above script, we have specified the size of the primary data file, the maximum size of the database and the file growth in megabytes. You can specify the size of the file in KB, MB, GB or TB. The default is MB.
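The screenshot's script is not reproduced here; a sketch of such a CREATE DATABASE statement, with assumed logical names, file paths and sizes:

```sql
CREATE DATABASE PurchaseOrderDatabase
ON PRIMARY
(
    NAME = N'PurchaseOrder_Data',
    FILENAME = N'C:\DATA\PurchaseOrder_Data.mdf',
    SIZE = 10MB,
    MAXSIZE = 100MB,
    FILEGROWTH = 5MB
)
LOG ON
(
    NAME = N'PurchaseOrder_Log',
    FILENAME = N'C:\DATA\PurchaseOrder_Log.ldf',
    SIZE = 5MB,
    MAXSIZE = 50MB,
    FILEGROWTH = 5MB
)
```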

You can also create the database using multiple data files with the extension .ndf which are also known as secondary data files as shown below -

db2-srpt

Configuring File Stream during database creation

Now let's create a database with FILESTREAM. FILESTREAM is used to store unstructured data like documents and images on the NTFS file system with the help of the VARBINARY(MAX) type. By default, FILESTREAM is not enabled on SQL Server. To enable the FILESTREAM access level on SQL Server, write the following command -

enable-filestrm-acclevl
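The command in the screenshot is presumably sp_configure; a sketch (access level 2 enables both T-SQL and Win32 streaming access):

```sql
EXEC sp_configure 'filestream access level', 2
RECONFIGURE
```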

After enabling the filestream access level, let's create a database with the file stream option as shown below -

file-streamdb

To test the file stream database, we will create a table with File Stream and will try to insert some data in the same, as shown below -

insertfs

While creating the database, you can specify the database recovery model. There are three recovery model options available -

1. Simple - In this recovery model, there is no log backup. Also you cannot use Log shipping or point-in-time restores.
2. Full - In this recovery model, all backup types are permitted. Point-in-time recovery is possible.
3. Bulk Logged - In this recovery model, all backup types are permitted. No point-in-time recovery is supported. Recovery is possible at the end of any backup.

Configuring recovery model during database creation

We will now create a database with recovery options. But before that, we will see what is the default recovery option set on our database. For this, we will again make use of sys.databases system view as shown below -

existing-recmodel

The output is as shown below -

recoutput1

Below is a script for database creation with the recovery options -

recquery1
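The script in the screenshot is not reproduced here; the recovery model is typically set with an ALTER DATABASE statement immediately after creating the database, for example:

```sql
CREATE DATABASE PurchaseOrderDatabase
GO
ALTER DATABASE PurchaseOrderDatabase SET RECOVERY SIMPLE
-- Other options: SET RECOVERY FULL, SET RECOVERY BULK_LOGGED
```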

Creating database with Collation option

Finally, we have the last option to explore, i.e. creating a database with a collation option. SQL Server supports a number of collations.

Many cultures around the world use different character sets. Simply put, a collation encodes the rules that govern the proper use of characters for an alphabet or language.

Let's create a database with our choice of Collation. Before that we will also see what is the default collation set on our databases as well as all the available Collation with the help of sys.fn_helpcollations(). The queries are as below -

collation-query

The query for creating a database with the different collations is as shown below -

collation-dbscript
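The queries in the screenshots can be sketched as follows; the French collation is just an example choice:

```sql
-- Default collation of existing databases, and all available collations
SELECT name, collation_name FROM sys.databases
SELECT * FROM sys.fn_helpcollations()

-- Create a database with an explicit collation
CREATE DATABASE PurchaseOrderDatabase
COLLATE French_CI_AS
```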

And that’s it!!

Conclusion

In this article, we have seen how to create a database in SQL Server using a wizard and by writing scripts using SQL Server Management Studio [SSMS] with various options like files and file groups, with file stream option, with recovery model and with collation option.

The scripts shown in this article can be downloaded from here


 
 

Copyright © 2009-2014 All Rights Reserved for SQLServerCurry.com by Suprotim Agarwal | Terms and Conditions