Archive for category Sql Server

Minimizing SQL Injection – Dynamic SQL with IN Clause and QUOTENAME()

Use case

The application receives a comma-separated list of account codes that must be looked up in an Account table using the IN clause of a dynamic query. The application cannot use prepared statements or stored procedures. The objective is to minimize the risk of SQL injection. In this post we will discuss a potential SQL injection through dynamic SQL, and use the SQL Server built-in function QUOTENAME() to minimize that risk.

Setting up Account table and sample data

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

IF OBJECT_ID('Account') IS NOT NULL
BEGIN 
DROP TABLE Account
END

GO

CREATE TABLE [dbo].[Account](
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [AccountCode] [varchar](50) NULL,
    [AccountName] [nvarchar](200) NULL,
 CONSTRAINT [PK_Account] PRIMARY KEY CLUSTERED 
(
    [Id] ASC
)
)
GO
SET ANSI_PADDING OFF
GO
SET IDENTITY_INSERT [dbo].[Account] ON
INSERT [dbo].[Account] ([Id], [AccountCode], [AccountName]) 
VALUES (1, N'XOD814', N'Test account1')
INSERT [dbo].[Account] ([Id], [AccountCode], [AccountName]) 
VALUES (2, N'GUR78437', N'Test account2')
INSERT [dbo].[Account] ([Id], [AccountCode], 
[AccountName]) VALUES (3, N'TIY5', N'Test account3')
INSERT [dbo].[Account] ([Id], [AccountCode], [AccountName]) 
VALUES (4, N'KOS370', N'Test account4')
SET IDENTITY_INSERT [dbo].[Account] OFF

After setup, the Account table looks like this:


Input data – XOD814,GUR78437,ABCD'); DROP TABLE Account; --
There are three items in the list which we want to look for in our table using the IN clause :-

  1. XOD814
  2. GUR78437
  3. ABCD'); DROP TABLE Account; --

As you can imagine, it is the third item that might cause the table to be dropped. Let us see how:

DECLARE @Query NVARCHAR(MAX)
SET @Query = 'SELECT * FROM Account WHERE AccountCode IN (''XOD814'',''GUR78437'',
''ABCD''); DROP TABLE Account; --'')'
SELECT @Query AS 'Executed query'
EXEC (@Query)
IF OBJECT_ID('Account') IS NOT NULL
BEGIN
    SELECT 'Account table exists'
END
ELSE
BEGIN
    SELECT 'Account table got dropped.'
END
GO

Zooming in on the injected value:


A – The actual value 'ABCD' is terminated by a single quote (escaped here to keep the outer string a valid SQL literal), followed by a closing bracket and a semicolon, which ends the SELECT statement. The subsequent DROP statement then becomes a valid statement to be executed.

B – The DROP TABLE statement is terminated by a semicolon, followed by a double dash that comments out any subsequent text that might get appended during dynamic SQL formation.

Executing the above SQL script gets the table dropped because of the resulting query below.

Resulting SQL query
SELECT * FROM Account WHERE AccountCode IN ('XOD814','GUR78437','ABCD'); DROP TABLE Account; --')
NOTE: '); closes the original statement and then a second statement (DROP TABLE) follows.


QUOTENAME() Function

The QUOTENAME() function delimits a value with a character you specify (for example, a single quote) and escapes any occurrence of that character inside the value. So even if the value contains quotes or semicolons intended to inject harmful SQL statements, those characters stay inside the delimited string: the harmful text becomes part of the value to be searched in the table rather than valid SQL statements to be executed.
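As a quick illustration of the escaping (a sketch; the second argument is the delimiter, here a single quote passed as an escaped literal):

```sql
-- QUOTENAME(value, delimiter) wraps value in the delimiter and
-- doubles any occurrence of the delimiter found inside the value.
SELECT QUOTENAME('ABCD''); DROP TABLE Account; --', '''')
-- The embedded quote after ABCD gets doubled, so the whole input
-- remains one string literal instead of terminating the IN list.
```

Note that QUOTENAME() returns NULL for inputs longer than 128 characters, which is worth checking for before building the query.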

Below is the same query with every value enclosed in QUOTENAME() using a single quote as the delimiter. The client application can split the list of values and enclose each value in QUOTENAME() before sending the dynamic SQL to SQL Server.

DECLARE @Query NVARCHAR(MAX)
SET @Query = 'SELECT * FROM Account WHERE AccountCode IN (' + QUOTENAME('XOD814','''') + ',' + QUOTENAME('GUR78437','''') + ',' + QUOTENAME('ABCD''); DROP TABLE Account; --','''') + ')'
SELECT @Query AS 'Executed query'
EXEC(@Query)
IF OBJECT_ID('Account') IS NOT NULL
BEGIN
    SELECT 'Account table exists'
END
ELSE
BEGIN
    SELECT 'Account table got dropped.'
END
GO

Resulting SQL query
SELECT * FROM Account WHERE AccountCode IN ('XOD814','GUR78437','ABCD''); DROP TABLE Account; --')
NOTE – ''); cannot close the original statement because the single quote is escaped.


Difference when using QUOTENAME()

So, what is the difference? It is just a single quote. QUOTENAME() delimits the whole value 'ABCD'); DROP TABLE Account; --' with single quotes and, in the process, escapes the single quote right after ABCD. Without that escaping, the quote would have terminated the string at ABCD, the ); would have closed the IN clause, and the DROP TABLE would then execute.

Executed query without QUOTENAME()
SELECT * FROM Account WHERE AccountCode IN ('XOD814','GUR78437','ABCD'); DROP TABLE Account; --')
NOTE: '); closes the original statement and then a second statement (DROP TABLE) follows.


Executed query with QUOTENAME()

SELECT * FROM Account WHERE AccountCode IN ('XOD814','GUR78437','ABCD''); DROP TABLE Account; --')
NOTE – ''); cannot close the original statement because the single quote is escaped.


Sql Server job – Cannot insert the value NULL into column ‘owner_sid’, table ‘msdb.dbo.sysjobs’

I needed to script out a SQL job on one SQL Server (server1) and use the same script to create the same job on a different SQL Server machine (server2). So I scripted out the job from server1, ran the script on server2, and got the error below:

Cannot insert the value NULL into column ‘owner_sid’, table ‘msdb.dbo.sysjobs’

Looking more closely at the script, I noticed that the script generated from the job on server1 also contains the login I had used to log in to server1. This login is different from the one I was using to run the script on server2.


Once I changed the script to use the login I was running the script with on server2, the script ran fine and the job got created on server2.
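For illustration, the login in question is the @owner_login_name argument of the generated sp_add_job call. A minimal sketch with placeholder names (the job name and login are hypothetical):

```sql
USE msdb
GO
EXEC msdb.dbo.sp_add_job
    @job_name = N'MyNightlyJob',                 -- placeholder job name
    @enabled = 1,
    @owner_login_name = N'DOMAIN\UserOnServer2'  -- must be a login that exists on the
                                                 -- target server, otherwise owner_sid
                                                 -- cannot be resolved and the insert fails
GO
```

Pointing @owner_login_name at a login that exists on the target server is exactly the change that made the script run.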



SQL Server–Change Data Capture [CDC] for “Watching” database tables for DML operations

In scenarios where we want to watch tables for any inserts, updates and deletes, we implement triggers. Triggers not only need database development effort and care to be written correctly; they also place locks on tables and slow things down.

In SQL Server 2008, Microsoft introduced a new capability called "Change Data Capture" [CDC] to watch and track inserts, updates and deletes on tables. This requires almost no database development effort and is more efficient than triggers. A very nice thing about CDC is that it makes use of the transaction log, which already has all the data about any changes made to the database. So why reinvent the wheel?

Basically, first you enable CDC on the database. Then you enable CDC on the table you want to watch (e.g. Account), which automatically creates a change tracking table ("Account_CT") for the watched table. Any change in the watched table gets recorded in the change tracking table (e.g. Account_CT), and you can use the tracking table for all your queries.

“Talk is cheap. Show me the code.” – Linus Torvalds

1. Preparing sample data

/*Create sample database*/
USE [master]
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'ChangeDataCaptureTest')
DROP DATABASE [ChangeDataCaptureTest]
GO
/****** Object:  Database [ChangeDataCaptureTest]    Script Date: 10/23/2013 17:25:04 ******/
CREATE DATABASE [ChangeDataCaptureTest]

/*Create the sample table – This is the table we will be watching for inserts, updates and deletes*/
USE ChangeDataCaptureTest
GO
CREATE TABLE [Account]
(
    Id INT Primary Key IDENTITY(1,1),
    [Description] VARCHAR(500),
    [Active] BIT
)
GO

2. Enable CDC on the database

This looks scary, but all it is doing is checking an existing flag in the sys.databases table in the master database. If you execute the following, you can see which databases currently have CDC enabled. In the example below, it is not enabled in any database:
USE master
GO
SELECT [name], database_id, is_cdc_enabled 
FROM sys.databases      
GO    


Now we can enable CDC on our sample database.
USE ChangeDataCaptureTest
GO
EXEC sys.sp_cdc_enable_db
GO

If you execute the following, you can see the “is_cdc_enabled” column for our sample database is enabled:-
USE master
GO
SELECT [name], database_id, is_cdc_enabled
FROM sys.databases
GO


3. Enable CDC on the table you want to watch for Insert/Update/Delete

Now we need to enable the CDC on the table we want to watch.

USE ChangeDataCaptureTest
GO
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name   = N'Account',
@role_name     = NULL
GO

When we execute the above, we see two Sql jobs created and started automatically.


cdc.ChangeDataCaptureTest_capture –
This job watches the table "Account" and puts changes into the tracking table Account_CT.
cdc.ChangeDataCaptureTest_cleanup –
This job cleans up the tracking table Account_CT and can be scheduled as per the requirement.

At this point, if we query sys.tables, we can see our table is tracked by CDC:
USE ChangeDataCaptureTest
GO
SELECT [name], is_tracked_by_cdc
FROM sys.tables
GO


4. Testing the results

Let us insert/update/delete data in the watched table [Account] and see the tracked changes in the [Account_CT] table.

Insert Operation
USE ChangeDataCaptureTest
GO
INSERT INTO Account
VALUES ('Test', 1)

Select to verify results
USE ChangeDataCaptureTest
GO
SELECT * FROM Account
GO
SELECT * FROM cdc.dbo_Account_CT
GO


The value 2 in the "__$operation" column indicates an insert. We can see the inserted values in the "Description" and "Active" columns.

Update operation
USE ChangeDataCaptureTest
GO
UPDATE Account
SET Active= 0
WHERE Id = 1
Select to verify the results
USE ChangeDataCaptureTest
GO
SELECT * FROM Account
GO
SELECT * FROM cdc.dbo_Account_CT
GO

Value for “__$operation” column :-
3 = update (captured column values are those before the update operation).
4 = update (captured column values are those after the update operation)

DELETE Operations
USE ChangeDataCaptureTest
GO
DELETE Account
WHERE id = 1
Select to verify results
USE ChangeDataCaptureTest
GO
SELECT * FROM Account
GO
SELECT * FROM cdc.dbo_Account_CT
GO


The value 1 in the "__$operation" column indicates a delete.

Time based search on table changes
Need to see changes in a table for a given timestamp range? No problem. When we enabled the table for change tracking, it also added a system table named "cdc.lsn_time_mapping", which maps every transaction to its timestamp.
Just join the change tracking table (Account_CT) with the system table "cdc.lsn_time_mapping" on the transaction LSN (start_lsn) and put your time filter criteria on that join.


USE ChangeDataCaptureTest
GO
SELECT B.*, A.tran_begin_time, A.tran_end_time 
FROM cdc.lsn_time_mapping A
INNER JOIN cdc.dbo_Account_CT B
ON A.start_lsn = B.__$start_lsn


Note :- In CDC, there is no way to trace the user who caused each transaction.


SqlDbType.Structured [another gem in ADO.NET] and Bulk insert using Table Valued Parameters

Use case
You want to insert multiple rows (say from 2 to 999 rows) from your .NET application to a database table.
You are using .NET Framework 3.5 or above with Sql Server 2008 or above.

Approaches
Insert one row at a time
As you can imagine, this has a performance hit because too many connections get opened and closed – it is more chatty.
Use a CSV of rows
This approach is chunkier than the one above. Overall, it works like this:
– Create a comma separated string of rows from the application
– Send CSV to a stored procedure from application
– Stored procedure would make use of a UDF to parse the CSV to a table variable
– Stored procedure would insert data from the table variable to the actual table
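The CSV approach can be sketched as follows. This is an illustrative sketch, not code from the post: the split function and procedure names are hypothetical, and the target table mirrors the EmailAddressDetails table created later in this post.

```sql
-- Hypothetical UDF that parses a CSV string into a table.
CREATE FUNCTION dbo.SplitCsv (@csv NVARCHAR(MAX))
RETURNS @Result TABLE (Value NVARCHAR(100))
AS
BEGIN
    DECLARE @pos INT = CHARINDEX(',', @csv)
    WHILE @pos > 0
    BEGIN
        -- Take everything before the next comma as one value
        INSERT INTO @Result (Value) VALUES (LTRIM(RTRIM(LEFT(@csv, @pos - 1))))
        SET @csv = SUBSTRING(@csv, @pos + 1, LEN(@csv))
        SET @pos = CHARINDEX(',', @csv)
    END
    IF LEN(@csv) > 0  -- last (or only) value after the final comma
        INSERT INTO @Result (Value) VALUES (LTRIM(RTRIM(@csv)))
    RETURN
END
GO

-- Stored procedure that inserts the parsed values into the actual table.
CREATE PROCEDURE dbo.EmailAddresses_InsertCsv @Csv NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO EmailAddressDetails (EmailAddress)
    SELECT Value FROM dbo.SplitCsv(@Csv)
END
GO
```

The "manual parsing" complained about below lives entirely in SplitCsv, and it only gets worse when each row has more than one column.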
Use SqlDbType.Structured
The CSV approach above solves the problem of the application being too chatty with the database. However, it is not elegant and involves too much "manual" parsing of strings.

From .NET Framework 3.5 onwards, SqlCommand supports another parameter type, SqlDbType.Structured, which enables a .NET application to send a DataTable (yes, a System.Data.DataTable object) directly to a stored procedure, where it can be used as if it were a table in the database.

In the below example we will send a DataTable of email addresses from .NET application to a database stored procedure which would insert data directly from this table to the actual table.

Steps
Database changes :-
/*Create a user defined table type which will hold the contents of the DataTable passed from the application.
This will be used as a Table Valued Parameter.
*/

CREATE TYPE [dbo].[EmailAddressList] AS TABLE
(
[EmailAddress] [NVARCHAR](100) NULL
);

/*Create the actual table to which we will insert data from the DataTable*/
CREATE TABLE EmailAddressDetails
(
EmailAddress NVARCHAR(100),
CreatedOn DateTime DEFAULT GetDate()
)

/*Create the stored procedure which will be called from the application*/
CREATE PROCEDURE EmailAddresses_InsertBatch
@EmailAddressBatch [EmailAddressList] READONLY
AS
BEGIN
INSERT INTO EmailAddressDetails (EmailAddress)
SELECT E.EmailAddress FROM @EmailAddressBatch E
END

Application changes
//Function to create a DataTable with dummy email addresses
//The DataTable created here should match the schema of the User defined table type created above.

private DataTable CreateEmailAddressDataTable()
{
    DataTable emailAddressDT = new DataTable();
    emailAddressDT.Columns.Add("EmailAddress", typeof(string));
    int emailAddressCount = 100;
    for (int i = 0; i < emailAddressCount; i++)
    {
        DataRow row = emailAddressDT.NewRow();
        row["EmailAddress"] = i.ToString() + ".something@lpl.com";
        emailAddressDT.Rows.Add(row);
    }
    return emailAddressDT;
}

//Function to call the stored procedure with the DataTable
private void AddEmailAddressToDb()
{
    DataTable dataTable = CreateEmailAddressDataTable();
    string connectionString = "Server=YourServerName;Database=YourDatabaseName;UserId=ashish;Password=ashish;";
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        using (SqlCommand command = new SqlCommand())
        {
            connection.Open();
            command.Connection = connection;
            command.CommandText = "EmailAddresses_InsertBatch";
            command.CommandType = CommandType.StoredProcedure;
            var param = new SqlParameter("@EmailAddressBatch", SqlDbType.Structured);
            param.TypeName = "dbo.EmailAddressList";
            param.Value = dataTable;
            command.Parameters.Add(param);
            command.ExecuteNonQuery();
        }
    }
}

The SqlParameter with SqlDbType.Structured (and its TypeName set to the user-defined table type) is the gem in ADO.NET. 🙂 When we send the DataTable to the stored procedure, it populates the user-defined table type, which we can use directly in the stored procedure – inserting from it into the actual table in this case. No parsing of CSV in a UDF – that takes away a big pain when you have a complex structure.
Note :- Microsoft recommends this approach when you are inserting fewer than 1000 rows. For more rows, consider using SqlBulkCopy.


Understanding and detecting deadlocks in Sql Server

In order to understand deadlocks, let us first create tables and have some sample data.

/* Tables and data  - Start */
IF OBJECT_ID('Order') IS NOT NULL
BEGIN
DROP TABLE [Order]
END

IF OBJECT_ID('Customer') IS NOT NULL
BEGIN
DROP TABLE Customer
END

CREATE TABLE Customer
(
ID int IDENTITY(1,1) CONSTRAINT PK_cid PRIMARY KEY ,
Name varchar(20)
)

CREATE TABLE [Order]
(
ID int IDENTITY(1,1) CONSTRAINT PK_oid PRIMARY KEY ,
CustomerId int,
OrderDate datetime
)

ALTER TABLE [Order]
ADD FOREIGN KEY (CustomerID)
REFERENCES Customer(ID)

INSERT INTO Customer (Name)
SELECT 'xxx'
UNION ALL
SELECT 'yyy'
UNION ALL
SELECT 'zzz'

INSERT INTO [Order] (CustomerID, OrderDate)
SELECT 1, GetDate()
UNION ALL
SELECT 2, GetDate()
UNION ALL
SELECT 3, GetDate()
/* Tables and data  - End*/

Open Sql Profiler and use the “TSQL_Locks” template.


In the “Event selection” tab, make sure you have “Deadlock Graph” selected and the click “Run”.


In SSMS, open two new query windows and execute the batches below in those two separate windows at the same time (hit F5 in one window, switch to the other and hit F5).

Batch 1

/*Transaction A updates record A in table A - Waits and then updates record B in table B*/
BEGIN TRANSACTION
UPDATE Customer SET Name = 'aaa' WHERE ID=1
WAITFOR DELAY '00:00:10' -- Waiting for 10 secs
UPDATE [Order] SET CustomerId = 1 WHERE ID = 2
COMMIT TRANSACTION

Batch 2

/*Transaction B updates record B in table B - Waits and then updates record A in table A */
-- This causes deadlock
BEGIN TRANSACTION
UPDATE [Order] SET OrderDate = GetDate() WHERE Id = 2
WAITFOR DELAY '00:00:10' -- Waiting for 10 secs
UPDATE Customer SET Name = 'bbb' WHERE Id=1
COMMIT TRANSACTION

You will notice that the batch 1 transaction took effect, but batch 2 did not, because it was identified as the deadlock victim.


Look at the profiler now.


Notice that batch 2 did not get committed. However, batch 1 did go through and took effect.


Note that if the above batches (separate transactions) were updating two different records at the same time, there would not be any deadlock.

/* Transaction A updates a record in table A, waits, and then updates a record in table B */
BEGIN TRANSACTION
UPDATE Customer SET Name = 'aaa' WHERE ID=2
PRINT 'Customer updated...'
WAITFOR DELAY '00:00:10' -- Waiting for 10 secs
UPDATE [Order] SET CustomerId = 1 WHERE ID = 2
PRINT 'Order updated...'
COMMIT TRANSACTION

/* Should be executed in a separate query window.
Transaction B updates a different record in table B, waits, and then updates a different record in table A */
BEGIN TRANSACTION
UPDATE [Order] SET OrderDate = GetDate() WHERE Id = 3
PRINT 'Order updated'
WAITFOR DELAY '00:00:10' -- Waiting for 10 secs
UPDATE Customer SET Name = 'bbb' WHERE Id=3
PRINT 'Customer updated'
COMMIT TRANSACTION



SQL Server – Generate the data script (insert script) for existing data in tables

How to generate the data script for SQL server 2008 database tables

Right click on the database where your table exists > Tasks > Generate Scripts


Click Next on the first screen of the wizard.

Choose the "Select specific database objects" radio button, expand the tables, choose the table for which the data script needs to be generated, and click Next.

On the next screen, click "Advanced" and then select the "Data only" option from "Types of data to script".
By default the "Schema only" option is selected. You can also select "Schema and data" to script both the table creation and the inserts.

Choose where to save the insert script (file / query window / clipboard) and click Next. You can see the progress, and then the generated insert script.

IMPORTANT NOTES –
A) If your table has a large amount of data (like mine in this case, having 5000K records), use the "Save to file" option to save the script; it will generate the script file containing all the insert queries just fine. If you use the "Copy to clipboard" or "New Query Window" option, the generation will fail with an out-of-memory exception.

B) ALWAYS enclose the insert scripts in a transaction with a TRY...CATCH block before running them. You don't want partial inserts on your table – do you?
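A minimal wrapper for a generated insert script might look like this (the insert statements are placeholders for whatever the wizard generated):

```sql
BEGIN TRY
    BEGIN TRANSACTION
    -- Generated insert statements go here, e.g.:
    INSERT INTO [dbo].[Account] ([AccountCode], [AccountName]) VALUES (N'XOD814', N'Test account1')
    INSERT INTO [dbo].[Account] ([AccountCode], [AccountName]) VALUES (N'GUR78437', N'Test account2')
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION  -- undo all inserts on any failure
    -- Re-raise the error so the caller sees what went wrong
    DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE()
    RAISERROR (@msg, 16, 1)
END CATCH
```

Either every row goes in, or none do.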

 

How to generate the data script for SQL server 2005 database tables

The good thing is that you can generate the data script for SQL Server 2005 database objects from SQL Server 2008 Management Studio. This is how you do it.

a) In SQL Server Management Studio, right click the database and select "Tasks > Generate Scripts". A wizard will be launched.
b) Select the database you want to generate the script from.
c) In the "Choose Script Options" step of the wizard, set "Script Data" to true:


d) Proceed on to generate the script and it will generate the data script as well.


Bulk insert using OpenXML

-- Tables
CREATE TABLE Singer
(
Id int,
Name varchar(200)
)

CREATE TABLE SingerGenere
(
SingerId int,
GenereId int
)

CREATE TABLE Genere
(
ID int,
Name varchar(200)
)

Data

-- Singers
INSERT INTO Singer
VALUES (1, 'Joe')

INSERT INTO Singer
VALUES (2, 'MJ')

INSERT INTO Singer
VALUES (3, 'ACDC')

-- Genere
INSERT INTO Genere
VALUES
(1, 'Rock')

INSERT INTO Genere
VALUES
(2, 'POP')

INSERT INTO Genere
VALUES
(3, 'Heavy Metal')

-- Singer-Genere mapping
INSERT INTO SingerGenere
(SingerId, GenereId)
VALUES (1,1)

INSERT INTO SingerGenere
(SingerId, GenereId)
VALUES (2,2)

INSERT INTO SingerGenere
(SingerId, GenereId)
VALUES (3,3)

The stored procedure :-

CREATE PROCEDURE GetSingersGenere
(@SingerData XML)
AS
BEGIN
DECLARE @hDoc int
EXEC sp_xml_preparedocument @hDoc OUTPUT, @SingerData
IF OBJECT_ID('SingerGenereTable') IS NOT NULL
BEGIN
DROP TABLE SingerGenereTable
END
CREATE TABLE SingerGenereTable
(
SingerName varchar(200),
GenereName varchar(200)
)

INSERT INTO SingerGenereTable
(
SingerName,
GenereName
)
SELECT XMLSinger.SingerName, Genere.Name
FROM OPENXML (@hDoc, '/Singers/Singer')
WITH (SingerName varchar(200) 'text()') XMLSinger
INNER JOIN Singer ON XMLSinger.SingerName = Singer.Name
INNER JOIN SingerGenere ON SingerGenere.SingerId = Singer.Id
INNER JOIN Genere ON Genere.Id = SingerGenere.GenereId
EXEC sp_xml_removedocument @hDoc -- release the XML document handle
SELECT * FROM SingerGenereTable
END

Executing the stored procedure (the singer names sit directly inside the tags so the text() mapping matches them exactly):

EXEC GetSingersGenere
'<Singers>
<Singer>Joe</Singer>
<Singer>ACDC</Singer>
</Singers>'

 



Sql Server – Coding Guidelines and best practices

Object Type | Convention | Example
Tables | <ObjectName> | Employee, Department
Association tables | <Object1><Object2> | EmployeeDepartment
Stored procedures | <TableName>_<OperationOnTable> | Employee_Select
Views | vw<TableName> | vwEmployee
Scalar user-defined functions | <TableName>_<OperationOnTable> | Employee_GetTotalHrsWorked
Table-valued user-defined functions | |
Triggers | TR_<TableName>_<ActionDescription> | TR_Employee_UpdateEmailAddress
Indexes | IX_<TableName>_<ColumnNamesSeparatedBy_> | IX_Employee_ID
Primary key constraints | PK_<TableName> | PK_Employee
Foreign key constraints | FK_<TableName1>_<TableName2> | FK_Employee_Department

· Datatype for a primary key :- uniqueidentifier
· Association tables should have their own primary key.
EmployeeDepartment should have ID, EmployeeId, DepartmentId.
· Avoid dynamic queries. Use stored procedures only.
· Use SET NOCOUNT ON at the beginning of stored procedures and triggers.
· Avoid cursors wherever possible. Prefer SET-based operations over row-by-row operations. Some scenarios are listed below:
a) If rows in a table need to be updated based on a condition, rather than using a cursor or loop, use UPDATE with CASE statements.

UPDATE MyTable
SET Column1 =
CASE
    WHEN Column2 = '1' THEN 'New value1'
    WHEN Column3 = '2' THEN 'New value2'
END
WHERE
Column2 = '1' OR Column3 = '2'

b) If records in the table need to be updated or deleted, use UPDATE or DELETE with JOINs rather than cursors and loops.
UPDATE dbo.Table2
SET dbo.Table2.ColB = dbo.Table2.ColB + dbo.Table1.ColB
FROM dbo.Table2
INNER JOIN dbo.Table1
ON (dbo.Table2.ColA = dbo.Table1.ColA);

DELETE ab
FROM Authors AS a
INNER JOIN AuthorArticle AS ab ON a.AuthID = ab.AuthID
INNER JOIN Articles AS b ON ab.ArticleID = b.ArticleID
WHERE a.AuthorLastName = 'Tom';

· Instead of using a loop to insert data from table B into table A, use an INSERT statement with a SELECT:
INSERT INTO TableA (column1, column2)
SELECT column1, column2
FROM TableB
WHERE <condition>

· Create a copy of a table without any data in the following way:
SELECT TOP (0) * INTO EmployeeTest FROM Employee

· COUNT(*) and COUNT(1) are the same performance-wise. Use either of them.

· Temporary tables, table variables and Common Table Expressions (CTEs)
o Temporary tables should be used when the data you hold in them is pretty large. Creating indexes in them would make the data retrieval faster.
o Table variables are faster and good for smaller sets of data.
o Table variables do not have statistics on them and indexes cannot be created on them so they are not good for holding large data.
o Common Table Expressions are good for having smaller sets of data which need not be updated because it’s similar to a derived table but has better readability and you can use the same CTE in the different places in the batch.
o Use CTE for creating recursive queries.
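For example, a minimal recursive CTE over a hypothetical Employee(Id, Name, ManagerId) hierarchy (illustrative only, not from the post):

```sql
WITH EmployeeHierarchy AS
(
    -- Anchor: top-level employees with no manager
    SELECT Id, Name, ManagerId, 0 AS Level
    FROM Employee
    WHERE ManagerId IS NULL

    UNION ALL

    -- Recursive step: employees reporting to someone already in the CTE
    SELECT e.Id, e.Name, e.ManagerId, eh.Level + 1
    FROM Employee e
    INNER JOIN EmployeeHierarchy eh ON e.ManagerId = eh.Id
)
SELECT * FROM EmployeeHierarchy
OPTION (MAXRECURSION 100)  -- guard against runaway recursion
```

The same shape works for any parent-child data such as org charts, folder trees or bill-of-materials tables.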


Table variables do get created in the TempDB

DECLARE @MyTab1 TABLE
(
Id uniqueidentifier,
Name varchar(200)
)

INSERT INTO @MyTab1 VALUES (NEWID(), 'Ashish')
SELECT * FROM @MyTab1
SELECT crdate AS CreatedDate, GETDATE() AS CurrentDate, name FROM tempdb.dbo.sysobjects WHERE name NOT LIKE 'sys%'
ORDER BY crdate DESC


Execute the above query in any database multiple times with a few seconds' interval, and you will see the CreatedDate and the CurrentDate very close to each other in the result set. "CreatedDate" is the time the table variable was created in the tempdb database, and "CurrentDate" is the datetime when you executed the query. "CurrentDate" will always be greater (by just milliseconds) than "CreatedDate". Each time you execute the query, the table variable is created under a random name prefixed with #, so there is no naming conflict and no error. A good article here.


TableDiff.exe in Sql Server 2005

Interestingly, we have a console-based TableDiff.exe in Sql Server 2005 right in our SQL server 2005 installation directory.
 

C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe
 
From the documentation, check out the following points:
 
·         The Table Difference tool allows you to discover and reconcile differences between a source and a destination table or view. The tablediff utility can report differences in schema and data. The most popular feature of tablediff is that it can generate a script you can run on the destination to reconcile differences between the tables. TableDiff.exe takes two sets of input:
·         Connectivity – Provide source and destination objects and connectivity information.
·         Compare Options – Select one of the compare options
·         Compare schemas: Regular or Strict
·         Compare using Rowcounts, Hashes or Column comparisons
·         Generate difference scripts with I/U/D statements to synchronize destination to the source.
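A typical invocation might look like this (server, database and file names are placeholders; the -f switch asks tablediff to generate the reconciliation script):

```
"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe" ^
    -sourceserver Server1 -sourcedatabase SourceDb -sourcetable Account ^
    -destinationserver Server2 -destinationdatabase DestDb -destinationtable Account ^
    -f C:\Temp\Fix_Account.sql
```

Running the generated Fix_Account.sql on the destination should bring it in line with the source.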
 
 
I haven't tried this yet, but it seems worth a look.

