DBCC CHECKDB – Use and Abuse

I recently got asked this question by one of my clients:

“I get an error in the logs indicating that I have a torn page or checksum error. What do I do?”

The usual answer is: "Should I run DBCC CHECKDB with one of the repair options?" When I challenge them to explain how CHECKDB performs the repair, they are not able to answer, unaware of the damage they might be doing.

So here is how to use DBCC CHECKDB, and what to do when you have a torn or corrupt page.

So How Do I Use It?

The primary purpose of DBCC CHECKDB is to check for consistency errors, and ideally it should be run every day.

The basic syntax is:
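For a database named yourdb, that is:

```sql
DBCC CHECKDB ('yourdb') WITH NO_INFOMSGS
```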


NO_INFOMSGS prevents an excessive number of informational messages from being generated. There are several other options, but this is the syntax you should aim to use as it performs all integrity checks.

This may take a long time on large databases, so you may want to specify the PHYSICAL_ONLY option. This checks physical on-disk structures but omits the internal logical checks. The syntax is:
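Again for a database named yourdb:

```sql
DBCC CHECKDB ('yourdb') WITH NO_INFOMSGS, PHYSICAL_ONLY
```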



It Has Found A Problem – What Do I Do?

You do have backups, don't you? You might be lucky: a non-clustered index can be dropped and rebuilt, but actual data, such as a heap or clustered index, cannot.

By far the best option is to restore from a backup, but let’s look at how you investigate which pages are affected and what type of data is affected:

Look at the output from DBCC CHECKDB. You may see something like this:

Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data): Page (1:94299) could not be processed. See other errors for details.
Msg 8939, Level 16, State 98, Line 1
Table error: Object ID 2088535921, index ID 0, partition ID 72345201021503994, alloc unit ID 72345201051571606 (type In-row data), page (1:94299). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed.
CHECKDB found 0 allocation errors and 2 consistency errors in table ‘yourtable’ (object ID 2088535921).
CHECKDB found 0 allocation errors and 2 consistency errors in database ‘yourdb’.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (yourdb).

From this you can see which page is corrupted (1:94299).

The first thing to do is check if it is data in a heap, in a clustered index, or in a non-clustered index. In the above text you can see it is index ID 0. You could also examine the page (1:94299 in database ‘yourdb’) as follows:

DBCC TRACEON (3604, -1)
DBCC PAGE('yourdb', 1, 94299, 3)

In the output you will see something like:

Metadata: IndexId = n

If n is greater than 1 it is a non-clustered index and can safely be dropped and recreated. If n is 0 or 1 you have data corruption and need to perform one of the options described below.

Restoring from a backup
If the recovery model is FULL (or BULK_LOGGED, with some limitations), you can back up the tail of the log, perform a restore (WITH NORECOVERY) from the last clean full backup, followed by subsequent log backups, and finally the tail of the log.

If only a few pages are affected you have the option of selectively restoring only the bad pages, as follows:

RESTORE DATABASE yourdb PAGE = '1:94299'
FROM DISK = 'C:\yourdb.bak'

If the recovery model is simple you don’t have that option, and have to accept that a restore from the last full backup will result in subsequent transactions being lost. In this case, or if you have no backups at all, you may decide that an automatic repair is the only option.

Automatic Repair Options
First let me emphasise the importance of running a backup BEFORE you go any further.

Have a look at the output of the original CHECKDB. It will specify the minimum repair level.

If the minimum repair level is REPAIR_REBUILD you have been lucky.
The syntax is
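For a database named yourdb (repairs require single-user mode):

```sql
ALTER DATABASE yourdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('yourdb', REPAIR_REBUILD)
ALTER DATABASE yourdb SET MULTI_USER
```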


If the minimum repair level is REPAIR_ALLOW_DATA_LOSS, this option attempts to repair all errors. Sometimes the only way to repair an error is to deallocate the affected page and modify the page links so that it looks like the page never existed. This has the desired effect of restoring the database's structural integrity, but it means that something has been deleted (hence the ALLOW_DATA_LOSS). There are likely to be issues with referential integrity, not to mention the important data that may now be missing.

The syntax is
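For a database named yourdb (again, run in single-user mode):

```sql
ALTER DATABASE yourdb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('yourdb', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE yourdb SET MULTI_USER
```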


Make sure you run DBCC CHECKCONSTRAINTS afterwards so you are aware of referential integrity issues and can take the appropriate action.

10 Important SQL Server Tuning Tips for 2/25


  • Stop Waiting Around. When a SQL statement is not running, SQL Server records the reason for the wait; examining these waits can improve performance. The waits can be decoded using Tom Davidson's paper
  • Locate I/O Bottlenecks. I/O is often the primary reason for slow performance. You can examine I/O via wait stats, the DMF sys.dm_io_virtual_file_stats(), and disk-related Perfmon counters. Causes include slow disks, disk placement, badly written SQL, and a lack of good indexes
  • Root Out Problem Queries. The DMV sys.dm_exec_query_stats aggregates query metrics (duration, time on CPU, time waiting, reads, writes, executions, etc.) at the individual SQL statement level; great for identifying the longest-running SQL, which you can then optimise
  • Plan To Reuse. Change dynamic SQL to use stored procedures for plan reuse; otherwise, as transaction volume increases, performance can suffer
  • Monitor Index Usage. The DMF sys.dm_db_index_operational_stats() details what has been used and how: scans, singleton lookups, inserts, deletes, updates, latches, and locking
  • Separate Data and Log Files. Important for DAS and also for SAN: separate random access (data) from sequential access (log)
  • Use Separate Staging Databases. These can use the simple recovery model, which speeds imports and uses less CPU, I/O, and memory
  • Pay Attention to Log Files. Log growth can be expensive; it is better to ensure you have the required free space in advance. Also ensure you don't have too many virtual log files (VLFs)
  • Minimize tempdb Contention. tempdb can be used by all the other databases on the server, and it is often the most heavily used database
  • Change the MAX Memory Limit. Remember to leave at least 1 or 2GB of memory for other processes. Also consider the impact of multiple instances
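To make the "Root Out Problem Queries" tip concrete, here is a sketch that lists the ten statements with the highest total elapsed time; the offset arithmetic extracts the individual statement from its batch text:

```sql
-- Top 10 statements by total elapsed time since the plan was cached
SELECT TOP 10
    qs.total_elapsed_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```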

Change Column Size

The examples below will help you if you want to change the size of a column in SQL.

Changing the size of a column with a SQL query

The syntax for changing the size of a column with SQL query is:

ALTER TABLE [table_name] ALTER COLUMN [column_name] [data_type]

For example, if a database contains a table “Students” with column “Name” that is nvarchar(10) and you want to change it to nvarchar(100) the following query will do the job:

ALTER TABLE Students ALTER COLUMN Name nvarchar(100)

Changing the size of a column in SQL Server with SQL Server Management Studio

1) Right-click on a table

2) Click on Design

3) Change datatype of column(s)

Changing the size of a column in MySQL

1) Login to phpMyAdmin and go to a database

2) Select table which field’s length you want to change in the sidebar

3) Click Structure in the top menu

4) Click Change in Action column for field which varchar length you want to change

5) Set field Length/Values to desired value

6) Save the changes
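If you prefer a query over phpMyAdmin, note that MySQL uses MODIFY rather than ALTER COLUMN; using the same hypothetical Students table:

```sql
ALTER TABLE Students MODIFY Name VARCHAR(100);
```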


How to Return Bool From Stored Procedure

As developers, we often work with SQL Server stored procedures. SQL Server stored procedures can serve many purposes. One of them is to perform some logical operation and return a true or false value. The problem is that there is no boolean datatype, so the only way is for the stored procedure to return the bit datatype.

Here is an example of stored procedure that will return bit value that could be used instead of boolean:

USE [Database]
GO
CREATE PROCEDURE [dbo].[spReturnValue]
AS
BEGIN
    DECLARE @result BIT
    SET @result = 1
    RETURN (@result)
END
GO

In order to call the stored procedure use the following SQL code:

USE [Database]
DECLARE @return_value BIT
EXEC @return_value = [dbo].[spReturnValue]
SELECT 'Return Value' = @return_value
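Because RETURN can only carry an integer (the BIT above is implicitly converted), an OUTPUT parameter is an alternative sketch that keeps the BIT type end to end (spReturnValueOut is a hypothetical name):

```sql
CREATE PROCEDURE [dbo].[spReturnValueOut]
    @result BIT OUTPUT
AS
BEGIN
    SET @result = 1
END
GO
DECLARE @ok BIT
EXEC [dbo].[spReturnValueOut] @result = @ok OUTPUT
SELECT 'Return Value' = @ok
```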

The Problem with READ COMMITTED

Are you still using the READ COMMITTED transaction isolation level? The default transaction isolation level on all SQL Server versions (2000-2014) has serious inconsistency problems, by design. Not many people are familiar with how and why it happens. I will show you demo code here to reproduce it, and a solution to the problem.

How does READ COMMITTED work?

Under the default READ COMMITTED isolation level, shared locks on the rows/pages are released as soon as the row is read. The consequence is two anomalies at the single-statement level, which some call "inconsistent analysis":

- reading the same row multiple times

- legitimate rows are omitted, although they are present there the whole time

These are serious, statement-level inconsistencies. E.g. a single simple SELECT can read the same row twice and give you duplicates in the result, or it can skip some rows. The worst thing is that you will often not notice at all that the data is inconsistent.

If you are running a SELECT and another transaction updates a row you have already read in a way that moves that row ahead of your reading point, you will read that row twice. If the other transaction updates a row you have yet to read in a way that moves that row behind your reading point, you will not read that row at all. Such data anomalies are difficult to reproduce because the timing is crucial.

Here is the script:

--==== SETUP - begin
create database test_artefacts
go
alter database test_artefacts set recovery simple
alter database test_artefacts set read_committed_snapshot off
alter database test_artefacts set allow_snapshot_isolation off
go
use test_artefacts
go
create table big_table
(
    id int primary key clustered,
    filler char(8000) default('a')
)
go
-- Insert 500 000 rows, or a number large enough for selecting all rows
-- from the table to take at least 5 seconds.
truncate table big_table
go
insert into big_table(id, filler)
select t.id, t.id
from
(   select top 10000 id = ROW_NUMBER() over(order by (select null)) + isnull((SELECT max(id) from big_table), 0)
    from sys.all_columns a cross join sys.all_columns b
) t
go 50
if OBJECT_ID('tempdb..#t') is not null drop table #t
--select COUNT(*) from big_table -- 500 000
--==== SETUP - end

-- Start session 1, then session 2 very soon after (about 1 second or less)
--==== SESSION 1 - begin
-- Copy the 500 000-row table
select * into #t from big_table
--==== SESSION 1 - end

--==== SESSION 2 - begin
use test_artefacts
--sp_spaceused big_table
-- Update the first rows to move them to the end of the clustered index.
-- We are NOT changing the number of records: no inserts, no deletes.
-- The number of rows should stay constant.
update t set id=id+1000000
from (select top 1 * from dbo.big_table order by id) t
go 123
--==== SESSION 2 - end

--==== RESULT
select COUNT(*) from #t -- 500 123! We expected 500 000.
select COUNT(*) from big_table -- 500 000

-- Run the same experiment again, but this time with this code in session 2:
update t set id=-id
from (select top 1 * from dbo.big_table order by id desc) t
go 123

--==== RESULT
select COUNT(*) from #t -- 499 877! We expected 500 000.
select COUNT(*) from big_table -- 500 000
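The solution mentioned at the start is row versioning: with READ_COMMITTED_SNAPSHOT enabled, each statement reads a transactionally consistent snapshot, so both counts stay at 500 000. A sketch (run while no other connections are using the database):

```sql
alter database test_artefacts set read_committed_snapshot on with rollback immediate
```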


MDX Best Practice when it comes to commenting queries

I see a lot of customer MDX queries where comments are used to help explain complex logic inside an MDX query, or to remove a block that is not needed.

Please avoid using the single line style of comment as below:

SELECT -- Reseller Sales Amount by Quarter
[Measures].[Reseller Sales Amount] ON 0
,[Date].[Calendar].[Calendar Quarter].Members ON 1
FROM [Adventure Works];

This causes havoc for diagnostics and load testing as when we capture a trace, the entire query is concatenated onto one line as below:

SELECT -- Reseller Sales Amount by Quarter [Measures].[Reseller Sales Amount] ON 0 ,[Date].[Calendar].[Calendar Quarter].Members ON 1 FROM [Adventure Works];

A better approach is to use the start and end comment block approach, so that when the query is reduced to one line it is still valid syntax.

SELECT /*Reseller Sales Amount by Quarter*/ [Measures].[Reseller Sales Amount] ON 0 ,[Date].[Calendar].[Calendar Quarter].Members ON 1 FROM [Adventure Works];

We do have techniques to clean the comments out of the MDX, but the best way I have found so far is to insert CRLFs before known heuristics that comments might come before (very messy).

As an aside, if you also code in T-SQL on the Database Engine platform, commenting queries is an even bigger no-no. This is because the hash of the entire query, including comments, is used to identify similar query plans. If you have the same queries with different comments, the query processor has to compile multiple plans for them!

So let's recap:

  • Comments in compiled code = good.
  • Comments in the middle of query statements = handle with care, or avoid completely.

Buffer Pool and Buffer Cache Hit Ratio

There seems to be some confusion surrounding the Buffer Cache Hit Ratio and the SQL Server Buffer Cache. Having answered several queries concerning this over the last few weeks on a variety of forums, I thought I would post here in order to set the record straight.

What is the SQL Server Buffer Cache?

The Buffer Cache (sometimes called the Data Cache) is an area of the SQL Server Buffer Pool (which in turn is part of the overall SQL Server Process space) and in simple terms is the amount of your database data (data pages) that are stored in memory.

What is Buffer Cache Hit Ratio?

The buffer cache hit ratio performance measure expresses, as a percentage, how often a request for a database data page can be served from the Buffer Pool. The alternative is for SQL Server to have to fetch the data page from disk.

So, for example, if you have a 500MB database and a server with 4GB of RAM on which SQL Server is configured to use all available memory, you will have a very good Buffer Cache Hit Ratio of 99% or above, because your entire database can easily fit into memory.

How can I determine the Buffer Cache Hit Ratio of my server?

In order to determine your Buffer Cache Hit Ratio you can use the Windows Performance Monitor tool.

  1. Just go to: Start – Programs –  Administrative Tools – Performance
  2. Right click the graph and choose: Add Counters.
  3. Select Performance object: SQLServer:Buffer Manager 
  4. Add: Buffer Cache Hit Ratio.
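An alternative sketch reads the same counter from the sys.dm_os_performance_counters DMV; the ratio has to be computed from its base counter:

```sql
SELECT 100.0 * a.cntr_value / b.cntr_value AS [Buffer Cache Hit Ratio %]
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```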

How can I get more detailed information about SQL Server's overall memory usage?

In order to get a detailed report of SQL Server's memory usage I like to use the DBCC statement:
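A likely candidate here, and the standard choice for this job, is DBCC MEMORYSTATUS (undocumented but widely used):

```sql
DBCC MEMORYSTATUS
```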



Thanks and more later!

T-SQL Best Practices

Every programming language has best and worst practices, and T-SQL is no exception. Some best practices in T-SQL are similar to those applicable to procedural languages, but most actually have to do with the fact that T-SQL is based on the relational model. This post will  highlight several important T-SQL best practices. (However, please don’t consider my list as complete.)

Think in Sets

Thinking in sets, or more precisely, in relational terms, is probably the most important best practice when writing T-SQL code. Many people start coding in T-SQL after having some background in procedural programming. Often, at least at the early stages of coding in this new environment, you don’t really think in relational terms, but rather in procedural terms. That’s because it’s easier to think of a new language as an extension to what you already know as opposed to thinking of it as a different thing, which requires adopting the correct mindset.

If that’s not enough, there are two levels of adaptation you need to go through. The first is not thinking in procedural terms, but rather in T-SQL terms. The second is realizing that even T-SQL isn’t perfect as a relational language and understanding how to use it relationally.

When I say “not thinking in procedural terms,” I’m talking about generally avoiding iterative constructs such as cursors and loops, in which you process data one row at a time in specific order. Remember that tables in T-SQL are supposed to represent sets of rows; sets need to be handled as a whole, and they have no particular order to their elements. I won’t tell you to never touch a cursor, because I recognize that there are cases in which, for practical reasons, it’s difficult to avoid them — however, you should consider a set-based approach by default.

As I mentioned, a set has no order, so you need to remember that neither tables nor views are ordered. This sounds easy, but in reality it’s very difficult for some people to accept this fact. You need to realize that regardless of what kind of physical structures exist, such as indexing, when you query a table or a view, you aren’t supposed to assume any particular physical order of processing the rows. Also, unless you add an ORDER BY clause to the outermost query, the order of the rows returned isn’t guaranteed. For these reasons, attempts to create a “sorted view” — for example, by adding TOP (100) PERCENT to the query — are wrong from a relational perspective and aren’t guaranteed to return rows from a view in any particular order when the outer query against the view doesn’t have an ORDER BY clause.

You must understand that the T-SQL concepts of table, row, and column have very different meanings from file, record, and field, which people often use mistakenly.

After you stop thinking in iterative, ordered, one-row-at-a-time terms and start thinking in terms of sets, you’ve taken the most important leap. A second level of maturity in programming with T-SQL is to realize that T-SQL isn’t perfect as a relational language, but it provides you with enough tools that you can use it relationally if you understand the relational model.  You need to understand the difference between the T-SQL concepts of table, column, and row and their relational counterparts: relation, attribute, and tuple. (A full discussion of this topic is beyond the scope of this article.)

Think of NULLs and the Three-Valued Logic

NULLs are a source of quite a lot of complexity and confusion in T-SQL. In essence, a NULL is supposed to represent a missing value that could be either applicable (exists but we don’t know what it is) or inapplicable (doesn’t exist, or is irrelevant). To start with terminology, NULL is a mark for a missing value — not a value — therefore, saying “NULL value” is incorrect. The terminology you want to use is either just a NULL or a NULL mark. Also, a predicate that compares something with a NULL — even another NULL — yields neither true nor false, but instead unknown.

One of the most important best practices in T-SQL is that whenever you write T-SQL code, you should always be mindful of NULLs, as well as the fact that interaction with NULLs is special and sometimes actually counterintuitive. So, for every piece of code you write with T-SQL, you should ask yourself whether NULLs are possible in the data you’re interacting with (e.g., filter, sort, group); if they are, make sure to include specific tests for NULLs in your QA and test environments. Sometimes your findings about the behavior of NULLs will require you to change the code in order for it to behave the way you need it to.

For example, you need to remember that query filters such as ON, WHERE, and HAVING return a row only if the predicate evaluates to true and don’t return the row if the predicate evaluates to either false or unknown. Consider the following example query against a sample database called TSQL2012:

USE TSQL2012;

SELECT custid, country, region, city
FROM Sales.Customers
WHERE region <> N'WA';


Some places don’t have three levels in the location hierarchy — just two (country and city) — in which case the region is NULL, meaning that it’s inapplicable. The example query attempts to return all customers that aren’t from the region WA, which is supposed to also include locations where the region is simply inapplicable. However, any comparison with NULL (including the operator <>) yields unknown, and a query filter filters out unknowns. So this query will return only customers where the region is known to be different than WA and not NULL. To fix this problem and include customers from locations where the region is inapplicable, you need to add logic to the query, like so:

SELECT custid, country, region, city
FROM Sales.Customers
WHERE region <> N'WA'
   OR region IS NULL;

Also, you need to keep in mind that some language constructs filter out unknowns (e.g., query filters), whereas some don’t (e.g., a CHECK constraint).

If you always think about NULLs and ensure that your tests always include NULLs, you’ll quickly learn that using a NOT IN predicate to look for non-matches is very dangerous when one of the elements in the subquery can be NULL. For example, suppose that you’re writing the following query to return customers who didn’t place orders:

SELECT custid FROM Sales.Customers
WHERE custid NOT IN
    (SELECT custid FROM Sales.Orders);

If any order has a NULL custid value, this query will return an empty set, even when there are customers who didn’t place orders. That’s because as far as the language is concerned, when a custid value doesn’t appear in the set of present values, the NULL could still represent that custid value. So, there’s no assurance that it’s not in the inner set. It’s very tricky logic that results from three-valued logic. After you identify the problem, you can determine a solution. For example, you can explicitly eliminate the NULLs in the subquery with a filter saying WHERE custid IS NOT NULL, and therefore you end up comparing the outer custid value only with present custid values from the inner set. Another option is to use NOT EXISTS instead because it relies on two-valued logic.
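A sketch of the NOT EXISTS alternative against the same tables:

```sql
SELECT C.custid
FROM Sales.Customers AS C
WHERE NOT EXISTS
    (SELECT * FROM Sales.Orders AS O
     WHERE O.custid = C.custid);
```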

The specific example is less important than the general recommended best practice: Be mindful of NULLs for every piece of code you write in T-SQL. If NULLs are possible in the data, make sure you include them in your tests. And if the default language treatment isn’t the one you need, make sure you make the correct adjustments to make it behave the way you need.


Avoid NOLOCK

The use of the NOLOCK query hint (equivalent to using the read uncommitted isolation level) is one of the most common practices in T-SQL yet one of the worst. The practice is so common because it makes queries run faster. Concurrency improves because with NOLOCK, your queries don't request shared locks and therefore run into fewer blocking conflicts because of lock incompatibility. But at what cost?

Most people’s perception is that the effect of using NOLOCK is the possibility of getting uncommitted reads (aka dirty reads), which is true. Some consider that in certain circumstances it’s OK to get uncommitted reads. However, there’s more to NOLOCK than just uncommitted reads. As it turns out, when the storage engine gets an instruction from the query processor to perform some activity (e.g., an index scan with an ordered: false property), sometimes the storage engine has multiple ways to go about carrying out the instruction. Typically, the storage engine will first prioritize the correctness expectations from the request before performance. But when using the NOLOCK option, the storage engine will assume your main focus is speed and will therefore prefer the faster option. This can result in double-reading or skipping rows in cases that would typically not be allowed to happen. In short, you get data consistency problems with NOLOCK well beyond getting uncommitted reads.

If you have performance problems related to concurrency when using the read committed isolation, and you’re using SQL Server 2005 or later, you should consider using the newer read committed snapshot isolation. It has some overhead for writes, but it doesn’t suffer from the aforementioned consistency problems — and it typically solves the concurrency problems.

Adopt Good Coding Practices

There are some coding best practices that experienced database practitioners have heard a thousand times, but they’re still so important that I want to make sure I mention them.

Terminating statements with a semicolon. In the past, T-SQL didn’t have a requirement to terminate all statements with a semicolon. However, standard SQL requires it. Also, with each new version of SQL Server, this becomes a requirement for more and more specific cases to avoid parsing ambiguity. SQL Server 2008 already added “not terminating statements with a semicolon” to the list of deprecated features. So do yourself a favor and if you haven’t done so already, start getting used to terminating all of your statements.

Avoid *. Using * in the SELECT list is a bad practice in most cases. Changes in the underlying table definition might cause problems for applications that don’t expect them and in table expressions such as views defined based on the query. You also want to be able to rely efficiently on covering indexes when you’re truly interested in only a subset of the columns.

Schema-qualified object names. Always use schema-qualified object names (Sales.Orders as opposed to just Orders) both when creating and when querying them. Different users can have different default schemas. Also, even though the savings is very minor and probably negligible, explicitly indicating a schema means you save the cost of the instructions involved in implicit resolution. Furthermore, a query plan produced by a user with one default schema can’t be reused by a user with a different default schema. You’ll get multiple optimizations with multiple unnecessary execution plans in cache.

Have a style and use correct casing. I find it very important to use the correct case for identifiers of objects and columns. I’ve seen a lot of code in which people weren’t mindful of this rule; in case-insensitive environments, it typically doesn’t matter much. However, what if at some point in the future you need to make your databases case-sensitive? Why not write in a way that won’t get you into trouble if such changes are required in the future? Also, code that uses different casing for the same identifiers in different places doesn’t look good. Consistency can go a long way in code readability and maintainability.

For similar reasons, I find it important to develop some style for your code. Have you seen queries that look like this:
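The unformatted example did not survive here; a reconstruction of the kind of query meant, with no casing, spacing, or line breaks:

```sql
select custid,empid,year(orderdate)as orderyear,sum(qty)as qty from dbo.orders group by custid,empid,year(orderdate);
```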


As opposed to:

SELECT custid, empid, YEAR(orderdate) AS orderyear, SUM(qty) AS qty
FROM dbo.Orders
GROUP BY custid, empid, YEAR(orderdate);

There’s not a specific style that’s better than the rest, but it’s important to have a style that’s readable and natural, and to be consistent with it. Doing so improves the readability and maintainability of your code.

Avoid the old-style join syntax. Some people find it difficult to adopt the ANSI SQL-92 syntax for joins with the JOIN keyword and the ON clause (e.g., FROM T1 INNER JOIN T2 ON T1.keycol = T2.keycol), as opposed to using a comma between the table names and then all predicates in the WHERE clause (e.g., FROM T1, T2 WHERE T1.keycol = T2.keycol). I encourage you to use the newer syntax (although contrary to popular belief, the old-style syntax for cross and inner joins is actually standard and is fully supported by SQL Server). Outer joins never had standard old-style syntax; only cross and inner joins did. Outer joins have only one standard syntax, which is the ANSI SQL-92 syntax; outer joins are the reason the standard bothered creating a new syntax. For consistency’s sake, the standard also adopted the newer syntax for the cross and inner joins, although it didn’t deprecate the old-style syntax for cross and inner joins.

At any rate, there are two main reasons why you should stick to the newer syntax for all joins. One is consistency; it simply doesn’t look good when you try mixing the different styles. Another is the fact that if you use the old-style syntax and forget to indicate the WHERE clause with the join predicate, you get an unintentional cross join. With the newer syntax, if you use an inner join and forget the ON clause, you get a parsing error that forces you to fix the code.

Listener for Availability Group gives error message

Here is the error message I encounter while configuring Listener for AG:

Create failed for Availability Group Listener ‘ListenerTest’.  (Microsoft.SqlServer.Smo)

The WSFC cluster could not bring the Network Name resource with DNS name   online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information.
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. (Microsoft SQL Server, Error: 19471)


As part of configuring the listener, the SQL Server cluster creates an Availability Group service in WSFC; within that service, it binds the AG listener to the IP address and creates one more resource. The reason for the above error message was that the cluster was not able to create the computer object in AD: to configure the network name with the IP, the cluster network name account needs 'Full Control' in AD to create the computer object. Once we grant full control, we should be able to configure the listener without any issues. Here are the steps for granting permissions in AD:

    • On a domain controller, click Start, click Administrative Tools, and then click Active Directory Users and Computers. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    • On the View menu, make sure that Advanced Features is selected. When Advanced Features is selected, you can see the Security tab in the properties of accounts (objects) in Active Directory Users and Computers.
    • Right-click the default Computers container or the default container in which computer accounts are created in your domain, and then click Properties. Computers is located in Active Directory Users and Computers/domain node/Computers.
    • On the Security tab, click Advanced.
    • Click Add, type the name of the account that was created or obtained in step 1, and then click OK.
    • In the Permission Entry for Container dialog box, locate the Create Computer objects and Full Control permissions, and make sure that the Allow check box is selected for each one.

And there you have it!

About Table Hints

Sometimes your query executes very slowly. One way to increase query performance is to use SQL Server table hints. Table hints specify a locking method to be used by the query optimizer with a table, the locking granularity for a table, or one or more indexes to be used by the query optimizer when querying the table. To use table hints, specify the WITH keyword and the hint list after the table name in the FROM clause; several table hints can be separated by commas. If you want to use a table hint, do not forget to test the query with the hint and without it and to compare the results. Because the SQL Server query optimizer usually proposes the best execution plan, you should test the query with the table hint and use the hint only when it provides the best result. If the better execution plan provided by the table hint is not stable, avoid the hint and rely on the SQL Server query optimizer.

Use the NOLOCK or READUNCOMMITTED table hint if you need the highest concurrency.
These hints specify that dirty reads are allowed: no shared locks are issued and no exclusive locks are honored.
The NOLOCK hint means the statement does not take out locks (and ignores existing ones), so it may return dirty data that other transactions have not yet committed. Use this hint to get results quickly only when you are certain that dirty data is acceptable.
Avoid it in the source tables of DELETE or UPDATE statements, as it can produce incorrect results.
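This example shows how you can use the NOLOCK table hint (a minimal sketch using the pubs sample database's titles table, like the other samples in this article):

SELECT * FROM titles WITH (NOLOCK)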

Consider using the READCOMMITTED table hint to specify that statements cannot read data that has been modified but not committed by other transactions.
The READCOMMITTED table hint specifies that read operations comply with the rules of the READ COMMITTED isolation level, using either locking or row versioning. When the database option READ_COMMITTED_SNAPSHOT is OFF (the default), shared locks are used; when READ_COMMITTED_SNAPSHOT is ON, row versions are used instead of shared locks.
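This example shows how you can use the READCOMMITTED table hint (a sketch against the pubs sample titles table):

SELECT * FROM titles WITH (READCOMMITTED)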
Use the REPEATABLEREAD table hint if you need more consistency than the default (READ COMMITTED) provides, and higher concurrency than SERIALIZABLE.
By default, SQL Server uses the READ COMMITTED isolation level and takes shared locks while reading data. This ensures that uncommitted data will not be read, but it does not ensure that the data will remain unchanged until the end of the transaction. If you need the data you have read to stay unchanged until the transaction completes, use the REPEATABLEREAD table hint.
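This example shows how you can use the REPEATABLEREAD table hint (a sketch against the pubs sample titles table):

SELECT * FROM titles WITH (REPEATABLEREAD)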


Consider using the NOWAIT table hint if you need a message returned as soon as a lock is encountered on the table.
When the NOWAIT table hint is used, SQL Server returns an error message as soon as a lock is encountered on the table, rather than waiting for it to be released. Using this hint is equivalent to specifying SET LOCK_TIMEOUT 0 for a single table.
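In this example (a sketch using the pubs sample authors table, as elsewhere in this article), SQL Server will return a message as soon as a lock is encountered on the authors table:

SELECT * FROM authors WITH (NOWAIT)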

Use the INDEX hint if you need to specify the name or ID of the index(es) to be used by SQL Server when processing the statement.
This example shows how you can use the INDEX hint:
SELECT au_fname FROM authors WITH (INDEX(0))
If a clustered index exists, INDEX(0) forces a clustered index scan; if no clustered index exists, INDEX(0) forces a table scan.
Note: You can use only one INDEX hint per table, but multiple indexes can be specified in that single hint list.


Consider using the XLOCK table hint to take and hold exclusive locks until the transaction completes. This hint forces exclusive locks on the resources being referenced (it is not compatible with NOLOCK or UPDLOCK).
When you use the XLOCK table hint with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply at that level of granularity.
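This example shows how you can use the XLOCK table hint combined with the PAGLOCK hint (a sketch against the pubs sample titles table):

SELECT * FROM titles WITH (XLOCK, PAGLOCK)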
Use the READPAST table hint if you need to skip locked rows.
When the READPAST table hint is used, rows locked by other transactions are skipped rather than read.
This example reads only the unlocked rows from the titles and titleauthor tables:
SELECT * FROM titles a JOIN titleauthor b
WITH (READPAST) ON a.title_id = b.title_id
Consider using the ROWLOCK table hint.
This hint specifies that a shared row lock will be taken instead of a shared page or table lock.
ROWLOCK provides higher concurrency, but at the cost of lower performance, because SQL Server must hold and release locks at the row level.
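This example shows how you can use the ROWLOCK table hint (a sketch following the join pattern of the article's other samples):

SELECT * FROM titles a JOIN titleauthor b
WITH (ROWLOCK) ON a.title_id = b.title_id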

Use the HOLDLOCK or SERIALIZABLE table hint if you need the highest consistency.
These hints specify that shared locks will be held until the transaction completes.
By default, shared locks are released as soon as the required table or data page is no longer needed. Holding them to the end of the transaction gives higher consistency, but at the cost of lower concurrency.
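This example shows how you can use the HOLDLOCK table hint (a sketch against the pubs sample titles table):

SELECT * FROM titles WITH (HOLDLOCK)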
Consider using the PAGLOCK table hint.
The PAGLOCK hint specifies that a shared page lock will be taken instead of a table lock.
This example shows how you can use the PAGLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (PAGLOCK) ON a.title_id = b.title_id
Consider using the UPDLOCK table hint. This hint forces update locks to be taken instead of shared locks (it is not compatible with NOLOCK or XLOCK).
The UPDLOCK hint specifies that update locks are taken while reading the table and held until the end of the Transact-SQL statement or the end of the transaction.
This example shows how you can use the UPDLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (UPDLOCK) ON a.title_id = b.title_id
Use the FASTFIRSTROW table hint if you need to return the first row quickly.
You can return the first row and begin working with it while the query continues executing and produces its full result set. The FASTFIRSTROW table hint has the same effect as specifying the FAST 1 query hint. Keep in mind that when you use the FASTFIRSTROW hint, the rest of the rows are often returned more slowly than they would have been if the hint had not been specified.
This example quickly returns the first row from the titles and titleauthor tables:
SELECT * FROM titles a JOIN titleauthor b
WITH (FASTFIRSTROW) ON a.title_id = b.title_id
Consider using the TABLOCK table hint.
The TABLOCK hint specifies that a shared lock will be taken on the table until the end of
the Transact-SQL statement.
This example shows how you can use the TABLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (TABLOCK) ON a.title_id = b.title_id


Consider using the TABLOCKX table hint.
The TABLOCKX hint specifies that an exclusive lock will be taken on the table until the end of the Transact-SQL statement or the end of the transaction. TABLOCKX provides the worst concurrency, but SQL Server does the minimum amount of work to hold and release locks.
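This example shows how you can use the TABLOCKX table hint (a sketch against the pubs sample titles table):

SELECT * FROM titles WITH (TABLOCKX)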
Finally, consider the KEEPIDENTITY table hint. It applies to the OPENROWSET function's BULK insert functionality and affects how rows are inserted into a table with an IDENTITY column: when you use this hint, SQL Server uses the identity values from the data file instead of generating its own.

Piece of Advice:
If your query is very slow, try the following before resorting to table hints:
- rebuild the indexes used in the query (or defragment them using the DBCC INDEXDEFRAG command),
- update statistics on the relevant tables,
- consider creating new indexes for the query,
- rewrite the query to produce a better execution plan.
Use table hints only when the actions above do not provide good performance.