Change Column Size

The examples below will help you change the size of a column in SQL Server or MySQL.

Changing the size of a column with a SQL query

The syntax for changing the size of a column with a T-SQL query (SQL Server) is:

ALTER TABLE [table_name] ALTER COLUMN [column_name] [data_type]

For example, if a database contains a table "Students" with a column "Name" that is nvarchar(10) and you want to change it to nvarchar(100), the following query will do the job:

ALTER TABLE Students ALTER COLUMN Name nvarchar(100)

Changing the size of a column in SQL Server with SQL Server Management Studio

1) Right-click on a table

2) Click on Design

3) Change datatype of column(s)

Changing the size of a column in MySQL

1) Log in to phpMyAdmin and go to the database

2) In the sidebar, select the table whose field length you want to change

3) Click Structure in the top menu

4) Click Change in the Action column for the field whose varchar length you want to change

5) Set field Length/Values to desired value

6) Save the changes
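If you would rather run SQL than use the phpMyAdmin UI, note that MySQL uses MODIFY instead of ALTER COLUMN; a minimal sketch, assuming the same Students/Name example as above:

-- MySQL: MODIFY redefines the column, so repeat the full data type
ALTER TABLE Students MODIFY Name VARCHAR(100);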

How to Return Bool From Stored Procedure

As developers, we often work with SQL Server stored procedures. Stored procedures can serve many purposes; one of them is to perform some logical operation and return a true or false value. The problem is that T-SQL has no boolean data type, so the closest thing a stored procedure can return is a bit.

Here is an example of a stored procedure that returns a bit value that can be used in place of a boolean:

USE [Database]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[spReturnValue]
AS
    DECLARE @result BIT
   
    SET @result = 1
   
    RETURN (@result)
   
GO

To call the stored procedure, use the following T-SQL:

USE [Database]
GO
DECLARE @return_value BIT
EXEC @return_value = [dbo].[spReturnValue]
SELECT 'Return Value' = @return_value
GO
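Keep in mind that RETURN always sends back an integer, so the bit above is implicitly converted on the way out. If you want a genuine BIT delivered to the caller, one alternative (a minimal sketch with a hypothetical procedure name) is an OUTPUT parameter:

CREATE PROCEDURE [dbo].[spReturnBit]
    @result BIT OUTPUT   -- the caller receives a real BIT here
AS
    SET @result = 1
GO

DECLARE @b BIT
EXEC [dbo].[spReturnBit] @result = @b OUTPUT
SELECT 'Return Value' = @b
GO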

The Problem with READ COMMITTED

Are you still using the READ COMMITTED transaction isolation level? The default isolation level in all SQL Server versions (2000-2014) has serious inconsistency problems, by design. Not many people are familiar with how and why it happens. Below is demo code to reproduce it, and a solution to the problem.

How does READ COMMITTED work?

Under the default READ COMMITTED isolation level, shared locks on rows/pages are released as soon as the row is read. The consequence is two anomalies at the single-statement level, which some call "inconsistent analysis":

- reading the same row multiple times

- legitimate rows are omitted, although they are present the entire time

These are serious, statement-level inconsistencies. A single, simple SELECT can read the same row twice and bring you duplicates in the result, or it can skip some rows. The worst thing is that you will often not notice at all that the data is inconsistent.

If you are running a SELECT and another transaction updates a row you have already read in a way that moves it ahead of your reading point, you will read that row twice. If the other transaction updates a row you have yet to read in a way that moves it behind your reading point, you will not read that row at all. Such data anomalies are difficult to reproduce because the timing is crucial.

Here is the script:

--==== SETUP -- begin

create database test_artefacts
GO
alter database test_artefacts set recovery simple
alter database test_artefacts set read_committed_snapshot off
alter database test_artefacts set allow_snapshot_isolation off
go
use test_artefacts
go
create table big_table
(
    id int primary key clustered,
    filler char(8000) default('a')
)
go
-- insert 500 000 rows, or a number large enough that selecting all rows from the table takes at least 5 seconds
truncate table big_table
go
insert into big_table(id, filler)
select t.id, t.id
from
(    select top 10000 id = ROW_NUMBER() over(order by (select null)) + isnull((SELECT max(id) from big_table), 0)
    from sys.all_columns a cross join sys.all_columns b
) t
go 50
if OBJECT_ID('tempdb..#t') is not null drop table #t
--select COUNT(*) from big_table -- 500 000
--==== SETUP -- end

-- Start session 1, then session 2 very soon after (about 1 second or less)

--==== SESSION 1 -- begin
-- Copy the 500 000-row table
select * into #t from big_table
go
--==== SESSION 1 -- end

--==== SESSION 2 -- begin
use test_artefacts
go
--sp_spaceused big_table
-- Update the first rows to move them to the end of the clustered index.
-- We are NOT changing the number of records. We do no inserts or deletes.
-- The number of rows should stay constant.
update t set id=id+1000000
from (select top 1 * from dbo.big_table order by id) t
go 123
--==== SESSION 2 -- end

--==== RESULT
select COUNT(*) from #t -- 500 123! We expected 500 000.
select COUNT(*) from big_table -- 500 000!

-- Run the same experiment again, but this time with this code in session 2:
update t set id=-id
from (select top 1 * from dbo.big_table order by id desc) t
go 123

--==== RESULT
select COUNT(*) from #t -- 499 877! We expected 500 000.
select COUNT(*) from big_table -- 500 000!
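As for the promised solution: a common fix is to switch the database to READ COMMITTED SNAPSHOT, so single-statement reads work from row versions instead of releasing shared locks mid-scan. A minimal sketch against the demo database above:

--==== FIX
alter database test_artefacts set read_committed_snapshot on with rollback immediate
go
-- With row versioning the single-statement scan sees a consistent point-in-time view,
-- so "select * into #t from big_table" returns exactly 500 000 rows.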

 

MDX Best Practice when it comes to commenting queries

I see a lot of customer MDX queries where comments are used to help explain complex logic inside an MDX query, or to remove a block that is not needed.

Please avoid using the single line style of comment as below:

SELECT -- Reseller Sales Amount by Quarter
[Measures].[Reseller Sales Amount] ON 0
,[Date].[Calendar].[Calendar Quarter].Members ON 1
FROM [Adventure Works];

This causes havoc for diagnostics and load testing, because when we capture a trace the entire query is concatenated onto one line, as below:

SELECT -- Reseller Sales Amount by Quarter [Measures].[Reseller Sales Amount] ON 0 ,[Date].[Calendar].[Calendar Quarter].Members ON 1 FROM [Adventure Works];

A better approach is to use the start and end comment block approach, so that when the query is reduced to one line it is still valid syntax.

SELECT /*Reseller Sales Amount by Quarter*/ [Measures].[Reseller Sales Amount] ON 0 ,[Date].[Calendar].[Calendar Quarter].Members ON 1 FROM [Adventure Works];

We do have techniques to clean the comments out of the MDX, but the best way I have found so far is to insert CRLFs before known tokens that comments tend to precede (very messy).

As an aside, if you also code in T-SQL on the Database Engine, comments inside query statements are an even bigger no-no. This is because the hash of the entire query, including comments, is used to identify similar query plans. If you have the same query with different comments, the query processor will have to compile multiple plans for it!

So let's recap:

  • Comments in compiled code = good.
  • Comments in the middle of query statements = handle with care or avoid completely.

Buffer Pool and Buffer Cache Hit Ratio

There seems to be some confusion surrounding the Buffer Cache Hit Ratio and the SQL Server Buffer Cache. Having answered several questions about this over the last few weeks on a variety of forums, I thought I would post here in order to set the record straight.

What is the SQL Server Buffer Cache?

The Buffer Cache (sometimes called the Data Cache) is an area of the SQL Server Buffer Pool (which in turn is part of the overall SQL Server Process space) and in simple terms is the amount of your database data (data pages) that are stored in memory.

What is Buffer Cache Hit Ratio?

The buffer cache hit ratio performance measure expresses, as a percentage, how often a request for a database data page can be served from the Buffer Pool. The alternative is for SQL Server to fetch the data page from disk.

So for example, if you have a 500MB database on a server with 4GB of RAM and SQL Server is configured to use all available memory, you will have a very good Buffer Cache Hit Ratio of 99% or above, because your entire database can easily fit into memory.

How can I determine the Buffer Cache Hit Ratio of my server?

In order to determine your Buffer Cache Hit Ratio you can use the Windows Performance Monitor tool.

  1. Just go to: Start – Programs –  Administrative Tools – Performance
  2. Right click the graph and choose: Add Counters.
  3. Select Performance object: SQLServer:Buffer Manager 
  4. Add: Buffer Cache Hit Ratio.
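If you prefer T-SQL to Performance Monitor, the same counter can be read from the sys.dm_os_performance_counters DMV; a minimal sketch (the ratio has to be computed from the counter and its base):

SELECT (a.cntr_value * 1.0 / b.cntr_value) * 100.0 AS BufferCacheHitRatio
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND a.[object_name] LIKE '%Buffer Manager%';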

How can I get more detailed information about SQL Server's overall memory usage?

In order to get a detailed report of SQL Server's memory usage I like to use the DBCC statement:

DBCC MEMORYSTATUS

 

Thanks and more later!

T-SQL Best Practices

Every programming language has best and worst practices, and T-SQL is no exception. Some best practices in T-SQL are similar to those applicable to procedural languages, but most actually have to do with the fact that T-SQL is based on the relational model. This post will  highlight several important T-SQL best practices. (However, please don’t consider my list as complete.)

Think in Sets

Thinking in sets, or more precisely, in relational terms, is probably the most important best practice when writing T-SQL code. Many people start coding in T-SQL after having some background in procedural programming. Often, at least at the early stages of coding in this new environment, you don’t really think in relational terms, but rather in procedural terms. That’s because it’s easier to think of a new language as an extension to what you already know as opposed to thinking of it as a different thing, which requires adopting the correct mindset.

If that’s not enough, there are two levels of adaptation you need to go through. The first is not thinking in procedural terms, but rather in T-SQL terms. The second is realizing that even T-SQL isn’t perfect as a relational language and understanding how to use it relationally.

When I say “not thinking in procedural terms,” I’m talking about generally avoiding iterative constructs such as cursors and loops, in which you process data one row at a time in specific order. Remember that tables in T-SQL are supposed to represent sets of rows; sets need to be handled as a whole, and they have no particular order to their elements. I won’t tell you to never touch a cursor, because I recognize that there are cases in which, for practical reasons, it’s difficult to avoid them — however, you should consider a set-based approach by default.

As I mentioned, a set has no order, so you need to remember that neither tables nor views are ordered. This sounds easy, but in reality it’s very difficult for some people to accept this fact. You need to realize that regardless of what kind of physical structures exist, such as indexing, when you query a table or a view, you aren’t supposed to assume any particular physical order of processing the rows. Also, unless you add an ORDER BY clause to the outermost query, the order of the rows returned isn’t guaranteed. For these reasons, attempts to create a “sorted view” — for example, by adding TOP (100) PERCENT to the query — are wrong from a relational perspective and aren’t guaranteed to return rows from a view in any particular order when the outer query against the view doesn’t have an ORDER BY clause.
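To illustrate with a minimal sketch (the view name here is hypothetical): only the ORDER BY on the outermost query guarantees the order of the returned rows, no matter how the view itself was written:

-- The TOP (100) PERCENT ... ORDER BY inside a view definition is not a presentation order;
-- only this outer ORDER BY guarantees the order of the result.
SELECT orderid, orderdate
FROM Sales.MyOrdersView   -- hypothetical view
ORDER BY orderdate;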

You must understand that the T-SQL concepts of table, row, and column have very different meanings from file, record, and field, which people often use mistakenly.

After you stop thinking in iterative, ordered, one-row-at-a-time terms and start thinking in terms of sets, you’ve taken the most important leap. A second level of maturity in programming with T-SQL is to realize that T-SQL isn’t perfect as a relational language, but it provides you with enough tools that you can use it relationally if you understand the relational model.  You need to understand the difference between the T-SQL concepts of table, column, and row and their relational counterparts: relation, attribute, and tuple. (A full discussion of this topic is beyond the scope of this article.)

Think of NULLs and the Three-Valued Logic

NULLs are a source of quite a lot of complexity and confusion in T-SQL. In essence, a NULL is supposed to represent a missing value that could be either applicable (exists but we don’t know what it is) or inapplicable (doesn’t exist, or is irrelevant). To start with terminology, NULL is a mark for a missing value — not a value — therefore, saying “NULL value” is incorrect. The terminology you want to use is either just a NULL or a NULL mark. Also, a predicate that compares something with a NULL — even another NULL — yields neither true nor false, but instead unknown.

One of the most important best practices in T-SQL is that whenever you write T-SQL code, you should always be mindful of NULLs, as well as the fact that interaction with NULLs is special and sometimes actually counterintuitive. So, for every piece of code you write with T-SQL, you should ask yourself whether NULLs are possible in the data you’re interacting with (e.g., filter, sort, group); if they are, make sure to include specific tests for NULLs in your QA and test environments. Sometimes your findings about the behavior of NULLs will require you to change the code in order for it to behave the way you need it to.

For example, you need to remember that query filters such as ON, WHERE, and HAVING return a row only if the predicate evaluates to true and don’t return the row if the predicate evaluates to either false or unknown. Consider the following example query against a sample database called TSQL2012:

USE TSQL2012;

SELECT custid, country, region, city
FROM Sales.Customers
WHERE region <> N'WA';

Some places don’t have three levels in the location hierarchy — just two (country and city) — in which case the region is NULL, meaning that it’s inapplicable. The example query attempts to return all customers that aren’t from the region WA, which is supposed to also include locations where the region is simply inapplicable. However, any comparison with NULL (including the operator <>) yields unknown, and a query filter filters out unknowns. So this query will return only customers where the region is known to be different than WA and not NULL. To fix this problem and include customers from locations where the region is inapplicable, you need to add logic to the query, like so:

SELECT custid, country, region, city
FROM Sales.Customers
WHERE region <> N'WA'
   OR region IS NULL;

Also, you need to keep in mind that some language constructs filter out unknowns (e.g., query filters), whereas some don’t (e.g., a CHECK constraint).
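To make that concrete, here is a minimal sketch with a hypothetical table: a WHERE filter keeps only rows for which the predicate is true, whereas a CHECK constraint rejects only rows for which it is false, so an unknown result lets the row in:

CREATE TABLE dbo.DemoCheck (qty INT CHECK (qty > 0));   -- hypothetical table
INSERT INTO dbo.DemoCheck (qty) VALUES (NULL);          -- accepted: the predicate is unknown, not false
SELECT qty FROM dbo.DemoCheck WHERE qty > 0;            -- returns nothing: the filter keeps only true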

If you always think about NULLs and ensure that your tests always include NULLs, you’ll quickly learn that using a NOT IN predicate to look for non-matches is very dangerous when one of the elements in the subquery can be NULL. For example, suppose that you’re writing the following query to return customers who didn’t place orders:

SELECT custid
FROM Sales.Customers
WHERE custid NOT IN
    (SELECT custid FROM Sales.Orders);

If any order has a NULL custid value, this query will return an empty set, even when there are customers who didn’t place orders. That’s because as far as the language is concerned, when a custid value doesn’t appear in the set of present values, the NULL could still represent that custid value. So, there’s no assurance that it’s not in the inner set. It’s very tricky logic that results from three-valued logic. After you identify the problem, you can determine a solution. For example, you can explicitly eliminate the NULLs in the subquery with a filter saying WHERE custid IS NOT NULL, and therefore you end up comparing the outer custid value only with present custid values from the inner set. Another option is to use NOT EXISTS instead because it relies on two-valued logic.
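For reference, here is roughly what the NOT EXISTS form looks like; because EXISTS relies on two-valued logic, NULL custid values in Sales.Orders cannot eliminate the outer rows the way NOT IN can:

SELECT custid
FROM Sales.Customers AS C
WHERE NOT EXISTS
    (SELECT * FROM Sales.Orders AS O
     WHERE O.custid = C.custid);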

The specific example is less important than the general recommended best practice: Be mindful of NULLs for every piece of code you write in T-SQL. If NULLs are possible in the data, make sure you include them in your tests. And if the default language treatment isn’t the one you need, make sure you make the correct adjustments to make it behave the way you need.

Avoid NOLOCK

The use of the NOLOCK query hint (equivalent to using the read uncommitted isolation level) is one of the most common practices in T-SQL yet one of the worst. The practice is so common because it makes queries run faster. Concurrency improves because with NOLOCK, your queries don’t request shared locks and therefore run into fewer blocking conflicts because of lock incompatibility. But at what cost?

Most people’s perception is that the effect of using NOLOCK is the possibility of getting uncommitted reads (aka dirty reads), which is true. Some consider that in certain circumstances it’s OK to get uncommitted reads. However, there’s more to NOLOCK than just uncommitted reads. As it turns out, when the storage engine gets an instruction from the query processor to perform some activity (e.g., an index scan with an ordered: false property), sometimes the storage engine has multiple ways to go about carrying out the instruction. Typically, the storage engine will first prioritize the correctness expectations from the request before performance. But when using the NOLOCK option, the storage engine will assume your main focus is speed and will therefore prefer the faster option. This can result in double-reading or skipping rows in cases that would typically not be allowed to happen. In short, you get data consistency problems with NOLOCK well beyond getting uncommitted reads.

If you have performance problems related to concurrency when using the read committed isolation, and you’re using SQL Server 2005 or later, you should consider using the newer read committed snapshot isolation. It has some overhead for writes, but it doesn’t suffer from the aforementioned consistency problems — and it typically solves the concurrency problems.

Adopt Good Coding Practices

There are some coding best practices that experienced database practitioners have heard a thousand times, but they’re still so important that I want to make sure I mention them.

Terminating statements with a semicolon. In the past, T-SQL didn’t have a requirement to terminate all statements with a semicolon. However, standard SQL requires it. Also, with each new version of SQL Server, this becomes a requirement for more and more specific cases to avoid parsing ambiguity. SQL Server 2008 already added “not terminating statements with a semicolon” to the list of deprecated features. So do yourself a favor and if you haven’t done so already, start getting used to terminating all of your statements.
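One classic case where the terminator matters is a statement that precedes a common table expression; a minimal sketch:

DECLARE @today DATE = SYSDATETIME();   -- terminated, so the WITH that follows parses as a CTE
WITH C AS (SELECT 1 AS n)
SELECT n FROM C;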

Avoid *. Using * in the SELECT list is a bad practice in most cases. Changes in the underlying table definition might cause problems for applications that don’t expect them and in table expressions such as views defined based on the query. You also want to be able to rely efficiently on covering indexes when you’re truly interested in only a subset of the columns.

Schema-qualified object names. Always use schema-qualified object names (Sales.Orders as opposed to just Orders) both when creating and when querying them. Different users can have different default schemas. Also, even though the savings is very minor and probably negligible, explicitly indicating a schema means you save the cost of the instructions involved in implicit resolution. Furthermore, a query plan produced by a user with one default schema can’t be reused by a user with a different default schema. You’ll get multiple optimizations with multiple unnecessary execution plans in cache.

Have a style and use correct casing. I find it very important to use the correct case for identifiers of objects and columns. I’ve seen a lot of code in which people weren’t mindful of this rule; in case-insensitive environments, it typically doesn’t matter much. However, what if at some point in the future you need to make your databases case-sensitive? Why not write in a way that won’t get you into trouble if such changes are required in the future? Also, code that uses different casing for the same identifiers in different places doesn’t look good. Consistency can go a long way in code readability and maintainability.

For similar reasons, I find it important to develop some style for your code. Have you seen queries that look like this:

SELECT CUSTID,EMPID, YEAR(ORDERDATE) ORDERYEAR,SUM(QTY) QTY FROM ORDERS GROUP BY CUSTID, EMPID, YEAR(ORDERDATE)

As opposed to:

SELECT custid, empid, YEAR(orderdate) AS orderyear, SUM(qty) AS qty FROM dbo.Orders GROUP BY custid, empid, YEAR(orderdate);

There’s not a specific style that’s better than the rest, but it’s important to have a style that’s readable and natural, and to be consistent with it. Doing so improves the readability and maintainability of your code.

Avoid the old-style join syntax. Some people find it difficult to adopt the ANSI SQL-92 syntax for joins with the JOIN keyword and the ON clause (e.g., FROM T1 INNER JOIN T2 ON T1.keycol = T2.keycol), as opposed to using a comma between the table names and then all predicates in the WHERE clause (e.g., FROM T1, T2 WHERE T1.keycol = T2.keycol). I encourage you to use the newer syntax (although contrary to popular belief, the old-style syntax for cross and inner joins is actually standard and is fully supported by SQL Server). Outer joins never had standard old-style syntax; only cross and inner joins did. Outer joins have only one standard syntax, which is the ANSI SQL-92 syntax; outer joins are the reason the standard bothered creating a new syntax. For consistency’s sake, the standard also adopted the newer syntax for the cross and inner joins, although it didn’t deprecate the old-style syntax for cross and inner joins.

At any rate, there are two main reasons why you should stick to the newer syntax for all joins. One is consistency; it simply doesn’t look good when you try mixing the different styles. Another is the fact that if you use the old-style syntax and forget to indicate the WHERE clause with the join predicate, you get an unintentional cross join. With the newer syntax, if you use an inner join and forget the ON clause, you get a parsing error that forces you to fix the code.
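A minimal sketch of the second point, reusing the T1 and T2 tables from the examples above:

SELECT * FROM T1, T2;                  -- forgot the WHERE: runs anyway and returns a cross join
-- SELECT * FROM T1 INNER JOIN T2;     -- forgot the ON: a parsing error forces you to fix the code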

Listener for Availability Group gives error message

Here is the error message I encounter while configuring Listener for AG:

Create failed for Availability Group Listener ‘ListenerTest’.  (Microsoft.SqlServer.Smo)

The WSFC cluster could not bring the Network Name resource with DNS name   online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information.
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator. (Microsoft SQL Server, Error: 19471)

Resolution:

As part of configuring the listener, the SQL Server cluster creates an Availability Group service in WSFC; within that service it binds the AG listener to an IP address and creates one more resource. The error above occurred because the cluster could not create the computer object for the listener name in Active Directory. To configure the network name with the IP, the cluster network name account needs 'Full control' in AD to create the computer object; once we grant full control we should be able to configure the listener without any issues. Here are the steps for granting the permissions in AD:

    • On a domain controller, click Start, click Administrative Tools, and then click Active Directory Users and Computers. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    • On the View menu, make sure that Advanced Features is selected. When Advanced Features is selected, you can see the Security tab in the properties of accounts (objects) in Active Directory Users and Computers.
    • Right-click the default Computers container or the default container in which computer accounts are created in your domain, and then click Properties. Computers is located in Active Directory Users and Computers/domain node/Computers.
    • On the Security tab, click Advanced.
    • Click Add, type the name of the account that was created or obtained in step 1, and then click OK.
    • In the Permission Entry for container dialog box, locate the Create Computer objects and Full Control permissions, and make sure that the Allow check box is selected for each one.

And there you have it!

About Table Hints

Sometimes your query executes very slowly. One way to improve a query's performance is to use SQL Server table hints. A table hint specifies a locking method for the query optimizer to use with the table, the locking granularity for the table, or one or more indexes to be used by the query optimizer when querying the table. To use table hints, specify the WITH keyword and the hint list after the table name in the FROM clause; you can specify several table hints separated by commas. If you do set a table hint, do not forget to test the query with and without the hint and compare the results. Because the SQL Server query optimizer usually proposes the best execution plan, you should test the query with the table hint and use the hint only when it provides a better result. If the improvement from the table hint is not stable, avoid the hint and rely on the SQL Server query optimizer.
NOLOCK or READUNCOMMITTED:

Use the NOLOCK or READUNCOMMITTED table hint if you need the highest concurrency.
These hints specify that dirty reads can be allowed. When these hints are used, no shared locks are issued and no exclusive locks are honored.
This example shows how you can use the NOLOCK table hint:
SELECT * FROM titles WITH (NOLOCK)
The NOLOCK hint means the statement does not take locks and does not honor existing locks, so it can return dirty data (not yet committed by other transactions). Use this hint to get results quickly only when you are sure that dirty data is acceptable.
In DELETE/UPDATE queries it should be avoided completely, as it can produce incorrect results.
READCOMMITTED:

Consider using the READCOMMITTED table hint to specify that statements cannot read data that has been modified but not committed by other transactions.
The READCOMMITTED table hint specifies that read operations comply with the rules of the READ COMMITTED isolation level by using either locking or row versioning. When the database option READ_COMMITTED_SNAPSHOT is OFF (the default), shared locks are used; when READ_COMMITTED_SNAPSHOT is ON, row versions are used instead of shared locks.
This example shows how you can use the READCOMMITTED table hint:
SELECT * FROM authors WITH (READCOMMITTED)
REPEATABLEREAD:
Use the REPEATABLEREAD table hint if you need more consistency than the default (READ COMMITTED) and higher concurrency than with SERIALIZABLE.
By default, SQL Server uses the READ COMMITTED isolation level. With this isolation level SQL Server uses shared locks while reading data. It ensures that uncommitted data will not be read, but it does not ensure that the data will not be changed before the end of the transaction. If you need the data you read to remain unchanged until the end of the transaction, you can use the REPEATABLEREAD table hint.
This example shows how you can use the REPEATABLEREAD table hint:
SELECT * FROM authors WITH (REPEATABLEREAD)

NOWAIT:

Consider using the NOWAIT table hint if you need an error message returned as soon as a lock is encountered on the table.
When the NOWAIT table hint is used, SQL Server returns a message as soon as a lock is encountered on the table. Using this hint is equivalent to using SET LOCK_TIMEOUT 0 for a specific table.
In this example, SQL Server  will return a message when a lock is encountered on the
authors table:
SELECT * FROM authors WITH (NOWAIT)
INDEX:

Use the INDEX hint if you need to specify the name or ID of the index(es) to be used by SQL Server when processing the statement.
This example shows how you can use the INDEX hint:
SELECT au_fname FROM authors WITH (INDEX(0))
If a clustered index exists, INDEX(0) forces a clustered index scan; if no clustered index exists, INDEX(0) forces a table scan.
Note: You can use only one index hint per table, but multiple indexes can be listed in the single hint list.

XLOCK:

Consider using the XLOCK table hint to take and hold exclusive locks until the transaction completes. This hint forces exclusive locks on the resources being referenced (not compatible with NOLOCK or UPDLOCK).
When you use the XLOCK table hint with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
This example shows how you can use the XLOCK table hint with the PAGLOCK hint:
SELECT * FROM titles WITH (XLOCK, PAGLOCK)
READPAST:
Use the READPAST table hint if you want locked rows to be skipped rather than waited on.
When the READPAST table hint is used, locked rows are skipped and not read.
This example skips locked rows when reading from the titles and titleauthor tables:
SELECT * FROM titles a JOIN titleauthor b
WITH (READPAST) ON a.title_id = b.title_id
ROWLOCK:
Consider using the ROWLOCK table hint.
This hint specifies that a shared row lock will be taken instead of a shared page or table lock.
ROWLOCK provides higher concurrency, but at the cost of lower performance, because SQL Server must hold and release locks at the row level.
This example shows how you can use the ROWLOCK table hint:
SELECT * FROM titles WITH (ROWLOCK)
HOLDLOCK or SERIALIZABLE:

Use the HOLDLOCK or SERIALIZABLE table hint if you need the highest consistency.
These hints specify that shared locks will be held until the completion of the transaction.
By default, shared locks are released as soon as the required table or data page is no longer needed. Holding them results in higher consistency, but at the cost of lower concurrency.
This example shows how you can use the HOLDLOCK table hint:
SELECT * FROM titles WITH (HOLDLOCK)
PAGLOCK:
Consider using the PAGLOCK table hint.
The PAGLOCK hint specifies that a shared page lock will be taken instead of a table lock.
This example shows how you can use the PAGLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (PAGLOCK) ON a.title_id = b.title_id
UPDLOCK:
Consider using the UPDLOCK table hint. The hint forces update locks instead of shared locks to be taken when reading (not compatible with NOLOCK or XLOCK).
The UPDLOCK hint specifies that update locks are taken while reading the table and that they are held until the end of the Transact-SQL statement or the end of the transaction.
This example shows how you can use the UPDLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (UPDLOCK) ON a.title_id = b.title_id
FASTFIRSTROW:
Use the FASTFIRSTROW table hint if you need the first row returned quickly.
You can quickly return the first row and work with it while the query continues execution and produces its full result set. The FASTFIRSTROW table hint has the same effect as specifying the FAST 1 query hint. Keep in mind that when you use the FASTFIRSTROW hint, the rest of the rows are often returned more slowly than they would have been had the hint not been specified.
This example quickly returns the first row from the titles and titleauthor tables:
SELECT * FROM titles a JOIN titleauthor b
WITH (FASTFIRSTROW) ON a.title_id = b.title_id
TABLOCK:
Consider using the TABLOCK table hint.
The TABLOCK hint specifies that a shared lock will be taken on the table until the end of the Transact-SQL statement.
This example shows how you can use the TABLOCK table hint:
SELECT * FROM titles a JOIN titleauthor b
WITH (TABLOCK) ON a.title_id = b.title_id

TABLOCKX:

Consider using the TABLOCKX table hint.
The TABLOCKX hint specifies that an exclusive lock will be taken on the table until the end of the Transact-SQL statement or the end of the transaction. TABLOCKX provides the worst concurrency, but SQL Server does the minimum work to hold and release locks.
This example shows how you can use the TABLOCKX table hint:
SELECT * FROM authors WITH (TABLOCKX)
KEEPIDENTITY:

This option applies to the OPENROWSET function's BULK insert functionality and affects how rows are inserted into a table with an IDENTITY column. If you use this hint, SQL Server will use the identity values from the data file instead of generating its own.
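A minimal sketch of how the hint is used (the target table, data file, and format file names are placeholders):

INSERT INTO dbo.TargetTable WITH (KEEPIDENTITY)   -- keep the identity values supplied by the file
SELECT *
FROM OPENROWSET(BULK 'C:\data\rows.dat',
                FORMATFILE = 'C:\data\rows.fmt') AS src;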

Piece of Advice:
If your query is very slow, try the following before using table hints:
- rebuild the indexes used in the query (or defragment them using the DBCC INDEXDEFRAG command),
- update statistics on the relevant tables,
- consider creating new indexes for the query,
- rewrite your query to get a better execution plan.
Use table hints only when the actions above do not provide good performance.

How to preview a large string

Often you need to debug a dynamic SQL statement. The simplest way to do this is to print the statement. Because of the 8000-character limit of PRINT (4000 for Unicode), this is hard to do the "standard way".

Here is an example:

DECLARE @Very_big_string NVARCHAR(MAX) = '';
WITH    c AS ( SELECT   ROW_NUMBER() OVER ( ORDER BY o1.object_id, o2.object_id ) RN
               FROM     sys.objects o1
                        CROSS JOIN sys.objects o2
             )
    SELECT TOP 9999
            @Very_big_string = @Very_big_string +
    REPLICATE(0, 4 - LEN(CAST(rn AS NVARCHAR)))
            + CAST(rn AS NVARCHAR) + CHAR(13)
    FROM    c

PRINT @Very_big_string

The last value printed will be 0800: each row takes 5 characters (4 digits plus one CHAR(13) line break), and (4+1)*800 = 4000, the Unicode limit for PRINT.

There are methods to print the string via a UDF, etc. I propose a simpler method: produce XML that contains your string (the XML output is not limited by the character cap, only by the SSMS result size setting, which defaults to 2MB).
So here is an example for this string:

DECLARE @Very_big_string NVARCHAR(MAX) = '';
WITH    c AS ( SELECT   ROW_NUMBER() OVER ( ORDER BY o1.object_id, o2.object_id ) RN
               FROM     sys.objects o1
                        CROSS JOIN sys.objects o2
             )
    SELECT TOP 9999
            @Very_big_string = @Very_big_string +
    REPLICATE(0, 4 - LEN(CAST(rn AS NVARCHAR)))
            + CAST(rn AS NVARCHAR) + CHAR(13)
    FROM    c

SELECT CAST('<root>' + @Very_big_string + '</root>' AS XML)

Here you can preview your string in XML viewer.

The disadvantage of this method is that the standard XML special characters such as <, > and & are shown as entity sequences (e.g. > is changed to &gt;).

How to compare the rows of 2 tables and fetch the differential data

Let's say we would like to compare some tables and move the changes between two databases using T-SQL.

The example below moves the changes between two databases in the same instance; however, the same approach can be extended across instances if you use a linked server or SSIS packages.

We can also write queries to move the DML changes from source to destination and vice versa. Let's look at the example below.

 

--creating a source database
create database source

--create source table
use source

create table Product(
    Pid int primary key,
    Pname varchar(10),
    Pcost float,
    source int,
    location varchar(10))

--create destination database
create database Destination

--create destination table
use Destination

create table Product(
    Pid int primary key,
    Pname varchar(10),
    Pcost float,
    source int,
    location varchar(10))

--Insert data into the source table
use source

insert into product values (1, 'rdbms', 100, 200, 'ind')
insert into product values (2, 'dbm', 20, 100, 'US')
insert into product values (3, 'arp', 30, 250, 'UK')
insert into product values (4, 'mqr', 40, 100, 'ind')
insert into product values (5, 'ttp', 50, 200, 'us')

EXCEPT returns any distinct values from the left query that are not also found in the right query.

--The below query gives us the difference between source and destination
--we can use the EXCEPT keyword on selected columns or on the entire table

select * from source.dbo.product
except
select * from [Destination].dbo.product

--updating the destination table with the changes from source

insert into [Destination].dbo.product
select * from source.dbo.product
except
select * from [Destination].dbo.product

--We see that the destination is populated with all the rows from source

select * from [Destination].dbo.product

--Now let's update a row in the source and see how it works

update source.dbo.product
set pname = 'sql'
where pid = 1

--run the below query

select * from source.dbo.product
except
select * from [Destination].dbo.product

--the result gives us the only row which was changed in source

--loading the differences into a temp table

select * into #temp from source.dbo.product
except
select * from [Destination].dbo.product

--updating the destination with the changes

update [Destination].dbo.product
set [Destination].dbo.product.pname = #temp.pname
from #temp where #temp.pid = [Destination].dbo.product.pid

--let's run the statement to see the difference between these tables

select * from source.dbo.product
except
select * from [Destination].dbo.product

--let's see how the delete works

delete from source.dbo.product where pid = 2

--to see the rows which were deleted at source or inserted at destination only

select * from [Destination].dbo.product
except
select * from source.dbo.product

--based on the application logic we will either insert the row back into the source or delete it from the destination

--let's say we want to delete it from the destination as well

drop table #temp   -- #temp still holds the earlier differences, so drop it before reusing the name
select * into #temp from [Destination].dbo.product
except
select * from source.dbo.product

delete from [Destination].dbo.product where pid in (select pid from #temp)

--Now let's see the difference between the tables

select * from [Destination].dbo.product
except
select * from source.dbo.product

In the above demo, we saw how we can propagate DML changes (insert, delete and update) from source to destination using T-SQL queries.

This approach can also be used when your application requires moving only the changes instead of truncating the entire table and repopulating all the data.