Sunday, January 15, 2012

Tips on how to avoid or resolve deadlocking on your SQL Server

Deadlocking occurs when two user processes each hold a lock on a separate object and each tries to acquire a lock on the object the other process holds. Because the requested locks are incompatible with the locks already held (per the lock compatibility matrix), neither process can proceed. When this happens, SQL Server detects the condition and ends the deadlock by choosing one process as the victim and aborting it, allowing the other process to continue. The victim's transaction is rolled back and an error message (error 1205) is sent to its user. Generally, SQL Server picks as the victim the transaction that requires the least overhead to roll back.
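Since any transaction can be chosen as the victim, a well-behaved application should catch error 1205 and retry. A minimal retry sketch in T-SQL (table names and the transfer logic are illustrative; THROW requires SQL Server 2012 or later):

```sql
-- Retry a transaction that may be chosen as a deadlock victim.
-- Error 1205 is the deadlock-victim error; dbo.Accounts is illustrative.
DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;  -- deadlock victim: try again
        ELSE
            THROW;  -- other error, or retries exhausted: re-raise
    END CATCH;
END;
```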

Identifying deadlocks:
  1. Using SQL Profiler
  2. Using the system_health Extended Events session (note that its ring buffer target holds only recent events)
  3. Using a server-side trace that captures the Deadlock graph event
  4. Using the SQL Server error log with trace flags 1204/1222 enabled (e.g. DBCC TRACEON (1204, -1))
  5. Setting up automated notifications that log deadlock info, e.g. via Service Broker event notifications or Extended Events
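As a sketch of option 2, the following query pulls captured deadlock graphs from the system_health session's event files (assumes SQL Server 2012+, where system_health writes to .xel files in the default log directory):

```sql
-- Read xml_deadlock_report events captured by the built-in
-- system_health Extended Events session (event_file target).
SELECT
    x.value('(event/@timestamp)[1]', 'datetime2') AS deadlock_time,
    x.query('(event/data/value/deadlock)[1]')     AS deadlock_graph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL) AS f
CROSS APPLY (SELECT CAST(f.event_data AS XML)) AS ca(x)
WHERE f.object_name = 'xml_deadlock_report'
ORDER BY deadlock_time DESC;
```

The deadlock_graph XML can be saved as a .xdl file and opened in SQL Server Management Studio for a visual view of the victim and the resources involved.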
Here are some tips on how to avoid/resolve deadlocking on your SQL Server:
  1. Ensure the database design is properly normalized.
  2. Have the application access server objects in the same order each time.
  3. During transactions, don’t allow any user input. Collect it before the transaction begins.
  4. Avoid cursors.
  5. Keep transactions as short as possible. One way to help accomplish this is to reduce the number of round trips between your application and SQL Server by using stored procedures or keeping transactions with a single batch. Another way of reducing the time a transaction takes to complete is to make sure you are not performing the same reads over and over again. If your application does need to read the same data more than once, cache it by storing it in a variable or an array, and then re-reading it from there, not from SQL Server.
  6. Reduce lock time. Try to develop your application so that it grabs locks at the latest possible time, and then releases them at the very earliest time.
  7. If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK hints.
  8. Consider using the NOLOCK hint to prevent locking if the data being read is not modified often, keeping in mind that it permits dirty reads.
  9. If appropriate, use as low of an isolation level as possible for the user connection running the transaction.
  10. Consider using bound connections.
  11. Adding missing indexes to support faster queries
  12. Dropping unnecessary indexes, which can slow down INSERTs, for example
  13. Redesigning indexes to be "thinner", for example, removing columns from composite indexes or making table columns "thinner" (see below)
  14. Adding index hints to queries where appropriate (I generally don't prefer this, but it has its place)
  15. Redesigning tables with "thinner" columns like smalldatetime vs. datetime or smallint vs. int
  16. Modifying the stored procedures to access tables in a similar pattern
  17. Keeping transactions as short and quick as possible: "mean & lean"
  18. Removing unnecessary extra activity from transactions, such as triggers
  19. Removing JOINs to Linked Server (remote) tables if possible
  20. Implementing regular index maintenance; a weekend schedule usually suffices; consider FILLFACTOR = 80 for frequently modified tables (needs careful evaluation)
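For tip 9, row-versioning-based isolation lets readers avoid taking shared locks and thus avoid blocking (and deadlocking with) writers. A sketch, with an illustrative database name; switching READ_COMMITTED_SNAPSHOT requires no other active connections in the database:

```sql
-- Readers use row versions instead of shared locks under
-- read-committed snapshot (database name MyAppDb is illustrative).
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Alternatively, allow full snapshot isolation and opt in per session:
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- reads a row version, no S lock
COMMIT TRANSACTION;
```

Note that versioning shifts the cost to tempdb, so it should be evaluated rather than enabled blindly.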
The list really goes on. The solution will vary from situation to situation.
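As one last illustration, tips 2 and 16 above can be sketched as two procedures that always touch the same tables in the same order, so their lock acquisition sequences can never cross (all names and logic are illustrative):

```sql
-- Both procedures access dbo.Accounts first and dbo.Orders second,
-- so they cannot deadlock with each other on these two tables.
CREATE PROCEDURE dbo.SettleOrder @OrderId INT AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 50
        WHERE AccountId = (SELECT AccountId FROM dbo.Orders WHERE OrderId = @OrderId);
    UPDATE dbo.Orders SET Status = 'Settled' WHERE OrderId = @OrderId;
    COMMIT TRANSACTION;
END;
GO
CREATE PROCEDURE dbo.RefundOrder @OrderId INT AS
BEGIN
    BEGIN TRANSACTION;
    -- Same order as SettleOrder: Accounts first, then Orders.
    UPDATE dbo.Accounts SET Balance = Balance + 50
        WHERE AccountId = (SELECT AccountId FROM dbo.Orders WHERE OrderId = @OrderId);
    UPDATE dbo.Orders SET Status = 'Refunded' WHERE OrderId = @OrderId;
    COMMIT TRANSACTION;
END;
```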