SQL Server Forums
 Strategies for bulk updates without locking

ferrethouse
Constraint Violating Yak Guru

340 Posts

Posted - 07/09/2013 :  17:19:33
We have many integrations with our application that involve nightly syncs. These syncs often do bulk updates and inserts into our database. These often cause locking until they are complete (sometimes for a few minutes). If I ran the inserts / updates through a stored proc that only performed one update/insert at a time in a distinct transaction with a modest (50ms) delay between records would that solve the locking by allowing other SPIDs access to the table between records? I realize that a 5 minute process may now take 20 minutes but at least the application is usable for the duration.

Thoughts?

Edit: not sure if I should have put this in TSQL forum.
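
The one-row-per-transaction idea could be sketched roughly like this (table, column, and cursor names here are hypothetical placeholders, not from the actual system):

```sql
-- Sketch: apply staged changes one row per transaction, pausing between
-- rows so other SPIDs can acquire locks on the target table.
DECLARE @Id int;

DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id FROM dbo.StagingRows;

OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @Id;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRANSACTION;
        UPDATE t
        SET    t.Payload = s.Payload
        FROM   dbo.TargetTable AS t
        JOIN   dbo.StagingRows AS s ON s.Id = t.Id
        WHERE  s.Id = @Id;
    COMMIT TRANSACTION;           -- locks on TargetTable released here

    WAITFOR DELAY '00:00:00.050'; -- ~50 ms gap for other sessions

    FETCH NEXT FROM row_cursor INTO @Id;
END

CLOSE row_cursor;
DEALLOCATE row_cursor;
```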

Edited by - ferrethouse on 07/09/2013 17:20:31

robvolk
Most Valuable Yak

USA
15668 Posts

Posted - 07/09/2013 :  18:37:45
I'm assuming table partitioning is not an option?

I have done bcp out/truncate table/bcp in for very large tables, but it's not really an online operation. You can do this to a staging table and then swap the two tables by name or schema, we do this now for daily data loads. Aaron Bertrand has a nice write-up on it here:

http://www.sqlperformance.com/2013/04/t-sql-queries/schema-switch-a-roo-part-2

Also read Part 1, linked in that article. You get the same performance as partition switching, and it works in all 2005+ editions; however, it has to operate on the entire table.
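
The pattern in the article boils down to something like the following (schema and table names are placeholders; the article uses its own naming):

```sql
-- Sketch of the schema-switch approach: load a staging copy, then swap
-- it with the live copy in one short metadata-only transaction.
-- Assumes three schemas exist: prep, live, cache.

-- 1. Load the staging copy while live.BigTable keeps serving reads.
TRUNCATE TABLE prep.BigTable;
INSERT INTO prep.BigTable SELECT * FROM dbo.SourceData;

-- 2. Rotate the tables: live -> cache, prep -> live, cache -> prep.
BEGIN TRANSACTION;
    ALTER SCHEMA cache TRANSFER live.BigTable;
    ALTER SCHEMA live  TRANSFER prep.BigTable;
    ALTER SCHEMA prep  TRANSFER cache.BigTable;
COMMIT TRANSACTION;
```

The swap itself only takes schema locks for an instant, which is why it behaves like partition switching.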

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

USA
7020 Posts

Posted - 07/09/2013 :  19:17:59
quote:
Originally posted by ferrethouse

We have many integrations with our application that involve nightly syncs. These syncs often do bulk updates and inserts into our database. These often cause locking until they are complete (sometimes for a few minutes). If I ran the inserts / updates through a stored proc that only performed one update/insert at a time in a distinct transaction with a modest (50ms) delay between records would that solve the locking by allowing other SPIDs access to the table between records? I realize that a 5 minute process may now take 20 minutes but at least the application is usable for the duration.

Thoughts?

Edit: not sure if I should have put this in TSQL forum.



You might use an "in-between" method: load the data into staging tables, then apply it in small batches of inserts/updates, each in a short transaction, to keep the locking time to a minimum. This would probably be much faster than doing a single insert/update at a time.

You can adjust the batch sizes up or down till you get good load speed with minimum application impact. You might start with 5000 rows and work up or down from there.
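
A batched loop along those lines might look like this (table and column names are hypothetical; the batch size is the starting point suggested above):

```sql
-- Sketch: drain a staging table in short per-batch transactions so
-- locks on the target table are only held briefly at a time.
DECLARE @BatchSize int = 5000;
DECLARE @Batch TABLE (Id int PRIMARY KEY, Payload nvarchar(100));

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION;

    -- Pull one batch out of staging and remember it.
    DELETE TOP (@BatchSize) FROM dbo.StagingRows
    OUTPUT deleted.Id, deleted.Payload INTO @Batch;

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION;
        BREAK;   -- staging table is empty; done
    END;

    -- Apply the batch to the target table.
    UPDATE t
    SET    t.Payload = b.Payload
    FROM   dbo.TargetTable AS t
    JOIN   @Batch AS b ON b.Id = t.Id;

    COMMIT TRANSACTION;   -- locks released after each batch

    DELETE FROM @Batch;
END
```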

You might also consider setting your database to read committed snapshot to minimize the number of locks taken.
Using Row Versioning-based Isolation Levels
http://msdn.microsoft.com/en-us/library/ms179599.aspx
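
Turning that on is a one-time database setting (database name is a placeholder; it needs a brief moment with no other active connections to take effect):

```sql
-- Readers at the default READ COMMITTED level will then see the last
-- committed version of a row instead of blocking behind writers.
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
```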





CODO ERGO SUM

SwePeso
Patron Saint of Lost Yaks

Sweden
30242 Posts

Posted - 07/10/2013 :  03:32:43
You can also consider using snapshot isolation. Row versions are kept "off row" in tempdb's version store, so the original data is still accessible to readers while rows are being modified.
Then commit the transaction and, within a few milliseconds, all updates are visible in the original table.
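
Enabling and using it looks roughly like this (database, table, and column names are placeholders):

```sql
-- One-time database setting:
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the session doing the bulk work:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Readers in other sessions still see the pre-transaction
    -- row versions from tempdb while this runs.
    UPDATE dbo.TargetTable SET SyncedAt = SYSDATETIME();
COMMIT TRANSACTION;  -- updates become visible on commit
```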



N 56°04'39.26"
E 12°55'05.63"

ferrethouse
Constraint Violating Yak Guru

340 Posts

Posted - 07/10/2013 :  17:59:31
quote:
Originally posted by SwePeso

You can also consider using snapshot isolation. Row versions are kept "off row" in tempdb's version store, so the original data is still accessible to readers while rows are being modified.
Then commit the transaction and, within a few milliseconds, all updates are visible in the original table.



N 56°04'39.26"
E 12°55'05.63"




We are using snapshot isolation. The problem I'm encountering is write-write locking. Snapshot isolation only solves write-read locking if I'm not mistaken.

ferrethouse
Constraint Violating Yak Guru

340 Posts

Posted - 07/10/2013 :  18:05:36
quote:
Originally posted by robvolk

I'm assuming table partitioning is not an option?

I have done bcp out/truncate table/bcp in for very large tables, but it's not really an online operation. You can do this to a staging table and then swap the two tables by name or schema, we do this now for daily data loads. Aaron Bertrand has a nice write-up on it here:

http://www.sqlperformance.com/2013/04/t-sql-queries/schema-switch-a-roo-part-2

Also read Part 1, linked in that article. You get the same performance as partition switching, and it works in all 2005+ editions; however, it has to operate on the entire table.



I don't see schema swapping as an option: since writes will be happening concurrently, you would have to reconcile the staging and production tables for the duration of the inserts into staging (5 minutes or whatever).

I realize that my proposed solution still causes locking, but the locking is in small enough chunks that users will still be able to use the application.

Edit: I should also note that many of our tables do have triggers on them so the schema swapping wouldn't work. The triggers are actually the primary reason for the slowness and locking.

Edited by - ferrethouse on 07/10/2013 18:08:08

robvolk
Most Valuable Yak

USA
15668 Posts

Posted - 07/10/2013 :  21:07:47
Can you disable the triggers for these updates? Or rewrite them to detect it's a bulk update and skip whatever processing they do?
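
Disabling for the duration of the load is straightforward (trigger and table names are placeholders):

```sql
-- Turn the trigger off just for the bulk load, then back on.
DISABLE TRIGGER trTargetTable_Update ON dbo.TargetTable;
-- ... run the bulk load here ...
ENABLE TRIGGER trTargetTable_Update ON dbo.TargetTable;

-- Alternative sketch: keep the trigger enabled but have it bail out
-- when the loading session sets a flag, e.g.
--   load session:  SET CONTEXT_INFO 0x01;
--   trigger body:  IF CONTEXT_INFO() = 0x01 RETURN;
```

Note that DISABLE TRIGGER affects all sessions, not just the loader, so the flag-based approach is safer if other writers must keep firing the trigger.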

You can create triggers on schema swap tables. The trick is to use only one-part names inside the trigger body. When you schema swap, those references will still be valid, because the trigger's schema is tied to the table's.
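
A sketch of what that looks like (all object names here are hypothetical):

```sql
-- Trigger on a schema-swap table: one-part names only inside the body,
-- so references stay valid after ALTER SCHEMA ... TRANSFER moves the
-- table (the trigger travels with it).
CREATE TRIGGER trBigTable_Audit ON live.BigTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- "AuditLog", not "live.AuditLog": the one-part name is resolved
    -- at run time, so it keeps working after the swap.
    INSERT INTO AuditLog (Id, ChangedAt)
    SELECT Id, SYSDATETIME() FROM inserted;
END
```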
SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC