rubs_65
Posting Yak Master
144 Posts

Posted - 2006-05-16 : 11:05:43
Hi,

We were encountering table locks in the application that hinder concurrency, and we found that an insert into a table was causing the table lock. I can easily reproduce this from Query Analyzer. The statement looks like the following:

Insert into db1..tab1 select * from db2..tab2

Table tab1 contains about 12M rows and tab2 contains 5000 rows. Here are the stats and observations:

- There is only one index on tab1: a unique clustered index on a sequence-generated column, so we don't expect many page splits, since all new data goes at the end of the table.
- There is no memory pressure on SQL Server, since this is the only user connected.
- If we insert only 2999 rows, SQL Server takes row locks plus IX locks on the page and table, but when we try to insert 3000 or more rows, SQL Server escalates the row locks to a table X lock. So some internal SQL Server algorithm causes it to escalate to a table lock.
- On SQL Server 2005 we are not able to reproduce the issue; it takes only row locks even for 5000 rows, so it seems the issue is fixed in the new release.
- The >3000-row figure is not the same for every table: on some tables SQL Server escalates to a table lock at >2000 rows, so some other internal algorithm is at work.

We are just wondering if someone else has seen the same kind of issue and what the solution was.

This is from Books Online:

"When a transaction requests rows from a table, SQL Server automatically acquires locks on those rows affected and places higher-level intent locks on the pages and table, or index, which contain those rows. When the number of locks held by the transaction exceeds its threshold, SQL Server attempts to change the intent lock on the table to a stronger lock (for example, an intent exclusive (IX) would change to an exclusive (X) lock). After acquiring the stronger lock, all page and row level locks held by the transaction on the table are released, reducing lock overhead."

It is very difficult to believe that the threshold is just 5000 row locks for this type of insert, since we have seen SQL Server take millions of row locks and work fine.

Thanks
--Harvinder
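For anyone who wants to reproduce this, a minimal sketch of the repro (using the table names from the post; the exact escalation point will vary with server version and lock memory, so TOP 2999 is just the row count observed above):

```sql
-- Run from Query Analyzer; hold the transaction open so the locks stay visible
BEGIN TRAN

-- Insert a row count just under the observed escalation point
INSERT INTO db1..tab1
SELECT TOP 2999 * FROM db2..tab2

-- Inspect the locks held by this session: row (RID/KEY) locks plus
-- intent (IX) locks are expected below the threshold, a table X lock above it.
-- On SQL Server 2005, sys.dm_tran_locks gives the same information.
EXEC sp_lock @@SPID

ROLLBACK TRAN
```

Repeating the same test without the TOP clause (all 5000 rows) is what shows the escalation to a single table-level X lock.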
mcrowley
Aged Yak Warrior
771 Posts

Posted - 2006-05-16 : 11:41:27
Just for the sake of argument, can you run DBCC SHOW_STATISTICS (tab1, clustered index name) in db1? Look at the number of rows it reports. If that is significantly off, maybe you need to update the statistics, but this is somewhat of a guess. If I remember rightly, the auto-update of statistics waits for around 30% of the table to be updated before it triggers.
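The check above would look something like this ("IX_tab1" is a placeholder; substitute the actual name of the clustered index on tab1):

```sql
USE db1

-- Row count SQL Server believes it has (Rows column of the first result set)
DBCC SHOW_STATISTICS ('tab1', 'IX_tab1')

-- Actual row count, for comparison
SELECT COUNT(*) FROM tab1

-- If the two diverge badly, refresh the statistics manually
UPDATE STATISTICS tab1 WITH FULLSCAN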
rubs_65
Posting Yak Master
144 Posts

Posted - 2006-05-16 : 12:00:22
I ran UPDATE STATISTICS WITH FULLSCAN and still get the same result.
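If the statistics are already fresh, one common workaround on SQL Server 2000 is to commit the insert in batches small enough to stay under the escalation threshold, so each batch's row locks are released at commit. This is only a sketch: the batch size of 2000 is an assumption based on the ~3000-row observation above, and "id" is a hypothetical key column shared by tab1 and tab2 that lets each pass skip rows already copied.

```sql
DECLARE @batch int
SET @batch = 2000

-- SET ROWCOUNT limits how many rows each INSERT affects (SQL 2000 idiom)
SET ROWCOUNT @batch

WHILE 1 = 1
BEGIN
    INSERT INTO db1..tab1
    SELECT * FROM db2..tab2 t2
    WHERE NOT EXISTS (SELECT 1 FROM db1..tab1 t1 WHERE t1.id = t2.id)

    -- Stop once a pass copies fewer rows than a full batch
    IF @@ROWCOUNT < @batch BREAK
END

SET ROWCOUNT 0
```

Each iteration runs in its own implicit transaction, so the lock count per transaction never approaches the escalation threshold.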