Author |
Topic |
pug2694328
Posting Yak Master
166 Posts |
Posted - 2009-04-14 : 10:56:24
|
We're hitting a table object lock when updating a table that has an AFTER INSERT, UPDATE, DELETE trigger. The update query hangs indefinitely and can't be canceled, nor can the SPID be killed with a KILL <SPID> statement. A SELECT on the table returns all rows up to the row being updated, then hangs until canceled. The lock shows up in Activity Monitor with request mode IX and owner type TRANSACTION. The only way we've been able to remove the lock is to restart the SQL Server service.

The trigger runs a procedure from another database to send a notification via sp_send_dbmail, then runs code that updates tables in a series of client databases to keep them in sync with this table. I can run the trigger code without errors as raw T-SQL and it works as designed. If I comment out the proc call in our trigger logic, the lock doesn't manifest on update.

Any help would be greatly appreciated. Thanks! |
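For reference, here is roughly how the lock and the blocked session can be inspected outside Activity Monitor (a sketch assuming SQL Server 2005 or later; the table name is a placeholder):

-- Locks held or requested against the table (object-level locks only).
SELECT l.request_session_id,
       l.request_mode,        -- e.g. IX
       l.request_status,      -- GRANT vs. WAIT
       l.request_owner_type   -- e.g. TRANSACTION
FROM   sys.dm_tran_locks AS l
WHERE  l.resource_type = 'OBJECT'
       AND l.resource_database_id = DB_ID()
       AND l.resource_associated_entity_id = OBJECT_ID('dbo.YourTable');

-- Blocked requests and the statements they are running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;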
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2009-04-14 : 13:19:16
|
That is a terrible use for a trigger. One possibility would be to use replication to keep the client db tables in sync with this table; that is what it was designed for. At the very least you should only use your trigger to populate a simple processing table, then use a separate process (a service or job), which is not in the same context as the user transaction, to do all your syncing (a minimal sketch of the trigger half of that idea follows below).

Be One with the Optimizer
TG |
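For example, the trigger could do nothing but record what changed; the table and column names here are placeholders, not your actual schema:

-- Hypothetical queue table; names are placeholders.
CREATE TABLE dbo.SyncQueue
(
    QueueID     int IDENTITY(1,1) PRIMARY KEY,
    CommonID    int      NOT NULL,               -- PK of the row that changed
    ChangeType  char(1)  NOT NULL,               -- 'I', 'U', or 'D'
    QueuedAt    datetime NOT NULL DEFAULT GETDATE(),
    ProcessedAt datetime NULL
);
GO

-- The trigger only queues the change; no mail and no cross-database work here.
CREATE TRIGGER trg_CommonTable_Queue
ON dbo.CommonTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.SyncQueue (CommonID, ChangeType)
    SELECT i.CommonID,
           CASE WHEN d.CommonID IS NULL THEN 'I' ELSE 'U' END
    FROM   inserted AS i
    LEFT JOIN deleted AS d ON d.CommonID = i.CommonID
    UNION ALL
    SELECT d.CommonID, 'D'
    FROM   deleted AS d
    WHERE  NOT EXISTS (SELECT 1 FROM inserted AS i WHERE i.CommonID = d.CommonID);
END
GO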
 |
|
pug2694328
Posting Yak Master
166 Posts |
Posted - 2009-04-14 : 14:23:24
|
TG, thanks for the response. I'll look into using replication and post whether your suggestion works. Our client-side tables have an additional tinyint column that isn't present in the common database; as long as replication can accommodate that, it should work.

What best practices should drive the decision to create or not create a trigger? I'm not convinced that the mere existence of a feature is a good enough reason to use it (cursors, for example). What do you mean by 'processing table'? Why create a table rather than just running an update query?

This may get me around the table lock, but it doesn't explain the cause. Does anyone know what might be causing it? |
 |
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2009-04-14 : 15:15:42
|
The cause is that all of that processing (emails, updates on tables in other databases) happens as part of the same transaction as the user action. So even with light concurrency, a simple one-row update by each of a couple of different people can take a long time to complete its transaction, not to mention the possibility of deadlocks. And any failure along the way will roll back the user action.

The "processing table" is a way of isolating the user-action transaction from all the other processing activity. The user-action transaction would include nothing but a simple insert into a new table, which is very fast and has very little chance of failing (see the job sketch below for the other half).

>> What best practices should drive the decision to create/not create a trigger?

Generally speaking, a trigger should NOT be used:
- to launch processes outside the database
- to maintain many rows of data based on one row changing in the table with the trigger
- to perform validation that can be done using constraints
- to keep data in sync, which could be done using replication, log shipping, etc.

Triggers are good for:
- inserting to an audit table
- inserting to a processing table, to run more intense processing OUTSIDE the context of the user transaction

>> Our client side tables have an additional tinyint column that isn't present in the common database

What maintains this column? Can it be populated in the common database? Are clients writing to other columns in their versions of the table(s)?

Be One with the Optimizer
TG |
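To follow up on the queue-table idea, here is a rough sketch of the procedure a SQL Agent job might run every minute or so; everything in it happens outside the user's transaction. The mail profile, recipient, and client sync call are placeholders:

CREATE PROCEDURE dbo.ProcessSyncQueue
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @QueueID int, @CommonID int, @ChangeType char(1);

    SELECT TOP (1) @QueueID = QueueID, @CommonID = CommonID, @ChangeType = ChangeType
    FROM   dbo.SyncQueue
    WHERE  ProcessedAt IS NULL
    ORDER BY QueueID;

    WHILE @QueueID IS NOT NULL
    BEGIN
        -- send the notification (profile and recipient are placeholders)
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'NotificationProfile',
             @recipients   = 'dba@example.com',
             @subject      = 'Common table changed',
             @body         = 'A row was inserted, updated, or deleted.';

        -- sync the client databases here (placeholder for the existing cross-db update code)
        -- EXEC ClientDb1.dbo.SyncRow @CommonID = @CommonID, @ChangeType = @ChangeType;

        UPDATE dbo.SyncQueue SET ProcessedAt = GETDATE() WHERE QueueID = @QueueID;

        SET @QueueID = NULL;
        SELECT TOP (1) @QueueID = QueueID, @CommonID = CommonID, @ChangeType = ChangeType
        FROM   dbo.SyncQueue
        WHERE  ProcessedAt IS NULL
        ORDER BY QueueID;
    END
END
GO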
 |
|
pug2694328
Posting Yak Master
166 Posts |
Posted - 2009-04-14 : 16:05:58
|
TG, fantastic, thanks so much for all the detail! The column is maintained by a GUI tool and allows clients to activate/deactivate common features and business logic. This explains the lock. I'll post my solution when it gels. |
 |
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2009-04-14 : 16:12:10
|
Sounds like a simple solution may be to remove the tinyint from the client tables and put it in its own table (correlated by the PK of the common table). Then you can set up replication (transactional, with push subscriptions to the client tables). That way the data stays in sync without triggers or manual processing.

Be One with the Optimizer
TG |
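A rough sketch of that layout on each client database, assuming the common table's key is an int named CommonID (all names are placeholders):

-- The client-specific flag lives in its own table keyed by the replicated
-- table's PK, so the replicated common table stays identical to the publisher's copy.
CREATE TABLE dbo.CommonTableClientFlag
(
    CommonID    int     NOT NULL PRIMARY KEY,
    FeatureFlag tinyint NOT NULL DEFAULT 0,
    CONSTRAINT FK_ClientFlag_Common
        FOREIGN KEY (CommonID) REFERENCES dbo.CommonTable (CommonID)
);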
 |
|
pug2694328
Posting Yak Master
166 Posts |
Posted - 2009-04-14 : 16:53:12
|
That could work. If I attempt to delete a row in the common database that has a row in one or more client-side tinyint tables, will replication trip an error similar to a foreign key constraint error? |
 |
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2009-04-14 : 17:36:28
|
I've never tried to set that up, and I don't think it works that way. If you set an FK constraint on the client tables, I'm not sure what happens to replication if the delete fails on the client; I don't think that sounds like a good idea, though. Is that a requirement for you, to prevent deletes on the common table if a client table has a tinyint value set? I think a cascade delete should work on the client, though, so if a common-table row is deleted then the tinyint table row would also be deleted (when the replicated delete goes through).

One idea could be to not allow actual deletes and instead use logical deletes, like setting a [deleted] flag or something. Then at least you could raise an alert if a client has a tinyint set for a "deleted" row and take the appropriate action (a sketch follows below).

Be One with the Optimizer
TG |
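A sketch of the logical-delete idea; the column and table names are placeholders and follow the flag table sketched earlier:

-- Mark rows deleted instead of removing them.
ALTER TABLE dbo.CommonTable ADD Deleted bit NOT NULL DEFAULT 0;
GO

-- Alert query a job could run on each client: rows flagged as deleted in the
-- replicated table that still have a client-side feature flag set.
SELECT c.CommonID, f.FeatureFlag
FROM   dbo.CommonTable AS c
JOIN   dbo.CommonTableClientFlag AS f ON f.CommonID = c.CommonID
WHERE  c.Deleted = 1
       AND f.FeatureFlag <> 0;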
 |
|