
 should I use an application lock here?


jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2009-07-10 : 03:14:47
hi folks,

I have this database that accumulates rows at a rate of about 25k rows/minute. Let's call this table the FRESH table. As the rows accumulate in FRESH, they need to get processed outside of SQL Server, by a service that does some CPU-intensive math on them. But a single instance of the service on one server can't keep up (the math is too hairy), so we run multiple instances of this MATH service, each one calling the same proc to fetch rows from FRESH to churn on. Each time a MATH instance calls the proc, it fetches 1k rows.

Now, we don't want to pass the same row to more than one MATH instance as that would be duplication of effort. Here's what we first came up with to keep that from happening (this is in the proc that MATH calls):


declare @temp table (col1 bigint primary key, col2 float, col3 float, col4 int)

begin tran

-- save off the rows we'll eventually return to caller
insert @temp (col1,col2,col3,col4)
select top 1000 col1,col2,col3,col4 from FRESH

-- now delete the rows we'll return from FRESH
-- so other callers don't get them.
delete f
from FRESH f
join @temp t on t.col1=f.col1

commit tran

-- do some stuff with @temp, omitted for brevity

-- now return rows to caller
select col1,col2,col3,col4 from @temp



Now, the only problem with this is, it doesn't work. That is, the same row in FRESH ends up getting passed to different callers, resulting in duplication of effort. The transaction does not block other callers from reading the same rows from FRESH into @temp: under the default READ COMMITTED isolation level, the shared locks taken by the SELECT are released as soon as each row is read, so two callers can copy the same rows into their own @temp tables before either one gets around to the DELETE.

One solution I thought of was to use an application lock, by using sp_getapplock and sp_releaseapplock. Basically just acquire an exclusive applock at the start of the transaction above, and release it at the end. However that has the effect of blocking other callers from reading entirely until the current caller finishes up with the delete statement above.
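The applock version would look something like this (just a sketch; the resource name is arbitrary):


declare @temp table (col1 bigint primary key, col2 float, col3 float, col4 int)

begin tran

-- serialize callers: only one fetch can run at a time
exec sp_getapplock @Resource = 'FRESH_fetch',
    @LockMode = 'Exclusive', @LockOwner = 'Transaction'

insert @temp (col1,col2,col3,col4)
select top 1000 col1,col2,col3,col4 from FRESH

delete f
from FRESH f
join @temp t on t.col1=f.col1

-- a transaction-owned applock is released automatically at commit
commit tran
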

It would be nicer if I didn't have to block other callers entirely, but only keep them from taking the same rows as the current caller. I'd prefer all pigs eating at once from the same trough, rather than lining up. On the other hand, using sp_getapplock works and completely prevents duplication of effort, which is the primary goal here.

Anyone know of a better way, that would allow all pigs at the trough at once without giving them the same bits to chew on?

thanks!



elsasoft.org

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-07-10 : 03:29:50
You could use the OUTPUT clause.
See http://weblogs.sqlteam.com/peterl/archive/2007/10/03/New-OUTPUT-operator.aspx

The deleted records are output as a select!
And you also have the TOP option for DELETE in SQL Server 2005.
DELETE TOP (1000) f
OUTPUT deleted.Col1,
       deleted.Col2,
       deleted.Col3,
       deleted.Col4
FROM Fresh AS f


N 56°04'39.26"
E 12°55'05.63"

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2009-07-10 : 03:42:23
Thanks! I think that will do it!

and if I still need the results in a temp table for processing before returning to caller, seems like something like this will work:


declare @temp table (col1 bigint primary key, col2 float, col3 float, col4 int)

DELETE TOP (1000) f
OUTPUT deleted.Col1, deleted.Col2, deleted.Col3, deleted.Col4
INTO @temp
FROM Fresh AS f


-- do some stuff with @temp, omitted for brevity

-- now return rows to caller
select col1,col2,col3,col4 from @temp
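
and if blocking between callers ever becomes a problem, one variation worth trying (untested) is the READPAST hint, which lets a caller skip rows another caller currently has locked instead of waiting on them:


declare @temp table (col1 bigint primary key, col2 float, col3 float, col4 int)

-- READPAST skips rows locked by other callers rather than blocking on them,
-- so concurrent callers each grab a different batch
delete top (1000) f
output deleted.col1, deleted.col2, deleted.col3, deleted.col4
into @temp
from FRESH f with (readpast)
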



thanks man!


elsasoft.org

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-07-10 : 03:57:49
You're welcome.



N 56°04'39.26"
E 12°55'05.63"

khtan
In (Som, Ni, Yak)

17689 Posts

Posted - 2009-07-10 : 04:01:58
that's a neat trick.


KH
[spoiler]Time is always against us[/spoiler]


SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-07-10 : 04:03:57
You know what's neater? Composable DML!
In SQL Server 2008 you can have a DELETE statement (or UPDATE statement) as a derived table for further processing!
See http://weblogs.sqlteam.com/peterl/archive/2009/04/08/Composable-DML.aspx
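
For example (2008 only; a sketch assuming a hypothetical ARCHIVE table as the target):


insert into ARCHIVE (col1, col2, col3, col4)
select col1, col2, col3, col4
from
(
    -- the DELETE's OUTPUT becomes a derived table the INSERT can consume
    delete top (1000)
    from FRESH
    output deleted.col1, deleted.col2, deleted.col3, deleted.col4
) as d
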


N 56°04'39.26"
E 12°55'05.63"

khtan
In (Som, Ni, Yak)

17689 Posts

Posted - 2009-07-10 : 04:07:31
quote:
Originally posted by Peso

You know what's neater? Composable DML!
In SQL Server 2008 you can have a DELETE statement (or UPDATE statement) as a derived table for further processing!
See http://weblogs.sqlteam.com/peterl/archive/2009/04/08/Composable-DML.aspx


N 56°04'39.26"
E 12°55'05.63"




Looks interesting. Got to find time to play with SQL 2008.

Thanks


KH
[spoiler]Time is always against us[/spoiler]


jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2009-07-10 : 10:53:41
I've tried the delete/output thing in the real system and it works great. thanks again!

now time to get back to vacation, right?


elsasoft.org

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-07-10 : 12:09:29
Sure!
Thank you for the feedback.

BTW, do you experience much locking or blocking with my suggestion?


N 56°04'39.26"
E 12°55'05.63"

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2009-07-10 : 12:37:15
so far no blocking problems. of course there may be blocking, but that's desired here.

the blocking is necessary in any case to prevent the same row from being passed to multiple callers. Even though FRESH grows at 25k rows per minute, it's usually close to empty because the MATH servers call this proc very frequently - basically whenever their hopper is not full, they call the proc to get more rows.



elsasoft.org

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2009-07-10 : 12:51:03
It's satisfying to hear the code works as intended.



N 56°04'39.26"
E 12°55'05.63"

Lamprey
Master Smack Fu Yak Hacker

4614 Posts

Posted - 2009-07-10 : 15:45:42
How important is the data?

My question relates to what happens if you delete those 1000 rows and the math service craps out. Do you need to record which ones were successful? Would there be any benefit to flagging those rows as In Process (or something) and then having the service delete them when it is done?
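Something along those lines might look like this (a sketch, assuming hypothetical status and claimed_at columns added to FRESH):


-- claim a batch by flagging it instead of deleting it outright;
-- the MATH service deletes the rows only after it finishes
update top (1000) f
set status = 'InProcess', claimed_at = getdate()
output inserted.col1, inserted.col2, inserted.col3, inserted.col4
from FRESH f
where status = 'Fresh'
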

Just curious.

jezemine
Master Smack Fu Yak Hacker

2886 Posts

Posted - 2009-07-10 : 16:08:32
quote:
Originally posted by Lamprey

How important is the data?

My question relates to what happens if you delete those 1000 rows and the math service craps out. Do you need to record which ones were successful? Would there be any benefit to flagging those rows as In Process (or something) and then having the service delete them when it is done?

Just curious.



yes, we do record that. the important bits of the rows get copied to a PROCESSING table i didn't tell you about.

It's done in this step:

-- do some stuff with @temp, omitted for brevity





elsasoft.org

Lamprey
Master Smack Fu Yak Hacker

4614 Posts

Posted - 2009-07-10 : 17:49:34
Right on. Just thought I'd check. :)
   
