
 So how do YOU solve the common sql problem


SQLServerSpotlight
Yak Posting Veteran

57 Posts

Posted - 2005-05-03 : 01:01:32

It often happens that I (we? you?) have loads of T-SQL or stored procedures that return data to various callers, and within these calls are common blocks of code.

I generally have a sub-procedure that:
1. Does some sort of caching such that the data is only regenerated every N seconds (even if it's as low as 1 second, since multiple spids can cause issues). In the 'use cache' branch it just does a select * from cache_results.
2. Wraps the truncate table / repopulate table block in a begin transaction / commit transaction.

So when N spids all want the info at the same time, the '1st' to acquire the cache lock (via the update below) does the re-populating:

-- Try to claim the cache lock: succeeds only if the cache is older than cache_seconds.
update qs_cache_times
set lastupdate = getdate()
where name = 'qs_update_sysfiles'
  and getdate() > dateadd(second, cache_seconds, lastupdate) -- cache_seconds = 5 seconds (say)

if @@rowcount = 0
begin
    -- Another spid refreshed the cache within the last cache_seconds.
    if @Debug > 0
        print 'Cache is already up to date.'
end
else
begin
    if @Debug > 0
        print 'Repopulating the cache table.'

    begin tran
        truncate table cache_table
        insert into cache_table select * from ....
        -- Lots of other repopulating code here.
    commit tran
end

if (@DoSelect <> 0)
    select * from cache_table
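
For context, the control table that the update statement assumes would look roughly like this (a simplified sketch, not the exact DDL; only name, lastupdate and cache_seconds are actually referenced above):

-- Hypothetical control table: one row per cached result set
create table qs_cache_times (
    name          sysname  not null primary key,
    cache_seconds int      not null,              -- how long a refresh stays 'fresh'
    lastupdate    datetime not null default getdate()
)

-- seed a row for the sysfiles cache, refreshed at most every 5 seconds
insert into qs_cache_times (name, cache_seconds, lastupdate)
values ('qs_update_sysfiles', 5, '19000101')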

The following spids just select the data, or do their subsequent T-SQL, 'knowing' that the data is up to date.

Do you need this type of solution?
If yes, how do you all do it?

Jules.



X002548
Not Just a Number

15586 Posts

Posted - 2005-05-03 : 09:11:06
It looks like you're trying to swat a fly with a thermo-nuclear device

Wait, this is the season finale of 24...isn't it...

Seriously though, what are you doing?



Brett

8-)

EDIT: You do know that SQL Server handles its own caching...


mohdowais
Sheikh of Yak Knowledge

1456 Posts

Posted - 2005-05-04 : 08:37:43
Brett, I think SQL Server only caches execution plans and not data. If I understand correctly, what s/he is trying to do here is avoid running a series of complex statements to retrieve the data if it is already present in the cache table.

Spotlight, this is something that can be efficiently implemented at the business or UI layer; I haven't yet seen an implementation of this at the back end. You should investigate indexed views, which provide functionality similar to what you are trying to do here, i.e. they can cache the results of a complex query and automatically update the data whenever it changes. But creating indexed views can sometimes be a pain, and they have quite a list of requirements to be fulfilled before you can create them.
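
For illustration, a minimal indexed view looks something like this (table and column names invented for the example; the view must be schema-bound, aggregate views need COUNT_BIG(*), and the first index on the view must be unique and clustered):

-- Hypothetical base table
create table dbo.Orders (
    OrderID    int   not null primary key,
    CustomerID int   not null,
    Amount     money not null
)
go

-- Schema-bound view that pre-aggregates order totals per customer
create view dbo.vOrderTotals
with schemabinding
as
select CustomerID,
       sum(Amount)  as TotalAmount,
       count_big(*) as OrderCount   -- required for indexed views with GROUP BY
from dbo.Orders
group by CustomerID
go

-- The unique clustered index is what materializes ("caches") the view's data
create unique clustered index IX_vOrderTotals on dbo.vOrderTotals (CustomerID)
go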

OS

X002548
Not Just a Number

15586 Posts

Posted - 2005-05-04 : 09:14:56
quote:
Originally posted by mohdowais

Brett, I think sql server only caches execution plans and not data.


Not true. You can see this yourself. Run a complex query, and determine the length of the transaction.

Then run it again. You'll see that it's much quicker.
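
If you want to see it for yourself on a test box (a rough sketch only; DBCC DROPCLEANBUFFERS empties the buffer cache, so don't run it in production, and the table name is just a placeholder):

-- WARNING: dev/test only - this empties SQL Server's buffer cache
checkpoint
dbcc dropcleanbuffers

set statistics time on
set statistics io on

-- first run: lots of physical reads, long elapsed time
select count(*) from SomeBigTable   -- placeholder table name

-- second run: logical reads only, much faster - the data pages are now cached
select count(*) from SomeBigTable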

I started to implement a bunch of stored procs in DB2, and this time I decided to log all usage of the sprocs...including the length of the transactions...they go from 4 hundredths of a second, to like 4 thousandths...

I should have done this in my SQL Server applications...my only concern with DB2 is the size of the log. There is no automatic growth for tablespaces...

Anyway, I would say the


insert into cache_table select * from ....
-- Lots of other repopulating code here.


lots of repopulating code, is probably more overhead than the perceived benefit would be...especially if the database is properly indexed.

Now, if they want to promote the argument that the schema is very bad, then that might be a reason...but I might be more inclined to use indexed views in that case...

Kinda hard to say without seeing the "repopulating" code and the table structures..

Or what they are actually doing in the first place

Oh, MOO

Anyone else have an opinion on this?



Brett

8-)

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-05-04 : 09:29:50
SqlServerSpotlight, you've done a pretty good job of explaining your solution, but I'm still not clear on what problem you're solving. When I think of "cache" I think "in memory". It sounds like you're preparing a summarized table of complex data to improve query performance, and you don't want to do this in a nightly process because that only makes the data good as of yesterday. In order to provide this data to the application in "real-time" you're updating this table every n seconds, which makes it "real-enough-time". Is that correct?

Be One with the Optimizer
TG

X002548
Not Just a Number

15586 Posts

Posted - 2005-05-04 : 12:09:15
Any data that is "cached" is out of date the moment it's "cached". Unless you're SQL Server, and you know the data hasn't changed...based on stuff I'm gonna have to read Delaney for.

What model would you use to guarantee your "cached" data is current?



Brett

8-)

coolerbob
Aged Yak Warrior

841 Posts

Posted - 2005-05-04 : 13:18:15
If TG has got the story right, then won't an option be to replicate the summary to a summary table? Then it's never out of date.

SQLServerSpotlight
Yak Posting Veteran

57 Posts

Posted - 2005-05-04 : 18:48:52
Hey all,
This is what's happening.
I use this style of cache solution in multiple places.
When I do the 'repopulating', it is a loop around all the databases on the instance.

-- This is the repopulating section from above.
-- Note that I have moved the truncate/insert to after the collecting into a temp table.
-- This is new, and has stopped my problem with 'deadlock victims'; however it means I'm
-- copying the data an extra time.
begin
    CREATE TABLE #qs_dbcc_showfilestats (
        [DBName] [sysname] NOT NULL DEFAULT '',
        [Fileid] [smallint] NOT NULL,
        [FileGroup] [smallint] NOT NULL,
        [TotalExtents] [int] NULL,
        [UsedExtents] [int] NULL,
        [Name] [sysname] NOT NULL,
        [FileName] [char](255) NOT NULL
    )

    declare @DBName sysname
    declare @sql varchar(2000)
    set @sql = ''
    set @DBName = ''

    -- Walk every accessible, online database on the instance.
    while 1 = 1
    begin
        select @DBName = min(name)
        from master.dbo.sysdatabases with (nolock)
        where name > @DBName
          and status & 4096 = 0
          and has_dbaccess(name) <> 0
        if @DBName is null
            break

        -- dbcc showfilestats runs in the context of @DBName and returns one row per file.
        set @sql = 'use ' + quotename(@DBName) + ' dbcc showfilestats '
        insert into #qs_dbcc_showfilestats (Fileid, FileGroup, TotalExtents, UsedExtents, Name, FileName)
        exec(@sql)

        -- Stamp the rows just collected with the database name.
        update #qs_dbcc_showfilestats
        set DBName = @DBName
        where DBName = ''
    end

    begin transaction
        ---------------------------
        -- START OF POPULATION CODE
        ---------------------------
        truncate table qs_dbcc_showfilestats
        insert into qs_dbcc_showfilestats select * from #qs_dbcc_showfilestats
    commit transaction
end


I do this to cache the data from sysobjects, sysfiles, sysindexes, sysfilegroups, sysusers, dbcc loginfo and dbcc opentran in the context of each database. There are other cache collections that do NOT loop through databases, but they are less important.

As the status/availability of databases can change, I am not going to attempt a view across them all (even if it were possible - is it?).

I use these tables to join to, so that I can show object names, index names and file names of large tables, fragmented indexes, which objects are involved in blocking, etc.
The alternative was to loop through every database every time, in every connection.
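
As a rough example of the kind of query the cache table makes cheap (hypothetical, not the product's actual query; an extent is 64 KB, so extents * 64 / 1024 gives MB):

-- Space used per database file, straight from the cache table
select DBName,
       Name                      as LogicalFileName,
       rtrim(FileName)           as PhysicalFileName,
       TotalExtents * 64 / 1024  as TotalMB,
       UsedExtents  * 64 / 1024  as UsedMB
from qs_dbcc_showfilestats
order by TotalExtents desc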

Thanks for your replies so far.
Jules.


TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-05-04 : 23:28:44
Well, it sounds like you're asking about a common database issue, where the challenge is to provide data consumers with a manipulated/summarized collection of data that comes from various sources of constantly changing data. Usually, the main objectives are to provide it as close to real-time as possible and to avoid calculating values based on incomplete "units" of transactions and inconsistent transaction sequencing. No, I don't do it the way you're doing it. But I have the feeling that where my experience is business systems and user transactions, you are dealing with database systems and statistics. Maybe it's not important for you to present all your points of data for a consistent time period or transaction point.

Do you by any chance work for Quest, the company that makes Spotlight for SQL Server? Otherwise I don't have a clue what you're using this "cached" data for.

Be One with the Optimizer
TG

SQLServerSpotlight
Yak Posting Veteran

57 Posts

Posted - 2005-05-05 : 00:27:58
By chance I DO work for Quest Software.
(So if you have a feature list for me, bring it on!)

I agree, it's a common db issue - which is why I tried to ask the question in a general way.
You didn't mention how you solve this type of problem; I'd love to know if your way is applicable.

Jules.

X002548
Not Just a Number

15586 Posts

Posted - 2005-05-05 : 09:41:31
I still have no idea what you're actually trying to accomplish...

And don't tell me you're releasing production 3rd party vendor code with SELECT * in it....Please....

Please tell us what you're trying to accomplish...because it doesn't seem to be data related....is it? It seems object related...sorta...



Brett

8-)

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-05-05 : 10:17:56
Jules,
I haven't worked enough with Spotlight to come up with a feature list :) but I was impressed with it.

As far as how I've solved (or lived with) this issue you're asking about, it's really on a case-by-case basis. Most of the solutions I've been involved with don't really seem to apply to your situation. The closest thing might be something like this, where the advantage is that consumers never wait and are never blocked by processing.

Tables/Views:
Cache_Table_0
Cache_Table_1
MetaLiveTable (LiveTB bit) 1 row, 1 column defines which table is live
Cache_View

SPs: select <colList> from Cache_View

repeating or continuous process:
truncate non-live Table
re-populate non-live table
begin tran
alter view to point to re-populated table
toggle MetaLiveTable
commit or handle errors
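
A rough T-SQL sketch of that swap (my own illustration of the outline above, not tested code; ALTER VIEW goes through EXEC() because it must be the only statement in its batch, and the insert borrows the #qs_dbcc_showfilestats temp table from Jules' post purely as an example source):

-- Assumes Cache_Table_0 / Cache_Table_1 already exist with identical structure,
-- Cache_View already exists, and MetaLiveTable holds a single row with a LiveTB bit.
declare @live bit, @stale sysname

select @live = LiveTB from MetaLiveTable
set @stale = case @live when 0 then 'Cache_Table_1' else 'Cache_Table_0' end

-- 1. Refresh the non-live table while readers keep hitting the live one via Cache_View.
exec('truncate table ' + @stale)
exec('insert into ' + @stale + ' select * from #qs_dbcc_showfilestats')  -- example source

-- 2. Swap: repoint the view and flip the flag in one short transaction.
begin tran
    exec('alter view Cache_View as select * from ' + @stale)
    update MetaLiveTable set LiveTB = 1 - LiveTB
commit tran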



Be One with the Optimizer
TG

coolerbob
Aged Yak Warrior

841 Posts

Posted - 2005-05-05 : 10:49:29
quote:
Originally posted by TG

Jules,


Tables/Views:
Cache_Table_0
Cache_Table_1
MetaLiveTable (LiveTB bit) 1 row, 1 column defines which table is live
Cache_View

SPs: select <colList> from Cache_View

repeating or continuous process:
truncate non-live Table
re-populate non-live table
begin tran
alter view to point to re-populated table
toggle MetaLiveTable
commit or handle errors





How could that EVER work? You would have to do a SELECT * from the source table to "re-populate non-live table", and that would lock users up. Overkill, surely. I would still hazard a guess that merge/transactional replication would be a better bet.

TG
Master Smack Fu Yak Hacker

6065 Posts

Posted - 2005-05-05 : 11:07:51
Coolerbob, he doesn't have a "source table". His source comes from system functions,
dbcc calls, etc from every user database on a server.
There is nothing to act as a replication publisher.

Be One with the Optimizer
TG

coolerbob
Aged Yak Warrior

841 Posts

Posted - 2005-05-05 : 11:34:15
gotcha, oops

SQLServerSpotlight
Yak Posting Veteran

57 Posts

Posted - 2005-05-08 : 20:33:35
Hmm, I'll try this approach to see if I get the gains. The 'MetaLiveTable' sounds like the key.

Thank you TG.

quote:
Originally posted by TG

Tables/Views:
Cache_Table_0
Cache_Table_1
MetaLiveTable (LiveTB bit) 1 row, 1 column defines which table is live
Cache_View

SPs: select <colList> from Cache_View

repeating or continuous process:
truncate non-live Table
re-populate non-live table
begin tran
alter view to point to re-populated table
toggle MetaLiveTable
commit or handle errors

   
