Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Help me optimize a 5-min time slice query


Tomji
Starting Member

19 Posts

Posted - 2013-05-16 : 14:49:28
The query works fine; see it here:
[url]http://sqlfiddle.com/#!6/395aa/4[/url]

The problem is that I am dealing with a source table that has around 5 million rows (around 5 rows per computer per day).
My second query, which creates #SourceTableStates, blows this table up from 5 million to ~288 million rows, because I slice the day into 5-minute pieces (288 per day) instead of keeping the ~5 rows that each contain a start and end time.
The actual end result table contains around 1 million rows.

So I really need a way to avoid saving the intermediate 288-million-row table, but I am unsure how to do this with minimal extra processing.
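One way to avoid materializing the intermediate table (a sketch only; the names dbo.Events, dbo.Numbers, ComputerId, StartTime, EndTime, and State are assumptions, not the poster's actual schema) is to expand the 288 slots inside a correlated subquery and concatenate them with FOR XML PATH(''), so the per-slot rows are built and discarded per computer/day instead of being written out as a table:

```sql
-- Sketch, not the poster's actual query. Assumes dbo.Events(ComputerId, StartTime,
-- EndTime, State) and a helper table dbo.Numbers(N) holding 0..287, one row per
-- 5-minute slot of the day.
SELECT d.ComputerId,
       d.EventDate,
       (SELECT CAST(s.State AS VARCHAR(2))
          FROM dbo.Numbers AS n
          JOIN dbo.Events  AS s
            ON s.ComputerId = d.ComputerId
           AND s.StartTime <= DATEADD(MINUTE, 5 * n.N, CAST(d.EventDate AS DATETIME))
           AND s.EndTime   >  DATEADD(MINUTE, 5 * n.N, CAST(d.EventDate AS DATETIME))
         ORDER BY n.N
           FOR XML PATH('')) AS StateString
  FROM (SELECT DISTINCT ComputerId, CAST(StartTime AS DATE) AS EventDate
          FROM dbo.Events) AS d;
```

The 288-slot expansion only ever exists inside the subquery for one computer/day at a time, so the full ~288-million-row set never has to be stored.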

Lamprey
Master Smack Fu Yak Hacker

4614 Posts

Posted - 2013-05-16 : 15:22:50
Can you explain in words what you are trying to accomplish with your query?

Tomji
Starting Member

19 Posts

Posted - 2013-05-16 : 15:48:54
Well, I am reading from a table that stores data in start time > end time format for different events, and I want to aggregate them daily into a simple string, where each character in the string represents the event that took place during that time slot.
The idea is that afterwards I can very quickly query this table and, for instance, create a timeline for that machine.

E.g. the source contains:
Was in state 5 from midnight to noon, and in state 3 from noon to midnight.
I now want a row that says 555555555555333333333333
In this example the time slice is 1 hour instead of 5 minutes.
This will be more efficient permanent storage because some machines literally have 20-30 such events during a day.
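For what it's worth, once a day is stored that way, reading the state back at a given time of day is a single SUBSTRING call (the table and variable names here are invented for illustration):

```sql
-- Assumes a result table dbo.DailyStates(ComputerId, EventDate, StateString)
-- holding one 288-character string per computer per day (5-minute slices).
DECLARE @ComputerId INT  = 42,
        @EventDate  DATE = '2013-05-16',
        @TimeOfDay  TIME = '14:35';

-- Slot = minutes since midnight / 5, plus 1 because SUBSTRING is 1-based.
SELECT SUBSTRING(StateString,
                 DATEDIFF(MINUTE, CAST('00:00' AS TIME), @TimeOfDay) / 5 + 1,
                 1) AS StateAtTime
  FROM dbo.DailyStates
 WHERE ComputerId = @ComputerId
   AND EventDate  = @EventDate;
```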

Lamprey
Master Smack Fu Yak Hacker

4614 Posts

Posted - 2013-05-16 : 15:54:51
quote:
Originally posted by Tomji

This will be more efficient permanent storage because some machines literally have 20-30 such events during a day.

I'm not sure if I'll have any time to look into this today, but I completely disagree with your efficiency statement. Storing your data that way is going to cause nothing but trouble.

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2013-05-18 : 01:21:20
quote:
Originally posted by Lamprey

I completely disagree with your efficiency statement. Storing your data that way is going to cause nothing but trouble.



Agree with ya 100% Lamprey.

ddamico
Yak Posting Veteran

76 Posts

Posted - 2013-05-20 : 15:45:12
Without actually writing any SQL, I will offer two suggestions.

1) A single column of datatype varchar(X),
where X = the number of data points you need to collect, and each position within the field represents the value of that data point.
At each collection time you do some math to determine which position in the string needs to be updated, and set that position equal to your event.

2) If you had fewer events and fewer data points to collect, a consideration might be to use a bit field and use bitwise operations in your selection query. Though with so many data points to collect, a bit field probably couldn't hold the large value that would be necessary.
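Suggestion 1) can be done in T-SQL with STUFF, which overwrites characters in place (again a sketch; the table and variable names are made up, not from the thread):

```sql
-- Assumes dbo.DailyStates(ComputerId, EventDate, StateString) and one new reading.
DECLARE @ComputerId INT      = 42,
        @EventTime  DATETIME = '2013-05-16T14:35:00',
        @StateChar  CHAR(1)  = '5';

-- 1-based position of the 5-minute slot within the day's string.
DECLARE @Pos INT =
    DATEDIFF(MINUTE, CAST(CAST(@EventTime AS DATE) AS DATETIME), @EventTime) / 5 + 1;

UPDATE dbo.DailyStates
   SET StateString = STUFF(StateString, @Pos, 1, @StateChar)
 WHERE ComputerId = @ComputerId
   AND EventDate  = CAST(@EventTime AS DATE);
```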
   
