SQL Server Forums
 All Forums
 SQL Server 2012 Forums
 Transact-SQL (2012)
 Help me optimize a 5-minute time-slice query

Tomji
Starting Member

19 Posts

Posted - 05/16/2013 :  14:49:28
The query works fine, see it here:
http://sqlfiddle.com/#!6/395aa/4

The problem is that I am dealing with a source table that has around 5 million rows (around 5 rows per computer per day).
My second query, which creates #SourceTableStates, blows this table up from 5 million to ~288 million rows, because I slice each day into 288 five-minute pieces instead of the ~5 rows that each contain a start and end time.
The actual end-result table then contains only around 1 million rows.

So I really need a way to avoid saving the intermediate 288-million-row table, but I am unsure how to do this with minimal extra processing.
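For illustration (sketched in Python rather than T-SQL; the function name `slice_index` and the sample timestamps are my own, not from the query), the 5-minute slice position of any timestamp is plain integer arithmetic, so each source row maps to a compact (start slice, end slice) range rather than to one row per slice:

```python
from datetime import datetime

SLICE_MINUTES = 5
SLICES_PER_DAY = 24 * 60 // SLICE_MINUTES  # 288 slices per day

def slice_index(ts: datetime) -> int:
    """Map a time of day to its 5-minute slice number (0..287)."""
    return (ts.hour * 60 + ts.minute) // SLICE_MINUTES

# An event from 14:49 to 15:22 covers slices 177..184 as one range,
# not as eight separate rows.
start = slice_index(datetime(2013, 5, 16, 14, 49))
end = slice_index(datetime(2013, 5, 16, 15, 22))
print(start, end)  # 177 184
```

The same arithmetic can be done inline in the aggregation step, which is what makes it possible to skip materializing the per-slice intermediate table.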

Edited by - Tomji on 05/17/2013 22:06:03

Lamprey
Flowing Fount of Yak Knowledge

4600 Posts

Posted - 05/16/2013 :  15:22:50
Can you explain in words what you are trying to accomplish with your query?

Tomji
Starting Member

19 Posts

Posted - 05/16/2013 :  15:48:54
Well, I am reading from a table that stores data in start time > end time format for different events, and I want to aggregate it daily into a simple string, where each character in the string represents the event that took place during that time slice.
The idea is that afterwards I can query this table very quickly and, for instance, create a timeline for that machine.

E.g. the source says a machine was in state 5 from midnight to noon and in state 3 from noon to midnight.
I now want a row that says 555555555555333333333333
(In this example the time slice is 1 hour instead of 5 minutes.)
This will be more efficient permanent storage, because some machines literally have 20-30 such events during a day.
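The transformation described above can be sketched in a few lines (Python rather than T-SQL, for clarity; the function name `day_string` and the '0' filler for slices with no data are my own assumptions):

```python
def day_string(events, slices_per_day=24):
    """Build one day's state string from (start_slice, end_slice, state) rows.

    events: list of (start_slice, end_slice_exclusive, state_char) tuples.
    Slices not covered by any event are left as '0' (assumed filler).
    """
    chars = ['0'] * slices_per_day
    for start, end, state in events:
        for i in range(start, end):
            chars[i] = state
    return ''.join(chars)

# The 1-hour-slice example from the post: state 5 from midnight to noon,
# state 3 from noon to midnight.
print(day_string([(0, 12, '5'), (12, 24, '3')]))
# 555555555555333333333333
```

With `slices_per_day=288` the same logic yields the 5-minute version; note that nothing here requires expanding the intervals into one row per slice first.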

Lamprey
Flowing Fount of Yak Knowledge

4600 Posts

Posted - 05/16/2013 :  15:54:51
quote:
Originally posted by Tomji

This will be more efficient permanent storage because some machines literally have 20-30 such events during a day.

I'm not sure if I'll have any time to look into this today, but I completely disagree with your efficiency statement. Storing your data that way is going to cause nothing but trouble.

russell
Pyro-ma-ni-yak

USA
5072 Posts

Posted - 05/18/2013 :  01:21:20
quote:
Originally posted by Lamprey

I completely disagree with your efficiency statement. Storing your data that way is going to cause nothing but trouble.



Agree with ya 100% Lamprey.

ddamico
Yak Posting Veteran

76 Posts

Posted - 05/20/2013 :  15:45:12
Without actually writing any SQL, I will offer two suggestions.

1) A single column of datatype varchar(X),
where X is the number of data points you need to collect and each position within the field represents the value of one data point.
At each collection time you do some arithmetic to determine which position in the string needs to be updated, and set that position equal to your event.

2) If you had fewer events and fewer data points to collect, a consideration might be to use a bit field and bitwise operations in your selection query. Though with so many data points to collect, a bit field probably couldn't hold the large value that would be necessary.
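Suggestion 2's capacity concern can be checked with a quick sketch (Python, since its integers are arbitrary-precision; the 4-bits-per-state assumption and the `pack`/`unpack` names are mine, not from the post). With 288 five-minute slots per day and, say, up to 16 distinct states, the packed value needs 1152 bits, far beyond what a 64-bit BIGINT can hold:

```python
SLOTS = 288          # 5-minute slices per day
BITS_PER_STATE = 4   # room for up to 16 distinct states (assumption)

def pack(states):
    """Pack one small integer state per slot into a single integer."""
    value = 0
    for i, s in enumerate(states):
        value |= (s & 0xF) << (i * BITS_PER_STATE)
    return value

def unpack(value, slot):
    """Read back the state stored in one slot."""
    return (value >> (slot * BITS_PER_STATE)) & 0xF

packed = pack([5] * 144 + [3] * 144)
print(unpack(packed, 0), unpack(packed, 200))  # 5 3
print(SLOTS * BITS_PER_STATE)                  # 1152 bits needed
```

So the bitwise route only becomes practical if the slot count or the number of distinct states drops sharply, which matches ddamico's caveat.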
SQL Server Forums © 2000-2009 SQLTeam Publishing, LLC. Powered By: Snitz Forums 2000