Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 All Forums
 SQL Server 2005 Forums
 Transact-SQL (2005)
Create a trigger that runs every hour?


medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-13 : 16:37:12
Hello,

Can you please refer to a tutorial or an article that shows how to accomplish this?

Also, I need to embed "current time minus one hour" in the query. What would be better to use: DATEADD(hh,-1,GETDATE()), DATEADD(mi,-60,GETDATE()), something else, or is it all the same?
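(For reference, subtracting one hour and subtracting sixty minutes are interchangeable; a quick sketch:)

```sql
-- Both expressions subtract exactly one hour from the current time,
-- so they return the same value:
SELECT DATEADD(hh, -1, GETDATE()) AS one_hour_back_hours,
       DATEADD(mi, -60, GETDATE()) AS one_hour_back_minutes;
-- Pick whichever reads more clearly in your query.
```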

webfred
Master Smack Fu Yak Hacker

8781 Posts

Posted - 2009-05-13 : 16:49:41
A trigger is never fired by time. It is fired by an action such as INSERT, UPDATE, or DELETE.
To do something every hour you can schedule a job in SQL Server Agent.
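A minimal sketch of creating such a job with the msdb stored procedures (the job, database, and procedure names here are illustrative; the schedule runs once an hour):

```sql
USE msdb;
GO
-- Create the job
EXEC dbo.sp_add_job
    @job_name = N'HourlyArchiveJob';

-- Add a T-SQL step that calls your archiving procedure
EXEC dbo.sp_add_jobstep
    @job_name = N'HourlyArchiveJob',
    @step_name = N'Run archive procedure',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',
    @command = N'EXEC dbo.usp_ArchiveLastHour;';

-- Schedule it: daily, repeating every 1 hour
EXEC dbo.sp_add_jobschedule
    @job_name = N'HourlyArchiveJob',
    @name = N'Every hour',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,     -- unit = hours
    @freq_subday_interval = 1; -- every 1 hour

-- Attach the job to the local server
EXEC dbo.sp_add_jobserver
    @job_name = N'HourlyArchiveJob';
```

You can also build the same job through the SQL Server Agent node in Management Studio if you prefer a GUI.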


No, you're never too old to Yak'n'Roll if you're too young to die.

medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-13 : 17:32:14
Thanks, I'll look into that.

medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-13 : 18:51:27
Got it to work as needed, thanks again.

Now, I have an archive table that keeps growing (and growing and growing ... every hour ;-)). As this may overload the system, I would like to create "sub-archive" tables with sequential numeric suffixes (i.e. archive_table_1, archive_table_2, etc.). The logic should be as follows: if the number of records in the main archive table exceeds 500K, check for the latest archive table available (by its numeric suffix), create the next one accordingly, append the data to it, and delete the copied data from the main archive.

Does this logic make sense?

webfred
Master Smack Fu Yak Hacker

8781 Posts

Posted - 2009-05-14 : 04:27:23
What would be the advantage of having lots of tables (i.e. archive_table_1, archive_table_2, etc.) instead of one big table?


No, you're never too old to Yak'n'Roll if you're too young to die.

Transact Charlie
Master Smack Fu Yak Hacker

3451 Posts

Posted - 2009-05-14 : 05:42:48
quote:
Originally posted by medtech26

Got it to work as needed, thanks again.

Now, I have an archive table that keeps growing (and growing and growing ... every hour ;-)). As this may overload the system, I would like to create "sub-archive" tables with sequential numeric suffixes (i.e. archive_table_1, archive_table_2, etc.). The logic should be as follows: if the number of records in the main archive table exceeds 500K, check for the latest archive table available (by its numeric suffix), create the next one accordingly, append the data to it, and delete the copied data from the main archive.

Does this logic make sense?



No it does not.

SQL Server is designed for massive data sets (just make sure they have a good key). Why bother creating a whole bunch of dynamically named tables that you then have to reference with shoehorned dynamic SQL? It is a bad plan and it destroys normalisation.

If you are worried about performance then the top things to consider are:

1) Keys / Indices
2) Archiving old information (deleting or moving to another db / storage table)

If you are worried about write performance into your big table then choose a simple table structure with a strictly increasing primary key (like an IDENTITY column or similar) -- this will give you the least overhead for writes.
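A minimal sketch of such a write-optimised table, assuming the structure described in the thread (the table and column names, and the two 2,500-character columns, are illustrative):

```sql
-- Simple append-only archive table: the clustered primary key is a
-- strictly increasing IDENTITY, so every new row lands at the end of
-- the clustered index -- no page splits, minimal write overhead.
CREATE TABLE dbo.archive_table (
    archive_id  INT IDENTITY(1,1) NOT NULL,
    logged_at   DATETIME       NOT NULL DEFAULT GETDATE(),
    payload_1   NVARCHAR(2500) NULL,  -- wide columns stay unindexed
    payload_2   NVARCHAR(2500) NULL,
    CONSTRAINT PK_archive_table PRIMARY KEY CLUSTERED (archive_id)
);
```

Keep nonclustered indexes on a table like this to the minimum you actually query by; each extra index is paid for on every insert.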


Charlie
===============================================================
Msg 3903, Level 16, State 1, Line 1736
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION

Lamprey
Master Smack Fu Yak Hacker

4614 Posts

Posted - 2009-05-14 : 11:51:04
Yeah, I'd have to agree. Using dynamic SQL to create some complex process to break tables apart when the data sets are not that large might be a waste. However, if there is some performance issue you might want to take a look at Partitioning or Partitioned Views.
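For what a partitioned view looks like, here is a hypothetical sketch with two yearly tables (all names invented). The CHECK constraint on the partitioning column is what lets the optimiser skip tables that can't contain the requested rows:

```sql
-- Member tables: each holds one year's data, enforced by a CHECK
-- constraint on the partitioning column.
CREATE TABLE dbo.archive_2008 (
    archive_id INT NOT NULL PRIMARY KEY,
    logged_at  DATETIME NOT NULL
        CHECK (logged_at >= '20080101' AND logged_at < '20090101')
);
CREATE TABLE dbo.archive_2009 (
    archive_id INT NOT NULL PRIMARY KEY,
    logged_at  DATETIME NOT NULL
        CHECK (logged_at >= '20090101' AND logged_at < '20100101')
);
GO
-- The view presents all members as one logical table.
CREATE VIEW dbo.archive_all
AS
SELECT archive_id, logged_at FROM dbo.archive_2008
UNION ALL
SELECT archive_id, logged_at FROM dbo.archive_2009;
```

A query such as `SELECT * FROM dbo.archive_all WHERE logged_at >= '20090301'` can then touch only the 2009 table. (True table partitioning is also available in SQL Server 2005 Enterprise Edition.)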

medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-14 : 13:17:47
Thanks for enlightening me. The main reason (as you guessed) is performance! Currently, with over 7 million records, it takes forever to add data to that table, and it consumes a lot of system resources (mainly memory, and lots of it). Breaking the data down into smaller tables significantly improves write time as well as query time.

Deleting/migrating the data is not an option as I need this data available. In addition, I do have a primary key as well as several indexes, but also two columns that can't be indexed (too long, 2,500 characters each).

Please advise further.

medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-14 : 13:22:47
quote:
Originally posted by webfred

What would be the advantage of having lots of tables (i.e. archive_table_1, archive_table_2, etc.) instead of one big table?


No, you're never too old to Yak'n'Roll if you're too young to die.



It improves write time to the main archive as well as query time across all the archive tables.

Transact Charlie
Master Smack Fu Yak Hacker

3451 Posts

Posted - 2009-05-15 : 04:31:00
Do you have any downtime / less busy periods?

You could set up your trigger to write to a table with a simple structure (the only index being the primary key, which should be a strictly increasing seed).

Then schedule a job for your quiet time to move the data from this table into your archive table, and afterwards truncate the table the trigger writes to.

That way you wouldn't end up with a mass of little audit tables, and you'd be able to do the insert into the big audit table in one nice step rather than thousands of little inserts during the day.
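The move step described above could be sketched like this (table and column names are illustrative):

```sql
-- Scheduled job step: one set-based insert from the staging table the
-- trigger writes to, then empty it. TABLOCKX blocks concurrent trigger
-- writes for the duration of the move so no rows slip in between the
-- INSERT and the TRUNCATE.
BEGIN TRANSACTION;

INSERT INTO dbo.archive_table (archive_id, logged_at, payload)
SELECT archive_id, logged_at, payload
FROM dbo.trigger_staging WITH (TABLOCKX);

TRUNCATE TABLE dbo.trigger_staging;  -- fast, minimally logged

COMMIT TRANSACTION;
```

TRUNCATE TABLE is allowed inside a transaction and rolls back with it, so the step is all-or-nothing.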




Charlie
===============================================================
Msg 3903, Level 16, State 1, Line 1736
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION

medtech26
Posting Yak Master

169 Posts

Posted - 2009-05-15 : 19:00:38
Hi Charlie,

Thanks for your response.

First, I have to state that obviously we have more and less busy periods, but all in all it's pretty busy most of the time, as we're working with multilingual worldwide sites.

I used to move the data to the archive table twice a day (1 AM and 1 PM), but as the daily data volume grew, it took the database too long and too much memory to process; as a result, it took down the sites depending on it (queries timed out).

Now I've scheduled it to move those records every hour and split the archive into many smaller tables; as of now, everything works fine.

BTW, I forgot to mention something that now seems very important: both tables use the same primary key. It's an auto-incrementing column on the smaller table, but the value is written into the archive as-is (i.e. it's not an auto-incrementing seed on the archive table).

LoztInSpace
Aged Yak Warrior

940 Posts

Posted - 2009-05-15 : 23:45:48
quote:
Originally posted by medtech26

Thanks for enlightening me. The main reason (as you guessed) is performance! Currently, with over 7 million records, it takes forever to add data to that table, and it consumes a lot of system resources (mainly memory, and lots of it). Breaking the data down into smaller tables significantly improves write time as well as query time.

Deleting/migrating the data is not an option as I need this data available. In addition, I do have a primary key as well as several indexes, but also two columns that can't be indexed (too long, 2,500 characters each).

Please advise further.


7 million rows is nothing. Something is up if you are hitting performance issues adding data to a table this small.
Check that you don't have a dumb clustered index on something inappropriate like a text column.
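One way to check is to query the catalog views for the clustered index and its key columns (`dbo.archive_table` here is a placeholder for the actual table name):

```sql
-- Lists the clustered index (if any) on the table together with its
-- key columns, so you can see what the table is physically sorted by.
SELECT i.name AS index_name,
       c.name AS key_column
FROM sys.indexes i
JOIN sys.index_columns ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.archive_table')
  AND i.type = 1;  -- 1 = clustered
```

If the key column that comes back is a wide or non-monotonic column, that alone can explain slow inserts on a table this size.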