
 Archiving / Saving Tables Monthly


MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-11 : 14:11:54
Okay, not sure if I'm posting this in the correct forum; if I'm not, I'm sorry, please direct me to the proper location.

I am new to writing SQL statements, but have a general understanding.
What I am looking to do is this: can I archive my tables monthly? That is, save them to a file with the month/year in the file name, then delete all data in the original table so that each month I start with a new, empty table? This should take place automatically on the last day of each month, shortly before midnight.

Again, I'm new, but hopefully someone can help.

Mike

richardwaugh
Starting Member

36 Posts

Posted - 2014-08-11 : 15:08:59
I'm sure someone will be able to provide a much better answer than this, but it might be worth a try (I'm somewhat new to this stuff as well). You could do it with SQL Server Agent: first create a scheduled backup to occur every month at the specified date and time, and then create a scheduled job that automatically runs a stored procedure you have written (the stored procedure would basically be something like DELETE FROM YourTable, or better, TRUNCATE TABLE YourTable). Not sure if this helps, but hopefully it points you in the right direction.
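For what it's worth, here is a rough sketch of what such a stored procedure might look like. The table name dbo.MoldReadings and the procedure name are made up for illustration; substitute your own objects and test on a copy first.

CREATE PROCEDURE dbo.usp_ArchiveMoldReadings
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @archive_table sysname, @sql nvarchar(max);

    -- Build a month-stamped table name, e.g. MoldReadings_2014_08
    SET @archive_table = 'MoldReadings_' + CONVERT(char(4), YEAR(GETDATE()))
        + '_' + RIGHT('0' + CONVERT(varchar(2), MONTH(GETDATE())), 2);

    -- Copy everything into the archive table, then empty the live table.
    SET @sql = N'SELECT * INTO dbo.' + QUOTENAME(@archive_table)
             + N' FROM dbo.MoldReadings;'
             + N' TRUNCATE TABLE dbo.MoldReadings;';

    EXEC (@sql);
END

In practice you would want the copy and the truncate inside one transaction (or archive by date range instead) so that rows inserted in between aren't lost, and TRUNCATE only works if no foreign keys reference the table. The procedure could then be run from a SQL Server Agent job scheduled for month end.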

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-11 : 15:17:32
Yes, this does help. Thank you, richardwaugh.
I will need additional help, but I can at least start by reviewing your idea; it makes sense.

Michael F. Speck

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2014-08-11 : 15:45:30
Why do you want to use files as the archiving strategy? Why not move the old data into a separate database instead? For example (see the sketch below):

1. Implement table partitioning, one partition per day.
2. Move the old partition to a staging table.
3. Import the staging table's data into the archive database.
4. Truncate the staging table.
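A minimal sketch of steps 2-4, assuming a hypothetical dbo.MoldReadings table partitioned on a date column, an identically structured dbo.MoldReadings_Staging table on the same filegroup, and an ArchiveDB database (all placeholder names). Note that table partitioning in SQL Server 2008 requires Enterprise Edition.

-- 2. Switch the oldest partition out to the staging table (a metadata-only
--    operation, so it's effectively instant regardless of row count).
ALTER TABLE dbo.MoldReadings
    SWITCH PARTITION 1 TO dbo.MoldReadings_Staging;

-- 3. Copy the staged rows into the archive database.
INSERT INTO ArchiveDB.dbo.MoldReadings
SELECT * FROM dbo.MoldReadings_Staging;

-- 4. Empty the staging table for the next cycle.
TRUNCATE TABLE dbo.MoldReadings_Staging;

The switch requires the staging table to match the partitioned table's columns, indexes and constraints exactly; step 1 (creating the partition function and scheme) is the larger one-time setup piece.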

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-11 : 16:07:58
tkizer,
This makes a lot of sense too.
Here's my real issue: I work in a foundry where I have been tasked with collecting data from the ovens through to the final product (metal recipes, temperatures from molds, etc.). The data at the molds is collected every 20 seconds.
My tables are in SQL Server 2008 and my user forms were built in Microsoft Access, so the whole DB is very user friendly.
My main issue is that my SQL tables are holding a lot of data that my queries have to read, and the queries are getting very slow.
Is there another way to speed up my queries, or would it be best to limit the amount of data in my tables with the archiving described above?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2014-08-11 : 16:43:28
You need to take a step back and determine why the queries are slow. It's unlikely due to volume of data if the tables are designed properly, indexed properly and the code is sound.

I would start by running a trace to identify the culprit queries and then check where the pain is. You likely need to add or modify indexes, perhaps need to update stats more frequently, etc.
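If a full trace is more than you want to set up right away, the plan cache DMVs give a quick first pass at the heaviest queries. This is a generic starting point, not specific to your database:

-- Top 20 cached statements by total logical reads (often the culprits in a read-heavy workload).
SELECT TOP (20)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_worker_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;

Stale statistics can be refreshed with UPDATE STATISTICS on the affected tables (or EXEC sp_updatestats for the whole database).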

Tara Kizer
SQL Server MVP since 2007
http://weblogs.sqlteam.com/tarad/

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-11 : 16:51:19
You're right. This database is roughly 3 months old and already running slowly. I will do as you suggested and see what and where the true issue lies; my guess is the indexes.
Thank you for your help. I will follow up once I have a better understanding of the possible culprits.

Michael F. Speck

ScottPletcher
Aged Yak Warrior

550 Posts

Posted - 2014-08-12 : 12:12:47
To that end, please run this code and post the results from both queries (preferably in a spreadsheet format).


USE [<your_db_name_here>]

SET DEADLOCK_PRIORITY LOW --probably irrelevant, but just in case

DECLARE @list_missing_indexes bit
DECLARE @table_name_pattern sysname

--NOTE: showing missing indexes can take some time; set to 0 if you don't want to wait.
SET @list_missing_indexes = 1
SET @table_name_pattern = '%' --'%'=all tables.
--SET @table_name_pattern = '%'

PRINT 'Started @ ' + CONVERT(varchar(30), GETDATE(), 120)

IF @list_missing_indexes = 1
BEGIN
SELECT
GETDATE() AS capture_date,
DB_NAME(mid.database_id) AS Db_Name,
dps.row_count,
OBJECT_NAME(mid.object_id /*, mid.database_id*/) AS Table_Name,
mid.equality_columns, mid.inequality_columns, mid.included_columns,
user_seeks, user_scans, ca1.max_days_active, unique_compiles,
last_user_seek, last_user_scan, avg_total_user_cost, avg_user_impact,
system_seeks, system_scans, last_system_seek, last_system_scan, avg_total_system_cost, avg_system_impact,
mid.statement, mid.object_id, mid.index_handle
FROM sys.dm_db_missing_index_details mid WITH (NOLOCK)
CROSS APPLY (
SELECT DATEDIFF(DAY, create_date, GETDATE()) AS max_days_active FROM sys.databases WHERE name = 'tempdb'
) AS ca1
LEFT OUTER JOIN sys.dm_db_missing_index_groups mig WITH (NOLOCK) ON
mig.index_handle = mid.index_handle
LEFT OUTER JOIN sys.dm_db_missing_index_group_stats migs WITH (NOLOCK) ON
migs.group_handle = mig.index_group_handle
LEFT OUTER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
dps.object_id = mid.object_id AND
dps.index_id IN (0, 1)
--order by
--DB_NAME, Table_Name, equality_columns
WHERE
1 = 1
AND mid.database_id = DB_ID() --only current db
AND OBJECT_NAME(mid.object_id) LIKE @table_name_pattern
--AND mid.object_id IN (OBJECT_ID('<table_name_1>'), OBJECT_ID('<table_name_2>'))
ORDER BY
--avg_total_user_cost * (user_seeks + user_scans) DESC,
Db_Name, Table_Name, equality_columns, inequality_columns
END --IF

PRINT 'Midpoint @ ' + CONVERT(varchar(30), GETDATE(), 120)
-- list index usage stats (seeks, scans, etc.)
SELECT
ius2.row_num, DB_NAME() AS db_name,
CASE WHEN i.name LIKE ca2.table_name + '%'
THEN '~' + SUBSTRING(i.name, LEN(ca2.table_name) + 1 +
CASE WHEN SUBSTRING(i.name, LEN(ca2.table_name) + 1, 1) = '_' THEN
CASE WHEN SUBSTRING(i.name, LEN(ca2.table_name) + 2, 1) = '_' THEN 2 ELSE 1 END
ELSE 0 END, 200)
ELSE i.name END AS index_name,
CASE WHEN i.is_unique = 0 THEN 'N' ELSE 'Y' END + '.' +
CASE WHEN i.is_primary_key = 0 AND i.is_unique_constraint = 0 THEN 'N' ELSE 'Y' END AS [uniq?],
ca2.table_name,
i.index_id, --ius.user_seeks + ius.user_scans AS total_reads,
dps.row_count,
SUBSTRING(key_cols, 3, 8000) AS key_cols, SUBSTRING(nonkey_cols, 3, 8000) AS nonkey_cols,
ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates,
ius.last_user_seek, ius.last_user_scan, ius.last_user_lookup, ius.last_user_update,
fk.Reference_Count AS fk_ref_count,
DATEDIFF(DAY, CASE WHEN o.create_date > ca1.sql_startup_date THEN o.create_date
ELSE ca1.sql_startup_date END, GETDATE()) AS max_days_active,
FILEGROUP_NAME(i.data_space_id) AS filegroup_name,
ius.system_seeks, ius.system_scans, ius.system_lookups, ius.system_updates,
ius.last_system_seek, ius.last_system_scan, ius.last_system_lookup, ius.last_system_update
FROM sys.indexes i WITH (NOLOCK)
INNER JOIN sys.objects o WITH (NOLOCK) ON
o.object_id = i.object_id
CROSS JOIN (
SELECT create_date AS sql_startup_date FROM sys.databases WHERE name = 'tempdb'
) AS ca1
CROSS APPLY (
SELECT OBJECT_NAME(i.object_id/*, DB_ID()*/) AS table_name
) AS ca2
OUTER APPLY (
SELECT
', ' + COL_NAME(object_id, ic.column_id)
FROM sys.index_columns ic
WHERE
ic.key_ordinal > 0 AND
ic.object_id = i.object_id AND
ic.index_id = i.index_id
ORDER BY
ic.key_ordinal
FOR XML PATH('')
) AS key_cols (key_cols)
OUTER APPLY (
SELECT
', ' + COL_NAME(object_id, ic.column_id)
FROM sys.index_columns ic
WHERE
ic.key_ordinal = 0 AND
ic.object_id = i.object_id AND
ic.index_id = i.index_id
ORDER BY
COL_NAME(object_id, ic.column_id)
FOR XML PATH('')
) AS nonkey_cols (nonkey_cols)
LEFT OUTER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
dps.object_id = i.object_id AND
dps.index_id = i.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius WITH (NOLOCK) ON
ius.database_id = DB_ID() AND
ius.object_id = i.object_id AND
ius.index_id = i.index_id
LEFT OUTER JOIN (
SELECT
database_id, object_id, MAX(user_scans) AS user_scans,
ROW_NUMBER() OVER (ORDER BY MAX(user_scans) DESC) AS row_num --user_scans|user_seeks+user_scans
FROM sys.dm_db_index_usage_stats WITH (NOLOCK)
WHERE
database_id = DB_ID()
--AND index_id > 0
GROUP BY
database_id, object_id
) AS ius2 ON
ius2.database_id = DB_ID() AND
ius2.object_id = i.object_id
LEFT OUTER JOIN (
SELECT
referenced_object_id, COUNT(*) AS Reference_Count
FROM sys.foreign_keys
WHERE
is_disabled = 0
GROUP BY
referenced_object_id
) AS fk ON
fk.referenced_object_id = i.object_id
WHERE
i.object_id > 100 AND
i.is_hypothetical = 0 AND
i.type IN (0, 1, 2) AND
o.type NOT IN ( 'IF', 'IT', 'TF', 'TT' ) AND
(
o.name LIKE @table_name_pattern AND
o.name NOT LIKE 'dtprop%' AND
o.name NOT LIKE 'filestream[_]%' AND
o.name NOT LIKE 'MSpeer%' AND
o.name NOT LIKE 'MSpub%' AND
--o.name NOT LIKE 'queue[_]%' AND
o.name NOT LIKE 'sys%'
)
--AND OBJECT_NAME(i.object_id /*, DB_ID()*/) IN ('tbl1', 'tbl2', 'tbl3')
ORDER BY
--row_count DESC,
--ius.user_scans DESC,
--ius2.row_num, --user_scans&|user_seeks
db_name, table_name,
-- list clustered index first, if any, then other index(es)
CASE WHEN i.index_id IN (0, 1) THEN 1 ELSE 2 END,
key_cols

PRINT 'Ended @ ' + CONVERT(varchar(30), GETDATE(), 120)

SET DEADLOCK_PRIORITY NORMAL


MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-13 : 07:29:01
Sorry ScottPletcher,
I was playing maintenance man pretty much all day yesterday, from 5:00am to 9:30pm, one of those days! So I'm just now getting back to this project.
I haven't had much time to review; when I get a moment I will continue reviewing and run your code.
Thank you for your help,
Mike

Michael F. Speck

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-13 : 08:24:01
ScottPletcher,
Okay, I ran your requested code and have the results.
The question is, I cannot figure out how to attach the spreadsheet.
I will continue to review and try to figure out how to reply with the attachment.
Mike

ScottPletcher
Aged Yak Warrior

550 Posts

Posted - 2014-08-13 : 11:46:11
For now, post the results any way you can. Hopefully that SQL instance has been up continuously long enough to provide representative index usage.
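If you want to check, tempdb is rebuilt at every startup, so its create_date is a handy proxy for the last restart (the same trick the script above uses):

-- Approximate instance uptime; index usage stats reset at every restart.
SELECT create_date AS approx_sql_startup_time,
       DATEDIFF(DAY, create_date, GETDATE()) AS days_up
FROM sys.databases
WHERE name = 'tempdb';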

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-13 : 13:09:25
That's the thing: I've tried pasting the information, but the data ends up all over the place and is very difficult to decipher, and I can't figure out how to attach it as an image.
I'm still trying, though . . .

MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-14 : 09:04:18
Well, I hate to post these results this way, but I cannot figure out any other way to do it on this forum.

Mike


row_num db_name index_name uniq? table_name index_id row_count key_cols nonkey_cols user_seeks user_scans user_lookups user_updates last_user_seek last_user_scan last_user_lookup last_user_update fk_ref_count max_days_active filegroup_name system_seeks system_scans system_lookups system_updates last_system_seek last_system_scan last_system_lookup last_system_update
25 ACODATA PK_AgeRound Y.Y AgeRound 1 54919 ageRoundId NULL 0 1 0 2705 NULL 55:44.1 NULL 50:20.5 NULL 15 PRIMARY 0 0 0 0 NULL NULL NULL NULL
NULL ACODATA NULL N.N automation1 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N automation2 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N automation4 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N CEC_AGE_OVEN 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
23 ACODATA PK_Cell_1_TT_Device_1 Y.Y Cell_1_TT_Device_1 1 26220 Id_1 NULL 0 51 0 6485 NULL 22:06.8 NULL 53:23.3 NULL 2 PRIMARY 0 21 0 0 NULL 16:54.9 NULL NULL
2 ACODATA PK_Cell_1_TT_Device_2 Y.Y Cell_1_TT_Device_2 1 43404 Id_2 NULL 174 1028 0 43404 48:54.6 22:06.8 NULL 53:25.4 NULL 15 PRIMARY 0 121 0 0 NULL 23:06.0 NULL NULL
3 ACODATA PK_Cell_1_TT_Device_3 Y.Y Cell_1_TT_Device_3 1 43091 Id_3 NULL 138 1022 0 43093 34:21.3 22:06.8 NULL 53:25.1 NULL 15 PRIMARY 0 121 0 0 NULL 23:08.1 NULL NULL
4 ACODATA PK_Cell_1_TT_Device_4 Y.Y Cell_1_TT_Device_4 1 43196 Id_4 NULL 86 1022 0 43196 38:29.2 22:06.8 NULL 59:00.1 NULL 15 PRIMARY 0 121 0 0 NULL 23:10.2 NULL NULL
1 ACODATA PK_Cell_1_TT_Device_5 Y.Y Cell_1_TT_Device_5 1 28311 Id_5 NULL 931 1164 0 28311 49:43.0 22:06.8 NULL 25:40.6 NULL 15 PRIMARY 0 110 0 0 NULL 23:26.4 NULL NULL
5 ACODATA PK_Cell_1_TT_Device_6 Y.Y Cell_1_TT_Device_6 1 41760 Id_6 NULL 34 1000 0 41761 49:00.3 22:06.8 NULL 53:27.2 NULL 15 PRIMARY 0 121 0 0 NULL 08:01.4 NULL NULL
7 ACODATA PK_Cell_1_TT_Device_7 Y.Y Cell_1_TT_Device_7 1 43871 Id_7 NULL 16 993 0 43876 49:01.7 22:06.8 NULL 53:28.3 NULL 15 PRIMARY 0 121 0 0 NULL 23:12.3 NULL NULL
6 ACODATA PK_Cell_1_TT_Device_8 Y.Y Cell_1_TT_Device_8 1 43120 Id_8 NULL 17 997 0 43125 49:03.1 22:06.8 NULL 53:28.2 NULL 15 PRIMARY 0 121 0 0 NULL 23:14.4 NULL NULL
16 ACODATA PK_Cell_3_TT_Device_1 Y.Y Cell_3_TT_Device_1 1 39399 Id_1 NULL 5 265 0 20271 20:25.4 16:42.7 NULL 53:23.9 NULL 15 PRIMARY 0 24 0 0 NULL 18:15.7 NULL NULL
15 ACODATA PK_Cell_3_TT_Device_2 Y.Y Cell_3_TT_Device_2 1 32928 Id_2 NULL 0 266 0 16259 NULL 16:42.7 NULL 53:23.8 NULL 15 PRIMARY 0 24 0 0 NULL 07:17.6 NULL NULL
14 ACODATA PK_Cell_3_TT_Device_3 Y.Y Cell_3_TT_Device_3 1 28034 Id_3 NULL 0 267 0 15379 NULL 16:42.7 NULL 53:23.9 NULL 15 PRIMARY 0 24 0 0 NULL 18:16.5 NULL NULL
17 ACODATA PK_Cell_3_TT_Device_4 Y.Y Cell_3_TT_Device_4 1 27359 Id_4 NULL 0 265 0 14457 NULL 16:42.7 NULL 53:23.9 NULL 15 PRIMARY 0 24 0 0 NULL 18:17.2 NULL NULL
21 ACODATA PK_Cell_3_TT_Device_5 Y.Y Cell_3_TT_Device_5 1 24360 Id_5 NULL 0 264 0 13038 NULL 16:42.7 NULL 53:23.9 NULL 15 PRIMARY 0 24 0 0 NULL 18:18.2 NULL NULL
20 ACODATA PK_Cell_3_TT_Device_6 Y.Y Cell_3_TT_Device_6 1 27489 Id_6 NULL 0 264 0 15048 NULL 16:42.7 NULL 53:02.7 NULL 15 PRIMARY 0 24 0 0 NULL 18:19.3 NULL NULL
19 ACODATA PK_Cell_3_TT_Device_7 Y.Y Cell_3_TT_Device_7 1 14758 Id_7 NULL 0 264 0 14725 NULL 16:42.7 NULL 53:02.7 NULL 15 PRIMARY 0 64 0 0 NULL 42:23.3 NULL NULL
18 ACODATA PK_Cell_3_TT_Device_8 Y.Y Cell_3_TT_Device_8 1 16933 Id_8 NULL 0 264 0 16908 NULL 16:42.7 NULL 53:23.9 NULL 15 PRIMARY 0 72 0 0 NULL 07:06.6 NULL NULL
8 ACODATA PK_CELL_4_TT_DEVICE_1 Y.Y CELL_4_TT_DEVICE_1 1 59024 id_1 NULL 98 543 0 29425 17:39.7 16:19.2 NULL 53:15.0 NULL 15 PRIMARY 0 48 0 0 NULL 58:09.7 NULL NULL
13 ACODATA PK_CELL_4_TT_DEVICE_21 Y.Y CELL_4_TT_DEVICE_2 1 74192 id_2 NULL 0 417 0 29539 NULL 16:19.2 NULL 50:48.5 NULL 15 PRIMARY 0 24 0 0 NULL 28:47.9 NULL NULL
10 ACODATA PK_CELL_4_TT_DEVICE_31 Y.Y CELL_4_TT_DEVICE_3 1 67063 id_3 NULL 0 419 0 28941 NULL 16:19.2 NULL 53:10.6 NULL 15 PRIMARY 0 36 0 0 NULL 28:50.8 NULL NULL
11 ACODATA PK_CELL_4_TT_DEVICE_41 Y.Y CELL_4_TT_DEVICE_4 1 72422 id_4 NULL 0 419 0 28621 NULL 16:19.2 NULL 52:46.5 NULL 15 PRIMARY 0 24 0 0 NULL 28:52.6 NULL NULL
12 ACODATA PK_CELL_4_TT_DEVICE_51 Y.Y CELL_4_TT_DEVICE_5 1 68027 id_5 NULL 0 418 0 28554 NULL 16:19.2 NULL 53:28.0 NULL 15 PRIMARY 0 36 0 0 NULL 16:18.9 NULL NULL
9 ACODATA PK_CELL_4_TT_DEVICE_61 Y.Y CELL_4_TT_DEVICE_6 1 68289 id_6 NULL 0 420 0 29836 NULL 16:19.2 NULL 41:13.4 NULL 15 PRIMARY 0 36 0 0 NULL 11:07.0 NULL NULL
NULL ACODATA NULL N.N GETEST 0 14 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N Mustang 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
22 ACODATA PK_OldHT1 Y.Y OldHT1 1 173668 HTID NULL 3 131 0 57187 54:08.1 57:08.6 NULL 53:26.4 NULL 15 PRIMARY 0 1 0 0 NULL 54:08.0 NULL NULL
NULL ACODATA NULL N.N OldHT2 0 217156 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N OldHT6_25_2014 0 111193 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA PK_partmaster Y.Y partmaster 1 2 partnumber NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N Plant_Environment 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA PK_processmaster Y.Y processmaster 1 12 process_name NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
24 ACODATA NULL N.N turntable1 0 105549 NULL NULL 0 1 0 1767 NULL 11:11.9 NULL 10:44.4 NULL 15 PRIMARY 0 0 0 0 NULL NULL NULL NULL
NULL ACODATA NULL N.N turntable2 0 0 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL
NULL ACODATA NULL N.N turntable4 0 36480 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 15 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL


Michael F. Speck

ScottPletcher
Aged Yak Warrior

550 Posts

Posted - 2014-08-14 : 09:56:59
Thanks, that's still very helpful to me (probably not to anyone else, but I'm so used to that output that I can still read it).

Those usage stats aren't necessarily good, since they show a lot of scans.

Were there no missing indexes listed from the first query? That would be odd given the number of table scans.

[Of course, one problem is that "scan" may not be a full scan, so the missing index stats provide more evidence as to what is really going on.]
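For anyone following along, a missing-index row maps onto a CREATE INDEX statement roughly like this; the table and column names here are made up, and the DMV suggestions should be sanity-checked (and consolidated) rather than created blindly:

-- Illustrative only: a row with equality_columns = 'MoldId',
-- inequality_columns = 'ReadingDate', included_columns = 'Temperature'
-- would translate to roughly:
CREATE NONCLUSTERED INDEX IX_MoldReadings_MoldId_ReadingDate
    ON dbo.MoldReadings (MoldId, ReadingDate)   -- equality columns first, then inequality
    INCLUDE (Temperature);                      -- included columns go in INCLUDE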


MikeSpeck
Starting Member

9 Posts

Posted - 2014-08-14 : 10:22:31
There are 12 missing indexes.
Is there a way I could email you the spreadsheet? I wish the list above could have been formatted better, because I have a feeling this post could be very beneficial to other users.
One way I could make it readable to others would be to type the data into a reply rather than simply pasting it.
Let me know if you would like me to proceed with entering the data manually.
Thank you for your help.
Mike