rhavenn
Starting Member
14 Posts

Posted - 2010-05-26 : 14:41:12
So, I'm trying to write an admin script that, among other things, loops through all the databases and performs some cleanup. Right now I'm using a nested cursor to pull all the database names from sys.databases and then perform some actions. It works fine for functions and the like that are in the global scope, but it fails for anything that requires me to access an individual database. I've tried:

1) EXEC with a USE statement, but EXEC runs in its own scope, and anything declared inside that EXEC is lost when it returns.
2) Using the @dbname variable directly inside the CURSOR declaration, but that syntax isn't allowed.

Code looks like this:

--variables
DECLARE @dbname nvarchar(128);
DECLARE @sql nvarchar(1000);
DECLARE @logfile_ID nvarchar(5), @logname nvarchar(128);

--load all databases; ignore system DBs
DECLARE DbFind_Cursor CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE name NOT IN ('master', 'model', 'msdb', 'tempdb');

--open cursor
OPEN DbFind_Cursor;

--loop through; set all DBs to SIMPLE except system DBs
FETCH NEXT FROM DbFind_Cursor INTO @dbname;
WHILE @@FETCH_STATUS = 0
BEGIN
    --auto gen SQL code
    SET @sql = N'ALTER DATABASE ' + QUOTENAME(RTRIM(@dbname)) + N' SET RECOVERY SIMPLE';
    PRINT 'Executing: ' + @sql;
    --execute SQL
    EXEC (@sql);

    --find logfile; shrink logfile for each db
    DECLARE LogFind_Cursor CURSOR FAST_FORWARD FOR
        SELECT file_id, name FROM sys.database_files WHERE type = 1;
    --open cursor
    OPEN LogFind_Cursor;
    --loop through; shrink all logfiles
    FETCH NEXT FROM LogFind_Cursor INTO @logfile_ID, @logname;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        --auto gen SQL code; set database to use and shrink the file
        --EXEC has its own scope, so everything needs to be encapsulated in one command
        SET @sql = N'USE ' + QUOTENAME(RTRIM(@dbname)) + N'; DBCC SHRINKFILE (' + RTRIM(@logfile_ID) + N', 10)';
        PRINT '  Executing (' + RTRIM(@logname) + '): ' + @sql;
        --execute SQL
        EXEC (@sql);
        --loop next logfile
        FETCH NEXT FROM LogFind_Cursor INTO @logfile_ID, @logname;
    END
    --close cursor
    CLOSE LogFind_Cursor;
    DEALLOCATE LogFind_Cursor;

    --loop next db
    FETCH NEXT FROM DbFind_Cursor INTO @dbname;
END
--close cursor
CLOSE DbFind_Cursor;
DEALLOCATE DbFind_Cursor;

The problem is that LogFind_Cursor never iterates over the right files: sys.database_files is evaluated in the current database, not in @dbname. I tried qualifying it as @dbname.sys.database_files, but that syntax isn't allowed either. If I wrap it in an EXEC, it's inside its own scope again.

I could chain a bunch of @sql strings together and run one big EXEC at the end, but that seems like an unreadable, unmanageable mess. If that's the only option, I'll switch to PowerShell or C#.

I did think about calling a stored proc with @dbname, but I think that will run into the same scope issues inside that proc, since you can't do USE @dbname.
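[Editor's note: a sketch of one common workaround, not taken from this thread. It relies on the sys.master_files catalog view, which lists the files of every database and is visible from any context, so the per-database sys.database_files lookup (and the inner cursor) can be dropped entirely. Only the DBCC SHRINKFILE still needs a USE inside the dynamic batch.]

```sql
-- Sketch: enumerate every non-system database's log files from
-- sys.master_files (visible from any database) instead of
-- sys.database_files (which only shows the current database's files).
DECLARE @dbname sysname, @fileid int, @sql nvarchar(max);

DECLARE DbFind_Cursor CURSOR FAST_FORWARD FOR
    SELECT d.name, mf.file_id
    FROM sys.databases AS d
    JOIN sys.master_files AS mf ON mf.database_id = d.database_id
    WHERE d.name NOT IN ('master', 'model', 'msdb', 'tempdb')
      AND mf.type = 1;                      -- 1 = log files

OPEN DbFind_Cursor;
FETCH NEXT FROM DbFind_Cursor INTO @dbname, @fileid;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE and DBCC go into one dynamic batch because EXEC has its own scope.
    SET @sql = N'USE ' + QUOTENAME(@dbname) +
               N'; DBCC SHRINKFILE (' + CAST(@fileid AS nvarchar(10)) + N', 10);';
    PRINT 'Executing: ' + @sql;
    EXEC (@sql);
    FETCH NEXT FROM DbFind_Cursor INTO @dbname, @fileid;
END
CLOSE DbFind_Cursor;
DEALLOCATE DbFind_Cursor;
```

This flattens the nested cursors into a single one; the ALTER DATABASE ... SET RECOVERY SIMPLE step from the original script could be folded into the same loop.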
tkizer
Almighty SQL Goddess
38200 Posts
rhavenn
Starting Member
14 Posts

Posted - 2010-05-26 : 18:10:54
quote: Originally posted by tkizer

By the way, you should not be performing DBCC SHRINKFILE as a cleanup task on all databases. You need to read about this topic to understand why this is so bad. Read this for starters: http://sqlinthewild.co.za/index.php/2007/09/08/shrinking-databases/
For production databases I agree with you. If you have to do that, you're doing something wrong. However, on DEV and TEST databases the devs are routinely a) restoring from PROD backups and leaving the FULL recovery model set, and b) running scripts/tools that generate extremely large one-off inserts/deletes and a lot of log traffic. Since DEV and TST databases don't get transaction log backups, shrinking here and there is a necessity so we don't end up with a 10GB DB and a 485GB log file.

I understand that always doing SHRINKFILE can lead to disk fragmentation, etc., and that on a PROD server you'd need to re-examine why the files grow so much.

Thanks for the code info.
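[Editor's note: for the restored-from-PROD case described above, a minimal sketch (not from this thread) of how to spot databases left in FULL recovery; log_reuse_wait_desc also reports why a log can't currently be truncated.]

```sql
-- User databases still in FULL recovery, plus the reason the log
-- can't be truncated right now (LOG_BACKUP means it's waiting for a
-- transaction log backup that dev/test environments never take).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE recovery_model_desc = 'FULL'
  AND database_id > 4;          -- skip the system databases
```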
tkizer
Almighty SQL Goddess
38200 Posts

Posted - 2010-05-26 : 19:18:53
Thanks for the info as to why you need a script like this. We don't allow the devs to perform sysadmin work on non-prod environments, so a DBA does it. It is our policy to switch the recovery model to SIMPLE in non-prod environments to avoid a large tlog due to the lack of tlog backups. For the large one-off transactions, we are notified once we run out of disk space and the app gets the tlog-full message. We then work with the devs to figure out what they were doing and why.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog