
 Backup times out after having to truncate log


pearsont74
Starting Member

27 Posts

Posted - 2007-02-01 : 10:43:05
I have a db that we do monthly imports to.
The procedure we use (please tell me if this is not the best procedure)
drop indexes
truncate the tables we are importing to (not sure if this is better to do than drop)
import
recreate indexes
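
In T-SQL terms, the steps above look roughly like this. This is only a sketch: the table name, index name, and file path are invented for illustration, and BULK INSERT stands in for whatever import method is actually used.

```sql
-- 1. Drop the indexes on the target table (names are hypothetical)
DROP INDEX MyImportTable.IX_MyImportTable_Key

-- 2. Empty the table (TRUNCATE is minimally logged, unlike DELETE)
TRUNCATE TABLE MyImportTable

-- 3. Import the new data (BULK INSERT shown; DTS or bcp would also work)
BULK INSERT MyImportTable
FROM 'C:\import\monthly_data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

-- 4. Recreate the indexes
CREATE INDEX IX_MyImportTable_Key ON MyImportTable (KeyColumn)
```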

Since the db is pretty large (over 10 gigs) we have had a space issue with the transaction log. I tried something last night: I dropped the entire db and recreated it from a mirror db on a different server. I have the log limited in size growth, and while importing I was getting errors due to the log being full. So I increased the size and it kept getting bigger. Finally I finished the import and went to run a full backup. Every time, I got a timeout within a minute or two of the process. I ran the shrink-transaction-log procedure from the MS site here: http://support.microsoft.com/kb/272318/
using: BACKUP LOG dbname WITH TRUNCATE_ONLY
then: DBCC SHRINKFILE(dbname_log, size)

Is this not a good procedure? How can I take care of this?

Kristen
Test

22859 Posts

Posted - 2007-02-01 : 10:57:53
"drop indexes
truncate the tables we are importing to
import
recreate indexes
"

Looks about right to me.

"truncate the tables we are importing to (not sure if this is better to do than drop)"

I expect that TRUNCATE is better than DROP if the data you are importing is similar to the previous data.

"we have had a space issue with the transaction log"

How are you doing the IMPORT? BCP? DTS? Something else? Adjusting the Batch Size may help with the log growth.

Is the database Recovery model FULL or SIMPLE? I expect that SIMPLE will be sufficient for this job, so then you don't have to back up the logs; keeping the batch size smaller should also reduce the log space requirement.
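
As a hedged example of setting a batch size: both bcp (via the -b switch) and BULK INSERT accept one, so each batch commits separately and the log space can be reused rather than the whole import sitting in one giant transaction. File, table, and size below are invented.

```sql
-- Import in 10,000-row batches; each batch is committed on its own,
-- so under SIMPLE recovery that log space becomes reusable as you go
BULK INSERT MyImportTable
FROM 'C:\import\monthly_data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000)
```

The equivalent from the command line would be bcp's -b flag, e.g. `-b 10000`.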

"Every time, I got a timeout within a minute or two of the process"

Backup shouldn't time out. But it depends what you are running it from; if it's an application, or a web page, then it's probably the application that is timing out (rather than the backup).

"is [shrink] not a good procedure?"

Shrink (log) reduces the physical size of the log file.

At some point the Log file has to grow back again. That is a very resource-intensive operation, and shrink/grow cycles fragment the file on disk. So generally it is better to let the Log file grow to an "equilibrium" size and leave it at that size.

So I reckon you need to concentrate on getting the Max. used size of your Log smaller.

By the by, how big are your MDF and LDF files? (Maybe your idea of big [for this sized job] and mine are going to be different!)

Kristen

pearsont74
Starting Member

27 Posts

Posted - 2007-02-01 : 11:48:08
The tables are the same with no change to the design; the import is for new or updated data in the exact same format, normally from an Access db.
I currently do not use a DTS... just a straight import through the SQL wizard. Unfortunately the import changes, so I haven't created one. Which is the best method? A DTS package? Not sure how to adjust the batch size... any tips on this or links for info would be great.
The recovery is full... should I have autoshrink on?
I run the backup on the server itself, through SQL Enterprise Manager.
The mdf file size is 13.6 gigs... should the log file be in the same range?

Kristen
Test

22859 Posts

Posted - 2007-02-01 : 12:10:26
"Not sure how to adjust the batch size"

Well, if you use DTS or BCP that's something you can set. If you just use SQL - e.g.

INSERT INTO MyTargetTable
SELECT *
FROM MySourceTable

it will do ALL the rows in a single batch - which is very heavy on the log. The only alternative is to loop round doing them in batches, but that's quite a lot of programming.
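
A rough sketch of that kind of loop on SQL 2000, assuming not-yet-copied rows can be identified by key (all names here are invented; SET ROWCOUNT is used because TOP won't take a variable on 2000):

```sql
-- Copy rows in batches of 10,000 so each batch commits separately
SET ROWCOUNT 10000

WHILE 1 = 1
BEGIN
    INSERT INTO MyTargetTable (ID, SomeColumn)
    SELECT s.ID, s.SomeColumn
    FROM MySourceTable s
    WHERE NOT EXISTS (SELECT * FROM MyTargetTable t WHERE t.ID = s.ID)

    IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy
END

SET ROWCOUNT 0   -- restore normal behaviour
```

Each pass through the loop is its own transaction, so the log only ever has to hold one batch's worth of changes at a time.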

DTS would be a better bet, I reckon, if you can use that.

"The recovery is full"

Do you need to be able to recover to a point in time? If it's just a reporting database it's probably unnecessary; if it's also an OLTP database then you probably do. Make sure you are doing Transaction Log Backups very frequently during the import (at least every 10 minutes); that will help to mitigate the Log file growing out of control!
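
For illustration, such a log backup might look like this (database name and path are made up; in practice you'd schedule it as a SQL Server Agent job):

```sql
-- Back up the log; once backed up, the inactive portion can be reused,
-- which keeps the LDF from growing unchecked during the import
BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_log.bak'
```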


"...should i have autoshrink on?"

It's the work of the devil; don't use it (for the reasons about fragmentation above).

"The mdf file size is 13.6 gigs"

I reckon that, for a database using the FULL Recovery model, the LDF being up to 120% of the size of the MDF is something you have to live with - so if that's your scenario I wouldn't worry about it too much. An overnight maintenance routine that rebuilds indexes is probably going to use that much log space ... let alone your import.

Kristen

pearsont74
Starting Member

27 Posts

Posted - 2007-02-01 : 12:32:23
I'll create a DTS for this, but I am not sure how to regulate the batch in DTS... something to play around with today, I guess.

As far as the type of db... it's not being written to, but it does have an application querying against it. I noticed the same db on our dev server has simple recovery and the log file is a lot smaller, but the mdf is about the same size. I'm not quite sure how the db can be that size when the Access db I imported was only about 6 gigs.

BTW... thanks for all your help and fast replies.

pearsont74
Starting Member

27 Posts

Posted - 2007-02-01 : 13:39:26
OK... I created a new backup device and I was able to back up the db without errors.
I can only assume the truncate and shrink process messes up the db backup device file.
And I see what you mean by batches now... duh, LOL.
Thanks for your help.
   
