 Issue with increasing Transaction log size


sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-28 : 16:01:32
We have a SQL Server 2000 DB where every night we load data from MAS90 SOA files into the db tables using DTS packages. Here are the steps in sequence:
1. Drop tables
2. Create tables
3. Load the data from MAS90 SOA files.
4. Create indexes
The first three steps ran fine, but when the create indexes step started, the tran log, which was about 10 MB, increased to 46.5 GB. While the create index step was running I received the following alert:
Alert: CPU Load on servername-E:\ Database 5cd02b4f is now 36 %
So I am thinking that the create indexes step, which is essentially reindexing the tables, is causing that.
Please let me know how to handle the tran log size issue. Thanks.

anonymous1
Posting Yak Master

185 Posts

Posted - 2007-11-28 : 16:10:31
Are you using the full recovery model? How often do you run log backups? Are they scheduled during the period this is running?
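If you want a quick way to confirm both, something like this works on SQL Server 2000 ('YourLoadDB' is just a placeholder for the actual database name):

-- Current recovery model
SELECT DATABASEPROPERTYEX('YourLoadDB', 'Recovery') AS RecoveryModel

-- Most recent log backup recorded in msdb for that database
SELECT MAX(backup_finish_date) AS LastLogBackup
FROM msdb.dbo.backupset
WHERE database_name = 'YourLoadDB'
  AND type = 'L'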

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-28 : 16:19:49
Yes, it's the full recovery model; the log is backed up once at 6:30 PM. No, they are scheduled at a different time.

Zoroaster
Aged Yak Warrior

702 Posts

Posted - 2007-11-28 : 16:25:16
If you have enough drive space to accommodate the growth, you should just do a log backup as the final step in the job to clear the logical log space; then it will just reuse the same space each night without having to regrow. If you don't have the space, you might be able to build in some log backups between the steps of the job, if they can be separated out like that; this will keep it trim.
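For example, the final job step could be a plain log backup like this (a sketch only; the database name and backup path are placeholders, not your actual ones):

-- Final job step: back up the log so the logical log space is cleared
-- and can be reused the next night without the file regrowing.
BACKUP LOG YourLoadDB
TO DISK = 'E:\SQLBackups\YourLoadDB_log.bak'
WITH INIT   -- overwrite last night's file; use NOINIT to append instead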




Future guru in the making.

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-28 : 16:33:38
I don't think I have enough drive space; I currently have just 18 GB of disk space left.

anonymous1
Posting Yak Master

185 Posts

Posted - 2007-11-28 : 16:43:03
You can include log backups in some or all of steps 2, 3 & 4. This prolongs the time to complete, but reduces the overall space usage. Depending on your willingness to lose data and your hours of operation, you can also change the recovery model before and after the process, but generally this is not practiced.
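If you did go the recovery model route, the wrapper would look roughly like this (just a sketch; the database name and backup path are made up, and you give up point-in-time recovery for that window, so the backup chain needs to be restarted afterwards):

-- Before the nightly DTS load
ALTER DATABASE YourLoadDB SET RECOVERY SIMPLE

-- ... drop/create tables, load data, create indexes ...

-- After the load: switch back and restart the backup chain
ALTER DATABASE YourLoadDB SET RECOVERY FULL
BACKUP DATABASE YourLoadDB
TO DISK = 'E:\SQLBackups\YourLoadDB_full.bak'
WITH INIT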

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2007-11-28 : 16:54:21
You should schedule log backups to run much more often. Every 15 minutes is a good place to start.

You can modify the datapump steps of your DTS packages to commit the data in smaller batches; the default is to do the datapump step in a single transaction. Setting the batch size to about 50,000 rows is a good starting point.
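For the 15-minute log backups, one way is to add a schedule to whatever SQL Agent job already does the log backup (a sketch only; the job name is hypothetical, and the same thing can be set up through Enterprise Manager). The batch size itself is the "Insert batch size" setting on the Options tab of the Transform Data task, not something you set in T-SQL.

-- Add an every-15-minutes schedule to an existing log backup job
-- (the job name below is a placeholder).
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Log Backup - YourLoadDB',
    @name = N'Every 15 minutes',
    @freq_type = 4,                -- daily
    @freq_interval = 1,            -- every day
    @freq_subday_type = 4,         -- repeat in units of minutes
    @freq_subday_interval = 15,    -- every 15 minutes
    @active_start_time = 0         -- starting at midnight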



CODO ERGO SUM

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-28 : 17:10:15
I am not sure if I need the log backups every 15 minutes, because once the data gets loaded into the SQL database, it doesn't change until the SQL DTS job runs the following night. When the SQL job runs at night it drops/creates tables etc. and then loads the data into the tables for the following day; that's when we have new data.
So I am just taking a db backup @4PM and then a tran log backup @6:30PM once every day. Please correct me if I have to do something different.
Currently in the transform data task properties, I have the following options checked:
1. Use fast load
2. Always commit final batch
3. Check constraints
4. I am using 1000 for insert batch size.
Thanks for all the responses.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-11-28 : 17:22:52
If the data doesn't change until the DTS job runs and you can recreate all of this data by rerunning the DTS job, then you should be using SIMPLE recovery model instead.
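If you do switch, it's a one-line change, and you can then shrink the log file back down from the size it has already grown to (a sketch; 'YourLoadDB_Log' is a guess at the logical file name, check sp_helpfile for the real one):

ALTER DATABASE YourLoadDB SET RECOVERY SIMPLE

-- Optionally reclaim the space the log has already grabbed
-- (run this in the database; the logical log file name may differ on your server).
DBCC SHRINKFILE ('YourLoadDB_Log', 500)   -- target size in MB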

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-28 : 22:39:27
quote:
Originally posted by tkizer

If the data doesn't change until the DTS job runs and you can recreate all of this data by rerunning the DTS job, then you should be using SIMPLE recovery model instead.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/



Thanks Tara, I shall change it to Simple tonight and will check the tran log tomorrow morning once the DTS job completes. And also, since it's Simple, I shall take just the db backup once a day.

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-29 : 14:46:04
The SQL DTS job completed and the log size (LDF file) was 13.2 GB; the log actually increased from 10 MB to 13.2 GB with the Simple recovery model. When I had the Full recovery model before, the log increased to almost 41 GB. With Simple there was still an increase, but it wasn't much comparatively. Is the tran log (.LDF file) increase due to the create indexes step that's happening in the SQL job? Thx.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-11-29 : 14:48:08
Even with SIMPLE recovery model, the transaction log is used. You can not avoid this. What happens with SIMPLE though is that after the transaction completes, it is cleared from the tlog (unlike BULK or FULL which waits until a tlog backup). So if you want the system to use even less tlog space, you can perform the load in smaller batches (that's if the load is causing it). You'll need to do some research to see which step is using the tlog file the most.
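One low-effort way to do that research is to capture DBCC SQLPERF(LOGSPACE) after each step of the job, for example from a small Execute SQL task between the existing steps (a sketch; the history table is made up):

-- Log size and percent used for every database on the server
DBCC SQLPERF(LOGSPACE)

-- Optionally capture it over time to compare the steps (run once to create the table)
CREATE TABLE dbo.LogSpaceHistory
(
    CapturedAt      datetime DEFAULT GETDATE(),
    DatabaseName    sysname,
    LogSizeMB       float,
    LogSpaceUsedPct float,
    Status          int
)

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)')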

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-29 : 15:22:37
quote:
Originally posted by tkizer

Even with SIMPLE recovery model, the transaction log is used. You can not avoid this. What happens with SIMPLE though is that after the transaction completes, it is cleared from the tlog (unlike BULK or FULL which waits until a tlog backup). So if you want the system to use even less tlog space, you can perform the load in smaller batches (that's if the load is causing it). You'll need to do some research to see which step is using the tlog file the most.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/



Thanks for the quick response. "Transaction completes": does it mean committed transactions?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2007-11-29 : 15:28:53
Committed or rolled back.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

sqlserverdeveloper
Posting Yak Master

243 Posts

Posted - 2007-11-29 : 15:40:44
quote:
Originally posted by tkizer

Committed or rolled back.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/



Thanks!
   
