Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Logs filling up


poser
Posting Yak Master

124 Posts

Posted - 2011-11-08 : 09:30:09
I have a huge amount of data that gets inserted nightly, and I am having problems with my transaction logs.
I had the recovery model set to full, but the logs grew far too big, too fast (disk space).
I put the database in simple mode and turned off autogrow, and I got an error that the log was full.
I put it in simple mode and turned on autogrow, and it is growing huge again.
How can I manage this more efficiently?
Thanks for all help,
r/p

vikki.seth
Yak Posting Veteran

66 Posts

Posted - 2011-11-08 : 09:35:18
Check for the possibility of using bulk insert. Put your database in bulk-logged recovery mode before the insert process and change it back to full after the import completes.

If bulk insert/bcp is not possible, change the insert procedure to insert records in batches rather than all records in one transaction.
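A minimal sketch of the batching idea. The table and column names (dbo.StagingTable, dbo.TargetTable, id, col1) and the batch size are hypothetical placeholders, not from the original post:

-- Sketch: copy rows in batches of ~50,000 so each transaction stays small.
-- In simple recovery the log can be reused at each checkpoint between batches;
-- in full recovery, frequent log backups reclaim the space instead.
DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable (id, col1)
    SELECT TOP (@BatchSize) s.id, s.col1
    FROM dbo.StagingTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
END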

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-08 : 12:59:37
Switching recovery models only determines what happens after the transaction commits or rolls back. It does not affect actual tlog usage.

Have you tried batching the inserts?

How long does the process take? And how often are you backing up the transaction log?

I never recommend the bulk-logged recovery model, as you have limited point-in-time recovery options during the bulk process. It doesn't make sense for critical data that requires all points of recovery.
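As an illustration of the log-backup frequency point above (a hedged sketch; the database name and path are placeholders): scheduling log backups every few minutes during the load window lets committed work be cleared from the log before it fills.

-- Run on a schedule (e.g. every 5-10 minutes) while the nightly job executes.
BACKUP LOG MyDatabase
TO DISK = N'D:\SQLBackups\MyDatabase_log.trn';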

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

poser
Posting Yak Master

124 Posts

Posted - 2011-11-08 : 14:47:14
This job takes 3 hours to run. The programmers are not interested in rewriting the code :(
I'm just trying to find a way to manage this so I don't have to do it all manually. Of course the programmers are okay with that LOL
No matter how many times I tell them, truncating/shrinking the log file is not a daily option.
I have to work with what I've got...

Thanks,
r/p

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-08 : 15:11:10
Well, if they are unwilling to rewrite it, then you'll need to add disk space so that the transaction log file can accommodate this process.

And stop truncating and shrinking the log; that is not a solution for a process that runs daily. You are causing more issues by doing it.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Kristen
Test

22859 Posts

Posted - 2011-11-08 : 15:18:15
quote:
Originally posted by poser

Programmers are not interested in rewriting code :(



They are here; in fact, it's a way of life.

When we see ways to refactor with the benefit of improved knowledge, we do. There needs to be a perceived benefit, e.g. better performance or reliability. But a key part is that the act of "improving" the code means future code written by the team benefits from the knowledge base built in the earlier refactoring. When reused, that knowledge results in better code on the next job, less downtime, and more performance, and thus less time spent on debugging and tuning.

Slightly less tangible is the benefit of reduced customer dissatisfaction, defection, and negative PR.

vikki.seth
Yak Posting Veteran

66 Posts

Posted - 2011-11-09 : 03:40:49
quote:
Originally posted by tkizer

Switching recovery models only determines what happens after the transaction commits or rolls back. It does not affect actual tlog usage.



I believe actual tlog usage does vary between the full and bulk-logged recovery models. If we are running a minimally logged operation such as bcp, the usage of the tlog will be minimal (all pages changed by the minimally logged operation are marked in the BCM page).

Tlog backups will definitely include all extents changed by the bulk operation, so the tlog backup size will be similar.

Please correct me if I am wrong.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-09 : 08:37:44
You're right. I should have specified that I was replying to the OP regarding the inserts and the fact that the OP is changing recovery models to see if it fixes the problem.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 12:00:24
Can anyone prove that with an example?

I tried both INSERT INTO and BULK INSERT under both recovery models, but the log file size is the same for both of them.

PBUH


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-09 : 12:48:32
File size won't necessarily change. It's the used space inside the file that matters.
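To compare used space rather than file size, something like this works (the FILEPROPERTY query is a supplementary sketch, not from the original thread):

-- Log size (MB) and log space used (%) for every database:
DBCC SQLPERF(LOGSPACE);

-- Per-file size and internal space used (MB) for the current database
-- (size is reported in 8 KB pages, hence the division by 128):
SELECT name,
       size / 128.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb
FROM sys.database_files;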

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 12:57:00
So that's what I am trying to understand here.

If anything is minimally logged, that means the size should not increase. In my test I had set the log file size to 512 KB with 10% growth.

So if anything is minimally logged, then the number of VLFs created should be less than under full logging (full recovery), and consequently the log should not grow to the same size it reaches when the same actions are performed under the full recovery model.



PBUH


GilaMonster
Master Smack Fu Yak Hacker

4507 Posts

Posted - 2011-11-09 : 13:17:21
VLFs are created only when the log is created or grown, not when operations are logged.

Minimally logged doesn't mean the log won't grow. It only means that less is logged than in full recovery. There's a whole bunch of requirements for an operation to be minimally logged.
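A hedged illustration of two operations that can qualify for minimal logging in bulk-logged (or simple) recovery, assuming dbo.Source exists, dbo.Heap is a heap with no nonclustered indexes, and the data file path is a placeholder:

-- SELECT INTO can qualify (the target table is created by the statement):
SELECT * INTO dbo.NewTable FROM dbo.Source;

-- BULK INSERT into a heap with a table lock can qualify:
BULK INSERT dbo.Heap
FROM 'C:\data\load.dat'
WITH (TABLOCK);

-- A plain INSERT into an indexed table is fully logged in every recovery model.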

--
Gail Shaw
SQL Server MVP

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 13:51:53
So what exactly is meant by "minimally logged", and what do you mean by "less logged"?

If the log size is not going to be less and the VLFs are going to be the same, then why should one change to bulk_logged for certain operations, with the disadvantage of not having point-in-time recovery?


PBUH


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-09 : 13:58:11
That's exactly my point: you shouldn't change to the bulk_logged recovery model. The disadvantage (fewer point-in-time recovery points) far outweighs the advantage (less logging, but still logged), at least in my critical environments. Your mileage may vary.


Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 14:03:28
So what's the need for the bulk_logged recovery model in the first place, if it should not be used for the very operations where, by definition, it is supposed to offer much greater advantages?

Also, can anyone please explain what "less logging" is? In my test I really could not find a way to confirm that bulk_logged indeed did less logging.

PBUH


tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-11-09 : 14:45:36
What operation did you do in your test? And how did you measure it?

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog
Go to Top of Page

GilaMonster
Master Smack Fu Yak Hacker

4507 Posts

Posted - 2011-11-09 : 15:01:00
quote:
Originally posted by Sachin.Nand

So what exactly is meant by "minimally logged" and what do you mean by "less logged" ?


There's a very good article in Books Online....

quote:
If log size is not going to be less and VLF's are going to be the same then why should one change to Bulk_Logged for certain operations with the disadvantage of not having point in time recovery.


The log size and the VLFs are only affected by CREATE DATABASE, ALTER DATABASE, autogrow, and shrink. An operation writing to the log won't generate VLFs unless it forces an autogrow.

A minimally logged operation will log less than a fully logged operation. Less meaning exactly what the word usually means - a smaller amount.

--
Gail Shaw
SQL Server MVP

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 15:03:00
Here is the code. Try changing the recovery model between full and bulk_logged; you will see the log file size is the same for both models. Not sure what I am missing here...

USE master
IF DB_ID('BulkLogged') IS NOT NULL DROP DATABASE BulkLogged
GO
CREATE DATABASE BulkLogged
ON
PRIMARY
(NAME = N'BulkLogged', FILENAME = N'C:\BulkLogged.mdf' , SIZE = 5MB , FILEGROWTH = 10%)
LOG ON
(NAME = N'BulkLogged_log', FILENAME = N'C:\BulkLogged_log.ldf' , SIZE = 512kb , FILEGROWTH = 10%)
GO
ALTER DATABASE BulkLogged SET RECOVERY bulk_logged

USE BulkLogged
GO


CREATE TABLE T1(c1 int identity primary key, c2 char(3))
INSERT INTO T1 SELECT TOP 100000 'aaa' FROM syscolumns a CROSS JOIN syscolumns b

dbcc sqlperf(logspace)
dbcc loginfo

SELECT * INTO T2 FROM T1

dbcc loginfo
dbcc sqlperf(logspace)



PBUH


GilaMonster
Master Smack Fu Yak Hacker

4507 Posts

Posted - 2011-11-09 : 15:06:52
Without taking a backup, both are still in the pseudo-simple recovery model.

--
Gail Shaw
SQL Server MVP

Sachin.Nand

2937 Posts

Posted - 2011-11-09 : 15:12:59
quote:
Originally posted by GilaMonster

Without taking a backup, both are still in the pseudo-simple recovery model.

--
Gail Shaw
SQL Server MVP



OK, now that's interesting... So when should I take a backup: before the bulk operation or after, and what kind?

PBUH


GilaMonster
Master Smack Fu Yak Hacker

4507 Posts

Posted - 2011-11-09 : 15:14:50
I made the table a lot wider (it was so small there would be little visible effect) and added full and log backups:

USE master
IF DB_ID('BulkLogged') IS NOT NULL DROP DATABASE BulkLogged
GO
CREATE DATABASE BulkLogged
GO

ALTER DATABASE BulkLogged SET RECOVERY bulk_logged

BACKUP DATABASE BulkLogged TO DISK = 'D:\Develop\Databases\Backups\BL.bak'

USE BulkLogged
GO


CREATE TABLE T1(c1 int identity primary key, c2 char(3), filler CHAR(200))
INSERT INTO T1 SELECT TOP 100000 'aaa', '' FROM syscolumns a CROSS JOIN syscolumns b

BACKUP LOG BulkLogged TO DISK = 'D:\Develop\Databases\Backups\BL.trn'

SELECT * INTO T2 FROM T1

dbcc loginfo
dbcc sqlperf(logspace)

GO

USE master
IF DB_ID('FullLogged') IS NOT NULL DROP DATABASE FullLogged
GO
CREATE DATABASE FullLogged
GO
ALTER DATABASE FullLogged SET RECOVERY FULL

BACKUP DATABASE FullLogged TO DISK = 'D:\Develop\Databases\Backups\FL.bak'

USE FullLogged
GO


CREATE TABLE T1(c1 int identity primary key, c2 char(3), filler CHAR(200))
INSERT INTO T1 SELECT TOP 100000 'aaa', '' FROM syscolumns a CROSS JOIN syscolumns b

BACKUP LOG FullLogged TO DISK = 'D:\Develop\Databases\Backups\FL.trn'

SELECT * INTO T2 FROM T1

dbcc loginfo
dbcc sqlperf(logspace)


-- Results (database, log size MB, log space used %, status):
-- BulkLogged  51.99219  15.26014  0
-- FullLogged  51.99219  54.63467  0

Log space used in full recovery: 54% of 52 MB, so about 28 MB.
Log space used in bulk-logged: 15% of 52 MB, so about 7.8 MB.

--
Gail Shaw
SQL Server MVP