NewHampshire
Starting Member
14 Posts

Posted - 2009-01-21 : 10:37:26

Here's my current situation. I have a database that gets a nightly feed of about 1 GB. It is set to simple recovery, but for various reasons the transactions are mostly fully logged. I'm working on that (I'm new here), but in the meantime I am perplexed as to why the log (1) gets so big (15 GB) and (2) doesn't empty. In my past experience this usually means a transaction is left open, but not here. Any ideas? Thanks.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-01-21 : 10:40:54

Do your nightly feed in batches.
NewHampshire
Starting Member
14 Posts

Posted - 2009-01-21 : 10:54:11

Bringing it over in batches would reduce the number of transactions, certainly. But the database isn't actually doing much with this data yet, so the file growth is out of whack. The fact that the feed arrives in one piece might explain a large log file, but it doesn't explain why the log isn't clearing. I'd rather understand the issue than just reduce the pain by changing the feed. I'm new here, and these explorations help me learn the environment.
cshah1
Constraint Violating Yak Guru
347 Posts

Posted - 2009-01-21 : 10:59:14

>> Bringing it over in batches would reduce the number of transactions

Also, issue a CHECKPOINT between those batches...
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-01-21 : 11:09:16

Open transactions (long-running transactions) will prevent truncation of the inactive portion of the log, even in the simple recovery model. I wouldn't issue checkpoints in between, as the simple recovery model already performs automatic checkpoints.
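A quick way to confirm whether a long-running transaction is pinning the log is DBCC OPENTRAN, which reports the oldest active transaction in a database. A minimal sketch (the database name MyDatabase is a placeholder):

```sql
-- Report the oldest active transaction in the current database.
-- If nothing is open, it prints "No active open transactions."
USE MyDatabase;
DBCC OPENTRAN;
```

If this shows an old transaction, that transaction's start is the point past which the log cannot be truncated.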
NewHampshire
Starting Member
14 Posts

Posted - 2009-01-21 : 11:25:55

Right, I know that. That's why I said that there are no open transactions.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-01-21 : 11:31:57

I mean while you are doing the nightly feed: how long does it run? Shrink the log with DBCC SHRINKFILE to manage growth.
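For reference, a shrink sketch along those lines, assuming the log's logical file name is MyDatabase_log and a 1 GB target (both placeholders you'd look up for your own database):

```sql
-- Find the logical name and current size of the log file
USE MyDatabase;
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Shrink the log file down toward a 1024 MB target
DBCC SHRINKFILE ('MyDatabase_log', 1024);
```

Note the shrink only succeeds for the portion of the log that is already inactive; if something is holding the log, the file won't shrink past it.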
cshah1
Constraint Violating Yak Guru
347 Posts

Posted - 2009-01-22 : 09:40:28

sodeep, even though a checkpoint is performed automatically in the simple recovery model, what happens when you run DELETE TOP (20000000) FROM MyTable? Even with simple recovery, your log file will explode, at least based on my understanding. You have to delete in batches and issue a CHECKPOINT between those batches.
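A minimal sketch of that batched pattern (MyTable, the LoadDate filter, and the 10,000-row batch size are placeholders): each batch commits as its own small transaction, and the CHECKPOINT between batches lets simple recovery truncate the log before the next batch.

```sql
-- Delete in small batches so no single transaction fills the log
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM MyTable
    WHERE LoadDate < '2009-01-01';

    SET @rows = @@ROWCOUNT;   -- 0 once nothing is left to delete

    CHECKPOINT;               -- allow log truncation between batches
END
```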
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-01-22 : 09:45:35

Here is what Books Online says: The Database Engine cannot recover from an interrupted checkpoint. If a checkpoint is interrupted and a recovery is required, then the Database Engine must start recovery from a previous, successful checkpoint.
edmays
Starting Member
6 Posts

Posted - 2009-02-09 : 15:57:52

I'm having a similar issue. The database is a reporting database that gets loaded once daily. The daily transaction load is about 300 MB, and the database is using the simple recovery model. The log file continues to grow after each day's import. I thought that the simple recovery model allowed the existing transaction log to be reused. I would prefer to keep the transaction log at a fixed size so that I don't have excessive file maintenance and fragmentation.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 16:06:20

You should do huge DML operations in batches.
edmays
Starting Member
6 Posts

Posted - 2009-02-09 : 16:19:46

Whether I do batches or not, there will be transactions that fill the log. Why does the log continue to grow, even in simple recovery?
edmays
Starting Member
6 Posts

Posted - 2009-02-09 : 16:37:28

I understand that SQL Server puts transactions in the log. How does it reclaim log space in simple recovery mode? MS says that it does it automatically. From the overview of recovery models on TechNet:

Simple: No log backups. Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space.

However, my log file keeps growing. It's not automatically reclaiming log space.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 17:03:05

This is from Books Online:

If the database is using the simple recovery model, an automatic checkpoint is generated whenever the number of log records reaches the lesser of these two values:

- The log becomes 70 percent full.
- The number of log records reaches the number SQL Server estimates it can process during the time specified in the recovery interval option.

Long-Running Transactions
The active portion of the log must include every part of all uncommitted transactions. An application that starts a transaction and does not commit it or roll it back prevents SQL Server from advancing the MinLSN. This can cause two types of problems:

- If the system is shut down after the transaction has performed many uncommitted modifications, the recovery phase of the subsequent restart can take considerably longer than the amount of time specified in the recovery interval option.
- The log may grow very large, because the log cannot be truncated past the MinLSN. This happens even if the database is using the simple recovery model, in which the transaction log is normally truncated on each automatic checkpoint.

Replication Transactions
The active portion of the log must also contain all transactions marked for replication that have not yet been replicated to a subscriber. If these transactions are not replicated in a timely manner, they can also prevent truncation of the log.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 17:08:27

If you are looking for a deeper explanation, dig in here: http://msdn.microsoft.com/en-us/library/ms179355.aspx
edmays
Starting Member
6 Posts

Posted - 2009-02-09 : 17:14:42

Thanks. The database has no replication, and there aren't any uncommitted transactions. In fact, checking log_reuse_wait_desc returns nothing. I would have thought that at the least it would be waiting for a checkpoint.
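For anyone following along, the check edmays describes can be written as the query below; a log_reuse_wait_desc value of NOTHING means the log is currently reusable, while values like ACTIVE_TRANSACTION or CHECKPOINT name whatever is blocking truncation.

```sql
-- Ask the server why each database's log cannot yet be reused
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();
```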
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 17:29:19

quote: Originally posted by edmays
Thanks. The database has no replication, and there aren't any uncommitted transactions. In fact, checking log_reuse_wait_desc returns nothing. I would have thought that at the least it would be waiting for a checkpoint.

Then why don't you shrink the log file?
edmays
Starting Member
6 Posts

Posted - 2009-02-09 : 17:35:30

That's an option, but not one that I want to pursue right now, because I would be doing it without knowing why the behavior is occurring. Additionally, shrinking and regrowing the log file is going to fragment the files. There has to be a reason why it's not allowing reuse.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 17:36:56

Did you read the link I posted? It should make this clear.
sodeep
Master Smack Fu Yak Hacker
7174 Posts

Posted - 2009-02-09 : 17:42:56

Here is what it says:

Log truncation frees any virtual logs whose records all appear in front of the minimum recovery log sequence number (MinLSN). The MinLSN is the log sequence number of the oldest log record that is required for a successful database-wide rollback. The log may grow very large because the log cannot be truncated past the MinLSN. This happens even if the database is using the simple recovery model, in which the transaction log is normally truncated on each automatic checkpoint.
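To see those virtual log files directly, DBCC LOGINFO lists one row per VLF; a sketch (again with MyDatabase as a placeholder name):

```sql
-- One row per virtual log file (VLF). Status = 2 means the VLF is
-- still active and cannot be reused; Status = 0 means it is free.
USE MyDatabase;
DBCC LOGINFO;

-- Percentage of log space in use, per database
DBCC SQLPERF (LOGSPACE);
```

If every VLF at the tail of the file shows Status = 2, something is still holding the log active, which would explain why the file cannot shrink or be reused.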