Author |
Topic |
elwoos
Master Smack Fu Yak Hacker
2052 Posts |
Posted - 2006-08-22 : 11:57:48
|
My database is pretty small, about 2 GB, but the transaction log seems to be permanently around 8 GB. The logs are backed up every half an hour during the day when the application is in use. There are never more than about 60 concurrent users, mostly doing simple data entry, and we only add a few hundred records to the system per week. Most of the application doesn't use sprocs; it simply connects to the d/b via an ODBC connection. Is this a "normal" size for the log files in this sort of scenario?

cheers

steve
-----------
Don't worry head. The computer will do all the thinking from now on. |
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-08-22 : 12:02:44
|
Sure, seems normal to me.

-ec |
 |
|
SwePeso
Patron Saint of Lost Yaks
30421 Posts |
Posted - 2006-08-22 : 12:03:33
|
How much of the log file is actually used? For only 60 users and some hundred inserts per week, it seems a little large.

Peter Larsson
Helsingborg, Sweden |
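To check that, a quick sketch using the standard DBCC commands available on SQL 2000 (the database name is a placeholder):

```sql
-- Reports the size of every database's transaction log
-- and the percentage of it currently in use.
DBCC SQLPERF(LOGSPACE);

-- For one database, DBCC LOGINFO lists the individual virtual
-- log files (VLFs); Status = 2 means the VLF is still active.
USE MyDatabase;  -- hypothetical database name
DBCC LOGINFO;
```

If LOGSPACE shows only a small percentage of the 8 GB in use, the file has simply grown to accommodate some past burst of activity and is mostly empty.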
 |
|
Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)
7020 Posts |
Posted - 2006-08-22 : 12:26:09
|
Seems very large for the number of transactions you described. Maybe it is getting inflated by some process like reindexing?

You can start by checking the size of the transaction log backups over the course of a week or so. If they are all much smaller than 8 GB, you could just run a DBCC SHRINKFILE to reduce the transaction log to a more appropriate size.

If you are getting large transaction log backups, you could schedule transaction log backups more often, say every 5 minutes, to keep the size under control.

CODO ERGO SUM |
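A minimal sketch of that shrink, assuming a log file whose logical name is MyDatabase_Log (the database name, file name, and backup path are all placeholders; check sysfiles for the real logical name):

```sql
USE MyDatabase;  -- hypothetical database name

-- The logical file names (the 'name' column) are what SHRINKFILE expects.
SELECT name, size FROM sysfiles;

-- Back up the log first so the inactive portion can be released,
-- then shrink the log file down to a 500 MB target.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';  -- hypothetical path
DBCC SHRINKFILE (MyDatabase_Log, 500);
```

If the active portion of the log sits at the end of the file, the shrink may not release much until a further log backup or two lets the active VLFs wrap back to the start.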
 |
|
MichaelP
Jedi Yak
2489 Posts |
Posted - 2006-08-22 : 12:26:28
|
An 8 GB TX log seems very large to me for such a small database, small number of users, and a small number of inserts.

Michael

<Yoda>Use the Search page you must. Find the answer you will. Cursors, path to the Dark Side they are. Avoid them, you must. Use Order By NewID() to get a random record you will.</Yoda>

Opinions expressed in this post are not necessarily those of TeleVox Software, inc. All information is provided "AS IS" with no warranties and confers no rights. |
 |
|
Wanderer
Master Smack Fu Yak Hacker
1168 Posts |
Posted - 2006-08-22 : 12:53:35
|
quote: Originally posted by Michael Valentine Jones
Maybe it is getting inflated by some process like reindexing?

I agree that (a) that seems large, and (b) reindexing may be the culprit. I'm still smarting from being ignorant of the impact of reindexing when (in 2001/2002, IIRC) we migrated a large insurance DB (180 GB) from SQL 7 to SQL 2000. We had a DBREINDEX step, and our log went from about 10 GB to 70 GB....

CiaO

*##* *##* *##* *##* Chaos, Disorder and Panic ... my work is done here! |
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2006-08-22 : 13:19:54
|
They say that a good starting point for a transaction log is 25% of the size of the MDF. None of our databases are even close to that due to reindexing; most of our transaction logs are slightly bigger than the MDFs because of it. Four times the size seems a bit high, though perhaps some process needed it at some point. I wouldn't shrink it down unless you know for sure that you will never need that size again, because of the performance hit of expanding the files.

Tara Kizer |
 |
|
eyechart
Master Smack Fu Yak Hacker
3575 Posts |
Posted - 2006-08-22 : 13:21:19
|
Chances are the tlog will grow back to 8 GB in size if you shrink it. There is probably a maintenance job, a large data load, a series of missed tlog backups, or some combination of the above that has caused the tlog to grow to this size. As long as you aren't running out of space on disk, there is no reason IMHO to shrink the file.

The only time I shrink data files is when I want to copy a database to another system or if I run into a critical disk space issue.

-ec |
 |
|
elwoos
Master Smack Fu Yak Hacker
2052 Posts |
Posted - 2006-08-23 : 03:42:45
|
Thanks to all for the replies, very helpful as always.

quote: Maybe it is getting inflated by some process like reindexing?

I must admit I hadn't really thought of reindexing being a potential cause. I will have a look at that. I've looked at the size of the TLog backups for the last two days and they are mostly less than 1 MB!! Though there seem to be peaks at odd times. In particular, between 0645 and 0715, the TLog backups for the last two days were 192 MB; the ones before and after were around 350 KB!!! There are likely to be very few people, if any, using the system at that time of day. There may also be a much smaller peak around lunch time.

I think I have sussed it now. There is a DTS package which populates some tables overnight for reporting. It drops indexes before truncating the tables, then recreates the indexes when the tables have been populated. The tables aren't massive but there appear to be over 20 indexes recreated.

I think I'll try running this package just before the full daily backup but after the TLog backups (I only run them from 0715 to 1900 daily) have finished, to see if that sorts it.

Just in case anyone is wondering why I didn't know about this one: I was forced to give our "developer" sysadmin rights so that he can "do the work he needs to when I am on leave"!

Thanks again everyone

steve
-----------
Don't worry head. The computer will do all the thinking from now on. |
 |
|
Wanderer
Master Smack Fu Yak Hacker
1168 Posts |
Posted - 2006-08-23 : 05:21:22
|
quote: Originally posted by elwoos
I must admit I hadn't really thought of reindexing being a potential cause. I will have a look at that. I've looked at the size of the TLog backups for the last two days and they are mostly less than 1 MB!! Though there seem to be peaks at odd times. In particular, between 0645 and 0715, the TLog backups for the last two days were 192 MB; the ones before and after were around 350 KB!!! There are likely to be very few people, if any, using the system at that time of day. There may also be a much smaller peak around lunch time.

I think I have sussed it now. There is a DTS package which populates some tables overnight for reporting. That drops indexes before truncating the tables, then recreates the indexes when the tables have been populated. The tables aren't massive but there appear to be over 20 indexes recreated.

Are all those indexes necessary? If those are reporting tables, I guess they may be. Do your reporting tables need to be in your 'live' database?

quote: Originally posted by elwoos
I think I'll try running this package just before the full daily backup but after the TLog backups (I only run them from 0715 to 1900 daily) have finished to see if that sorts it.

Sounds like it should do.

quote: Originally posted by elwoos
Just in case anyone is wondering why I didn't know about this one. I was forced to give our "developer" sysadmin rights so that he can "do the work he needs to when I am on leave"!

Been there, done that, got the restore scripts ready...

*##* *##* *##* *##* Chaos, Disorder and Panic ... my work is done here! |
 |
|
elwoos
Master Smack Fu Yak Hacker
2052 Posts |
Posted - 2006-08-23 : 13:54:49
|
quote: Do your reporting tables need to be in your 'live' database?

That's a good question. Now I come to think of it, the answer is probably not any more. I have access to a few servers now and we could do with some consolidation.

quote: been there, done that, got the restore scripts ready

LMAO

Thanks everyone

steve
-----------
Don't worry head. The computer will do all the thinking from now on. |
 |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2006-08-23 : 18:34:55
|
quote: Maybe it is getting inflated by some process like reindexing?

I am interested: where would be the likeliest place to perform this process, from a commercial developer's standpoint? As a scheduled job, or behind the scenes in the actual application itself?

'The only stupid question is the one you don't ask' |
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
Jim77
Constraint Violating Yak Guru
440 Posts |
Posted - 2006-08-23 : 18:57:28
|
That is enlightening, thank you. Good old DBCC INDEXDEFRAG. It seems a very frustrating part of a DBA's role to keep tabs on remote updates from dodgy developers to existing applications that might affect reindexing jobs.

'' Shallow thinkers seldom make deep impressions '' not in this case. |
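For reference, a sketch of the two reindexing options mentioned in this thread (the database, table, and index ID are placeholders); INDEXDEFRAG runs online and is logged as many small transactions, so it is gentler on the transaction log than a full DBREINDEX:

```sql
-- Fully rebuilds all indexes on the table
-- (offline operation, heavily logged in one go).
DBCC DBREINDEX ('dbo.MyTable');

-- Defragments the leaf level of one index in place
-- (online, logged as many small transactions).
-- The last argument is the index ID; sysindexes lists
-- the IDs for a given table.
DBCC INDEXDEFRAG ('MyDatabase', 'dbo.MyTable', 1);
```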
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-09-01 : 18:36:02
|
"I think I have sussed it now. There is a DTS package which populates some tables overnight for reporting."

It's probably deleting all existing rows, and then inserting the data. Often 99.999% (note: 5 x 9's, expensive that!!) of the data is unchanged, but the act of deleting it all and inserting it all over again is pretty expensive.

An option would be to use DTS to insert into a temporary table in a different database using the SIMPLE recovery model, and then selectively Update / Insert / Delete to get the table in the main database in sync.

Kristen |
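Setting up such a staging database is a one-off job (a sketch; the database name is a placeholder):

```sql
-- Create a scratch database for the DTS load and put it in the
-- SIMPLE recovery model, so the bulk delete/insert work done there
-- never bloats the main database's transaction log backups.
CREATE DATABASE ReportStaging;  -- hypothetical name
ALTER DATABASE ReportStaging SET RECOVERY SIMPLE;
```

Under SIMPLE recovery the staging database's log truncates itself at each checkpoint, which is fine because the staging data can always be rebuilt from the source.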
 |
|
elwoos
Master Smack Fu Yak Hacker
2052 Posts |
Posted - 2006-09-04 : 03:09:42
|
quote: Its probably deleting all existing rows, and then inserting the data

That's exactly it.

quote: An option would be to use DTS to insert into a temporary table, in a different database using SIMPLE recovery model

Must admit it never really occurred to me to do it in quite that way. What a cracking idea, thanks Kristen

steve
-----------
Don't worry head. The computer will do all the thinking from now on. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-09-04 : 16:14:32
|
Here's a snippet of code that I use to make the "UpSert" from the Source table to the Destination table (I actually use a routine that outputs the code for all columns, rather than typing this guff manually!)

Hopefully it may give you some ideas. Note that the TEXT columns are only compared on LENGTH and the first 8,000 characters - so items of identical length that differ only beyond the first 8,000 characters would slip through the net.

DELETE D
FROM DestinationDB.dbo.[MyTable] AS D
WHERE NOT EXISTS
    ( SELECT *
      FROM SourceDB.dbo.[MyTable] AS S
      WHERE D.[MyPK] = S.[MyPK] )

UPDATE D
SET
--  [MyPK] = S.[MyPK],
    [MyStringCol1] = S.[MyStringCol1],
    [MyIntCol2] = S.[MyIntCol2],
    [MyTextCol3] = S.[MyTextCol3]
FROM DestinationDB.dbo.[MyTable] AS D
    JOIN SourceDB.dbo.[MyTable] AS S
        ON D.[MyPK] = S.[MyPK]
WHERE
    (
        (D.[MyStringCol1] COLLATE Latin1_General_BIN <> S.[MyStringCol1]
            OR (D.[MyStringCol1] IS NULL AND S.[MyStringCol1] IS NOT NULL)
            OR (D.[MyStringCol1] IS NOT NULL AND S.[MyStringCol1] IS NULL))
    OR  (D.[MyIntCol2] <> S.[MyIntCol2]
            OR (D.[MyIntCol2] IS NULL AND S.[MyIntCol2] IS NOT NULL)
            OR (D.[MyIntCol2] IS NOT NULL AND S.[MyIntCol2] IS NULL))
--  OR  (D.[MyPK] <> S.[MyPK]
--          OR (D.[MyPK] IS NULL AND S.[MyPK] IS NOT NULL)
--          OR (D.[MyPK] IS NOT NULL AND S.[MyPK] IS NULL))
    OR  (DATALENGTH(D.[MyTextCol3]) <> DATALENGTH(S.[MyTextCol3])
            OR CONVERT(varchar(8000), D.[MyTextCol3]) COLLATE Latin1_General_BIN
                <> CONVERT(varchar(8000), S.[MyTextCol3])
            OR (D.[MyTextCol3] IS NULL AND S.[MyTextCol3] IS NOT NULL)
            OR (D.[MyTextCol3] IS NOT NULL AND S.[MyTextCol3] IS NULL))
    )

INSERT INTO DestinationDB.dbo.[MyTable]
    ( [MyPK], [MyStringCol1], [MyIntCol2], [MyTextCol3] )
SELECT S.*
FROM SourceDB.dbo.[MyTable] AS S
WHERE NOT EXISTS
    ( SELECT *
      FROM DestinationDB.dbo.[MyTable] AS D
      WHERE D.[MyPK] = S.[MyPK] )

Kristen |
 |
|
rockmoose
SQL Natt Alfen
3279 Posts |
Posted - 2006-09-04 : 16:52:33
|
Standard procedure:
Delete where not exists
Update inner join
Insert where not exists

Aye aye captain, grok that.

dts is the mother of evil wizards |
 |
|
elwoos
Master Smack Fu Yak Hacker
2052 Posts |
Posted - 2006-09-05 : 03:00:09
|
Many thanks Kristen

steve
-----------
Don't worry head. The computer will do all the thinking from now on. |
 |
|
Wanderer
Master Smack Fu Yak Hacker
1168 Posts |
Posted - 2006-09-05 : 04:41:12
|
quote: Originally posted by Kristen
Here's a snippet of code that I use to make the "UpSert" from the Source table to the Destination table (I actually use a routine that outputs the code for all columns, rather than typing this guff manually!)

Hopefully it may give you some ideas. Note that the TEXT columns are only compared on LENGTH and the first 8,000 characters - so items of identical length that differ only beyond the first 8,000 characters would slip through the net.

...

INSERT INTO DestinationDB.dbo.[MyTable]
    ( [MyPK], [MyStringCol1], [MyIntCol2], [MyTextCol3] )
SELECT S.*
FROM SourceDB.dbo.[MyTable] AS S
WHERE NOT EXISTS
    ( SELECT *
      FROM DestinationDB.dbo.[MyTable] AS D
      WHERE D.[MyPK] = S.[MyPK] )

One assumes the live version doesn't have:

SELECT S.*
FROM SourceDB.dbo.[MyTable] AS S

but rather:

SELECT S.MyPK, S.MyStringCol1, S.MyIntCol2, S.MyTextCol3
FROM SourceDB.dbo.[MyTable] AS S
...

On another note, in SQL 2005 you might be able to cast the TEXT to a VARCHAR(MAX), which might allow full comparisons. But I think the performance may be painful...

*##* *##* *##* *##* Chaos, Disorder and Panic ... my work is done here! |
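A sketch of that SQL 2005 variation, casting the TEXT columns to VARCHAR(MAX) so the whole value is compared rather than just the first 8,000 characters (table and column names follow Kristen's example and are placeholders):

```sql
-- Find rows whose full TEXT contents differ, not just the
-- first 8,000 characters. On SQL 2005 a varchar(max) comparison
-- covers the entire value, at the cost of reading every byte
-- of both columns for each candidate row.
SELECT D.[MyPK]
FROM DestinationDB.dbo.[MyTable] AS D
    JOIN SourceDB.dbo.[MyTable] AS S
        ON D.[MyPK] = S.[MyPK]
WHERE CAST(D.[MyTextCol3] AS varchar(max)) COLLATE Latin1_General_BIN
    <> CAST(S.[MyTextCol3] AS varchar(max));
```

Keeping the cheap DATALENGTH check in front of the cast would let most identical rows short-circuit before the expensive full comparison runs.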
 |
|
Kristen
Test
22859 Posts |
Posted - 2006-09-05 : 06:42:52
|
"One assumes the live version doesn't have ..."

Actually it does, but it's a good question and well spotted!

I very deliberately use SELECT S.* so that if a new column is added to the Source table there will be a different number of columns from the Target database and the routine will blow up.

Of course, if someone just swaps over the ordinal position of two columns in the Source table I'll be in Big Trouble! ... but because in my circumstances I'm in control of the tables at both ends, this safety-catch works OK for me.

But in other instances, particularly where SourceTable and TargetTable may deliberately have different numbers of columns, e.g. where not all columns are required, a column list would be appropriate; and either way a column list will be better for efficiency and for future-proofing against issues other than the Source table gaining new columns.

Kristen |
 |
|