
 Log Shipping


loknathmahato
Starting Member

27 Posts

Posted - 2011-10-28 : 18:14:07
Hi,

I want to implement log shipping on my MS SQL Server 2008 database as a live backup. However, I do maintenance on a weekly basis. In my maintenance task I first detach the database and take a backup; after re-attaching the database I perform some data deletion, which grows my log file to 15-20 GB. To reduce the log file size I shrink the database, and then I re-index.

Can somebody advise me whether this maintenance process is OK for log shipping?

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-10-28 : 19:40:13
Your plan is NOT okay. Why do you need to detach the database AT ALL?

Do NOT shrink the database! Why don't you instead just back up the tlog more frequently?
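
For illustration, a more frequent tlog backup could be scheduled as a SQL Agent job running something like this (the database name and backup path here are just placeholders):

```sql
-- Back up the transaction log to a uniquely named file
-- (each run of the job should write a new file name)
BACKUP LOG MyLiveDB
TO DISK = N'D:\Backups\MyLiveDB_20111029_1915.trn'
WITH CHECKSUM;
```

In the FULL recovery model, each log backup lets SQL Server truncate the inactive portion of the log, which keeps the log file from growing in the first place, so there is nothing to shrink.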

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/


Sachin.Nand

2937 Posts

Posted - 2011-10-29 : 02:13:55
This does not make sense.

You first shrink the log file to reduce the log size, and then you re-index, which should grow the log right back.

PBUH


Vinnie881
Master Smack Fu Yak Hacker

1231 Posts

Posted - 2011-10-29 : 12:23:38
Are you detaching the database so you can back it up without using Microsoft SQL Server, or is there another reason you are detaching?

Also, are you sure log shipping is what you are looking for?

This article may give you a little more insight into defining the best approach for a "live backup":
http://nirajrules.wordpress.com/2008/12/08/snapshot-vs-logshipping-vs-mirroring-vs-replication/


Success is 10% Intelligence, 70% Determination, and 22% Stupidity.
\_/ _/ _/\_/ _/\_/ _/ _/- 881

loknathmahato
Starting Member

27 Posts

Posted - 2011-10-29 : 14:14:32
Hi Friends,

My database size is more than 15 GB and it takes a long time to back up. OK, from now on I will take backups of my database. But can you tell me, if I implement log shipping on my database, how will I do my DB maintenance? I have tested that during log shipping, when I run re-indexing, it takes a long time to execute and transaction logs of more than 4 to 5 GB are created in a 15-minute interval, so my disk space fills up frequently. Normally the transaction log file created in a 15-minute interval is 1 to 2 GB, which is also not affordable. So should I use log shipping for my database or choose a different way? More than 200-300 queries per second are fired at my database. Please suggest a proper solution.

Vinnie881
Master Smack Fu Yak Hacker

1231 Posts

Posted - 2011-10-29 : 15:33:10
Back up the database and your log first using MSSQL, just so we have a clean slate to check. I am not aware of any reason it would affect the log size, but I have never managed backups by copying a detached file, so I want to rule it out as a reason the logs are growing so quickly. The sizes of your logs sound very high, but regardless, there is really no need to re-index every time unless the tables see significant churn or there is some other reason you would want to re-index.

Also, is your log shipping currently functioning, so that if your main server went down the other server would be ready to go, or are you starting from scratch?
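
As a sketch of one way to keep re-indexing from rewriting (and logging) everything, you could rebuild only the indexes that are actually fragmented, using sys.dm_db_index_physical_stats; the 30% threshold here is the commonly cited rule of thumb, not a hard rule:

```sql
-- List indexes in the current database above ~30% fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;

-- Then rebuild only those:  ALTER INDEX [index_name] ON [table_name] REBUILD;
-- For 5-30% fragmentation, REORGANIZE instead, which generates far less log.
```

Rebuilding only what needs it can dramatically cut the log volume your maintenance window generates, which matters when every log record gets shipped to the secondary.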


Success is 10% Intelligence, 70% Determination, and 22% Stupidity.
\_/ _/ _/\_/ _/\_/ _/ _/- 881

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2011-10-29 : 16:39:07
4-5 GB tlog files sound fine for a 15 GB database. The size of a tlog backup file is dependent upon the largest transaction, which in the case of a reindex is the size of your largest index.
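
To see which index that would be, you can check index sizes from sys.dm_db_partition_stats (run in the user database; nothing here is specific to your schema):

```sql
-- Approximate on-disk size of each index, largest first
SELECT TOP (10)
    OBJECT_NAME(ps.object_id)      AS table_name,
    i.name                         AS index_name,
    ps.used_page_count * 8 / 1024  AS used_mb   -- 8 KB pages -> MB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
ORDER BY ps.used_page_count DESC;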

If backup time is too slow, you need to look into buying better disks.

By the way, 15GB is a tiny database these days. If you've got slow disks and free disk space concerns, well you've got some serious problems.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/


Vinnie881
Master Smack Fu Yak Hacker

1231 Posts

Posted - 2011-10-29 : 23:20:54
He stated that in general the logs reach 2 GB in 15 minutes without the reindex; don't you feel that seems odd? One of our databases that is roughly 15 GB only does a transaction log backup once per day (it's just a reference database for the most part), and those logs are not much larger and far less frequent (I'm not in the office to verify, but will double-check Monday). 2 GB in 15 minutes with no maintenance seems odd to me, but then again I have no comparison besides that reference DB.


Success is 10% Intelligence, 70% Determination, and 22% Stupidity.
\_/ _/ _/\_/ _/\_/ _/ _/- 881

Sachin.Nand

2937 Posts

Posted - 2011-10-30 : 03:53:00
quote:
Originally posted by loknathmahato

.... Normally the transaction log file created in a 15-minute interval is 1 to 2 GB, which is also not affordable. So should I use log shipping for my database or choose a different way? More than 200-300 queries per second are fired at my database. Please suggest a proper solution.



Do you understand that log shipping is NOT used for making log files smaller, but rather as a DR solution?

You seriously have some very badly written queries if they are generating 1 to 2 GB of log for a 15 GB database every 15 minutes.

PBUH


Kristen
Test

22859 Posts

Posted - 2011-10-30 : 06:08:38
"You seriously have some very badly written queries if they are generating 1 to 2 GB of log for a 15 GB database every 15 minutes"

Please tell me how a poorly written query can be improved to reduce log file usage.

The only situation I can imagine is where a permanent table is being used for temporary data, and thus has high INSERT/DELETE volume.

Sachin.Nand

2937 Posts

Posted - 2011-10-30 : 12:41:43
quote:
Originally posted by Kristen

"You seriously have some very badly written queries if they are generating 1 to 2 GB of log for a 15 GB database every 15 minutes"

Please tell me how a poorly written query can be improved to reduce log file usage.

The only situation I can imagine is where a permanent table is being used for temporary data, and thus has high INSERT/DELETE volume.



What I mean by "poorly written queries" is NOT SELECT queries, but rather queries which do large data modifications in a transaction and then roll back.

I have seen a database at one of my clients where a query did a set of data modifications across about 5 tables for a fairly large batch of rows in a transaction, and then, when it was about to modify the final table, it ran a validation check for the existence of some value; if the check failed, the whole transaction was rolled back.

Another place where a log file bloats is when there are constant data modifications on clustered indexes.
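
A sketch of that anti-pattern and its fix: run the validation BEFORE the big modification, so a failed check costs nothing instead of logging the whole transaction and then logging its rollback. Table and column names here are purely hypothetical:

```sql
-- Hypothetical: check the precondition up front instead of after
-- modifying several tables and rolling everything back
IF EXISTS (SELECT 1 FROM Config
           WHERE SettingName = 'AllowPurge' AND SettingValue = '1')
BEGIN
    BEGIN TRANSACTION;
        DELETE FROM StagingOrders
        WHERE CreatedDate < DATEADD(DAY, -30, GETDATE());
        -- ... modifications to the other tables ...
    COMMIT TRANSACTION;
END
```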

PBUH


Kristen
Test

22859 Posts

Posted - 2011-10-30 : 12:42:59
quote:
Originally posted by Sachin.Nand

I have seen a database at one of my clients where a query did a set of data modifications across about 5 tables for a fairly large batch of rows in a transaction, and then, when it was about to modify the final table, it ran a validation check for the existence of some value; if the check failed, the whole transaction was rolled back.

Another place where a log file bloats is when there are constant data modifications on clustered indexes.


Interesting, thanks.

Kristen
Test

22859 Posts

Posted - 2011-10-30 : 12:45:43
I suppose:

UPDATE MyTable
SET MyColumn1 = 'FOO'
WHERE MyColumn2 = 'BAR'

would be a candidate, and it is something I see quite often: many rows with MyColumn2 = 'BAR' already have MyColumn1 = 'FOO', and adding

AND (MyColumn1 <> 'FOO' OR MyColumn1 IS NULL)

to the WHERE clause can reduce the number of rows processed (and the log space used, as well as improve performance; and if there is a trigger on the table the savings can be significant, of course ...)
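
Putting the two fragments together, the filtered form would read:

```sql
UPDATE MyTable
SET MyColumn1 = 'FOO'
WHERE MyColumn2 = 'BAR'
  AND (MyColumn1 <> 'FOO' OR MyColumn1 IS NULL);
-- Rows already at the target value are skipped entirely, so less log
-- is written and any triggers fire for fewer rows.
```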