
 Shippin Those Logs Umph!


SamC
White Water Yakist

3467 Posts

Posted - 2006-07-26 : 08:07:27
We don't have the "Enterprise" version of SQL server which I hear has an integrated Log shipping solution. Is that right? FTP and everything?

Our backup device is in another location. We're not going to map drives or anything like that to solve the shipping problem. It's FTP, SFTP actually.

I have a scheduled job set up using the IpSwitch WS_FTP scheduler. While it works, it really isn't doing a very good job. Some of the problems include:
  • It occasionally "pauses" a scheduled move. Don't know why. It leaves a "partial" log on the remote location.

  • Subsequent runs of the job do not examine remote file size. A "partial" log is considered "full" so no retry is performed on the move. It takes manual intervention to detect and correct this problem.

  • Oops! Don't schedule the jobs too close together. If the first shipment isn't finished, the second scheduled job will start shipping files that the first job is about to do. Now you have two jobs shipping the same files. This is messy and somehow this problem repeats through subsequent jobs. It takes operator intervention to solve this problem... by the way, I hope you don't pay for bandwidth used if this problem slips by you for a few days.

  • IpSwitch doesn't "detect" the remote date / time correctly using SFTP. Not a big deal with me as my log files have the date / time in the filename.


Can anyone recommend a better OTS solution for log file shipping?

Sam

Kristen
Test

22859 Posts

Posted - 2006-07-26 : 08:54:43
I thought (but I haven't used it) that all the Enterprise Version added was a Wizard which made setting everything up easier, but that Log Shipping was just as doable with the lesser-versions.

I've always taken rather a sledge-hammer approach to these sorts of file-synchronisation issues to try to reduce manual-intervention - "spend-some to make-some".

For example:

Create one-time-folder

MOVE files from BACKUP folder to one-time-folder.

Rationale: The MOVE will fail on backup files which are in the process of being created, which prevents copying a partial file to the remote location.

One-time-folder allows concurrent tasks, the MOVE provides an Atomic outcome!
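The MOVE-into-a-one-time-folder step can be sketched in Python (an illustrative translation of the DOS batch approach, with a hypothetical `stage_finished_backups` helper; on Windows, moving a file another process still holds open raises an error, giving the same "skip files still being written" behaviour as the DOS MOVE):

```python
import os
import shutil
import tempfile

def stage_finished_backups(backup_dir):
    """Move completed backup files into a fresh one-time folder.

    Files still being written are held open by the backup process, so
    (on Windows) the move fails for them and they are simply left
    behind for the next scheduled run to pick up.
    """
    # Create the one-time folder; a unique name lets concurrent runs coexist.
    staging = tempfile.mkdtemp(prefix="ship_", dir=backup_dir)
    staged = []
    for name in os.listdir(backup_dir):
        src = os.path.join(backup_dir, name)
        if not os.path.isfile(src):
            continue  # skip the staging folder itself and any subfolders
        try:
            shutil.move(src, os.path.join(staging, name))
            staged.append(name)
        except OSError:
            pass  # file is open / in use -- leave it for the next run
    return staging, staged
```

Because each run claims files by moving them into its own folder, two overlapping runs can never ship the same file twice, which addresses the double-shipping problem described above.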

Now ZIP all the files in the one-time-folder, and delete the original files (actually for SQL Backup files I then MOVE them to a "done" folder as they need to be kept for a while)

CRC test the ZIP file contents - bail out if that fails (Don't want to be the victim of an obscure ZIP bug or memory/disk fault)

PGP the ZIP file

Copy the PGP file to the remote server (by FTP or FileShare etc.) and place it in an "Inbound" folder on the remote machine.

Delete the local PGP and ZIP files, remove the one-time-folder

(See stuff about Logging the outcome, below)
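The ZIP-and-verify part of those steps could look like this in Python (illustrative only, with a hypothetical `zip_and_verify` helper; `zipfile.ZipFile.testzip()` re-reads every archive member and checks its CRC, and the PGP step would call out to an external tool such as GnuPG):

```python
import os
import zipfile

def zip_and_verify(staging_dir, zip_path):
    """Zip everything in the staging folder, then CRC-test the archive.

    Returns True only if every member passes its CRC check; bail out
    (and keep the originals) otherwise.
    """
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in os.listdir(staging_dir):
            path = os.path.join(staging_dir, name)
            if os.path.isfile(path):
                zf.write(path, arcname=name)

    with zipfile.ZipFile(zip_path) as zf:
        bad = zf.testzip()   # returns the first corrupt member, or None
    if bad is not None:
        return False         # corrupt member -- do not ship this archive
    return True
```

The point of the verify pass, as the post says, is to catch an obscure ZIP bug or a memory/disk fault before the archive leaves the building.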


Remote machine scheduled task then MOVES file(s) from Inbound folder to WORKING folder - again, files in-transit [open] will fail the MOVE task.

Create one-time-temp folder
Move PGP file to one-time-temp folder
Decrypt PGP file
CRC Test the ZIP file - bail out if error

Unzip the ZIP file

Process/Restore/Whatever the files extracted from the ZIP file

Delete the PGP, ZIP and Zip content files

Remove the one-time-temp folder
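On the receiving side the same ideas repeat; here is a sketch (hypothetical folder and function names, PGP decryption omitted) of the claim-then-verify-then-extract sequence:

```python
import os
import shutil
import tempfile
import zipfile

def receive_archive(inbound_dir, name):
    """Claim one archive from the Inbound folder, verify it, extract it.

    On Windows the initial move fails while the upload is still in
    progress, so a half-transferred file is never processed.
    """
    work = tempfile.mkdtemp(prefix="restore_")           # one-time temp folder
    archive = os.path.join(work, name)
    shutil.move(os.path.join(inbound_dir, name), archive)

    with zipfile.ZipFile(archive) as zf:
        if zf.testzip() is not None:                     # CRC test -- bail out
            raise RuntimeError("corrupt archive: " + name)
        zf.extractall(work)

    # ...process/restore the extracted files here, then clean up:
    os.remove(archive)
    return work
```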


As a further precaution I output a "Log" of the steps within a job - e.g. to a file called MyTask.LOG

When the job completes, successfully, I append MyTask.LOG to AllTasks.LOG and delete MyTask.LOG

Consequently anything which fails half-way-through leaves a MyTask.LOG file behind. So the VERY FIRST THING that MyTask does is to look for a pre-existing MyTask.LOG file, and if it finds one it Emails it and aborts - thus a process that terminates unexpectedly WILL require manual intervention before the process kicks in again.

I also have a Flag File, the presence of which prevents the process starting. I use this during routine maintenance (like a scheduled server reboot) to stop any such tasks from starting at an inconvenient time. A quick DIR of the LOGs folder shows me if there are any current (or aborted!) MyTask's running.
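The crash-sentinel and flag-file guards described above can be sketched like this (file names and the `guarded_run` helper are just examples, not the poster's actual scripts):

```python
import os

TASK_LOG = "MyTask.LOG"    # per-run step log, doubles as a crash sentinel
ALL_LOG = "AllTasks.LOG"   # cumulative log of successful runs
FLAG_FILE = "MAINT.FLG"    # presence blocks new runs during maintenance

def guarded_run(log_dir, task):
    """Run `task`, using a leftover per-task log as a crash sentinel.

    - A pre-existing MyTask.LOG means an earlier run died half-way
      through: abort (and email the log) until someone intervenes.
    - A flag file blocks new runs during maintenance windows.
    - On success the per-task log is appended to the cumulative log
      and deleted, so no sentinel is left behind.
    """
    task_log = os.path.join(log_dir, TASK_LOG)
    if os.path.exists(os.path.join(log_dir, FLAG_FILE)):
        return "blocked"             # maintenance in progress
    if os.path.exists(task_log):
        return "needs-intervention"  # previous run failed mid-way
    with open(task_log, "w") as f:
        task(f)                      # the task writes its step log here
    with open(os.path.join(log_dir, ALL_LOG), "a") as all_log, \
         open(task_log) as f:
        all_log.write(f.read())
    os.remove(task_log)
    return "ok"
```

If `task` raises, the `with` block still leaves MyTask.LOG on disk, which is exactly what makes the leftover file a reliable sentinel for the next run.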

VERY long-winded, TEDIOUS and TIME CONSUMING to set up, but in practice I've found this to be a pretty robust approach.

Summary: Not for the fainthearted!

Kristen

SamC
White Water Yakist

3467 Posts

Posted - 2006-07-26 : 09:07:58
What are you using to MOVE the files to the remote? FTP? Are you handling all these steps using a DOS batch file or what?

Sam

Kristen
Test

22859 Posts

Posted - 2006-07-26 : 10:28:53
Yup, a DOS batch file. The DOS MOVE command moves them locally to a one-time-folder; MOVE will NOT move a file which is open, so this method prevents half-created files being inadvertently "processed".

Then copy them by any suitable means (built in FTP will do) to the remote.

ROBOCOPY is well recommended too; I don't know if that can "reach" your remote site, but it's got lots of useful options.

Part of my rationale for ZIPping is that a ZIP file can be tested for CRC integrity, which tells you whether you have a proper file (or only half of it!) before you start trying to use it and potentially only get half-way through a restore.

Kristen