aiken
Aged Yak Warrior
525 Posts
Posted - 2002-01-30 : 15:18:20
Ok, still working on stuff here (can you tell?)

My situation is that I am replicating between two production environments. Downtime has to be minimized for both.

What I'm finding is that replication makes it a real pain to edit table schemas, stored procedures, functions, you name it; the most minor changes result in a complete rebuild of the snapshot (with lots of nasty table locking), an hour or two of de-synched time, and lots of aggravation.

How do people manage replication in a production environment? I'm thinking it may be best to split my database into lots of publications... probably not one publication per object, but I'm even considering that (especially for really big tables). It certainly seems like it makes sense to publish the SPs separately from the tables, so changes to small SPs don't force any bulk copies. Is that a normal approach? Am I missing something?

My environment is:
- Merge replication
- 2 SQL2K boxes, cross-country from each other, connected @ 10 Mbps
- About 100 tables, 8 GB of data
- About 250 SPs
- About 100 UDFs

Thanks
-b
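[Editor's note: one workaround worth checking before restructuring publications. SQL Server 2000 ships system procedures for adding and dropping columns on a published table without regenerating the whole snapshot. This is a hedged sketch only; the table name, column name, and type below are made up for illustration, so check Books Online for the exact parameter list on your build:

```sql
-- Add a column to a published article without a full snapshot rebuild.
-- 'UserPrefs' and 'TimeZone' are hypothetical names.
EXEC sp_repladdcolumn
    @source_object      = 'UserPrefs',
    @column             = 'TimeZone',
    @typetext           = 'varchar(50) NULL',
    @publication_to_add = 'all'   -- propagate to all publications of this table

-- The companion procedure removes a column the same way:
-- EXEC sp_repldropcolumn @source_object = 'UserPrefs', @column = 'TimeZone'
```

This only covers column-level changes; altering SPs and UDFs still behaves as described above.]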
DavidD
Yak Posting Veteran
73 Posts
Posted - 2002-01-30 : 21:16:37
Hi Aiken,

Microsoft's 'out of the box' replication has plenty of limitations: not only can you not alter your published tables, but just try using it with multiple subscribers over slow comm lines, or see what happens when your distributor goes bung. Our experience with it turned into a disaster.

How is your T-SQL? It is not too hard to write your own replication setup using stored procs, triggers, and distributed transactions, which is essentially what Microsoft's replication involves, but with far more overhead. Basically, create triggers that write inserts/edits/deletes to a History database; then a stored proc can copy them over to your other db using distributed transactions. Use identity columns and marker fields to keep track of what has been sent and what still needs to go. I have been down this path, so I can help you out if you decide this is the way to go.

DD
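[Editor's note: a minimal sketch of the capture half of DavidD's suggestion, assuming a hypothetical `Orders` table and an `OrdersHistory` table with a `SentFlag` marker column. This is an illustration of the pattern, not a tested implementation:

```sql
-- Capture every change to Orders into a history table. A separate
-- stored proc would later ship rows WHERE SentFlag = 0 to the remote
-- server inside a distributed transaction, then set SentFlag = 1.
CREATE TRIGGER trg_Orders_Capture
ON Orders
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON

    -- Rows in "inserted" cover INSERTs and the new image of UPDATEs
    INSERT INTO OrdersHistory (OrderID, CustomerID, Amount, ChangeType, SentFlag)
    SELECT i.OrderID, i.CustomerID, i.Amount,
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END,
           0          -- marker field: not yet sent
    FROM inserted i

    -- Rows only in "deleted" are true DELETEs
    INSERT INTO OrdersHistory (OrderID, CustomerID, Amount, ChangeType, SentFlag)
    SELECT d.OrderID, d.CustomerID, d.Amount, 'D', 0
    FROM deleted d
    WHERE NOT EXISTS (SELECT * FROM inserted i WHERE i.OrderID = d.OrderID)
END
```

The hard parts DavidD glosses over are the ones aiken raises below: queuing, retries when the link drops, and conflict resolution when both sides change the same row.]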
aiken
Aged Yak Warrior
525 Posts
Posted - 2002-01-30 : 22:53:59
It's tempting... but I just don't have the time to develop something like that, plus all of the queuing, conflict resolution, and so on. My application will see hundreds of inserts a minute at both locations, and I'm counting on merge replication to handle that for me (with some custom resolvers I'll have to write).

Thanks... I'm definitely finding the limitations here, but I'm afraid I'm stuck with it.

Cheers
-b
aiken
Aged Yak Warrior
525 Posts
Posted - 2002-02-02 : 15:55:13
Anyone else want to bite? I really need to get this going.

For the time being, I've decided to create several publications for my single database:
- One publication each for the two biggest tables (4 GB and 1 GB)
- One publication for each "section" of the database tables (logging, forums, site metadata, user preferences, etc.) so I can work on sections without forcing a whole new snapshot
- One publication for all of the UDFs
- One publication for all of the SPs

Does that sound reasonable? I'd rather have the elegance (and lower overhead) of a single publication, but it just doesn't seem like that's a viable solution, as so many changes would force a new snapshot of the entire database.

Thanks
-b
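[Editor's note: the per-section split above would be scripted with the standard merge publication procedures. A hedged sketch, with the publication and table names invented for illustration; parameter defaults vary between SQL 2000 service packs, so verify against Books Online:

```sql
-- One merge publication per "section", so a schema change to a forum
-- table only forces a new snapshot of the forum publication.
EXEC sp_addmergepublication
    @publication = 'Forums_Pub',
    @description = 'Forum tables only',
    @retention   = 14            -- days to keep merge metadata

-- Add each table in the section as an article of that publication.
EXEC sp_addmergearticle
    @publication   = 'Forums_Pub',
    @article       = 'ForumPosts',
    @source_owner  = 'dbo',
    @source_object = 'ForumPosts'
```

One trade-off to note: each publication gets its own snapshot and merge agents, so many small publications mean more agent jobs and more metadata overhead than a single publication.]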