Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
| Author |
Topic |
|
djorre
Yak Posting Veteran
94 Posts |
Posted - 2009-04-21 : 11:32:13
|
| Hi, I'm having a problem with triggers. I have one primary, business-critical SQL database. A new server needs to be installed with some software and database modifications. In the end I want to run the applications on this new server, but in case of problems the old server must be able to take over again. So I created triggers on all tables of the new server that replay every action back to the exact same tables on the old server. Those triggers need to be tested first (we cannot afford any data loss), and there are a lot of them. My plan for testing was to put the triggers on the first database and check whether the new database receives the same values (reports). Then I would put the verified triggers on the second database, firing back to the first. That would be 'safe' against data loss if a trigger fails. But it isn't, because if a trigger fails the whole transaction fails, and crucial process information in the primary database would be lost! How can I create a trigger that does not abort the transaction when it fails? Or are there other, easier ways to do this? It's all new to me. Thanks... greetings, Joris |
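A trigger that swallows its own errors can be sketched with TRY...CATCH (available from SQL Server 2005 onward; the thread does not say which version is in use). The table, column, and linked-server names below (dbo.Orders, OLDSRV, dbo.ForwardingErrors) are made up for illustration, and note the caveat in the comments: some severe errors doom the transaction regardless.

```sql
-- Hypothetical sketch: forward inserted rows to a linked server named OLDSRV,
-- but catch any error so the local transaction is not rolled back.
-- All object names here are assumptions, not from the thread.
CREATE TRIGGER trg_Orders_Forward ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO OLDSRV.MyDb.dbo.Orders (OrderID, CustomerID, Amount)
        SELECT OrderID, CustomerID, Amount FROM inserted;
    END TRY
    BEGIN CATCH
        -- Log the failure and let the local transaction continue.
        -- Caveat: if XACT_STATE() = -1 the transaction is doomed and
        -- will roll back anyway, so this is not a complete safety net.
        INSERT INTO dbo.ForwardingErrors (ErrMsg, ErrDate)
        VALUES (ERROR_MESSAGE(), GETDATE());
    END CATCH
END
```

This is one reason the staging-table approach below is usually safer: the trigger only writes locally, so a remote-server outage cannot make it fail.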
|
|
nr
SQLTeam MVY
12543 Posts |
Posted - 2009-04-21 : 12:14:58
|
| Replication might be a way to do it. If you want to use triggers:

Create a staging table on the new server for each table that needs to be replicated - name it <table_name>_src_stg, same structure as the table but with an identity z_id, a datetime z_dte, and an action code z_action.

Now put a trigger on the table to populate this staging table with the inserted and deleted values (you can record an update as-is, or as a delete plus an insert), with z_dte = getdate() and z_action = I, U or D.

Now create identical tables on the old server, but with z_id not an identity.

Now create a job on the new server to transfer the data using z_id - just get the next 100 rows and copy them across. Keep a record of the last z_id transferred for the next run - you can run this every minute if you wish; it shouldn't impact anything much.

Now create a job on the old server to process each row in these staging tables in turn and apply the operation to the real tables. Again, keep a record of the last id processed.

This will keep the servers in step. There is a problem if you need to keep referential integrity between tables, as this doesn't order the updates between tables. If you want to do that, you will need an extra table holding the table name, the id of the staging-table entry, and an identity, and use this to order the updates on the remote server.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy. |
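The first two steps above (staging table plus populating trigger) might look like this for a single table. This is a sketch under assumptions: the table dbo.Orders and its columns are invented for illustration, and the update is recorded as a delete of the old row plus an insert of the new one, which is one of the two options nr mentions.

```sql
-- Staging table on the new server: same columns as dbo.Orders (assumed),
-- plus the z_id / z_dte / z_action bookkeeping columns from the post.
CREATE TABLE dbo.Orders_src_stg (
    z_id     int IDENTITY(1,1) PRIMARY KEY,     -- ordering key for the transfer job
    z_dte    datetime NOT NULL,
    z_action char(1)  NOT NULL,                 -- 'I', 'U' or 'D'
    OrderID  int, CustomerID int, Amount money  -- mirror of the base table (assumed)
);
GO
CREATE TRIGGER trg_Orders_stage ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- An UPDATE fires with rows in both 'deleted' and 'inserted',
    -- so it is captured here as a D row followed by an I row.
    INSERT INTO dbo.Orders_src_stg (z_dte, z_action, OrderID, CustomerID, Amount)
    SELECT GETDATE(), 'D', OrderID, CustomerID, Amount FROM deleted;

    INSERT INTO dbo.Orders_src_stg (z_dte, z_action, OrderID, CustomerID, Amount)
    SELECT GETDATE(), 'I', OrderID, CustomerID, Amount FROM inserted;
END
-- The transfer job on the new server then reads TOP (100) rows
-- WHERE z_id > @last_transferred ORDER BY z_id, copies them to the
-- matching table on the old server, and records the last z_id sent.
```

Because the trigger only writes to a local table, a failure on the old server cannot roll back transactions on the new one, which addresses the original poster's data-loss concern.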
 |
|
|
TG
Master Smack Fu Yak Hacker
6065 Posts |
Posted - 2009-04-21 : 12:36:51
|
| Definitely not my area of expertise, but I would think either replication (as nr suggests) or perhaps log shipping are more typical ways of keeping a backup server in sync and ready to "flip".

Be One with the Optimizer
TG |
 |
|
|