Replication with a 200GB database


edwardsbill95
Starting Member

1 Post

Posted - 2008-06-05 : 14:43:18
I’ve recently taken on a SQL Server-related assignment with a client running a very large database in a very poor environment.

We have a 200GB database with virtually no failover support. I’m interested in transactional replication, hardware redundancy, and failover support, but before moving forward on that we probably need to seriously re-evaluate our whole logical and physical design. My experience is more on the T-SQL side than in the full DBA areas of hardware, optimization, and tuning. Any suggestions for quickly ramping up my skills on the DBA side (books, courses, etc.), and any recommendations for short-term support resources, would be appreciated. I want to make sure I cover all my bases with regard to future scalability, disaster recovery, high availability, etc.



tosscrosby
Aged Yak Warrior

676 Posts

Posted - 2008-06-05 : 15:14:39
Consider third-party backup tools such as LiteSpeed (Quest) or SQL Backup (Red Gate), or any other vendor, for DR, given that you're around 200GB. They will greatly reduce backup/restore times and backup file sizes. Future scalability is going to depend on the growth of the data: you need to get a handle on how often (and how much) the data is growing and create a projection for disk space needs (I have some scripts I use if you need/want them). Is archiving data an option? Log shipping would be an option for high availability; another could be clustering. As far as research goes, this site has topics posted on these subjects that make for good reading, as does http://www.sqlservercentral.com/ and any number of others. A wealth of information, free of charge. This is only a brief answer to some of your questions, but it should get you started. Good luck.
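Terry mentions growth-projection scripts; as a rough sketch of the same idea (this is not his script, and the database name 'YourDatabase' is a placeholder), you can estimate growth from the full-backup history SQL Server keeps in msdb, since full-backup size tracks database size over time:

```sql
-- Sketch: estimate database growth from full-backup history in msdb.
-- Assumes regular full backups are taken; 'YourDatabase' is hypothetical.
SELECT database_name,
       CONVERT(varchar(10), backup_finish_date, 120) AS backup_date,
       CAST(backup_size / 1048576.0 AS decimal(12, 2)) AS size_mb
FROM msdb.dbo.backupset
WHERE type = 'D'                          -- 'D' = full database backup
  AND database_name = 'YourDatabase'
ORDER BY backup_finish_date;
```

Charting size_mb against backup_date gives a simple growth trend you can extrapolate for disk-space planning.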

Terry

sodeep
Master Smack Fu Yak Hacker

7174 Posts

Posted - 2008-06-05 : 15:31:28
quote:
Originally posted by tosscrosby

Consider third-party backup tools such as LiteSpeed (Quest) or SQL Backup (Red Gate), or any other vendor, for DR, given that you're around 200GB. They will greatly reduce backup/restore times and backup file sizes. Future scalability is going to depend on the growth of the data: you need to get a handle on how often (and how much) the data is growing and create a projection for disk space needs (I have some scripts I use if you need/want them). Is archiving data an option? Log shipping would be an option for high availability; another could be clustering. As far as research goes, this site has topics posted on these subjects that make for good reading, as does http://www.sqlservercentral.com/ and any number of others. A wealth of information, free of charge. This is only a brief answer to some of your questions, but it should get you started. Good luck.

Terry



Log shipping is not a high-availability option; it is a warm-standby option. The real questions are: how high is your transaction rate, and how much downtime can you tolerate? If you are on SQL 2005, you can use database mirroring, but based on my experience with it, I wouldn't suggest it for a large, high-volume OLTP environment. Clustering would be the best solution for this.
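For reference, a minimal sketch of what SQL 2005 database mirroring setup looks like (server names and the port number are hypothetical; this assumes the database uses the FULL recovery model and a copy has been restored WITH NORECOVERY on the mirror):

```sql
-- Sketch of SQL 2005 database mirroring setup. All names are placeholders.

-- On each instance, create a mirroring endpoint (port 5022 is an example):
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror server first, point it at the principal:
ALTER DATABASE YourDatabase
    SET PARTNER = 'TCP://principal.example.com:5022';

-- Then on the principal, point it at the mirror:
ALTER DATABASE YourDatabase
    SET PARTNER = 'TCP://mirror.example.com:5022';
```

Without a witness server this gives high-safety mode with manual failover; clustering or log shipping may still be the better fit at this size, as discussed above.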

tosscrosby
Aged Yak Warrior

676 Posts

Posted - 2008-06-05 : 16:31:53
Thanks sodeep. I stand corrected. What counts as high availability depends on the business needs and expectations. Log shipping qualifies where I'm at, as we can live with a downtime of two hours (not immediate failover).

Terry
   
