
 SQL Server architecture HELP


igorsl
Starting Member

1 Post

Posted - 2007-06-14 : 10:48:45
We are designing a SQL Server architecture that will need to handle a lot of inserts and many users.

We are not sure which approach to take: should we build a distributed architecture with a single master SQL Server and many slave SQL Servers, or should we get one really powerful machine with many processors to handle all the connections?

We project about 5,000 inserts per second coming from 500,000 users.

What do you think, and what has your experience been with handling many inserts and few reads from many users?

Thank you

rmiao
Master Smack Fu Yak Hacker

7266 Posts

Posted - 2007-06-14 : 13:59:43
You have to watch for blocking issues. Is it possible to use multiple tables to reduce contention?
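
Something like this rough sketch, just to show the idea (Orders_0 through Orders_3 and InsertOrder are made-up names, and splitting four ways is arbitrary):

-- Spread inserts across several identical tables keyed by a hash of the
-- user id, so concurrent writers contend on different objects.
CREATE TABLE dbo.Orders_0 (
    OrderId   INT IDENTITY PRIMARY KEY,
    UserId    INT NOT NULL,
    Payload   VARCHAR(200),
    CreatedAt DATETIME DEFAULT GETDATE()
);
-- ...repeat for dbo.Orders_1 through dbo.Orders_3...

CREATE PROCEDURE dbo.InsertOrder @UserId INT, @Payload VARCHAR(200)
AS
BEGIN
    -- route each insert by UserId so writers spread across the four tables
    IF      @UserId % 4 = 0 INSERT dbo.Orders_0 (UserId, Payload) VALUES (@UserId, @Payload);
    ELSE IF @UserId % 4 = 1 INSERT dbo.Orders_1 (UserId, Payload) VALUES (@UserId, @Payload);
    ELSE IF @UserId % 4 = 2 INSERT dbo.Orders_2 (UserId, Payload) VALUES (@UserId, @Payload);
    ELSE                    INSERT dbo.Orders_3 (UserId, Payload) VALUES (@UserId, @Payload);
END
-- A view that UNIONs the four tables can give readers a single picture when needed.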

RocketScientist
Official SQLTeam Chef

85 Posts

Posted - 2007-06-26 : 18:00:19
The real question is: what is the maximum allowed turnaround time from insert to read?

If that turnaround time is non-critical, a layered approach would work fine: several servers take orders and relay them to the master through a queuing system that keeps the workload on the master steady. That gives you very high capacity but high latency between write and read. If the window between write and read has to be very small, you're going to need to manage the data volume and indexing very carefully and tune the reads just as carefully.
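
Just to make the queuing idea concrete, here is a rough T-SQL sketch of the relay; the OrderStage table, the MASTERSQL linked server, and Sales.dbo.Orders on the master are all made-up names, not anything from your system:

-- Each front-end server writes into its own small local staging table...
CREATE TABLE dbo.OrderStage (
    StageId   INT IDENTITY PRIMARY KEY,
    UserId    INT NOT NULL,
    Payload   VARCHAR(200),
    CreatedAt DATETIME DEFAULT GETDATE()
);

-- ...and a SQL Agent job runs this every few seconds to drain it in batches.
CREATE PROCEDURE dbo.DrainStageToMaster
AS
BEGIN
    SET XACT_ABORT ON;
    DECLARE @batch TABLE (UserId INT, Payload VARCHAR(200), CreatedAt DATETIME);

    BEGIN TRAN;  -- promotes to a distributed transaction (MSDTC) because of the linked server
        -- pull an arbitrary batch of rows off the local queue
        DELETE TOP (1000) FROM dbo.OrderStage
        OUTPUT deleted.UserId, deleted.Payload, deleted.CreatedAt INTO @batch;

        -- relay the batch to the master over the linked server
        INSERT MASTERSQL.Sales.dbo.Orders (UserId, Payload, CreatedAt)
        SELECT UserId, Payload, CreatedAt FROM @batch;
    COMMIT;
END

The batch size and job frequency are the knobs that trade latency against load on the master.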

We did something similar once, with individual web servers collecting data. The data was spooled to a text file on each web server, and the SQL Server simply polled each web server occasionally and imported the data. That was for a high-volume system where very high latency was completely acceptable and even a small amount of data loss was acceptable. I don't think that's the right solution for your problem, but it might help you see the full spectrum of options.
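
Purely as an illustration, the import side on the central server can be as simple as a BULK INSERT of each spooled file into a staging table (the UNC path, WebSpoolStage, and Orders are invented names):

-- load one web server's spool file into a staging table
BULK INSERT dbo.WebSpoolStage
FROM '\\webserver01\spool\orders_20070626.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);

-- move the staged rows into the real table, then clear the stage
INSERT dbo.Orders (UserId, Payload, CreatedAt)
SELECT UserId, Payload, CreatedAt
FROM dbo.WebSpoolStage;

TRUNCATE TABLE dbo.WebSpoolStage;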
   
