thiyait
Yak Posting Veteran
70 Posts
Posted - 2005-12-25 : 11:50:40
Hi,

Our software is an ERP application developed on SQL Server 2000. It has been designed to work under a single-server, single-database concept: all the database objects reside on a single server in a single database. Our application is used by 350-500 concurrent users performing a lot of transactions daily. Our database size is around 60 GB now and is ever increasing.

Now the performance of the product is a major concern for us when concurrent users access the system. Does our single-database design have anything to do with that? If yes, is there any way we can improve the performance without changing the present design of the product? Would it improve the performance significantly if we split the database objects into multiple databases?

Thanks for your answers at the earliest.

With regards,
Ganesh Kartik
SamC
White Water Yakist
3467 Posts
Posted - 2005-12-26 : 08:58:23
quote: Originally posted by thiyait
Now the performance of the product is a major concern for us when concurrent users access the system. Does our single-database design have anything to do with that?

You don't state why performance is a concern, so I assume you have hard data showing that SQL Server is not keeping up with the transaction load?

I've worked with engineers who worried so much about what *might* happen if the server were overloaded, yet these same engineers had no idea where they were on the load curve. Since they were *overworked*, they couldn't take performance measurements. The concern spread to management, and before long an action plan had to be devised to address a problem that nobody had measured. Large money was spent. On and on. Incredible.

OK, I'm a little off topic here... If you do have a performance problem, the first thing to do IMHO is to find out which SQL operations are consuming the CPU time or disk bandwidth. I would not suggest splitting the DB into two as a first resort. You might get a finite (50%) improvement by implementing a split DB, but correcting a query with a bad or missing index could result in several orders of magnitude of improvement. There are primitive tools provided with MS SQL to identify the load, and some sophisticated 3rd-party tools which do a better job of helping you identify the problem queries (if there are any).

Edit: The MS SQL tool you should look into is "Profiler".
afrika
Master Smack Fu Yak Hacker
2706 Posts
Posted - 2005-12-26 : 12:11:25
quote: Originally posted by thiyait
Our software is an ERP application developed on SQL Server 2000. It has been designed to work under a single-server, single-database concept: all the database objects reside on a single server in a single database. Our application is used by 350-500 concurrent users performing a lot of transactions daily. Our database size is around 60 GB now and is ever increasing. Now the performance of the product is a major concern for us when concurrent users access the system. Does our single-database design have anything to do with that? If yes, is there any way we can improve the performance without changing the present design of the product? Would it improve the performance significantly if we split the database objects into multiple databases?

I personally don't think it would affect performance. As stated by Sam, use the Profiler.

If you are looking seriously at performance, then I would advise you to partition your application between an application server and a database server, not into multiple databases; multiple concurrent users would otherwise just be queued.

Also, I would ask: why is your DB 60 GB? Do you have redundant data you could archive or back up? Just suggesting.

Afrika
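If old transactional data accounts for much of the 60 GB, a periodic archive job is one way to keep the working set small. A minimal sketch, assuming hypothetical dbo.Orders and dbo.OrdersArchive tables with the same structure (neither is named in the posts above):

-- Move rows older than a cutoff date into an archive table, then delete
-- them from the live table; table names and the date are hypothetical.
BEGIN TRANSACTION

INSERT INTO dbo.OrdersArchive
SELECT * FROM dbo.Orders WHERE OrderDate < '20040101'

DELETE FROM dbo.Orders WHERE OrderDate < '20040101'

COMMIT TRANSACTION

On a large table it is usually better to run the delete in smaller batches rather than one long transaction, to avoid blocking concurrent users.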