Topic: Server Design

albertkohl
Aged Yak Warrior

740 Posts

Posted - 2010-08-26 : 19:52:16
Hey guys, I wanted to see if I can get some input on the server I'm reconfiguring...

The database is an OLAP design, about 15 tables strong with 184 million rows. Here's what I've got to work with:

Current Config
-(2) 80GB Intel SSD drives (these are the system drives)

-SQL 2008 (Enterprise) running on the system drives (tempdb, master, and all)

-(8) 300GB 15k RPM Cheetah drives (all set up as single disks across two separate controllers)

-20-bay chassis (at this point only the 8 Cheetah drives take up any bays; the SSDs mount internally)

-(2) quad-core Xeon 65x CPUs


Things to play with:
-(4) 150GB 10k RPM Raptor drives

-(5) 500GB 7200 RPM drives

-(4) open ports on the onboard Intel RAID controller (RAID 0 / 0+1 only)


Right now the 8 Cheetah drives are running a database that is partitioned horizontally 8 ways. The database is hit with almost nothing but reads; however, it does ship data too. The way I ship data right now is to query the data with a SELECT INTO, dumping it (into the same database) into a table named Order_[HASH]; then, using a script, I BCP-export the data to a text file, zip it, and email/FTP it depending on size.
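For illustration, a much-simplified sketch of that pipeline; dbo.Orders, its columns, the ABC123 hash value, and the MyDb/MYSERVER names are hypothetical stand-ins for the real schema:

-- Stage the export rows in the same database under an Order_[HASH] name:
SELECT OrderID, CustomerName, OrderDate
INTO dbo.Order_ABC123
FROM dbo.Orders
WHERE OrderDate >= '20100801';

-- The script then exports the staging table from the OS shell, e.g.:
-- bcp "SELECT * FROM MyDb.dbo.Order_ABC123" queryout Order_ABC123.txt -c -T -S MYSERVER
-- ...and zips and emails/FTPs the text file depending on size.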

So what I'm trying to figure out is: should I move tempdb, and what should I do, if anything, with the additional drives?

Should I have a separate disk set for the order processing? Just to give you an idea, the server ships about 9 million rows a month.


Any and all advice is most welcome!

Thanks guys!

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2010-08-27 : 06:23:48
I'm a little confused... why are you dumping data into the Order_[HASH] table? Why aren't you just exporting it directly using BCP? In all honesty, 9 million rows of data in a month isn't all that much, so I can't really see you having any performance problems due to this. How many MB/GB are these 9 million rows uncompressed? As a general remark, it's always good to have tempdb on a separate set of disks. Are you having any performance problems at the moment?
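For what it's worth, exporting directly with BCP looks roughly like this (the database, table, column, and server names are hypothetical); queryout runs the SELECT itself, so no intermediate Order_[HASH] copy is needed:

bcp "SELECT OrderID, CustomerName FROM MyDb.dbo.Orders WHERE OrderDate >= '20100801'" queryout Orders.txt -c -T -S MYSERVER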

- Lumbago

My blog (yes, I have a blog now! just not that much content yet)
-> www.thefirstsql.com

albertkohl
Aged Yak Warrior

740 Posts

Posted - 2010-08-27 : 18:55:29
I'm dumping it to a table first because I need to export the data with text qualifiers and a header row. The export can contain any combination of about 200 different columns, so I found a stored proc that takes data from a table (the query results) and dynamically builds a view to run the BCP export off of.
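A heavily simplified, hypothetical version of that pattern (the real proc builds the 200-odd column list dynamically; OrderID and CustomerName are stand-ins): the view unions a literal header row with quoted data rows, and BCP exports from the view:

-- Header row plus quoted data rows; row_sort keeps the header first.
CREATE VIEW dbo.Order_ABC123_Export AS
SELECT '"OrderID"' AS col1, '"CustomerName"' AS col2, 0 AS row_sort
UNION ALL
SELECT '"' + CAST(OrderID AS varchar(20)) + '"',
       '"' + REPLACE(CustomerName, '"', '""') + '"',   -- escape embedded quotes
       1
FROM dbo.Order_ABC123;
GO
-- Exported from the shell; the ORDER BY lives in the bcp query because
-- a view can't guarantee row order on its own:
-- bcp "SELECT col1, col2 FROM MyDb.dbo.Order_ABC123_Export ORDER BY row_sort" queryout Order_ABC123.txt -c -t, -T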

As far as tempdb goes, should I place it on an array, or just on a single disk by its lonesome? My issue right now is that the more complex queries just take too long, and I'm trying to figure out where I need to scale up.
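(If it helps: on SQL 2008, relocating tempdb is just a metadata change plus a service restart. The T: path below is a placeholder for whichever disk or array you land on; tempdev and templog are the default logical file names.)

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\SQLData\templog.ldf');
-- The files are recreated in the new location the next time the SQL Server service restarts.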