Author |
Topic |
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 03:23:13
|
hi, currently we have over 400 DBs ranging in size from 20 GB to 100 GB. On a daily basis we experience timeout errors. tempdb is set to autogrow by a percentage, and the primary DBs are set to autogrow by 40 MB. Is there a solution to this problem (the hardware is good: RAID 0 for the logs and RAID 5 for the DBs)? Or does anyone know how to increase the physical size of a database via a script? I think that could help if it's set up as a task that increases the size at night when the DB is not being used. thanks GP |
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-07 : 09:02:45
|
quote: Originally posted by gavis: hi, currently we have over 400 DBs ranging in size from 20 GB to 100 GB. On a daily basis we experience timeout errors. tempdb is set to autogrow by a percentage, and the primary DBs are set to autogrow by 40 MB. Is there a solution to this problem (the hardware is good: RAID 0 for the logs and RAID 5 for the DBs)? Or does anyone know how to increase the physical size of a database via a script? I think that could help if it's set up as a task that increases the size at night when the DB is not being used. thanks GP
That is too small for a big database; we set 500 MB growth for big databases. Also, have you spread your data files across multiple disks to reduce I/O contention? What version of SQL Server and SP level do you have? |
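To answer the version/SP question, a quick check is possible with SERVERPROPERTY (a standard built-in, available since SQL Server 2000); a minimal sketch:

```sql
-- Sketch: report edition, version, and service-pack level of the instance.
SELECT SERVERPROPERTY('Edition')         AS edition,
       SERVERPROPERTY('ProductVersion')  AS product_version,
       SERVERPROPERTY('ProductLevel')    AS sp_level;  -- e.g. 'SP2'
```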
 |
|
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 09:18:24
|
due to cost restrictions we have SQL 2005 with Service Pack 2. Previously I did try increasing the file growth, and the errors got worse. To my knowledge, the smaller the growth increment, the quicker the growth completes and returns the resources it used, and it also helps prevent deadlocks. We insert over 200,000 rows into one table every day, and there are also a lot of SELECT statements running during the inserts. Any feedback on this would be appreciated. thanks GP |
 |
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-07 : 09:24:06
|
quote: Originally posted by gavis: due to cost restrictions we have SQL 2005 with Service Pack 2. Previously I did try increasing the file growth, and the errors got worse. To my knowledge, the smaller the growth increment, the quicker the growth completes and returns the resources it used, and it also helps prevent deadlocks. We insert over 200,000 rows into one table every day, and there are also a lot of SELECT statements running during the inserts. Any feedback on this would be appreciated. thanks GP
Not really. You don't want to pressure SQL Server into growing the file every hour by setting 40 MB; set it to something reasonable. Also, it looks like you have blocking/deadlocking issues more than size issues. Have you traced it with Profiler? |
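As a lighter-weight alternative to a full Profiler trace, the DMVs that shipped with SQL 2005 can show blocking as it happens; a minimal sketch:

```sql
-- Sketch: list requests currently blocked by another session (SQL 2005+ DMVs).
SELECT r.session_id,
       r.blocking_session_id,               -- the session holding the lock
       r.wait_type,
       r.wait_time,                          -- ms spent waiting so far
       DB_NAME(r.database_id) AS database_name,
       t.text                  AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Running this while the timeouts are happening would show whether sessions are queuing behind a blocker rather than waiting on file growth.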
 |
|
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 09:34:17
|
i haven't tried that yet. True, but if I set it to anything more than 60 MB the timeouts get worse. I did some testing by allocating more space to the database (1 GB) while still allowing autogrow; it prevented the timeouts for over 3 weeks, but then they came back. I just want it to grow at night on its own through a task, as doing 400 databases by hand would take me forever. |
 |
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-07 : 09:47:06
|
Looks like you have only one MDF per large database. Is that correct? |
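This can be checked for all 400 databases at once from sys.master_files (available in SQL 2005); a minimal sketch:

```sql
-- Sketch: find databases that have only a single data file (one MDF, no NDFs).
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS data_file_count
FROM sys.master_files
WHERE type = 0                -- 0 = data files (ROWS), excludes log files
GROUP BY database_id
HAVING COUNT(*) = 1;
```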
 |
|
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 09:53:53
|
yes, no filegroups... as SQL Enterprise is too expensive to implement: R100 000 * 400 = A LOT |
 |
|
sodeep
Master Smack Fu Yak Hacker
7174 Posts |
Posted - 2008-10-07 : 10:27:10
|
quote: Originally posted by gavis: yes, no filegroups... as SQL Enterprise is too expensive to implement: R100 000 * 400 = A LOT
What does this mean? |
 |
|
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 10:32:27
|
the software is too expensive |
 |
|
gavis
Starting Member
39 Posts |
Posted - 2008-10-07 : 10:48:36
|
ALTER DATABASE test101
MODIFY FILE
( NAME = 'Stock_Data',
  SIZE = 63MB,
  MAXSIZE = UNLIMITED,
  FILEGROWTH = 12MB );
The script above works fine to increase the size of the DB, but now I want to loop it: if the available space gets below a certain size, the script should increase the size. |
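One way to sketch that loop, assuming it runs nightly from a SQL Agent job under sysadmin: cursor over the user databases, and inside each one grow any data file whose free space is below a threshold. The 500 MB threshold and 1 GB growth amount here are illustrative assumptions, not recommendations; free space per file is checked with FILEPROPERTY('SpaceUsed'), which only works in the current database context, hence the dynamic SQL.

```sql
-- Sketch: nightly job to pre-grow data files that are running low on free space.
-- Assumptions: runs as sysadmin; 500 MB free-space threshold; grows by 1 GB.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE db_cur CURSOR FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4            -- skip system databases
      AND state_desc = 'ONLINE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        DECLARE @f sysname, @newsize int, @cmd nvarchar(max);
        -- size and SpaceUsed are in 8 KB pages; divide by 128 for MB.
        SELECT TOP 1 @f = name,
               @newsize = (size / 128) + 1024   -- current size in MB + 1 GB
        FROM sys.database_files
        WHERE type = 0                          -- data files only
          AND (size - FILEPROPERTY(name, ''SpaceUsed'')) / 128 < 500;
        IF @f IS NOT NULL
        BEGIN
            SET @cmd = N''ALTER DATABASE '' + QUOTENAME(DB_NAME())
                     + N'' MODIFY FILE (NAME = '''''' + @f + N'''''', SIZE = ''
                     + CAST(@newsize AS nvarchar(10)) + N''MB)'';
            EXEC (@cmd);
        END';
    EXEC (@sql);
    FETCH NEXT FROM db_cur INTO @db;
END
CLOSE db_cur;
DEALLOCATE db_cur;
```

Scheduled at night, this pre-grows the files so autogrow shouldn't fire during the daytime insert load.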
 |
|
saurabh811
Starting Member
33 Posts |
Posted - 2008-10-07 : 12:44:25
|
Well, make your filegrowth 1 MB. I also recommend a maxsize of 10 to 20 GB in your case. Also use multiple files in the same filegroup or in different filegroups. Or you can go with unlimited maxsize and 1 MB filegrowth, but your databases (20 to 100 GB) are really too small for unlimited maxsize. In my environment we have a fixed maxsize with 1 MB filegrowth, and we load on average 1 to 50 million rows every day, and it works fine. Also, don't set any database (including tempdb) filegrowth as a percentage; that really ends up causing unwanted issues. You can also set up a space-alert script for database growth. |
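A minimal sketch of such a space alert, assuming it runs as a scheduled job: flag any data file that has used most of its fixed maxsize (size and max_size in sys.master_files are in 8 KB pages; the 90% threshold is an illustrative assumption).

```sql
-- Sketch: flag data files that have consumed more than 90% of their MAXSIZE.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS file_name,
       size / 128           AS size_mb,
       max_size / 128       AS max_size_mb
FROM sys.master_files
WHERE type = 0                 -- data files only
  AND max_size > 0             -- only files with a fixed maxsize (-1 = unlimited)
  AND size > max_size * 0.9;
```

Any rows returned could then be mailed to the DBA or used to trigger a growth script.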
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |