
 Split large database


Yak Posting Veteran

84 Posts

Posted - 2010-12-06 : 02:49:26
Hi all,

I'm using SQL 2005 Express with just one single database, and it is now quite close to the 4 GB limit. Checking with my hosting provider shows that using MS SQL 2005 Workgroup Edition would more than double my monthly costs, so instead I would prefer to split my current database into several smaller databases (this also has some positive side effects, e.g. regarding restores). I have a little knowledge of T-SQL, but I'm more or less a beginner at MS SQL administration, so I have several questions:
  • Are there any important arguments against this plan?

  • As said, the db is already around 4 GB. What's the best (easy, fast) way to create multiple databases?

  • Is there anything else I need to take care of?

Many thanks in advance!


22859 Posts

Posted - 2010-12-06 : 03:08:35
"(as this also has some positive side effects e.g. regarding restores"

Multiple database restores are a positive benefit?

I would have thought the opposite. How are you going to restore several databases so that they are at the same transaction point?

Security may be a bit more of a pain across databases.

Code will have to be changed to explicitly reference the table in a different database (although you could get around that with VIEWs)

4GB is quite large - depends what you are storing of course. Do you need all that space? Maybe you have a large log file that is actually empty (although maybe Express doesn't count that towards the size limit).
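To check whether it is actually a bloated log file taking up the space, something like this would show the file sizes for the current database (a sketch; run it in the context of the database in question):

```sql
-- File sizes for the current database ("size" is in 8 KB pages).
-- Note: the Express 4 GB limit applies to data files only, not the log.
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- How full the log actually is, as a percentage, per database:
DBCC SQLPERF (LOGSPACE);
```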

What sort of data is the database storing?

I would favour moving stale / archived data to an overspill database, rather than splitting the application data into "logical units"

Yak Posting Veteran

84 Posts

Posted - 2010-12-06 : 03:18:30
Hi Kristen,

sorry, maybe the point regarding restores was a bit confusing: if I need to restore one single large database then all data is restored, including tables that may not need restoring. Having multiple databases would let me restore a single database without touching the others (the example was about customers and orders split across 3 databases). Of course you are right that it's easier to restore one single database instead of multiple, but if I only need to restore 1 of 5 databases that's probably even easier. Anyway, this was just a remark.

Yes, 4 GB is quite large, and I'm currently busy with housekeeping, deleting old data. It seems that during the data deletion the size even increased... Can this be correct? Is it possible to 'defragment' a SQL database? I did not find anything useful.

Nevertheless, I currently like the idea of splitting into several databases, as I hope the size will keep increasing (which would mean I'm getting more users on my webpage). Additionally, the database could easily be split logically, as there are several blocks, like user data, messaging data etc.

So of course there may be several things to discuss, but in the current thread I would like to focus on the process of splitting the database. I'm happy for any other feedback as well, but I don't want to lose the focus here.
Many thanks!


22859 Posts

Posted - 2010-12-06 : 05:33:20
"(example was about customers and orders which are split in 3 databases)"

The problem I see with that: say you restore the Customer database. You then have recent Orders for Customer IDs that no longer exist. Furthermore, if you do not remember/realise to fix up the ID numbers on the restored Customer database (immediately!), then new Customer IDs will be created which WILL match the existing Customer IDs on recent Orders - but will actually be for completely different customers.

"It seems that during the data deletion, the size even increased.... Might this be correct?"

Deletions will be logged (so that if they fail half-way-through then they will be rolled back and you won't be left in a half-and-half situation - either the delete worked, or it will be as if it did not happen - sorry if you already understand about ATOMic transactions).

If your database is set to the FULL Recovery Model, then those logged transactions will be retained until the next Transaction Log backup (whereupon the transactions that were backed up are marked for "reuse", and those parts of the file will be overwritten in future). The physical file will NOT shrink on its own, because constant Grow/Shrink both takes CPU time and fragments the file. If you have exceptional circumstances (such as the big delete you are now doing) which "bloat" the file, then you have to manually SHRINK it to get it back to its normal size.

If, instead, you are using the SIMPLE Recovery Model, then the transaction is still logged (in case you get a power cut half way through, or some other need to roll back the transaction), but shortly afterwards, at a "CHECKPOINT", the log space it used will be marked for reuse [i.e. without the need for a Transaction Log backup].
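You can check which Recovery Model your database is using with a query like this (`MyDatabase` is a placeholder for your database name):

```sql
-- Which Recovery Model is the database using?
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';
```

Express installations often default new databases to SIMPLE; FULL only pays off if you actually take Transaction Log backups.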

If you do

DELETE FROM MyTable WHERE MyDate < '01Jan2010'

and let's say that deletes a huge number of records: they will be deleted in a single ATOMic transaction, and if you have never done a transaction that big before, that may well cause the LOG file to be expanded to accommodate the potential ROLLBACK. You can SHRINK the log file when you are done, back to a sensible level (but leave it large enough for whatever the maximum normal load is, otherwise it will just regrow and become fragmented if you shrink it repeatedly).
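A minimal sketch of that shrink, assuming the log's logical file name is `MyDatabase_log` (look it up in `sys.database_files`) and a target of 512 MB:

```sql
-- Shrink the log back to a sensible working size after the big delete
-- (under FULL recovery, take a Transaction Log backup first).
-- Target size is given in MB.
DBCC SHRINKFILE (MyDatabase_log, 512);
```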

A large deletion will NOT increase the size of the data file (AFAIK!). However, a REINDEX might: it will copy the whole index to a new part of the DATA file, all nice and tidy with no gaps (which would be a good idea once you have done all your deletions). The space released from the old, now "gappy", index [i.e. after the Reindex] will be available for reuse by the system. (Or you can SHRINK the data file - but that will re-fragment your newly rebuilt & tidy indexes by shuffling the index pages into free pages earlier in the file.)
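On SQL 2005 the rebuild itself would look something like this (the table name is just an example):

```sql
-- Rebuild all indexes on one table after the big delete;
-- repeat per table, or script it over the tables you purged.
ALTER INDEX ALL ON dbo.MyMessages REBUILD;
```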

Deleting in smaller batches will prevent the log file growing (if you are using SIMPLE Recovery Model)
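A sketch of such a batched delete, reusing the example predicate above (the batch size of 10,000 is just a starting point to tune):

```sql
-- Delete in small batches so each transaction - and hence the log
-- space needed for a potential rollback - stays small. Under SIMPLE
-- recovery the log space is reused after each checkpoint.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.MyTable
    WHERE MyDate < '20100101';

    SET @rows = @@ROWCOUNT;  -- 0 when nothing left to delete
END
```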

"there are several blocks, like user data, messaging data"

Think about what happens when you restore the User database: there will already be messages for newly registered users, and those users will be lost in the restore.

Personally I would move old Messages to an archive database instead. You can have a function for viewing "archived messages" perhaps, or just have a VIEW that you query that "joins them all back together again":

SELECT Col1, Col2, ...
FROM dbo.MyMessages
UNION ALL
SELECT Col1, Col2, ...
FROM ArchiveDatabase1.dbo.MyMessages
UNION ALL
SELECT Col1, Col2, ...
FROM ArchiveDatabase2.dbo.MyMessages

and just query that when you want to show a screen including old, archived messages. (Performance will be slower, though, unless you get the federated horizontal partitioning right using CHECK constraints and indexes - I presume that is supported in Express, but anyway ... you can worry about that when you have a performance problem - come back and ask how to do it later IF you do.)
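The "partitioning right" part means each archive table carries a CHECK constraint on the column you split by, so the optimizer can skip UNION ALL members that cannot match the query's predicate. A sketch, assuming a hypothetical `CreatedDate` column used for the split:

```sql
-- Tell the optimizer which date range this archive table holds,
-- so queries outside that range never touch it.
ALTER TABLE ArchiveDatabase1.dbo.MyMessages
    ADD CONSTRAINT CK_MyMessages_2009
    CHECK (CreatedDate >= '20090101' AND CreatedDate < '20100101');
```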

I also recommend that you build your design so that you COULD move the data back into one database if you want to - don't burn your bridges, in other words. Thus I would use VIEWs that can easily be changed, and have your application query the VIEWs rather than the underlying database(s). It is then easy to change how many / which databases there are in the future, just by altering the views, rather than changing all the code dotted around your application.
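A sketch of that indirection, with placeholder names: the application only ever references the view, and only the view body changes when the data moves:

```sql
-- Application code always queries dbo.Messages, never the table directly.
CREATE VIEW dbo.Messages
AS
SELECT Col1, Col2
FROM dbo.MyMessages;
GO

-- Later, when the table moves to another database,
-- only the view definition changes - no application code:
ALTER VIEW dbo.Messages
AS
SELECT Col1, Col2
FROM OtherDatabase.dbo.MyMessages;
```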


22859 Posts

Posted - 2010-12-06 : 05:34:25
P.S. Find out which are the biggest tables - both in terms of number of rows and bytes used; that is the "low-hanging fruit" to change first.
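One way to list tables by size on SQL 2005 (`sys.dm_db_partition_stats` needs VIEW DATABASE STATE permission; `EXEC sp_spaceused 'TableName'` is a simpler per-table alternative):

```sql
-- User tables ordered by reserved space (8 KB pages -> MB), biggest first.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       -- count rows only once, from the heap (0) or clustered index (1)
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS total_rows,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
WHERE OBJECTPROPERTY(ps.object_id, 'IsMSShipped') = 0
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;
```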

I expect that Messages can be long, and will be the biggest table.

Master Smack Fu Yak Hacker

3385 Posts

Posted - 2010-12-06 : 05:48:27
The problem is that if there is an error that needs a restore, the restore will be at the database level. If you split into different databases, the data can become inconsistent.
Ways around this:
  • Rely on full backups taken when the system is down - means losing transactions since the last backup.
  • Archive old data into a separate database which becomes effectively read-only.
  • Split the databases via high-level objects - by company/customer/city - might mean duplicated lookup data and some redesign.
  • Keep less important data in another database - audit trails etc.
  • Look at reducing the amount of data held - this might be a very temporary solution though.
  • Live with the fact that data may get inconsistent and you will need to clean it up after a restore.

Cursors are useful if you don't know sql.
SSIS can be used in a similar way.
Beer is not cold and it isn't fizzy.
