Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.
sql_server_dba (Posting Yak Master, 167 Posts)
Posted - 2011-03-09 : 20:01:48
Experts, I am updating a table of 100 million records. What factors do I need to consider while doing this? It's just an UPDATE statement. Will there be any effect on tempdb or anything like that?
tkizer (Almighty SQL Goddess, 38200 Posts)

Make sure you have enough tlog space if you intend to do this in one transaction. You should consider batching it up though to prevent disk space issues. You should do this inside a maintenance window as it's going to exclusively lock the table unless you do it in smaller batches.

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
jezemine (Master Smack Fu Yak Hacker, 2886 Posts)
Posted - 2011-03-09 : 23:54:40

This may sound crazy, but for very large tables (like 1-10 billion rows) I have done this on occasion:

1. Write a SELECT statement that produces a result set containing the data I would *like* the table to have.
2. Export that result set using bcp.
3. Import the result into a new table. (An alternative to the bcp export/import is SELECT INTO. The import step goes fastest if you are in simple or bulk-logged recovery mode - I am always in simple, and often forget that some people need point-in-time recovery... :)
4. Recreate indexes as needed.
5. Rename the old table, then rename the new table to the old one's previous name.
6. When you feel comfortable, drop the old table.

elsasoft.org
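jezemine's rebuild-and-swap steps above can be sketched in T-SQL roughly as follows. This is a hedged sketch, not a drop-in script: the table and column names (dbo.BigTable, dbo.BigTable_New, Id, Status) and the index are hypothetical, and the SELECT INTO variant is shown rather than the bcp export/import.

```sql
-- Steps 1-3: build the new table containing the data you want.
-- SELECT INTO is minimally logged under simple or bulk-logged recovery.
SELECT Id,
       'Processed' AS Status          -- the value the UPDATE would have written
INTO   dbo.BigTable_New
FROM   dbo.BigTable;

-- Step 4: recreate indexes as needed (hypothetical index shown).
CREATE CLUSTERED INDEX IX_BigTable_New_Id ON dbo.BigTable_New (Id);

-- Step 5: swap the names inside a short transaction.
BEGIN TRAN;
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT;

-- Step 6: when you feel comfortable, drop the old table.
-- DROP TABLE dbo.BigTable_Old;
```

The payoff of this pattern is that the expensive work happens against a table nobody is reading, and the only disruptive moment is the brief rename at the end.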
sql_server_dba (Posting Yak Master, 167 Posts)
Posted - 2011-03-10 : 01:23:36

quote: Originally posted by tkizer
Make sure you have enough tlog space if you intend to do this in one transaction. You should consider batching it up though to prevent disk space issues. You should do this inside a maintenance window as it's going to exclusively lock the table unless you do it in smaller batches.

So, when you say "make sure the tlog has enough space for one transaction": when the UPDATE statement runs, is the updated data for the entire table stored in the transaction log, and is that space freed up once the transaction finishes?
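The batching that tkizer's quoted advice recommends can be sketched like this. It is a hedged example, not her exact method: the table name (dbo.BigTable), column (Status), and batch size of 10,000 are all hypothetical and should be tuned to your environment.

```sql
-- Batched UPDATE sketch: each loop iteration is its own small
-- transaction, so the log can be reused between batches instead of
-- holding the entire 100-million-row change at once.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET    Status = 'Processed'
    WHERE  Status = 'Pending';    -- predicate must exclude already-updated rows

    SET @rows = @@ROWCOUNT;       -- loop ends when no qualifying rows remain
END
```

Because each batch commits independently, locks are held briefly per batch rather than exclusively on the whole table for the duration of the update.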
sql_server_dba (Posting Yak Master, 167 Posts)
Posted - 2011-03-10 : 01:25:54

quote: Originally posted by jezemine
This may sound crazy, but for very large tables (like 1-10 billion rows) I have done this on occasion: 1. write a SELECT statement that produces a result set containing the data I would *like* the table to have. 2. export that result set using bcp. 3. import the result into a new table. 4. recreate indexes as needed. 5. rename the old table, rename the new table to the old one's previous name. 6. when you feel comfortable, drop the old table.

Thanks for your reply, jezemine. Your approach makes sense, but it doesn't work for our environment; I need to update the table frequently.
tkizer (Almighty SQL Goddess, 38200 Posts)
sql_server_dba (Posting Yak Master, 167 Posts)
Posted - 2011-03-10 : 11:40:59

What if the recovery model is Simple? It should not occupy any space then, correct?
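A clarification on the question above: the SIMPLE recovery model does not make a large UPDATE log-free. The log must still hold every record of an open transaction until it commits; SIMPLE only allows the committed portion of the log to be truncated at checkpoint. So a single 100-million-row UPDATE needs roughly the same log space under SIMPLE as under FULL. Standard T-SQL to confirm the recovery model and watch log usage while the update runs:

```sql
-- Check the current database's recovery model.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name = DB_NAME();

-- Report log size and percent used for every database;
-- run this periodically while the large update is executing.
DBCC SQLPERF(LOGSPACE);
```

This is why batching helps even in SIMPLE recovery: each small committed batch lets the checkpoint process free log space for reuse.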
tkizer (Almighty SQL Goddess, 38200 Posts)