
 UPDATE query efficiency


sangle
Starting Member

18 Posts

Posted - 2004-07-07 : 14:44:36
Hi,

I am working on an application which will have a db for appointment management.
The application is defined to be compatible with any db (SQL Server, MySQL, Postgres).
Presently, in the design stages, what we are most concerned about is efficiency, and some of us have experienced slowness with Oracle databases doing updates on large tables.

So instead of an UPDATE, would a DELETE and INSERT be more efficient?

Now the problem comes when we have auto-indexed records. The solution some of us came up with was to instead have a meta table which would store the lookup id of the auto-numbered field.

However, I would still be interested in knowing if this idea of using DELETE+INSERT instead of an UPDATE would increase efficiency. Has anyone come across this issue before?
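A sketch of the meta-table workaround described above (all table and column names are hypothetical, not from the thread): instead of relying on an IDENTITY/auto-number column, which would hand out a new value on every re-insert, the application keeps its own counter so a row can keep the same id across a DELETE+INSERT.

```sql
-- Hypothetical meta table holding the next lookup id per entity,
-- so a row keeps the same key even if it is deleted and re-inserted.
CREATE TABLE dbo.IdMeta (
    EntityName varchar(50) NOT NULL PRIMARY KEY,
    NextId     int         NOT NULL
);

-- Allocate an id for a new appointment:
UPDATE dbo.IdMeta
SET    NextId = NextId + 1
WHERE  EntityName = 'Appointment';
-- ...then read the allocated value back and use it as the row's key,
-- instead of letting an IDENTITY column generate one.
```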






X002548
Not Just a Number

15586 Posts

Posted - 2004-07-07 : 15:03:56
Let me ask you...do you think doubling the number of operations is more efficient?

I would only do the DELETE INSERT thing when there's a key change.

Operationally, I want the business to think of this as a new entity...

And why were the Oracle updates so slow?

Was there an index?

Were you trying to update millions of rows at once?

Did you have stage 2 predicates?

We would need to see that query and DDL.
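The trade-off Brett is pointing at can be sketched in T-SQL (a minimal sketch; the table and column names are hypothetical, not from the thread):

```sql
-- Hypothetical appointment table, for illustration only.
-- One operation: the engine modifies the row (and only the affected
-- index entries) in place where it can.
UPDATE Appointment
SET    StartTime = '2004-07-08 09:00'
WHERE  AppointmentID = 42;

-- Two operations: every index entry for the row is removed and
-- re-inserted, and roughly twice as much work hits the transaction log.
DELETE FROM Appointment WHERE AppointmentID = 42;
INSERT INTO Appointment (AppointmentID, StartTime)
VALUES (42, '2004-07-08 09:00');
```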



Brett

8-)

sangle
Starting Member

18 Posts

Posted - 2004-07-07 : 15:09:08
I completely agree with you about the logic that 2 steps can never be better than a single step. But we have a team member claiming she has suffered with the way Oracle handled this.

X002548
Not Just a Number

15586 Posts

Posted - 2004-07-07 : 17:10:05
Fine, let her complain...

Let's see the DDL of the table, including the indexes, and her update statement, and have her explain to us what the problem was....

Make her prove to you that it's a problem....

Oracle flies!

I'm thinking she didn't know what she was doing....



Brett

8-)

Kristen
Test

22859 Posts

Posted - 2004-07-08 : 04:15:25
My $0.02 is that if efficiency is the main concern, then multi-platform compatibility is going to be a big issue.

Because: this [performance requirement] would suggest to me that you will need Stored Procedures for MSSQL, and they are going to be completely different to Oracle (and presumably to postgres as well).

When I did DB-compatible stuff in the past we used Dynamic SQL and a "database-independent insulation layer" between the application and the database, and yes, the performance was pretty terrible - with what I know now!

Now we exclusively use MSSQL, so we do everything in Stored Procedures, many of them carefully tuned, and I can't believe how much throughput we get compared to the old methods we used - I would not be exaggerating to say a 100-fold improvement.

We started off supporting Oracle/Sybase/MSSQL because clients were stuffy (in the mid 1990s) about only using their in-house preferred database. By the end of the 90s they had changed to using the vendor's preferred database, to ensure maximum application efficiency and best-of-breed support knowledge, and instead spent the effort on providing in-house support for the main database vendors, and slinky tools for data transport between them.

I don't know your business needs for multi-platform support, but it's a nightmare worth considering avoiding if that might be an option!
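To illustrate the contrast (a minimal sketch; the procedure, table, and column names are hypothetical): the insulation-layer approach builds ad-hoc SQL strings the server must parse and plan every time, while the MSSQL-only approach wraps the same query in a stored procedure whose plan can be cached and which can be tuned in place.

```sql
-- Insulation-layer style: the application concatenates a string such as
--   "SELECT * FROM Appointment WHERE CustomerID = " + @id
-- and sends it as ad-hoc Dynamic SQL.

-- MSSQL-only style: the same query as a stored procedure.
CREATE PROCEDURE dbo.GetAppointmentsForCustomer
    @CustomerID int
AS
    SELECT AppointmentID, StartTime
    FROM   dbo.Appointment
    WHERE  CustomerID = @CustomerID
GO
```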

Kristen

ditch
Master Smack Fu Yak Hacker

1466 Posts

Posted - 2004-07-08 : 04:58:01
I have found that, when wanting to delete from a big table, selecting the records you want to keep into a staging table, then dropping the original table and renaming the staging table back to the original name, seems to run a whole lot quicker than performing the delete.

So a delete-and-insert method (if the deletion is performed the way mentioned above) may well turn out to be a lot quicker.

That's my $0.02 worth.
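A sketch of the keep-and-rename technique (SQL Server 2000 syntax; the table names and the filter are hypothetical, not from the thread):

```sql
-- Copy only the rows you want to keep into a new, non-indexed table:
SELECT *
INTO   dbo.Appointment_Keep
FROM   dbo.Appointment
WHERE  AppointmentDate >= '2004-01-01';

-- Swap the tables:
DROP TABLE dbo.Appointment;
EXEC sp_rename 'dbo.Appointment_Keep', 'Appointment';

-- Remember to recreate indexes, constraints, triggers, and permissions
-- on the new table: SELECT INTO does not carry them across.
```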


Duane.

Wanderer
Master Smack Fu Yak Hacker

1168 Posts

Posted - 2004-07-08 : 05:48:31
Duane,

In my experience, your process works well IF the amount of data you want to delete is more than the data you want to keep.

I only do this as a VERY controlled task, since by dropping a table you can have a lot of fun if, for example, you forget to recreate all your indexes etc. Also, obviously, your table is unavailable for a period of time (between the drop and recreate), whereas in a more normal delete operation the table might be available, especially for NOLOCK queries.

Don't get me wrong, I've done this as well - for example when we needed to keep 10 million rows out of a 160 million row table - you do NOT want to do a 150 million row delete if you can avoid it, especially when there are 5 indexes. But a select of 10 million into a non-indexed table, then renamed, then indexed, was orders of magnitude faster (obviously, we did this through several testing areas).

Anyway, back to the topic - I get the feeling that they are not necessarily talking about changing a vast number of rows, or a significant percentage of the table, but rather that they were working on VERY large tables. I am tempted to go with Brett here, and say: show me the indexes etc. If they are updating a column/columns that have several indexes on them, that may also speak to their problems. Then again, I haven't had the privilege (?) of playing on Oracle before.

CiaO 4 NoW

*##* *##* *##* *##*

Chaos, Disorder and Panic ... my work is done here!