ereader
Yak Posting Veteran
50 Posts |
Posted - 2003-01-20 : 01:49:15
Hi! We are working on an application that is read/write intensive. Would it be possible, and good from a performance point of view, to keep fewer indexes for the write operations and more indexes for the read operations, managed by a script? If yes, how would we handle changing a clustered index to a nonclustered index while generating a specific query for retrieval?

Thanks
rihardh
Constraint Violating Yak Guru
307 Posts |
Posted - 2003-01-20 : 02:55:11
I read and read and I still don't get it...
ereader
Yak Posting Veteran
50 Posts |
Posted - 2003-01-21 : 05:57:27
quote: I read and read and I still don't get it...

What we are asking is: can we create a dynamic index just for reading a specific value from a specific table column? The batch would be:

create index
SQL statement
drop index

i.e. create the index just before the read, and drop it after the read.
rihardh
Constraint Violating Yak Guru
307 Posts |
Posted - 2003-01-21 : 06:27:05
Why do you want a "dynamic" index? The creation and dropping of indexes, like you're suggesting, would take more time and server resources than a bad static index... Just create the various indexes on the columns where the read operations take place.
nr
SQLTeam MVY
12543 Posts |
Posted - 2003-01-21 : 06:31:17
You can do this, but I wouldn't advise it:

create index ix on tbl (fld1, fld2)
go
select * from tbl ...
drop index tbl.ix
go

You won't be able to do it from an SP, though, and the database may get confused about objects. Since you are doing the read as a batch process, consider creating another table from the one that is updated, with the read indexes on it. Put it in another database, as it is redundant data, and run a job to populate it for reading.

==========================================
Cursors are useful if you don't know sql.
DTS can be used in a similar way.
Beer is not cold and it isn't fizzy.
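nr's suggestion of a separate read-optimized copy could be sketched like this (all table, index, and database names here are hypothetical, just to illustrate the idea):

```sql
-- Hypothetical objects: ProdDB.dbo.Orders (write-optimized source)
-- and ReportDB.dbo.OrdersRead (read-optimized copy).
-- Run as a scheduled job, e.g. nightly or every few minutes.

TRUNCATE TABLE ReportDB.dbo.OrdersRead

INSERT INTO ReportDB.dbo.OrdersRead (OrderID, CustomerID, OrderDate, Amount)
SELECT OrderID, CustomerID, OrderDate, Amount
FROM ProdDB.dbo.Orders

-- OrdersRead carries the heavy read indexes; Orders keeps only
-- what the writes need, so inserts and updates stay fast.
```

Reads would then be pointed at ReportDB, accepting that the data is only as fresh as the last job run.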
ereader
Yak Posting Veteran
50 Posts |
Posted - 2003-01-21 : 09:45:30
quote: You can do this but I wouldn't advise it

The problem we are facing is that our application is running in production, and 50 users are simultaneously performing both read and write operations. Our main priority is the reads, but when we start optimizing the database for read operations by putting on specific indexes, it slows down the write operations. What is the best way to overcome this problem?

Thanks
ValterBorges
Master Smack Fu Yak Hacker
1429 Posts |
Posted - 2003-01-21 : 09:59:54
Does the read data need to be available in real time? I'm thinking of having another copy of the db, or even just some of the tables.

Edited by - ValterBorges on 01/21/2003 10:00:56
robvolk
Most Valuable Yak
15732 Posts |
Posted - 2003-01-21 : 10:01:35
The point is, by dropping and creating indexes on the fly you'd only be adding overhead to an already burdened server. Since everyone is doing both reads and writes, there's no way you could make such an operation effective anyway.

You mentioned that it "slows down the write operation"... how slow does it go? Does it go from 1 second to 2 seconds, or to 2 minutes? Slower performance does not automatically mean unacceptable performance.

Like rihardh said, the best thing to do is find a balance of useful indexes that provide enough read performance without hampering write performance. If you're indexing every column, or almost every column, that's bad. It's also an indication that your tables may not be properly designed for the burden(s) you're placing on them. If you post your table structures and the types of queries you're running, we may be able to provide some more insight. You may also want to look at running the Index Tuning Wizard that's available in Enterprise Manager; it is also documented in Books Online. Also look at FILLFACTOR when creating your indexes: your write activity might be causing a lot of page splits that the right FILLFACTOR setting could prevent.

You also didn't say what kind of hardware you are using. 50 users is not a lot for SQL Server to handle unless you are using a low-powered machine, or running other processes on the same box (Exchange Server, IIS, etc.). It may well be that the machine you have simply isn't suited to handle this load and you need to upgrade it.
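As a sketch of the FILLFACTOR suggestion (the index, table, and column names are made up, and 80 is just an illustrative value, not a recommendation):

```sql
-- Leave 20% free space in each leaf page so inserts and updates
-- are less likely to cause page splits on a write-heavy table.
CREATE NONCLUSTERED INDEX ix_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH FILLFACTOR = 80
```

A lower FILLFACTOR trades some read efficiency (more pages to scan) for fewer page splits during writes, so it only helps if page splits are actually the bottleneck.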
Robwhittaker
Yak Posting Veteran
85 Posts |
Posted - 2003-01-21 : 10:09:21
It's important to know how the execution plan is using the indexes. Check that all the indexes are being used when you do a select; you might have some redundant indexes. If you have joins in your selects, experiment with having an index on one or both tables, and with which table they are on. Sometimes not having an index on a big table that you're not really filtering, but having one on a smaller table that is used a lot to look up a value, is much quicker. Have you looked into other tuning methods to make your selects run quicker? Throwing more indexes at a table quite often isn't the answer.
ereader
Yak Posting Veteran
50 Posts |
Posted - 2003-01-22 : 00:25:48
Thanks for your valuable time. We have a real-time application; the configuration is 4 processors with 2 GB of RAM. When the delete operation is performed it sometimes gives a timeout error or takes 1 minute, and at the same time the read operations take a long time even though they are optimized.
harshal_in
Aged Yak Warrior
633 Posts |
Posted - 2003-01-22 : 05:05:24
quote: When the delete operation is performed some time it give time out error or sometime takes 1 min. and at the same time the read operation takes long time though they are optimized.

At this level, if you try to add indexes dynamically, that is naturally going to add overhead and slow down the execution of your queries to a large extent.

Expect the UnExpected
mr_mist
Grunnio
1870 Posts |
Posted - 2003-01-22 : 05:15:52
Maybe you could consider some kind of alternative approach to deleting records, such as having an extra table that contains the states of the records in the main table, with flags like live, pending, deleted (or just deleted). Your read queries could then reference this table to see which records should be collected, and your delete queries could add to or update this table. The "real" deletes could then be done as a batch process during less busy times. I'm not sure how much you would gain from this, if anything, but you could save having to rebuild your main table's indexes, if that is your problem, at the cost of joining to the state table.

-------Moo.
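The state-table idea might look something like this, in its simplest variant that tracks only deletions (all object names are hypothetical):

```sql
-- Deleted rows are only flagged here; the main table is untouched.
CREATE TABLE dbo.OrderState (
    OrderID int NOT NULL PRIMARY KEY,
    Deleted bit NOT NULL DEFAULT 1
)

-- "Delete" becomes a cheap single-row insert:
INSERT INTO dbo.OrderState (OrderID) VALUES (42)

-- Reads skip flagged rows:
SELECT o.*
FROM dbo.Orders o
WHERE NOT EXISTS (SELECT 1 FROM dbo.OrderState s WHERE s.OrderID = o.OrderID)

-- The "real" deletes run as an off-peak batch job:
DELETE o
FROM dbo.Orders o
JOIN dbo.OrderState s ON s.OrderID = o.OrderID
TRUNCATE TABLE dbo.OrderState
```

The write path avoids touching the big table (and its indexes) during busy hours, at the cost of the extra anti-join on every read.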
Robwhittaker
Yak Posting Veteran
85 Posts |
Posted - 2003-01-22 : 05:20:44
I've used this method in the past, and it is effective. It also enabled me to put in a one-month grace period, where users could call the help desk and get back records that were deleted by accident. This was a good selling point, given the size and nature of the data that could be deleted at the touch of a button, even with validation. You just have to manage this capability well!
ereader
Yak Posting Veteran
50 Posts |
Posted - 2003-01-22 : 07:20:06
Thanks, everyone. One more thing we want to ask: we have a read operation that takes 4 seconds on a desktop machine with 128 MB RAM and a Pentium III processor, but if we perform the same read operation on the production system with the configuration mentioned above (4 processors), it takes the same time. Why?

-- Is it making use of all four processors?
-- Is there any specific setting we have to make on the SQL Server machine?

Also, if we want read performance on the production system, which RAID level should we opt for?

Thanks
mr_mist
Grunnio
1870 Posts |
Posted - 2003-01-22 : 08:30:18
Maybe that particular operation is not processor-intensive. If the slowest link in the chain is not the processor, then you could (in theory) have a million processors and not speed it up.

-------Moo.