Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Correct approach of indexing


anujkrathi
Starting Member

9 Posts

Posted - 2013-02-05 : 09:54:09
Hi experts,
I have to optimize a database. I have almost done everything (modified stored procedures, added indexes), but I am not satisfied and am a bit confused about a few sections.

There are a few tables (containing millions of rows) that take part in the public search. These tables also take part in INSERT and UPDATE operations.

I have performed the maximum possible normalization and cannot break my tables down any further.
There is a main table and several related tables (so-called child tables). The main table has 18 columns:
10 columns are numeric (int, bigint and numeric(18,2)), 4 columns are varchar(128), 2 columns are datetime and 2 columns are bit. Some columns are also nullable.

Search is based on different conditions and the user can select one or more of them.
We are using dynamic queries in our stored procedures, so any column can appear in the WHERE clause.
Now my questions are:

1. Should I create an index on every column? If yes, there will be 18 indexes (1 clustered and 17 non-clustered). Is this the correct approach?

2. Should I make every non-clustered index a covering index, or is it sufficient to create one covering index on the column that always appears in the WHERE clause?

Please suggest the proper approach.

Thanks in advance !

visakh16
Very Important crosS Applying yaK Herder

52326 Posts

Posted - 2013-02-05 : 10:44:05
There's no need to add an index on each column; that would have a negative effect on DML operations.
A covering index should only be added if the query it covers is fired frequently and is a performance bottleneck.
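For illustration, a covering index in SQL Server puts the extra columns in an INCLUDE clause so they cover the SELECT list without widening the index key. The table and column names below are hypothetical, not from the original post:

```sql
-- Hypothetical covering index for a frequent search that filters on
-- StatusId and a date range, then returns Id and Amount.
CREATE NONCLUSTERED INDEX IX_MainTable_Status_CreatedDate
ON dbo.MainTable (StatusId, CreatedDate)      -- key columns used for seeks
INCLUDE (Id, Amount);                         -- non-key columns that cover
                                              -- the SELECT list
```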

------------------------------------------------------------------------------------------------------
SQL Server MVP
http://visakhm.blogspot.com/


James K
Master Smack Fu Yak Hacker

3873 Posts

Posted - 2013-02-05 : 10:45:10
I don't have sufficient information to suggest what indexes might be appropriate. However, if you create an index/covering index on every column, that is going to slow down inserts and updates. It is also going to take up a lot of disk space.

Depending on your workflow, you might want to consider setting up a data warehouse where you DENORMALIZE the data into (one or more) fact tables and set up dimensions based on the queries you expect to receive. This would be separate from the OLTP database: all searches and queries would go against the data warehouse, and all inserts and updates would go to the OLTP database. The data warehouse approach works best in scenarios where historical data is static and new data comes in as additions each day. Even in cases where there are updates to old data, it can be made to work.
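As a rough sketch of the idea above, the reporting copy could be built by flattening the main table and its lookups into one wide table that the searches hit. All names here are hypothetical:

```sql
-- Illustrative only: periodically rebuild a denormalized reporting table
-- from the OLTP tables; public searches query dw.SearchFact instead.
SELECT m.Id, m.CreatedDate, m.Amount,
       c.CategoryName, s.StatusName
INTO   dw.SearchFact                          -- denormalized fact table
FROM   dbo.MainTable AS m
JOIN   dbo.Category  AS c ON c.CategoryId = m.CategoryId
JOIN   dbo.Status    AS s ON s.StatusId   = m.StatusId;
```

In practice this would be refreshed on a schedule (nightly or incrementally), so the OLTP tables stay lean for inserts and updates.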

djj55
Constraint Violating Yak Guru

352 Posts

Posted - 2013-02-05 : 15:21:51
For indexes, I would index the child tables on the join columns. As stated, too many indexes can slow things down, especially inserts.

Too many child tables will actually slow down queries as well (see the suggestion by James K).
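Indexing the child tables on the join columns, as suggested above, usually just means indexing each foreign-key column. A hypothetical example (names are illustrative):

```sql
-- Index the foreign-key column of a child table so joins back to the
-- main table can seek instead of scanning the child table.
CREATE NONCLUSTERED INDEX IX_ChildTable_MainTableId
ON dbo.ChildTable (MainTableId);
```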

djj

srimami
Posting Yak Master

160 Posts

Posted - 2013-02-05 : 23:32:11
As you have clearly mentioned that the search is based on all columns, create 8-9 non-clustered indexes with the other columns in the INCLUDE clause (covering indexes). Having more non-clustered indexes would not slow your performance; however, you have to update the statistics and check for fragmentation at regular intervals to ensure performance is not affected.

We had a similar situation some time back on our prod server and this helped us.

visakh16
Very Important crosS Applying yaK Herder

52326 Posts

Posted - 2013-02-06 : 00:47:53
quote:
Originally posted by srimami

As you have clearly mentioned that the search is based on all columns, create 8-9 non-clustered indexes with the other columns in the INCLUDE clause (covering indexes). Having more non-clustered indexes would not slow your performance; however, you have to update the statistics and check for fragmentation at regular intervals to ensure performance is not affected.

We had a similar situation some time back on our prod server and this helped us.


I don't fully agree with this.

We had a situation where a high-transaction table had about 20+ indexes, and it had a drastic impact on ETL process performance.
Remember, for each new non-clustered index you add, SQL Server has to update that index for every DML operation performed on the table. The indexes themselves will also take a considerable amount of space for large tables.
An index should be added only if queries can frequently take advantage of it and get the benefit.
Also, in the case of large tables, a strategy has to be devised to drop indexes before bulk DML operations and recreate them afterwards.
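The drop/reload/recreate pattern mentioned above might look like this (index, table and file names are hypothetical):

```sql
-- Drop the index so the bulk load doesn't pay for index maintenance.
DROP INDEX IX_MainTable_Status_CreatedDate ON dbo.MainTable;

BULK INSERT dbo.MainTable
FROM 'C:\loads\maintable.dat'
WITH (TABLOCK);                               -- minimally logged load

-- Recreate the index once the load completes.
CREATE NONCLUSTERED INDEX IX_MainTable_Status_CreatedDate
ON dbo.MainTable (StatusId, CreatedDate)
INCLUDE (Id, Amount);
```

Rebuilding once at the end is typically much cheaper than maintaining the index row by row during the load.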


------------------------------------------------------------------------------------------------------
SQL Server MVP
http://visakhm.blogspot.com/
