Author / Topic
malakai
Starting Member
8 Posts
Posted - 2004-06-12 : 15:15:01
SQL Wizards,

I have some queries that run against a large dataset and rely heavily on UDFs (not scalar, but mostly table-valued UDFs). I can't seem to profile them accurately. Also, whether it be UDFs, straight SQL against a table or indexed view, or SPs, I'm seeing some sort of 'caching' occur that I can't disable (I need to know whether changing a param actually helped performance).

For example, one SQL function on its first run with poorly planned indexes took 8 mins. The next time it ran, it took 30 s. No changes. Is there some keyword or setting option I can use to prevent SQL from storing the query plan, or caching data, or whatever it's doing that causes the first run to differ from the second run with the same params? I need this before I can seriously work on improving performance.

Second example: I was having a problem with a query that accesses an indexed view. It was taking 12 s. I tried to add WITH (NOEXPAND) to it, and it complained that my session settings did not allow this hint. Fine, I added the standard seven settings (ARITHABORT, ANSI_NULLS, etc., as specified in BOL) to the query and re-ran it. It took 0 s. Good, I thought I'd found my problem. So as a test, I removed the NOEXPAND hint again: same speed. Hmm. Then I removed the settings (switched them to their antithesis) and the speed remained constant.

So I'm concerned my tests are false: the results I'm seeing are not directly caused by my changes. Does anyone have recommendations or a procedure to follow so I can methodically change one param at a time and measure its improvement against non-changing data?

Thanks,
-Frank O'Connor
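For reference, here are the seven session settings from BOL as a batch prologue (a sketch; note that NUMERIC_ROUNDABORT is the odd one out, going OFF while the rest go ON):

```sql
-- Session settings required for the NOEXPAND hint / indexed-view plan matching
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;  -- OFF, unlike the other six
```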
Lumbago
Norsk Yak Master
3271 Posts
Posted - 2004-06-12 : 18:56:46
If you take a look a bit down in this post: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=32728 mohdowais has answered the same question for me, and the answer is basically to run DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE each time before you run the test scripts. Bear in mind though that this is not recommended in a production environment (for obvious reasons)...

--Lumbago
"Real programmers don't document; if it was hard to write, it should be hard to understand"
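A minimal cold-cache test harness based on those two commands might look like the sketch below (the CHECKPOINT first flushes dirty pages so DROPCLEANBUFFERS can actually empty the cache; the commented-out SELECT is a stand-in for whatever query is under test). Again, not something to run on a production server:

```sql
CHECKPOINT;             -- write dirty pages to disk so the next command can drop them all
DBCC DROPCLEANBUFFERS;  -- empty the buffer (data) cache: next run reads from disk
DBCC FREEPROCCACHE;     -- discard all cached query plans: next run recompiles

-- run the query under test here, e.g.:
-- SELECT ... FROM dbo.SomeLargeTable WHERE ...
```

Running this before each test run gives every execution the same cold-cache starting point, so timing differences reflect the change being tested rather than what happens to be cached.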
malakai
Starting Member
8 Posts
Posted - 2004-06-12 : 23:44:47
Perfect, thanks.

-Frank O'Connor
kselvia
Aged Yak Warrior
526 Posts
Posted - 2004-06-12 : 23:55:43
Also try running SET STATISTICS IO ON, then run your query to see logical (from cache) vs. physical (from disk) I/Os. Try to reduce logical I/Os; doing so will also reduce physical I/Os when the data is not cached.
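For example (dbo.SomeLargeTable is a hypothetical table standing in for the query under test; the read counts appear in the Messages pane, not the result grid):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- query under test
SELECT COUNT(*) FROM dbo.SomeLargeTable;
-- Messages pane reports scan count, logical reads, physical reads, read-ahead reads

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Because logical reads count every page touched regardless of whether it came from cache or disk, they stay stable between "cold" and "warm" runs, which makes them a better yardstick for comparing index changes than raw elapsed time.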
 |