
 Trace Analysis help


JeffK95z
Starting Member

19 Posts

Posted - 2006-03-08 : 15:38:52
Hi folks, hoping you can help me!

I'm a fairly new SQL Server DBA, and I've set up some traces on our server to monitor what's going on.

I'm trying to determine the unit of measure for Reads/Writes in the trace file.

This thread from 2003 says each read/write is one page (8 KB), but I'm not sure that holds up in my case...
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=26940&SearchTerms=trace,read

Here's my reasoning...

My test:

I start a trace
I execute a SELECT INTO from a known source table (rough sketch below)
I stop the trace
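
Roughly what I ran, as a minimal sketch; the table names here are placeholders, not my real ones:

    -- Copy a known source table, then compare sizes with sp_spaceused
    SELECT *
    INTO dbo.TraceTest_Copy          -- placeholder target name
    FROM dbo.SourceTable             -- placeholder source name

    EXEC sp_spaceused 'SourceTable'
    EXEC sp_spaceused 'TraceTest_Copy'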

Based on my analysis, here are my findings:
Note: the original table is 11,728 KB (data) based on sp_spaceused
The new table is 11,368 KB (data) based on sp_spaceused

Stat       Value   Conversion
Duration   1594    1.594 seconds
Reads      5339    42.712 MB
Writes     1422    11.376 MB
CPU        891     0.891 seconds
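
For reference, the conversion assumes one trace read/write = one 8 KB page (per the thread linked above):

    -- 8 KB per page; read/write counts from the trace above
    SELECT 5339 * 8 AS ReadsKB,     -- 42,712 KB read
           1422 * 8 AS WritesKB     -- 11,376 KB written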

Now, the writes seem fine, pretty much perfect: 1,422 writes x 8 KB = 11,376 KB, versus the new table's 11,368 KB. But does it make sense that it would have to read almost four times the amount of data just to write it?

Hope this makes sense and thanks in advance!!

Cheers,

jeff

graz
Chief SQLTeam Crack Dealer

4149 Posts

Posted - 2006-03-08 : 16:55:28
Jeff,

Those are logical reads, not physical reads. I'm not sure why the number is so far off, but most of those reads are probably being satisfied from the data cache.
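
A quick way to see the split for yourself (just a sketch; swap in your own table name):

    -- Logical vs. physical reads for a single statement
    SET STATISTICS IO ON
    SELECT * FROM dbo.SourceTable    -- placeholder name
    SET STATISTICS IO OFF
    -- The Messages tab then reports logical reads, physical reads,
    -- and read-ahead reads separately for each table touched.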

-Bill

===============================================
Creating tomorrow's legacy systems today.
One crisis at a time.

JeffK95z
Starting Member

19 Posts

Posted - 2006-03-08 : 19:53:23
From my limited understanding... is it possible that:

the base/original table is spread out over many, many pages, and that it's not stored sequentially in order...

Not sure if that makes sense, but what I'm trying to say is...

One page is 8 KB and each record is less than 1 KB, so it's possible that each page holds only one record from the base table, and the rest is data from other tables.

So if my base table has 400,000 records, it's possible that it's spread out over 300,000 pages (if the database is that big... in this case the db is around 40 GB).

Therefore it would have to read each of those pages in order to gather all the data it needs.

So in my example (which is probably extreme? I'm just making up numbers), it would have to read those 300,000 pages (which is 2.4 GB or so?) just to pull all the data together, and then it writes the 400,000 records.
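
If that's the theory, I'm guessing something like DBCC SHOWCONTIG would confirm it, since it reports how full the pages actually are (again, the table name is a placeholder). FWIW, 11,728 KB of data would be only about 1,466 fully packed pages (11728 / 8), but the trace showed 5,339 reads, which is roughly that 4x gap:

    -- Report page density / fragmentation for the source table (SQL 2000)
    DBCC SHOWCONTIG ('dbo.SourceTable')    -- placeholder name
    -- "Avg. Page Density (full)" shows how full each page really is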

Hope this still makes sense and I'm not completely out to lunch :)

Thanks for the reply, Bill!!

Jeff