JeffK95z
Starting Member
19 Posts
Posted - 2006-03-08 : 15:38:52
Hi folks, hoping you can help me!

I'm a fairly new SQL Server DBA, and I've set up some traces on our server to monitor what's going on. I'm trying to determine the unit of measure for Reads/Writes in the trace file. This thread from 2003 says each read/write is 1 page (8K), but I'm not sure that holds up for me:

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=26940&SearchTerms=trace,read

Here's my reasoning. My test:

1. I start a trace
2. I execute a SELECT INTO based on a known source table
3. I stop the trace

Based on analysis, here are my findings.

Note: the original table is 11,728 KB (data) based on sp_spaceused; the new table is 11,368 KB (data) based on sp_spaceused.

Stat     | Value | Conversion
---------|-------|--------------
Duration | 1594  | 0.02 seconds
Reads    | 5339  | 42.712 MB
Writes   | 1422  | 11.376 MB
CPU      | 891   | 0.01 seconds

Now, the writes seem fine, pretty much perfect. But does it make sense that it would have to read almost 4 times the amount of data to write it?

Hope this makes sense and thanks in advance!!

Cheers,
Jeff
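The arithmetic behind the Conversion column can be sketched in Python. This assumes the interpretation from the linked 2003 thread (one trace read/write = one 8 KB page); the counter values are just the figures from the test above:

```python
# Sanity-checking the Conversion column, assuming each trace
# read/write counts one 8 KB data page (per the 2003 thread).
PAGE_KB = 8  # SQL Server data page size in KB

def pages_to_kb(pages: int) -> int:
    """Convert a page count to kilobytes."""
    return pages * PAGE_KB

reads_kb = pages_to_kb(5339)    # 42,712 KB, roughly 42.7 MB
writes_kb = pages_to_kb(1422)   # 11,376 KB, close to the 11,368 KB
                                # sp_spaceused reports for the new table
ratio = reads_kb / writes_kb    # about 3.75x more read than written
```

The writes lining up almost exactly with the sp_spaceused data size is what makes the 8 KB-per-unit interpretation look plausible here.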
graz
Chief SQLTeam Crack Dealer
4149 Posts
Posted - 2006-03-08 : 16:55:28
Jeff,

Those are logical reads and not physical reads. I'm not sure why it's so far off, but most of those are probably being satisfied from the cached data.

-Bill
===============================================
Creating tomorrow's legacy systems today.
One crisis at a time.
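A minimal sketch of the logical-vs-physical distinction Bill is describing: every page the engine touches counts as a logical read, but only pages not already in the buffer cache cost a physical read. This is a toy model for illustration, not SQL Server internals:

```python
# Toy buffer-cache model: logical reads count every page access,
# physical reads count only cache misses. Purely illustrative.
def scan(pages, cache):
    logical = physical = 0
    for page in pages:
        logical += 1            # every access is a logical read
        if page not in cache:   # cache miss: fetch from disk
            physical += 1
            cache.add(page)
    return logical, physical

cache = set()
first = scan(range(100), cache)   # cold cache: (100, 100)
second = scan(range(100), cache)  # warm cache: (100, 0)
```

On the second scan the logical count is unchanged while the physical count drops to zero, which is why a trace's read counter can be far larger than the data actually pulled from disk.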
JeffK95z
Starting Member
19 Posts
Posted - 2006-03-08 : 19:53:23
From my limited understanding... is it possible that the base/original table is spread out over many, many pages, and that it's not sequentially in order? Not sure if that makes sense, but what I'm trying to say is...

1 page is 8K, and each record is less than 1K, so it's possible that for each page there is only 1 record from the base table in that page, and the rest is data from other tables.

So if my base table has 400,000 records, it's possible that it's spread out over 300,000 pages (if the database is that big... in this case the db is around 40 gigs). Therefore it would have to read each of those pages in order to gather all the data it needs.

So in my example (which is probably extreme? I'm just making up numbers), it would have to read the 300,000 pages (which is like 2.4 gigs or so?) just to pull all the data together, then it writes the 400,000 records.

Hope this still makes sense and I'm not completely out to lunch :)

Thanks for the reply Bill!!

Jeff
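The back-of-the-envelope version of that worst case, as a sketch. One caveat: in SQL Server a data page belongs to a single table, so the amplification would come from pages being mostly empty rather than from holding other tables' rows. All the row and page counts below are the made-up numbers from the post:

```python
import math

PAGE_KB = 8          # SQL Server data page size in KB
ROWS = 400_000       # hypothetical row count from the post
ROWS_PER_PAGE = 8    # ~1 KB rows packed onto an 8 KB page

# Best case: rows densely packed onto pages
dense_pages = math.ceil(ROWS / ROWS_PER_PAGE)   # 50,000 pages

# Worst case from the post: each page yields only ~1 useful row
sparse_pages = 300_000
sparse_kb = sparse_pages * PAGE_KB              # 2,400,000 KB, ~2.4 GB

# Read amplification versus the densely packed layout
amplification = sparse_pages / dense_pages      # 6x more pages to read
```

So the shape of the argument holds: if pages are only partially full, a scan can read several times more pages than the data it ultimately writes, even though each write lands on a densely packed new page.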