How can this query show > 10M reads?

gbritton 2016-09-19 16:51:21

I have one fairly simple INSERT query. Looking at the data below, I cannot understand why the results show more than 10,000,000 reads! (I really hope that didn't happen!!)

Statement: INSERT [Cert].[Certificate] ( PortfolioFeedID ,BatchControlID ) OUTPUT inserted.[ParentCertificateID] INTO #NewParentCertificates SELECT @PortfolioFeedID, BatchControlID FROM #NewImportRowID -- Create mapping between ImportId and newly generated ParentCertificateIDs for increased performance
Est Cost: 22.2%
Duration (ms): 83,363
CPU (ms): 32,984
Est CPU Cost: 6.3%
Reads: 10,984,619
Writes: 14,026
Est IO Cost: 29.5%
Est Rows: 2,451,640
Actual Rows: 2,451,638
StartTime: 2016-09-19 12:40:06.713
EndTime: 2016-09-19 12:41:30.077
Degree of Parallelism: 1
Parallel Operations: 0
Key Lookups: 0
Clustered Index Scan Operations: 1
Sort Operations: 0
Hash Match Operations: 0
Index Scan Operations: 0
Table Scan Operations: 0
Aaron Bertrand 2016-09-19 16:54:26
This is what Profiler would have told us. Did you run the query in Profiler and compare, or in Management Studio with SET STATISTICS IO ON?
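
For reference, a minimal sketch of that comparison in Management Studio, re-using the statement from the question; it assumes @PortfolioFeedID, #NewParentCertificates and #NewImportRowID already exist in the session:

    SET STATISTICS IO ON;

    INSERT [Cert].[Certificate] (PortfolioFeedID, BatchControlID)
    OUTPUT inserted.[ParentCertificateID] INTO #NewParentCertificates
    SELECT @PortfolioFeedID, BatchControlID
    FROM #NewImportRowID;

    SET STATISTICS IO OFF;
    -- The Messages tab then lists logical reads per table, which can be
    -- compared against the Reads figure in the grid above.
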
SQLkiwi 2016-09-19 17:39:27
It's certainly possible, especially if the table being inserted into already had a large number of rows before the insert (i.e. several b-tree levels).

Logical reads are reported for non-minimally-logged inserts when SQL Server navigates down the b-tree to locate the insert point, and/or to read output values, depending on the exact scenario. Where multiple b-tree levels are navigated per row inserted, the number of reads may well exceed the number of rows. Extra reads may also be associated with prefetching activity.
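
A rough way to sanity-check this is to look at the depth of each b-tree on the target table and multiply by the number of rows inserted. A hedged sketch (the question doesn't list index names, so this lists every index on Cert.Certificate):

    -- Depth (levels) of every b-tree on the target table.
    SELECT i.name AS index_name,
           ips.index_depth,     -- levels navigated per row inserted into this index
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'Cert.Certificate'), NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id;

    -- Rough arithmetic: if the clustered index were 4 levels deep,
    -- 2,451,638 rows x 4 levels is already ~9.8M logical reads, before
    -- counting OUTPUT-clause reads and prefetching.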

The information reported by Plan Explorer should match the SET STATISTICS IO output, since that's where the data comes from.