What can I do to help improve the estimates with this query?

I've been working on this query and have trimmed it from 3 million logical reads down to its current level. However, I'm having trouble getting the estimates to be more accurate. Do you have any insight into what I can do, specifically to improve the OBJECT4.Index2 seek, whose estimated row and data counts differ significantly from what it actually ends up needing?

By g00p3k, asked Apr 22, 2015 at 08:09 PM

2 answers

I don't know the answer to that question, but I am not convinced it is your problem. Since the actual row count is well below the estimate, and this is a nested loops (NL) join, I think it is OK.

I would be more concerned with the 3 RID lookups on the same Object8 farther to the left. Can you change the index and INCLUDE those columns, so that the seek on Object8.Index4 returns those columns too?
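Something along these lines, as a rough sketch only (the key and INCLUDE column names below are hypothetical stand-ins, since the real names are anonymized in the plan):

    -- Sketch only: key and INCLUDE columns are hypothetical placeholders.
    -- Adding the columns the RID lookups currently fetch as INCLUDEd columns
    -- lets the seek on Object8.Index4 cover them and removes the lookups.
    CREATE NONCLUSTERED INDEX Index4
        ON dbo.Object8 (Column1)                -- existing seek key(s)
        INCLUDE (Column5, Column6, Column7)     -- columns fetched by the RID lookups
        WITH (DROP_EXISTING = ON);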

By Dan Holmes, answered Apr 22, 2015 at 08:20 PM
g00p3k commented Apr 22, 2015 at 08:51 PM

OK. I got rid of the RID lookups, as I had removed the included columns on the temp table. I still have some variances in the estimates. Anything else you noticed?

Is an actual row count lower than the estimate less of a concern to you? Any insight on this would be appreciated.

g00p3k commented May 05, 2015 at 07:28 PM

@Dan Holmes, can you expand on your comment that an actual row count lower than the estimate is OK for a NL join?

Dan Holmes commented May 05, 2015 at 07:45 PM

The idea is that, because the 'bottom' (inner) side of a NL join is executed once for every row on the 'top' (outer) side, an actual row count on the top side that is lower than the estimate just means it does even less work.

If the optimizer decided that a NL join was the right choice and the rows turn out to be fewer (way fewer in this instance), then that is alright. Had it been the other way around, with the actual row count way higher than the estimate, you would really have been in a fix: a hash or merge join would possibly have produced quicker results.
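As a rough illustration only (the table names are made up, and the LOOP JOIN hint is just there to force the shape): the inner-side seek below runs once per outer row, so if the outer side was estimated at 1,000 rows but only 10 actually arrive, the seek executes 10 times instead of 1,000.

    -- Illustration only; #OuterRows and dbo.InnerTable are hypothetical.
    SELECT o.KeyCol, i.SomeCol
    FROM #OuterRows AS o
    JOIN dbo.InnerTable AS i
        ON i.KeyCol = o.KeyCol
    OPTION (LOOP JOIN);  -- forces nested loops: inner seek executes once per outer row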

(Reasonable chance that is a terrible explanation)


You could consider creating a filtered index or filtered statistics on Column3 to incorporate the effect of the residual predicate on ScalarString218 (if possible):

[Screenshot: residual predicate]
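A rough sketch of what that could look like (Object4, Column3, and the literal value are anonymized placeholders from the plan, so the real names and predicate will differ):

    -- Sketch only: names and the filter value are placeholders.
    CREATE STATISTICS Stats_Object4_Column3_Filtered
        ON dbo.Object4 (Column3)
        WHERE Column3 = 'ScalarString218';

    -- or, as a filtered index:
    CREATE NONCLUSTERED INDEX FIX_Object4_Column3
        ON dbo.Object4 (Column3)
        WHERE Column3 = 'ScalarString218';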

But I doubt this is worth it. A more accurate estimate is unlikely to change the plan shape much, from what I can see.

The query already executes pretty swiftly, given the number of joins. The 867ms of NETWORK_IO suggests the bulk of the time is spent waiting for the large rows (LOB data, lots of columns) to pass across the network. You might get a better measure of the raw query performance by writing the results to variables (or a temp table).
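For example, as a sketch only (the variables and column list are placeholders for the real select list):

    -- Sketch only: assign the output to variables so no rows cross the network.
    DECLARE @Col1 int, @Col2 nvarchar(max);

    SELECT @Col1 = q.Column1,
           @Col2 = q.Column2
    FROM ( /* original query goes here */
           SELECT 1 AS Column1, N'x' AS Column2 ) AS q;

    -- or write the results to a temp table instead:
    -- SELECT q.* INTO #Results FROM ( /* original query */ ) AS q;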

If doing that really does drop the elapsed time for the query down to 1389 - 867 ~= 500ms, I'd be asking myself how much quicker I really need it to go.

By SQLkiwi, answered Apr 23, 2015 at 03:28 AM

