Estimated rows much lower than actual number of rows

SQLDB 2016-04-26 13:35:23


I am trying to improve the performance of the attached plan.

Earlier it was taking 4 minutes, and it has now been reduced to 2 by updating statistics, reducing fragmentation, and removing the Key Lookups by adding appropriate covering indexes.

But per the requirement, given that the output is only around 4K records, it is expected to complete in 20-30 seconds.

Please suggest how I can improve the plan further. What could be the reason the estimated row counts are so far off from the actuals?

Any help would be appreciated.

New plan, after suggestions: Newsqlplan

SQLkiwi 2016-04-26 19:01:53
I see this is a duplicate of your other question, but with a non-anonymized plan. Please see the general advice in my other answer. Where I talk about the residual predicate there, the example I use turns out to be tblAssetStatusHistory.

The residual predicate (shown in the seek's Predicate property, separate from its Seek Predicate) tests the [Status] column. This means there is a missing index like:

CREATE INDEX dbo_tblAssetStatusHistory_
ON dbo.tblAssetStatusHistory (id, [Status])
INCLUDE (InsertedWhen);

A similar analysis can be used to identify other index improvement opportunities. You should look to eliminate the costly scans, improve the seeks where possible, and eliminate the Key Lookups. That will be a good start.

You also appear to be passing a delimited string in and using some sort of string splitting function on it. This is not a good idea if you want good cardinality estimates. Split the string first, and put the results in a temporary table (not table variable) with suitable indexing.
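For example, a minimal sketch of that approach, assuming a comma-delimited @List parameter and the built-in STRING_SPLIT function (SQL Server 2016 and later; substitute your own splitting function on earlier versions). The #AssetIds name is just illustrative:

CREATE TABLE #AssetIds
(
    id integer NOT NULL PRIMARY KEY -- typed and indexed for good estimates
);

INSERT #AssetIds (id)
SELECT DISTINCT CONVERT(integer, SS.value)
FROM STRING_SPLIT(@List, N',') AS SS;

The main query then joins to #AssetIds instead of calling the splitter inline, so the optimizer sees real statistics for the list.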

Identify and work on one problem at a time. Break the query up into simpler parts if you can. Struggling with one big plan when several smaller ones would be much easier to work with is a common waste of developers' time. Store small intermediate results in properly typed and indexed temporary tables to improve cardinality estimates at key points.
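As a rough illustration of that pattern (the table and column names here are hypothetical, not taken from your plan):

SELECT A.AssetID, A.InsertedWhen
INTO #Stage
FROM dbo.tblAsset AS A
WHERE A.[Status] = @Status;

CREATE CLUSTERED INDEX cx ON #Stage (AssetID);

The final query joins to the small, indexed #Stage table, so the optimizer gets an accurate row count at that point instead of a guess deep inside one large plan.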


If you are still seeing lookups, you have not created a covering index by definition. All columns from the Output List of the lookup plus any columns referenced in a predicate on the lookup are required to make the index covering.
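As a hypothetical example: if a Key Lookup's Output List contains ColA, and the lookup also evaluates a predicate on ColB, the nonclustered index must carry both before the lookup can disappear:

CREATE INDEX IX_SomeTable_Covering
ON dbo.SomeTable (SeekCol, ColB) -- predicate columns belong in the key
INCLUDE (ColA); -- output-only columns can be INCLUDEd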

The spills are a result of poor cardinality estimates, not a problem with memory or storage. Memory is allocated for a sort or hash based on the estimates, and cannot grow once execution starts, however much memory the system has available. If the server allocates enough memory to sort 1,000 rows but 10,000,000 turn up, the sort will spill. A lot.
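If you want to catch the spills as they happen, here is a sketch of an Extended Events session (the sort_warning and hash_warning events exist in SQL Server 2012 and later; the session and file names are arbitrary):

CREATE EVENT SESSION SpillTracking ON SERVER
ADD EVENT sqlserver.sort_warning,
ADD EVENT sqlserver.hash_warning
ADD TARGET package0.event_file (SET filename = N'SpillTracking');

ALTER EVENT SESSION SpillTracking ON SERVER STATE = START;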

SQLDB 2016-04-27 13:42:58
Thanks a lot @SQLkiwi for the inputs. I have tried to implement the suggestions you mentioned, but I see only a small improvement, maybe gaining 5-10 seconds. I have also discussed with the developer working on the code to make it less complex.

However, in the new plan I still see lookups, even though I have added covering indexes for most of them. I also tried looking at residual predicates to create missing indexes where scans appeared; some worked, but others did not. Maybe I am not clear on the technique. Could you illustrate one similar example from this new plan, or point me to a link where I can learn in depth how to analyse this?

One last thing: I see lots of warnings on sorts and hashes about spills into tempdb. Does this indicate an issue related to memory or storage? I can see plenty of available memory on the system, but I am not sure about storage.

Please advise, and thanks once again.

SQLkiwi 2016-04-27 13:59:31
Answer updated.