Estimated rows much lower than actual number of rows

Hello,

I am trying to improve the performance of the attached plan.

Earlier it was taking 4 minutes; that has now been reduced to 2 by updating statistics, reducing fragmentation, and removing all of the Key Lookups by adding appropriate covering indexes.

But per the requirement, given that the output is only about 4K records, it is expected to complete in 20-30 seconds.

Please suggest how I can improve the plan. What could be the reason the estimates are so far off from the actuals?

Any help would be appreciated.

New plan, after suggestions: Newsqlplan

By SQLDB, asked Apr 26, 2016 at 01:35 PM

1 answer:

I see this is a duplicate of your other question, but with a non-anonymized plan. Please see the general advice in my other answer. Where I talk about the residual predicate there, the example I use turns out to be tblAssetStatusHistory.

The residual predicate is:

[Screenshot of the residual predicate (sp.png)]

This means there is a missing index like:

 CREATE INDEX dbo_tblAssetStatusHistory_
 ON dbo.tblAssetStatusHistory (id, [Status])
 INCLUDE (InsertedWhen);

A similar analysis can be used to identify other index improvement opportunities. You should look to eliminate the costly scans, improve the seeks where possible, and eliminate the Key Lookups. That will be a good start.

You also appear to be passing a delimited string in and using some sort of string splitting function on it. This is not a good idea if you want good cardinality estimates. Split the string first, and put the results in a temporary table (not table variable) with suitable indexing.
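For illustration, here is a minimal sketch of that pattern. The variable @AssetList, the table #Ids, and the comma delimiter are all assumptions; on SQL Server 2016 or later you could use the built-in STRING_SPLIT, otherwise substitute your existing splitter function:

 -- Hedged sketch: @AssetList and #Ids are hypothetical names
 DECLARE @AssetList varchar(8000) = '101,205,307';

 CREATE TABLE #Ids
 (
     id integer NOT NULL PRIMARY KEY  -- typed and indexed for good estimates
 );

 INSERT #Ids (id)
 SELECT DISTINCT CONVERT(integer, SS.[value])
 FROM STRING_SPLIT(@AssetList, ',') AS SS;

 -- The main query then joins to #Ids instead of calling the splitter inline

Joining to the indexed temporary table gives the optimizer real statistics to work with, instead of the fixed guess it uses for the output of a splitting function.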

Identify and work on one problem at a time. Do break the query up into simpler parts if you can; struggling with one big plan when several smaller ones would be much easier to tune is a common waste of developers' time. Store small intermediate results in properly typed and indexed temporary tables to improve cardinality estimates at key points, as sketched below.
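As a hedged sketch of that idea (the filter value and the staging table name are invented for illustration):

 -- Stage 1: materialize a small intermediate result
 SELECT ASH.id, ASH.[Status], ASH.InsertedWhen
 INTO #StatusHistory
 FROM dbo.tblAssetStatusHistory AS ASH
 WHERE ASH.[Status] = 'A';  -- hypothetical filter

 -- Index it so later stages get accurate estimates and efficient access
 CREATE CLUSTERED INDEX cx ON #StatusHistory (id);

 -- Stage 2: the rest of the query joins to #StatusHistory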

Update

If you are still seeing lookups, you have not created a covering index by definition. All columns from the Output List of the lookup plus any columns referenced in a predicate on the lookup are required to make the index covering.
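To illustrate with hypothetical names: if a lookup's Output List shows [ColA] and [ColB], and the lookup has a predicate on [ColC], a covering replacement for a seek index on (KeyCol) would be:

 -- Before: the seek index does not cover, so every row needs a lookup
 -- CREATE INDEX IX_Example ON dbo.SomeTable (KeyCol);

 -- After: predicate column added as a key; Output List columns included
 CREATE INDEX IX_Example_Covering
 ON dbo.SomeTable (KeyCol, ColC)
 INCLUDE (ColA, ColB);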

The spills are a result of poor cardinality estimates, not a problem with memory or storage. Memory is allocated for a sort or hash based on the estimates, and the grant cannot grow once execution starts, however much memory the system has available. If the system allocates enough memory to sort 1,000 rows but 10,000,000 turn up, the sort will spill. A lot.
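As one way to see this in action (a sketch using a standard DMV, run in a separate session while the query executes), you can compare the memory actually granted with the amount the optimizer would have wanted under perfect estimates:

 SELECT MG.session_id,
        MG.requested_memory_kb,
        MG.granted_memory_kb,
        MG.ideal_memory_kb  -- grant size under perfect cardinality estimates
 FROM sys.dm_exec_query_memory_grants AS MG;

A large gap between granted_memory_kb and ideal_memory_kb is consistent with the estimate-driven spills described above.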

By SQLkiwi ♦, answered Apr 26, 2016 at 07:01 PM
SQLDB, Apr 27, 2016 at 01:42 PM

Thanks a lot @SQLkiwi for the inputs. I have tried to implement your suggestions, but I see only a very small improvement, maybe gaining 5-10 seconds. I have also discussed with the developer reworking the code to make it less complex.

However, in the new plan I still see lookups, even though I have added covering indexes for most of them. I also tried looking at residual predicates to create missing indexes where scans appeared; some worked, but some did not. Maybe I am not clear on that. Can you illustrate one similar example from this new plan, or refer me to a link where I can learn how to analyse this in depth?

One last thing: I see lots of warnings on sorts and hashes spilling into tempdb. Does this indicate an issue with memory or storage? I can see plenty of available memory on the system, but I am not sure about storage.

Please advise, and thanks once again.

SQLkiwi ♦, Apr 27, 2016 at 01:59 PM

Answer updated.
