Slow Query – Incorrect Estimated Executions?

ijabit 2015-04-01 17:54:11

This query takes 1–2 minutes to execute and I can't figure out what the problem is. I noticed that the Estimated Rows differ from the Actual Rows by millions in some cases. I tried updating statistics on all of the involved tables, but the row counts still seem to be off because the number of executions is much higher than the estimates. How can I get the estimates to be more accurate? Is that even what is causing the slowdown?

EDIT: I made some changes to the query from the comments below. Attached is the new plan, with the query text run from inside it. I think I was able to push the filter down a bit further, and I removed the datetime2-to-datetime conversion manually as a test (it didn't seem to help much). Here it is: slow query – new

EDIT 2: Thanks for the comments, they're really helping me along. I think LINQ isn't that great at pushing filters down past a .Union statement. So I moved the .Union as far down in the queries as I could and switched some things around to eventually eliminate the .Union (though SQL still did a union anyway). What took over a minute to run before now takes < 10 seconds! The new plan is here. I'd like to have this run in < 5 seconds, but 10 is a lot better. Now it looks like the Distinct Sort is taking most of the time. I'm using that to get the last record by date. Instead of sorting descending and grabbing the first row, I tried using Where Date = MAX(Date). That gave a very different plan with the same results, but it was actually a second or two slower. Maybe the sort isn't as big an issue as the plan makes it out to be?
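For reference, a third common way to express "last record by date" is ROW_NUMBER, which often lets the optimizer avoid a Distinct Sort entirely. A minimal sketch against a hypothetical ProviderPaymentHistory table (table and column names are illustrative, not taken from the actual schema):

```sql
-- Hypothetical schema: return the most recent payment row per provider.
-- Ranking by date within each provider means "latest per group" can be
-- answered with one pass and a Filter, rather than a sort-plus-distinct.
WITH Ranked AS
(
    SELECT
        ph.ProviderID,
        ph.PaymentID,
        ph.PaymentDate,
        rn = ROW_NUMBER() OVER
             (PARTITION BY ph.ProviderID ORDER BY ph.PaymentDate DESC)
    FROM dbo.ProviderPaymentHistory AS ph
)
SELECT ProviderID, PaymentID, PaymentDate
FROM Ranked
WHERE rn = 1;
```

Whether this beats the Distinct Sort or the Date = MAX(Date) form depends on the available indexes; if an index on (ProviderID, PaymentDate DESC) exists, the ranking can be satisfied in index order without any sort at all.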

Aaron Bertrand 2015-04-02 12:22:55
Can you run this query from Plan Explorer, so we get the runtime metrics as well? And can you post the entire query somewhere (the query text included in the plan XML is truncated by SQL Server).
SQLkiwi 2015-04-03 10:14:12
The problematic area of the execution plan is:

Plan section

There are multiple issues here. From a runtime performance point of view, one of the main problems is the late application of the Filter:

Late filter

The filtering expression refers to a column on the Provider Payment History table. The optimizer is unable to push the Filter down past the Outer Join / Top / Semi Join combination, but you may be able to rework the original query specification to achieve this (while retaining correct results). If you can do that, it should improve performance markedly. The optimizer is particularly limited and conservative about pushing Filters below an outer apply with a top.
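As a schematic illustration of the kind of rewrite meant here (all names are hypothetical; the real query is LINQ-generated and not fully visible), the predicate can sometimes be moved inside the applied subquery so it is evaluated before the Top, rather than late in the plan:

```sql
-- Before (schematic): the filter sits above the OUTER APPLY / TOP,
-- so every outer row pays for the apply before rows are discarded.
SELECT p.ProviderID, h.PaymentDate
FROM dbo.Providers AS p
OUTER APPLY
(
    SELECT TOP (1) ph.PaymentDate, ph.Status
    FROM dbo.ProviderPaymentHistory AS ph
    WHERE ph.ProviderID = p.ProviderID
    ORDER BY ph.PaymentDate DESC
) AS h
WHERE h.Status = N'Paid';

-- After (schematic): the predicate moves inside the apply, so the
-- Top operates only on qualifying history rows.
SELECT p.ProviderID, h.PaymentDate
FROM dbo.Providers AS p
OUTER APPLY
(
    SELECT TOP (1) ph.PaymentDate
    FROM dbo.ProviderPaymentHistory AS ph
    WHERE ph.ProviderID = p.ProviderID
        AND ph.Status = N'Paid'
    ORDER BY ph.PaymentDate DESC
) AS h;
```

Note the two forms are not automatically equivalent: the first keeps providers whose most recent row happens to be Paid, the second keeps each provider's most recent Paid row. That difference is exactly why the rewrite has to be checked for correct results, as noted above.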

Aside from the outer apply / top pattern not being particularly optimizer-friendly, there are other issues as well. Certainly, estimates can have a profound effect on the plan shape and the operators chosen by the optimizer.

In particular, the Merge Interval + Seek pattern here is caused by a mismatch in parameter types. Specifically, the following parameters appear to be incorrectly typed, compared with the columns they are applied to:

@p__linq__0

These parameters are typed datetime2(7), but the plan XML shows them associated with GetRangeWithMismatchedTypes function calls, meaning they do not match the column types they are compared against. Such mismatches often result in poor estimates.
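To illustrate the effect (hypothetical table and column names): when a datetime2(7) parameter is compared with a datetime column, SQL Server must build a dynamic seek range at runtime, which is what surfaces as the Merge Interval + Seek pattern. Declaring the parameter to match the column type gives a plain seek:

```sql
-- Column CreatedOn is datetime; a datetime2(7) parameter forces the
-- GetRangeWithMismatchedTypes dynamic seek (Merge Interval + Seek).
DECLARE @p_mismatched datetime2(7) = '2015-04-01';

SELECT PaymentID
FROM dbo.ProviderPaymentHistory
WHERE CreatedOn >= @p_mismatched;

-- Matching the parameter type to the column avoids the dynamic range.
DECLARE @p_matched datetime = '2015-04-01';

SELECT PaymentID
FROM dbo.ProviderPaymentHistory
WHERE CreatedOn >= @p_matched;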

There are other things as well, but relocating the filter and addressing the incorrect types are probably the main things. Almost certainly, the execution plan and performance will be very different after making those changes.


Don't be misled by the high estimated cost on the Hash Match Flow Distinct:

Flow Distinct

The 59.3% shown is the optimizer's internal estimate, used only at compilation time to make internal choices about plan operators and shape. These cost numbers never show runtime performance. The Flow Distinct in this plan simply passes on the first row it encounters, though it is executed 1,736 times overall. Even so, the actual impact of this operator on overall performance is negligible.

The poor performance of the query is now almost all due to the 5-million-odd seeks into the Provider Payments table, and to a slightly lesser extent, the Lazy Spool. Both shown below:

Seeks and Spool

Check you have the right indexing on Provider Payments. It seems you need an index keyed on ProviderID, PaymentID whereas the current (FK-associated) index only contains ProviderID. This may or may not be enough to reduce the seek impact to acceptable levels; it depends on factors I can't inspect from here.
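Expressed as DDL, the suggested index might look like the sketch below (the index name is illustrative, and the exact key order should be verified against the real query before creating anything):

```sql
-- The existing FK-style index keyed only on ProviderID forces extra
-- work per seek; keying on both columns lets each of the ~5 million
-- seeks resolve directly.
CREATE NONCLUSTERED INDEX IX_ProviderPayments_ProviderID_PaymentID
ON dbo.ProviderPayments (ProviderID, PaymentID);
```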

The remaining issues in that area of the plan are:

  1. The incorrect choice of Nested Loops on that join;
  2. Seriously inaccurate cardinality estimates in general; and
  3. The Lazy Spool on the preceding join.

Investigating these would require detailed analysis and full access to the database. I can say that the unfortunate choices are almost certainly due to the way the query is currently expressed in SQL, but there are no obvious quick fixes like a change in indexing. Sorry about that, but there are limits here 🙂

ijabit 2015-04-07 20:43:26
Okay, I made some changes based on your feedback. I think I'm closer but there's still a huge performance issue with the Hash Match operation and a ton of rows at the bottom of the plan. Please take a look and let me know what else I can tweak.


SQLkiwi 2015-04-08 00:58:10
Answer updated.
ijabit 2015-04-10 17:52:10
Okay, I edited my question after making some more changes. It's much improved now! There's a Distinct Sort in the new plan showing 63.6%, but as you said in your last update, that number may not be accurate. How can you tell what costs the most if the costs are just estimates? Do you see anything else that might cut the query down some more?
SQLkiwi 2015-04-10 19:50:49
Yes, that looks much better. There are always things that can be improved, but I think we're at the stage where this is becoming consultancy work 😉

Just quickly though: (1) You could look to remove the Merge Intervals as before (your parameter types have gone back to being incorrectly typed); and (2) the Eager Index Spool suggests an index is missing on MessageQueue (ReferenceID, GatewayID, CreatedOn) INCLUDE (Notes, SendOn).
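As DDL, that suggestion corresponds to something like the following (the index name is illustrative; the columns are those named above):

```sql
-- Materializes permanently the index the Eager Spool is otherwise
-- rebuilding at execution time.
CREATE NONCLUSTERED INDEX IX_MessageQueue_ReferenceID_GatewayID_CreatedOn
ON dbo.MessageQueue (ReferenceID, GatewayID, CreatedOn)
INCLUDE (Notes, SendOn);
```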

As far as the "actual cost" thing is concerned, there is no 100% accurate way to assess that right now. Perhaps things will change in a future release of SQL Server.