Any ideas how to get better performance for this query? It's a vendor app, so I can't change the query itself, only indexes, statistics, and plan guides.

By Hans, asked Apr 01, 2015 at 06:03 PM

1 answer

The biggest problem in the plan appears to be this section:

[Plan fragment: the nested loops join and index seek]

The query optimizer chose a nested loops join with an index lookup strategy because it expected two rows from the hash join. In the event, 1.1M rows arrived. In addition, the Index Seek is actually 15 different seek operations in one, meaning 1,110,464 * 15 = 16,656,960 seeks overall. Given that the table in question only has 10,798 rows in total, this is a remarkably inefficient outcome. A hash or merge join would have been more appropriate, and the optimizer would certainly have chosen one of those had it known the real number of rows that would arrive from the hash join.
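
If it helps to confirm exactly where the estimates diverge, the post-execution ("actual") plan can be captured so that estimated and actual row counts are visible on each operator. A minimal sketch, assuming you can run the statement yourself from SSMS; the vendor query text itself is only a placeholder:

    -- Return the post-execution plan for statements run in this session,
    -- so each operator shows both estimated and actual row counts.
    SET STATISTICS XML ON;
    GO
    -- ... run the vendor query here ...
    GO
    SET STATISTICS XML OFF;
    GO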

This is a typical problem with deep query trees (many operators, lots of joins and predicates). The quality of estimation possible from base table statistics is a lot more limited than is commonly supposed. Estimating the selectivity of a single join can be tricky enough, never mind the 22 joins and other complex operations on the branch of the plan that feeds into the hash join:

[Plan fragment: input to the hash join]

Sudden increases in row count are especially unlikely to be guessed by the optimizer (and anything beyond a few complex joins becomes essentially guesswork).

You could certainly ensure base statistics are representative of the underlying data, and create those statistics marked as 'missing' in the execution plan where this is possible (not all data types support statistics, e.g. xml). I doubt this will improve the quality of the estimation at the crucial point, however. Neither is it likely that a new or improved index will help (except by accident, perhaps).
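
As a sketch of that statistics work, assuming you can identify the relevant tables and columns from the plan (the object, column, and statistics names below are placeholders, not taken from the actual plan):

    -- Refresh existing statistics on a table involved in the misestimated
    -- join, using a full scan rather than the default sample.
    UPDATE STATISTICS dbo.SomeVendorTable WITH FULLSCAN;

    -- Create a statistics object reported as 'missing' in the plan, where
    -- the column's data type supports it (xml, for example, does not).
    CREATE STATISTICS st_SomeVendorTable_SomeColumn
        ON dbo.SomeVendorTable (SomeColumn)
        WITH FULLSCAN;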

If you have access to the underlying SQL, you might be able to force a hash join instead of the problematic nested loops join shown above. A hash join will likely spill (just like the existing one), but it would probably still improve performance markedly, compared with the current unfortunate seeking arrangement.
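
Since the question says only indexes, statistics, and plan guides can be changed, the hint could also be attached with a plan guide rather than by editing the query. A sketch under those assumptions follows; the guide name is invented, and @stmt must match the statement text the application submits exactly, character for character:

    EXEC sys.sp_create_plan_guide
        @name            = N'PG_VendorQuery_HashJoin',   -- invented name
        @stmt            = N'SELECT ... exact vendor statement text ...',
        @type            = N'SQL',
        @module_or_batch = NULL,   -- NULL: @stmt is treated as the whole batch
        @params          = NULL,
        @hints           = N'OPTION (HASH JOIN)';
    -- OPTION (HASH JOIN) constrains every join in the statement to hash join,
    -- which may be too blunt here. The alternative is to supply a complete
    -- corrected plan via OPTION (USE PLAN N'<showplan xml>'), which appears to
    -- be what the compilation-time caveat below refers to.

Either way, test the result carefully: a plan guide silently stops matching if the application ever changes the statement text.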

Note that none of this will solve the underlying problem of the poor estimates, nor is it guaranteed to keep working for future states of the database. It may also extend compilation time for the query quite a bit, because the XML showplan is expensive to parse and is only used to guide the optimizer's search rather than to bypass it.

The best solution might be to raise the performance problem with the vendor, because they are in a position to write more sensible underlying code, which is the key to long-term success here.

By SQLkiwi, answered Apr 03, 2015 at 11:46 AM
Hans, Apr 07, 2015 at 08:05 AM:

Thank you for a good and well-argued answer. Regards, Hans
