We are trying to tune this query and are stumped as to why the query plan shows the highest cost hitting an index seek, twice, on a table that is not referenced anywhere in the query. We have looked at foreign keys and triggers (there are no views in the query) to find out where the LA_BOMHeader table is used in the query. No success.
The other two issues that jump out are the table heaps (3rd-party tables in Great Plains) and another table (a clustered index scan) that returns almost two million rows. We believe the statistics on that one are up to date, but the plan still shows a huge disparity between estimated and actual rows. Is it possible that the stats are correct, yet we still see such a huge variation?
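One way to verify the "stats are up to date" assumption is to query `sys.dm_db_stats_properties` (available from SQL Server 2008 R2 SP2 / 2012 SP1 onward), which shows when each statistic was last updated, how many rows were sampled, and how many modifications have accumulated since. A minimal sketch, assuming a placeholder table name:

```sql
-- dbo.LargeScannedTable is a hypothetical name; substitute the
-- table behind the two-million-row clustered index scan.
SELECT
    s.name                  AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,        -- low sampling can still yield bad estimates
    sp.modification_counter -- changes since the last update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.LargeScannedTable');
```

Note that fresh statistics do not guarantee good estimates: sampled statistics, skewed data, or predicates the optimizer cannot evaluate against a histogram can all produce large estimate-versus-actual gaps even when `last_updated` is recent.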
What could be the best line of attack in tuning this monster query? Thank you.
On further investigation, turned out the
By Yaraj 37 asked Jul 22, 2015 at 02:59 PM
The section of the plan is:
The 1,833,168 rows is a total over 8,688 iterations of the scan (an average of 211 rows per execution). There is a poor estimate at the Filter: 2,798 rows actual versus 1 estimated. This error is compounded by the two intermediate joins, so the scan is performed 8,688 times versus the optimizer's estimate of 5. Assuming correct indexing, the optimizer would never have chosen the scan if it had known about the 8,688 iterations.
The separate Filter looks odd. Unless
The Clustered Index Scan is also something you should look at. Instead of a residual predicate on the nested loops join on the
Ensure there is a suitable index on the scanned table. From the plan, an ideal index would be:
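The index definition from the original answer is not reproduced above. As a generic sketch of the shape usually recommended for the inner side of a nested loops join (all column names here are hypothetical; map them to the join predicate, residual filter, and output columns in your actual plan):

```sql
-- Hypothetical names: key the index on the columns used to locate
-- rows (join/seek column first, then the filtered column), and add
-- the remaining output columns as INCLUDEs so the index covers the
-- query and no lookups are needed.
CREATE NONCLUSTERED INDEX IX_ScannedTable_JoinCol
ON dbo.ScannedTable (JoinColumn, FilterColumn)
INCLUDE (OutputColumn1, OutputColumn2);
```

With such an index in place, the repeated scan can become a seek, so the per-iteration cost no longer multiplies by the (badly underestimated) number of executions.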
You should obviously review that idea against the existing indexes and in the context of your workload as a whole before deciding to implement it.
The first thing I would do is lose the CTEs. CTEs are NOT temporary tables. Repeated references to the same CTE generally result in duplicated parts of the finished plan - perhaps you have noticed this already.
Anyway, replace the CTEs with proper temporary tables as a first step. This will break the 'monster' into smaller parts, which should make it easier for you to identify missing/incorrect indexes and other tuning opportunities. Using temporary tables will also give the optimizer better statistical information to work with.
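A minimal sketch of that rewrite, using hypothetical table and column names: a multiply-referenced CTE becomes a temp table that is populated once, after which every reference reads the materialized rows and benefits from the temp table's own statistics.

```sql
-- Before (hypothetical): each reference to Totals may re-evaluate
-- the CTE body, duplicating work in the plan.
--
-- WITH Totals AS
--     (SELECT CustomerID, SUM(Amount) AS Total
--      FROM dbo.Sales GROUP BY CustomerID)
-- SELECT t1.CustomerID, t1.Total
-- FROM Totals AS t1
-- JOIN Totals AS t2 ON t2.Total > t1.Total;

-- After: materialize the intermediate result once.
SELECT CustomerID, SUM(Amount) AS Total
INTO #Totals
FROM dbo.Sales
GROUP BY CustomerID;

-- An index on the temp table also gives the optimizer
-- fresh statistics on the intermediate result.
CREATE CLUSTERED INDEX cx ON #Totals (CustomerID);

SELECT t1.CustomerID, t1.Total
FROM #Totals AS t1
JOIN #Totals AS t2 ON t2.Total > t1.Total;
```

The example query is illustrative only; the point is that each CTE body runs exactly once, and each smaller step can be examined and tuned on its own.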
By SQLkiwi ♦ 6.6k answered Jul 23, 2015 at 05:39 AM