I'm trying to optimize the attached plan. The problem lies in two tables:
which already have the following nonclustered indexes:
If I create the following indexes, the execution plan changes completely and the "Nested Loops" and "Lazy Spool" steps disappear. But I'm wondering whether it is possible to optimize this plan with statistics alone, without introducing new indexes with permuted columns:
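Since the plan is anonymized, the table and column names below are placeholders for the real objects. If the aim is to improve cardinality estimates on a column combination without paying for another index, a multi-column statistics object is the usual tool. A hedged sketch:

```sql
-- Hypothetical names: dbo.Table1 (ColA, ColB) stands in for the
-- anonymized objects. A multi-column statistic gives the optimizer
-- density information for the column combination, without the
-- storage and maintenance cost of a new index.
CREATE STATISTICS st_Table1_ColA_ColB
ON dbo.Table1 (ColA, ColB)
WITH FULLSCAN;
```

Note that a multi-column statistic carries densities only for leading prefixes, and a histogram only on the first named column, so it is rarely a full substitute for an index with a different column order.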
Any idea for appropriate statistics or other optimization trick?
By VeskoJl 1 asked Mar 06, 2015 at 09:51 AM
You've anonymized everything, so it's tough to correlate your proposed index to what's going on in the actual query, and thus impossible to offer any guidance on that. The plan is extremely complex. Again, I can't see the statement since it's been anonymized, but looking at the join diagram leads me to believe that SQL Server will have a hard time optimizing this no matter what you do:
This is also just an estimated plan, so I can't get a feel for how many of these estimates are anywhere near accurate. Are there really 128 trillion rows possible anywhere? Are your join conditions sensible or is it feasible that you have some unintended Cartesian products involved?
You definitely want to get rid of the two table spools (one is estimated to be executed 16 billion times!) and the two key lookups. But with the information I have available to me right now, I can't begin to suggest how to do that.
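For the key lookups specifically (the spools are a separate issue), the standard fix is to make an existing nonclustered index covering by adding the looked-up output columns. All names here are placeholders, since the real columns are anonymized:

```sql
-- Hypothetical example: INCLUDE the columns the lookup was fetching,
-- so the extra trip to the clustered index is no longer needed.
CREATE NONCLUSTERED INDEX IX_Table1_ColA
ON dbo.Table1 (ColA)
INCLUDE (ColB, ColC)
WITH (DROP_EXISTING = ON);
```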
By Aaron Bertrand ♦ 1.7k answered Mar 10, 2015 at 02:00 PM
As Aaron noted, it's very hard to say on the basis of an estimated, anonymized plan alone, particularly where the plan is on the large side.
That said, from the general shape of things, it seems most of the effort is centred around finding the highest (or lowest) row from a grouping. There are two repeating patterns:
The above fragment is associated with row numbering and then filtering for a particular row number (typically 1). This is convenient to write, but the Segment and Sequence Project operators require a temporary end to parallelism, and all rows must be examined and numbered, not just the 'first' one we are interested in.
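That plan shape typically comes from T-SQL like the following (illustrative names only, since the real query is anonymized):

```sql
-- Number the rows in each group, then keep only the 'first' one.
-- The OVER clause is what produces the Segment and Sequence Project
-- operators in the plan; every row in the group gets numbered.
SELECT numbered.*
FROM
(
    SELECT
        t.*,
        rn = ROW_NUMBER() OVER (
            PARTITION BY t.GroupID
            ORDER BY t.SomeDate DESC)
    FROM dbo.SomeTable AS t
) AS numbered
WHERE numbered.rn = 1;
```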
This second pattern hints at a similar task: joining to an aggregate derived from another table. The optimizer's decision to spool these subtrees is a side-effect of cost estimates that are highly unlikely to be accurate.
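In source form, this second pattern usually looks something like the following (placeholder names again):

```sql
-- Join each outer row to a per-group aggregate of another table.
-- When the optimizer expects the aggregate to be reused many times,
-- it may cache that subtree in a spool.
SELECT o.*, agg.MaxValue
FROM dbo.OuterTable AS o
JOIN
(
    SELECT i.GroupID, MaxValue = MAX(i.SomeValue)
    FROM dbo.InnerTable AS i
    GROUP BY i.GroupID
) AS agg
    ON agg.GroupID = o.GroupID;
```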
Rather than asking for very specific advice about the indexes and spools, I think you should be looking at ways to reduce the amount of work involved in locating the min/max rows. That would probably be a new question though, and would require more detail.
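One common rewrite along those lines (not necessarily applicable here, since the real query is hidden) replaces the number-then-filter pattern with a correlated `TOP (1)`, which can seek directly to the wanted row when a supporting index exists, instead of numbering every row in the group:

```sql
-- Fetch only the single highest row per outer row.
-- With an index on (GroupID, SomeDate DESC), each application of the
-- inner query can be a seek rather than a scan-and-number.
SELECT o.*, ca.SomeDate
FROM dbo.OuterTable AS o
CROSS APPLY
(
    SELECT TOP (1) t.SomeDate
    FROM dbo.SomeTable AS t
    WHERE t.GroupID = o.GroupID
    ORDER BY t.SomeDate DESC
) AS ca;
```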
By SQLkiwi ♦ 6.6k answered Mar 18, 2015 at 04:05 AM