best way to get rid of nested loops but I need to retrieve the fields

Me 2013-04-10 08:30:08

Dan Holmes 2013-04-10 11:17:36
Why do you think the nested loop is a problem? The duration is 182 ms. Is this data a subset of what you will eventually encounter?

I think the optimizer got this one right.

Me 2013-04-10 17:40:30
The data set should be approximately the same size. Basically, I need the SP to run as fast as possible because I plan on it being called approximately 20,000 BR/sec. Needed fresh eyes on it to see if anyone could do anything better.

SQLkiwi 2013-04-19 05:31:35
Any procedure that will be called 20,000 times per second will need to be very carefully optimized and as simple as possible. The anonymized plan makes it impossible to make specific recommendations, but there are some general observations to make about queries that are called very often:

  1. A smaller number of joins is better. The current plan has eleven join operators of one sort or another, which is quite a lot.
  2. Nested loops with optimal indexing (or possibly merge join for higher row counts and suitable input ordering without explicit sorts) is generally preferred over hash join. Hash join requires a memory grant, may spill to tempdb if the fixed memory grant is exceeded at runtime, and is blocking on the build input.
  3. Too many index seeks in the plan have residual predicates. Ideally, all predicates would be seekable.
  4. There are two explicit sorts, both of which require a memory grant, may spill to disk, and are fully blocking. The first is associated with some sort of row numbering scheme, the second appears to be needed to resolve a presentation order. Ideally, suitable indexes would provide rows in the required order naturally, without explicit sorts.
  5. There is a large variance between estimated and actual row counts at various points in the plan. Optimizer decisions based on incorrect numbers can be worse than guesses. A simpler plan with tighter indexing and custom statistics would be much more likely to produce accurate optimizer reasoning.
  6. The row number + filter arrangement is unlikely to be optimal. Numbering a complete set and filtering can frequently be replaced with a materialized view or a denormalized table maintained with triggers.
  7. There is a table scan with a residual predicate on the inner side of a nested loops join. It is worth checking to see if there is a better alternative than scanning this table 2423 times. It is also possible the outer join + filter could be replaced by an EXISTS or NOT EXISTS check.
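To illustrate point 3, here is a minimal sketch of seekable versus residual predicates. The table and column names are assumptions for illustration only; the real schema is hidden by the anonymization:

```sql
-- Hypothetical table dbo.Orders. With an index keyed on CustomerID alone,
-- the StatusCode test below becomes a residual predicate: the seek locates
-- rows by CustomerID, then re-checks StatusCode row by row.
CREATE INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);

-- Widening the key makes both predicates seekable, and including the
-- selected column makes the index covering, avoiding lookups entirely.
CREATE INDEX IX_Orders_CustomerID_StatusCode
    ON dbo.Orders (CustomerID, StatusCode)
    INCLUDE (OrderTotal);

SELECT OrderTotal
FROM dbo.Orders
WHERE CustomerID = @CustomerID
  AND StatusCode = 'A';
```

With the second index, the seek predicate covers both conditions and no residual re-check appears in the plan.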
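Point 6, the denormalized table maintained by a trigger, might look something like the sketch below. Again, `dbo.Orders`, `dbo.LatestOrder`, and the "latest order per customer" requirement are assumptions standing in for whatever the row-numbering scheme in the anonymized plan actually computes:

```sql
-- Hypothetical denormalized table holding each customer's latest order,
-- so frequent readers avoid numbering and filtering the full set.
CREATE TABLE dbo.LatestOrder
(
    CustomerID int NOT NULL PRIMARY KEY,
    OrderID    int NOT NULL,
    OrderDate  datetime2 NOT NULL
);

CREATE TRIGGER trg_Orders_MaintainLatest
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Keep only the newest inserted row per customer, then merge it in.
    MERGE dbo.LatestOrder AS t
    USING
    (
        SELECT CustomerID, OrderID, OrderDate
        FROM
        (
            SELECT CustomerID, OrderID, OrderDate,
                   ROW_NUMBER() OVER
                       (PARTITION BY CustomerID
                        ORDER BY OrderDate DESC) AS rn
            FROM inserted
        ) AS x
        WHERE rn = 1
    ) AS s
        ON t.CustomerID = s.CustomerID
    WHEN MATCHED AND s.OrderDate > t.OrderDate THEN
        UPDATE SET OrderID = s.OrderID, OrderDate = s.OrderDate
    WHEN NOT MATCHED THEN
        INSERT (CustomerID, OrderID, OrderDate)
        VALUES (s.CustomerID, s.OrderID, s.OrderDate);
END;
```

The write path pays a small extra cost, but a query called 20,000 times per second then becomes a single-row primary key seek against `dbo.LatestOrder`.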
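The outer join + filter rewrite mentioned in point 7 is a standard transformation. A hedged sketch, again with hypothetical table names:

```sql
-- Outer join followed by a NULL test on the inner side...
SELECT o.OrderID
FROM dbo.Orders AS o
LEFT JOIN dbo.Shipments AS s
    ON s.OrderID = o.OrderID
WHERE s.OrderID IS NULL;

-- ...can usually be expressed directly as NOT EXISTS, which the optimizer
-- can implement as an anti semi join, without producing and then
-- discarding the joined columns.
SELECT o.OrderID
FROM dbo.Orders AS o
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.Shipments AS s
    WHERE s.OrderID = o.OrderID
);
```

The `EXISTS` form also states the intent more clearly: the query asks only whether a matching row exists, not for any of its columns.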

Overall, this plan is a long way short of optimal. It is doubtful that it will perform adequately or come close to being executed 20,000 times per second. The plan is complex enough that reworking it is unlikely to be trivial, and it is in fact impossible given the important information masked by the anonymization process.