Why is my intermediate result set so huge?

JNK 2013-07-31 12:15:34

I'm trying to optimize this stored procedure, which involves aggregations from a number of joined tables.

I made a change to one of the views that is joined so that it runs significantly faster (using a window function instead of GROUP BYs and eliminating joins, so it returns the same results in 4 seconds instead of 33), but now the proc is taking LONGER to run and I'm at a loss as to why.
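For context, the kind of rewrite described is sketched below with hypothetical table and column names (dbo.Orders, CustomerID, Amount); this is not the actual view definition, just the general pattern of replacing a GROUP BY derived table joined back to the base rows with a single-pass window aggregate:

```sql
-- Before: aggregate in a derived table, joined back to the base rows
SELECT o.CustomerID, o.OrderID, o.Amount, t.TotalAmount
FROM dbo.Orders AS o
JOIN (
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM dbo.Orders
    GROUP BY CustomerID
) AS t
    ON t.CustomerID = o.CustomerID;

-- After: same results in a single pass, no self-join
SELECT o.CustomerID, o.OrderID, o.Amount,
       SUM(o.Amount) OVER (PARTITION BY o.CustomerID) AS TotalAmount
FROM dbo.Orders AS o;
```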

SQLkiwi 2013-08-02 02:38:54
> Why is my intermediate result set so huge?

The query plan is optimized on the basis that the table-valued function returns one row:

[plan screenshot: the table-valued function operator, estimated rows = 1]
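A common cause of this one-row guess is a multi-statement table-valued function: the optimizer cannot see inside it, and (in versions before SQL Server 2014) assigns it a fixed estimate of one row regardless of what it actually returns. A minimal illustration with a hypothetical function, not the one in the question:

```sql
-- Multi-statement TVF: opaque to the optimizer
CREATE FUNCTION dbo.BigTvf ()
RETURNS @t TABLE (ID integer PRIMARY KEY)
AS
BEGIN
    -- May insert many thousands of rows at run time...
    INSERT @t (ID)
    SELECT object_id FROM sys.all_objects;
    RETURN;
END;
GO

-- ...yet the plan for this query estimates exactly 1 row
-- coming out of the function
SELECT COUNT(*) FROM dbo.BigTvf();
```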

The optimizer knows that its estimate here might be slightly wrong, so it introduces a table spool to cache the results of the view:

[plan screenshot: table spool caching the results of the view subtree]

The reasoning is that evaluating the subtree below the spool is expensive, so caching the results will be a big win if there turns out to be more than one row on the upper input to the nested loops left outer join.

Unfortunately, the estimate of the number of rows to be cached by the spool is also wrong. The estimate is 6,327 rows, but there are 78,638 rows at runtime:

[plan screenshot: table spool, estimated 6,327 rows vs 78,638 actual]

Overall, the effect is that the 78,638 rows are read from the table spool 6,143 times, giving a grand total of 483,073,234 rows:

[plan screenshot: 483,073,234 rows returned from the table spool]
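To see exactly where estimates and run-time counts diverge like this, capture the actual execution plan; for example, SET STATISTICS XML ON returns a showplan that records per-operator runtime counters (dbo.YourProc below is a stand-in for the real procedure name):

```sql
SET STATISTICS XML ON;
EXEC dbo.YourProc;       -- stand-in for the actual procedure
SET STATISTICS XML OFF;

-- In the returned showplan XML, compare each operator's
-- EstimateRows attribute with RunTimeInformation/ActualRows;
-- for the Table Spool, ActualRebinds and ActualRewinds show
-- how many times the cached result was rebuilt or replayed.
```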

One way to give the optimizer better information would be to materialize the results of the table-valued function in a temporary table; unlike the function, a temp table carries an accurate row count (and statistics) the optimizer can use. With that information it will likely choose a hash or merge outer join instead of nested loops, resulting in a plan that processes the output of the view only once.
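A minimal sketch of that approach, reusing the hypothetical names from earlier (the real column list and join keys depend on the function's definition):

```sql
-- Materialize the function's output once; unlike the function,
-- a temp table exposes a real row count and statistics
SELECT ID
INTO #TvfResults
FROM dbo.BigTvf();              -- hypothetical function name

-- Optional: index the join key to support the join
CREATE CLUSTERED INDEX cx ON #TvfResults (ID);

-- Join the temp table in place of the function; with an accurate
-- cardinality estimate the optimizer can choose a hash or merge
-- outer join and read the materialized rows only once
SELECT m.OrderID, t.ID
FROM dbo.Orders AS m            -- hypothetical table
LEFT JOIN #TvfResults AS t
    ON t.ID = m.OrderID;
```

The index and join keys here are purely illustrative; the point is that the temp table gives the optimizer a realistic row count before the join strategy is chosen.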