The Table Spool operator costed at 32.9% appears to have a massive difference between the estimated and actual number of rows. Is this a stats problem? Do I need to remove this operator?

Pearljammer1 2015-04-16 09:25:37

SQLkiwi 2015-04-17 06:33:22
The estimated number of rows is wrong at the Eager Spool, but that is not where the inaccuracy originates. The first point where estimates significantly diverge from actual row counts is here:

Plan fragment

There are two different issues at this join. First, the estimated number of rows from the Index Seek shown is a bit low, most likely due to the complex predicates:


The second issue is that the lower input to the hash join comes from a very deep tree of joins and other operations. The optimizer is most likely making an educated guess at the result of the hash join. You may get better results by materializing the 1,491 rows on the lower input to a temporary table, then joining to that.
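A minimal sketch of that materialization step (all table and column names here are placeholders, since the actual query is not shown):

```sql
-- Materialize the small lower input (about 1,491 rows) into a temp table.
-- #LowerInput, dbo.Outer, and JoinKey are hypothetical names.
SELECT JoinKey, Col1, Col2
INTO #LowerInput
FROM ...;                    -- the deep tree of joins currently feeding the hash join

-- An index on the join key gives the optimizer fresh, accurate statistics
-- on the materialized set and enables an efficient join strategy.
CREATE CLUSTERED INDEX cx ON #LowerInput (JoinKey);

SELECT o.*, li.Col1, li.Col2
FROM dbo.Outer AS o
JOIN #LowerInput AS li
  ON li.JoinKey = o.JoinKey;
```

Because the temp table has its own statistics, the optimizer no longer has to guess at the cardinality of the deep join tree when costing the rest of the plan.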

That said, it is unusual to see a join producing many more rows than either of its inputs. This looks like a many-to-many join, which is often the result of a logic error in the query or unexpected duplicates. Review the query at this join to make sure the logic is correct and that you are not introducing more rows than necessary.
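One quick way to check for unexpected duplicates is to test whether the join key is actually unique on each input (again, names here are hypothetical):

```sql
-- Any rows returned indicate duplicate join-key values on this input,
-- which would make the join many-to-many and multiply the row count.
SELECT JoinKey, COUNT(*) AS cnt
FROM #LowerInput
GROUP BY JoinKey
HAVING COUNT(*) > 1;
```

Run the same check against the other join input; if both sides have duplicates, the output row count is the product of the duplicate counts per key.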

The Eager Spools themselves are due to the mixture of regular aggregates (AGG(x)) with a distinct aggregate (AGG(DISTINCT x)) in the select list. This may be unavoidable without significantly rewriting the query, but you can at least prevent the expensive-looking sorts spilling to tempdb by getting their input cardinality estimates closer to reality.
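To illustrate the pattern being described, here is a hypothetical query shape that triggers the spools, and one common rewrite that computes the distinct aggregate separately (column and table names are placeholders):

```sql
-- Shape that produces Eager Spools: regular and DISTINCT aggregates mixed
-- in one select list.
SELECT g, SUM(x) AS SumX, COUNT(DISTINCT y) AS DistinctY
FROM dbo.T
GROUP BY g;

-- One possible rewrite: compute each aggregate in its own grouped
-- derived table, then join the results on the grouping key.
SELECT a.g, a.SumX, d.DistinctY
FROM (SELECT g, SUM(x) AS SumX
      FROM dbo.T
      GROUP BY g) AS a
JOIN (SELECT g, COUNT(DISTINCT y) AS DistinctY
      FROM dbo.T
      GROUP BY g) AS d
  ON d.g = a.g;
```

Whether the rewrite helps depends on the wider plan; the simpler win, as noted above, is fixing the input cardinality estimates so the sorts are granted enough memory and do not spill to tempdb.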