For presentation purposes (parameter sniffing) I created a sample table.
And then I ran:
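The table definition and query were lost from this post. A minimal sketch that is consistent with the index names discussed in the answers (a clustered index, ix_Customer, and ix_region) might look like the following; all table and column names other than the index names are hypothetical:

```sql
-- Hypothetical reconstruction: table and column names are assumptions,
-- chosen only to match the index names discussed in the answers.
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY PRIMARY KEY CLUSTERED,  -- the clustered index
    CustomerID int,
    Region     varchar(10),
    OrderDate  date,
    Amount     money
);

CREATE INDEX ix_Customer ON dbo.Orders (CustomerID);
CREATE INDEX ix_region   ON dbo.Orders (Region);

-- A procedure whose query has several predicates, none covered by a
-- single index, so the optimizer must pick one index to scan:
CREATE PROCEDURE dbo.GetOrders
    @CustomerID int,
    @Region     varchar(10)
AS
SELECT OrderID, Amount
FROM dbo.Orders
WHERE CustomerID = @CustomerID
  AND Region = @Region;
```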
The optimizer uses the ix_region index:
I can understand a clustered index scan or an ix_Customer scan, but why does it scan ix_region?
By Yorik 16 asked Dec 28, 2015 at 02:59 PM
This is essentially a coin flip. When I tried your repro, it used ix_Customer on my system. If you change the stored procedure to force either index, e.g.:
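The index hint from the original post was lost; forcing each nonclustered index in turn would look like this (table and column names are placeholders, substitute your own):

```sql
-- Force a scan of one nonclustered index or the other with a table hint
-- (dbo.Orders and its columns are hypothetical names):
SELECT OrderID, Amount
FROM dbo.Orders WITH (INDEX(ix_Customer))
WHERE CustomerID = @CustomerID AND Region = @Region;

SELECT OrderID, Amount
FROM dbo.Orders WITH (INDEX(ix_region))
WHERE CustomerID = @CustomerID AND Region = @Region;
```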
You will see that all of the properties of the plan are the same - the cost numbers involved with the scan are identical, and a key lookup is still required in both cases. So it is just as efficient to scan either index to satisfy the query. A seek isn't possible because (a) none of the indexes have both
By Aaron Bertrand ♦ 1.7k answered Dec 28, 2015 at 04:37 PM
The Clustered Index Scan is not chosen because the estimated number of rows returned is so small (just one). One Key Lookup is not enough to make a Clustered Index Scan cheaper than a Nonclustered Index Scan plus Key Lookup.
The more interesting choice is between the two nonclustered indexes. Intuitively, one might expect
Though you can't see them in the final plan, during compilation the predicates in the query are contained in Filters and costed. A post-optimization rewrite pushes the Filter(s) into the index scan or lookup. We can expose these with undocumented trace flag 9130:
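The query shape below is hypothetical, but the per-query way to enable the trace flag is the standard `QUERYTRACEON` form:

```sql
-- Expose the pre-pushdown Filters with undocumented trace flag 9130.
-- QUERYTRACEON requires sysadmin; do not use TF 9130 in production.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE CustomerID = @CustomerID AND Region = @Region
OPTION (QUERYTRACEON 9130);
```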
In this plan, all predicates are evaluated in the single Filter:
The estimated cost of the Filter is proportional to the number of rows it tests (1.1 million) and the number of predicates (4). The result of this computation is 1.408 cost units for CPU.
This plan has two separate Filters.
The one closest to the Index Scan tests the
The estimated cost of this Filter is again related to the number of rows (1.1 million, as before) and number of predicates (only 2 now). The result of this computation is 0.748 cost units for CPU. Importantly, the cardinality estimate is still just one row.
The second Filter is positioned later, after the
This Filter is almost free, since it has just a single row estimated to be tested against the two predicates. The estimated CPU cost is 0.0000007 units.
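The three CPU cost figures above are consistent with a simple linear model: a small per-row charge plus a per-row charge for each predicate tested. The constants below are inferred by fitting the three numbers quoted in this answer; they are not documented SQL Server internals, so treat this as an illustration of the arithmetic rather than the real costing formula:

```python
# Sketch of a linear Filter CPU-cost model that reproduces the three
# figures quoted above. The per-row (8e-8) and per-predicate (3e-7)
# constants are inferred from this answer's numbers, not documented values.

def filter_cpu_cost(rows, predicates):
    """Estimated Filter CPU cost, in optimizer cost units."""
    return rows * (0.00000008 + 0.0000003 * predicates)

print(round(filter_cpu_cost(1_100_000, 4), 3))   # single Filter, 4 predicates -> 1.408
print(round(filter_cpu_cost(1_100_000, 2), 3))   # first Filter, 2 predicates  -> 0.748
print(f"{filter_cpu_cost(1, 2):.7f}")            # residual Filter on 1 row    -> 0.0000007
```

This also shows why splitting the predicates pays off in the estimates: the expensive part of the cost scales with the row count, so moving two predicates behind a one-row estimate makes their contribution essentially free.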
Splitting the predicates, so that the second Filter's predicates are only expected to be tested against one row (instead of 1.1 million), while still estimating a single lookup, means the
The cost estimates are not realistic, of course, but if you were asking why the optimizer makes this decision, this is the answer. In the final plans (without TF 9130) the two Filters are pushed into the Scan and Lookup respectively but, as I mentioned, this is a post-optimization rewrite, i.e. it happens after all the cost-based decisions have already been made.
As Aaron mentioned, for a proper solution I would probably employ
By SQLkiwi ♦ 6.6k answered Jan 02, 2016 at 11:09 AM