Hi, this is the plan SQL Server generates when we pass a DateTime parameter to the query. The query uses a Linked Server, and it pulls a large table across when the DateTime value is passed via a parameter. If we plug in `o.CreatedDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 2, 0) AND o.CreatedDate <= DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, GETDATE()) - 1, 0))` instead, the same query returns data quickly and generates a better plan. We tried passing an index hint, blowing away the cache, and using RECOMPILE, but nothing seems to work when we pass a parameter to the query; passing literal values like the above returns data faster.
We would like to know why a different plan is generated when a parameter is passed in. We are using SQL Server 2012 Standard Edition.
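For reference, a minimal sketch of the two query shapes being compared; the linked-server, table, and column names other than `o.CreatedDate` are placeholders, not the actual schema:

```sql
-- Parameterized form: slow, pulls the large remote table across.
-- LinkedServer.RemoteDb.dbo.Orders is a hypothetical four-part name.
DECLARE @StartDate datetime = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 2, 0);
DECLARE @EndDate   datetime = DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, GETDATE()) - 1, 0));

SELECT o.*
FROM LinkedServer.RemoteDb.dbo.Orders AS o
WHERE o.CreatedDate >= @StartDate
  AND o.CreatedDate <= @EndDate;

-- Literal form: fast, generates the better plan.
SELECT o.*
FROM LinkedServer.RemoteDb.dbo.Orders AS o
WHERE o.CreatedDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 2, 0)
  AND o.CreatedDate <= DATEADD(s, -1, DATEADD(mm, DATEDIFF(m, 0, GETDATE()) - 1, 0));
```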
Thank you, Hiren Patel
By HirenP 16 asked Dec 30, 2016 at 02:00 PM
Optimization is cost-based, and one of the factors in this is how many rows will need to be returned by the query.
When you specify a hard-coded constant, SQL Server can look directly at the statistics and determine how many rows will match that predicate (although there are potentially misleading problems when using both DATEDIFF and DATEADD).
When you use a parameter, SQL Server by default will try to optimize for the first parameter value you pass. It will reuse that same execution plan for subsequent executions, even if the new parameter values would match a vastly different number of rows, and the same execution plan might not be appropriate for that number of rows. As an extreme example, suppose you have a predicate like `WHERE CreatedDate >= @DateParameter` against an index with 1 billion rows:
Let's say the first time you run the query, you pass @DateParameter = '19050105' - which matches 100 rows. SQL Server is likely to choose an index seek here, and that is the plan that gets cached. The next time the query is run, let's say the parameter you pass is '20151201' - now you match over 99% of the rows in the table. But SQL Server has already decided to re-use the plan that is optimized for 100 rows, which uses a seek. This is not good for a query that returns ~1 billion rows.
If you have a volatile table where statistics can't be relied on consistently, or data skew that is very sensitive to changes in parameter values, typically you can work around this using OPTION (RECOMPILE) at the statement level (which isn't free - there is a measurable cost to compiling every single time the query runs).
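As an illustration of that workaround (object and parameter names here are placeholders, continuing the sketch above), the hint goes at the statement level, so SQL Server compiles a fresh plan for the current parameter values on every execution:

```sql
-- A sketch: assumes @StartDate / @EndDate are parameters in scope.
SELECT o.OrderID, o.CreatedDate          -- hypothetical columns
FROM LinkedServer.RemoteDb.dbo.Orders AS o
WHERE o.CreatedDate >= @StartDate
  AND o.CreatedDate <= @EndDate
OPTION (RECOMPILE);                      -- new plan per execution, optimized for the current values
```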
That's somewhat different from creating the procedure WITH RECOMPILE, and that (as well as much more detail on why parameter sniffing works this way) is explained in depth in Paul White's post, Parameter Sniffing, Embedding, and the RECOMPILE Options.
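For contrast, the procedure-level option looks like this (a sketch with placeholder names); it recompiles the entire procedure on every call, but, unlike statement-level OPTION (RECOMPILE), it does not enable the parameter-embedding optimization Paul describes:

```sql
CREATE PROCEDURE dbo.GetRecentOrders     -- hypothetical procedure
    @StartDate datetime,
    @EndDate   datetime
WITH RECOMPILE                           -- whole procedure recompiled on every call
AS
BEGIN
    SELECT o.OrderID, o.CreatedDate
    FROM LinkedServer.RemoteDb.dbo.Orders AS o
    WHERE o.CreatedDate >= @StartDate
      AND o.CreatedDate <= @EndDate;
END;
```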
By Aaron Bertrand ♦ 1.7k answered Jan 09 at 03:03 AM
Just to add to Aaron's answer: the point about the local estimates in this case is that they affect how SQL Server decides to run the remote queries.
For a small number of local rows, it may decide to run multiple remote queries, with local values used as parameters to the remote queries; these appear as '?' in the Remote Query text of the Remote Query plan operator. For example, the faster queries here use a parameterized remote query, whereas the slow query decides that too many remote calls would be needed, so it fetches the rows locally once instead.
The specific problem here is that SQL Server estimates 15,000 rows for the above remote query, when nearly 38 million rows come back at runtime. Normally, I would say that this guess may be due to the linked server login having insufficient permissions to read statistics on the remote server. This was more of a problem before SQL Server 2012 SP1, and you are on 2012 SP2, so that is less likely to be the cause here.
By SQLkiwi ♦ 6.5k answered Jan 09 at 06:33 PM