Can I safely deactivate Trace Flag 4136?

JonasWanninger 2017-12-14 15:44:02

Hello SQL Experts,

I am wondering if someone could help me with this. I don't know whether I can safely deactivate trace flag 4136. The trace flag is not officially documented, but I could find the following:

When Trace Flag 4136 is enabled, the default "parameter sniffing" behaviour is changed so that a more average estimate of selectivity (based on density statistics) is used by the optimizer instead of the default parameter-specific estimate (based on the histogram).
From the course 80428: SQL Optimization for Dynamics AX 2012

We are using Dynamics AX. Our biggest performance problem is tempdb latency; latencies never drop below 20 ms.

My guesses for the cause of these latencies are:

  1. Snapshot isolation
  2. TempDB spills from queries

TempDB setup:

  1. Has its own LUN on our SAN
  2. Has 8 files
  3. Is equally sized and large enough
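For reference, the equal sizing can be verified directly from the catalog; a minimal sketch (the output will of course differ per environment):

```sql
-- Check that all tempdb data files are equally sized.
-- size and growth are reported in 8-KB pages
-- (unless is_percent_growth = 1 for the growth column).
SELECT name,
       type_desc,
       size * 8 / 1024   AS size_mb,
       growth * 8 / 1024 AS growth_mb,
       is_percent_growth
FROM tempdb.sys.database_files;
```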

I have also attached the CrystalDiskMark stats for the test.

The official Microsoft documentation for Dynamics AX recommends the trace flag for overall better performance, but also states that it should be tested before enabling it in production. In our environment this trace flag has been active since the launch of the application, so no one ever tested it.

So I did the testing with the RML tools and an hour-long trace capture from the production system. The results showed that some queries saw an improvement, but other queries got a lot worse. I am not really sure how to interpret the results, or whether this is the right kind of test.

Dynamics AX uses a lot of cursors. If you open a form in AX, it issues a SQL statement with an added query hint like OPTION FAST(XXX). All other queries, from batch jobs etc., do not have these hints. This is the first peculiarity of Dynamics AX.
The second one is that almost every index starts with a field whose density is always one. So if you take a look at the histogram, you would only see one step.

I imagine the following if I disable the trace flag:

  1. Queries with the added OPTION FAST(XXX) would not be affected since the estimated rows are overwritten by this hint
  2. Queries without the added query hint would suffer, since the histogram is not very good

Are my assumptions correct? What is another good way to test whether the trace flag adds value or makes things a lot worse?
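For what it's worth, here is how I can check and toggle the flag at runtime, without a restart (note that a flag set via the -T startup parameter comes back after the next restart):

```sql
-- Check whether trace flag 4136 is currently enabled globally.
DBCC TRACESTATUS (4136, -1);

-- Disable it globally for testing; DBCC TRACEON (4136, -1)
-- would re-enable it.
DBCC TRACEOFF (4136, -1);
```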

Also, our SQL Server setup:
1. SQL Server 2012 SP2 CU8 (please don't judge me)
2. Application: Microsoft Dynamics AX 2012 R3 CU8

I hope I have provided enough information.

Hugo Kornelis 2017-12-15 12:55:13
Okay, let's go over a few of your points one by one.

  1. Contention in tempdb due to snapshot isolation: Possible, and hard to avoid: snapshot isolation does come with a lot of overhead in tempdb. Check Paul Randal's post first; if you have done all that, then you'll probably need to increase the number of tempdb files. (Make sure to keep them equally sized; make sure to enable trace flag 1117 so that if autogrow ever kicks in, they'll stay equally sized.)
  2. Contention in tempdb due to spills: You should not suspect this, you should know this. Set up an extended events session to monitor hash_warnings and sort_warnings. If they do occur frequently, find the queries that cause them and try to fix them.
  3. (Disabling) parameter sniffing: Parameter sniffing only affects stored procedures and parameterized queries. When one is first executed, SQL Server compiles a plan based on the parameter values passed in, which will then be reused for later executions. This can result in better, worse, or similar plans than what you would get without parameter sniffing. See the linked post for more details.
  4. "almost every index starts with a field whose density is always one": That is weird, and you might see better performance overall if you remove that column from the index or change the index order. But it will probably not affect the estimates. When SQL Server needs statistics on a column, it will create them (unless you have changed the default settings to switch off auto_create_statistics).
  5. OPTION FAST(XXX): This affects estimates but does not overwrite them. This hint has the same effect on cardinality estimates as using TOP(xxx) in your query. So if for instance you have a 100-row table and a 25% selective filter, then a normal execution plan would estimate reading 100 rows into the filter, and passing 25 rows from the filter to the client. A TOP(4) version of the query would estimate reading 16 rows to produce the required 4 output rows. A FAST(4) query would use the same estimates, but still run the query to completion (so the actual counts would be much higher). Because the hint does not overwrite the estimates, enabling or disabling parameter sniffing will affect these queries.
  6. "Queries (…) would suffer, since the histogram is not very good": As mentioned above, SQL Server will auto-create statistics for a column when it needs them. See the link above for a breakdown of the types of queries that are likely to be positively, negatively, or not affected by parameter sniffing.
  7. Ways to test: Since you are not on SQL Server 2016 or higher, you cannot use the Query Store to compare performance and execution plans with and without the trace flag. That leaves you with limited options, such as capturing and replaying a workload, getting your coworkers to simulate a normal workload on the test system twice, or writing your own application code to generate a normal (or even high) workload. You will want to monitor tempdb contention while running the tests, but also overall system performance (memory, cpu, IOs), response times (the only thing the end user cares about), etc.
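Point 2 above could be sketched along these lines (the session name is made up; for real use you would add a file target and appropriate filters):

```sql
-- Minimal extended events session to catch tempdb spills
-- via hash and sort warnings.
CREATE EVENT SESSION [spill_warnings] ON SERVER
ADD EVENT sqlserver.hash_warning
    (ACTION (sqlserver.sql_text, sqlserver.database_id)),
ADD EVENT sqlserver.sort_warning
    (ACTION (sqlserver.sql_text, sqlserver.database_id))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [spill_warnings] ON SERVER STATE = START;
```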

Good luck!

JonasWanninger 2017-12-27 12:06:43
Thanks, Hugo, for the detailed answer, and sorry for taking so long to reply.

I played around a little. My assumption had always been that if I use an index, the statistics of that index will be used for the estimates, but this seems only partially true.
I used the query below and the AdventureWorks2014 database. I modified the 'Person' table and added a 'Partition' column which always has the value '123456789' in it, just to simulate how Dynamics AX would do it.

EXEC sys.sp_executesql
    N'select LastName from [AdventureWorks2014].[Person].[Person] where LastName LIKE @P1 and Partition = @P2',
    N'@P1 varchar(100), @P2 BIGINT',
    @P1 = 'Perry', @P2 = 123456789;

I also created an index with the following column order:
– Partition
– LastName
– FirstName
– MiddleName
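For reference, the setup looks roughly like this (the index name is made up for this post):

```sql
-- Add the AX-style Partition column with a single constant value,
-- then index it as the leading column.
ALTER TABLE Person.Person
    ADD [Partition] BIGINT NOT NULL DEFAULT 123456789;

CREATE NONCLUSTERED INDEX IX_Person_Partition
    ON Person.Person ([Partition], LastName, FirstName, MiddleName);
```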

Although the index I created has a bad histogram, SQL Server guessed the estimated rows 100% right. So am I right that, even if I use an index, the estimates can come from the column statistics? Do you know when SQL Server will use the index statistics rather than the column statistics?
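This is how I looked at the histogram and density information (statistics name matches my made-up index name):

```sql
-- The density vector shows one row per left-based column prefix;
-- the histogram only covers the leading column, [Partition].
DBCC SHOW_STATISTICS ('Person.Person', 'IX_Person_Partition') WITH DENSITY_VECTOR;
DBCC SHOW_STATISTICS ('Person.Person', 'IX_Person_Partition') WITH HISTOGRAM;
```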

Hugo Kornelis 2017-12-27 17:29:12
Hi Jonas,

SQL Server uses different types of statistics. The below assumes default settings for auto_create_statistics.

  1. For each index, it will automatically create a statistics object based on all the columns in the index, plus the clustered index key (if any). So for an index on Lastname, Firstname, it will create a statistics object on Lastname, Firstname, PersonID.
  2. When and as needed, it automatically creates single-column statistics. With the query above, and assuming that the table is new and has no statistics yet (other than those for the indexes), it realizes it needs information on the data distribution so it creates the required statistics. These auto-created statistics are always named _WA_Sys_ plus two hexadecimal codes that represent the object ID and the column number.
  3. By explicit request (CREATE STATISTICS), you can manually create single- or multi-column statistics. Multi-column statistics are never created automatically (except for the index-related statistics). Manually created statistics can also be filtered.

All these statistics objects have generic information on the table and the statistics object; density information on combinations of one or more columns starting from the left; and a detailed histogram of the left-most column. So for the example I gave above, of a statistics object on (Lastname, Firstname, PersonID), you will have generic information, and density information (i.e. "how many distinct values") for Lastname only, for Lastname + Firstname, and for the full set of all three columns. Finally, the histogram will have information on specific Lastname values.

In your example query above, and assuming there were not already any statistics, the optimizer will have found information on Partition in the statistics for the index you created, and will have auto-created statistics for the LastName column. The first statistics object told it, via its histogram, that 100% of the rows are expected to have Partition 123456789. The second statistics object, also based on its histogram, gave X% for Perry. Combined, that is 100% of X%, which is why the estimate was correct.
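To see this in practice, you can list the statistics objects on the table; the auto-created ones follow the _WA_Sys_ naming pattern described above. A sketch:

```sql
-- List every statistics object on Person.Person and its columns.
-- auto_created = 1 marks the _WA_Sys_ single-column statistics.
SELECT s.name,
       s.auto_created,
       c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
    ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID('Person.Person')
ORDER BY s.stats_id, sc.stats_column_id;
```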


JonasWanninger 2017-12-28 07:18:15
Thanks, Hugo. I think that is all I need.