Doesn't Return Data, TempDB Spillover, Implicit Conversion

Monica Rathbun 2016-02-10 15:39:56

Reads a highly transactional table.
12 GB of memory on the box, 10 GB allocated to SQL Server, but it's only using 8 GB now.

Aaron Bertrand 2016-02-10 16:09:07
Hi Monica, this looks like an estimated plan from SSMS or the plan cache opened up in Plan Explorer. Any chance you can use Plan Explorer to generate an actual plan for calling the stored procedure with those same parameters?
Monica Rathbun 2016-02-10 17:45:09
Hey Aaron, the stored procedure won't complete after running 40 minutes, so I have no actual plan.
Kevin 2016-02-10 23:21:29
Hi Monica,

As Aaron mentioned, an actual plan would help us a lot more than this estimated plan. One way to get us there would be to run the constituent parts as SQL batches and collect those actual plans. However, I have a few more suggestions since there are a number of things going on here:
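
For example, a minimal way to capture an actual plan for one piece of the procedure (the statement itself is a placeholder; use the real one with the same parameter values):

    SET STATISTICS XML ON;    -- returns the actual execution plan as XML alongside the results
    -- run one statement from the procedure here, with the same parameter values
    SET STATISTICS XML OFF;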

  1. Rewrite the @Record table variable as a #temp_table, with NC indexes on each of the three columns (see the first sketch after this list). Table variables are notorious for having a cardinality estimate of 1 row during query optimization. I'm just guessing that yours holds many hundreds of times more records than that.
  2. Fix those implicit converts pronto. In each case, they cause the optimizer to disregard indexes that it might otherwise use. As with item #1 above, they may also cause the query optimizer to grossly misjudge the memory needed, not to mention the obvious benefits of better cardinality estimates (see the second sketch after this list).
  3. There are some recurring issues with residual predicates causing the spools. I can't explain things better than the Jedi Master, Paul White, so I encourage you to read his post on this topic here – https://answers.sqlperformance.com/questions/2900/how-to-eliminate-table-spool.html.
  4. Notice this particular section of the execution plan:
    [plan screenshot: two thin input arrows feeding a merge join, with one very wide output arrow]
  5. See how there are two thin lines (a small amount of I/O going in) but a HUGE one coming out? That is a VERY BAD SIGN in the official parlance. 😉 What's happening here is a MANY-TO-MANY operation within the merge join: it multiplies the number of records from each input stream because neither input can be guaranteed to be unique. It's probably due to the inequality evaluations (there's a third sketch after this list). Again, the Jedi Master has spoken on this scenario too – read more about it at https://answers.sqlperformance.com/questions/285/why-is-the-merge-join-passing-4513373-records.html.
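
Here's a minimal sketch of the #temp_table rewrite from item #1. The column names and types are hypothetical, since I can't see how @Record is declared; substitute the real ones:

    -- Hypothetical stand-in for the @Record table variable. A #temp table
    -- gets real statistics, so the optimizer can estimate row counts
    -- instead of assuming a single row.
    CREATE TABLE #Record
    (
        RecordID   int         NOT NULL,  -- assumed column
        RecordType varchar(20) NOT NULL,  -- assumed column
        CreatedOn  datetime    NOT NULL   -- assumed column
    );

    -- One NC index per column, per item #1:
    CREATE NONCLUSTERED INDEX IX_Record_RecordID   ON #Record (RecordID);
    CREATE NONCLUSTERED INDEX IX_Record_RecordType ON #Record (RecordType);
    CREATE NONCLUSTERED INDEX IX_Record_CreatedOn  ON #Record (CreatedOn);

    -- Populate it with whatever query loads @Record today, then replace
    -- every reference to @Record in the procedure with #Record.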
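
For item #2, here's the shape of a typical implicit-convert fix. I'm inventing an nvarchar parameter against a varchar column (the classic mismatch); your actual one may be different:

    -- Data type precedence says varchar converts to nvarchar, so with an
    -- nvarchar parameter SQL Server implicitly converts the COLUMN, which
    -- blocks an index seek:
    --     WHERE t.AccountNumber = @Acct   -- @Acct nvarchar(20): scan
    -- Declare the parameter to match the column's type instead:
    DECLARE @Acct varchar(20) = '12345';

    SELECT t.AccountNumber, t.Amount      -- assumed table and columns
    FROM dbo.Transactions AS t
    WHERE t.AccountNumber = @Acct;        -- matching types: a seek is possible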
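
And to make item #5 concrete: in a many-to-many merge join, each group of matching keys produces (rows on the left) × (rows on the right) output rows. If one input can legitimately be reduced to distinct keys, the join becomes one-to-many and the explosion stops. This is a hypothetical shape, not your actual query, and Paul's post covers the inequality case properly:

    -- Hypothetical: SmallInput allows duplicate join keys, but we only
    -- need each key once, so de-duplicating it guarantees one unique
    -- input and turns the many-to-many join into one-to-many.
    SELECT b.KeyCol, b.OtherCol
    FROM (SELECT DISTINCT KeyCol FROM dbo.SmallInput) AS s
    INNER JOIN dbo.BigInput AS b
        ON b.KeyCol = s.KeyCol;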

Hope that helps! Best regards,
-Kev