CPU Spike Randomly due to SQL queries

Manav 2018-02-09 07:37:03

SDyckes2 2018-02-09 17:03:15
The plan you have provided is an estimated plan. Since you are asking about a CPU spike caused by SQL queries, an actual plan would let you pinpoint the CPU-intensive part of the query, though it will only cover that one query. Can you post an actual plan for the query or queries in question?

If you right-click in the white space of the plan, you can choose whether the cost percentages are displayed by CPU, IO, or CPU+IO. On your estimated plan, the estimated CPU cost is 90.3% for the Index Seek on [APP_ITEM].[UD_IXNC_APP_ITEM_ASSIGNED_GROUP_ID_TICKET_STAUTS_4D792], while the estimated CPU+IO cost is 86.4%.

Ideally, you would have been monitoring the SQL Server when the spike occurred, collecting CPU data per query. You could then look at the overall picture to determine which query or queries were part of the spike. For example, if you are using SentryOne's SQLSentry, you could choose your time range, see the CPU spike on the dashboard, then switch to Top SQL/Completed Queries to look for any correlation. If no correlation is found, check the Procedure Stats and Query Stats to see whether a fast-running query is part of the problem.
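If a dedicated monitoring tool was not running when the spike hit, the plan cache can give a rough after-the-fact view of the top CPU consumers. A sketch using the standard DMVs (the numbers are cumulative since each plan was cached, so treat them as indicative only):

```sql
-- Top 10 cached statements by cumulative CPU time.
SELECT TOP (10)
       qs.total_worker_time / 1000                      AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```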

Hugo Kornelis 2018-02-09 20:51:44
In addition to the answer by SDyckes2, here are a few other things you can do to find the cause, or to provide us with better information so we can help you.

  1. You mention random CPU spikes due to queries (plural) but you post the plan for one query. Are all queries that cause the CPU spikes very similar to this one? If yes, then do all of those queries always cause spikes? If only some of them do, are you sure the one you posted is a "bad" example?
  2. When those spikes occur, are multiple copies of this query (or other queries) running? Or is it just a single copy?
  3. The query uses a view. Though we can get a lot of information about the view by looking at the execution plan, we might see more if you show us the CREATE VIEW statement for that view.
  4. The query uses a table-valued user-defined function. In the execution plan we can only see that this function is invoked (multiple times, even). We do not see how much I/O (if any) occurs when the function executes, nor how the function is defined. You can remedy the latter by showing us the CREATE FUNCTION statement for that function.
  5. If the view mentioned in step 3 and/or the user-defined function mentioned in step 4 reference any other views or user-defined functions, then please post their CREATE statement too.
  6. I am not sure how you arrived at the execution plan you posted. All the ways I normally use to get an estimated execution plan automatically include the estimated execution plan for any user-defined functions used, but that is not the case in the plan you posted. Can you please copy/paste the full statement into the Command Text tab, then hit Get Estimated Plan. This should result in a plan that also shows what happens inside the user-defined function. Perhaps this shows you the root cause of your issues. If not, then save the estimated plan and post it here.
  7. Finally, with the same query still in the Command Text tab, use the arrow next to the Get Estimated Plan button to switch to an actual plan. Make sure that With Live Query Profile is NOT checked (*), then execute. Among other things, the actual plan will reveal where there are big differences between estimated and actual row counts, which may cause the query to do much more I/O than it should. Perhaps this shows you the root cause of your issues. If not, then save the plan and post it here.

(*) Note that I would normally recommend that you leave this option ON because it includes more information that might help. However, if the root cause of your issue is that the user-defined function executes a million times, then capturing an actual execution plan with this option enabled will capture a million (probably identical) execution plans. Hence the suggestion to, IN THIS CASE ONLY, disable that option.
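For point 2 above, you can check at the moment of a spike whether several sessions are running the same statement. A sketch against the standard DMVs (query_hash groups textually different but structurally identical statements):

```sql
-- Currently executing requests, grouped so that multiple copies
-- of the same statement show up as one row with a count.
SELECT r.query_hash,
       COUNT(*)                  AS running_copies,
       SUM(r.cpu_time)           AS total_cpu_ms,
       MIN(LEFT(t.text, 200))    AS sample_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
GROUP BY r.query_hash
ORDER BY running_copies DESC;
```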
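For points 3 through 5, the CREATE text of a view or T-SQL function can be pulled straight from the database; the object names below are placeholders for your own:

```sql
-- Returns the CREATE VIEW / CREATE FUNCTION text (NULL if the
-- name is wrong or the module is encrypted).
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.YourView'))     AS view_definition;
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.YourFunction')) AS function_definition;
```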