How can I improve performance?

Ronwell 2013-10-02 16:16:59

I have this stored procedure that is taking around 11 minutes to complete, when I'd like it to be closer to subsecond.

I've tested in the production environment under load and in the dev environment with no load, both with the same slow results.

Is there anything glaring that I can look at? I see some missing index recommendations and I'll explore that route, but I'm pretty green at examining execution plans, so if anyone could offer any other advice that would be great.

Update

Sure, no problem. I realize my question is vague, so if there's anything else I can supply just ask.

I think I can improve performance by replacing some CTEs with temp tables. I'm testing that now.

Update 2

I've attached the plan analysis after running the query in Plan Explorer.

Jonathan Kehayias 2013-10-02 16:51:26
Can you share the non-anonymized version of the plan? There could be a number of things you could do if we could see the statements that are being executed, but with the anonymous plan there isn't much I can recommend.
Ronwell 2013-10-03 15:19:48
The implicit conversion from nvarchar to varchar was causing the problem. I made the columns match and now it's a subsecond procedure.
SQLkiwi 2013-10-03 15:39:51
It appears you added this while I was writing my answer! Did you have to change a table column definition, or just the view? I ask purely out of curiosity; I'm glad you found your solution.
Ronwell 2013-10-03 16:19:29
Column definition.
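A change along those lines would look roughly like the following. The schema, table, and column names are taken from the plan text quoted in the answer below, and the length, nullability, and direction of the change (widening the varchar side to nvarchar rather than narrowing the other side) are assumptions, so treat this as a sketch only:

-- Align the column's type with the value it is compared against so the
-- join no longer needs a CONVERT_IMPLICIT on the column side.
-- (Any indexes containing this column must be dropped and recreated
-- around the change; nullability shown here is an assumption.)
ALTER TABLE TrueChecks.Queries
    ALTER COLUMN RoutingNumber nvarchar(16) NOT NULL;
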
SQLkiwi 2013-10-03 15:36:34
Thanks for running the query from Plan Explorer. Although the plan changed a little between executions, the query analysis file is still able to highlight the cause of the poor performance:

The MERGE plan performs 1,613 full scans of the Queries table. That table contained 1,692,170 rows when the plan was last compiled. In total, over two billion rows (2,729,470,210 to be precise) are processed by the scans, though 'only' 10,149,811 rows pass the residual predicate on that index scan. The residual predicate applied to two billion rows is:

CONVERT_IMPLICIT(nvarchar(16),[AFS-Prod].[TrueChecks].[Queries].[RoutingNumber] as [q].[RoutingNumber],0)=[AFS-Prod].[Batch].[Queries].[RoutingNumber] as [q].[RoutingNumber] AND ([AFS-Prod].[TrueChecks].[Queries].[ClientId] as [q].[ClientId]<[@clientId] OR [AFS-Prod].[TrueChecks].[Queries].[ClientId] as [q].[ClientId]>[@clientId])

The Key Lookup immediately following the Index Scan is also very expensive. It is executed 10,149,811 times in total, applying the following residual predicate to the rows it locates:

CONVERT_IMPLICIT(nvarchar(50),[AFS-Prod].[TrueChecks].[Queries].[SearchableAccountNumber] as [q].[SearchableAccountNumber],0)=[q].[SearchableAccountNumber] AND CONVERT_IMPLICIT(nvarchar(50),[AFS-Prod].[TrueChecks].[Queries].[SearchableCheckNumber] as [q].[SearchableCheckNumber],0)=[q].[SearchableCheckNumber]
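
To see why those conversions matter, here is a small self-contained sketch of the same pattern, using a hypothetical temp table and made-up parameter values rather than the real schema. A varchar column compared against an nvarchar value gets wrapped in CONVERT_IMPLICIT on the column side, so the comparison can only be applied as a residual predicate instead of driving an effective seek:

CREATE TABLE #Queries
(
    QueryId       integer     IDENTITY PRIMARY KEY,
    RoutingNumber varchar(16) NOT NULL,
    ClientId      integer     NOT NULL
);

CREATE NONCLUSTERED INDEX IX_Queries_RoutingNumber
ON #Queries (RoutingNumber, ClientId);

DECLARE
    @routingNumber nvarchar(16) = N'123456789',  -- nvarchar value vs varchar column
    @clientId      integer      = 42;

-- The varchar column is implicitly converted to nvarchar to match the
-- parameter, which defeats an effective seek on IX_Queries_RoutingNumber;
-- the inequality on ClientId can likewise only be applied as two ranges.
SELECT q.RoutingNumber, q.ClientId
FROM #Queries AS q
WHERE q.RoutingNumber = @routingNumber
  AND q.ClientId <> @clientId;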

All this was very hard to spot in the original plan, because the query optimizer had engineered a dynamic seek. The huge number of rows was hidden by the innocent-looking Index Seek below the Merge Interval. In fact, that 'seek' was selecting all Queries rows where the client id was not equal to the @clientId parameter. The result was then filtered by an equally horrible residual predicate.
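
For reference, the dynamic seek is logically the union of two open ranges. Continuing with the hypothetical #Queries table and @clientId parameter from the sketch above, the 'seek' evaluates something equivalent to:

DECLARE @clientId integer = 42;

-- Every row except those for the one excluded client qualifies, which is
-- why the innocent-looking seek touches almost the whole table.
SELECT q.RoutingNumber, q.ClientId
FROM #Queries AS q
WHERE q.ClientId < @clientId
   OR q.ClientId > @clientId;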

So, yes, the problem is caused by the implicit conversions (because they prevent an effective index seek) and inadequate indexing (because far too many rows require residual predicates and lookups).
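
One quick way to confirm which side of each comparison has the mismatched type is to check the declared types on both tables. A rough sketch, using only the column names visible in the residual predicates above:

SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME,
       c.DATA_TYPE, c.CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE c.COLUMN_NAME IN
      (N'RoutingNumber', N'SearchableAccountNumber', N'SearchableCheckNumber')
ORDER BY c.COLUMN_NAME, c.TABLE_SCHEMA, c.TABLE_NAME;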

Ideally, the Index Scan and Key Lookup with their complex residual predicates need to be replaced by a very selective seek (and not a dynamic one constructed by the optimizer). The seek could possibly include a lightweight residual predicate and key lookups, but only if it is impractical or undesirable to make the index fully SARGable and covering.
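
Purely as an illustration of the shape such an index might take, something along these lines could work; the key order, the included columns, and even whether Queries is the right table to index are guesses based only on the predicates above, and the RoutingNumber type mismatch would need fixing first for the seek to be effective:

-- Equality column first, then the inequality column; the remaining
-- compared columns are included so the Key Lookup disappears.
CREATE NONCLUSTERED INDEX IX_Queries_RoutingNumber_ClientId
ON TrueChecks.Queries (RoutingNumber, ClientId)
INCLUDE (SearchableAccountNumber, SearchableCheckNumber);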

The view definition behind the query appears to be moderately complex. A full analysis of the ideal index arrangement would require the view definition and table definitions (for data types).

Ronwell 2013-10-03 16:19:57
Wow. I marvel at your knowledge. Thanks!