How to reconcile the cumulative duration to the sum of the pieces

Dan Holmes 2015-04-02 11:47:36

Where is the other 600 ms hiding in this proc? I see an insert taking 410 ms and a significant number of other single- and double-digit times, but none of that adds up to the 1+ second duration reported.

Dan Holmes 2015-04-02 13:09:19
Figured it out. It is the compile time of the statement with OPTION(RECOMPILE). I can't win on this proc.


And with a bit of denormalization and one new index, I have this proc executing with regular durations of <200 ms. That is significantly better than the 1.2+ seconds I started with.
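For reference, the compile-time cost that explained the gap can be made visible with SET STATISTICS TIME, which reports parse/compile time separately from execution time for each statement. This is a minimal sketch; the procedure and parameter names are hypothetical stand-ins, not the actual proc from the thread.

```sql
-- Sketch: SET STATISTICS TIME separates "parse and compile time" from
-- "Execution Times", so the OPTION (RECOMPILE) overhead shows up
-- per statement. Procedure and parameter names are hypothetical.
SET STATISTICS TIME ON;

EXEC dbo.usp_GetUpcomingStops @TripDate = '2015-04-02';
-- Messages output includes, per statement, lines of the form:
--   SQL Server parse and compile time: CPU time = ... ms, elapsed time = ... ms.
--   SQL Server Execution Times:        CPU time = ... ms, elapsed time = ... ms.

SET STATISTICS TIME OFF;
```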

Aaron Bertrand 2015-04-02 13:28:37
I was about to comment about compile time. There's a half-second on the insert into #UpcomingStops. Do you think it would be valuable to have a column in the statement grid indicating compile time? The complication is that (compile + duration) > (end time – start time).

Dan Holmes 2015-04-02 13:31:32

SQLkiwi 2015-04-03 11:59:03
Don't be afraid to split this statement up even more. High compile time is to be expected when there are >= 5 table references in the query, because optimization will likely enter stage 2 (full) optimization.

Even if you find further materialization is counter-productive, you may learn some things about the plan shapes that are produced, and/or it may lead you to potential optimizations you have missed previously.
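The splitting suggestion above can be sketched roughly like this: materialize part of a wide join into a temp table so that each remaining statement references fewer tables and compiles more cheaply. All table and column names here are hypothetical illustrations, not the actual schema from the thread.

```sql
-- Hypothetical sketch of breaking one wide statement into two.
-- Step 1: materialize an intermediate result (fewer table references
-- per statement tends to mean cheaper optimization).
SELECT t.TripID, t.TripDate, s.StopID, s.ScheduledTime
INTO #UpcomingStops
FROM dbo.Trip AS t
JOIN dbo.Stop AS s
    ON s.TripID = t.TripID
WHERE t.TripDate >= CAST(SYSDATETIME() AS date);

-- Step 2: the follow-on query now joins far fewer tables than the
-- original monolithic statement did.
SELECT u.TripID, u.StopID, v.VehicleID
FROM #UpcomingStops AS u
JOIN dbo.VehicleAssignment AS v
    ON v.TripID = u.TripID;
```

Even if the split turns out to be counter-productive, comparing the plan shapes before and after is often instructive, as noted above.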

You could also look to avoid the GetRangeWithMismatchedTypes associated with the seek on TripDate (MergeInterval subtree). It probably won't be affecting performance, estimates, or plan selection much, but (a) you never know; and (b) I have this thing about types 🙂
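The GetRangeWithMismatchedTypes call (under a Merge Interval) typically appears when the predicate's data type does not match the indexed column's type, forcing SQL Server to build a dynamic seek range. A hedged sketch of the fix, assuming TripDate is a date column and the names are hypothetical:

```sql
-- Assumption: dbo.Trip.TripDate is of type date.
-- Comparing it against a datetime2 value can produce the dynamic seek
-- with GetRangeWithMismatchedTypes under a Merge Interval.
DECLARE @From datetime2(7) = SYSDATETIME();

-- Mismatched types: may generate the Merge Interval / dynamic seek.
SELECT COUNT(*) FROM dbo.Trip WHERE TripDate >= @From;

-- Match the column's type instead, and the plain index seek remains.
DECLARE @FromDate date = CAST(@From AS date);
SELECT COUNT(*) FROM dbo.Trip WHERE TripDate >= @FromDate;
```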