At the top left of your execution plan is a Nested Loops join. The top input has 664 rows (699 estimated). The bottom input includes a Table Spool operator. That is SQL Server saying "hey, I just did a lot of work getting you this result and you are probably going to need it again, so I'll save it for you". It can do this because the work done in the lower branch is done for a specific value of the expression [CPPRAFR].[uga].[CUSTOMER].[ID].
The optimizer thinks that this branch will run 699 times, mostly for the same ID value. So it feels good about having a relatively expensive branch to get the results because they are reused and not re-evaluated.
In reality, the ID value changes on each of the 664 rows, so the effort of saving the results is wasted - and what's worse, the huge effort of producing them is repeated 664 times.
Your query text is not only quite complex, it is also littered with data type conversions. These make it incredibly hard for SQL Server to predict things like the above, and this may well be why the query is not performing well. If you don't need these conversions, throw them out. If you do need them, take action to get your data model fixed.
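To illustrate (with made-up table and column names, not the ones from your query): when the types on the two sides of a comparison don't match, SQL Server inserts an implicit CONVERT, which can wreck cardinality estimates and block index seeks.

```sql
-- Hypothetical example: suppose CustomerCode is declared varchar.
-- The N'...' literal is nvarchar, so SQL Server implicitly converts
-- the COLUMN to nvarchar -- bad estimates, no index seek:
SELECT c.ID
FROM uga.CUSTOMER AS c
WHERE c.CustomerCode = N'ABC123';

-- Matching the literal's type to the column removes the conversion:
SELECT c.ID
FROM uga.CUSTOMER AS c
WHERE c.CustomerCode = 'ABC123';
```

The same applies to explicit CONVERT/CAST wrapped around columns in JOIN or WHERE clauses; the fix there is usually to align the column data types in the tables themselves.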
Oh, and you should also reconsider your use of NOLOCK. Unless you don't care whether the results are correct or not.
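For what it's worth, if blocking is the reason NOLOCK crept in, a sketch of the usual alternative (database name is a placeholder; test before enabling):

```sql
-- NOLOCK can skip rows, read rows twice, or return uncommitted data:
SELECT o.OrderID
FROM dbo.Orders AS o WITH (NOLOCK);   -- results may simply be wrong

-- Read committed snapshot gives readers a consistent view without
-- taking shared locks (a database-level setting, not a query hint):
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
```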
By Hugo Kornelis, answered Dec 21, 2017 at 07:21 PM
Without spending too much time on the query, here are a few things I noticed:
• From what I can tell, you are doing a left outer join below but not using any data from the Address table. Recommendation: look at the entire query and eliminate joins on any table(s) that are not really needed.
• There is another part of the query where you are doing an INNER JOIN and then adding an "IS NOT NULL" filter; I recommend you convert it to a "NOT EXISTS" in the filter instead. Here is a good reference for review:
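To sketch the kind of rewrite meant above (table and column names are hypothetical, since the original query isn't shown): when a join exists only to filter rows, an (NOT) EXISTS predicate states the intent directly and avoids duplicate rows when the inner table has multiple matches.

```sql
-- Join used only as a filter; the IS NOT NULL is redundant
-- on an inner join, since unmatched rows are already gone:
SELECT c.ID, c.Name
FROM uga.CUSTOMER AS c
INNER JOIN uga.ORDERS AS o ON o.CustomerID = c.ID
WHERE o.OrderID IS NOT NULL;

-- Same filter expressed as a semi-join; one row per customer:
SELECT c.ID, c.Name
FROM uga.CUSTOMER AS c
WHERE EXISTS (SELECT 1 FROM uga.ORDERS AS o WHERE o.CustomerID = c.ID);

-- And the anti-join form, for "customers with no orders":
SELECT c.ID, c.Name
FROM uga.CUSTOMER AS c
WHERE NOT EXISTS (SELECT 1 FROM uga.ORDERS AS o WHERE o.CustomerID = c.ID);
```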
By DBNewbie_2007, answered Dec 21, 2017 at 08:22 PM