Interesting performance problem
Interestingly, when I captured the plan today (1st May), there is no “bookmark key lookup” operator, even though no index changes were made. Why is it not using that operator now, when the query is the same? I am attaching today's plan and index details as well.
FYI: index statistics are updated on a daily basis.
I would appreciate your response on this. Thanks.
The query you write is just a logical specification of the results you require. SQL Server can choose from a variety of physical execution strategies to produce the results. Which physical plan it chooses depends heavily on statistical information. As the statistical information changes, so might the physical execution plan choice.
With a large and complex query involving a large number of joins, there are more potential strategies than SQL Server has time to consider, so heuristic rules are used to narrow down the search space. In addition, the chances that SQL Server's estimates are accurate quickly diminish as the size of the query increases.
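As a quick sanity check on the statistics side, you can see when each statistics object was last updated and how much the underlying table has changed since then. This is a minimal sketch using the built-in sys.dm_db_stats_properties function; the name dbo.YourTable is a placeholder for one of the tables in your query:

    -- Check when statistics were last updated and how many rows
    -- have changed since then (dbo.YourTable is a placeholder)
    SELECT
        s.name AS statistics_name,
        sp.last_updated,
        sp.rows,
        sp.rows_sampled,
        sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.YourTable');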
Is there any other way to optimize this query?
Of course. It is clearly not reasonable for a query that accesses such small tables and returns so few rows to run for several hours.
The 68% cost percentage you see on the key lookup is just an optimizer estimate, even in a post-execution plan with 'actual' row counts. You should not base your tuning efforts on these numbers alone, because estimates frequently do not match reality. One reason is that estimated row counts might not match actual row counts, meaning the whole basis for the optimizer's execution plan choice was unsound in the first place. This is much more likely in complex query plans, where repeated estimations compound errors.
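If you want to see how far the estimates drift from reality, capture a post-execution ('actual') plan and compare the estimated and actual row counts on each operator. One way to do that is to request the plan alongside the results:

    -- Request the post-execution plan for the following statements;
    -- each operator in the returned plan shows both estimated and
    -- actual row counts, so you can spot where they diverge
    SET STATISTICS XML ON;

    -- ... run the problem query here ...

    SET STATISTICS XML OFF;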
The query plans you posted have a number of other potential issues that look much more serious than the key lookup. There are far too many of these to detail, but most relate to a poor estimate of the number of rows that will be encountered. Some operations, like sorts and hashing, are very sensitive to estimation errors. In any case, inaccurate estimations mean SQL Server will likely choose the wrong physical execution strategy.
Fundamentally, the query just tries to do too much in one step. Quite aside from the problems SQL Server faces in choosing a reasonable execution plan, it looks like a bit of a nightmare for a human to maintain.
You should rewrite the query in a smaller number of steps, using temporary tables to store intermediate results. This will help people understand and maintain the query, make it easier to identify suboptimal indexing or coding practices, and most of all it will help SQL Server produce an efficient execution plan based on accurate estimations.
The query is far too large to attempt such a rewrite here, but this is my advice: break the query into smaller steps, with queries you can tune individually. The intermediate result set sizes should be small, and you might consider adding helpful indexes to the temporary tables as you go. This process might even allow you to remove that final SELECT DISTINCT, which is normally a sign that a query introduces duplicates unintentionally. Breaking the query down will help you identify where the duplicates first appear, so you can eliminate them at the source.
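A minimal sketch of the pattern, using made-up table and column names since I do not know your schema:

    -- Step 1: materialize one logical piece of the big query.
    -- The optimizer gets accurate cardinality for #RecentOrders.
    SELECT
        o.CustomerID,
        o.OrderDate,
        o.TotalDue
    INTO #RecentOrders
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= DATEADD(MONTH, -3, GETDATE());

    -- Step 2: index the intermediate result to support the next join
    CREATE CLUSTERED INDEX cx ON #RecentOrders (CustomerID);

    -- Step 3: join the small, accurately-costed intermediate result
    SELECT
        c.CustomerName,
        r.OrderDate,
        r.TotalDue
    FROM #RecentOrders AS r
    JOIN dbo.Customers AS c
        ON c.CustomerID = r.CustomerID;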
And remove all those NOLOCK hints. If you really want to run at READ UNCOMMITTED isolation, use a SET TRANSACTION ISOLATION LEVEL statement.
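For example, instead of decorating every table reference with a WITH (NOLOCK) hint, set the isolation level once:

    -- Applies to the whole session, so the per-table hints go away
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o;   -- no NOLOCK hint needed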
Finally, note that both query plans you posted request a memory grant of between 2.5 GB and 10 GB. This is again a consequence of inaccurate estimations. There are only a few big operations in the query that should require significant memory, so breaking the query up will help you tune this as well.
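While you tune, you can watch the grants directly. This query against the sys.dm_exec_query_memory_grants DMV shows what each executing query asked for versus what it actually received and used:

    -- Large gaps between ideal/requested and used memory usually
    -- point back to inflated cardinality estimates
    SELECT
        session_id,
        requested_memory_kb,
        granted_memory_kb,
        used_memory_kb,
        ideal_memory_kb
    FROM sys.dm_exec_query_memory_grants;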