Need help interpreting Execution Plans to improve performance
The top branch of the plan, which retrieves the employee information, differs between the good and bad plans. The bad plan expects one row from the employee data, while the good plan expects 271. Those cardinality estimates change the access patterns chosen for the rest of the data.
As the days move on, what data changes in those tables?
I think Aaron's OPTION(RECOMPILE) suggestion will give you more consistent results.
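For reference, OPTION (RECOMPILE) is just a query-level hint appended to the statement. A minimal sketch, with invented table and column names:

    DECLARE @StartDate date = '2024-01-01';

    -- Hypothetical query shape: OPTION (RECOMPILE) compiles a fresh plan
    -- for the actual variable values on every execution, at some CPU cost
    SELECT e.EmployeeID, e.HireDate
    FROM dbo.Employees AS e
    WHERE e.HireDate >= @StartDate
    OPTION (RECOMPILE);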
The root cause of the problem is that the query is not very optimizer-friendly as written. The optimizer does a remarkable job in the 'good plan' case in spite of this. Nevertheless, that outcome is likely as much down to luck as anything.
By 'not optimizer-friendly' I mean the query uses some unusual constructions that typically fall outside the optimizer's cost model and therefore result in less reliable plan generation. The specific 'unusual constructions' are the complex AND/OR predicates at various levels (which make cardinality estimation essentially guesswork); the (SELECT COUNT(*) FROM …) = 0 syntax, which is an odd way to write a NOT EXISTS clause; and the (SELECT columns FROM …) IS NULL pattern, another odd way to express NOT EXISTS. This last form also requires a Stream Aggregate plus Assert to check that the subquery produces at most one row on each correlated execution.
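To make that concrete, here is a sketch (with invented table and column names) of the COUNT(*) = 0 pattern alongside its NOT EXISTS equivalent, which the optimizer can treat directly as an anti-semi-join:

    -- Odd form: counts every matching row just to compare the total with zero
    SELECT e.EmployeeID
    FROM dbo.Employees AS e
    WHERE
    (
        SELECT COUNT(*)
        FROM dbo.MasterOrders AS m
        WHERE m.EmployeeID = e.EmployeeID
    ) = 0;

    -- Equivalent NOT EXISTS form: can stop at the first matching row
    SELECT e.EmployeeID
    FROM dbo.Employees AS e
    WHERE NOT EXISTS
    (
        SELECT 1
        FROM dbo.MasterOrders AS m
        WHERE m.EmployeeID = e.EmployeeID
    );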
All these things (and more) make this query less readable for humans as well as making life hard for the query optimizer. My preference would be to rewrite and simplify the query to make the logic clearer – helping the optimizer produce more reliable and stable plans while making future maintenance easier too.
Specifically, I would rewrite the odd code sections using the appropriate NOT EXISTS syntax. I would also break the query into up to three parts, using temporary tables to store the small intermediate result sets. This is a very lightweight technique, and it gives the optimizer crucial cardinality and statistical-distribution information.
The first logical part of the query joins the employee tables (list, hours, and schedules), applying the relevant predicates as it goes. The results (just the columns needed!) should be stored in a temporary table and indexed appropriately.
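A sketch of that first stage, using invented names for the tables, columns, and predicates:

    -- Stage 1: join the employee tables, apply the predicates,
    -- and keep only the columns the later stages need
    SELECT
        el.EmployeeID,
        eh.HoursWorked,
        es.ShiftDate
    INTO #Stage1
    FROM dbo.EmployeeList AS el
    JOIN dbo.EmployeeHours AS eh
        ON eh.EmployeeID = el.EmployeeID
    JOIN dbo.EmployeeSchedules AS es
        ON es.EmployeeID = el.EmployeeID
    WHERE el.IsActive = 1; -- plus whatever other predicates belong here

    -- Index to support the NOT EXISTS probes in the next stage
    CREATE CLUSTERED INDEX cx ON #Stage1 (EmployeeID);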
The second stage checks for the absence of matches (NOT EXISTS) in the Master tables. This query should be easy to write naturally given the results from stage 1. You should also be able to avoid the index spools currently present in the 'good plan': building an index on the fly for every execution is normally significantly less efficient than providing a permanent, useful index on the base table.
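Continuing the sketch (again with invented names), stage 2 becomes a straightforward anti-join from the stage-1 temporary table, supported by a permanent index on the Master table instead of a per-execution spool:

    -- A permanent index on the probe column makes the spool unnecessary
    -- (hypothetical name; created once, not per execution)
    CREATE INDEX IX_MasterOrders_EmployeeID
        ON dbo.MasterOrders (EmployeeID);

    -- Stage 2: keep only the rows with no match in the Master table
    SELECT s1.EmployeeID, s1.HoursWorked, s1.ShiftDate
    INTO #Stage2
    FROM #Stage1 AS s1
    WHERE NOT EXISTS
    (
        SELECT 1
        FROM dbo.MasterOrders AS m
        WHERE m.EmployeeID = s1.EmployeeID
    );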
The third stage (if necessary – the result from the previous stages is empty in both your examples) is another NOT EXISTS operation of the same shape, replacing the COUNT(*) = 0 code.
This may seem like more work, but it is the difference between applying a band-aid and truly fixing the problem in a robust and permanent way. It is almost always preferable to write queries that optimize well rather than complex, hard-to-analyze queries whose performance may suddenly vary at any time.