Why a 5X Slowdown When Adding a Filter?
Attached are the two (very different) plans: one with the filter (slow) and one without (fast). The fast query uses parallelism and a very different plan shape. The indexes in the fast query return far fewer records than those in the slow query, but it should be the opposite.
The information available to the optimizer (and its internal assumptions) means the Index Seek shown below is estimated to return one row, whereas in reality 129,619 rows are produced:
The remainder of the plan is optimized for the expected one row:
This is a very reasonable strategy for one row, but a bit of a disaster for 129,619 rows:
In particular, the whole inner-side tree of the Left Semi Join is executed 129,619 times instead of (the expected) once.
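For reference, one way to see the estimate-versus-actual gap is to capture the actual execution plan, which records both figures for every operator. This is only a minimal sketch; the column name and filter value are hypothetical, since the query text was not posted:

```sql
-- Return the actual execution plan as XML alongside the results; each
-- operator in the plan carries both estimated and actual row counts.
SET STATISTICS XML ON;

-- Hypothetical query shape; substitute the real generated statement.
SELECT pph.PaymentId
FROM dbo.ProviderPaymentHistory AS pph
WHERE pph.PaymentDate >= '20240101';

SET STATISTICS XML OFF;
```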
As a first step, you should ensure database statistics are representative of the underlying data – paying particular attention to the ProviderPaymentHistory table.
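As a minimal sketch of that check and refresh (the dbo schema and FULLSCAN sampling level are assumptions; a sampled update may be enough on a very large table):

```sql
-- Check when each statistics object on the table was last updated and
-- how many row modifications have accumulated since.
SELECT s.name, sp.last_updated, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.ProviderPaymentHistory');

-- Refresh all statistics on the table with a full scan for accuracy.
UPDATE STATISTICS dbo.ProviderPaymentHistory WITH FULLSCAN;
```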
If this were a regular SQL query (instead of generated code) I would suggest adding an OPTION (RECOMPILE) hint to the query, because this will allow the optimizer to see the actual values of parameters and variables at compilation time, rather than treating them as unknowns. If it is possible to add that hint in your environment, please test it.
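A sketch of where the hint would go; the parameter and column names are placeholders, since the generated SQL was not shown:

```sql
-- Placeholder parameters standing in for the generated query's inputs.
DECLARE @ProviderId integer = 42, @StartDate date = '20240101';

-- With OPTION (RECOMPILE), the statement is compiled per execution
-- using the runtime values of @ProviderId and @StartDate, so the
-- optimizer no longer treats them as unknowns.
SELECT pph.PaymentId
FROM dbo.ProviderPaymentHistory AS pph
WHERE pph.ProviderId = @ProviderId
  AND pph.PaymentDate >= @StartDate
OPTION (RECOMPILE);
```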
The combined predicate on the index highlighted above is nevertheless a complex one, for which it is difficult to estimate expected cardinality:
Some of this (and the Merge Interval subtree) could be simplified, removed, or at least assessed more accurately if OPTION (RECOMPILE) were used. Again, if you were using a regular stored procedure, I would suggest trying to materialize the 129,619-row set in a temporary table, then completing the rest of the query against that (with accurate cardinality, and perhaps additional indexing).
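A sketch of that two-step shape, with hypothetical column names and a hypothetical dbo.ProviderClaims table standing in for the real inner side of the semi join:

```sql
-- Placeholder parameters for illustration.
DECLARE @ProviderId integer = 42, @StartDate date = '20240101';

-- Step 1: materialize the large intermediate set. The temp table gives
-- the optimizer an accurate row count (129,619) for the steps after it.
SELECT pph.PaymentId, pph.ProviderId, pph.PaymentDate
INTO #Payments
FROM dbo.ProviderPaymentHistory AS pph
WHERE pph.ProviderId = @ProviderId
  AND pph.PaymentDate >= @StartDate;

-- Optional: index the intermediate set to support the remaining joins.
CREATE CLUSTERED INDEX cx_Payments ON #Payments (PaymentId);

-- Step 2: run the rest of the query against the temp table. The EXISTS
-- corresponds to the Left Semi Join in the original plan.
SELECT p.PaymentId
FROM #Payments AS p
WHERE EXISTS
(
    SELECT 1
    FROM dbo.ProviderClaims AS pc
    WHERE pc.PaymentId = p.PaymentId
);
```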
To answer your other point, removing the extra filter simply changes the estimates in a way that happens to produce a better-performing plan.