The query is now taking around 13 minutes to execute, where previously it completed in a fraction of a second. Nothing about the query itself has changed, yet it now takes 13+ minutes to run.
There is a table that has roughly 20K records updated per month.
If you know the previous plan will always be good, regardless of how the data changes in future, you could look to capture that plan and use it in a Plan Guide. Exactly how you capture the plan depends on your circumstances. For example, the prior plan may still be in the plan cache, or may be reproducible on your test/QA/dev environment.
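As a rough sketch of the plan guide approach: once you have the XML showplan for the good plan (for example, from `sys.dm_exec_query_plan` while it is still cached), you can attach it with a `USE PLAN` hint via `sp_create_plan_guide`. All names and the query text below are hypothetical placeholders; the statement text must match your query exactly, character for character:

```sql
-- Hypothetical sketch: pin a previously captured plan with a plan guide.
-- @plan_xml holds the XML showplan captured from the plan cache or a
-- test/QA environment (single quotes must be doubled for embedding).
DECLARE @plan_xml nvarchar(max) = N'<ShowPlanXML ...>...</ShowPlanXML>';

DECLARE @hints nvarchar(max) =
    N'OPTION (USE PLAN N''' + REPLACE(@plan_xml, N'''', N'''''') + N''')';

EXECUTE sys.sp_create_plan_guide
    @name            = N'PG_PinGoodPlan',        -- hypothetical name
    @stmt            = N'SELECT ...;',           -- exact query text
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = @hints;
```

Bear in mind a pinned plan is forced regardless of future data changes, which is exactly why it is only appropriate when you are sure the plan will always be good.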
The second option requires more effort and expertise, and is suitable if you need to ensure the query will produce good plans in future, regardless of how the data changes. It typically involves providing better access methods (e.g. indexes) and better information about the data to the optimizer.
For example, starting at the far right of the plan, the first sign of trouble is that the optimizer is having to construct an index intersection (two indexes on the same table) with a lookup to locate the data:
This whole section of the plan can often be replaced by a single covering index. An important benefit of doing this is that the new index would normally provide more accurate statistics, so the current estimate of 1 row would likely be replaced by something closer to the 37 rows actually encountered. This would be one of the first things I would look to change, then continue the analysis after the impact of the indexing change is assessed.
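The shape of such a covering index looks like the following. The table and column names here are invented placeholders; the idea is to key the index on the seek predicates and `INCLUDE` every other column the query reads, so the index intersection and lookup both disappear:

```sql
-- Hypothetical sketch: a single covering index replacing the
-- index intersection (two indexes on the same table) + lookup.
CREATE NONCLUSTERED INDEX IX_Table5_Covering
ON dbo.Table5 (FilterColumnA, FilterColumnB)  -- the seek predicate columns
INCLUDE (OutputColumn1, OutputColumn2);       -- other columns the query returns
```

Because the index leading columns now match the query's predicates, the statistics histogram built on them tends to give the optimizer a much better row estimate than the 1-row guess it is currently working with.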
The next section of the plan is quite likely to change after the indexing improvement mentioned above, but it is still worth looking at quickly:
This is a great example of how small estimation errors can quickly build up. The start of the issue is the 1 row estimated versus 37 actual mentioned above, but it is quickly magnified by the join to Table 7. If the covering index suggested before does not resolve the problem, I would be tempted to materialize the 3,968 rows resulting from the join to Table 8 using a temporary table. This intermediate table can automatically provide statistics about the 3,968 rows to the optimizer, and also offers an opportunity to introduce additional indexes if they are warranted.
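The temporary table idea amounts to splitting the query at the problem join. Everything below is a sketch with invented names; substitute your real tables and columns:

```sql
-- Hypothetical sketch: materialize the intermediate join result so the
-- optimizer sees an exact row count and fresh column statistics.
SELECT t7.KeyCol, t8.ValueCol
INTO #Joined
FROM dbo.Table7 AS t7
JOIN dbo.Table8 AS t8
    ON t8.KeyCol = t7.KeyCol;   -- the join producing ~3,968 rows

-- Optional: a supporting index if the rest of the query seeks on KeyCol.
CREATE CLUSTERED INDEX cx ON #Joined (KeyCol);

-- The remainder of the original query then reads from #Joined.
```

Unlike table variables, temporary tables get real statistics, which is the whole point of the exercise here: the downstream plan is compiled against 3,968 known rows instead of a compounded misestimate.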
After this point, the plan is so likely to change out of all recognition that it is not really worth going into minute detail. However, there are a couple of things about the shape of the plan that I want to mention:
The Assert plus Stream Aggregate pattern means you have a correlated subquery that is not guaranteed to return 1 row per iteration, whereas correctness requires it to do so. This is the shape that would cause a "Subquery returned more than 1 value" error message. While you are not seeing this error, your SQL construction and database design do not currently allow the optimizer to skip this expensive check. You should review your query and unique constraints & indexes to remove this unnecessary work, if possible.
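Often the fix is simply to declare a uniqueness guarantee that already holds in the data. A sketch, with invented names, assuming the correlated subquery's table really is unique on the correlation key:

```sql
-- Hypothetical sketch: declaring uniqueness lets the optimizer prove
-- the subquery returns at most one row per iteration, so the
-- Assert + Stream Aggregate check can be removed from the plan.
ALTER TABLE dbo.LookupTable
ADD CONSTRAINT UQ_LookupTable_KeyCol UNIQUE (KeyCol);
```

A unique nonclustered index on the same column(s) achieves the same effect; either way the guarantee must be declared to the engine, not merely true in practice, for the optimizer to exploit it.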
This pattern means you have multiple aggregates, including at least one DISTINCT aggregate, in your query. SQL Server 2012 introduced a clever optimizer rewrite that can often remove this expensive pattern, and it is also often possible to refactor the query manually to achieve the same effect. See this link (and the references) for details. That said, the number of rows actually involved in the Eager Spool is relatively small, so this is likely not a performance issue right now. Nevertheless, it is worth becoming familiar with the issue, and the workarounds.
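The manual refactoring splits the DISTINCT aggregate from the plain aggregates so each can be computed in its own single pass, avoiding the spool. A sketch with invented table and column names:

```sql
-- Hypothetical sketch: compute plain and DISTINCT aggregates separately,
-- then join the results, instead of mixing them in one aggregation
-- (which is what produces the Eager Spool pattern).
SELECT a.GroupCol, a.TotalQty, d.DistinctItems
FROM
(
    SELECT GroupCol, SUM(Qty) AS TotalQty
    FROM dbo.Orders
    GROUP BY GroupCol
) AS a
JOIN
(
    SELECT GroupCol, COUNT(DISTINCT ItemID) AS DistinctItems
    FROM dbo.Orders
    GROUP BY GroupCol
) AS d
    ON d.GroupCol = a.GroupCol;
```

With small row counts the spool is cheap and this rewrite is unnecessary, but it becomes worthwhile when the spooled set grows.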