I want to optimize this query's execution to reduce its duration.
In this case, 3MB of data was expected, but 110MB was encountered. The underlying cause is an inaccurate cardinality estimate at some earlier point in the plan. In a query this complex, with this many joins, some estimation error is probably unavoidable, though you should certainly check that you are providing all the statistics the optimizer needs, including manually creating any useful multi-column statistics.
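As a sketch of that last point, using hypothetical table and column names: if the plan shows a large misestimate after a predicate on two correlated columns, a multi-column statistics object can give the optimizer correlation information that single-column statistics cannot capture.

```sql
-- Hypothetical example: dbo.Orders and the (Region, Status) columns are
-- placeholders for wherever your plan shows the misestimate starting.
CREATE STATISTICS stats_Orders_Region_Status
ON dbo.Orders (Region, Status)
WITH FULLSCAN;

-- Also worth confirming the existing statistics are current:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

Compare estimated versus actual row counts in the plan before and after to see whether the new statistics object changed the estimate.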
The sorts are required in this case because a Merge Join needs sorted inputs. You might consider a Hash Join instead, but that join type also requires a memory grant, for its hash table, so spills can and do occur with that join type as well.
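If you want to confirm which operators are spilling, SQL Server raises sort_warning and hash_warning events when a sort or hash operation spills to tempdb. A minimal Extended Events session to capture both (the session name is arbitrary):

```sql
-- Sketch: capture sort and hash spill warnings server-wide.
CREATE EVENT SESSION spill_monitor ON SERVER
    ADD EVENT sqlserver.sort_warning,
    ADD EVENT sqlserver.hash_warning
    ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION spill_monitor ON SERVER STATE = START;
```

In an actual execution plan you will also see spill warnings directly on the affected Sort and Hash Match operators, which is usually the quicker check for a single query.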
A Nested Loops join does not require a memory grant, so you could look at achieving that join type instead, though it will take more work than just adding an OPTION (LOOP JOIN) query hint! Nevertheless, the plan produced with that hint should help you see why the optimizer is choosing Hash and Merge joins, and will probably point you toward additional indexing that would make nested loops perform well, if that is possible.
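To make that concrete, again with hypothetical names: the hint forces loop joins everywhere so you can inspect the resulting plan, and nested loops only perform well when the inner side has an index that seeks on the join key, ideally covering the columns the query needs.

```sql
-- Hypothetical: force nested loops just to inspect the plan shape.
SELECT o.OrderID, ol.Quantity
FROM dbo.Orders AS o
JOIN dbo.OrderLines AS ol
    ON ol.OrderID = o.OrderID
OPTION (LOOP JOIN);

-- A supporting index on the inner side's join key, covering the
-- referenced columns, is what makes nested loops cheap:
CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
ON dbo.OrderLines (OrderID)
INCLUDE (Quantity);
```

Note that OPTION (LOOP JOIN) applies to every join in the query, so treat it as a diagnostic tool rather than a permanent fix.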
Ultimately, the query is too deep (too many joins) to expect accurate cardinality estimation throughout. I would normally recommend splitting the query into parts at the point where estimation errors creep in, but you may be limited in what you can do if the query must remain a single SELECT in a view for whatever reason.
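Where splitting is an option, a common pattern is to materialize the intermediate result into a temporary table at the point where the estimates go wrong; temporary tables get their own statistics, so the second query starts from accurate cardinalities. A sketch with hypothetical names (this pattern is only available in a stored procedure or batch, not inside a view):

```sql
-- Stage the part of the query up to where estimates were still accurate.
SELECT o.OrderID, o.CustomerID, o.OrderDate
INTO #Staged
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20240101';

-- The optimizer now has real statistics on #Staged for the remaining joins.
SELECT s.OrderID, c.CustomerName
FROM #Staged AS s
JOIN dbo.Customers AS c
    ON c.CustomerID = s.CustomerID;
```

Prefer a temporary table over a table variable here, since table variables do not carry column statistics in the same way.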