That's one big query (and execution plan) you've got there. Wow!
You are joining no fewer than 27 tables. That's a lot! And almost all of them are joined without any filtering, so SQL Server needs to combine all rows to produce the correct results. That's a lot of work!
I see that the query finished in 8.18 seconds. That's actually quite good for a query of this size. I also see that the compile time was 4.4 seconds, though I cannot tell whether that is included in the 8.18 seconds total execution time (you may have captured an execution that grabbed the plan from the plan cache). For a query this complex, with so many options for the optimizer to consider, 4.4 seconds of compile time does not surprise me either.
However, you asked about possible optimizations. And I do see a few, although a query this complex, dealing with such a large amount of data, will never break any speed records.
First: Do you really need all the rows that are returned? Is this for a report that really needs all rows? Or is this sent to a client that then uses only some of the rows? Or does some aggregation on it and then shows the aggregated results? If the client uses only a few rows, change the query to return only those rows. If the client only shows an aggregated total, change the query to return that aggregated total.
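As a hypothetical illustration of that last point (all table and column names below are invented, since I don't know your schema): if the client only ever shows a total per customer, let SQL Server compute that total instead of shipping every detail row to the client:

```sql
-- Instead of returning thousands of detail rows and aggregating client-side,
-- return only the aggregated totals the client actually displays.
SELECT d.ClienteId,
       SUM(d.Importe)  AS TotalImporte,
       COUNT(*)        AS NumLineas
FROM   dbo.Detalles AS d          -- hypothetical detail table
GROUP BY d.ClienteId;
```

This shrinks both the work done locally and the amount of data sent over the network.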
Second: All joins are LEFT OUTER JOIN. Are you aware of the difference between INNER JOIN and LEFT OUTER JOIN? I have very often seen code where a LEFT OUTER JOIN was used even though it's not needed. An INNER JOIN can often be evaluated slightly faster, so if you can replace LEFT OUTER JOIN with INNER JOIN in places where it does not affect the results, then please do so.
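To illustrate the difference with a minimal sketch (invented table names): a LEFT OUTER JOIN keeps rows from the left table even when there is no match, filling the right-hand columns with NULL; an INNER JOIN drops those rows. If unmatched rows can never occur (for example, because a trusted foreign key guarantees a match) or are not wanted, the INNER JOIN version gives the optimizer more freedom:

```sql
-- Keeps ALL rows from Pedidos; Clientes columns are NULL where no match exists.
SELECT p.PedidoId, c.Nombre
FROM   dbo.Pedidos AS p
LEFT OUTER JOIN dbo.Clientes AS c
  ON   c.ClienteId = p.ClienteId;

-- Keeps ONLY rows that have a matching customer.
SELECT p.PedidoId, c.Nombre
FROM   dbo.Pedidos AS p
INNER JOIN dbo.Clientes AS c
  ON   c.ClienteId = p.ClienteId;
```

If both forms return the same rows for your data, prefer the INNER JOIN.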
Third: The first join looks like this:
I think (but do test and check results!!) that this is equivalent to the below, which may or may not be faster depending on circumstances beyond my control:
(And if the LEFT OUTER JOIN could have been an INNER JOIN, then the OUTER APPLY should become a CROSS APPLY instead).
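Since the original join text is not reproduced here, a hypothetical illustration of the general pattern (all names invented): a LEFT OUTER JOIN to an aggregating derived table can often be rewritten as an OUTER APPLY with the correlation moved inside the subquery, which sometimes lets the optimizer avoid aggregating rows it will never use:

```sql
-- Original shape: LEFT OUTER JOIN to a derived table that aggregates everything.
SELECT p.PedidoId, t.UltimaFecha
FROM   dbo.Pedidos AS p
LEFT OUTER JOIN
       (SELECT v.ClienteId, MAX(v.Fecha) AS UltimaFecha
        FROM   dbo.Visitas AS v
        GROUP BY v.ClienteId) AS t
  ON   t.ClienteId = p.ClienteId;

-- Possible rewrite: OUTER APPLY, with the correlation inside the subquery.
SELECT p.PedidoId, t.UltimaFecha
FROM   dbo.Pedidos AS p
OUTER APPLY
       (SELECT MAX(v.Fecha) AS UltimaFecha
        FROM   dbo.Visitas AS v
        WHERE  v.ClienteId = p.ClienteId) AS t;
```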
Again, not guaranteed to help, but worth testing.
Fourth: There are a lot of "Remote Query" operators, used to fetch data from another server. This corresponds to all the joins to remote tables, all of which are on linked server [SQL2005\CERCO64]. I do not know your data and I cannot fully decipher everything going on in your query without spending more time than I can justify, but perhaps you know the data well enough: is there a logical way to join all these tables remotely on the linked server and then do a single join to those remote results? If so, then we can try to rewrite the query in a way that "forces" the optimizer to do those joins remotely. In the current execution plan, only two joins are done remotely; all other joins are done locally, which may or may not be slower.
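One common way to force the joins between remote tables to run on the linked server is OPENQUERY, which sends a literal query string to the remote server and returns a single result set. A hypothetical sketch (table, column, and database names invented):

```sql
-- Push the join between the remote tables to the linked server, so only
-- one combined result set crosses the wire, then join to it locally.
SELECT loc.DistribuidorId, r.Nombre, r.Direccion
FROM   dbo.LocalTable AS loc
JOIN   OPENQUERY([SQL2005\CERCO64],
         'SELECT c.ClienteId, s.Nombre, s.Direccion
          FROM   dbo.clientes AS c
          JOIN   dbo.sujetos  AS s ON s.SujetoId = c.SujetoId') AS r
  ON   r.ClienteId = loc.DistribuidorId;
```

Note that OPENQUERY requires a literal string (no variables), so this works best when the remote part of the query is fixed.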
Fifth: There are two places where you join the remote tables clientes and sujetos, with the same join logic. The first time, the result is then joined to the local tables based on DistribuidorId; the second time on ClienteId. It might be better (again, you'll need to test) to first join the clientes and sujetos tables and store the results (only the columns you need!!) in a local temporary table, then index that temporary table, and then join to it twice in the query.
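A hypothetical sketch of that idea (names invented, and the four-part remote names are assumptions about your setup):

```sql
-- Materialize the remote clientes/sujetos join ONCE, keeping only the
-- columns the big query actually needs.
SELECT c.ClienteId, s.Nombre
INTO   #ClientesSujetos
FROM   [SQL2005\CERCO64].SomeDb.dbo.clientes AS c
JOIN   [SQL2005\CERCO64].SomeDb.dbo.sujetos  AS s
  ON   s.SujetoId = c.SujetoId;

-- Index the temporary table to support the two joins that follow.
CREATE CLUSTERED INDEX ix_ClientesSujetos ON #ClientesSujetos (ClienteId);

-- Then, in the big query, replace both remote join pairs with:
--   ... JOIN #ClientesSujetos AS dist ON dist.ClienteId = x.DistribuidorId ...
--   ... JOIN #ClientesSujetos AS cli  ON cli.ClienteId  = y.ClienteId ...
```

This fetches the remote data once instead of twice, and gives the local joins an indexed table to work with.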
Sixth: With a query as complex as this (as in: lots of tables; there is not much weird stuff going on, just a lot of them), sometimes it is better to simply split the work into two or three separate queries, storing intermediate results in temporary tables. Yes, that does mean that you deny the optimizer some of its options, such as changing the join order. But it also means that you are not overwhelming the optimizer with too many options to choose from. Of course, it does depend on HOW you split the query.
I hope this helps!
Answered by Hugo Kornelis, Jan 28 at 09:35 PM