How to improve the performance of this query?
Table 'Worktable'. Scan count 12943, logical reads 1588368, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE1'. Scan count 1, logical reads 1, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE2'. Scan count 3, logical reads 4535055, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE3'. Scan count 2, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE4'. Scan count 1, logical reads 2010789007, physical reads 0, read-ahead reads 6, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE5'. Scan count 1, logical reads 1262184, physical reads 0, read-ahead reads 514076, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE6'. Scan count 1, logical reads 305192, physical reads 0, read-ahead reads 146, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE7'. Scan count 1, logical reads 11253, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE8'. Scan count 3, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TABLE9'. Scan count 1, logical reads 126, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
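For reference, per-table I/O figures like the ones above come from enabling statistics output for the session before running the query; a minimal sketch:

```sql
-- Report per-table logical/physical reads and CPU/elapsed time
-- for every statement executed in this session.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the query being tuned here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```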
Two billion reads seems excessive for a scan that returns only 12 million rows without a predicate. You should look into the reasons for this.
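One way to investigate is to pull the actual plan and compare per-operator row counts, execution counts, and reads. As a sketch only (the LIKE filter on TABLE4 is purely illustrative; substitute a fragment of the real query text):

```sql
-- Sketch: find the cached plans that touch TABLE4 and open the plan XML
-- to see which operator is responsible for the bulk of the logical reads.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    st.text       AS query_text,
    qp.query_plan AS plan_xml
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%TABLE4%'
ORDER BY qs.total_logical_reads DESC;
```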
That whole section of the plan is concerning.
Check that you have indexes that properly support these joins and predicates. If there really is no alternative to large hash joins, you should at least look at enabling parallelism; a sketch of both follows.
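As a sketch only, with hypothetical object and column names (dbo.TABLE2, dbo.TABLE4, JoinKeyColumn, FilterColumn, and the selected columns are assumptions, not the actual schema): an index that covers a join and predicate, a check of the instance settings that gate parallelism, and an illustrative degree-of-parallelism hint.

```sql
-- Hypothetical covering index: the key and included columns below are
-- placeholders for the real join keys, predicates, and selected columns.
CREATE NONCLUSTERED INDEX IX_TABLE2_JoinKey
    ON dbo.TABLE2 (JoinKeyColumn, FilterColumn)
    INCLUDE (SelectedColumn1, SelectedColumn2);

-- Confirm the instance-level settings that control parallelism.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');

-- If large hash joins really are unavoidable, allow the statement to run
-- in parallel; the MAXDOP value here is illustrative, not a recommendation.
SELECT t2.SelectedColumn1, t4.SelectedColumn2
FROM dbo.TABLE2 AS t2
JOIN dbo.TABLE4 AS t4
    ON t4.JoinKeyColumn = t2.JoinKeyColumn
WHERE t2.FilterColumn = 'SomeValue'
OPTION (MAXDOP 8);
```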
The query takes 2 hours to run and returns 251K distinct records, but the estimated row count is 12 billion. How can I fix this?
Even with up-to-date statistics, cardinality estimation has its limits. Queries with complex expressions and many joins often benefit from being broken down into smaller pieces. This helps the optimizer and humans alike: tuning smaller sections of code is always easier.
The query plan has several points where the cardinality estimation errors originate. Consider breaking the query apart at those points using temporary tables, as in the sketch below. This also opens up additional tuning options, including indexing the intermediate results.
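A minimal sketch of that pattern, with entirely hypothetical names (#Stage, its columns, and the specific tables joined are assumptions): materialize the part of the query where the estimates go wrong, index the intermediate result, then run the remainder against it.

```sql
-- Step 1: materialize the sub-join whose estimates are badly wrong;
-- the temp table gives the optimizer real row counts and fresh statistics.
SELECT t5.KeyColumn,
       t5.SomeAttribute,
       t6.OtherAttribute
INTO #Stage
FROM dbo.TABLE5 AS t5
JOIN dbo.TABLE6 AS t6
    ON t6.KeyColumn = t5.KeyColumn
WHERE t5.SomeAttribute = 'SomeFilter';

-- Step 2: index the intermediate result to support the remaining joins.
CREATE CLUSTERED INDEX IX_Stage_Key ON #Stage (KeyColumn);

-- Step 3: run the rest of the query against the smaller, indexed set.
SELECT DISTINCT s.KeyColumn, t4.SomeColumn
FROM #Stage AS s
JOIN dbo.TABLE4 AS t4
    ON t4.KeyColumn = s.KeyColumn;

DROP TABLE #Stage;
```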