Table with 789.6 million records takes 5 minutes to execute the provided SELECT statement
Database1.Schema1.Object5:
Total Records : 789.6 million
# of records between 01/01/2014 and 01/31/2014 : 28.2 million
My table has around 789 million records and is partitioned on "Column19" by month and year. The clustered index is on Column19.
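For context, a monthly partitioning layout like the one described usually looks something like the sketch below. This is illustrative only: the object names here are the anonymized ones from the plan (Object5, Column19), and the boundary values, filegroup, and data type are assumptions.

```sql
-- Sketch of a month/year partitioning scheme with the clustered index
-- aligned on the partitioning column (all names/values are placeholders).
CREATE PARTITION FUNCTION PF_Monthly (datetime)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME PS_Monthly
AS PARTITION PF_Monthly ALL TO ([PRIMARY]);

CREATE CLUSTERED INDEX CIX_Object5_Column19
ON Schema1.Object5 (Column19)
ON PS_Monthly (Column19);
```

With this layout, a predicate on Column19 covering 01/01/2014-01/31/2014 should allow the optimizer to eliminate all but one partition, which is consistent with reading 28.2 million of the 789.6 million rows.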
Database1.Schema1.Object6:
Total Records : 24791
Database1.Schema1.Object7:
Total Records : 311
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 2 ms.
Table 'Database1.Schema1.Object6'. Scan count 9, logical reads 1082, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Database1.Schema1.Object7'. Scan count 4, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Database1.Schema1.Object5'. Scan count 9, logical reads 280072, physical reads 283, read-ahead reads 130274, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(17064 row(s) affected)
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 697975 ms, elapsed time = 254160 ms.
Also, given the relatively small number of unique values compared with the estimate:
I would also try an OPTION (HASH GROUP) hint because hash aggregate usually performs better than Sort + Stream Aggregate with a smaller number of distinct values. The memory grant will be far too large of course, but that's a secondary problem.
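The hint goes in the OPTION clause at the end of the statement. Since the actual query is not shown, the shape below is hypothetical; only the hint syntax itself is the point.

```sql
-- Hypothetical query shape; the real statement is anonymized above.
SELECT o5.Column19, COUNT(*) AS cnt
FROM Schema1.Object5 AS o5
WHERE o5.Column19 >= '2014-01-01'
  AND o5.Column19 <  '2014-02-01'
GROUP BY o5.Column19
OPTION (HASH GROUP);  -- force a hash aggregate instead of Sort + Stream Aggregate
```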
Rewriting the query to aggregate earlier would definitely be my first instinct, though.
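One common way to aggregate earlier is to collapse the large table inside a CTE or derived table before joining the small lookup tables, so the join processes thousands of rows instead of millions. A minimal sketch, assuming Object6 is a lookup table and using a placeholder join column (`JoinKey`) and output column (`SomeColumn`), since the real columns are anonymized:

```sql
-- Sketch: aggregate the 789M-row table first, then join the much smaller result.
;WITH agg AS
(
    SELECT o5.JoinKey, COUNT(*) AS cnt        -- JoinKey is a placeholder name
    FROM Schema1.Object5 AS o5
    WHERE o5.Column19 >= '2014-01-01'
      AND o5.Column19 <  '2014-02-01'
    GROUP BY o5.JoinKey
)
SELECT a.JoinKey, o6.SomeColumn, a.cnt
FROM agg AS a
JOIN Schema1.Object6 AS o6
    ON o6.JoinKey = a.JoinKey;
```

The idea is that the expensive scan and aggregate happen once over the partition-eliminated data, and the joins to the 24,791- and 311-row tables then touch only the aggregated rows.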