I have a query that has been optimized to the best of my current capabilities. It is doing a summary of the entire data (18 million records) in a transactional table, grouped by company subsidiary.
At this point, I have the query tuned from 3 minutes to 2 minutes. The table being summarized contains 81 fields per record. I tried a test and partitioned the data, then added a columnstore index to each partition, which brought the query down from an average of 2 minutes to less than 20 seconds.
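For reference, a columnstore index like the one described above can be created as sketched below. The table and column names here are hypothetical, since the actual schema isn't shown in the question:

```sql
-- Option 1: convert the whole table to columnstore storage
-- (replaces the clustered rowstore index).
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Transactions
    ON dbo.Transactions;

-- Option 2: keep the rowstore design and add a nonclustered
-- columnstore covering only the columns the summary query reads.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Transactions_Summary
    ON dbo.Transactions (SubsidiaryID, Amount, Quantity);
```

Option 2 is often the lower-risk test on a transactional table, since the existing rowstore indexes and query plans stay in place.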
Looking at this original query, can anyone provide some suggestions as to things to look at to try to improve the processing time, other than the partitions and columnstore indexes?
By DBNewbie_2007 0 asked Dec 21, 2017 at 06:17 PM
You're processing 18.3 million rows, without a filter. Reading 18.3 million rows from a rowstore index in an 81-column-wide table is actually pretty fast. I don't think this can be made to go faster in any way other than those you described.
A columnstore compresses data much better (less IO), and allows you to read only the columns used in this query (even less IO). If you look at STATISTICS IO, you should see a huge decline in the logical reads.
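A quick way to compare the two designs, using hypothetical table/column names since the actual query isn't shown:

```sql
SET STATISTICS IO ON;

-- Run the summary query against the rowstore table, then again
-- against the columnstore version, and compare the "logical reads"
-- figures reported on the Messages tab.
SELECT SubsidiaryID, SUM(Amount) AS TotalAmount
FROM dbo.Transactions
GROUP BY SubsidiaryID;

SET STATISTICS IO OFF;
```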
A columnstore also enables the query to execute in batch mode, which makes things go orders of magnitude faster.
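On SQL Server versions before 2019, batch mode generally requires a columnstore index to be present on the table. A known workaround, if you want batch mode without actually storing data as columnstore, is a filtered nonclustered columnstore index whose filter matches no rows (names below are hypothetical):

```sql
-- Empty filtered columnstore index: stores nothing, but makes the
-- table eligible for batch-mode execution on pre-2019 versions.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_BatchModeEnabler
    ON dbo.Transactions (SubsidiaryID)
    WHERE SubsidiaryID = -1 AND SubsidiaryID = -2;
```

You can confirm batch mode by capturing the actual execution plan and checking the "Actual Execution Mode" property on the scan and aggregate operators.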
(I don't think that the partitioning you added did anything for performance)
By Hugo Kornelis 271 answered Dec 21, 2017 at 07:06 PM