Additional Tuning Opportunities

I have a query that has been optimized to the best of my current capabilities. It summarizes the entire contents of a transactional table (18 million records), grouped by company subsidiary.

At this point, I have tuned the query from 3 minutes down to 2 minutes. The table being summarized contains 81 fields per record. As a test, I partitioned the data and added a columnstore index to each partition, which brought the query down from an average of 2 minutes to under 20 seconds.
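For reference, the test was set up along these lines (all object names here are made up for illustration):

```sql
-- Partition the transactional data by year, then add a
-- columnstore index created on the partition scheme.
CREATE PARTITION FUNCTION pf_TxnYear (date)
AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME ps_TxnYear
AS PARTITION pf_TxnYear ALL TO ([PRIMARY]);

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Transactions
ON dbo.Transactions (TxnDate, SubsidiaryID, Amount)
ON ps_TxnYear (TxnDate);
```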

Looking at this original query, can anyone suggest other things to look at to try to improve the processing time, besides the partitions and columnstore indexes?

By DBNewbie_2007, asked Dec 21, 2017 at 06:17 PM

1 answer:

You're processing 18.3 million rows, without a filter. Reading 18.3 million rows from a rowstore index on an 81-column-wide table is actually pretty fast. I don't think this can be made to go faster in ways other than the ones you described.

A columnstore compresses data much better (less I/O), and allows you to read only the columns used in this query (even less I/O). If you look at STATISTICS IO, you should see a huge decline in the logical reads.
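A quick way to see that decline, using hypothetical table and column names in place of the real ones:

```sql
-- With STATISTICS IO on, the Messages tab reports logical reads
-- for each table the query touches.
SET STATISTICS IO ON;

SELECT SubsidiaryID,
       COUNT(*)    AS TxnCount,
       SUM(Amount) AS TotalAmount
FROM   dbo.Transactions
GROUP  BY SubsidiaryID;
-- Compare the "logical reads" figure before and after
-- adding the columnstore index.
```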

A columnstore also enables the query to execute in batch mode, which makes things go orders of magnitude faster.

(I don't think that the partitioning you added did anything for performance)

By Hugo Kornelis, answered Dec 21, 2017 at 07:06 PM
DBNewbie_2007 commented Dec 21, 2017 at 07:30 PM

Hugo, thanks for the update. Yes, 18.3 million is a lot without a filter; we have even bigger tables, but that is a separate topic! LOL Unfortunately, as I mentioned, this particular query summarizes all of the data currently in the table (going back to Jan 2016). This allows the various divisions to review data and performance monthly, quarterly, or yearly, depending on the grouping they use in our web reporting system.

The reason for the partitioning was the columnstore indexes and trying to prevent or minimize rebuilds or drop/creates in an OLTP database. It made sense because, prior to SQL Server 2016, a table with a nonclustered columnstore index became read-only, so the partitioning meant only a smaller set of data had to be rebuilt. When we first started the columnstore tests, it was with SQL Server 2014.
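For what it's worth, the per-partition maintenance looked roughly like this (index, table, and partition number are hypothetical):

```sql
-- Rebuild only the partition that received new data,
-- instead of rebuilding the whole columnstore index.
ALTER INDEX NCCI_Transactions
ON dbo.Transactions
REBUILD PARTITION = 24;  -- e.g. the current month's partition
```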

Hugo Kornelis commented Dec 21, 2017 at 07:35 PM

Oh sure, I never wanted to imply that partitioning is a bad idea. With tables this large, it makes a lot of sense to ease the pain of all kinds of table and index management.

I just wanted to make sure that neither you nor other people get the impression that partitioning by itself is a tuning tool. That appears to be a common misconception, so I just wanted to make that clear. ;)
