Any ideas as to how I might increase efficiency yet retain the same results?

DataBASS 2017-10-10 03:44:02

At a low row count this works exceedingly well, but I will be running this on a table with 500,000-plus rows, and at that point the query becomes very slow. Is there anything I can do to get more performance? I would appreciate the input.

SDyckes2 2017-10-10 15:29:49
If you are planning to scale this code to handle 500,000+ rows, I would strongly consider using temp tables instead of table variables. Table variables are good for smaller chunks of data.
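As a rough sketch of the swap (the table and column names here are hypothetical, since I don't have your schema in front of me):

```sql
-- Table variable: the optimizer assumes roughly 1 row and keeps no statistics
DECLARE @Results TABLE (ID INT, Amount DECIMAL(10, 2));

-- Temp table equivalent: statistics are maintained, so row estimates
-- improve as the data grows
CREATE TABLE #Results (ID INT, Amount DECIMAL(10, 2));

INSERT INTO #Results (ID, Amount)
SELECT ID, Amount
FROM dbo.SourceTable;   -- hypothetical source table

-- ... run your existing logic against #Results instead of @Results ...

DROP TABLE #Results;
```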

As you can see in the image below, the query optimizer estimates 1 record for the table variable but actually reads 35 records and returns 33. As the dataset scales, that misestimate will become a potential bottleneck for your query.

[Image: execution plan showing the table variable's estimated rows (1) versus actual rows read (35) and returned (33)]

With temp tables, you may get a better estimate from the query optimizer, and you can add indexes to help avoid potentially costly table scans.
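For example, something along these lines once the temp table is loaded (again, hypothetical names, and you'd index whichever column your joins or filters actually use):

```sql
-- A clustered index on the join/filter column can turn a table scan
-- into a seek; test whether the index build cost pays for itself.
CREATE CLUSTERED INDEX IX_Results_ID ON #Results (ID);
```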

I would also test using a working table instead of a CTE to produce your end result. Mileage may vary with CTEs, so you will want to test both approaches with the larger datasets.
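By "working table" I mean materializing the intermediate result once instead of re-deriving it wherever the CTE is referenced. A rough sketch of the pattern, with made-up table names:

```sql
-- CTE version: the definition may be re-evaluated each time it is referenced
;WITH LatestOrders AS
(
    SELECT CustomerID, MAX(OrderDate) AS LastOrderDate
    FROM dbo.Orders          -- hypothetical table
    GROUP BY CustomerID
)
SELECT *
FROM LatestOrders;

-- Working-table version: compute once, store, then reuse (and index if needed)
SELECT CustomerID, MAX(OrderDate) AS LastOrderDate
INTO #LatestOrders
FROM dbo.Orders
GROUP BY CustomerID;

SELECT *
FROM #LatestOrders;
```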

Once you are using larger datasets, you will want to revisit the execution plans for the code, looking for additional tuning opportunities.
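When you do, wrapping the run like this makes before/after comparisons concrete rather than wall-clock guesses (these are standard SQL Server session settings):

```sql
-- Report logical reads and CPU/elapsed time for each statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the query here and capture the actual execution plan ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```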