At a low row count this works exceedingly well, but I will be running this on a table with 500,000+ rows. At that point, the query becomes very slow. Is there anything I can do to get more performance? I would appreciate the input.
By DataBASS 0 asked Oct 10 at 03:44 AM
If you are planning to scale this code to handle 500,000+ rows, I would strongly consider the use of temp tables over the table variables. Table variables are good for smaller chunks of data.
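As a minimal sketch of the swap, the table and column names below are hypothetical placeholders, not from your query:

```sql
-- Table variable: the optimizer assumes a fixed low row estimate
-- and no statistics are maintained on it.
DECLARE @Results TABLE (ID INT, Total DECIMAL(18, 2));

-- Temp table equivalent: statistics are created, so the optimizer
-- can see the real row count as the data grows.
CREATE TABLE #Results (ID INT, Total DECIMAL(18, 2));

INSERT INTO #Results (ID, Total)
SELECT ID, Total
FROM dbo.SourceTable;   -- hypothetical source

SELECT ID, Total FROM #Results;

DROP TABLE #Results;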
As you can see in the image below, the query optimizer is estimating 1 record for the table variable but is reading 35 records and returning 33. As the dataset scales, this will become a potential bottleneck for your query.
With temp tables, you may get a better estimate from the query optimizer and can add indexes to help avoid the table scans, which are potentially costly.
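For example, an index on the temp table's join or filter column can turn a repeated table scan into a seek. This is a sketch with assumed names:

```sql
CREATE TABLE #Results (ID INT, Total DECIMAL(18, 2));

-- Hypothetical: index the column you join or filter on so the
-- optimizer can seek instead of scanning the whole temp table.
CREATE NONCLUSTERED INDEX IX_Results_ID ON #Results (ID);
```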
I would also test using a working table instead of a CTE to accomplish your end result. Mileage may vary with CTEs, so you will want to test both approaches against the larger datasets.
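To illustrate the working-table alternative, here is a hedged sketch; `dbo.Orders`, `dbo.Customers`, and the column names are assumptions for the example:

```sql
-- A CTE is re-evaluated wherever it is referenced. A working table
-- materializes the intermediate result once so it can be reused
-- (and indexed if needed).
SELECT CustomerID, SUM(Amount) AS Total
INTO #Totals
FROM dbo.Orders
GROUP BY CustomerID;

SELECT c.CustomerName, t.Total
FROM dbo.Customers AS c
JOIN #Totals AS t ON t.CustomerID = c.CustomerID;

DROP TABLE #Totals;
```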
Once you are using larger datasets, you will want to revisit the execution plans for the code, looking for additional tuning opportunities.
By SDyckes2 81 answered Oct 10 at 03:29 PM