Any ideas as to how I might increase efficiency yet retain the same results?

At a low row count this works exceedingly well, but I will be running it on a table with 500,000+ rows, and at that point the query becomes very slow. If there is anything I can do to get more performance, I would appreciate the input.

Plan.pesession (52.4 kB)
Asked by DataBASS, Oct 10 at 03:44 AM

1 answer

If you are planning to scale this code to handle 500,000+ rows, I would strongly consider using temp tables rather than table variables. Table variables are good for smaller chunks of data.
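To illustrate the difference (with hypothetical table and column names, since I don't have your actual schema), the swap is mostly a matter of declaration and cleanup:

-- Table variable: the optimizer typically assumes it holds very few rows
DECLARE @Work TABLE (Id INT NOT NULL, Val VARCHAR(50) NULL);

-- Temp table: gets real statistics as it is populated
CREATE TABLE #Work (Id INT NOT NULL, Val VARCHAR(50) NULL);

INSERT INTO #Work (Id, Val)
SELECT Id, Val
FROM dbo.SourceTable;

-- ... use #Work in the rest of the batch ...
DROP TABLE #Work;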

As you can see in the image below, the query optimizer is estimating 1 record for the table variable but is reading 35 records and returning 33. As the dataset scales, this will become a potential bottleneck for your query.

[Execution plan screenshot: the table variable operator shows an estimated 1 row versus 35 rows read and 33 returned]

With temp tables, you may get a better estimate from the query optimizer and can add indexes to help avoid the table scans, which are potentially costly.
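For example (again with hypothetical names), an index can be declared inline when the temp table is created, or added after the load:

-- Inline index syntax (SQL Server 2014 and later)
CREATE TABLE #Work
(
    Id  INT NOT NULL,
    Val VARCHAR(50) NULL,
    INDEX IX_Work_Id CLUSTERED (Id)
);

-- Or create the index after the table has been populated
CREATE NONCLUSTERED INDEX IX_Work_Val ON #Work (Val);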

I would also test using a working table instead of a CTE to accomplish your end result. Mileage may vary with CTEs, so you will want to test both with the larger datasets; a sketch of the working-table approach follows.
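As a rough sketch of that approach (the CTE logic here is invented, since it depends on your actual query), the CTE's result set is materialized once into a temp table, optionally indexed, and then joined:

-- CTE version
;WITH LatestRow AS
(
    SELECT Id, MAX(UpdatedAt) AS LastUpdate
    FROM dbo.SourceTable
    GROUP BY Id
)
SELECT s.*
FROM dbo.SourceTable AS s
JOIN LatestRow AS lr
    ON lr.Id = s.Id
   AND lr.LastUpdate = s.UpdatedAt;

-- Working-table version: materialize once, index, then join
SELECT Id, MAX(UpdatedAt) AS LastUpdate
INTO #LatestRow
FROM dbo.SourceTable
GROUP BY Id;

CREATE CLUSTERED INDEX IX_LatestRow_Id ON #LatestRow (Id);

SELECT s.*
FROM dbo.SourceTable AS s
JOIN #LatestRow AS lr
    ON lr.Id = s.Id
   AND lr.LastUpdate = s.UpdatedAt;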

Once you are using larger datasets, you will want to revisit the execution plans for the code, looking for additional tuning opportunities.
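Something along these lines can help when you do (the procedure name is just a placeholder for whatever code is being tuned):

SET STATISTICS IO ON;    -- logical reads per table, per statement
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement

-- In SSMS, also turn on "Include Actual Execution Plan" (Ctrl+M) before running,
-- then compare Estimated Number of Rows to Actual Number of Rows on each operator.
EXEC dbo.YourProcedure;  -- hypothetical name

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;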

Answered by SDyckes2, Oct 10 at 03:29 PM