SQLCruise Caribbean 2015 Hairy Execution Plan Winner

Kevin 2015-02-20 16:26:12

There are two INSERT statements in this batch that process hundreds of trillions of rows.

Unfortunately, they are only estimated plans. But I'd love to hear your thoughts about how to tune the two most demanding statements in the batch. I noticed, for example, that all of the Merge Joins are many-to-many. What else can we tune?

-Kev

Kevin 2015-02-20 17:11:41
Since we're talking trillions of rows, I wonder if a columnstore index would make sense on some of these columns?
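
For illustration only, a nonclustered columnstore index on the staging table might look like the sketch below. Only the table name comes from the batch; the column list is a guess, and on SQL Server 2012/2014 a nonclustered columnstore index makes the table read-only, so it would have to be dropped and rebuilt around the INSERTs.

```sql
-- Sketch only: t_WSTDiscosStaging is from the batch, but the columns
-- here are placeholders. On SQL Server 2012/2014 this index makes the
-- table read-only, so it would need rebuilding around the INSERTs.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_WSTDiscosStaging
    ON dbo.t_WSTDiscosStaging (ORD_RSN_OCR, SomeAmountCol);
```
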
SQLMickey 2015-02-20 17:41:18
Hi,

I didn't get to spend a ton of time looking at the plan, but this is what I saw first.

For the two INSERT statements I saw the same problem: they both use the SUBSTRING function in the WHERE clause. Wrapping the column in a function makes the predicate non-sargable, so an index scan has to occur instead of an index seek. I would investigate whether you can use persisted computed columns on that table to break the ORD_RSN_OCR field into the parts you need. Then you can add those parts to an index.
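
A minimal sketch of that idea, assuming a placeholder table name and made-up SUBSTRING offsets (only ORD_RSN_OCR comes from the actual plan):

```sql
-- Sketch only: dbo.SourceTable and the offsets are placeholders.
-- SUBSTRING is deterministic, so the computed columns can be
-- PERSISTED and then indexed.
ALTER TABLE dbo.SourceTable
    ADD ORD_RSN_Part1 AS SUBSTRING(ORD_RSN_OCR, 1, 4) PERSISTED,
        ORD_RSN_Part2 AS SUBSTRING(ORD_RSN_OCR, 5, 6) PERSISTED;

CREATE INDEX IX_SourceTable_ORD_RSN_Parts
    ON dbo.SourceTable (ORD_RSN_Part1, ORD_RSN_Part2);
```

The WHERE clause can then filter on the computed columns directly, which lets the optimizer seek on the new index instead of scanning.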

I would also change the @tblProductValues.LookupValues field to nvarchar(125). An implicit conversion is occurring when that table variable is used in the first INSERT statement.
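
A sketch of that change; everything except LookupValues and nvarchar(125) is a placeholder:

```sql
-- Sketch only: the other column is a placeholder. Matching the declared
-- type to the column it is compared against removes the implicit
-- conversion, which lets the optimizer use a seek on that predicate.
DECLARE @tblProductValues TABLE
(
    ProductID    int           NOT NULL,  -- placeholder column
    LookupValues nvarchar(125) NOT NULL   -- was a mismatched type before
);
```
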

I don't know the data, but I would investigate whether the DISTINCT is actually needed, because it is very expensive. If there is a way to join to another table, or use some other logic that makes sure you have distinct rows without that operator, you will be much better off.
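
One common pattern, sketched with placeholder names: when DISTINCT exists only to collapse duplicates introduced by a join, rewriting the join as EXISTS avoids producing the duplicates in the first place.

```sql
-- Sketch only: all names are placeholders. EXISTS never multiplies rows,
-- so no DISTINCT (and no expensive sort/hash to de-duplicate) is needed.
SELECT o.OrderID, o.ORD_RSN_OCR
FROM   dbo.Orders AS o
WHERE  EXISTS (SELECT 1
               FROM   dbo.OrderDetail AS d
               WHERE  d.OrderID = o.OrderID);
```
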

Mickey

umair 2015-02-20 18:03:04
At a quick glance, the DISTINCT on the subquery for the INSERT INTO t_WSTDiscosStaging looks redundant, since you already have an aggregate on the outer query.
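
For illustration, with placeholder names: this holds when the outer aggregate is duplicate-insensitive (MAX or MIN), so the inner DISTINCT only adds a sort. For SUM or COUNT, removing it would change the result, so check which aggregate the batch actually uses.

```sql
-- Sketch only: placeholder names. MAX over duplicates equals MAX over
-- distinct values, so the inner DISTINCT can be dropped safely here.
SELECT src.KeyCol, MAX(src.Amount) AS MaxAmount
FROM (
    SELECT KeyCol, Amount        -- DISTINCT removed; result unchanged
    FROM   dbo.SourceRows
) AS src
GROUP BY src.KeyCol;
```
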