Having trouble balancing row counts of the threads

GokhanVarol 2013-05-28 20:59:14

What can I do to evenly distribute the row counts between the threads (the imbalance shows in the picture)?
I used a trick to hide some WHERE conditions from the optimizer, since the statistics were lowballing the estimate for the WHERE condition and the sort was therefore not getting a large enough memory grant.
Thank you


SQLkiwi 2013-05-29 01:04:56
The problem is that all the rows from ChangedTransKeysTemp end up on the same thread when hash partitioned by CntyCd. Presumably, this means all rows in that table have the same value for CntyCd. There is not much you can do about the hash partitioning keys without changing the join keys, if hash or merge join is needed.
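A rough Python sketch (not SQL Server's internal hash function, and `DOP` is an assumed thread count) of why hash partitioning on a single-valued key skews everything onto one thread:

```python
# Sketch of hash partitioning rows to parallel threads by a key column.
# When every row shares the same CntyCd value, every row hashes to the
# same bucket, so one thread gets all the work.

DOP = 4  # assumed degree of parallelism (number of threads)

def hash_partition(rows, key):
    """Assign each row to a thread by hashing its partitioning key."""
    threads = [[] for _ in range(DOP)]
    for row in rows:
        threads[hash(row[key]) % DOP].append(row)
    return threads

# All rows carry the same CntyCd, mimicking ChangedTransKeysTemp here.
rows = [{"CntyCd": "06037", "id": i} for i in range(1000)]
counts = [len(t) for t in hash_partition(rows, "CntyCd")]
print(counts)  # one thread receives all 1000 rows; the rest receive none
```

Which bucket gets the rows depends on the hash seed, but the skew itself does not: identical key values always land together.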

You might find you get a better distribution by writing that particular join in reverse order and using nested loops join instead of merge. This will result in thread distribution using parallel scan rather than the hash partitioning used for parallel hash and merge join.

There is not a huge number of rows in the OwnerThruDt table though, so parallel scan distribution may not be much better. You could look to drop the cardinality estimate on that table using query tricks. This would introduce a round-robin exchange (the 'few outer rows' optimization) distributing rows evenly across threads.
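For contrast, a hedged Python sketch of the round-robin exchange mentioned above (again `DOP` is an assumed thread count, not a SQL Server API): rows are dealt to threads in turn without looking at key values, so distribution stays even no matter how skewed the data is.

```python
# Sketch of a round-robin exchange: deal rows to threads in rotation,
# ignoring key values, so counts stay balanced even under key skew.
import itertools

DOP = 4  # assumed degree of parallelism (number of threads)

def round_robin_partition(rows):
    """Distribute rows across threads in strict rotation."""
    threads = [[] for _ in range(DOP)]
    dealer = itertools.cycle(range(DOP))
    for row in rows:
        threads[next(dealer)].append(row)
    return threads

# Same skewed input as before: every row has the same CntyCd.
rows = [{"CntyCd": "06037", "id": i} for i in range(1000)]
counts = [len(t) for t in round_robin_partition(rows)]
print(counts)  # [250, 250, 250, 250]
```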

The FORCE ORDER hint is a very big hammer – are you sure it is needed?

GokhanVarol 2013-05-29 01:59:55
(Thank you++)++
Not understanding that the hash would distribute on the join key, I got rid of the initial join by dumping its output in the step above it, and enjoyed the even distribution.
This is awesome, I am getting to understand what's going on with these joins/plans little by little.
Thank you so much again, now I have quite a bit more code that I want to revisit with this info 🙂


GokhanVarol 2013-05-29 02:01:41
I'll remove force order.