
Minimum item processing time for using Parallel.ForEach

Suppose I have a list of items that are currently processed in a normal foreach loop. Assume the number of items is significantly larger than the number of cores. How much time should each item take, as a rule of thumb, before I should consider refactoring the loop into a Parallel.ForEach?
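For concreteness, a minimal sketch of the kind of refactoring being asked about, assuming CPU-bound per-item work (the item type and ProcessItem are placeholders, not from the original post):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class Demo
{
    // Placeholder for whatever CPU-bound work the real loop does per item.
    static void ProcessItem(int item)
    {
        // ... work ...
    }

    static void Main()
    {
        var items = new List<int>();
        for (int i = 0; i < 100_000; i++) items.Add(i);

        // Current sequential version.
        foreach (var item in items)
            ProcessItem(item);

        // Candidate refactoring: iterations may run concurrently, so this is only
        // a drop-in replacement if ProcessItem touches no shared mutable state
        // (or synchronizes access to it).
        Parallel.ForEach(items, item => ProcessItem(item));
    }
}
```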


This is one of the core problems of parallel programming. For an accurate answer you would still have to measure in your exact situation.
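A quick way to measure is to time both versions against your real data with a Stopwatch; in this sketch, DoWork is only a stand-in for the actual per-item work:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class ForEachTiming
{
    static void Main()
    {
        int[] items = Enumerable.Range(0, 1_000_000).ToArray();
        double[] results = new double[items.Length];

        var sw = Stopwatch.StartNew();
        foreach (var i in items)
            results[i] = DoWork(i);
        Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        // Each iteration writes to its own slot, so no synchronization is needed.
        Parallel.ForEach(items, i => results[i] = DoWork(i));
        Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");
    }

    // Stand-in for the real per-item work; make it heavier or lighter to see
    // where the parallel version starts to pay off on your machine.
    static double DoWork(int item)
    {
        double x = item;
        for (int n = 0; n < 200; n++)
            x = Math.Sqrt(x + n);
        return x;
    }
}
```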

The big advantage of the TPL, however, is that the threshold is a lot lower than it used to be, and that you're not punished (as much) when your work items are too small.

I once made a demo with two nested loops, intending to show that only the outer one should be made to run in parallel. But the demo failed to show a significant disadvantage of turning both into a Parallel.For(); a sketch of both variants follows below.
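A rough reconstruction of that kind of demo (the array size and Compute are made up for illustration):

```csharp
using System.Threading.Tasks;

class NestedLoopDemo
{
    const int N = 2000;
    static readonly double[,] data = new double[N, N];

    // The "textbook" version: parallelize only the outer loop, so each task
    // gets a whole row of work and scheduling overhead is paid per row.
    static void OuterOnly()
    {
        Parallel.For(0, N, i =>
        {
            for (int j = 0; j < N; j++)
                data[i, j] = Compute(i, j);
        });
    }

    // Both loops parallel: overhead is now paid per element, yet (as noted
    // above) the TPL keeps the penalty surprisingly small in practice.
    static void Both()
    {
        Parallel.For(0, N, i =>
        {
            Parallel.For(0, N, j => data[i, j] = Compute(i, j));
        });
    }

    // Made-up per-cell work.
    static double Compute(int i, int j) => (i * 31 + j) % 97;
}
```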

So if the code in your loop is independent, go for it.

The #items / #cores ratio is not very relevant; the TPL will partition the ranges and use the 'right' number of threads.
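If profiling does show that per-item overhead dominates (very cheap loop bodies), one option is to make the chunking explicit with a range partitioner, so each task processes a contiguous block of indices. A minimal sketch, with Math.Sqrt standing in for the real work:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class RangePartitionDemo
{
    static void Main()
    {
        double[] results = new double[5_000_000];

        // Partitioner.Create(0, length) hands each task a contiguous index range,
        // so the per-item delegate cost disappears even though each item is cheap.
        Parallel.ForEach(Partitioner.Create(0, results.Length), range =>
        {
            for (int i = range.Item1; i < range.Item2; i++)
                results[i] = Math.Sqrt(i);   // trivially cheap placeholder work
        });

        Console.WriteLine(results[12345]);   // keep the result observable
    }
}
```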


On a large data-processing project I'm working on, any loop that contained more than two or three statements benefited greatly from Parallel.ForEach. If the data your loop works on is atomic (each item can be processed independently), I see very little downside compared to the tremendous benefit the Parallel library offers.
