What is the purpose of the org.apache.hadoop.mapreduce.Mapper.run() function in Hadoop?

https://www.devze.com 2023-04-05 10:26 Source: web

What is the purpose of the org.apache.hadoop.mapreduce.Mapper.run() function in Hadoop? The setup() is called before map() and cleanup() is called after map(). The documentation for run() says:

Expert users can override this method for more complete control over the execution of the Mapper.

I am looking for the practical purpose of this function.


The default run() method simply takes each key / value pair supplied by the context and calls the map() method:

public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}

If you wanted to do more than that, you'd need to override it. For example, the MultithreadedMapper class overrides run() so that several threads can invoke the map logic concurrently over the input records.
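To make the idea concrete without pulling in Hadoop, here is a plain-Java sketch of what a multithreaded run() loop does: several threads drain a shared record stream, each applying the map logic. The class and method names are made up for illustration; uppercasing stands in for a real map() body.

```java
import java.util.*;
import java.util.concurrent.*;

public class MultithreadedRun {
    // Simulation (plain Java, no Hadoop) of a run() override in the
    // style of MultithreadedMapper: worker threads share the record
    // stream and each calls the map logic concurrently.
    static Set<String> run(List<String> records, int threads) {
        Queue<String> input = new ConcurrentLinkedQueue<>(records);
        Set<String> output = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                String rec;
                while ((rec = input.poll()) != null) {
                    output.add(rec.toUpperCase()); // map() would be called here
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return output;
    }

    public static void main(String[] args) {
        System.out.println(run(Arrays.asList("a", "b", "c", "d"), 2));
    }
}
```

In real Hadoop code the workers would share the Context instead of a queue, which is exactly why only an overridden run() can implement this: the default single-threaded loop never exposes that choice.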


I just came up with a fairly odd case for using this.

Occasionally I've found that I want a mapper that consumes all its input before producing any output. I've done it in the past by performing the record writes in my cleanup function. My map function doesn't actually output any records, it just reads the input and stores whatever will be needed in private structures.

It turns out that this approach works fine unless you're producing a LOT of output. The best I can make out is that the mapper's spill facility doesn't operate during cleanup, so the records produced there just keep accumulating in memory, and if there are too many of them you risk heap exhaustion. This is my speculation about what's going on; I could be wrong. But the problem definitely goes away with my new approach.

That new approach is to override run() instead of cleanup(). My only change to the default run() is that after the last record has been delivered to map(), I call map() once more with null key and value. That's a signal to my map() function to go ahead and produce its output. In this case, with the spill facility still operable, memory usage stays in check.
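The sentinel pattern described above can be sketched in plain Java (no Hadoop dependency; the class name and uppercasing "map logic" are made up for illustration). The run loop buffers during normal map() calls, then makes one extra call with a null key to trigger output:

```java
import java.util.*;

public class SentinelMapper {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> output = new ArrayList<>();

    // Stand-in for Mapper.map(): a null key signals "input is exhausted,
    // produce your output now" — the extra call added by the overridden run().
    void map(String key, String value) {
        if (key == null) {
            for (String v : buffer) {
                output.add(v.toUpperCase()); // emit while spill still operates
            }
            return;
        }
        buffer.add(value); // normal case: consume input, emit nothing yet
    }

    // Stand-in for the overridden run(): the default loop, plus one
    // sentinel call after the last real record.
    List<String> run(Iterator<Map.Entry<String, String>> records) {
        while (records.hasNext()) {
            Map.Entry<String, String> e = records.next();
            map(e.getKey(), e.getValue());
        }
        map(null, null); // the one-line change relative to the default run()
        return output;
    }

    public static void main(String[] args) {
        Map<String, String> in = new LinkedHashMap<>();
        in.put("k1", "a");
        in.put("k2", "b");
        System.out.println(new SentinelMapper().run(in.entrySet().iterator()));
    }
}
```

The point of the pattern is that the bulk emission happens inside a map() call rather than in cleanup(), which is where the answer's memory problem arose.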


It could also be used for debugging: you can skip part of the input key-value pairs (i.e., take a sample) to test your code.
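A minimal sketch of that sampling idea, again in plain Java rather than real Hadoop code (the class name and sampling rate are made up): the run loop keeps only one record in every `rate`, skipping the rest.

```java
import java.util.*;

public class SamplingRun {
    // Mimics a run() override that samples the input: only every
    // 'rate'-th record reaches the map logic, so a debug job touches
    // a fraction of the data.
    static List<String> run(List<String> records, int rate) {
        List<String> mapped = new ArrayList<>();
        int n = 0;
        for (String r : records) {
            if (n++ % rate == 0) {
                mapped.add(r); // map() would be called here
            }
        }
        return mapped;
    }

    public static void main(String[] args) {
        List<String> in = Arrays.asList("r0", "r1", "r2", "r3", "r4", "r5");
        System.out.println(run(in, 3)); // keeps r0 and r3
    }
}
```

In a real Mapper you'd put the same counter-and-modulo check around the map() call inside your overridden run(Context).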

