2017-03-15

I've run into an interesting situation: my reducer output is identical to my mapper input (the reducer code appears to have no effect). This is my first MapReduce job, as I'm a newbie. Thanks in advance.

Problem statement: find the maximum temperature for each year.

Consider the data set below (the year and temperature columns are separated by a tab):

2001 32 
2001 50 
2001 18 
2001 21 
2002 30 
2002 34 
2002 12 
2003 09 
2003 12 
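
For reference, the maximum temperature per year in this data set — i.e. the output the job should produce — is:

```
2001	50
2002	34
2003	12
```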

Mapper code:

import java.io.IOException; 
import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.LongWritable; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapreduce.Mapper; 

public class MapperCode extends Mapper<LongWritable, Text, Text, IntWritable> {
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String[] keyvalpair = line.split("\t");
        context.write(new Text(keyvalpair[0].trim()),
                new IntWritable(Integer.parseInt(keyvalpair[1].trim())));
    }
}
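
The mapper's split-and-parse step is sound, and it can be checked in isolation with plain Java (no Hadoop needed). A minimal sketch — `SplitDemo`, `parseYear`, and `parseTemp` are illustrative names, not part of the original code:

```java
public class SplitDemo {
    // Mirrors the mapper body: take the year field before the tab.
    static String parseYear(String line) {
        return line.split("\t")[0].trim();
    }

    // Mirrors the mapper body: split on the tab, trim, and parse the temperature.
    static int parseTemp(String line) {
        String[] keyvalpair = line.split("\t");
        return Integer.parseInt(keyvalpair[1].trim());
    }

    public static void main(String[] args) {
        String record = "2001\t32"; // one line from the sample data set
        System.out.println(parseYear(record) + " -> " + parseTemp(record)); // prints 2001 -> 32
    }
}
```

Note that `Integer.parseInt` handles a leading zero like `09` without issue, so the 2003 rows parse cleanly.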

Reducer code:

import java.io.IOException; 
import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapreduce.Reducer; 

public class ReducerCode extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reducer(Text key, Iterable<IntWritable> value, Context context) throws IOException, InterruptedException {
        int max = 0;
        for (IntWritable values : value) {
            max = Math.max(max, values.get());
        }
        context.write(key, new IntWritable(max));
    }
}

Driver code:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MaxTemp extends Configuration {
    public static void main(String[] args) throws IOException, InterruptedException, Exception {
        Job job = new Job();
        job.setJobName("MaxTemp");
        job.setJarByClass(MaxTemp.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MapperCode.class);
        job.setReducerClass(ReducerCode.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.waitForCompletion(true);
    }
}

Please let me know where I've made a mistake, and why my output is the same as the input data set.

Answer


A Reducer implementation must override the reduce() method. Your implementation has a method named reducer(), which is never called. Because reduce() is not overridden, Hadoop falls back to the default implementation, which simply writes every key/value pair through unchanged — which is why your output matches your input.

Change it to:

public class ReducerCode extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> value, Context context) throws IOException, InterruptedException {
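
Once reduce() is actually invoked, the max-finding loop itself works. One side note: seeding `max` with 0 would give wrong answers if a year contained only negative temperatures; `Integer.MIN_VALUE` is the safer seed. A minimal, Hadoop-free sketch of that fold (`MaxDemo` and `maxTemp` are illustrative names, not from the original code):

```java
import java.util.Arrays;
import java.util.List;

public class MaxDemo {
    // Same fold the reducer performs over the values grouped under one key.
    static int maxTemp(List<Integer> temps) {
        int max = Integer.MIN_VALUE; // safe even if all temperatures are negative
        for (int t : temps) {
            max = Math.max(max, t);
        }
        return max;
    }

    public static void main(String[] args) {
        // Values the reducer would receive for key "2001" in the sample data.
        System.out.println(maxTemp(Arrays.asList(32, 50, 18, 21))); // prints 50
    }
}
```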

Adding '@Override' would have caught this at compile time. –


franklinsijo and cricket_007, thanks for the help and the great explanations :) – user3928562
