
I am writing a MapReduce application that takes input in (key, value) format and should simply write the same data back out as the reducer output, but it fails with a ClassCastException.

Here is a sample of the input:

1500s 1 
1960s 1 
Aldus 1 

In the code below I use KeyValueTextInputFormat as the input format and set the separator in main(). When I run the code, I get this error message:

java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.LongWritable 
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) 
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) 
Caused by: java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to org.apache.hadoop.io.LongWritable 
at cscie63.examples.WordDesc$KVMapper.map(WordDesc.java:1) 
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145) 
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) 
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 

I have tried different things to debug this, but nothing has helped.

public class WordDesc { 

    public static class KVMapper 
     extends Mapper<Text, LongWritable, Text, LongWritable>{ 
    private final static IntWritable one = new IntWritable(1); 
    private Text word = new Text(); 
    public void map(Text key, LongWritable value , Context context 
        ) throws IOException, InterruptedException { 
     context.write(key,value); 
     } 
    } 


    public static class KVReducer 
     extends Reducer<Text,LongWritable,Text,LongWritable> { 

    private IntWritable result = new IntWritable(); 
    public void reduce(Text key, LongWritable value, 
         Context context 
         ) throws IOException, InterruptedException { 
     context.write(key, value); 
    } 
    } 

    public static void main(String[] args) throws Exception { 
    Configuration conf = new Configuration(); 
    conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t"); 
      String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs(); 
    if (otherArgs.length < 2) { 
     System.err.println("Usage: wordcount <in> [<in>...] <out>"); 
     System.exit(2); 
    } 
    Job job = new Job(conf, "word desc"); 
    job.setInputFormatClass(KeyValueTextInputFormat.class); 
    job.setJarByClass(WordDesc.class); 
    job.setMapperClass(KVMapper.class); 
    job.setCombinerClass(KVReducer.class); 
    job.setReducerClass(KVReducer.class); 
    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(LongWritable.class); 
    for (int i = 0; i < otherArgs.length - 1; ++i) { 
     FileInputFormat.addInputPath(job, new Path(otherArgs[i])); 
    } 
    FileOutputFormat.setOutputPath(job, 
     new Path(otherArgs[otherArgs.length - 1])); 
    System.exit(job.waitForCompletion(true) ? 0 : 1); 
    } 
} 

May I know the reason for the downvote? – Abhi

Answer


I think this line, job.setInputFormatClass(KeyValueTextInputFormat.class);, tells your program to treat your input as Text key-value pairs. Therefore, when you ask for your input value as a LongWritable, you get this exception.

A quick fix would be to read your input value as Text and then, if you want to use a LongWritable, parse it:

public static class KVMapper 
    extends Mapper<Text, Text, Text, LongWritable>{ 
    private final static LongWritable val = new LongWritable(); 

    public void map(Text key, Text value, Context context) 
        throws IOException, InterruptedException { 
     // Parse the Text value into a long and emit it as a LongWritable 
     val.set(Long.parseLong(value.toString())); 
     context.write(key, val); 
    } 
} 

What this does: the value is a Text, so value.toString() gives its String representation, Long.parseLong() parses that string into a long, and val.set() wraps the result in a LongWritable.

By the way, I don't think you need a reducer at all... you could make the job a bit faster by setting the number of reduce tasks to 0.
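A minimal sketch of what that map-only setup could look like, reusing the class and argument names from the question's main() (Job.getInstance() and setNumReduceTasks(0) are standard Hadoop 2.x API; with zero reduce tasks the mapper output is written directly to the output files):

    Configuration conf = new Configuration(); 
    conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t"); 
    Job job = Job.getInstance(conf, "word desc"); 
    job.setJarByClass(WordDesc.class); 
    job.setInputFormatClass(KeyValueTextInputFormat.class); // keys and values are both Text 
    job.setMapperClass(KVMapper.class); 
    job.setNumReduceTasks(0);                               // map-only: no combiner or reducer needed 
    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(LongWritable.class); 
    FileInputFormat.addInputPath(job, new Path(otherArgs[0])); 
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1])); 
    System.exit(job.waitForCompletion(true) ? 0 : 1); 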


The reason is that I will extend the functionality in the next step of the program; for now I just want to get the key-value pair data through. Thanks! – Abhi


@Abhi please see my updated answer. It should be faster and easier to understand – vefthym


val.set(Long.parseLong(value.get())); — does this convert the Text into a LongWritable? Eclipse is throwing an error for this line and I can't figure out another way to do this – Abhi
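For reference, a minimal sketch of the conversion that comment is asking about, assuming value is an org.apache.hadoop.io.Text as in the answer's mapper (Text has no get() method, which is why Eclipse complains; the String has to come from toString()):

    // Text -> String -> long -> LongWritable 
    LongWritable val = new LongWritable(); 
    val.set(Long.parseLong(value.toString())); // value.get() does not exist on Text; use toString() 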