2015-05-29

I am trying to run the MapReduce word-count example in Java on MapReduce 2.x. I created the jar, but while executing it shows an error that the WordMapper class was not found in my package, even though I have declared it in my package. Please help me solve this: unable to run the wordcount example in MapReduce 2.x.

Here is my word-count driver code:

package com.mapreduce2.x;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class WordCount {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();

        org.apache.hadoop.mapreduce.Job job = new org.apache.hadoop.mapreduce.Job(conf, "Word_Count");

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(job, new Path(args[0]));
        org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}

Here is my WordMapper class:

package com.mapreduce2.x;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);

        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}

Here is the WordReducer code:

package com.mapreduce2.x;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;

        for (IntWritable value : values) {
            sum += value.get();
        }

        context.write(key, new IntWritable(sum));
    }
}
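For what it's worth, the tokenize-and-emit step of the mapper and the summing step of the reducer can be exercised locally without Hadoop. The following is a rough sketch (the class name `LocalWordCount` is made up here); a plain `HashMap` stands in for the shuffle/sort phase:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class LocalWordCount {
    // Same tokenization as WordMapper, same summation as WordReducer;
    // the HashMap plays the role of the shuffle that groups values by key.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        StringTokenizer tokenizer = new StringTokenizer(text);
        while (tokenizer.hasMoreTokens()) {
            counts.merge(tokenizer.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("the quick brown fox jumps over the lazy dog the"));
    }
}
```

Running this locally is a quick way to confirm that the map/reduce logic itself is sound, independent of the jar-packaging problem discussed below.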

This is what it shows while executing:

15/05/29 10:12:26 INFO mapreduce.Job: map 0% reduce 0% 
15/05/29 10:12:33 INFO mapreduce.Job: Task Id : attempt_1432876892622_0005_m_000000_0, Status : FAILED 
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.mapreduce2.x.WordMapper not found 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2076) 
    at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186) 
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:742) 
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) 
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) 
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) 
Caused by: java.lang.ClassNotFoundException: Class com.mapreduce2.x.WordMapper not found 
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1982) 
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2074) 
    ... 8 more 
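The root cause is visible in the trace: `Configuration.getClassByName` resolves the mapper by its fully qualified name with the task JVM's classloader, so if the job jar was never shipped to the task node, the reflective lookup fails. A minimal sketch of that lookup (the class `ClassLookupDemo` is invented here; the class name it probes is taken from the trace):

```java
public class ClassLookupDemo {
    // Hadoop resolves mapper/reducer classes by name via reflection;
    // if the jar containing the class is not on the classpath, this throws.
    static String lookup(String className) {
        try {
            Class.forName(className);
            return "found";
        } catch (ClassNotFoundException e) {
            return "not found: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup("java.lang.String"));            // prints "found"
        System.out.println(lookup("com.mapreduce2.x.WordMapper")); // prints "not found: com.mapreduce2.x.WordMapper"
    }
}
```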

Answers

For this error, include the class name while running the jar file, or alternatively specify the main class name while creating the jar file.

If you built the jar without a main class, specify the class name when running the jar.

Use the command: hadoop jar word.jar com.mapreduce2.x.WordCount /input /output

Here word.jar is the jar file name.

You can also include the main class name while creating the jar file. Steps: File -> Export -> JAR -> choose the location -> click Next -> it asks you to select the main class -> select the class and click OK.
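Selecting a main class in the export wizard simply writes a `Main-Class` entry into the jar's manifest; a hand-written `META-INF/MANIFEST.MF` carrying the driver class from this question would look like:

```
Main-Class: com.mapreduce2.x.WordCount
```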

After that, you can run it with the command: hadoop jar word.jar /input /output

Hope this solves your problem.

WordCount is my main class and I specified it while executing: hadoop jar wordcount.jar com.mapreduce2.x.WordCount /input /output, but it is not working.

Try adding the setJarByClass line below when constructing the job:

Job job = new Job(conf, "wordcount");
job.setJarByClass(WordCount.class);

It worked for me.
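Calling setJarByClass works because Hadoop uses the given class to locate the jar that contains it and ships that jar to the cluster, so the task JVMs can load WordMapper. The location step can be sketched in plain Java (the class `JarLocator` is a made-up name for illustration):

```java
public class JarLocator {
    // Roughly how a jar is located from a class: ask the classloader where
    // the class's .class resource lives; that URL points at the jar (or
    // directory) that must travel with the job.
    static String locate(Class<?> c) {
        String resource = c.getName().replace('.', '/') + ".class";
        ClassLoader cl = c.getClassLoader();
        java.net.URL url = (cl == null)
                ? ClassLoader.getSystemResource(resource) // bootstrap-loaded classes
                : cl.getResource(resource);
        return (url == null) ? "(not found)" : url.toString();
    }

    public static void main(String[] args) {
        System.out.println(locate(JarLocator.class));
    }
}
```

Without setJarByClass (or an explicit setJar), nothing tells Hadoop which jar holds your classes, and the task JVMs fail exactly as in the stack trace above.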

You can try this (on Linux/Unix):

1. Remove the package name from the Java code.

2. In the directory containing the Java program, create a new directory named classes. For example: Hadoop-WordCount -> classes, WordCount.java

3. Compile: javac -classpath $HADOOP_HOME/hadoop-common-2.7.1.jar:$HADOOP_HOME/hadoop-mapreduce-client-core-2.7.1.jar:$HADOOP_HOME/hadoop-annotations-2.7.1.jar:$HADOOP_HOME/commons-cli-1.2.jar -d ./classes WordCount.java

4. Create a jar: jar -cvf wordcount.jar -C ./classes/ .

5. Run: bin/hadoop jar $HADOOP_HOME/Hadoop-WordCount/wordcount.jar WordCount input output