
I have the Spark word count program below, which fails in Eclipse with Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind:

    package com.sample.spark;

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.*;
    import org.apache.spark.api.java.function.FlatMapFunction;
    import org.apache.spark.api.java.function.Function;
    import org.apache.spark.api.java.function.Function2;
    import org.apache.spark.api.java.function.PairFlatMapFunction;
    import org.apache.spark.api.java.function.PairFunction;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import scala.Tuple2;

    public class SparkWordCount {

        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("wordcountspark")
                    .setMaster("local")
                    .setSparkHome("/Users/hadoop/spark-1.4.0-bin-hadoop1");
            JavaSparkContext sc = new JavaSparkContext(conf);
            //SparkConf conf = new SparkConf();
            //JavaSparkContext sc = new JavaSparkContext("hdfs", "Simple App","/Users/hadoop/spark-1.4.0-bin-hadoop1", new String[]{"target/simple-project-1.0.jar"});
            JavaRDD<String> textFile = sc.textFile("hdfs://localhost:54310/data/wordcount");
            JavaRDD<String> words = textFile.flatMap(new FlatMapFunction<String, String>() {
                public Iterable<String> call(String s) { return Arrays.asList(s.split(" ")); }
            });
            JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
                public Tuple2<String, Integer> call(String s) { return new Tuple2<String, Integer>(s, 1); }
            });
            JavaPairRDD<String, Integer> counts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
                public Integer call(Integer a, Integer b) { return a + b; }
            });
            counts.saveAsTextFile("hdfs://localhost:54310/data/output/spark/outfile");
        }
    }

I get the Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.CanSetDropBehind exception when I run the code from Eclipse, but if I export it as a runnable jar and run it from the terminal as below, it works:

 bin/spark-submit --class com.sample.spark.SparkWordCount --master local /Users/hadoop/spark-1.4.0-bin-hadoop1/finalJars/SparkJar-v2.jar 

The Maven POM looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
     xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> 
     <modelVersion>4.0.0</modelVersion> 
     <groupId>com.sample.spark</groupId> 
     <artifactId>SparkRags</artifactId> 
     <packaging>jar</packaging> 
     <version>1.0-SNAPSHOT</version> 
     <name>SparkRags</name> 
     <url>http://maven.apache.org</url> 
     <dependencies> 
      <dependency> 
       <groupId>junit</groupId> 
       <artifactId>junit</artifactId> 
       <version>3.8.1</version> 
       <scope>test</scope> 
      </dependency> 
      <dependency> <!-- Spark dependency --> 
       <groupId>org.apache.spark</groupId> 
       <artifactId>spark-core_2.10</artifactId> 
       <version>1.4.0</version> 
       <scope>compile</scope> 
      </dependency> 
      <dependency> 
       <groupId>org.apache.hadoop</groupId> 
       <artifactId>hadoop-common</artifactId> 
       <version>0.23.11</version> 
       <scope>compile</scope> 
      </dependency> 
      <dependency> 
       <groupId>org.apache.hadoop</groupId> 
       <artifactId>hadoop-core</artifactId> 
       <version>1.2.1</version> 
       <scope>compile</scope> 
      </dependency> 
    </dependencies> 
    </project> 

Answers


When you run from Eclipse, the jar files referenced by the project are the only thing the program runs against. So the hadoop-core jar (which is where CanSetDropBehind lives) was, for some reason, not added correctly to your Eclipse build from the local repository. You need to work out whether it is a proxy issue, a problem with the POM, or something else.

When you run the jar from the terminal, it likely works because the jar is present on the classpath being referenced. Also, when running from the terminal, you have the option of packaging everything as a fat jar (bundling hadoop-core inside your own jar). I assume you are not using that option when creating your jar; with a fat jar the references are taken from inside your jar and do not depend on the classpath at all.

Verify each step; that will help you find the cause. Happy coding.
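One way to verify this from inside Eclipse (my own suggestion, not from the original answer) is to ask the JVM which jar, if any, the missing class is loaded from. The class name below is the one from the stack trace; the helper class itself is hypothetical and uses only standard Java reflection:

    package com.sample.spark;

    // Minimal sketch: report whether org.apache.hadoop.fs.CanSetDropBehind is on the
    // runtime classpath, and which jar it was loaded from. Run it with the same
    // Eclipse run configuration as SparkWordCount.
    public class ClasspathCheck {
        public static void main(String[] args) {
            String className = "org.apache.hadoop.fs.CanSetDropBehind";
            try {
                Class<?> cls = Class.forName(className);
                java.security.CodeSource src = cls.getProtectionDomain().getCodeSource();
                System.out.println(className + " loaded from: "
                        + (src != null ? src.getLocation() : "<unknown code source>"));
            } catch (ClassNotFoundException e) {
                // Same failure the Spark job hits: the Hadoop jar providing this
                // interface is missing from the project's runtime classpath.
                System.out.println(className + " is NOT on the classpath");
            }
        }
    }

If this prints "NOT on the classpath" inside Eclipse but the class resolves from the exported jar, that confirms the difference is in how the two classpaths are assembled rather than in the code.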


Thanks Ramzy for your inputs. I used the "Extract required libraries into generated jar" option, which probably creates a fat jar. – ragesh


Cool. If you are satisfied, you can accept the answer, or wait for more answers. Happy coding – Ramzy


What do you mean by a "proxy issue"? – ragesh


Found that this was caused by the hadoop-common jar: version 0.23.11 does not contain the class. I changed the version to 2.7.0 and also added the following dependency:

<dependency> 
     <groupId>org.apache.hadoop</groupId> 
     <artifactId>hadoop-mapreduce-client-core</artifactId> 
     <version>2.7.0</version> 
    </dependency> 

That got rid of the error, but I still see the error below:

Exception in thread "main" java.io.EOFException: End of File Exception between local host is: "mbr-xxxx.local/127.0.0.1"; destination host is: "localhost":54310; : java.io.EOFException; For more details see: http://wiki.apache.org/hadoop/EOFException
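One way to narrow this down (a hedged suggestion of mine, not from the original thread) is to take Spark out of the picture and talk to the NameNode directly with the same Hadoop client jars. If this plain directory listing also ends in an EOFException, the problem is between the Hadoop client version on the classpath and whatever is listening on localhost:54310 (see the linked wiki page for the usual causes), not in the Spark job itself. The /data path comes from the word count job; the helper class is hypothetical:

    package com.sample.spark;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: exercise the HDFS RPC with a simple directory listing,
    // using the same NameNode address as the Spark job.
    public class HdfsConnectivityCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Same NameNode address the Spark job points at.
            conf.set("fs.defaultFS", "hdfs://localhost:54310");
            FileSystem fs = FileSystem.get(conf);
            for (FileStatus status : fs.listStatus(new Path("/data"))) {
                System.out.println(status.getPath());
            }
            fs.close();
        }
    }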
