2016-12-16 1139 views

I am using: Spark 2.0.0, Scala 2.11.8, Java 1.7, Maven. Spark Streaming error: TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message

I am trying to run streaming KMeans on a cluster of 5 machines. I don't know what could be causing this error; can someone help me?

Console log below:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell). 
log4j:WARN Please initialize the log4j system properly. 
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. 
starting streaming context 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
16/12/16 15:49:31 INFO SparkContext: Running Spark version 2.0.0 
16/12/16 15:49:31 INFO SecurityManager: Changing view acls to: master 
16/12/16 15:49:31 INFO SecurityManager: Changing modify acls to: master 
16/12/16 15:49:31 INFO SecurityManager: Changing view acls groups to: 
16/12/16 15:49:31 INFO SecurityManager: Changing modify acls groups to: 
16/12/16 15:49:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(master); groups with view permissions: Set(); users with modify permissions: Set(master); groups with modify permissions: Set() 
16/12/16 15:49:31 INFO Utils: Successfully started service 'sparkDriver' on port 42691. 
16/12/16 15:49:31 INFO SparkEnv: Registering MapOutputTracker 
16/12/16 15:49:31 INFO SparkEnv: Registering BlockManagerMaster 
16/12/16 15:49:31 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-bfe445b5-2ee0-4209-9778-6750809a7b53 
16/12/16 15:49:32 INFO MemoryStore: MemoryStore started with capacity 1890.6 MB 
16/12/16 15:49:32 INFO SparkEnv: Registering OutputCommitCoordinator 
16/12/16 15:49:32 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
16/12/16 15:49:32 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://200.17.78.23:4040 
16/12/16 15:49:32 INFO SparkContext: Added JAR /home/master/scala%20projects/testeKmeans/testeKmeans.jar at spark://200.17.78.23:42691/jars/testeKmeans.jar with timestamp 1481910572510 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://master:7077... 
16/12/16 15:49:32 INFO TransportClientFactory: Successfully created connection to master/200.17.78.23:7077 after 26 ms (0 ms spent in bootstraps) 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20161216154932-0009 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20161216154932-0009/0 on worker-20161215083739-200.17.78.23-54671 (200.17.78.23:54671) with 6 cores 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Granted executor ID app-20161216154932-0009/0 on hostPort 200.17.78.23:54671 with 6 cores, 1024.0 MB RAM 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20161216154932-0009/1 on worker-20161215083737-200.17.78.26-33946 (200.17.78.26:33946) with 4 cores 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Granted executor ID app-20161216154932-0009/1 on hostPort 200.17.78.26:33946 with 4 cores, 1024.0 MB RAM 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20161216154932-0009/2 on worker-20161215084302-200.17.78.27-57926 (200.17.78.27:57926) with 4 cores 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Granted executor ID app-20161216154932-0009/2 on hostPort 200.17.78.27:57926 with 4 cores, 1024.0 MB RAM 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20161216154932-0009/3 on worker-20161215083727-200.17.78.25-56713 (200.17.78.25:56713) with 4 cores 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Granted executor ID app-20161216154932-0009/3 on hostPort 200.17.78.25:56713 with 4 cores, 1024.0 MB RAM 
16/12/16 15:49:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 48945. 
16/12/16 15:49:32 INFO NettyBlockTransferService: Server created on 200.17.78.23:48945 
16/12/16 15:49:32 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20161216154932-0009/4 on worker-20161215084213-200.17.78.24-54792 (200.17.78.24:54792) with 4 cores 
16/12/16 15:49:32 INFO StandaloneSchedulerBackend: Granted executor ID app-20161216154932-0009/4 on hostPort 200.17.78.24:54792 with 4 cores, 1024.0 MB RAM 
16/12/16 15:49:32 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 200.17.78.23, 48945) 
16/12/16 15:49:32 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. 
java.io.InvalidClassException: org.apache.spark.deploy.DeployMessages$ExecutorUpdated; local class incompatible: stream classdesc serialVersionUID = 1654279024112373855, local class serialVersionUID = 3598161183190952796 
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:612) 
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1630) 
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521) 
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1779) 

Hi, I have a similar problem. Did you find any way to solve it?

Answer


I want to answer this question to help anyone who runs into the same problem in the future.

Short version:

Check your Scala version.
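
For example, a quick way to check which Scala version your application jar was actually built against is to print it from the Scala standard library at run time. A minimal sketch (the class name is made up for illustration):

public class ScalaVersionCheck {
    public static void main(String[] args) {
        // Prints something like "version 2.11.8": the Scala library
        // bundled on the application's classpath
        System.out.println(scala.util.Properties.versionString());
    }
}

On the cluster side, running spark-submit --version prints a banner that includes the Scala version the Spark distribution was built with; the two must agree on the binary version (e.g. 2.10 vs. 2.11).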


Long version:

I was just running a simple demo like this:

import org.apache.spark.api.java.JavaSparkContext;

public class SparkTest {
    public static void main(String[] args) {
        // Connect to the standalone master and run a trivial job
        JavaSparkContext sc = new JavaSparkContext("spark://masterIP:7077", "appName");
        System.out.println(sc.textFile("file.txt").count());
        sc.stop();
    }
}

However, when I ran it I got an error like this:

17/04/20 20:11:39 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up. 
17/04/20 20:11:39 WARN StandaloneSchedulerBackend: Application ID is not initialized yet. 
17/04/20 20:11:39 ERROR SparkContext: Error initializing SparkContext. 
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem 
    at scala.Predef$.require(Predef.scala:219) 
    at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91) 
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:524) 
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58) 
    at cn.keepfight.spark.SparkTest.main(SparkTest.java:15) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) 

And the master node's log contained entries like these:

17/04/20 20:21:47 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.io.InvalidClassException: scala.collection.immutable.List$SerializationProxy; local class incompatible: stream classdesc serialVersionUID = -7905219378619747021, local class serialVersionUID = 1

17/04/20 20:22:07 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message. java.io.InvalidClassException: scala.collection.immutable.List$SerializationProxy; local class incompatible: stream classdesc serialVersionUID = -7905219378619747021, local class serialVersionUID = 1

17/04/20 20:22:47 INFO Master: 10.10.18.96:33722 got disassociated, removing it.
17/04/20 20:22:47 INFO Master: 10.10.18.96:45147 got disassociated, removing it.

At first I only searched for a solution to the "Application ID is not initialized yet" error, but when I checked the master node's log I realized what the real mistake was: the framework I was using was built for Scala 2.10, while my cluster was running Scala 2.11.
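
The fix, then, is to rebuild the application against the same Scala binary version the cluster runs. As a sketch of what that looks like in a Maven build like the question's (the property values are assumptions based on the setup described above; the key point is that the _2.11 suffix in the artifact names must match the cluster's Scala version):

<properties>
    <!-- Assumed values: set these to match the cluster's Spark/Scala build -->
    <scala.binary.version>2.11</scala.binary.version>
    <spark.version>2.0.0</spark.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- spark-streaming and spark-mllib are what streaming KMeans needs -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_${scala.binary.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>

An InvalidClassException reporting two different serialVersionUID values means the two JVMs are deserializing the same class from two different library builds, which is exactly what a Scala 2.10 driver talking to a Scala 2.11 cluster produces.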