2017-08-29

Problem using Spark 1.6.2 for Hadoop 2.6.0 with a Hadoop 2.7.1 cluster

I have access to a Hadoop 2.7.1 cluster installed with HDP 2.4. That cluster comes with Spark installed, specifically:

$ cat /usr/hdp/2.4.3.0-227/spark/RELEASE 
Spark 1.6.2.2.4.3.0-227 built for Hadoop 2.7.1.2.4.3.0-227 

I am trying to set up a "client" machine able to remotely connect to the cluster and deploy Spark jobs there. Thus, I need to install a Spark distribution matching the versions above.

First, I went to the official Spark download page, but 1.6.2 is only available pre-built for Hadoop 2.6.

Then, I decided to download the Spark sources and build them by following this guide. Interestingly, the required build profile for Hadoop "2.6.x and later 2.x" is hadoop-2.6, i.e. if I built Spark myself I would obtain the same distribution already offered on the official Spark download page.
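For reference, building such a distribution from the Spark 1.6 sources would look roughly like this (a sketch based on the Spark 1.6 build documentation; the exact name and profile flags depend on the features you need):

```shell
# Run from the root of the Spark 1.6.2 source tree.
# -Phadoop-2.6 is the profile covering Hadoop 2.6.x and later 2.x;
# -Dhadoop.version pins the exact Hadoop version to compile against.
./make-distribution.sh --name hadoop2.7 --tgz \
  -Phadoop-2.6 -Dhadoop.version=2.7.1 \
  -Pyarn -Phive -Phive-thriftserver
```

This produces a spark-1.6.2-bin-hadoop2.7.tgz-style tarball, which is why building against the same hadoop-2.6 profile yields essentially the distribution already published on the download page.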

So I went ahead with the official pre-built Spark 1.6.2 for Hadoop 2.6.0.

And it does not seem to work properly. I submitted a Python script (a very simple one that just creates a Spark context) and something went wrong (only the relevant parts of the log are shown):

$ ./bin/spark-submit --master yarn --deploy-mode cluster basic.py 
... 
17/08/28 13:08:29 INFO Client: Requesting a new application from cluster with 8 NodeManagers 
17/08/28 13:08:29 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (24576 MB per container) 
17/08/28 13:08:29 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead 
17/08/28 13:08:29 INFO Client: Setting up container launch context for our AM 
17/08/28 13:08:29 INFO Client: Setting up the launch environment for our AM container 
17/08/28 13:08:29 INFO Client: Preparing resources for our AM container 
17/08/28 13:08:36 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/spark-assembly-1.6.2-hadoop2.6.0.jar 
17/08/28 13:14:40 INFO Client: Uploading resource file:basic.py -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/basic.py 
17/08/28 13:14:40 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/python/lib/pyspark.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/pyspark.zip 
17/08/28 13:14:41 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/python/lib/py4j-0.9-src.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/py4j-0.9-src.zip 
17/08/28 13:14:42 INFO Client: Uploading resource file:/private/var/folders/cc/p9gx2wnn3dz8g6yf_r4308fm0000gn/T/spark-0d86f1f4-d310-423a-9d2f-90e2ff46f84e/__spark_conf__3704082754178078870.zip -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/__spark_conf__3704082754178078870.zip 
17/08/28 13:14:42 INFO SecurityManager: Changing view acls to: frb 
17/08/28 13:14:42 INFO SecurityManager: Changing modify acls to: frb 
17/08/28 13:14:42 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(frb); users with modify permissions: Set(frb) 
17/08/28 13:14:42 INFO Client: Submitting application 66 to ResourceManager 
17/08/28 13:14:42 INFO YarnClientImpl: Submitted application application_1495097788339_0066 
17/08/28 13:14:48 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED) 
17/08/28 13:14:48 INFO Client: 
    client token: N/A 
    diagnostics: N/A 
    ApplicationMaster host: N/A 
    ApplicationMaster RPC port: -1 
    queue: default 
    start time: 1503918882943 
    final status: UNDEFINED 
    tracking URL: <host>:8088/proxy/application_1495097788339_0066/ 
    user: frb 
17/08/28 13:14:49 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED) 
... 
17/08/28 13:14:52 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING) 
17/08/28 13:14:52 INFO Client: 
    client token: N/A 
    diagnostics: N/A 
    ApplicationMaster host: 10.95.120.6 
    ApplicationMaster RPC port: 0 
    queue: default 
    start time: 1503918882943 
    final status: UNDEFINED 
    tracking URL: <host>:8088/proxy/application_1495097788339_0066/ 
    user: frb 
17/08/28 13:14:53 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING) 
... 
17/08/28 13:14:59 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED) 
17/08/28 13:14:59 INFO Client: 
    client token: N/A 
    diagnostics: N/A 
    ApplicationMaster host: N/A 
    ApplicationMaster RPC port: -1 
    queue: default 
    start time: 1503918882943 
    final status: UNDEFINED 
    tracking URL: <host>:8088/proxy/application_1495097788339_0066/ 
    user: frb 
17/08/28 13:15:00 INFO Client: Application report for application_1495097788339_0066 (state: ACCEPTED) 
17/08/28 13:15:01 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING) 
17/08/28 13:15:01 INFO Client: 
    client token: N/A 
    diagnostics: N/A 
    ApplicationMaster host: 10.95.58.21 
    ApplicationMaster RPC port: 0 
    queue: default 
    start time: 1503918882943 
    final status: UNDEFINED 
    tracking URL: <host>:8088/proxy/application_1495097788339_0066/ 
    user: frb 
17/08/28 13:15:02 INFO Client: Application report for application_1495097788339_0066 (state: RUNNING) 
... 
17/08/28 13:15:09 INFO Client: Application report for application_1495097788339_0066 (state: FINISHED) 
17/08/28 13:15:09 INFO Client: 
    client token: N/A 
    diagnostics: Max number of executor failures (4) reached 
    ApplicationMaster host: 10.95.58.21 
    ApplicationMaster RPC port: 0 
    queue: default 
    start time: 1503918882943 
    final status: FAILED 
    tracking URL: <host>:8088/proxy/application_1495097788339_0066/ 
    user: frb 
Exception in thread "main" org.apache.spark.SparkException: Application application_1495097788339_0066 finished with failed status 
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034) 
    at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081) 
    at org.apache.spark.deploy.yarn.Client.main(Client.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
17/08/28 13:15:09 INFO ShutdownHookManager: Shutdown hook called 
17/08/28 13:15:09 INFO ShutdownHookManager: Deleting directory /private/var/folders/cc/p9gx2wnn3dz8g6yf_r4308fm0000gn/T/spark-0d86f1f4-d310-423a-9d2f-90e2ff46f84e 

If I check the logs for this job, I see:

ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server 
Traceback (most recent call last): 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start 
    self.socket.connect((self.address, self.port)) 
    File "/usr/lib64/python2.7/socket.py", line 224, in meth 
    return getattr(self._sock,name)(*args) 
error: [Errno 111] Connection refused 
Traceback (most recent call last): 
    File "basic.py", line 36, in <module> 
    sc = SparkContext(conf=conf) 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 115, in __init__ 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 172, in _do_init 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/pyspark.zip/pyspark/context.py", line 235, in _initialize_context 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 1062, in __call__ 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 631, in send_command 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 624, in send_command 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 579, in _get_connection 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 585, in _create_connection 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 697, in start 
py4j.protocol.Py4JNetworkError: An error occurred while trying to connect to the Java server 
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server 
Traceback (most recent call last): 
    File "/disk0/hadoop/yarn/local/usercache/frb/appcache/application_1495097788339_0066/container_e03_1495097788339_0066_02_000001/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start 
    self.socket.connect((self.address, self.port)) 
    File "/usr/lib64/python2.7/socket.py", line 224, in meth 
    return getattr(self._sock,name)(*args) 
error: [Errno 111] Connection refused 

I.e. the Spark context is not created: the connection between the JVM running the Java gateway and the Python driver running the Spark context fails.

This must surely be related to the Spark distribution I installed on my client machine, since:

  • My client machine's Spark distribution is uploaded to the cluster, so it is the one being used; remember this line from the submission log:

    17/08/28 13:08:36 INFO Client: Uploading resource file:/Users/frb/Applications/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar -> hdfs://<host>:8020/user/frb/.sparkStaging/application_1495097788339_0066/spark-assembly-1.6.2-hadoop2.6.0.jar

  • The same command works when submitted from within the cluster, i.e. when using the "Spark 1.6.2.2.4.3.0-227" build "for Hadoop 2.7.1.2.4.3.0-227" installed by HDP.

Any ideas on how to solve this? Thanks!

Answer


I finally solved it:

  • I added the option --conf spark.yarn.jar to the spark-submit command, with its value set to the location of the Spark assembly jar on the remote Spark cluster. This avoids uploading the Spark assembly jar from my client installation (a slow process which, in fact, did not exactly match the remote version).
  • I added the property hdp.version to the client-side yarn-site.xml, with its value set to the HDP version of the remote Hadoop-Spark cluster. This avoids substitution errors in certain paths, which ultimately showed up as the connection errors described in the question.
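Concretely, the two fixes amount to something like the following. The assembly jar path and HDP version below are illustrative (inferred from the RELEASE file quoted above); substitute the actual values for your cluster:

```shell
# Point spark-submit at the assembly jar already present on the cluster
# nodes, instead of uploading the mismatched client-side one.
./bin/spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.yarn.jar=local:/usr/hdp/2.4.3.0-227/spark/lib/spark-assembly-1.6.2.2.4.3.0-227-hadoop2.7.1.2.4.3.0-227.jar \
  basic.py
```

```xml
<!-- Client-side yarn-site.xml: resolve the ${hdp.version} placeholder
     that HDP uses in classpath entries pushed to the containers. -->
<property>
  <name>hdp.version</name>
  <value>2.4.3.0-227</value>
</property>
```

The local: scheme tells YARN the jar is already available at that path on every node, so nothing is staged to HDFS at submission time.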