2017-08-03

Running a Spark driver in a Docker container - no connection back from the executors to the driver?

UPDATE: The problem is resolved. The Docker image is here: docker-spark-submit

I am running spark-submit with a fat jar inside a Docker container. My standalone Spark cluster runs on 3 virtual machines - one master and two workers. From an executor log on a worker machine, I see that the executor gets the following driver URL:

"--driver-url", "spark://CoarseGrainedScheduler@172.17.0.2:5001"

172.17.0.2 is actually the address of the container that hosts the driver, not the address of the machine the container runs on. That IP is not reachable from the worker machines, so the workers cannot communicate with the driver. Looking at the source code of StandaloneSchedulerBackend, it builds driverUrl from the spark.driver.host setting:

val driverUrl = RpcEndpointAddress(
    sc.conf.get("spark.driver.host"), 
    sc.conf.get("spark.driver.port").toInt, 
    CoarseGrainedSchedulerBackend.ENDPOINT_NAME).toString 
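
For illustration, plugging the values from the executor log into that expression reproduces the URL shown above (the endpoint name CoarseGrainedScheduler is the value of CoarseGrainedSchedulerBackend.ENDPOINT_NAME); a minimal sketch of the resulting string format, assembled by hand rather than with Spark's internal RpcEndpointAddress class:

// The same URL assembled by hand from the values in the executor log.
val driverHost   = "172.17.0.2"             // spark.driver.host
val driverPort   = 5001                     // spark.driver.port
val endpointName = "CoarseGrainedScheduler" // CoarseGrainedSchedulerBackend.ENDPOINT_NAME
val driverUrl    = s"spark://$endpointName@$driverHost:$driverPort"
// driverUrl == "spark://CoarseGrainedScheduler@172.17.0.2:5001"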

It does not take the SPARK_PUBLIC_DNS environment variable into account - is that correct? Inside the container I cannot set spark.driver.host to anything other than the container's "internal" IP address (172.17.0.2 in this case). When I try to set spark.driver.host to the IP address of the host machine, I get an error like this:

WARN Utils: Service 'sparkDriver' could not bind on port 5001. Attempting port 5002.

I tried setting spark.driver.bindAddress to the host machine's IP address, but got the same error. So, how can I configure Spark to communicate with the driver using the host machine's IP address instead of the Docker container's address?
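
The bind failure itself is expected socket behavior: inside the container, a process can only bind to addresses that exist in the container's network namespace, and the host machine's IP does not. A minimal sketch demonstrating this, using a hypothetical host IP of 192.168.1.10:

import java.net.{InetSocketAddress, ServerSocket}

// A socket can only be bound to an address assigned to one of the
// local network interfaces; 192.168.1.10 (hypothetical host machine
// IP) is not visible inside the container, so the bind fails.
val socket = new ServerSocket()
socket.bind(new InetSocketAddress("192.168.1.10", 5001))
// => java.net.BindException: Cannot assign requested address

Spark reacts to the BindException by retrying on the next port, which is exactly the "Attempting port 5002" warning above.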

UPD: stack trace from the executor side:

ERROR RpcOutboxMessage: Ask timeout before connecting successfully 
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713) 
    at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66) 
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:188) 
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:284) 
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala) 
Caused by: org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout 
    at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63) 
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36) 
    at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:216) 
    at scala.util.Try$.apply(Try.scala:192) 
    at scala.util.Failure.recover(Try.scala:216) 
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326) 
    at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:326) 
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
    at org.spark_project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) 
    at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:136) 
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
    at scala.concurrent.Promise$class.complete(Promise.scala:55) 
    at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:63) 
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:78) 
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55) 
    at scala.concurrent.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:55) 
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) 
    at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:54) 
    at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601) 
    at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:106) 
    at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599) 
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
    at scala.concurrent.Promise$class.tryFailure(Promise.scala:112) 
    at scala.concurrent.impl.Promise$DefaultPromise.tryFailure(Promise.scala:153) 
    at org.apache.spark.rpc.netty.NettyRpcEnv.org$apache$spark$rpc$netty$NettyRpcEnv$$onFailure$1(NettyRpcEnv.scala:205) 
    at org.apache.spark.rpc.netty.NettyRpcEnv$$anon$1.run(NettyRpcEnv.scala:239) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) 
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
    at java.lang.Thread.run(Thread.java:748) 
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply in 120 seconds 
    ... 8 more 

Answer


So, the working solution is:

  • Set spark.driver.host to the IP address of the host machine
  • Set spark.driver.bindAddress to the IP address of the container

The working Docker image is here: docker-spark-submit
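
To make the division of labor between the two settings concrete, here is a minimal sketch of a driver-side configuration. The addresses are hypothetical stand-ins (192.168.1.10 for the host machine, 172.17.0.2 for the container, spark://spark-master:7077 for the master), and pinning the ports is an assumption made so that they can be published from the container:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object DockerizedDriver {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("dockerized-driver")
      .setMaster("spark://spark-master:7077")   // hypothetical master URL
      // Advertised to executors - the address they connect back to.
      .set("spark.driver.host", "192.168.1.10") // hypothetical host machine IP
      // Actually bound inside the container - the only address available there.
      .set("spark.driver.bindAddress", "172.17.0.2")
      // Pin the driver ports so they can be published from the container.
      .set("spark.driver.port", "5001")
      .set("spark.driver.blockManager.port", "5002")

    val spark = SparkSession.builder.config(conf).getOrCreate()
    // ... job code ...
    spark.stop()
  }
}

For the advertised address to work, traffic arriving at the host IP on those ports must still reach the container, e.g. by publishing them when starting the container (docker run -p 5001:5001 -p 5002:5002 ... with the ports pinned as above) or by using host networking; presumably the linked docker-spark-submit image takes care of this.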


Apparently it did not work when using SPARK_DRIVER_HOST. There are three IP addresses here: docker0 at 172.17.0.1 (the host's docker ethernet bridge), the Job Server docker IP (172.17.0.2), and the locally assigned IP address of the host machine. We tried all 3 for SPARK_DRIVER_HOST, but we get connection timeouts! – Somum
