How to run sqlContext in spark-jobserver

I am trying to run a job locally on spark-jobserver. My application has the following dependencies:

name := "spark-test" 

version := "1.0" 

scalaVersion := "2.10.6" 

resolvers += Resolver.jcenterRepo 

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1" 
libraryDependencies += "spark.jobserver" %% "job-server-api" % "0.6.2" % "provided" 
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.2" 
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.2" 
libraryDependencies += "com.holdenkarau" % "spark-testing-base_2.10" % "1.6.2_0.4.7" % "test" 

I built the application package with:

sbt assembly 

After that, I submitted the package like this:

curl --data-binary @spark-test-assembly-1.0.jar localhost:8090/jars/myApp 
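
The job itself is then triggered through the jobserver REST API. A sketch of that call (the appName matches the upload above and the classPath comes from the error output below; with no context parameter, spark-jobserver 0.6.x spins up an ad-hoc context, which is consistent with the generated context name in the error):

curl -d "" 'localhost:8090/jobs?appName=myApp&classPath=jobs.TransformationJob'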

When I trigger the job, I get the following error:

{ 
    "duration": "0.101 secs", 
    "classPath": "jobs.TransformationJob", 
    "startTime": "2017-02-17T13:01:55.549Z", 
    "context": "42f857ba-jobs.TransformationJob", 
    "result": { 
        "message": "java.lang.Exception: Could not find resource path for Web UI: org/apache/spark/sql/execution/ui/static", 
        "errorClass": "java.lang.RuntimeException", 
        "stack": [
            "org.apache.spark.ui.JettyUtils$.createStaticHandler(JettyUtils.scala:180)", 
            "org.apache.spark.ui.WebUI.addStaticHandler(WebUI.scala:117)", 
            "org.apache.spark.sql.execution.ui.SQLTab.<init>(SQLTab.scala:34)", 
            "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", 
            "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", 
            "scala.Option.foreach(Option.scala:236)", 
            "org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1369)", 
            "org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:77)", 
            "jobs.TransformationJob$.runJob(TransformationJob.scala:64)", 
            "jobs.TransformationJob$.runJob(TransformationJob.scala:14)", 
            "spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:301)", 
            "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)", 
            "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)", 
            "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)", 
            "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)", 
            "java.lang.Thread.run(Thread.java:745)"
        ] 
    }, 
    "status": "ERROR", 
    "jobId": "a6bd6f23-cc82-44f3-8179-3b68168a2aa7" 
} 

Here is the part of the application that is failing:

override def runJob(sparkCtx: SparkContext, config: Config): Any = { 
    val sqlContext = new SQLContext(sparkCtx) 
    ... 
} 
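
For reference, a minimal sketch of what the full job object looks like under the job-server-api 0.6.x SparkJob contract (the validate body, the imports, and the placeholder query are assumptions; only runJob appears in my code above):

import com.typesafe.config.Config 
import org.apache.spark.SparkContext 
import org.apache.spark.sql.SQLContext 
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation} 

object TransformationJob extends SparkJob { 
    // Assumed no-op validation; the real job may inspect `config` here. 
    override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid 

    override def runJob(sparkCtx: SparkContext, config: Config): Any = { 
        // This construction is what throws the Web UI resource error above. 
        val sqlContext = new SQLContext(sparkCtx) 
        // Placeholder standing in for the transformation logic elided above. 
        sqlContext.sql("SELECT 1").collect() 
    } 
} 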

I have a few questions:

1) I noticed that to run spark-jobserver locally I don't need to have Spark installed. Does spark-jobserver already embed Spark?

2) How can I tell which version of Spark spark-jobserver is using? Where is it specified?

3) I am using version 1.6.2 of spark-sql. Should I change it or keep it?

I would be very grateful if someone could answer these questions.

How are you running spark-jobserver now? – noorul

Hi @noorul. I am running spark-jobserver like this: job-server/reStart – vallim

Did the answer help you? – noorul

Answer

  1. Yes, spark-jobserver ships with Spark as a dependency. Instead of job-server/reStart, you should use job-server-extras/reStart, which will pull in the SQL-related dependencies for you. 
  2. Look at project/Versions.scala. 
  3. I don't think you need the spark-sql dependency, because it is already included when you run job-server-extras/reStart. 
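
As a follow-up illustration of why job-server-extras matters here: besides the SQL dependencies, it provides a SQL context factory and a SparkSqlJob trait, so the job receives a ready-made SQLContext instead of constructing one inside runJob (the construction that fails above). A minimal sketch, assuming the 0.6.x job-server-extras API (trait and validation names as documented for that release; the query is a placeholder):

import com.typesafe.config.Config 
import org.apache.spark.sql.SQLContext 
import spark.jobserver.{SparkJobValid, SparkJobValidation, SparkSqlJob} 

object TransformationSqlJob extends SparkSqlJob { 
    // The jobserver injects the SQLContext, so the job never builds its own. 
    override def validate(sql: SQLContext, config: Config): SparkJobValidation = SparkJobValid 

    override def runJob(sql: SQLContext, config: Config): Any = { 
        // Placeholder query standing in for the real transformation logic. 
        sql.sql("SELECT 1").collect() 
    } 
} 

Such a job would then run against a SQL context created through the REST API, e.g. with context-factory=spark.jobserver.context.SQLContextFactory, as described in the spark-jobserver docs for 0.6.x.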