Using addFile with pipe on a YARN cluster

I have been successfully using pyspark with my YARN cluster. The work I do involves using the RDD pipe command to send data through a binary I made. I can do this easily in pyspark, like so (assuming "sc" is already defined):

sc.addFile("./dumb_prog") 
t = sc.parallelize(range(10)) 
t.pipe("dumb_prog") 
t.take(10) # Gives expected result 

However, if I do the same thing in Scala, the pipe command fails with "Cannot run program "dumb_prog": error=2, No such file or directory". Here is the code in the Scala shell:

sc.addFile("./dumb_prog") 
val t = sc.parallelize(0 until 10) 
val u = t.pipe("dumb_prog") 
u.take(10) 

Why does this just work in Python and not in Scala? Is there a way to get it to work in Scala?

Here is the full error message from the Scala side:

14/09/29 13:07:47 INFO SparkContext: Starting job: take at <console>:17 
14/09/29 13:07:47 INFO DAGScheduler: Got job 3 (take at <console>:17) with 1 
output partitions (allowLocal=true) 
14/09/29 13:07:47 INFO DAGScheduler: Final stage: Stage 3(take at 
<console>:17) 
14/09/29 13:07:47 INFO DAGScheduler: Parents of final stage: List() 
14/09/29 13:07:47 INFO DAGScheduler: Missing parents: List() 
14/09/29 13:07:47 INFO DAGScheduler: Submitting Stage 3 (PipedRDD[3] at pipe 
at <console>:14), which has no missing parents 
14/09/29 13:07:47 INFO MemoryStore: ensureFreeSpace(2136) called with 
curMem=7453, maxMem=278302556 
14/09/29 13:07:47 INFO MemoryStore: Block broadcast_3 stored as values in 
memory (estimated size 2.1 KB, free 265.4 MB) 
14/09/29 13:07:47 INFO MemoryStore: ensureFreeSpace(1389) called with 
curMem=9589, maxMem=278302556 
14/09/29 13:07:47 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes 
in memory (estimated size 1389.0 B, free 265.4 MB) 
14/09/29 13:07:47 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory 
on 10.10.0.20:37574 (size: 1389.0 B, free: 265.4 MB) 
14/09/29 13:07:47 INFO BlockManagerMaster: Updated info of block 
broadcast_3_piece0 
14/09/29 13:07:47 INFO DAGScheduler: Submitting 1 missing tasks from Stage 3 
(PipedRDD[3] at pipe at <console>:14) 
14/09/29 13:07:47 INFO YarnClientClusterScheduler: Adding task set 3.0 with 
1 tasks 
14/09/29 13:07:47 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 
6, SERVERNAME, PROCESS_LOCAL, 1201 bytes) 
14/09/29 13:07:47 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory 
on SERVERNAME:57118 (size: 1389.0 B, free: 530.3 MB) 
14/09/29 13:07:47 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 6, 
SERVERNAME): java.io.IOException: Cannot run program "dumb_prog": error=2, 
No such file or directory 
    java.lang.ProcessBuilder.start(ProcessBuilder.java:1041) 
    org.apache.spark.rdd.PipedRDD.compute(PipedRDD.scala:119) 
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262) 
    org.apache.spark.rdd.RDD.iterator(RDD.scala:229) 
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62) 
    org.apache.spark.scheduler.Task.run(Task.scala:54) 
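
The trace bottoms out in java.lang.ProcessBuilder.start, which is the key detail: on Unix, a program name containing no slash is looked up on PATH, not in the process's working directory where addFile places its copies, and PATH rarely includes ".". Below is a minimal JVM-only sketch of that failure mode, independent of Spark (assuming dumb_prog is an executable sitting in the current directory):

import java.io.IOException 

// pipe() ultimately calls ProcessBuilder.start (see the trace above). 
// A bare name is searched for on PATH, so this fails with 
// "Cannot run program "dumb_prog": error=2, No such file or directory" 
// even though the file is right there in the working directory: 
try { new ProcessBuilder("dumb_prog").start() } 
catch { case e: IOException => println(e.getMessage) } 

// A name containing a slash skips the PATH search and is resolved 
// relative to the working directory instead: 
new ProcessBuilder("./dumb_prog").start() 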

Answer


I ran into a similar problem in yarn-client mode on Spark 1.3.0. When I looked at the application cache directory, the file was not being pushed to the executors even when I used --files. However, when I did the following, it was pushed to every executor:

sc.addFile("dumb_prog", true) 
t.pipe("./dumb_prog") 

I think this is a bug, but the above got me past the problem.
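
Putting the two changes together, here is a self-contained sketch of the workaround (the recursive flag on addFile and the "./" prefix are taken from the answer above; the final check is an added diagnostic, not part of the original answer):

// Ship the binary to every executor's working directory; the second 
// argument is the recursive flag this answer found necessary in 
// yarn-client mode on Spark 1.3.0. 
sc.addFile("dumb_prog", true) 

val t = sc.parallelize(0 until 10) 
// The "./" prefix matters: a bare "dumb_prog" would be resolved 
// against PATH rather than the task's working directory, where the 
// shipped copy actually lands. 
val u = t.pipe("./dumb_prog") 
u.take(10) 

// Quick check that the file reached each executor's working directory: 
sc.parallelize(0 until 4).map(_ => new java.io.File("dumb_prog").exists).collect() 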


sc.add("hdfs:///path to script file in hdfs") doesn't work. Have you tried it with Spark 2.2? – donald 2018-01-18 10:26:22