
Zeppelin/Spark: org.apache.spark.SparkException: Cannot run program "/usr/bin/": error=13, permission denied

I am trying to run a basic regression with Zeppelin 0.7.2 and Spark 2.1.1 on Debian 9. Both are installed under /usr/local/, meaning /usr/local/zeppelin/ and /usr/local/spark. Zeppelin also knows the correct SPARK_HOME. First, I load the data:

%spark.pyspark 
from sqlalchemy import create_engine #sql query 
import pandas as pd #sql query 
from pyspark import SparkContext #Spark DataFrame 
from pyspark.sql import SQLContext #Spark DataFrame 

# database connection and sql query 
pdf = pd.read_sql("select col1, col2, col3 from table", create_engine('mysql+mysqldb://user:[email protected]:3306/db').connect()) 

print(pdf.size) # size of pandas dataFrame 

# convert pandas dataFrame into spark dataFrame 
sdf = SQLContext(SparkContext.getOrCreate()).createDataFrame(pdf) 

sdf.printSchema()# what does the spark dataFrame look like? 

Fine, that works, and I get the output with 46977 rows and three cols:

46977 
root 
|-- col1: double (nullable = true) 
|-- col2: double (nullable = true) 
|-- col3: date (nullable = true) 

Okay, now I want to do the regression:

%spark.pyspark 
# do a linear regression with sparks ml libs 
# https://community.intersystems.com/post/machine-learning-spark-and-cach%C3%A9 
from pyspark.ml.regression import LinearRegression 
from pyspark.ml.feature import VectorAssembler 

# choose several inputCols and transform the "Features" column(s) into the correct vector format 
vectorAssembler = VectorAssembler(inputCols=["col1"], outputCol="features") 
data=vectorAssembler.transform(sdf) 
print(data) 

# Split the data into 70% training and 30% test sets. 
trainingData,testData = data.randomSplit([0.7, 0.3], 0.0) 
print(trainingData) 

# Configure the model. 
lr = LinearRegression().setFeaturesCol("features").setLabelCol("col2").setMaxIter(10) 

## Train the model using the training data. 
lrm = lr.fit(trainingData) 

## Run the test data through the model and display its predictions for col2. 
#predictions = lrm.transform(testData) 
#predictions.show() 

But when doing lr.fit(trainingData), I get errors in the console (and in Zeppelin's log file). The error seems to occur at startup: Cannot run program "/usr/bin/": error=13, Keine Berechtigung (permission denied). I wonder what is supposed to be launched from /usr/bin/, since I only use paths under /usr/local/. The full traceback is below, followed by a quick check of which Python executable Spark is configured to use.

Traceback (most recent call last): 
    File "/tmp/zeppelin_pyspark-4001144784380663394.py", line 367, in <module> 
    raise Exception(traceback.format_exc()) 
Exception: Traceback (most recent call last): 
    File "/tmp/zeppelin_pyspark-4001144784380663394.py", line 360, in <module> 
    exec(code, _zcUserQueryNameSpace) 
    File "<stdin>", line 9, in <module> 
    File "/usr/local/spark/python/pyspark/ml/base.py", line 64, in fit 
    return self._fit(dataset) 
    File "/usr/local/spark/python/pyspark/ml/wrapper.py", line 236, in _fit 
    java_model = self._fit_java(dataset) 
    File "/usr/local/spark/python/pyspark/ml/wrapper.py", line 233, in _fit_java 
    return self._java_obj.fit(dataset._jdf) 
    File "/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__ 
    answer, self.gateway_client, self.target_id, self.name) 
    File "/usr/local/spark/python/pyspark/sql/utils.py", line 63, in deco 
    return f(*a, **kw) 
    File "/usr/local/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value 
    format(target_id, ".", name), value) 
Py4JJavaError: An error occurred while calling o70.fit. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.io.IOException: Cannot run program "/usr/bin/": error=13, Keine Berechtigung 
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048) 
    at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163) 
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:89) 
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:65) 
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:116) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:128) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) 
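
Since the traceback points at PythonWorkerFactory.startDaemon, a quick check of which Python executable Spark is told to use helps narrow this down. This is only a sketch, run in the same %spark.pyspark interpreter; the spark.pyspark.python property may simply be unset, in which case the PYSPARK_PYTHON environment variable (or plain python) is what the workers get:

%spark.pyspark
import os, sys
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
print(sys.executable)                                       # Python running the driver (the Zeppelin interpreter)
print(os.environ.get("PYSPARK_PYTHON", "not set"))          # executable the workers are launched with, if set
print(sc.getConf().get("spark.pyspark.python", "not set"))  # Spark 2.1+ property; takes precedence over the env var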

Answer


This was a configuration error in Zeppelin's conf/zeppelin-env.sh. There I had the following line active (not commented out), and it caused the error. Now that I have commented the line out, it works:

#export PYSPARK_PYTHON=/usr/bin/ # path to the python command. must be the same path on the driver(Zeppelin) and all workers. 

So the problem was that the path in PYSPARK_PYTHON was not set correctly, and now the default Python binary is used instead. I found the solution by searching for the string /usr/bin/ with grep -R "/usr/bin/" in Zeppelin's base directory and checking the files it turned up.
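
Alternatively, the variable can stay enabled as long as it points at an actual interpreter rather than a bare directory. A minimal sketch of such a line, assuming the system interpreter lives at /usr/bin/python3 (adjust to whatever is really installed; per the comment above it must be the same path on the driver and all workers):

# conf/zeppelin-env.sh -- example path, verify it exists on every node
export PYSPARK_PYTHON=/usr/bin/python3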
