2017-02-23

Hive on Spark: Failed to create spark client

I am trying to get Hive 2.1.1 on Spark 2.1.0 working on a single instance. I am not sure whether this is the right approach. At the moment I only have one machine, so I cannot build a cluster.

Whenever I run an insert query in Hive, I get this error:

hive> insert into mcus (id, name) values (1, 'ARM'); 
Query ID = server_20170223121333_416506b4-13ba-45a4-a0a2-8417b187e8cc 
Total jobs = 1 
Launching Job 1 out of 1 
In order to change the average load for a reducer (in bytes): 
    set hive.exec.reducers.bytes.per.reducer=<number> 
In order to limit the maximum number of reducers: 
    set hive.exec.reducers.max=<number> 
In order to set a constant number of reducers: 
    set mapreduce.job.reduces=<number> 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)' 
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask 

I am afraid I have not configured things correctly, because I cannot find any Spark logs under hdfs dfs -ls /spark/eventlog. Here is the part of my hive-site.xml that relates to Spark and YARN:

<property> 
    <name>hive.exec.stagingdir</name> 
    <value>/tmp/hive-staging</value> 
</property> 

<property> 
    <name>hive.fetch.task.conversion</name> 
    <value>more</value> 
</property> 

<property> 
    <name>hive.execution.engine</name> 
    <value>spark</value> 
</property> 

<property> 
    <name>spark.master</name> 
    <value>spark://ThinkPad-W550s-Lab:7077</value> 
</property> 

<property> 
    <name>spark.eventLog.enabled</name> 
    <value>true</value> 
</property> 

<property> 
    <name>spark.eventLog.dir</name> 
    <value>hdfs://localhost:8020/spark/eventlog</value> 
</property> 
<property> 
    <name>spark.executor.memory</name> 
    <value>2g</value> 
</property> 

<property> 
    <name>spark.serializer</name> 
    <value>org.apache.spark.serializer.KryoSerializer</value> 
</property> 

<property> 
    <name>spark.home</name> 
    <value>/home/server/spark</value> 
</property> 

<property> 
    <name>spark.yarn.jar</name> 
    <value>hdfs://localhost:8020/spark-jars/*</value> 
</property> 
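The spark.yarn.jar property above points at hdfs://localhost:8020/spark-jars/*, which assumes the Spark jars have already been uploaded to HDFS. A minimal sketch of that upload step, using the paths from the config above (SPARK_HOME is assumed to match the spark.home value):

```shell
# Upload the Spark jars to the HDFS location referenced by spark.yarn.jar,
# and create the event log directory referenced by spark.eventLog.dir.
SPARK_HOME=/home/server/spark   # matches spark.home in hive-site.xml

hdfs dfs -mkdir -p /spark-jars /spark/eventlog
hdfs dfs -put "$SPARK_HOME"/jars/*.jar /spark-jars/
hdfs dfs -ls /spark-jars | head
```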

1) Since I did not configure the fs.default.name value in Hadoop, can I just use hdfs://localhost:8020 as the file system path in the config files, or should I change the port to 9000 (I get the same error when I change 8020 to 9000)?
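One way to check which NameNode URI Hadoop is actually using (and therefore whether 8020 or 9000 is the right port) is to ask the client configuration directly. A sketch using the standard hdfs getconf tool:

```shell
# Print the file system URI Hadoop resolves from core-site.xml
# (fs.defaultFS, the successor to fs.default.name). If the key is
# unset, the default is the local file system, not HDFS.
hdfs getconf -confKey fs.defaultFS

# The port in spark.eventLog.dir and spark.yarn.jar must match this URI.
```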


2) I started Spark with start-master.sh and start-slave.sh spark://ThinkPad-W550s-Lab:7077. Is that correct?
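For a single-machine standalone cluster, the usual sequence is to start the master first and then register one worker with it. A sketch, with the host name taken from the spark.master value above:

```shell
SPARK_HOME=/home/server/spark   # matches spark.home in hive-site.xml

# Start the standalone master (listens on port 7077 by default).
"$SPARK_HOME"/sbin/start-master.sh

# Start one worker and register it with the master.
"$SPARK_HOME"/sbin/start-slave.sh spark://ThinkPad-W550s-Lab:7077
```

If both daemons are up, the master web UI at http://localhost:8080 should list the worker as ALIVE.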

3) According to this thread, how can I check the Spark Executor Memory + Overhead value, in order to set the yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb values?

The values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb are much greater than spark.executor.memory.
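The container YARN must allocate per executor is the executor memory plus a memory overhead, which in Spark 2.x on YARN defaults to max(384 MB, 10% of spark.executor.memory). A sketch of that arithmetic for the 2g executor configured above:

```shell
# Compute the YARN container size needed for one Spark executor:
# executor memory + overhead, where overhead defaults to
# max(384 MB, 10% of executor memory) in Spark 2.x on YARN.
EXECUTOR_MEM_MB=2048                       # spark.executor.memory = 2g
OVERHEAD_MB=$(( EXECUTOR_MEM_MB / 10 ))    # 10% of executor memory
if [ "$OVERHEAD_MB" -lt 384 ]; then
  OVERHEAD_MB=384                          # floor of 384 MB
fi
CONTAINER_MB=$(( EXECUTOR_MEM_MB + OVERHEAD_MB ))
echo "executor container needs ${CONTAINER_MB} MB"
```

Both yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb must be at least this container size, or YARN will refuse to launch the executor.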

4) How can I fix the Failed to create spark client error? Thanks a lot!

Answers


For the third question, you can find suitable values for yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb in the yarn-default.xml file. Alternatively, if you have access to the YARN Resource Manager UI, you can find the values under Tools -> Configuration (xml).
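Besides reading yarn-default.xml, the running ResourceManager also exposes its effective configuration over HTTP. A hedged sketch, assuming the ResourceManager web UI is on its default port 8088:

```shell
# Dump the ResourceManager's effective configuration and filter out the
# two memory settings. 8088 is the default RM web UI port; adjust it if
# yarn.resourcemanager.webapp.address says otherwise.
curl -s http://localhost:8088/conf \
  | grep -A2 -E 'yarn\.(scheduler\.maximum-allocation-mb|nodemanager\.resource\.memory-mb)'
```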


Thank you, I found them! –