0

Inserting records into a table from a large number of CSV files

In my HDFS I have collected roughly 350 CSV files, ranging in size from a few KB to 250 MB each. I need to insert the values from these CSV files into a table named RECORD, and while inserting I also need to look up some other tables (PARAMETER and FRAME_RATE). I use the following queries for this task.

-- create an external table over the CSV files in HDFS
CREATE EXTERNAL TABLE TEMP_CSV(
    FRAME_RANK BIGINT,
    FRATE BIGINT,
    SOURCE STRING,
    PARAM STRING,
    RECORDEDVALUE STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ';'
LOCATION '/user/bala/output'
TBLPROPERTIES ("skip.header.line.count"="2");


-- now insert fresh values into RECORD
INSERT OVERWRITE TABLE RECORD
PARTITION(SESSION)
SELECT DISTINCT
    TEMP_CSV.FRAME_RANK,
    PARAMETER.K_ID,
    FRAME_RATE.K_ID,
    CAST(TEMP_CSV.RECORDEDVALUE AS FLOAT),
    split(reverse(split(reverse(TEMP_CSV.INPUT__FILE__NAME), "/")[0]), "[.]")[0] AS SESSION
FROM TEMP_CSV, PARAMETER, FRAME_RATE
WHERE PARAMETER.NAME = TEMP_CSV.PARAM AND FRAME_RATE.FRATE = TEMP_CSV.FRATE;

In my small PoC study I had about 50 CSV files, and this query successfully populated the RECORD table in roughly 500 seconds with the following configuration:

Hive on Spark
Spark standalone
6 nodes in the cluster
4 cores per node / 16 GB RAM
spark.executor.memory = 2g

However, when I processed the 350 files, the query failed with a Java heap space error in the executors. So I increased executor.memory to 4g: failed. I increased it to 6g: failed. Finally I increased spark.executor.memory to 12g: success, but it took about 2 hours 30 minutes. Raising spark.executor.memory to 12g left room for only one executor per node, hence only 6 executors.
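(For context, these knobs can be set per session in Hive on Spark; the sketch below only illustrates the properties involved, and the values are assumptions rather than a tested configuration.)

-- illustrative session-level settings for Hive on Spark (values are assumptions)
SET hive.execution.engine=spark;
SET spark.executor.memory=6g;   -- heap per executor
SET spark.executor.cores=2;     -- fewer cores per executor leaves room for more executors per node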

When executor.memory was 6g, this was the log at the time of failure:

****** 
****** 
2017-06-12 11:59:09,988 Stage-1_0: 101/101 Finished Stage-2_0: 12/12 Finished Stage-3_0: 0(+12,-2)/12 
2017-06-12 11:59:12,997 Stage-1_0: 101/101 Finished Stage-2_0: 12/12 Finished Stage-3_0: 0(+12,-2)/12 
2017-06-12 11:59:16,004 Stage-1_0: 101/101 Finished Stage-2_0: 12/12 Finished Stage-3_0: 0(+12,-2)/12 
2017-06-12 11:59:19,012 Stage-1_0: 101/101 Finished Stage-2_0: 12/12 Finished Stage-3_0: 0(+12,-2)/12 
***** 
***** 

On the executor side, this is the error log:

17/06/12 11:58:36 WARN NettyRpcEndpointRef: Error sending message [message = Heartbeat(5,[Lscala.Tuple2;@e65f7b8,BlockManagerId(5, bndligpu04, 54618))] in 1 attempts 
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [50 seconds]. This timeout is controlled by spark.executor.heartbeatInterval 
at  org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48) 
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63) 
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59) 
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) 
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76) 
at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101) 
at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:476) 
at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:505) 
at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:505) 
at org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:505) 
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1801) 
at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:505) 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) 
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745) 
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [50 seconds] 
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219) 
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223) 
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107) 
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53) 
at scala.concurrent.Await$.result(package.scala:107) 
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75) 
... 14 more 
17/06/12 11:58:36 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 115) 
java.lang.OutOfMemoryError: Java heap space 
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) 
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) 
at org.apache.orc.impl.OutStream.getNewInputBuffer(OutStream.java:109) 
at org.apache.orc.impl.OutStream.write(OutStream.java:130) 
at org.apache.orc.impl.RunLengthIntegerWriterV2.writeDeltaValues(RunLengthIntegerWriterV2.java:238) 
at org.apache.orc.impl.RunLengthIntegerWriterV2.writeValues(RunLengthIntegerWriterV2.java:186) 
at org.apache.orc.impl.RunLengthIntegerWriterV2.write(RunLengthIntegerWriterV2.java:772) 
at org.apache.orc.impl.WriterImpl$IntegerTreeWriter.writeBatch(WriterImpl.java:1039) 
at org.apache.orc.impl.WriterImpl$StructTreeWriter.writeRootBatch(WriterImpl.java:1977) 
at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:2759) 
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:277) 
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.addRow(WriterImpl.java:296) 
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.write(OrcOutputFormat.java:103) 
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:743) 
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) 
at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:97) 
at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processKeyValues(SparkReduceRecordHandler.java:309) 
at org.apache.hadoop.hive.ql.exec.spark.SparkReduceRecordHandler.processRow(SparkReduceRecordHandler.java:267) 
at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:49) 
at org.apache.hadoop.hive.ql.exec.spark.HiveReduceFunctionResultList.processNextRecord(HiveReduceFunctionResultList.java:28) 
at org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList$ResultIterator.hasNext(HiveBaseFunctionResultList.java:95) 
at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:41) 
at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120) 
at org.apache.spark.rdd.AsyncRDDActions$$anonfun$foreachAsync$1$$anonfun$apply$15.apply(AsyncRDDActions.scala:120) 
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1992) 
at org.apache.spark.SparkContext$$anonfun$37.apply(SparkContext.scala:1992) 
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
at org.apache.spark.scheduler.Task.run(Task.scala:89) 
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
17/06/12 11:58:36  ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main] 
java.lang.OutOfMemoryError: Java heap space 
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57) 
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335) 
at org.apache.orc.impl.OutStream.getNewInputBuffer(OutStream.java:109) 
at org.apache.orc.impl.OutStream.write(OutStream.java:130) 
at org.apache.orc.impl.RunLengthIntegerWriterV2.writeDeltaValues(RunLengthIntegerWriterV2.java:238) 

My questions are:

  1. Is there scope to optimize the query?
  2. Is there any other Spark/Hive configuration that could handle this challenge?
  3. Is there a way to tell Hive to process the files, say, 50 at a time?

Any help/information to solve this issue would be appreciated. One more piece of information: the SELECT statement on its own works, and I can see the result in my Hue browser. The query breaks only when I try to INSERT the data gathered by that SELECT.

Answers

-1

You can try increasing the executor cores for this job.

The number of executor cores is the number of parallel tasks an executor can run. Worker cores are the CPU cores that you allow the worker to use.

In Spark, you can set the number of CPU cores when starting a slave; this defines the total number of CPU cores that Spark applications may use on that machine, on the worker only. The default is to use all available cores.

The command to start Spark would be something like: ./sbin/start-all.sh --cores 2

Or you can use --executor-cores 2
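In Hive on Spark the same idea can also be expressed per session; a minimal sketch, assuming the Spark properties are passed through unchanged (the values are illustrative only):

SET spark.executor.cores=2;   -- parallel tasks per executor
SET spark.cores.max=12;       -- total cores the application may take in standalone mode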

+0

Thanks for the suggestion, but I'm not sure how --executor-cores = 2 would be the right fix. With only one process running per core and 6 GB available it failed; running one more process sharing that 6 GB should fail again, right? I can give it a try, though. – Bala

0

After some digging into the logs and the table itself, I did the following:

  1. I removed the 'CLUSTERED BY' (bucketing) on the RECORD table. Earlier, RECORD had 12 tasks in its second stage (12, matching the buckets). To increase that number I removed the buckets, and now 273 tasks are created. I still don't know the reason behind this, but with 4 GB of executor memory this configuration works. (A rough DDL sketch follows this list.)

  2. I moved the setup to Spark on YARN. This improved performance; now I am able to complete the query in 35 minutes.
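For illustration only, a hedged guess at what the unbucketed RECORD table might look like; the column names other than RECORDEDVALUE and the ORC format are assumptions inferred from the query and the stack trace above, not the actual DDL:

-- hypothetical replacement for a table originally created with CLUSTERED BY ... INTO 12 BUCKETS
CREATE TABLE RECORD (
    FRAME_RANK    BIGINT,
    PARAMETER_ID  BIGINT,   -- assumed name for the PARAMETER.K_ID reference
    FRAME_RATE_ID BIGINT,   -- assumed name for the FRAME_RATE.K_ID reference
    RECORDEDVALUE FLOAT
)
PARTITIONED BY (SESSION STRING)
STORED AS ORC;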

Still, I can see that there may be one or two optimizations possible in the query itself. I will try it with explicit joins.
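As a starting point, here is a sketch of the same INSERT written with explicit JOIN syntax; it should be equivalent to the comma-style join in the question, with the DISTINCT likely remaining the expensive step:

INSERT OVERWRITE TABLE RECORD
PARTITION(SESSION)
SELECT DISTINCT
    c.FRAME_RANK,
    p.K_ID,
    f.K_ID,
    CAST(c.RECORDEDVALUE AS FLOAT),
    split(reverse(split(reverse(c.INPUT__FILE__NAME), "/")[0]), "[.]")[0] AS SESSION
FROM TEMP_CSV c
JOIN PARAMETER p ON p.NAME = c.PARAM
JOIN FRAME_RATE f ON f.FRATE = c.FRATE;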