
This relates to the question [Create table xxx as select * from yyy sometimes fails on Spark SQL 2.1.1 Thrift Server - unable to move source to destination on HDFS].

當使用火花節儉服務器,執行多個語句,如create table xxx as select * from yyy,只有第一次會成功,以後嘗試將總是失敗,因爲java.io.IOException: Filesystem closeddoAs問題。

The full error stack trace:

17/05/29 08:44:53 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING, 
org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/.hive-staging_hive_2017-05-29_08-44-50_607_2388239917764085229-3/-ext-10000/part-00000 to destination hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/part-00000; 
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106) 
    at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:766) 
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:374) 
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:221) 
    at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:407) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92) 
    at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:92) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56) 
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114) 
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135) 
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) 
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132) 
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92) 
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92) 
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185) 
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64) 
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592) 
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:231) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:174) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:422) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:184) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/.hive-staging_hive_2017-05-29_08-44-50_607_2388239917764085229-3/-ext-10000/part-00000 to destination hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/part-00000 
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2644) 
    at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2892) 
    at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1640) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:728) 
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply$mcV$sp(HiveClientImpl.scala:676) 
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:676) 
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:676) 
    at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:279) 
    at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:226) 
    at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:225) 
    at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:268) 
    at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:675) 
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply$mcV$sp(HiveExternalCatalog.scala:768) 
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:766) 
    at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:766) 
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97) 
    ... 40 more 
Caused by: java.io.IOException: Filesystem closed 
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:798) 
    at org.apache.hadoop.hdfs.DFSClient.getEZForPath(DFSClient.java:2966) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getEZForPath(DistributedFileSystem.java:1906) 
    at org.apache.hadoop.hdfs.client.HdfsAdmin.getEncryptionZoneForPath(HdfsAdmin.java:262) 
    at org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.isPathEncrypted(Hadoop23Shims.java:1221) 
    at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2607) 
    ... 59 more 
17/05/29 08:44:53 ERROR thriftserver.SparkExecuteStatementOperation: Error running hive query: 
org.apache.hive.service.cli.HiveSQLException: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/.hive-staging_hive_2017-05-29_08-44-50_607_2388239917764085229-3/-ext-10000/part-00000 to destination hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107/part-00000; 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:266) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:174) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:171) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:422) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656) 
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:184) 
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

This is the log from a normal create table as select:

17/05/29 08:42:30 INFO cluster.YarnScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
17/05/29 08:42:30 INFO scheduler.DAGScheduler: ResultStage 1 (run at AccessController.java:0) finished in 2.079 s 
17/05/29 08:42:30 INFO scheduler.DAGScheduler: Job 1 finished: run at AccessController.java:0, took 2.100557 s 
17/05/29 08:42:30 INFO metastore.HiveMetaStore: 2: get_table : db=task tbl=task_106 
17/05/29 08:42:30 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_106  
17/05/29 08:42:30 INFO metastore.HiveMetaStore: 2: get_table : db=task tbl=task_106 
17/05/29 08:42:30 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_106  
17/05/29 08:42:30 INFO metadata.Hive: Replacing src:hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_106/.hive-staging_hive_2017-05-29_08-42-26_232_2514893773205547001-1/-ext-10000/part-00000, dest: hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_106/part-00000, Status:true 
17/05/29 08:42:30 INFO metadata.Hive: Replacing src:hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_106/.hive-staging_hive_2017-05-29_08-42-26_232_2514893773205547001-1/-ext-10000/part-00001, dest: hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_106/part-00001, Status:true 

This is a failing one. After some get_table calls, a drop_table is executed, which leads to FileSystem.close, and finally "unable to move source":

17/05/29 08:42:50 INFO cluster.YarnScheduler: Removed TaskSet 6.0, whose tasks have all completed, from pool 
17/05/29 08:42:50 INFO scheduler.DAGScheduler: ResultStage 6 (run at AccessController.java:0) finished in 2.567 s 
17/05/29 08:42:50 INFO scheduler.DAGScheduler: Job 3 finished: run at AccessController.java:0, took 2.819549 s 
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_107  
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_107  
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_database: task 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: task 
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_107  
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_database: task 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_database: task 
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: get_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_table : db=task tbl=task_107  
17/05/29 08:42:51 INFO metastore.HiveMetaStore: 6: drop_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=drop_table : db=task tbl=task_107 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 
17/05/29 08:42:52 INFO metastore.hivemetastoressimpl: deleting hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107 
17/05/29 08:42:52 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes. 
17/05/29 08:42:52 INFO metastore.hivemetastoressimpl: Deleted the diretory hdfs://jzf-01:9000/user/hive/warehouse/task.db/task_107 
17/05/29 08:42:52 ERROR thriftserver.SparkExecuteStatementOperation: Error executing query, currentState RUNNING, 

Answers


Try setting hive.exec.stagingdir in hive-site.xml like this:

<property> 
    <name>hive.exec.stagingdir</name> 
    <value>/tmp/hive/spark-${user.name}</value> 
</property> 

This worked for a client who upgraded from 1.6.2 to 2.1.1 and hit the same problem with CTAS. On our development cluster this gets us past your particular error, though we still run into some HDFS permission issues.

Hope this helps.


Cool! This solves it elegantly. Thanks, finally someone knows what I'm talking about. – pinkdawn


Hey, I forgot to add one thing. Unless you set hive.default.fileformat to parquet or orc, CTAS tables will be TEXTFILE, so searching them may be slower than you expect. –


I use 'create table xx stored as orc as select * from yyy', so I suppose it will be stored as ORC? – pinkdawn


This is 100% a bug. I don't know how to describe it, but I can tell you how to get around it.

After hundreds of failures, I found these patterns when executing create table as select over JDBC:

  • Over JDBC, if one connection executes it once and then another connection tries, it fails 100% of the time. This almost killed me, because it took ages to realize that spring simpleDatasource -> jdbcTemplate just uses a different connection for each statement. With a connection pool, I set the pool size to 1, and that works fine (see the sketch after this list).
  • With beeline, if you execute it once, stop beeline, and then start beeline again, it fails 100% of the time.
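As a sketch of the pool-of-one observation above, reusing one long-lived connection for every statement avoids the failure. The driver class and URL format are standard Hive JDBC; the host, port, and table names are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SingleConnectionCtas {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // One shared connection for all CTAS statements, mimicking a
        // connection pool whose size is fixed at 1.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://jzf-01:10000/task", "root", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE task_201 AS SELECT * FROM src_table");
            stmt.execute("CREATE TABLE task_202 AS SELECT * FROM src_table"); // also succeeds
        }
    }
}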

The only way I found to get around this bug is, every time I want to execute one:

  • start an ssh session
  • in that session, start beeline and connect to the thrift server
  • execute the create table as select statement
  • destroy the ssh session

This is the only way I found to execute create table as select multiple times within a single Spark SQL Thrift JDBC server's lifetime.


Add this property to hdfs-site.xml:

<property> 
    <name>fs.hdfs.impl.disable.cache</name> 
    <value>true</value> 
</property> 

Reason: Spark and HDFS use the same API (under the hood they share the same FileSystem instance).

When beeline closes its FileSystem instance, it also closes the Thrift Server's FileSystem instance. When a second beeline session tries to get the instance, it will always report "Caused by: java.io.IOException: Filesystem closed".
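A minimal sketch of this shared-cache behavior, using the standard Hadoop FileSystem API (the NameNode URI is taken from the logs above; the class name FsCacheDemo is hypothetical):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI nameNode = URI.create("hdfs://jzf-01:9000");

        // By default, FileSystem.get() returns a cached, shared instance:
        FileSystem a = FileSystem.get(nameNode, conf);
        FileSystem b = FileSystem.get(nameNode, conf);
        System.out.println(a == b); // true: same instance for the same URI and user

        a.close(); // closing it anywhere closes it for every holder;
                   // any use of b now throws java.io.IOException: Filesystem closed

        // With caching disabled, each get() returns an independent instance:
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);
        FileSystem c = FileSystem.get(nameNode, conf);
        System.out.println(b == c); // false: a fresh, uncached instance
    }
}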

See this issue for details:

https://issues.apache.org/jira/browse/SPARK-21725


While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From review](/review/low-quality-posts/17811176) –


@user8142092 - Thank you so much for your help! You've saved my day. I had the same problem in another SO post https://stackoverflow.com/questions/48592337/hive-hadoop-intermittent-failure-unable-to-move-source-to-destination/48611442#48611442 and it is now solved thanks to your answer. – leeyuiwah


@user8142092 - This method works, but queries then become very slow. Queries that used to finish within seconds now take about 40 seconds to execute. Did you see the same on your site? – leeyuiwah