Spark write to PostgreSQL: BatchUpdateException?

I have a simple Spark job that reads large log files, filters them, and writes the results to a new table. The simplified Scala driver code is:

import java.util.Properties

// Read the raw log lines, parse them, and filter to the requested time window
val sourceRdd = sc.textFile(sourcePath)
val parsedRdd = sourceRdd.flatMap(parseRow)
val filteredRdd = parsedRdd.filter(l => filterLogEntry(l, beginDateTime, endDateTime))

// Convert to a DataFrame and write it to PostgreSQL over JDBC
val dataFrame = sqlContext.createDataFrame(filteredRdd)
val writer = dataFrame.write

val properties = new Properties()
properties.setProperty("user", "my_user")
properties.setProperty("password", "my_password")
writer.jdbc("jdbc:postgresql://ip_address/database_name", "my_table", properties)

This works perfectly for small batches. For a large batch, after two hours of execution, I see roughly 8 million records in the destination table and the Spark job has failed with the following error:

Caused by: java.sql.BatchUpdateException: Batch entry 524 INSERT INTO my_table <snip> was aborted. Call getNextException to see the cause. 
    at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:136) 
    at org.postgresql.core.v3.QueryExecutorImpl$ErrorTrackingResultHandler.handleError(QueryExecutorImpl.java:308) 
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2004) 
    at org.postgresql.core.v3.QueryExecutorImpl.flushIfDeadlockRisk(QueryExecutorImpl.java:1187) 
    at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1212) 
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:351) 
    at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1019) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.savePartition(JdbcUtils.scala:210) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:277) 
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$saveTable$1.apply(JdbcUtils.scala:276) 
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920) 
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$33.apply(RDD.scala:920) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

If I copy-paste the given SQL INSERT statement into a SQL console, it works fine. In the PostgreSQL server log I see:

(This is the unmodified/unanonymized log.)

2016-04-26 22:38:09 GMT [3769-12] [email protected] ERROR: syntax error at or near "was" at character 544 
2016-04-26 22:38:09 GMT [3769-13] [email protected] STATEMENT: INSERT INTO log_entries2 (client,host,req_t,request,seg,server,timestamp_value) VALUES ('68.67.161.5','"204.13.197.104"','0.000s','"GET /bid?apnx_id=&ip=67.221.130.195&idfa=&dmd5=&daid=&lt=32.90630&lg=-95.57920&u=branovate.com&ua=Mozilla%2F5.0+%28Linux%3B+Android+5.1%3B+XT1254+Build%2FSU3TL-39%3B+wv%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Version%2F4.0+Chrome%2F44.0.2403.90+Mobile+Safari%2F537.36+%5BFB_IAB%2FFB4A%3BFBAV%2F39.0.0.36.238%3B%5D&ap=&c=1&dmdl=&dmk= HTTP/1.1"','samba_info_has_geo','','2015-08-02T20:24:30.482000112') was aborted. Call getNextException to see the cause. 

It looks like Spark is sending the text "was aborted. Call getNextException..." to PostgreSQL as if it were SQL, which triggers this specific error. That seems like a legitimate Spark bug. The second question is why Spark aborted the batch in the first place.

So, AFAIK, I can't call getNextException, because I'm not using JDBC directly but going through Spark.
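
One workaround (a minimal sketch, not from the original post) is to bypass DataFrameWriter.jdbc and write each partition with plain JDBC yourself, so the chained exception can be unwrapped. The URL and credentials are the ones from the snippet above; the two-column INSERT and the getString positions are hypothetical abbreviations of the real table:

import java.sql.{BatchUpdateException, DriverManager}

dataFrame.foreachPartition { rows =>
  // Open one connection per partition, mirroring what Spark's JDBC writer does
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://ip_address/database_name", "my_user", "my_password")
  try {
    // Hypothetical abbreviated column list; the real table has more columns
    val stmt = conn.prepareStatement("INSERT INTO my_table (client, host) VALUES (?, ?)")
    rows.foreach { row =>
      stmt.setString(1, row.getString(0))
      stmt.setString(2, row.getString(1))
      stmt.addBatch()
    }
    stmt.executeBatch()
  } catch {
    case e: BatchUpdateException =>
      // getNextException carries the real server-side error message
      throw Option(e.getNextException).getOrElse(e)
  } finally {
    conn.close()
  }
}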

FYI, this is Spark 1.6.1 and Scala 2.11.

The items in the VALUES clause are quoted in a strange way, e.g. VALUES('"2016-04-27"', ...), when the server needs to interpret them as dates. – wildplasser

So yes, there is an extra-quoting problem in the "request" and "host" text fields that should be cleaned up, but it doesn't cause any error. The one date field is currently treated as a string and doesn't have the extra-quoting problem. Making the date field a proper PostgreSQL date type instead of a string would probably be better, but that isn't causing the problem either. – clay

For strings, embedded quotes are allowed. For date, time, and timestamp values, the quotes make them unparsable. – wildplasser
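
A possible cleanup along the lines of these comments (a sketch, assuming the column names shown in the server log above) is to strip the stray double quotes from the text fields and cast timestamp_value to a real timestamp before the JDBC write, so PostgreSQL receives typed values instead of quoted strings:

import org.apache.spark.sql.functions.{col, regexp_replace}
import org.apache.spark.sql.types.TimestampType

// Remove the embedded double quotes from the over-quoted text fields, then
// cast the string timestamp so it maps to a PostgreSQL timestamp column
val cleanedDf = dataFrame
  .withColumn("host", regexp_replace(col("host"), "\"", ""))
  .withColumn("request", regexp_replace(col("request"), "\"", ""))
  .withColumn("timestamp_value", col("timestamp_value").cast(TimestampType))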

Answer

In case anyone else searches and hits this: my database server (running in a VM) hit its disk space limit. Spark seemed to get confused by that error, failed to log the real error, triggered a different internal error, and logged the result of that instead. Technically, this is probably an internal Spark bug in its response to an uncommon database disk-full error.