Spark metrics error — CsvReporter: Error writing to jvm.PS-MarkSweep.count

I have configured Spark to run on the two nodes of the HDFS cluster that hosts my input files. I want to dump all of the stats files produced via metrics.properties either to HDFS or to a local directory on each node.
Here is the stats-location setting in my metrics.properties:
*.sink.csv.directory=hdfs://ip:port/user/spark_stats/
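For context, the surrounding CSV sink section looks like the following sketch. The property keys mirror the ones in Spark's metrics.properties.template; the period and unit values here are illustrative, not my exact settings:

```
# Enable the CSV sink for all instances (keys as in metrics.properties.template;
# period/unit values below are illustrative).
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
*.sink.csv.period=10
*.sink.csv.unit=seconds
*.sink.csv.directory=/tmp/spark_stats/
```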
I also tried creating a temporary local directory on each node and configuring metrics.properties as follows:
*.sink.csv.directory=/tmp/spark_stats/
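Since the exception is a plain "No such file or directory", the directory has to exist beforehand on every node that runs a Spark process; the CSV reporter writes through the local filesystem and does not create the directory itself. A minimal sketch (the host names in the comment are hypothetical):

```shell
# Create the sink directory locally; CsvReporter will not create it for you.
mkdir -p /tmp/spark_stats

# Repeat on every node, e.g. (hypothetical host list):
# for host in node1 node2; do ssh "$host" 'mkdir -p /tmp/spark_stats'; done
```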
Both approaches produce the following error:
16/03/02 15:41:49 WARN CsvReporter: Error writing to jvm.PS-MarkSweep.count
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at com.codahale.metrics.CsvReporter.report(CsvReporter.java:241)
at com.codahale.metrics.CsvReporter.reportGauge(CsvReporter.java:234)
at com.codahale.metrics.CsvReporter.report(CsvReporter.java:150)
at com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162)
at com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/03/02 15:41:49 WARN CsvReporter: Error writing to jvm.PS-MarkSweep.count
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
....
My application still runs and completes fine, but the Spark log files show errors while writing the stats files. Has anyone run into this problem?
Follow-up: after looking more closely at the error messages, all of the IO errors come from writing the master's JVM metrics. If I configure the JVM metrics to be dumped only for the worker, driver, and executors, the errors no longer appear.
One fix is to put this line in the metrics.properties file: executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
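The same pattern extends to the other instances. A sketch of the per-instance JvmSource lines, following the commented examples in Spark's metrics.properties.template, which enables the JVM source for the driver, executors, and workers while leaving the master out:

```
# Enable the JVM source per instance instead of with a "*." wildcard,
# so the master never tries to write JVM metrics.
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```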