Spark streaming driver and application work files cleanup

I am running Spark 2.0.2 on a Spark standalone cluster, with the streaming job deployed in cluster deploy mode. The streaming job itself works fine, but there is a problem with the application and driver stderr files created in the work directory under SPARK_HOME: since the streaming job runs continuously, these files only ever grow in size, and I don't know how to control that.
I tried the following solutions, even though they are not exactly about the problem at hand, but neither of them worked:
- Apache Spark does not delete temporary directories
- How to log using log4j to local file system inside a Spark application that runs on YARN?
Can anyone please help me limit the size of these files that are being created?
P.S.: I tried the solution of adding the following line to conf/spark-env.sh and restarting the cluster, but it does not work while a streaming application is running:
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=60 -Dspark.worker.cleanup.appDataTtl=60"
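(For reference, standalone mode also exposes spark.executor.logs.rolling.* properties that roll the executor stderr/stdout files under SPARK_HOME/work. A minimal sketch of passing them at submit time; as far as I can tell these cover only the executor logs, not the driver's stderr:)

spark-submit \
  --conf spark.executor.logs.rolling.strategy=size \
  --conf spark.executor.logs.rolling.maxSize=10485760 \
  --conf spark.executor.logs.rolling.maxRetainedFiles=10 \
  ... (rest of the submit command unchanged)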
EDIT:
@YuvalItzchakov I tried your suggestion as well, but it did not work. The driver's stderr
log is as follows:
Launch Command: "/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java" "-cp" "/mnt/spark2.0.2/conf/:/mnt/spark2.0.2/jars/*" "-Xmx2048M" "-Dspark.eventLog.enabled=true" "-Dspark.eventLog.dir=/mnt/spark2.0.2/JobsLogs" "-Dspark.executor.memory=2g" "-Dspark.deploy.defaultCores=2" "-Dspark.io.compression.codec=snappy" "-Dspark.submit.deployMode=cluster" "-Dspark.shuffle.consolidateFiles=true" "-Dspark.shuffle.compress=true" "-Dspark.app.name=Streamingjob" "-Dspark.kryoserializer.buffer.max=128M" "-Dspark.master=spark://172.16.0.27:7077" "-Dspark.shuffle.spill.compress=true" "-Dspark.serializer=org.apache.spark.serializer.KryoSerializer" "-Dspark.cassandra.input.fetch.size_in_rows=20000" "-Dspark.executor.extraJavaOptions=-Dlog4j.configuration=file:///mnt/spark2.0.2/sparkjars/log4j.xml" "-Dspark.jars=file:/mnt/spark2.0.2/sparkjars/StreamingJob-assembly-0.1.0.jar" "-Dspark.executor.instances=10" "-Dspark.driver.extraJavaOptions=-Dlog4j.configuration=file:///mnt/spark2.0.2/sparkjars/log4j.xml" "-Dspark.driver.memory=2g" "-Dspark.rpc.askTimeout=10" "-Dspark.eventLog.compress=true" "-Dspark.executor.cores=1" "-Dspark.driver.supervise=true" "-Dspark.history.fs.logDirectory=/mnt/spark2.0.2/JobsLogs" "-Dlog4j.configuration=file:///mnt/spark2.0.2/sparkjars/log4j.xml" "org.apache.spark.deploy.worker.DriverWrapper" "spark://[email protected]:34475" "/mnt/spark2.0.2/work/driver-20170210124424-0001/StreamingJob-assembly-0.1.0.jar" "Streamingjob"
========================================
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/02/10 12:44:26 INFO SecurityManager: Changing view acls to: cassuser
17/02/10 12:44:26 INFO SecurityManager: Changing modify acls to: cassuser
17/02/10 12:44:26 INFO SecurityManager: Changing view acls groups to:
17/02/10 12:44:26 INFO SecurityManager: Changing modify acls groups to:
And my log4j.xml
file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd" >
<log4j:configuration>
  <appender name="stdout" class="org.apache.log4j.RollingFileAppender">
    <param name="threshold" value="TRACE"/>
    <param name="File" value="stdout"/>
    <param name="maxFileSize" value="1MB"/>
    <param name="maxBackupIndex" value="10"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"/>
    </layout>
    <filter class="org.apache.log4j.varia.LevelRangeFilter">
      <param name="levelMin" value="ALL" />
      <param name="levelMax" value="OFF" />
    </filter>
  </appender>
  <appender name="stderr" class="org.apache.log4j.RollingFileAppender">
    <param name="threshold" value="WARN"/>
    <param name="File" value="stderr"/>
    <param name="maxFileSize" value="1MB"/>
    <param name="maxBackupIndex" value="10"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"/>
    </layout>
  </appender>
</log4j:configuration>
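(As a side note, a LevelRangeFilter spanning ALL to OFF accepts every event, so the filter on the stdout appender above is effectively a no-op. If the intent was to keep WARN and above out of the stdout file, since those already go to the stderr appender, a narrower range would be needed; a sketch, assuming that intent:)

<filter class="org.apache.log4j.varia.LevelRangeFilter">
  <param name="levelMin" value="TRACE"/>
  <param name="levelMax" value="INFO"/>
</filter>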
Note that I have removed the following root tag from the XML in your answer, because it gave some errors:
<root>
  <appender-ref ref="console"/>
</root>
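(For what it's worth, the removed root element references an appender named console, which this log4j.xml never defines; that alone would make log4j complain at startup. A sketch of a root wired to the two appenders that do exist, offered as an assumption about the intended setup rather than as part of the original answer:)

<root>
  <priority value="INFO"/>
  <appender-ref ref="stdout"/>
  <appender-ref ref="stderr"/>
</root>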
You can provide a custom log4j.xml with a rolling file appender to the workers and the master. –
@YuvalItzchakov Can you please tell me how to do that? –
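(One plausible way to do what the comment suggests, sketched as an assumption rather than a confirmed answer: the standalone master and worker daemons read SPARK_DAEMON_JAVA_OPTS from conf/spark-env.sh, so the custom config can be handed to them the same way the job already hands it to the driver and executors via extraJavaOptions.)

# A sketch, assuming log4j.xml is present at this same path on every node;
# the master and worker JVMs pick this up from conf/spark-env.sh on restart.
export SPARK_DAEMON_JAVA_OPTS="-Dlog4j.configuration=file:///mnt/spark2.0.2/sparkjars/log4j.xml"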