
Spark Streaming - error when uploading Parquet files to S3

I am completely new to the Spark Streaming topic.
In my streaming application I create Parquet files of roughly 2.5 MB each and store them in S3 or in a local directory.

The method I use is the following:

data.write.parquet(destination) 

where "data" is a DataFrame.
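
For context, here is a minimal, self-contained sketch of the kind of job described; the SparkContext setup, the sample data, and the environment-variable credential lookup are illustrative assumptions, not part of the original post:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object ParquetToS3 {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("ParquetToS3").setMaster("local[2]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // s3n:// resolves credentials from the Hadoop configuration
        sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", sys.env("AWS_ACCESS_KEY_ID"))
        sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", sys.env("AWS_SECRET_ACCESS_KEY"))

        // stand-in for one micro-batch produced by the streaming app
        val data = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")

        // Fine with a local path; with an s3n:// path the NativeS3FileSystem
        // first spools output to a local temp file, and on Windows that step
        // trips the native-IO check shown in the stack trace below.
        data.write.parquet("s3n://bucket/directory/filename")

        sc.stop()
      }
    }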

Everything works like a charm when the destination is a local path, but as soon as I point it at S3 with a path like "s3n://bucket/directory/filename" I get the following exception:

15/12/17 10:47:06 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-3,5,main] 
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z 
     at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method) 
     at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557) 
     at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977) 
     at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187) 
     at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174) 
     at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108) 
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285) 
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344) 
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:416) 
     at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:198) 
     at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.newBackupFile(NativeS3FileSystem.java:263) 
     at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.<init>(NativeS3FileSystem.java:245) 
     at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:412) 
     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) 
     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) 
     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) 
     at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:176) 
     at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:160) 
     at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:289) 
     at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262) 
     at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetRelation.scala:94) 
     at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anon$3.newInstance(ParquetRelation.scala:272) 
     at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:234) 
     at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150) 
     at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150) 
     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
     at org.apache.spark.scheduler.Task.run(Task.scala:88) 
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
     at java.lang.Thread.run(Thread.java:745) 

Read operations against the bucket work fine. And despite the error, something does get stored in the bucket: the folders for the given path are created, but in the end, instead of the file itself, there is only another folder-style entry (apparently the "_$folder$" marker objects that NativeS3FileSystem writes).

Technical details:

  • S3 Browser
  • Windows 8.1
  • IntelliJ IDEA CE 14.1.5
  • Spark Streaming application
  • Spark 1.5, Hadoop 2.6.0

Do you have 'winutils.exe' in '%HADOOP_HOME%\bin'? It may or may not be a solution.

Answer


The problem was in the Hadoop libraries. I had to rebuild winutils (winutils.exe) and the native library (hadoop.dll) with Windows SDK 7, then move them to %HADOOP_HOME%\bin and add %HADOOP_HOME%\bin to the Path variable. After rebuilding, the binaries can be found in hadoop-2.7.1-src\hadoop-common-project\hadoop-common\target. For winutils I recommend the Windows-optimized branch: http://svn.apache.org/repos/asf/hadoop/common/branches/branch-trunk-win/
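
A related convenience during development (a sketch, not part of the original answer): instead of editing the system-wide Path, you can point Hadoop at the directory holding winutils.exe before the SparkContext is created, since Hadoop's Shell utility consults the hadoop.home.dir system property before falling back to %HADOOP_HOME%. The C:\hadoop location below is an assumption; adjust it to wherever you placed the rebuilt binaries.

    import org.apache.spark.{SparkConf, SparkContext}

    object HadoopHomeFix {
      def main(args: Array[String]): Unit = {
        // Must run before any Hadoop class initializes; C:\hadoop\bin is an
        // assumed location that has to contain winutils.exe and hadoop.dll.
        System.setProperty("hadoop.home.dir", "C:\\hadoop")

        val sc = new SparkContext(
          new SparkConf().setAppName("HadoopHomeFix").setMaster("local[2]"))
        // ... the streaming job and data.write.parquet(...) as before ...
        sc.stop()
      }
    }

Note that this only fixes the winutils.exe lookup; hadoop.dll must still be loadable via java.library.path, which is why adding %HADOOP_HOME%\bin to Path, as the answer describes, is still needed for the NativeIO call in the stack trace.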
