2016-11-07

I am using the command line to put a CSV file from my local system into HDFS, with the following command:

C:\Hadoop\hadoop-2.7.3\bin>hdfs dfs -put c:\hdfs\stock.csv /user/XYZ 

The error I get in the output is:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(II[BI[BIILjava/lang/String;JZ)V
     at org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native Method)
     at org.apache.hadoop.util.NativeCrc32.calculateChunkedSumsByteArray(NativeCrc32.java:86)
     at org.apache.hadoop.util.DataChecksum.calculateChunkedSums(DataChecksum.java:430)
     at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:202)
     at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:163)
     at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:144)
     at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2250)
     at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2232)
     at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
     at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
     at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
     at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:466)
     at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:391)
     at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:328)
     at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:263)
     at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:248)
     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
     at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
     at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:243)
     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
     at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:220)
     at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:267)
     at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
     at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
     at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
     at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)

Can someone help me fix this error, or is the command line not the right way to load a file into HDFS?

Answer


Use forward slashes (/) in the local path; that is the correct form for the Windows command prompt here. See the example below:

C:\Hadoop\hadoop-2.7.3\bin>hdfs dfs -put c:/hdfs/stock.csv /user/XYZ
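If the UnsatisfiedLinkError persists after fixing the slashes, a NativeCrc32 failure on Windows usually means Hadoop cannot load its native libraries (hadoop.dll / winutils.exe). A minimal diagnostic sketch, assuming the same hadoop-2.7.3 install from the question is on the PATH:

```shell
:: List which native libraries this Hadoop build can actually load.
:: "checknative -a" exits non-zero if any are missing.
hadoop checknative -a

:: Retry the upload with forward slashes in the local path.
hdfs dfs -put c:/hdfs/stock.csv /user/XYZ

:: Confirm the file arrived in HDFS.
hdfs dfs -ls /user/XYZ
```

If `checknative` reports the native hadoop library as unavailable, placing a hadoop.dll built for your Hadoop version in `%HADOOP_HOME%\bin` (and on the PATH) is the usual remedy for this class of error.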