2012-07-06
4

I have installed the Cloudera CDH4 release and am trying to run a MapReduce job on it. I am getting the following error: cdh4 hadoop-hbase PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.FileNotFoundException

2012-07-09 15:41:16 ZooKeeperSaslClient [INFO] Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration. 
2012-07-09 15:41:16 ClientCnxn [INFO] Socket connection established to Cloudera/192.168.0.102:2181, initiating session 
2012-07-09 15:41:16 RecoverableZooKeeper [WARN] Possibly transient ZooKeeper exception: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master 
2012-07-09 15:41:16 RetryCounter [INFO] The 1 times to retry after sleeping 2000 ms 
2012-07-09 15:41:16 ClientCnxn [INFO] Session establishment complete on server Cloudera/192.168.0.102:2181, sessionid = 0x1386b0b44da000b, negotiated timeout = 60000 
2012-07-09 15:41:18 TableOutputFormat [INFO] Created table instance for exact_custodian 
2012-07-09 15:41:18 NativeCodeLoader [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
2012-07-09 15:41:18 JobSubmitter [WARN] Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
2012-07-09 15:41:18 JobSubmitter [INFO] Cleaning up the staging area file:/tmp/hadoop-hdfs/mapred/staging/hdfs48876562/.staging/job_local_0001 
2012-07-09 15:41:18 UserGroupInformation [ERROR] PriviledgedActionException as:hdfs (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /home/cloudera/yogesh/lib/hbase.jar 
Exception in thread "main" java.io.FileNotFoundException: File does not exist: /home/cloudera/yogesh/lib/hbase.jar 
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:736) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:208) 
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:71) 
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:246) 
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:284) 
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355) 
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1226) 
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1223) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:396) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232) 
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1223) 
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1244) 
    at 

I am able to run the sample programs given in hadoop-mapreduce-examples-2.0.0-cdh4.0.0.jar. But when my job is submitted to the jobtracker, I get this error. It looks like it is trying to access the local file system again (even though I have set all the required libraries for job execution in the distributed cache, it still tries to access a local directory). Is this problem related to user permissions?

I) Cloudera:~ # hadoop fs -ls hdfs://<MyClusterIP>:8020/ shows:

Found 8 items 
drwxr-xr-x - hbase hbase    0 2012-07-04 17:58 hdfs://<MyClusterIP>:8020/hbase 
drwxr-xr-x - hdfs supergroup   0 2012-07-05 16:21 hdfs://<MyClusterIP>:8020/input 
drwxr-xr-x - hdfs supergroup   0 2012-07-05 16:21 hdfs://<MyClusterIP>:8020/output 
drwxr-xr-x - hdfs supergroup   0 2012-07-06 16:03 hdfs://<MyClusterIP>:8020/tools-lib 
drwxr-xr-x - hdfs supergroup   0 2012-06-26 14:02 hdfs://<MyClusterIP>:8020/test 
drwxrwxrwt - hdfs supergroup   0 2012-06-12 16:13 hdfs://<MyClusterIP>:8020/tmp 
drwxr-xr-x - hdfs supergroup   0 2012-07-06 15:58 hdfs://<MyClusterIP>:8020/user 

II) No results for the following:

[email protected]:/etc/hadoop/conf> find . -name '**' | xargs grep "default.name" 
[email protected]:/etc/hbase/conf> find . -name '**' | xargs grep "default.name" 

Instead, since we are using the new API, I believe the relevant key is fs.defaultFS → hdfs://Cloudera:8020, which I have set correctly.

Whereas for "fs.default.name" I do have entries on my Hadoop 0.20.2 cluster (a non-Cloudera cluster):

[email protected]:~/hadoop/conf> find . -name '**' | xargs grep "default.name" 
./core-default.xml: <name>fs.default.name</name> 
./core-site.xml: <name>fs.default.name</name> 

I think the CDH4 default configuration should add this entry in the respective directories (if this is indeed a bug).

The command I use to run my program:

[email protected]:/home/cloudera/yogesh/lib> java -classpath hbase-tools.jar:hbase.jar:slf4j-log4j12-1.6.1.jar:slf4j-api-1.6.1.jar:protobuf-java-2.4.0a.jar:hadoop-common-2.0.0-cdh4.0.0.jar:hadoop-hdfs-2.0.0-cdh4.0.0.jar:hadoop-mapreduce-client-common-2.0.0-cdh4.0.0.jar:hadoop-mapreduce-client-core-2.0.0-cdh4.0.0.jar:log4j-1.2.16.jar:commons-logging-1.0.4.jar:commons-lang-2.5.jar:commons-lang3-3.1.jar:commons-cli-1.2.jar:commons-configuration-1.6.jar:guava-11.0.2.jar:google-collect-1.0-rc2.jar:google-collect-1.0-rc1.jar:hadoop-auth-2.0.0-cdh4.0.0.jar:hadoop-auth.jar:jackson.jar:avro-1.5.4.jar:hadoop-yarn-common-2.0.0-cdh4.0.0.jar:hadoop-yarn-api-2.0.0-cdh4.0.0.jar:hadoop-yarn-server-common-2.0.0-cdh4.0.0.jar:commons-httpclient-3.0.1.jar:commons-io-1.4.jar:zookeeper-3.3.2.jar:jdom.jar:joda-time-1.5.2.jar com.hbase.xyz.MyClassName
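The stack trace above shows DistributedFileSystem.getFileStatus failing, i.e. the job submitter resolves the library path /home/cloudera/yogesh/lib/hbase.jar against HDFS, not the local disk. A hedged sketch of one way around this (the HDFS target directory /tools-lib comes from the listing in the question; my-job.jar is a hypothetical name for the job jar; the -libjars route assumes the driver implements Tool, which the GenericOptionsParser warning in the log also suggests):

```shell
# Stage the local dependency jars into HDFS so the distributed cache can find them
hadoop fs -mkdir -p hdfs://Cloudera:8020/tools-lib
hadoop fs -put /home/cloudera/yogesh/lib/hbase.jar hdfs://Cloudera:8020/tools-lib/

# Submit via the hadoop launcher so the cluster configuration is picked up,
# and reference the HDFS copy of the jar
hadoop jar my-job.jar com.hbase.xyz.MyClassName \
  -libjars hdfs://Cloudera:8020/tools-lib/hbase.jar
```

Note that launching with plain java -classpath, as above, does not pick up /etc/hadoop/conf unless that directory is on the classpath, which is one way to end up with the local job runner.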

+0

Can you post the command line you use to submit the job, or any code that references this file? Does the file exist on the local system? – 2012-07-06 14:57:12

+0

Hi Chris, thanks for your reply. I have updated the question, please see above. – Yogesh 2012-07-09 08:01:24

+1

job_local_0001 indicates that mapred-site.xml is not set up correctly; it should be set there when using new Configuration(). See http://hbase.apache.org/book.html#trouble.mapreduce.local – Yogesh 2012-07-13 10:32:37

Answers

2

Debugging procedure: try running a simple Hadoop shell command.

hadoop fs -ls /

If this shows HDFS files, then your configuration is correct. If not, the configuration is missing; when that happens, hadoop shell commands like -ls refer to the local file system instead of the Hadoop file system. This can occur when Hadoop is started via Cloudera Manager, which does not explicitly store the configuration in the conf directory.

Check whether the Hadoop file system is shown by the following command (change the host and port as needed):

hadoop fs -ls hdfs://host:8020/

If the local file system is shown when you pass the path /, then you should set up the configuration files hdfs-site.xml and mapred-site.xml in your configuration directory. Also, core-site.xml should have the fs.default.name entry pointing to hdfs://host:port/. In my case the directory is /etc/hadoop/conf.
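As a concrete sketch of that entry (the host and port are the ones used in the question; on CDH4 the new key fs.defaultFS is preferred, with fs.default.name as the deprecated 0.20-era name):

```xml
<!-- core-site.xml: default filesystem for all Hadoop clients -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://Cloudera:8020</value>
</property>
```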

See: http://hadoop.apache.org/common/docs/r0.20.2/core-default.html

See if this resolves your issue.

+0

Ashish, please find the results for your questions in the parent question. – Yogesh 2012-07-09 07:10:55

+0

This is how I create a Configuration object: Configuration conf = new Configuration(false); conf.addResource(new Path(ConfigReader.HBASE_ROOT_DIRECTORY + "/conf/core-site.xml")); conf.addResource(new Path(ConfigReader.HBASE_ROOT_DIRECTORY + "/conf/hdfs-site.xml")); conf.addResource(new Path(ConfigReader.HADOOP_ROOT_DIRECTORY + "/conf/core-site.xml")); conf.addResource(new Path(ConfigReader.HADOOP_ROOT_DIRECTORY + "/conf/hdfs-site.xml")); conf.addResource(new Path(ConfigReader.HBASE_ROOT_DIRECTORY + "/conf/hbase-site.xml"));
Is this the correct approach? – Yogesh 2012-07-09 11:32:25

+0

I think the mapred settings are not configured correctly (the mapred-site.xml file is missing). That is why it tries to run the job locally by default. Either we need to configure YARN, or the configuration needs to be set correctly so that MRv1 jobs run on the jobtracker. – Yogesh 2012-07-11 15:38:48

4

I hit the same problem in 2.0.0-cdh4.1.3 staging when running an MR job. It was fixed by adding the following property to mapred-site.xml:

<property> 
<name>mapreduce.framework.name</name> 
<value>yarn</value> 
</property> 
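In context, that property sits inside the configuration root of mapred-site.xml; a minimal sketch of the whole file:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```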

For running Hive jobs afterwards:

export HIVE_USER=yarn 