2012-04-24 106 views

I just successfully installed Hadoop on a small cluster. Now I am trying to run the wordcount example, but I get this error (job token file not found when running the Hadoop wordcount example):

hdfs://localhost:54310/user/myname/test11 
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1 
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003 
12/04/24 13:26:46 INFO mapred.JobClient: map 0% reduce 0% 
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED 
Error initializing attempt_201204241257_0003_m_000002_0: 
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken 
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135) 
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165) 
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179) 
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116) 
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404) 
    at java.lang.Thread.run(Thread.java:722) 
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist. 
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397) 
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251) 
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125) 
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283) 
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427) 
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129) 
    ... 5 more 

Any help?


Does the path '/tmp/mapred/local' exist, and does the user that the Hadoop services run under have permission to write to this directory? – 2012-04-24 18:30:59


IIRC you have to create that directory, or be a user in a group with those permissions. Otherwise you will get a FileNotFoundException. – apesa 2012-04-24 21:50:59
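The checks the comments suggest can be run directly. A minimal sketch, not from the original thread: run it as the user the Hadoop daemons run as, using the path from the stack trace above (the `chown` may need `sudo`):

```shell
# Check that the TaskTracker's local directory from the stack trace
# exists and is writable by the current user; create it if missing.
DIR=/tmp/mapred/local
mkdir -p "$DIR"                                        # create the hierarchy if absent
chown -R "$(whoami)" /tmp/mapred 2>/dev/null || true   # may require sudo
[ -w "$DIR" ] && echo "writable: $DIR"
```

If the final line prints nothing, the daemon user still cannot write there, which matches the FileNotFoundException in the trace.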

Answers


I just got past this same error. Recursively setting permissions on my Hadoop directory did not help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the place where I had specified the temporary directory (hadoop.tmp.dir in the XML). After letting Hadoop create its own temporary directory, I ran without errors.


It is better to create your own temporary directory.

<configuration> 
  <property> 
    <name>hadoop.tmp.dir</name> 
    <value>/home/unmesha/mytmpfolder/tmp</value> 
    <description>A base for other temporary directories.</description> 
  </property> 
..... 

and grant permissions:

$ chmod 750 /home/unmesha/mytmpfolder/tmp 
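Put together, the steps above are roughly the following sketch (the path mirrors this answer's example value; adjust it to your own setup, and restart the daemons afterwards so the new hadoop.tmp.dir takes effect):

```shell
# Create the custom temp directory named in hadoop.tmp.dir and
# restrict it to owner rwx, group r-x, others none.
TMPBASE="$HOME/mytmpfolder/tmp"   # example path matching the XML above
mkdir -p "$TMPBASE"
chmod 750 "$TMPBASE"
# then restart so the change is picked up, e.g.:
# stop-all.sh && start-all.sh
```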

Check this for core-site.xml configuration.