I'm trying to set up Hadoop with a single-node cluster (pseudo-distributed), using the Apache guide to do so. Now I'm trying to run a MapReduce job, using the example it provides: bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha3.jar grep input output 'dfs[a-z]+'
However, Hadoop MapReduce cannot connect to the ResourceManager:
[email protected]:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha3.jar grep input output 'dfs[a-z]+'
xxxx-xx-xx xx:xx:xx,xxx INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
xxxx-xx-xx xx:xx:xx,xxx WARN ipc.Client: Failed to connect to server: 0.0.0.0/0.0.0.0:8032: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
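As a quick sanity check: "Connection refused" suggests nothing is listening on port 8032 at all. A minimal way to confirm, assuming a Linux host with ss available:

ss -tlnp | grep 8032

No output here would mean no process is bound to the ResourceManager port.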
Searching for this problem online, other people who have it seem to hit it with YARN but not with MapReduce. My hdfs-site.xml is the same as the one mentioned in the guide:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
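For reference, the pseudo-distributed guide also sets the default filesystem in core-site.xml along these lines (a minimal sketch, assuming HDFS on the guide's default port 9000):

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>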
I ran jps, but I'm not sure what I should be looking for here:
[email protected]:/usr/local/hadoop$ jps
9860 DataNode
10075 SecondaryNameNode
9708 NameNode
11021 Jps
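Notably, only the HDFS daemons (NameNode, DataNode, SecondaryNameNode) are listed; there is no ResourceManager or NodeManager. A minimal sketch of starting the YARN daemons and re-checking, assuming the standard Hadoop 3 layout under /usr/local/hadoop:

sbin/start-yarn.sh
jps

If the ResourceManager still doesn't show up in jps afterwards, its log should say why.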
Any help and direction is appreciated.
EDIT: I looked at hadoop-hadoop-resourcemanager-hadoop.log and found this:
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
xxxx-xx-xx xx:xx:xx,xxx INFO org.eclipse.jetty.util.log: Logging initialized @7307ms
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
xxxx-xx-xx xx:xx:xx,xxx FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
...
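From searching around, an ExceptionInInitializerError inside com.google.inject.internal.cglib seems to come up when Hadoop 3.0.0-alpha runs on Java 9 or newer, whose module system breaks the cglib bytecode generation bundled with Guice; the java.base/ frames in the first stack trace also hint at Java 9+. A minimal check, plus a sketch of pinning Hadoop to a Java 8 install (the JDK path below is an assumption, adjust to your system):

java -version

# in etc/hadoop/hadoop-env.sh; the path below is an assumed Java 8 location
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64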
EDIT2: Here is my yarn-site.xml, in case it helps:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>
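Since the client above falls back to the default address 0.0.0.0:8032, this yarn-site.xml evidently sets no explicit ResourceManager address. That should be fine on a single node, but for reference, a sketch of pinning it explicitly, assuming the ResourceManager runs on this same machine:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
</property>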
Running the jps command shows that the ResourceManager and NodeManager are not starting. Could you check the ResourceManager logs to see the exact reason? –
I've set up a single-node cluster using the instructions from [here](http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/), and it has always worked. Give it a try. –
Isn't that for Hadoop 1, though? I'm using Hadoop 3. – user2361174