
Hi, I am using Hadoop and HBase. When I try to start Hadoop it starts fine, but when I try to start HBase an exception shows up in its log file: the connection to Hadoop on port 54310 of localhost is refused ("HBase connection refused"). The log is given below:

Mon Apr 9 12:28:15 PKT 2012 Starting master on hbase 
ulimit -n 1024 
2012-04-09 12:28:17,685 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000 
2012-04-09 12:28:18,180 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting 
2012-04-09 12:28:18,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting 
2012-04-09 12:28:18,197 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting 
2012-04-09 12:28:18,200 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting 
2012-04-09 12:28:18,202 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting 
2012-04-09 12:28:18,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting 
2012-04-09 12:28:18,210 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting 
2012-04-09 12:28:18,278 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting 
2012-04-09 12:28:18,279 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting 
2012-04-09 12:28:18,284 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting 
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting 
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting 
2012-04-09 12:28:18,369 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hbase.com.com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_20 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc. 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-openjdk/jre 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/com/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-openjdk/lib/tools.jar:/opt/com/hbase-0.90.4/bin/..:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4.jar:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/opt/com/hbase-0.90.4/bin/../lib/activation-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/asm-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-configuration-1.6.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/opt/com/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/guava-r06.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-core-0.20.205.0.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/opt/com/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/opt/com/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/opt/com/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/opt/com/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/opt/com/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client:/usr/lib/jvm/java-6-openjdk/jre/lib/i386:/usr/lib/jvm/java-6-openjdk/jre/../lib/i386:/usr/java/packages/lib/i386:/usr/lib/jni:/lib:/usr/lib 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=i386 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-40-generic 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/com/hbase-0.90.4/bin 
2012-04-09 12:28:18,372 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=master:60000 
2012-04-09 12:28:18,436 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181 
2012-04-09 12:28:18,484 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session 
2012-04-09 12:28:18,676 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1369600cac10000, negotiated timeout = 180000 
2012-04-09 12:28:18,740 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hbase.com.com:60000 
2012-04-09 12:28:18,803 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo 
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo 
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized 
2012-04-09 12:28:18,940 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hbase.com.com:60000 
2012-04-09 12:28:21,342 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 0 time(s). 
2012-04-09 12:28:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 1 time(s). 
2012-04-09 12:28:23,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 2 time(s). 
2012-04-09 12:28:24,345 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 3 time(s). 
2012-04-09 12:28:25,346 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 4 time(s). 
2012-04-09 12:28:26,347 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 5 time(s). 
2012-04-09 12:28:27,348 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 6 time(s). 
2012-04-09 12:28:28,349 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 7 time(s). 
2012-04-09 12:28:29,350 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 8 time(s). 
2012-04-09 12:28:30,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 9 time(s). 
2012-04-09 12:28:30,356 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. 
java.net.ConnectException: Call to hbase/192.168.15.20:54310 failed on connection exception: java.net.ConnectException: Connection refused 
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1071) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) 
    at $Proxy6.getProtocolVersion(Unknown Source) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379) 
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187) 
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81) 
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346) 
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:604) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560) 
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1046) 
    ... 17 more 
2012-04-09 12:28:30,361 INFO org.apache.hadoop.hbase.master.HMaster: Aborting 
2012-04-09 12:28:30,361 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads 
2012-04-09 12:28:30,361 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000 
2012-04-09 12:28:30,369 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder 
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down 
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1369600cac10000 closed 
2012-04-09 12:28:30,450 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting 
Mon Apr 9 12:28:40 PKT 2012 Stopping hbase (via master) 

(Hadoop conf) core-site.xml

<?xml version="1.0"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>/hadoop/tmp</value> 
</property>
<property>
<name>fs.default.name</name> 
<value>hdfs://localhost:54310</value> 
</property> 
</configuration> 

hdfs-site.xml

<?xml version="1.0"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
</property> 
<property> 
<name>dfs.permissions</name> 
<value>false</value> 
</property> 
</configuration> 

mapred-site.xml

<configuration> 
<property> 
<name>mapred.job.tracker</name> 
<value>localhost:54311</value> 
</property> 
</configuration> 

(HBase conf) hbase-site.xml

<configuration> 
<property> 
<name>hbase.cluster.distributed</name> 
<value>true</value> 
</property> 
<property> 
<name>hbase.rootdir</name> 
<value>hdfs://localhost:54310/hbase</value> 
</property> 
<!--added--> 
<property> 
<name>hbase.master</name> 
<value>127.0.0.1:60000</value> 
<description>The host and port that the HBase master runs at. 
</description> 
</property> 
</configuration> 

Have you checked for a misconfiguration? – 2012-04-09 11:13:41


Yes, I have already checked my iptables, Firestarter, etc. I think it is not a port problem; it is probably a misconfiguration. – khan 2012-04-10 11:59:30


Can you post your config files? I think HBase somehow cannot connect to HDFS... possibly the namenode is not running. Seeing the config files will help. Also check the namenode logs and everything yourself. – 2012-04-10 12:38:09

Answers

3

Try this:

Comment out the 127.0.1.1 line in your /etc/hosts file, then put your IP and machine name on a new line.
If you want to use localhost, make sure "127.0.0.1 localhost" is present in your hosts file, then replace every occurrence of the IP in the config files with localhost.
If you want to use the IP rather than localhost, make sure the IP and its equivalent hostname are in your hosts file, and replace every occurrence of localhost in the config files with your IP.

Also check whether a firewall is blocking the port.

In general, NameNode-related problems like this come down to host or IP misconfiguration.
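
For illustration, a minimal /etc/hosts along those lines might look like this (the 192.168.15.20 address and the hbase / hbase.com.com names are taken from the log above; substitute your own):

127.0.0.1       localhost
# 127.0.1.1     hbase           <- commented out, as suggested above
192.168.15.20   hbase hbase.com.com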


Exactly. I tried 'telnet localhost 60000' and it did not work, but 'telnet 127.0.0.1 60000' worked fine. – 2014-09-24 07:22:59

1

Try looking at your /etc/hosts file and/or assigning localhost to 127.0.0.1. In your example it is connecting to 192.168.15.20:54310, not 127.0.0.1:54310.
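
A quick way to see what localhost resolves to and whether anything is listening on the NameNode port (a sketch; assumes getent and telnet are available):

getent hosts localhost     # should print 127.0.0.1   localhost
getent hosts hbase         # shows which address the hostname resolves to
telnet 127.0.0.1 54310     # "Connection refused" here means no NameNode is listening on that port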


Thanks, but by mistake I uploaded my old log. Anyway, the problem still occurs on localhost, i.e. 127.0.0.1:54310. Moving forward, I finally found the issue: when I start Hadoop, all of its services such as TaskTracker, JobTracker, DataNode and SecondaryNameNode are running except the NameNode. So HBase cannot find Hadoop because the namenode is down. Please guide me on why this is happening. – khan 2012-04-10 12:49:42
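
One way to confirm which daemons are actually up and why the NameNode died (a sketch; the log path assumes the default Hadoop log directory):

jps                                                    # should list NameNode along with the other daemons
tail -100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log    # the last exception explains why the NameNode failed to start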


Try posting the files from your hadoop/conf and hbase/conf directories here so that I can check them. – abatyuk 2012-04-11 07:13:40


I have added them. – khan 2012-04-12 13:09:48

0

First check that the hbase.rootdir property in hbase-site.xml is trying to connect to the same port that is defined as fs.default.name in Hadoop's core-site.xml.
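
With the configs posted in the question, that means the two values must agree on host and port, e.g. (a sketch, not your exact files):

core-site.xml:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
</property>

hbase-site.xml:
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>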

Is hbase.rootdir set to a location under /tmp/hadoop? (That is fragile.) Change it so it points to where your HDFS actually lives.

First, try http://localhost:50070 and check what it shows for NameNode: --IP--:--port--. Tell me that port.
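
If that page does not load at all, the NameNode is down, which matches the connection-refused errors in the log; a quick command-line check (a sketch):

curl -sI http://localhost:50070/     # no response means the NameNode web UI (and the NameNode itself) is not running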

0

Take a look at the java.io.FileNotFoundException: /hadoop/tmp/dfs/name/current/VERSION (Permission denied)

So, first - please check what you have set for hbase.rootdir, i.e. whether it points to HDFS or to the local filesystem. My example (pseudo-distributed mode with localhost):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:54310/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>127.0.0.1:60000</value>
  </property>
</configuration>

Next, looking at your logs, it seems most likely that you are running against the local filesystem and you do not have read/write access to the directory where HBase stores its data - check with

mcbatyuk:/ bam$ ls -l / | grep hadoop 
drwxr-xr-x 3 bam wheel  102 Feb 29 21:34 hadoop 

If your hbase.rootdir is on HDFS, the permissions appear to be broken, so you need to fix them with

# hadoop fs -chmod -R MODE /hadoop/ 

or else change the dfs.permissions property to false in your $HADOOP_HOME/conf/hdfs-site.xml.
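
For reference, that property would look like this (note that the hdfs-site.xml posted in the question already sets it to false):

<property>
<name>dfs.permissions</name>
<value>false</value>
</property>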


@khan So basically, as you mention in the comment (http://stackoverflow.com/a/10106233/1053990), your hadoop tmp directory is not accessible. First change the permissions (e.g. 'sudo chmod -R a+rw /hadoop'), then format the namenode (hadoop namenode -format). – abatyuk 2012-04-11 17:01:33


Yes - I have changed the permissions but still no luck. – khan 2012-04-12 13:10:24

0

Instead of using the temp directory, configure "dfs.name.dir" in hdfs-site.xml to point at a directory you have read/write permission for. Then format the namenode (the command is "hadoop namenode -format") and start it. Once that is done, try starting HBase.
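
A sketch of what that could look like (the /home/com/hdfs/name path is only an illustration; any directory the hadoop user can read and write will do):

<property>
<name>dfs.name.dir</name>
<value>/home/com/hdfs/name</value>
</property>

Then format and restart (note that formatting erases any existing HDFS metadata):

hadoop namenode -format
start-dfs.sh
start-hbase.sh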