
I have a Cygnus instance connected to an Orion instance in FILAB; subscriptions and notifications work fine, but I cannot persist the data to cosmos.lab.fi-ware.org. Cygnus returns this error: Error 503: Service unavailable when persisting to HDFS

[ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable)

This is my agent_a.conf file:

cygnusagent.sources = http-source 
cygnusagent.sinks = hdfs-sink 
cygnusagent.channels = hdfs-channel 

#============================================= 
# source configuration 
# channel name where to write the notification events 
cygnusagent.sources.http-source.channels = hdfs-channel 
# source class, must not be changed 
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource 
# listening port the Flume source will use for receiving incoming notifications 
cygnusagent.sources.http-source.port = 5050 
# Flume handler that will parse the notifications, must not be changed 
cygnusagent.sources.http-source.handler = es.tid.fiware.fiwareconnectors.cygnus.handlers.OrionRestHandler 
# URL target 
cygnusagent.sources.http-source.handler.notification_target = /notify 
# Default service (service semantic depends on the persistence sink) 
cygnusagent.sources.http-source.handler.default_service = talky 
# Default service path (service path semantic depends on the persistence sink) 
cygnusagent.sources.http-source.handler.default_service_path = talkykar 
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries) 
cygnusagent.sources.http-source.handler.events_ttl = 10 
# Source interceptors, do not change 
cygnusagent.sources.http-source.interceptors = ts de 
# Timestamp interceptor, do not change 
cygnusagent.sources.http-source.interceptors.ts.type = timestamp 
# Destination extractor interceptor, do not change 
cygnusagent.sources.http-source.interceptors.de.type = es.tid.fiware.fiwareconnectors.cygnus.interceptors.DestinationExtractor$Builder 
# Matching table for the destination extractor interceptor, put the right absolute path to the file if necessary 
# See the doc/design/interceptors document for more details 
cygnusagent.sources.http-source.interceptors.de.matching_table = /usr/cygnus/conf/matching_table.conf 

# ============================================ 
# OrionHDFSSink configuration 
# channel name from where to read notification events 
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel 
# sink class, must not be changed 
cygnusagent.sinks.hdfs-sink.type = es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink 
# Comma-separated list of FQDN/IP address regarding the Cosmos Namenode endpoints 
# If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory 
cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org 
# port of the Cosmos service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs and free choice for inifinty 
cygnusagent.sinks.hdfs-sink.cosmos_port = 14000 
# default username allowed to write in HDFS 
cygnusagent.sinks.hdfs-sink.cosmos_default_username = myuser 
# default password for the default username 
cygnusagent.sinks.hdfs-sink.cosmos_default_password = mypass 
# HDFS backend type (webhdfs, httpfs or infinity) 
cygnusagent.sinks.hdfs-sink.hdfs_api = httpfs 
# how the attributes are stored, either per row either per column (row, column) 
cygnusagent.sinks.hdfs-sink.attr_persistence = row 
# Hive FQDN/IP address of the Hive server 
cygnusagent.sinks.hdfs-sink.hive_host = http://cosmos.lab.fi-ware.org 
# Hive port for Hive external table provisioning 
cygnusagent.sinks.hdfs-sink.hive_port = 10000 
# Kerberos-based authentication enabling 
cygnusagent.sinks.hdfs-sink.krb5_auth = false 
# Kerberos username 
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username 
# Kerberos password 
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx 
# Kerberos login file 
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf 
# Kerberos configuration file 
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf 
#============================================= 

This is the Cygnus log:

2015-05-04 09:05:10,434 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionHDFSSink.persist(OrionHDFSSink.java:315)] [hdfs-sink] Persisting data at OrionHDFSSink. HDFS file (talky/talkykar/room6_room/room6_room.txt), Data ({"recvTimeTs":"1430723069","recvTime":"2015-05-04T09:04:29.819","entityId":"Room6","entityType":"Room","attrName":"temperature","attrType":"float","attrValue":"26.5","attrMd":[]}) 
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1 
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:186)] Connection request: [route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500] 
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:220)] Connection leased: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 1 of 100; total allocated: 1 of 500] 
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection [email protected] closed 
2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.shutdown(DefaultClientConnection.java:154)] Connection [email protected] shut down 
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:272)] Connection [id: 21][route: {}->http://http] can be kept alive for 9223372036854775807 MILLISECONDS 
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.DefaultClientConnection.close(DefaultClientConnection.java:169)] Connection [email protected] closed 
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - org.apache.http.impl.conn.PoolingClientConnectionManager.releaseConnection(PoolingClientConnectionManager.java:278)] Connection released: [id: 21][route: {}->http://http][total kept alive: 0; route allocated: 0 of 100; total allocated: 0 of 500] 
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:191)] The used HDFS endpoint is not active, trying another one (host=http://cosmos.lab.fi-ware.org) 
2015-05-04 09:05:10,436 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - es.tid.fiware.fiwareconnectors.cygnus.sinks.OrionSink.process(OrionSink.java:139)] Persistence error (The talky/talkykar/room6_room directory could not be created in HDFS. HttpFS response: 503 Service unavailable) 

Thanks.

Answer

If you look at this log:

2015-05-04 09:05:10,435 (SinkRunner-PollingRunner-DefaultSinkProcessor) [DEBUG - es.tid.fiware.fiwareconnectors.cygnus.backends.hdfs.HDFSBackendImpl.doHDFSRequest(HDFSBackendImpl.java:255)] HDFS request: PUT http://http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg HTTP/1.1 

you will see that Cygnus is trying to create an HDFS directory using a http://http://cosmos.lab... URL (note the double http://http://).

This happens because you have configured:

cygnusagent.sinks.hdfs-sink.cosmos_host = http://cosmos.lab.fi-ware.org 

instead of:

cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org 

Such a parameter expects a host, not a URL.
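
For reference, this is how the affected lines would look once the scheme prefix is dropped (a minimal sketch based on the configuration above; hive_host has the same issue, and the rest of agent_a.conf stays unchanged):

# host only, no http:// prefix 
cygnusagent.sinks.hdfs-sink.cosmos_host = cosmos.lab.fi-ware.org 
# same for the Hive endpoint 
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fi-ware.org 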

That being said, in future releases we will allow both forms.
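
As a quick sanity check, you can replay by hand the same HttpFS call that Cygnus issues in the log above, using the corrected host (a sketch reusing the mped.mlg username and talky/talkykar path shown in the log; substitute your own Cosmos account):

curl -i -X PUT "http://cosmos.lab.fi-ware.org:14000/webhdfs/v1/user/mped.mlg/talky/talkykar/room6_room?op=mkdirs&user.name=mped.mlg"

If the endpoint is reachable, HttpFS should answer with 200 OK and a small JSON body such as {"boolean":true} rather than 503 Service unavailable.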


It works! Thank you!! – AGonzalez


Great! Then, would you mind upvoting the answer and clicking the accepted-answer icon? Thanks! – frb
