I have one Elasticsearch server fed by 3 to 4 Logstash servers, and Elasticsearch is not receiving data from Logstash. The cluster health looks like this:
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 76,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 297,
  "active_shards" : 297,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 297,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
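A side note on the health output above: with number_of_data_nodes at 1, the 297 unassigned shards are almost certainly replica shards that have no second data node to be allocated to, which is why the status is yellow. A sketch of how to confirm and optionally silence this, assuming the 10.0.0.89 host from the config and an Elasticsearch 1.x-style REST API:

```shell
# Re-check cluster health (presumably how the JSON above was produced).
curl -XGET 'http://10.0.0.89:9200/_cluster/health?pretty'

# With a single data node, replicas can never be allocated; dropping them
# to 0 turns the cluster green. Only do this if you accept having no replicas.
curl -XPUT 'http://10.0.0.89:9200/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'
```

This does not explain the 429 errors by itself, but it rules out shard allocation as the cause of the yellow status.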
The Elasticsearch machine is a dual-core box with 30 GB of RAM. It is receiving logs and has about 30 inputs in total (across all the Logstash servers combined), but for most inputs the logs go missing: I get 30-40 minute gaps with no logs, and on the Logstash servers I get warnings like retrying-failed-action-with-response-code-429. At the same time, the Elasticsearch server shows very high memory usage, and the heartbeat entries in the log files come in at a very high rate. I have checked my grok patterns and they are correct. Here is one of my conf files:
input {
  exec {
    command => "/usr/bin/touch /var/run/logstash-monit/input.touch && /bin/echo OK."
    interval => 60
    type => "heartbeat"
  }
  file {
    type => 'seller-forever'
    path => '/var/log/seller/seller.log'
    sincedb_path => "/opt/logstash/sincedb-access1"
  }
}
filter {
  grok {
    type => "seller-forever"
    match => [ "message", "%{GREEDYDATA:logline} %{GREEDYDATA:extra_fields}" ]
  }
  geoip {
    add_tag => [ "GeoIP" ]
    database => "/opt/logstash/GeoLiteCity.dat"
    source => "clientip"
  }
  if [useragent] != "-" and [useragent] != "" {
    useragent {
      add_tag => [ "UA" ]
      source => "useragent"
    }
  }
  if [bytes] == 0 { mutate { remove => "[bytes]" } }
  if [geoip][city_name] == "" { mutate { remove => "[geoip][city_name]" } }
  if [geoip][continent_code] == "" { mutate { remove => "[geoip][continent_code]" } }
  if [geoip][country_code2] == "" { mutate { remove => "[geoip][country_code2]" } }
  if [geoip][country_code3] == "" { mutate { remove => "[geoip][country_code3]" } }
  if [geoip][country_name] == "" { mutate { remove => "[geoip][country_name]" } }
  if [geoip][latitude] == "" { mutate { remove => "[geoip][latitude]" } }
  if [geoip][longitude] == "" { mutate { remove => "[geoip][longitude]" } }
  if [geoip][postal_code] == "" { mutate { remove => "[geoip][postal_code]" } }
  if [geoip][region_name] == "" { mutate { remove => "[geoip][region_name]" } }
  if [geoip][time_zone] == "" { mutate { remove => "[geoip][time_zone]" } }
  if [urlquery] == "" { mutate { remove => "urlquery" } }
  if "apache_json" in [tags] {
    if [method] =~ "(HEAD|OPTIONS)" { mutate { remove => "method" } }
    if [useragent] == "-" { mutate { remove => "useragent" } }
    if [referer] == "-" { mutate { remove => "referer" } }
  }
  if "UA" in [tags] {
    if [device] == "Other" { mutate { remove => "device" } }
    if [name] == "Other" { mutate { remove => "name" } }
    if [os] == "Other" { mutate { remove => "os" } }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    type => "seller-forever"
    index => "seller-forever"
    host => "10.0.0.89"
    protocol => "node"
  }
}
I am using Kibana for visualization. How should I fix this problem, and what should I check? I cannot work out what to do; any help would be appreciated.
Check Logstash's error logs in /var/log/logstash/. –
@NishantSingh There is nothing in logstash.log; this is what I found in logstash.err on May 14, 2016 at 10:28:35 AM: 'org.elasticsearch.cluster.service.InternalClusterService$UpdateTask run INFO: [logstash-ip-10-0-0-105-8431-13990] added {[logstash-ip-10-0-0-105-6096-13970][M-skGUKgQXC-_Zt9kHma6w][ip-10-0-0-105][inet[/10.0.0.105:9303]]{client=true, data=false},}, reason: zen-disco-receive(from master [[Hammer Harrison][GPRbvuZ4RJW_Cq_iPW1i7A][ip-10-0-0-89][inet[/10.0.0.89:9300]]])' –
429代碼意味着您的Logstash客戶端發送太多請求並且該節點無法處理它們。 –