
Kafka consumer missing records

There is a problem between Kafka and Spark Streaming. I have a production service with fairly low traffic (roughly 12,000-15,000 records per second). At first the consumption rate looks normal, but after 10-15 minutes it suddenly drops to about 1/10 of that speed. Could this be a network traffic problem?

Configuration (Kafka broker):
num.network.threads=2
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=12
log.segment.bytes=536870912
log.retention.check.interval.ms=60000
log.cleaner.enable=false
log.cleanup.interval.mins=1

Configuration of Spark Streaming (consumer):

....

// Imports needed by the snippet below (Spark 1.x / Kafka 0.8 receiver-based API)
import kafka.serializer.StringDecoder
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils

val kafkaParams = Map(
  "zookeeper.connect" -> zkQuorum,
  "group.id" -> group,
  "zookeeper.connection.timeout.ms" -> "1000000",
  "zookeeper.sync.time.ms" -> "200",
  "fetch.message.max.bytes" -> "2097152000",
  "queued.max.message.chunks" -> "1000",
  "auto.commit.enable" -> "true",
  "auto.commit.interval.ms" -> "1000")

try {
  // Receiver-based stream: decode, convert, drop nulls, then write each batch to HBase
  KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, topics.map((_, partition)).toMap,
    StorageLevel.MEMORY_ONLY).map {
    case (key, value) => convertTo(key, value)
  }.filter {
    _ != null
  }.foreachRDD(line => saveToHBase(line, INPUT_TABLE))
  //}.foreachRDD(line => logger.info("handling testing....." + line))
} catch {
  // Log the exception itself; e.printStackTrace returns Unit, so concatenating it logs nothing useful
  case e: Exception => logger.error("consumerEx: ", e)
}
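
To narrow down where the slowdown happens, here is a minimal sketch that reuses the names from the snippet above (ssc, kafkaParams, topics, partition and logger are assumed to exist in the surrounding code) and only counts the records of each batch instead of writing them to HBase. If the per-batch count keeps matching the expected input rate while the full pipeline degrades, the bottleneck is more likely in saveToHBase or downstream processing than in the network or the Kafka fetch itself.

// Sketch: same receiver-based stream, but only count records per batch to
// measure the raw consumption rate independently of the HBase write path.
val rawStream = KafkaUtils.createStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics.map((_, partition)).toMap,
  StorageLevel.MEMORY_ONLY)

rawStream.foreachRDD { rdd =>
  // rdd.count() forces the batch to be consumed without any HBase I/O
  logger.info("records consumed in this batch: " + rdd.count())
}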

Answer