Encoding/formatting problem with the python kafka library: I have been trying to use the python kafka library and cannot get the producer to work.
After some research, I found that Kafka sends the consumer an extra 5-byte header (one zero byte, followed by a long holding the Schema Registry schema ID). I managed to get the consumer working by simply stripping those leading bytes.
Should I prepend a similar header when writing the producer?
The exception being thrown is below:
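The header layout described above can be sketched with struct (a minimal illustration, not code from the question; the magic-byte-plus-schema-ID layout is the assumed Confluent wire format, and the schema ID used here is a placeholder):

```python
import struct

# Assumed Confluent wire format: magic byte 0, then the schema ID as a
# 4-byte big-endian integer, then the Avro-encoded payload.
schema_id = 21  # placeholder; the Schema Registry assigns the real ID
header = struct.pack(">bI", 0, schema_id)

print(len(header))                          # 5 bytes of header
print(struct.unpack(">L", header[1:5])[0])  # recovers 21, as the consumer does
```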
[2016-09-14 13:32:48,684] ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
I am using the latest stable versions of both Kafka and python-kafka.
EDIT
Consumer:
from kafka import KafkaConsumer
import avro.io
import avro.schema
import io
import requests
import struct

# To consume messages
consumer = KafkaConsumer('hadoop_00',
                         group_id='my_group',
                         bootstrap_servers=['hadoop-master:9092'])
schema_path = "resources/f1.avsc"
for msg in consumer:
    value = bytearray(msg.value)
    # Bytes 1-4 hold the Schema Registry schema ID (byte 0 is the magic byte)
    schema_id = struct.unpack(">L", value[1:5])[0]
    response = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
    schema = response.json()["schema"]
    schema = avro.schema.parse(schema)
    # Skip the 5-byte header before handing the payload to Avro
    bytes_reader = io.BytesIO(value[5:])
    # bytes_reader = io.BytesIO(msg.value)
    decoder = avro.io.BinaryDecoder(bytes_reader)
    reader = avro.io.DatumReader(schema)
    temp = reader.read(decoder)
    print(temp)
Producer:
from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter

producer = KafkaProducer(bootstrap_servers="hadoop-master")
# Kafka topic
topic = "hadoop_00"
# Path to user.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())
for i in range(1, 11):
    # Sends a raw JSON-like byte string: no Avro encoding and no 5-byte header
    producer.send(topic, ('{"f1": "value_' + str(i) + '"}').encode())
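A hedged sketch of the framing the producer would likely need, assuming the wire format described above. The payload would really come from avro.io.DatumWriter/BinaryEncoder (mirroring the consumer's DatumReader), and both schema_id and the byte string here are stand-ins for illustration:

```python
import io
import struct

def frame_message(schema_id, avro_bytes):
    # Prepend the assumed 5-byte header (magic byte 0 + big-endian schema ID)
    # to an already Avro-encoded payload before producer.send().
    buf = io.BytesIO()
    buf.write(struct.pack(">bI", 0, schema_id))
    buf.write(avro_bytes)
    return buf.getvalue()

# avro_bytes would come from avro.io.DatumWriter/BinaryEncoder; a stand-in here:
framed = frame_message(1, b"\x0evalue_1")
print(framed[:5])  # b'\x00\x00\x00\x00\x01'
```

The framed bytes could then be passed to producer.send(topic, framed), so the Confluent AvroConverter on the Kafka Connect side finds the magic byte it expects.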
Please post your producer and consumer code. That will help put everything together. – thiruvenkadam
@thiruvenkadam there you go –