2017-03-16

When trying to publish Avro (into Kafka) using a BinaryEncoder, I get a NullPointerException while serializing an Avro GenericRecord that contains an array.

Here is the abbreviated stack trace:

java.lang.NullPointerException: null of array of com.mycode.DeeplyNestedObject of array of com.mycode.NestedObject of union of com.mycode.ParentObject 
    at org.apache.avro.generic.GenericDatumWriter.npe(GenericDatumWriter.java:132) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:126) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:60) ~[avro-1.8.1.jar:1.8.1] 
    at com.mycode.KafkaAvroPublisher.send(KafkaAvroPublisher.java:61) ~[classes/:na] 
    .... 
    at org.apache.avro.generic.GenericDatumWriter.write(GenericDatumWriter.java:73) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:112) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.specific.SpecificDatumWriter.writeField(SpecificDatumWriter.java:87) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.writeRecord(GenericDatumWriter.java:143) ~[avro-1.8.1.jar:1.8.1] 
    at org.apache.avro.generic.GenericDatumWriter.writeWithoutConversion(GenericDatumWriter.java:105) ~[avro-1.8.1.jar:1.8.1] 
    ... 55 common frames omitted 

Here is the send method in my code where the exception occurs:

private static final EncoderFactory ENCODER_FACTORY = EncoderFactory.get();
private static final SpecificDatumWriter<ParentObject> PARENT_OBJECT_WRITER = new SpecificDatumWriter<>(ParentObject.SCHEMA$);
private BinaryEncoder binaryEncoder; // reused across calls via EncoderFactory.binaryEncoder()

public void send(ParentObject parentObject) {
    try {
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        binaryEncoder = ENCODER_FACTORY.binaryEncoder(stream, binaryEncoder);
        PARENT_OBJECT_WRITER.write(parentObject, binaryEncoder); // Exception HERE
        binaryEncoder.flush();
        producer.send(new ProducerRecord<>(topic, stream.toByteArray()));
    } catch (IOException ioe) {
        logger.debug("Problem publishing message to Kafka.", ioe);
    }
}

In the schema, NestedObject contains an array of DeeplyNestedObject. I have done enough debugging to see that NestedObject does indeed contain either an array of DeeplyNestedObject, or an empty array if none exist. Here is the relevant part of the schema:

[ { "namespace": "com.mycode.avro" 
    , "type": "record" 
    , "name": "NestedObject" 
    , "fields": 
    [ { "name": "timestamp", "type": "long", "doc": "Instant in time (milliseconds since epoch)." } 
    , { "name": "objs", "type": { "type": "array", "items": "DeeplyNestedObject" }, "doc": "Elided." } 
    ] 
    } 
] 

Answers

Answer (score 1):

The stack trace coming out of Avro is misleading. The problem is likely one level deeper than the class the message points at.

When it says "null of array of com.mycode.DeeplyNestedObject of array of com.mycode.NestedObject of union of com.mycode.ParentObject", it means that one of the fields inside DeeplyNestedObject that is expected to be an array was found to be null. (It is an entirely understandable misreading to take it as meaning that a DeeplyNestedObject is null inside a NestedObject.)

You need to inspect the fields of DeeplyNestedObject and figure out which array is not being serialized correctly. The problem is likely located where the DeeplyNestedObject is created: it has a field of type array that is not being populated in all cases before the send method is called.
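The usual defensive fix lives where the object is built, not in the send method: Avro serializes an empty array without complaint but throws the NullPointerException above on a null array field. A minimal sketch of the pattern (the `safeList` helper is hypothetical, not part of Avro; the generated setter calls are omitted):

```java
import java.util.Collections;
import java.util.List;

public class AvroNullGuard {
    // Hypothetical helper: normalize a possibly-null list to an empty one
    // before passing it to the generated Avro setter, so the datum writer
    // never sees a null where the schema declares an array.
    public static <T> List<T> safeList(List<T> maybeNull) {
        return maybeNull == null ? Collections.emptyList() : maybeNull;
    }

    public static void main(String[] args) {
        List<String> fromUpstream = null; // e.g. an optional field left unset
        System.out.println(safeList(fromUpstream).size()); // prints 0
    }
}
```

Applying this at every site that constructs a DeeplyNestedObject guarantees the invariant the schema demands, instead of relying on each caller to remember it.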

Answer (score 0):

I don't know enough about your objects, but what I see in your example is that your Avro schema is incorrect.

DeeplyNestedObject is a record in Avro, so it must be declared in your schema, which should look like this:

{
    "type": "record",
    "name": "NestedObject",
    "namespace": "com.mycode.avro",
    "fields": [
        { "name": "timestamp", "type": "long" },
        {
            "name": "objs",
            "type": {
                "type": "array",
                "items": {
                    "type": "record",
                    "name": "DeeplyNestedObject",
                    "fields": []
                }
            }
        }
    ]
}

Of course, you need to declare the individual fields of DeeplyNestedObject inside the "fields": [] of the DeeplyNestedObject record.
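As a sketch only (the real field names are unknown; `id` and `values` below are purely hypothetical placeholders), the filled-in inline declaration would take this shape:

```json
{
    "type": "record",
    "name": "DeeplyNestedObject",
    "fields": [
        { "name": "id", "type": "string", "doc": "Hypothetical example field." },
        { "name": "values",
          "type": { "type": "array", "items": "long" },
          "doc": "Hypothetical example array field." }
    ]
}
```

Note that a named record may be declared inline like this only once per schema file; later occurrences refer to it by name, which is why the question's original snippet could reference "DeeplyNestedObject" as the array's items.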
