How to verify the actual data in a TensorFlow slim dataset

I have encoded my data into TFRecord files. For each image, I encode multiple bounding boxes with multiple labels. Now I want to verify that the data is decoded correctly by the TensorFlow/slim dataset pipeline. I wrote the following test:
def test2(sess):
    labels_to_class = read_label_file(label_fname)
    reader = tf.TFRecordReader
    keys_to_features = {
        'image/encoded': tf.FixedLenFeature(
            (), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='jpg'),
        'image/object/labels': tf.VarLenFeature(dtype=tf.int64),
        'image/object/truns': tf.VarLenFeature(dtype=tf.int64),
        'image/object/occluds': tf.VarLenFeature(dtype=tf.int64),
        'image/object/bbox/xmin': tf.VarLenFeature(dtype=tf.int64),
        'image/object/bbox/xmax': tf.VarLenFeature(dtype=tf.int64),
        'image/object/bbox/ymin': tf.VarLenFeature(dtype=tf.int64),
        'image/object/bbox/ymax': tf.VarLenFeature(dtype=tf.int64),
    }
    items_to_handlers = {
        'image': slim.tfexample_decoder.Image('image/encoded', 'image/format'),
        'object/label': slim.tfexample_decoder.Tensor('image/object/labels'),
        'object/truncated': slim.tfexample_decoder.Tensor('image/object/truns'),
        'object/occluded': slim.tfexample_decoder.Tensor('image/object/occluds'),
        'object/bbox': slim.tfexample_decoder.BoundingBox(
            ['ymin', 'xmin', 'ymax', 'xmax'], 'image/object/bbox/'),
    }
    decoder = slim.tfexample_decoder.TFExampleDecoder(
        keys_to_features, items_to_handlers)
    dataset = slim.dataset.Dataset(
        data_sources=filename_queue,
        reader=reader,
        decoder=decoder,
        num_samples=sample_num,
        items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
        num_classes=_NUM_CLASSES,
        labels_to_names=labels_to_class)
    provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
    keys = provider._items_to_tensors.keys()
    print(provider._num_samples)
    for item in provider._items_to_tensors:
        print(item, provider._items_to_tensors[item])
    [image, label] = provider.get(['image', 'object/label'])
    print('AAA')
    sess.run([image, label])
    print('BBB')
When I run the code above, it prints:
6
image Tensor("case/If_2/Merge:0", shape=(?, ?, 3), dtype=uint8)
object/label Tensor("SparseToDense:0", shape=(?,), dtype=int64)
object/occluded Tensor("SparseToDense_1:0", shape=(?,), dtype=int64)
record_key Tensor("parallel_read/common_queue_Dequeue:0", dtype=string)
object/bbox Tensor("transpose:0", shape=(?, 4), dtype=int64)
object/truncated Tensor("SparseToDense_2:0", shape=(?,), dtype=int64)
AAA
Then the program hangs forever without printing any error message. It reports the correct number of samples (6) and the correct types for the tensors I encoded, but I still want to check the values inside those tensors. Is there any way I can inspect their values?
Thanks for your help.
-----------------更新--------------------
The code I added is:
tf.train.start_queue_runners()
print('Start verification process..')
for i in range(provider._num_samples):
    [image, labelList, truncList, occList,
     boxList] = provider.get([
         'image', 'object/label', 'object/truncated',
         'object/occluded', 'object/bbox'])
    enc_image = tf.image.encode_jpeg(image)
    img, labels, truns, occluds, boxes = sess.run(
        [enc_image, labelList, truncList, occList, boxList])
    f = tf.gfile.FastGFile('out_%.2d.jpg' % i, 'wb')
    f.write(img)
    f.close()
    for j in range(labels.shape[0]):
        print('label=%d (%s), truc=%d, occluded=%d at [%d, %d, %d, %d]' % (
            labels[j], labels_to_class[labels[j]], truns[j],
            occluds[j], boxes[j][0], boxes[j][1],
            boxes[j][2], boxes[j][3]))
Could you give an example? I searched online but nothing I found worked. Thanks. – Brandon
Add tf.train.start_queue_runners() before calling any sess.run. –
Thanks for the suggestion; that works for verifying the data. However, there is one problem: when I run my test program, the image read order is random on every run, and there can be duplicates. For example, when I verify 6 records, one run loads image data from items 0, 1, 1, 2, 1, 5, and another run from items 1, 2, 3, 4, 0, 1. What could cause this? Is there a way to make it read each record exactly once? Thanks. – Brandon
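The random order and repeats come from the provider's defaults: DatasetDataProvider is constructed with shuffle=True and num_epochs=None (cycle through the data indefinitely), so records are drawn from a shuffling queue that keeps refilling. For a deterministic one-pass verification, a sketch of the constructor call (parameter names taken from the TF-Slim DatasetDataProvider signature; `dataset` as built in the question):

```python
provider = slim.dataset_data_provider.DatasetDataProvider(
    dataset,
    shuffle=False,   # keep records in file order
    num_epochs=1)    # a single pass; the queue then closes
```

With num_epochs=1 the epoch counter is kept in a local variable, so run tf.local_variables_initializer() before starting the queue runners, and wrap the read loop in a try/except on tf.errors.OutOfRangeError to detect the end of the pass.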