
I am using the code below to compute the cross-entropy between the predicted labels and the actual labels. The data comes from the CIFAR-10 dataset, and I get a data type mismatch when computing the cross-entropy with softmax.

I convert the source data to an ndarray with astype(np.float32), and later pass dtype=tf.float32 to tf.constant(). The error message is

TypeError: DataType float32 for attr 'Tlabels' not in list of allowed values: int32, int64

Only int32 and int64 are listed as allowed data types. If I do not explicitly specify the data type in these two steps, I instead hit a problem in the matmul() operation, because the weights used in the computation have a float data type (illustrated after the code below).

import pickle as cPickle  # Python 3: cPickle was merged into the stdlib pickle module
import numpy as np
import tensorflow as tf

f = open('cifar-10-batches-py/data_batch_1', 'rb')
datadict = cPickle.load(f, encoding='bytes')
f.close()
X = np.asarray(datadict[b"data"]).astype(np.float32)   # b prefix: the dict keys are bytes string literals
Y = np.asarray(datadict[b'labels']).astype(np.float32)
f = open('cifar-10-batches-py/test_batch', 'rb')         # presumably the test batch was meant here
datadict = cPickle.load(f, encoding='bytes')
f.close()
X_test = np.asarray(datadict[b"data"]).astype(np.float32)  # X_test/Y_test are used in the graph below
Y_test = np.asarray(datadict[b'labels']).astype(np.float32)
graph = tf.Graph()
with graph.as_default():
    tf_train_data = tf.constant(X, dtype=tf.float32)
    tf_train_labels = tf.constant(Y, dtype=tf.float32)
    tf_test_data = tf.constant(X_test, dtype=tf.float32)
    tf_test_labels = tf.constant(Y_test, dtype=tf.float32)
    print(tf_train_labels.get_shape())
    weights = tf.Variable(tf.truncated_normal([3072, 10]))   # 3072 = 32*32*3 pixels, 10 classes
    print(tf.rank(weights))
    biases = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(tf_train_data, weights) + biases
    print(tf.rank(logits), tf.rank(tf_train_labels), tf.rank(tf_test_labels))
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, tf_train_labels))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    test_prediction = tf.nn.softmax(tf.matmul(tf_test_data, weights) + biases)
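For context on the matmul constraint mentioned above, here is a minimal sketch, assuming X_raw is the raw uint8 CIFAR-10 pixel data before the astype(np.float32) cast (X_raw is a name introduced here for illustration only):

X_raw = np.asarray(datadict[b"data"])   # uint8 pixels, shape (10000, 3072)
# tf.matmul needs both operands to share a float dtype, so multiplying the raw
# integer data against the float32 weights would fail with a dtype error;
# casting X to float32 avoids it:
# logits = tf.matmul(tf.constant(X_raw), weights) + biases                    # dtype mismatch
logits = tf.matmul(tf.constant(X_raw.astype(np.float32)), weights) + biases   # works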

Here is the full error message from the sparse_softmax_cross_entropy_with_logits call:

--------------------------------------------------------------------------- 
TypeError         Traceback (most recent call last) 
<ipython-input-74-8e1ffbeb5013> in <module>() 
    11  logits = tf.matmul(tf_train_data, weights) + biases 
    12  print (tf.rank(logits), tf.rank(tf_train_labels), tf.rank(tf_test_labels)) 
---> 13  loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, tf_train_labels)) 
    14  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) 
    15  train_prediction = tf.nn.softmax(logits) 

/Users/ayada/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/nn_ops.py in sparse_softmax_cross_entropy_with_logits(logits, labels, name) 
    562  if logits.get_shape().ndims == 2: 
    563  cost, _ = gen_nn_ops._sparse_softmax_cross_entropy_with_logits(
--> 564   precise_logits, labels, name=name) 
    565  if logits.dtype == dtypes.float16: 
    566   return math_ops.cast(cost, dtypes.float16) 

/Users/ayada/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_nn_ops.py in _sparse_softmax_cross_entropy_with_logits(features, labels, name) 
    1538 """ 
    1539 result = _op_def_lib.apply_op("SparseSoftmaxCrossEntropyWithLogits", 
-> 1540         features=features, labels=labels, name=name) 
    1541 return _SparseSoftmaxCrossEntropyWithLogitsOutput._make(result) 
    1542 

/Users/ayada/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in apply_op(self, op_type_name, name, **keywords) 
    527    for base_type in base_types: 
    528    _SatisfiesTypeConstraint(base_type, 
--> 529          _Attr(op_def, input_arg.type_attr)) 
    530    attrs[input_arg.type_attr] = attr_value 
    531    inferred_from[input_arg.type_attr] = input_name 

/Users/ayada/anaconda/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in _SatisfiesTypeConstraint(dtype, attr_def) 
    58   "DataType %s for attr '%s' not in list of allowed values: %s" % 
    59   (dtypes.as_dtype(dtype).name, attr_def.name, 
---> 60   ", ".join(dtypes.as_dtype(x).name for x in allowed_list))) 
    61 
    62 

TypeError: DataType float32 for attr 'Tlabels' not in list of allowed values: int32, int64 

How can I fix it?


Try using just astype('float'). That might help. –


Does tensorflow have a data type float? From the documentation I can only see that it starts at float32. I tried it on the ndarrays but it did not help. https://www.tensorflow.org/versions/r0.11/resources/dims_types.html – Abhi

Answer


tf.nn.sparse_softmax_cross_entropy_with_logits expects sparse labels of an integer type. To use one-hot float labels, consider tf.nn.softmax_cross_entropy_with_logits instead.
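A minimal sketch of both routes, reusing the arrays and variable names from the question and the r0.11-era API it uses (logits before labels); for the sparse version, casting the labels to an integer type is the only change needed:

# Route 1: keep sparse integer class labels (only the label dtype changes)
Y_int = Y.astype(np.int32)                      # class indices 0-9 as int32 (int64 also works)
tf_train_labels = tf.constant(Y_int)
logits = tf.matmul(tf_train_data, weights) + biases
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits, tf_train_labels))

# Route 2: convert to one-hot float labels and use the dense op
one_hot_labels = tf.one_hot(tf.constant(Y_int), depth=10)   # float32 one-hot, shape (N, 10)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_labels))

Route 1 is generally the simpler choice when the labels are already class indices, since it avoids materializing the one-hot matrix.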


I am using sparse labels; the labels from the input data are a list containing the corresponding class for each input record. Do I need to convert the labels into one-hot float labels? – Abhi


Just use an integer data type. Labels for the sparse cross-entropy should be of type tf.int32 or tf.int64. –


I did that, but it does not work, because the weights tensor is initialized with truncated_normal(), which returns a 2-D float tensor. One of the matmul operations uses the input data, X_train, and the weights, and for that matmul I need X to be float. I changed the labels to int32, but it did not help. – Abhi