
I'm working on a project in TensorFlow. I've built and trained a CNN, and now I'm trying to load it in another file to run predictions. For some reason I keep getting the error "You must feed a value for placeholder tensor 'y_pred' with dtype float and shape [10]": the metagraph imports fine, but this placeholder is apparently never fed.

The file that builds the graph defines a variable y_pred for prediction:

y_pred = tf.nn.softmax(layer_fc2) 

and in the other file I try to load the model as follows:

# Create session
sess = tf.Session()
# Load the saved model
saver = tf.train.import_meta_graph('Model.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
sess.run(tf.global_variables_initializer())
graph = tf.get_default_graph()

# Get a single test image and flatten it
x_batch = mnist.test.next_batch(1)
x_batch = x_batch[0].reshape(1, 784)

# Look up the input and output tensors by name
x = graph.get_tensor_by_name("x:0")
y_pred = graph.get_tensor_by_name("y_pred:0")

classification = sess.run(y_pred, feed_dict={x: x_batch})
print(classification)

The exact error I get is:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'y_pred' with dtype float and shape [10] 
[[Node: y_pred = Placeholder[dtype=DT_FLOAT, shape=[10], _device="/job:localhost/replica:0/task:0/cpu:0"]()]] 

I wonder whether I failed to set something up correctly before exporting. Does anyone know why this isn't working?
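
One quick way to check is to list every Placeholder op in the restored graph: each one must either be fed or be unreachable from the tensor being fetched. A minimal diagnostic sketch, assuming TensorFlow 1.x and the same Model.meta file:

import tensorflow as tf

sess = tf.Session()
saver = tf.train.import_meta_graph('Model.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))

# Print every placeholder's name and shape; if a node named 'y_pred'
# with shape [10] appears here, that is the tensor the error refers to.
for op in tf.get_default_graph().get_operations():
    if op.type == 'Placeholder':
        print(op.name, op.outputs[0].get_shape())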

Edit: the model code is included below:

# Network Design 
# First Layer 
layer_conv1, weights_conv1 = new_conv_layer(input=x_image, num_input_channels=num_channels, filter_size=filter_size1, num_filters=num_filters1, use_pooling=True) 
# Second Layer 
layer_conv2, weights_conv2 = new_conv_layer(input=layer_conv1, num_input_channels=num_filters1, filter_size=filter_size2, num_filters=num_filters2, use_pooling=True) 
# Third Layer 
layer_conv3, weights_conv3 = new_conv_layer(input=layer_conv2, num_input_channels=num_filters2, filter_size=filter_size3, num_filters=num_filters3, use_pooling=True) 
# Flatten Layer 
layer_flat, num_features = flatten_layer(layer_conv3) 
# First Fully Connected Layer 
layer_fc1 = new_fc_layer(input=layer_flat, num_inputs=num_features, num_outputs=fc_size, use_relu=True) 
# Second Fully Connected Layer 
layer_fc2 = new_fc_layer(input=layer_fc1, num_inputs=fc_size, num_outputs=num_classes, use_relu=False) 

# softmaxResult = tf.placeholder(tf.float32, shape=[10], name='softmaxResult') 
# Get class probabilities 
y_pred = tf.nn.softmax(layer_fc2) 
y_pred = tf.identity(y_pred, name="y_pred") 
# session.run(y_pred, feed_dict={softmaxResult: y_pred}) 
# Predicted Class 
y_pred_cls = tf.argmax(y_pred, dimension=1) 
# softmaxResult.assign(y_pred_cls) 

# Feed y_pred 
# session.run(softmaxResult, feedDict={softmaxResult: softmaxResult}) 

# Define Cost Function 
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true) 
cost = tf.reduce_mean(cross_entropy) 

# Optimize Network 
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost) 
correct_prediction = tf.equal(y_pred_cls, y_true_cls) 
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 

# Run Session 
session.run(tf.global_variables_initializer()) 

def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss): 
    # Calculate accuracy on the training and validation sets
    acc = session.run(accuracy, feed_dict=feed_dict_train) 
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate) 
    msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}" 
    print(msg.format(epoch + 1, acc, val_acc, val_loss)) 

total_iterations = 0 

# Optimization function
def optimize(num_iterations):
    # Updates the global counter rather than a local one
    global total_iterations

    best_val_loss = float("inf")

    for i in range(total_iterations, total_iterations + num_iterations):
        # Get a training batch
        x_batch, y_batch = mnist.train.next_batch(batch_size)
        # Get a validation batch
        x_validate, y_validate = mnist.train.next_batch(batch_size)

        # Flatten each image to a single dimension
        x_batch = x_batch.reshape(batch_size, img_size_flat)
        x_validate = x_validate.reshape(batch_size, img_size_flat)

        # Training and validation feeds
        feed_dict_train = {x: x_batch, y_true: y_batch}
        feed_dict_validate = {x: x_validate, y_true: y_validate}

        # Run the optimizer
        session.run(optimizer, feed_dict=feed_dict_train)

        # Print status at the end of each epoch (a full pass through the training set)
        if i % int(5000 / batch_size) == 0:
            val_loss = session.run(cost, feed_dict=feed_dict_validate)
            epoch = int(i / int(5000 / batch_size))
            print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss)

    total_iterations += num_iterations

optimize(num_iterations=3000) 

# Save the final model 
saver = tf.train.Saver() 
saved_path = saver.save(session, os.path.join(os.getcwd(),'MNIST Model')) 
print("Model saved in: ", saved_path) 

# Run on test image 
image = mnist.test.next_batch(1) 
feedin = image[0].reshape(1, 784) 
inputStuff = {x:feedin} 

classification = session.run(y_pred, feed_dict=inputStuff) 
print(classification) 
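
Note that for graph.get_tensor_by_name("x:0") in the loading script to succeed, the input placeholder has to be created with an explicit name when the graph is built. A hypothetical definition (not shown in the snippet above), assuming the usual flattened-MNIST shapes:

# Hypothetical: explicit names make these tensors retrievable after restore
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')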

Could you post your model code? Do you have another variable named y_pred that is a placeholder used to feed the labels? Basically, what are all the placeholders in your code? – hars


Can you check exactly which line gives you this error? Since you want to run a prediction, I'm guessing you think the error comes from classification = sess.run(y_pred, feed_dict={x: x_batch}), but I suspect you might actually be getting it at y_pred = graph.get_tensor_by_name("y_pred:0"). Why do you even need that line? –


Thanks for the replies. I don't have another variable named y_pred. I can't tell exactly where the error comes from, although the traceback suggests it is the session.run line. I've added the rest of the code in case it helps. – Hirsh

Answer


Thanks @VS_FF

You need to look the tensors up by names like 'x:0', since those are the keys the inputs are stored under.
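
A minimal restore-and-predict sketch along those lines, assuming the exported graph defines its input placeholder with name='x' and wraps the softmax output as tf.identity(..., name='y_pred'), with no leftover placeholder of the same name in the checkpointed graph:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

sess = tf.Session()
saver = tf.train.import_meta_graph('Model.meta')
saver.restore(sess, tf.train.latest_checkpoint('./'))
# Note: do not run tf.global_variables_initializer() after restoring;
# it would overwrite the weights that restore() just loaded.

graph = tf.get_default_graph()
x = graph.get_tensor_by_name('x:0')
y_pred = graph.get_tensor_by_name('y_pred:0')

# Feed one flattened test image and print its class probabilities
x_batch = mnist.test.next_batch(1)[0].reshape(1, 784)
print(sess.run(y_pred, feed_dict={x: x_batch}))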