2017-03-02

I have another TensorFlow question: TensorFlow: not all of my variables are being restored - Python

I train a regression model, save the weights and biases, and then restore them to re-run the model on a different data set. At least, that is what I am trying to do, but not all of my weights are being restored. Here is the code used to save my variables:

# Add ops to save and restore all the variables. 
saver = tf.train.Saver({**weights, **biases}) 

# Save the variables to disk. 
save_path = saver.save(sess, "Saved_Vars.ckpt") 

And here is the entire code for restoring and running the model:

# Network Parameters 
n_hidden_1 = 9 
n_hidden_2 = 56 
n_hidden_3 = 8 
n_input = 9 
n_classes = 1 

# TensorFlow Graph Input 
x = tf.placeholder("float", [None, n_input]) 

# Create Multilayer Model 
def multilayer_perceptron(x, weights, biases): 
    # First hidden layer with RELU activation 
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) 
    layer_1 = tf.nn.relu(layer_1) 

    # Second hidden layer with RELU activation 
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) 
    layer_2 = tf.nn.relu(layer_2) 

    # Third hidden layer with RELU activation 
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3']) 
    layer_3 = tf.nn.relu(layer_3) 

    # Last output layer with linear activation 
    out_layer = tf.matmul(layer_3, weights['out']) + biases['out'] 
    return out_layer 

# weights and biases 
weights = { 
     'h1': tf.Variable(tf.zeros([n_input, n_hidden_1])), 
     'h2': tf.Variable(tf.zeros([n_hidden_1, n_hidden_2])), 
     'h3': tf.Variable(tf.zeros([n_hidden_2, n_hidden_3])), 
     'out': tf.Variable(tf.zeros([n_hidden_3, n_classes])) 
} 

biases = { 
     'b1' : tf.Variable(tf.zeros([n_hidden_1])), 
     'b2': tf.Variable(tf.zeros([n_hidden_2])), 
     'b3': tf.Variable(tf.zeros([n_hidden_3])), 
     'out': tf.Variable(tf.zeros([n_classes])) 
} 

# Construct Model 
pred = multilayer_perceptron(x, weights, biases) 
pred = tf.transpose(pred) 

# Initialize variables 
init = tf.global_variables_initializer() 

# RUNNING THE SESSION 

# launch the session 
sess = tf.InteractiveSession() 


# Initialize all the variables 
sess.run(init) 

# Add ops to save and restore all the variables. 
saver = tf.train.Saver({**weights, **biases}) 

# Restore variables from disk. 
saver.restore(sess, "Saved_Vars.ckpt") 

# Use the restored model to predict the target values 
prediction = sess.run(pred, feed_dict={x:dataVar_scaled}) #pred.eval(feed_dict={x:X}) 

Now, here is the part that has me confused/frustrated/annoyed. From the weights I can restore 'h1', 'h2' and 'h3', but not 'out'. Why not 'out'? Is there something I am doing wrong? Could you please take a few minutes to help me?

Thanks very much.

I am running Python 3.5 and TensorFlow 0.12 directly on Windows 10, and I am using the Spyder IDE.

Answer


It looks like you are overwriting one of the 'out' keys when constructing this dictionary:

{**weights, **biases} 

For example:

weights = {'h1':1, 'h2':2, 'out':3} 
biases = {'b1':4, 'b2':5, 'out':6} 
print({**weights, **biases}) 

{'h2': 2, 'out': 6, 'b2': 5, 'b1': 4, 'h1': 1} 
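One way to keep both 'out' entries apart is to rename the keys before merging. A minimal sketch, using plain integers in place of the tf.Variable objects above (the 'w_'/'b_' prefixes are an arbitrary choice):

```python
weights = {'h1': 1, 'h2': 2, 'out': 3}
biases = {'b1': 4, 'b2': 5, 'out': 6}

# Prefix each key with the dict it came from, so the two 'out'
# entries no longer collide when the dicts are merged.
var_dict = {**{'w_' + k: v for k, v in weights.items()},
            **{'b_' + k: v for k, v in biases.items()}}
print(sorted(var_dict))  # ['b_b1', 'b_b2', 'b_out', 'w_h1', 'w_h2', 'w_out']
```

The same prefixed dictionary would then be passed to tf.train.Saver both when saving and when restoring, so the names in the checkpoint stay consistent between the two runs.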

Thank you so much, changing the variable name worked a treat! :) Could you please explain what `{**weights, **biases}` is doing, as I clearly don't have a good grasp of it. Thanks again – jlt199
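For what it's worth, `{**weights, **biases}` is Python 3.5's dict-unpacking syntax (PEP 448): it builds a new dict from every key/value pair of both dicts, and on duplicate keys the right-most dict wins. A small sketch with illustrative names:

```python
a = {'x': 1, 'shared': 2}
b = {'y': 3, 'shared': 4}

# ** expands each dict's items into the new dict literal;
# 'shared' appears in both, and b comes last, so b's value wins.
merged = {**a, **b}
print(merged['shared'])  # 4
```

This is exactly why only one 'out' value survives the merge of the weights and biases dictionaries.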


This [SO thread](http://stackoverflow.com/questions/36901/what-does-double-star-and-star-do-for-parameters) is quite good. Also, Saver will happily take a list rather than a dictionary. I usually don't pass any values to Saver at all, and it then defaults to saving everything it can save. –