2017-07-07 97 views

I want to use the Adam optimizer twice to minimize different tensors in my code. I tried using GradientDescentOptimizer twice and that works fine, but when I use the Adam optimizer twice I get an error message. I asked a related question at: tensorflowVariable RNNLM/RNNLM/embedding/Adam_2/ does not exist, but the solution there does not work here. I also looked at: https://github.com/tensorflow/tensorflow/issues/6220, but I still don't understand it.

Here is my code. I get the error message: ValueError: Variable NN/NN/W/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?

I then tried the fix from tensorflowVariable RNNLM/RNNLM/embedding/Adam_2/ does not exist, but it does not work.
Using the Adam optimizer TWICE in tensorflow

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))

        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)
        # with tf.variable_scope('opt'):
        #     train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)

        ##############################################################################################
        scope.reuse_variables()

        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.Variable(tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)
        # with tf.variable_scope('opt'):
        #     train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)


if __name__ == '__main__':
    main()

Could you try putting your second optimizer in a different scope? – rmeertens


Thanks, it works if I put the second optimizer in a different scope! But I still don't understand why the error occurs in my code – Joey

Answers


The simplest way to solve this is to put the second optimizer in a different scope. That way the naming does not cause any confusion.


If you absolutely must do this in the same scope, make sure all the variables are defined before the optimizers are created. I would have to do some more research on why it works like this, but the optimizer settings are locked into the lower levels of the graph and are no longer dynamically accessible.

Minimal working example:

import tensorflow as tf

def main():
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    # optimizer = tf.train.AdamOptimizer(0.005)

    with tf.variable_scope('NN') as scope:
        assert scope.reuse == False
        W2 = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X2 = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y2_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))
        y2 = W2 + X2 + b
        loss_2 = tf.reduce_mean(tf.abs(y2_ - y2))

        # train_op2 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_2)
        train_op2 = tf.train.AdamOptimizer(0.005).minimize(loss_2)

    ##############################################################################################
    with tf.variable_scope('NN', reuse=True) as scope:
        W = tf.get_variable(name='W', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        X = tf.get_variable(name='X', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        y_ = tf.get_variable(name='y_', initializer=tf.random_uniform(dtype=tf.float32, shape=[5, 1]))
        b = tf.get_variable(name='b', initializer=tf.random_normal(shape=[5, 1], dtype=tf.float32))

        y1 = W + X
        loss_1 = tf.reduce_mean(tf.abs(y_ - y1))

        # train_op1 = tf.train.GradientDescentOptimizer(0.005).minimize(loss_1)
        train_op1 = tf.train.AdamOptimizer(0.005).minimize(loss_1)


if __name__ == '__main__':
    main()