I am confused about the difference between `apply_gradients` and `minimize` of the optimizer in TensorFlow. For example:
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
and
optimizer = tf.train.AdamOptimizer(1e-3)
train_op = optimizer.minimize(cnn.loss, global_step=global_step)
Are these two equivalent?
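For what it's worth, the tf.train.Optimizer docs describe minimize() as simply combining those two calls, so a minimal sketch of the equivalence (minimize_like is a hypothetical helper name, not a TensorFlow API) would look like:

def minimize_like(optimizer, loss, global_step=None):
    # Roughly what optimizer.minimize(loss, global_step=global_step) does:
    # compute gradients for all trainable variables, then apply them.
    grads_and_vars = optimizer.compute_gradients(loss)
    return optimizer.apply_gradients(grads_and_vars, global_step=global_step)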
If I want to decay the learning rate over training, can I use the following code?
global_step = tf.Variable(0, name="global_step", trainable=False)
starter_learning_rate = 1e-3
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           100, FLAGS.decay_rate, staircase=True)
# Passing global_step to apply_gradients() will increment it at each step.
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(cnn.loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)
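As a quick self-contained sanity check (a toy quadratic loss stands in for cnn.loss and 0.96 stands in for FLAGS.decay_rate, both assumptions for illustration), the following runs end to end and shows global_step incrementing and the learning rate decaying:

import tensorflow as tf

global_step = tf.Variable(0, name="global_step", trainable=False)
learning_rate = tf.train.exponential_decay(1e-3, global_step,
                                           100, 0.96, staircase=True)
w = tf.Variable(5.0)
loss = tf.square(w)  # toy loss standing in for cnn.loss
optimizer = tf.train.AdamOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(250):
        sess.run(train_op)
    # global_step is now 250; with staircase=True the rate is 1e-3 * 0.96**2
    print(sess.run([global_step, learning_rate]))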
Thanks for your help!
A link to the documentation that explains this better is https://www.tensorflow.org/api_docs/python/tf/train/Optimizer.