
This is a possible duplicate of Tensorflow: How to get gradients per instance in a batch?. I am asking it anyhow, because there is no satisfying answer and the goal here is a bit different: averaging gradients over several batches in TensorFlow.

I have a very big network that I can fit on my GPU, but the maximum batch size I can feed is 32. Anything bigger than that causes the GPU to run out of memory. I want to use a larger batch in order to get a more accurate approximation of the gradient.

To be concrete, let's say I want to compute the gradient for a large batch of size 96 by feeding 3 batches of 32 in turn. The best way I know of is to use Optimizer.compute_gradients() and Optimizer.apply_gradients(). Here is a small example of how that can work:

import tensorflow as tf 
import numpy as np 

learn_rate = 0.1 

W_init = np.array([ [1,2,3], [4,5,6], [7,8,9] ], dtype=np.float32) 
x_init = np.array([ [11,12,13], [14,15,16], [17,18,19] ], dtype=np.float32) 

X = tf.placeholder(dtype=np.float32, name="x") 
W = tf.Variable(W_init, dtype=np.float32, name="w") 
y = tf.matmul(X, W, name="y") 
loss = tf.reduce_mean(y, name="loss") 

opt = tf.train.GradientDescentOptimizer(learn_rate) 
grad_vars_op = opt.compute_gradients(loss) 

sess = tf.Session() 
sess.run(tf.global_variables_initializer()) 

# Compute the gradients for each batch 
grads_vars1 = sess.run(grad_vars_op, feed_dict = {X: x_init[None,0]}) 
grads_vars2 = sess.run(grad_vars_op, feed_dict = {X: x_init[None,1]}) 
grads_vars3 = sess.run(grad_vars_op, feed_dict = {X: x_init[None,2]}) 

# Separate the gradients from the variables 
grads1 = [ grad for grad, var in grads_vars1 ] 
grads2 = [ grad for grad, var in grads_vars2 ] 
grads3 = [ grad for grad, var in grads_vars3 ] 
varl = [ var for grad, var in grads_vars1 ] 

# Average the gradients 
grads = [ (g1 + g2 + g3)/3 for g1, g2, g3 in zip(grads1, grads2, grads3)] 

sess.run(opt.apply_gradients(zip(grads,varl))) 

print("Weights after 1 gradient") 
print(sess.run(W)) 

Now, this is all very ugly and inefficient, since the forward pass runs on the GPU, averaging the gradients happens on the CPU, and applying them happens on the GPU again.

Moreover, this code throws an exception, because grads is a list of np.arrays, and to make it work one would have to create a tf.placeholder for every gradient.
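For reference, the workaround hinted at here would look roughly like this (a sketch; the names grad_phs and apply_op are made up for illustration):

# Sketch of the placeholder workaround: one placeholder per variable, so the
# averaged NumPy gradients can be fed back into apply_gradients().
grad_phs = [tf.placeholder(dtype=np.float32, shape=v.get_shape()) for v in varl]
apply_op = opt.apply_gradients(list(zip(grad_phs, varl)))
sess.run(apply_op, feed_dict=dict(zip(grad_phs, grads)))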

I am sure there must be a better and more efficient way to do this? Any suggestions?

Answer


You can create copies of the trainable_variables and accumulate the batch gradients into them. Here are a few simple steps to follow:

... 
opt = tf.train.GradientDescentOptimizer(learn_rate) 
# get all trainable variables 
t_vars = tf.trainable_variables() 
# create a copy of all trainable variables with `0` as initial values 
accum_tvars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in t_vars] 
# create an op to reset all accumulator variables to zero 
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_tvars] 

# compute gradients for a batch 
batch_grads_vars = opt.compute_gradients(loss, t_vars) 
# collect the batch gradient into accumulated vars 
accum_ops = [accum_tvars[i].assign_add(batch_grad_var[0]) for i, batch_grad_var in enumerate(batch_grads_vars)] 

# apply the accumulated gradients 
train_step = opt.apply_gradients([(accum_tvars[i], batch_grad_var[1]) for i, batch_grad_var in enumerate(batch_grads_vars)]) 
# train_step = opt.apply_gradients(zip(accum_tvars, [v for g, v in batch_grads_vars])) 

while True: 
    # reset the accumulated gradients 
    sess.run(zero_ops) 

    # number of batches for gradient accumulation 
    n_batches = 3 
    for i in range(n_batches): 
        sess.run(accum_ops, feed_dict={X: x_init[None, i]}) 

    sess.run(train_step) 

Great solution. Using zip in the accum_ops and train_step list comprehensions, instead of enumerate and indexing, would be slightly more pythonic (and probably more readable too). – lejlot
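For illustration, the zip-based variant this comment suggests might look roughly like this (a sketch using the answer's variable names):

# Sketch: the same accumulation and update ops, written with zip instead of
# enumerate and indexing.
accum_ops = [accum_tv.assign_add(grad)
             for accum_tv, (grad, var) in zip(accum_tvars, batch_grads_vars)]
train_step = opt.apply_gradients([(accum_tv, var)
                                  for accum_tv, (grad, var) in zip(accum_tvars, batch_grads_vars)])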


Very nice solution indeed. Am I guaranteed that all the operations will be executed on the GPU? – niko


The 'assign_op' runs wherever the variables are defined, cpu/gpu. You can compute the rest on GPUs. –
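To illustrate that point, a minimal sketch of pinning the accumulator variables to a device so the assign/assign_add ops run there too (the '/gpu:0' string is just an example; adjust to your setup):

# Sketch (assumption): create the accumulator variables under a device scope
# so the corresponding assign and assign_add ops are placed on that device.
with tf.device('/gpu:0'):
    accum_tvars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False)
                   for tv in t_vars]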