
TensorFlow, feed_dict, and batching the training set

Is there a clever way to use feed_dict to create batches quickly? My training data is loaded into a list, but it is not batched. Is there a clever way to randomly pick a batch with feed_dict, without having to batch the data in advance?

For example, I have:

for i in range(N_STEPS):
    sess.run(train_step, feed_dict={x_: X, y_: Y})

where x_ and y_ are the input and output of a standard NN, and the length of X is the number of training examples. How do people suggest creating batches?

I think the following might do it, but surely there is something more elegant?

import random

# Randomly choose the start of a contiguous slice of length N_BATCH
batch = random.randrange(0, len(X) - N_BATCH)
sess.run(train_step, feed_dict={x_: X[batch:batch+N_BATCH], y_: Y[batch:batch+N_BATCH]})
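For comparison, a minimal sketch of sampling a non-contiguous random batch with NumPy fancy indexing, assuming X and Y can be converted to NumPy arrays and that x_, y_, train_step, sess, N_STEPS, and N_BATCH exist as above:

import numpy as np

X = np.asarray(X)   # assumes the training lists convert cleanly to arrays
Y = np.asarray(Y)

for i in range(N_STEPS):
    # Draw N_BATCH distinct random indices each step instead of a contiguous slice.
    idx = np.random.choice(len(X), size=N_BATCH, replace=False)
    sess.run(train_step, feed_dict={x_: X[idx], y_: Y[idx]})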

Answer


The Udacity TensorFlow course created by Google feeds batches through tf.placeholder:

for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}

tf_train_dataset and tf_train_labels are defined as follows.
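A minimal sketch of what those placeholders might look like; batch_size, image_size, and num_labels below are assumptions for illustration, not the course's exact values:

import tensorflow as tf

batch_size = 128      # assumed; must match the offset arithmetic above
image_size = 28       # assumed flattened input dimensions
num_labels = 10       # assumed number of classes

# One placeholder per minibatch tensor; feed_dict fills them at each sess.run call.
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))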