TensorFlow: restore variables only if they exist? What is the most idiomatic way to do this?
Can variables be restored only if a checkpoint for them exists? For example, consider the following small example:
import tensorflow as tf
import glob
import sys
import os

with tf.variable_scope('volatile'):
    x = tf.get_variable('x', initializer=0)
with tf.variable_scope('persistent'):
    y = tf.get_variable('y', initializer=0)
    add1 = tf.assign_add(y, 1)

saver = tf.train.Saver(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, 'persistent'))
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
tf.get_default_graph().finalize()

print('save file', sys.argv[1])
if glob.glob(sys.argv[1] + '*'):
    saver.restore(sess, sys.argv[1])
print(sess.run(y))
sess.run(add1)
print(sess.run(y))
saver.save(sess, sys.argv[1])
When run twice with the same argument, the program first prints 0 then 1, and on the second run 1 then 2, as expected. Now suppose you update the code with a new feature by adding z = tf.get_variable('z', initializer=0) after add1 inside the persistent scope. Running it again, now that the old save file exists, breaks with:
NotFoundError (see above for traceback): Key persistent/z not found in checkpoint
	 [[Node: save/RestoreV2_1 = RestoreV2[dtypes=[DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
	 [[Node: save/Assign_1/_18 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_12_save/Assign_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]
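One common workaround (a sketch, not necessarily the most idiomatic answer) is to build the Saver only from the variables whose names actually appear in the checkpoint, e.g. by listing the checkpoint's keys with tf.train.list_variables and intersecting them with the graph's variables; newly added variables like persistent/z then keep their initializer values instead of triggering a NotFoundError. The selection logic itself can be illustrated without TensorFlow; the function name and the dictionaries below are illustrative, not part of any API:

```python
def restorable_variables(graph_vars, checkpoint_keys):
    """Return the subset of graph variables whose names exist in the
    checkpoint, so a Saver built from them restores cleanly even after
    new variables were added to the graph.

    graph_vars: dict mapping variable name -> variable object
    checkpoint_keys: iterable of names stored in the checkpoint
        (in real code, e.g. [name for name, _ in
         tf.train.list_variables(ckpt_path)])
    """
    present = set(checkpoint_keys)
    return {name: var for name, var in graph_vars.items() if name in present}

# Mirroring the question: the checkpoint was written before 'z' existed.
graph_vars = {'persistent/y': 'y_var', 'persistent/z': 'z_var'}
ckpt_keys = ['persistent/y']
print(sorted(restorable_variables(graph_vars, ckpt_keys)))  # ['persistent/y']
```

In the real script, the result would be passed as the var_list argument to tf.train.Saver before calling saver.restore, while a second Saver over all persistent variables handles saving.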