2017-04-14

Restoring a TensorFlow model

I want to restore my TensorFlow model, which is a linear regression network. I'm sure I'm doing something wrong, because my predictions are poor. While training I keep a held-out test set, and the predictions on that test set look good; but when I try to restore the same model, the predictions look bad.

Here is how I save the model:

with tf.Session() as sess:
    saver = tf.train.Saver()
    init = tf.global_variables_initializer()
    sess.run(init)
    training_data, ground_truth = d.get_training_data()
    testing_data, testing_ground_truth = d.get_testing_data()

    for iteration in range(config["training_iterations"]):
        start_pos = np.random.randint(len(training_data) - config["batch_size"])
        batch_x = training_data[start_pos:start_pos+config["batch_size"], :, :]
        batch_y = ground_truth[start_pos:start_pos+config["batch_size"]]
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        train_acc, train_loss = sess.run([accuracy, cost], feed_dict={x: batch_x, y: batch_y})

        sess.run(optimizer, feed_dict={x: testing_data, y: testing_ground_truth})
        test_acc, test_loss = sess.run([accuracy, cost], feed_dict={x: testing_data, y: testing_ground_truth})
        samples = sess.run(pred, feed_dict={x: testing_data})
        # print samples
        data.compute_acc(samples, testing_ground_truth)

        print("Training\tAcc: {}\tLoss: {}".format(train_acc, train_loss))
        print("Testing\t\tAcc: {}\tLoss: {}".format(test_acc, test_loss))
        print("Iteration: {}".format(iteration))

        if iteration % config["save_step"] == 0:
            saver.save(sess, config["save_model_path"]+str(iteration)+".ckpt")

Here are some samples from my test set. You can see that My prediction is reasonably close to Actual:

My prediction: -12.705 Actual : -10.0 
My prediction: 0.000 Actual : 8.0 
My prediction: -14.313 Actual : -23.0 
My prediction: 17.879 Actual : 13.0 
My prediction: 17.452 Actual : 24.0 
My prediction: 22.886 Actual : 29.0 
Custom accuracy: 5.0159861487 
Training Acc: 5.63836860657 Loss: 25.6545143127 
Testing  Acc: 4.238052845 Loss: 22.2736053467 
Iteration: 6297 

And here is how I restore the model:

with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, config["retore_model_path"]+"3000.ckpt")

    init = tf.global_variables_initializer()
    sess.run(init)

    pred = sess.run(pred, feed_dict={x: predict_data})[0]
    print("Prediction: {:.3f}\tGround truth: {:.3f}".format(pred, ground_truth))

But here is what the predictions look like. You can see that Prediction always hovers around 0:

Prediction: 0.355  Ground truth: -22.000 
Prediction: -0.035  Ground truth: 3.000 
Prediction: -1.005  Ground truth: -3.000 
Prediction: -0.184  Ground truth: 1.000 
Prediction: 1.300  Ground truth: 5.000 
Prediction: 0.133  Ground truth: -5.000 

Here is my TensorFlow version (yes, I know I need to update):

Python 2.7.6 (default, Oct 26 2016, 20:30:19) 
[GCC 4.8.4] on linux2 
Type "help", "copyright", "credits" or "license" for more information. 
>>> import tensorflow as tf 
>>> print(tf.__version__) 
0.12.0-rc1 

Not sure if this will help, but I tried moving the saver.restore() call to after sess.run(init), and the predictions I get are almost all identical. I think the near-zero predictions above happen because sess.run(init) initializes the variables.

With the ordering changed like this:

sess.run(init) 
saver.restore(sess, config["retore_model_path"]+"6000.ckpt") 

But then the predictions look like this:

Prediction: -15.840  Ground truth: 2.000 
Prediction: -15.840  Ground truth: -7.000 
Prediction: -0.000  Ground truth: 12.000 
Prediction: -15.840  Ground truth: -9.000 
Prediction: -15.175  Ground truth: -27.000 

Answer


When you restore from a checkpoint, you must not initialize the variables. As you noticed at the end of your question, the lines

init = tf.global_variables_initializer() 
sess.run(init) 

overwrite the variables you just restored. Oops! :)

Comment out those two lines and I suspect you'll be good to go.
