
This is batch gradient descent implemented with TensorFlow. Why doesn't the mean squared error decrease as the number of epochs increases?

When I run this code, the MSE stays the same.

import tensorflow as tf 
from sklearn.preprocessing import StandardScaler 
import numpy as np 
from sklearn.datasets import fetch_california_housing 

housing=fetch_california_housing() 

std=StandardScaler() 
scaled_housing_data=std.fit_transform(housing.data) 

m,n=scaled_housing_data.shape 
scaled_housing_data.shape 

scaled_housing_data_with_bias=np.c_[np.ones((m,1)),scaled_housing_data] 

n_epochs=1000 
n_learning_rate=0.01 

x=tf.constant(scaled_housing_data_with_bias,dtype=tf.float32) 
y=tf.constant(housing.target.reshape(-1,1),dtype=tf.float32) 
theta=tf.Variable(tf.random_uniform([n+1,1],-1.0,1.0,seed=42)) 
y_pred=tf.matmul(x,theta) 

error=y_pred-y 
mse=tf.reduce_mean(tf.square(error)) 
gradients=2/m*tf.matmul(tf.transpose(x),error) 

training_op=tf.assign(theta,theta-n_learning_rate*gradients) 

init=tf.global_variables_initializer() 
with tf.Session() as sess: 
    sess.run(init) 

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval() 

Output

('Epoch', 0, 'MSE =', 2.7544272) 
('Epoch', 100, 'MSE =', 2.7544272) 
('Epoch', 200, 'MSE =', 2.7544272) 
('Epoch', 300, 'MSE =', 2.7544272) 
('Epoch', 400, 'MSE =', 2.7544272) 
('Epoch', 500, 'MSE =', 2.7544272) 
('Epoch', 600, 'MSE =', 2.7544272) 
('Epoch', 700, 'MSE =', 2.7544272) 
('Epoch', 800, 'MSE =', 2.7544272) 
('Epoch', 900, 'MSE =', 2.7544272) 

The mean squared error (MSE) stays the same no matter what. Please help.

Answers


If your MSE stays the same, it means theta is not being updated, which in turn means the gradients are zero. Change this line and check:

gradients = 2.0/m * tf.matmul(tf.transpose(x), error)  # 2/m is integer division in Python 2 and evaluates to 0
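
A minimal sketch of why the original line yields zero gradients under Python 2, and an alternative that avoids writing the division at all by letting TensorFlow differentiate the loss (this uses tf.gradients from the TF 1.x API and assumes the same mse, theta, and n_learning_rate as in the question):

# Under Python 2, / between two ints is floor division:
#   2 / m    -> 0  (m = 20640 here), so gradients = 0 * (...) = 0
#   2.0 / m  -> a small float, so gradients are non-zero
# Adding `from __future__ import division` would also make 2/m a float.

# Alternative sketch: let TensorFlow compute d(mse)/d(theta) directly,
# so no hand-written 2/m factor is needed.
gradients = tf.gradients(mse, [theta])[0]
training_op = tf.assign(theta, theta - n_learning_rate * gradients)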

Maybe you should try it again. I just copied your code and ran it, and the loss decreases correctly.
Output:

Epoch 0 MSE = 2.75443 
Epoch 100 MSE = 0.632222 
Epoch 200 MSE = 0.57278 
Epoch 300 MSE = 0.558501 
Epoch 400 MSE = 0.54907 
Epoch 500 MSE = 0.542288 
Epoch 600 MSE = 0.537379 
Epoch 700 MSE = 0.533822 
Epoch 800 MSE = 0.531242 
Epoch 900 MSE = 0.529371 
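
For reference, this result is consistent with running the posted code under Python 3, where 2/m is true division, or after applying the 2.0/m fix from the first answer. A minimal sketch (assuming the same x, y, theta, mse, and n_learning_rate defined in the question) that sidesteps the hand-derived gradient entirely by using TensorFlow's built-in optimizer:

# Sketch: replace the manual gradient update with a TF 1.x optimizer.
# The optimizer differentiates mse w.r.t. theta itself, so there is no
# integer-division pitfall to worry about.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=n_learning_rate)
training_op = optimizer.minimize(mse)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)
    best_theta = theta.eval()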