
This is my second attempt at implementing gradient descent in one variable, and it keeps diverging. Any ideas? Why does simple gradient descent diverge?

This is simple linear regression, minimizing the residual sum of squares in one variable.
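For reference, writing the residuals as r_i = y_i - (m x_i + b), the objective and its gradient are

    RSS(m, b) = \sum_i r_i^2
    \partial RSS / \partial b = -2 \sum_i r_i
    \partial RSS / \partial m = -2 \sum_i r_i x_i

so adding a small positive multiple of (\sum_i r_i, \sum_i r_i x_i) to (b, m), as the code below does, is a step in the negative gradient direction.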

def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    #y=mx+b
    #some line to predict y values from x values
    m = 1.
    b = 1.

    #a predicted y-value has value mx + b

    for i in range(0, 10):

        #calculate y-value predictions for all x-values
        predicted_yvalues = list()
        for x in xvalues:
            predicted_yvalues.append(m*x + b)

        # predicted_yvalues holds the predicted y-values

        #now calculate the residuals = y-value - predicted y-value for each point
        residuals = list()
        number_of_points = len(yvalues)
        for n in range(0, number_of_points):
            residuals.append(yvalues[n] - predicted_yvalues[n])

        ## calculate the residual sum of squares from the residuals, that is,
        ## square each residual and add them all up. we will try to minimize
        ## the residual sum of squares later.
        residual_sum_of_squares = 0.
        for r in residuals:
            residual_sum_of_squares += r**2
        print("RSS = %s" % residual_sum_of_squares)

        #now make a version of the residuals which is multiplied by the x-values
        residuals_times_xvalues = list()
        for n in range(0, number_of_points):
            residuals_times_xvalues.append(residuals[n] * xvalues[n])

        #now create the sums for the residuals and for the residuals times the x-values
        residuals_sum = sum(residuals)

        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # now multiply the sums by a positive scalar and add each to m and b.

        residuals_sum *= 0.1
        residuals_times_xvalues_sum *= 0.1

        b += residuals_sum
        m += residuals_times_xvalues_sum

        #and repeat until convergence.
        #convergence occurs when ||sum vector|| < some tolerance.
        # ||sum vector|| = sqrt(residuals_sum**2 + residuals_times_xvalues_sum**2)

        #check for convergence
        magnitude_of_sum_vector = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)

Result:

gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567]) 
RSS = 370433.0 
RSS = 300170125.7 
RSS = 4.86943013045e+11 
RSS = 7.90447409339e+14 
RSS = 1.28312217794e+18 
RSS = 2.08287421094e+21 
RSS = 3.38110045417e+24 
RSS = 5.48849288217e+27 
RSS = 8.90939341376e+30 
RSS = 1.44624932026e+34 
Out[108]: 
(-3.475524066284303e+16, -2.4195981188763203e+17) 

Answers


The gradients are huge, so you are following large vectors for long distances (0.1 times a large number is still large). Find a unit vector that points in the proper direction. Something like this (with comprehensions replacing your loops):

def gradient_descent_wtf(xvalues, yvalues):
    tolerance = 0.1

    m = 1.
    b = 1.

    for i in range(0, 10):
        predicted_yvalues = [m*x + b for x in xvalues]

        residuals = [y - y_hat for y, y_hat in zip(yvalues, predicted_yvalues)]

        residual_sum_of_squares = sum(r**2 for r in residuals)  # only needed for debugging purposes
        print("RSS = %s" % residual_sum_of_squares)

        residuals_times_xvalues = [r*x for r, x in zip(residuals, xvalues)]

        residuals_sum = sum(residuals)

        residuals_times_xvalues_sum = sum(residuals_times_xvalues)

        # (residuals_sum, residuals_times_xvalues_sum) is a vector which points in the negative
        # gradient direction. *Find a unit vector which points in the same direction*

        magnitude = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5

        residuals_sum /= magnitude
        residuals_times_xvalues_sum /= magnitude

        b += residuals_sum * 0.1
        m += residuals_times_xvalues_sum * 0.1

        #check for convergence -- this needs work!
        magnitude_of_sum_vector = (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5
        if magnitude_of_sum_vector < tolerance:
            break

    return (b, m)

For example:

>>> gradient_descent_wtf([1,2,3,4,5,6,7,8,9,10],[6,23,8,56,3,24,234,76,59,567]) 
RSS = 370433.0 
RSS = 368732.1655050716 
RSS = 367039.18363896786 
RSS = 365354.0543519137 
RSS = 363676.7775934381 
RSS = 362007.3533123621 
RSS = 360345.7814567845 
RSS = 358692.061974069 
RSS = 357046.1948108295 
RSS = 355408.17991291644 
(1.1157111313023558, 1.9932828425473605) 

That's certainly much more reasonable.

Crafting a numerically stable gradient descent algorithm is not trivial. You may want to consult a good textbook on numerical analysis.
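To illustrate the kind of refinement such a textbook covers, here is a minimal sketch of the same regression with a backtracking (Armijo) line search instead of a fixed step: the step is halved until the RSS decreases by at least a small fraction of what the gradient predicts. The function name, iteration budget, and constants are illustrative assumptions, not part of either answer.

def gradient_descent_backtracking(xvalues, yvalues, iterations=1000, tolerance=1e-6):
    # hypothetical refinement of the answer's code, for illustration only
    m, b = 1., 1.

    def rss(m, b):
        return sum((y - (m*x + b))**2 for x, y in zip(xvalues, yvalues))

    for i in range(iterations):
        residuals = [y - (m*x + b) for x, y in zip(xvalues, yvalues)]

        # (g_b, g_m) is -1/2 the gradient of the RSS, as in the code above
        g_b = sum(residuals)
        g_m = sum(r*x for r, x in zip(residuals, xvalues))

        gradient_norm_sq = g_b**2 + g_m**2
        if gradient_norm_sq**0.5 < tolerance:
            break

        # backtracking line search: halve the step until the RSS decreases
        # by at least a small fraction of what the gradient predicts
        step = 1.0
        while rss(m + step*g_m, b + step*g_b) > rss(m, b) - 1e-4 * step * gradient_norm_sq:
            step *= 0.5
            if step < 1e-12:
                break

        b += step * g_b
        m += step * g_m

    return (b, m)

On a strictly convex quadratic like this RSS, the search direction is always downhill, so the loop makes monotone progress toward the least-squares solution, which for the data above is (b, m) ≈ (-100.93, 37.55).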


First, your code is right.

But you should think about the math a little when you do linear regression.

For example, if the residual sum is -205.8 and your learning rate is 0.1, you get a huge descent step of -20.58.

That is a really big step, and afterwards you cannot get back to the correct m and b. You have to make your steps small enough.

There are two ways to make the gradient descent step reasonable:

  1. Initialize a small learning rate, such as 0.001 or 0.0003 (see the sketch after this list).
  2. Divide your step size by the total number of your input values.
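
For instance, here is a minimal sketch of the first fix, assuming nothing beyond the question's own code: the loop is unchanged except that the 0.1 multiplier becomes a small learning rate and the iteration budget is larger. The function name and default values are illustrative, not from the answer.

def gradient_descent_small_lr(xvalues, yvalues, learning_rate=0.001, iterations=1000):
    # the question's algorithm, with the 0.1 multiplier replaced by a
    # small learning rate (fix 1 above)
    m, b = 1., 1.
    tolerance = 0.1

    for i in range(iterations):
        residuals = [y - (m*x + b) for x, y in zip(xvalues, yvalues)]

        # scaled update, exactly as in the question's code
        residuals_sum = sum(residuals) * learning_rate
        residuals_times_xvalues_sum = sum(r*x for r, x in zip(residuals, xvalues)) * learning_rate

        b += residuals_sum
        m += residuals_times_xvalues_sum

        # same convergence check as in the question
        if (residuals_sum**2 + residuals_times_xvalues_sum**2)**0.5 < tolerance:
            break

    return (b, m)

With the data from the question, the RSS now shrinks at every step instead of exploding, because each update is small enough to stay on the downhill side of the quadratic.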