2016-04-25

R - neuralnet - traditional backprop seems strange

I'm experimenting with different algorithms in the neuralnet package, but when I try the traditional backprop algorithm the results are very strange/disappointing. Nearly all of the predictions come out at ~.33??? I assume I must be using the algorithm incorrectly, since the same model run with the default rprop+ does discriminate between samples. Surely regular backpropagation can't be this bad, especially given how quickly it converges to the supplied threshold.

library(neuralnet) 
data(infert) 

set.seed(123) 
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous, 
          data = infert, hidden = 3, 
          learningrate = 0.01, 
          algorithm = "backprop", 
          err.fct = "ce", 
          linear.output = FALSE, 
          lifesign = 'full', 
          lifesign.step = 100) 

preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result 

summary(preds) 
     V1   
Min. :0.3347060 
1st Qu.:0.3347158 
Median :0.3347161 
Mean :0.3347158 
3rd Qu.:0.3347162 
Max. :0.3347286 
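It may help to note that the near-constant ~0.335 output is just the base rate of case in the infert data, i.e. the backprop fit appears to have collapsed to predicting the class prior. A quick check (my own observation, not from the original post):

```r
# Proportion of case == 1 in infert matches the ~0.3347 constant prediction
data(infert)
mean(infert$case)  # ≈ 0.3347
```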

Is there some setting here that should be different?

Example with the default neuralnet settings (rprop+):

set.seed(123) 
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous, 
          data = infert, hidden = 3, 
          err.fct = "ce", 
          linear.output = FALSE, 
          lifesign = 'full', 
          lifesign.step = 100) 

preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result 

summary(preds) 
     V1   
Min. :0.1360947 
1st Qu.:0.1516387 
Median :0.1984035 
Mean :0.3346734 
3rd Qu.:0.4838288 
Max. :1.0000000 
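As an aside, if you want hard class labels from either model, the usual approach is to threshold the predicted probabilities at 0.5; a minimal sketch with illustrative values (not tied to a particular fit):

```r
# Convert predicted probabilities to 0/1 class labels with a 0.5 cutoff
probs <- c(0.136, 0.198, 0.484, 1.000)  # illustrative values
labels <- as.integer(probs > 0.5)
labels  # 0 0 0 1
```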

Answer


I'd suggest normalizing your data before feeding it to the neural network. If you do that, then you're good to go:

library(neuralnet) 
data(infert) 

set.seed(123) 
infert[,c('age','parity','induced','spontaneous')] <- scale(infert[,c('age','parity','induced','spontaneous')]) 
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous, 
          data = infert, hidden = 3, 
          learningrate = 0.01, 
          algorithm = "backprop", 
          err.fct = "ce", 
          linear.output = FALSE, 
          lifesign = 'full', 
          lifesign.step = 100) 

preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result 
summary(preds) 
     V1    
Min. :0.02138785 
1st Qu.:0.21002456 
Median :0.21463423 
Mean :0.33471568 
3rd Qu.:0.47239818 
Max. :0.97874839 
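One caveat with scale(): it standardizes using the column means and SDs of whatever you hand it, so any new observations passed to compute() later should be transformed with the same training centers and SDs, which scale() stores as attributes. A minimal sketch:

```r
data(infert)
cols <- c("age", "parity", "induced", "spontaneous")
scaled <- scale(infert[, cols])

# scale() keeps the training means/SDs as attributes
ctr <- attr(scaled, "scaled:center")
sds <- attr(scaled, "scaled:scale")

# Apply the *same* transformation to new rows before calling compute()
new_rows <- infert[1:5, cols]
new_scaled <- scale(new_rows, center = ctr, scale = sds)
```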

There are actually several questions about how to handle this. Why do we have to normalize the input for an artificial neural network? seems to have some of the most detail.
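If you'd rather keep the inputs in a fixed [0, 1] range instead of z-scoring them, min-max normalization is a common alternative; the normalize helper below is just an illustration, not part of neuralnet:

```r
# Min-max normalize each column to the [0, 1] range
normalize <- function(x) (x - min(x)) / (max(x) - min(x))

data(infert)
cols <- c("age", "parity", "induced", "spontaneous")
infert[, cols] <- lapply(infert[, cols], normalize)

# Every normalized column now runs exactly from 0 to 1
sapply(infert[, cols], range)
```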

Interesting, I should have known about scaling. Thanks. Do you have any idea why the rprop+ algorithm is able to handle this by default without scaling? – cdeterman

I don't. I would imagine it is done by default somewhere in the code, but I don't know why it would make a difference. – Tchotchke

Fair enough, thank you for answering my question. I'll poke around and maybe ask that as a separate question later. – cdeterman
