R - neuralnet - traditional backprop seems strange

I am experimenting with different algorithms in the neuralnet package, but when I try the conventional backprop algorithm the results are very strange/disappointing. Almost all of the computed results are ~.33??? I assume I must be using the algorithm incorrectly, since if I run the default rprop+ it does distinguish between samples. Surely normal backpropagation isn't this terrible, especially if it converges so quickly to the supplied threshold.
library(neuralnet)
data(infert)
set.seed(123)
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous,
data = infert, hidden = 3,
learningrate = 0.01,
algorithm = "backprop",
err.fct = "ce",
linear.output = FALSE,
lifesign = 'full',
lifesign.step = 100)
preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result
summary(preds)
V1
Min. :0.3347060
1st Qu.:0.3347158
Median :0.3347161
Mean :0.3347158
3rd Qu.:0.3347162
Max. :0.3347286
Is there some setting here that should be different?
Example with the default neuralnet (rprop+):
set.seed(123)
fit <- neuralnet::neuralnet(formula = case~age+parity+induced+spontaneous,
data = infert, hidden = 3,
err.fct = "ce",
linear.output = FALSE,
lifesign = 'full',
lifesign.step = 100)
preds <- neuralnet::compute(fit, infert[,c("age","parity","induced","spontaneous")])$net.result
summary(preds)
V1
Min. :0.1360947
1st Qu.:0.1516387
Median :0.1984035
Mean :0.3346734
3rd Qu.:0.4838288
Max. :1.0000000
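The comments below point to input scaling as the likely culprit: covariates like age span a much larger range than the others, which can saturate the logistic units under plain backprop. A minimal sketch of one possible fix, max-min normalizing the covariates to [0, 1] before fitting (this normalization choice is my assumption, not something the original post tried):

```r
library(neuralnet)
data(infert)

# Max-min normalize the four covariates to [0, 1]
vars <- c("age", "parity", "induced", "spontaneous")
scaled <- infert
scaled[vars] <- lapply(infert[vars], function(x) (x - min(x)) / (max(x) - min(x)))

set.seed(123)
fit <- neuralnet::neuralnet(case ~ age + parity + induced + spontaneous,
                            data = scaled, hidden = 3,
                            learningrate = 0.01,
                            algorithm = "backprop",
                            err.fct = "ce",
                            linear.output = FALSE)

# Predictions should now spread out rather than collapsing to ~0.33
preds <- neuralnet::compute(fit, scaled[, vars])$net.result
summary(preds)
```

With the inputs on a comparable scale, a fixed learning rate of 0.01 is a reasonable step size for every weight, which is exactly the assumption plain backprop makes and rprop+ does not.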
Interesting, I should have known about scaling. Thanks. Do you have any idea why the 'rprop+' algorithm is able to handle this by default without scaling? – cdeterman
I don't - I would imagine it is done by default somewhere in the code, but I don't know why it would make a difference. – Tchotchke
Fair enough, thanks for answering my question. I'll wait around and maybe ask this again later. – cdeterman