
I am trying to use image recognition with a CNN, via the mxnet package in R, to predict a scalar output (in my case, a wait time) from an image.

However, when I do this, I get the same output every time (it predicts the same number, probably just the mean of all the targets). How can I get it to predict the scalar output correctly?

Also, my images have already been preprocessed: they are grayscaled and converted to the pixel format below. I am essentially using the images to predict wait times, which is why my train_y is the current wait time in seconds, and why I have not converted it to the [0,1] range. I would prefer a regression-type output, or some kind of scalar output that predicts the wait time from the image.

What other approaches would you recommend for this problem? I am not sure whether my approach is correct.

Here is my reproducible code:

library(caret)  # needed for createDataPartition below

set.seed(0) 

df <- data.frame(replicate(784, runif(7538))) 
df$waittime <- 1000 * runif(7538) 


training_index <- createDataPartition(df$waittime, p = .9, times = 1) 
training_index <- unlist(training_index) 

train_set <- df[training_index,] 
dim(train_set) 
test_set <- df[-training_index,] 
dim(test_set) 


## Fix train and test datasets 
train_data <- data.matrix(train_set) 
train_x <- t(train_data[, -785]) 
train_y <- train_data[,785] 
train_array <- train_x 
dim(train_array) <- c(28, 28, 1, ncol(train_array)) 


test_data <- data.matrix(test_set) 
test_x <- t(test_data[, -785]) 
test_y <- test_data[, 785] 
test_array <- test_x 
dim(test_array) <- c(28, 28, 1, ncol(test_x)) 




library(mxnet) 
## Model 
mx_data <- mx.symbol.Variable('data') 
## 1st convolutional layer 5x5 kernel and 20 filters. 
conv_1 <- mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20) 
tanh_1 <- mx.symbol.Activation(data = conv_1, act_type = "tanh") 
pool_1 <- mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(2, 2), stride = c(2,2)) 
## 2nd convolutional layer 5x5 kernel and 50 filters. 
conv_2 <- mx.symbol.Convolution(data = pool_1, kernel = c(5,5), num_filter = 50) 
tanh_2 <- mx.symbol.Activation(data = conv_2, act_type = "tanh") 
pool_2 <- mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(2, 2), stride = c(2, 2)) 
## 1st fully connected layer 
flat <- mx.symbol.Flatten(data = pool_2) 
fcl_1 <- mx.symbol.FullyConnected(data = flat, num_hidden = 500) 
tanh_3 <- mx.symbol.Activation(data = fcl_1, act_type = "tanh") 
## 2nd fully connected layer 
fcl_2 <- mx.symbol.FullyConnected(data = tanh_3, num_hidden = 1) 
## Output 
#NN_model <- mx.symbol.SoftmaxOutput(data = fcl_2) 
label <- mx.symbol.Variable("label") 
#NN_model <- mx.symbol.MakeLoss(mx.symbol.square(mx.symbol.Reshape(fcl_2, shape = 0) - label)) 
NN_model <- mx.symbol.LinearRegressionOutput(fcl_2) 


## Device used. Sadly not the GPU :-(
#device <- mx.gpu 
#Didn't work well, predicted same number continuously regardless of image 
## Train on 1200 samples 
model <- mx.model.FeedForward.create(NN_model, X = train_array, y = train_y, 
            #          ctx = device, 
            num.round = 30, 
            array.batch.size = 100, 
            initializer = mx.init.uniform(0.002), 
            learning.rate = 0.00001, 
            momentum = 0.9, 
            wd = 0.00001, 
            eval.metric = mx.metric.rmse, 
            epoch.end.callback = mx.callback.log.train.metric(100)) 



pred <- predict(model, test_array) 
# gives the same numeric output for every test image

Have you converted the data to [0,1]? –


Yes, the data is all within [0,1], just like in this dummy example – Ic3MaN911


If you run the test example, you will see that the data is all in [0,1] – Ic3MaN911

Answers


Just modify some of the code: put train_y in [0,1] as well, and use initializer = mx.init.Xavier(factor_type = "in", magnitude = 2.34).

library(caret) 

set.seed(0) 

df <- data.frame(replicate(784, runif(7538))) 
df$waittime <- runif(7538) 

training_index <- createDataPartition(df$waittime, p = .9, times = 1) 
training_index <- unlist(training_index) 

train_set <- df[training_index, ] 
dim(train_set) 
test_set <- df[-training_index, ] 
dim(test_set) 

## Fix train and test datasets 
train_data <- data.matrix(train_set) 
train_x <- t(train_data[,-785]) 
train_y <- train_data[, 785] 
train_array <- train_x 
dim(train_array) <- c(28, 28, 1, ncol(train_array)) 

test_data <- data.matrix(test_set) 
test_x <- t(test_data[, -785]) 
test_y <- test_data[, 785] 
test_array <- test_x 
dim(test_array) <- c(28, 28, 1, ncol(test_x)) 

library(mxnet) 
## Model 
mx_data <- mx.symbol.Variable('data') 
## 1st convolutional layer 5x5 kernel and 20 filters. 
conv_1 <- mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20) 
tanh_1 <- mx.symbol.Activation(data = conv_1, act_type = "tanh") 
pool_1 <- mx.symbol.Pooling(data = tanh_1, pool_type = "max", kernel = c(2, 2), stride = c(2, 2)) 
## 2nd convolutional layer 5x5 kernel and 50 filters. 
conv_2 <- mx.symbol.Convolution(data = pool_1, kernel = c(5, 5), num_filter = 50) 
tanh_2 <- mx.symbol.Activation(data = conv_2, act_type = "tanh") 
pool_2 <- mx.symbol.Pooling(data = tanh_2, pool_type = "max", kernel = c(2, 2), stride = c(2, 2)) 
## 1st fully connected layer 
flat <- mx.symbol.Flatten(data = pool_2) 
fcl_1 <- mx.symbol.FullyConnected(data = flat, num_hidden = 500) 
tanh_3 <- mx.symbol.Activation(data = fcl_1, act_type = "tanh") 
## 2nd fully connected layer 
fcl_2 <- mx.symbol.FullyConnected(data = tanh_3, num_hidden = 1) 
## Output 
#NN_model <- mx.symbol.SoftmaxOutput(data = fcl_2) 
label <- mx.symbol.Variable("label") 
#NN_model <- mx.symbol.MakeLoss(mx.symbol.square(mx.symbol.Reshape(fcl_2, shape = 0) - label)) 
NN_model <- mx.symbol.LinearRegressionOutput(fcl_2) 

mx.set.seed(0) 
model <- mx.model.FeedForward.create(NN_model, 
            X = train_array, 
            y = train_y, 
            num.round = 4, 
            array.batch.size = 64, 
            initializer = mx.init.Xavier(factor_type = "in", magnitude = 2.34), 
            learning.rate = 0.00001, 
            momentum = 0.9, 
            wd = 0.00001, 
            eval.metric = mx.metric.rmse) 

pred <- predict(model, test_array) 

pred[1,1:10] 
# [1] 0.4859098 0.4865469 0.5671642 0.5729486 0.5008956 0.4962234 0.4327411 0.5478653 0.5446281 0.5707113 

The reason I kept train_y as-is is that when I predict, that is the result I want. Is there any way to preserve that? train_y is essentially the wait time in seconds, which is exactly the output I want. If I transform it, the result would be fairly meaningless, especially since normalizing train_y gives different results than normalizing test_y, because they have different maxima and minima. Does that make sense? – Ic3MaN911


Sorry, I don't see what the problem with scaling 'train_y' is. In this case, you can simply multiply by 1000. –
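To address the concern about train and test having different minima and maxima: a common convention is to compute the scaling statistics from the training set only and reuse them for both scaling and inverse-scaling. A minimal sketch in base R (the helper names `scale_y` and `unscale_y` are illustrative, not from any library):

```r
# Scale the target with statistics from the TRAINING set only, then invert
# the same transform on the predictions to get back to seconds.
scale_y   <- function(y, y_min, y_max) (y - y_min) / (y_max - y_min)
unscale_y <- function(y_scaled, y_min, y_max) y_scaled * (y_max - y_min) + y_min

set.seed(0)
train_y <- 1000 * runif(100)   # wait times in seconds
y_min <- min(train_y)
y_max <- max(train_y)

train_y_scaled <- scale_y(train_y, y_min, y_max)   # now in [0, 1]

# After training on train_y_scaled, map model output back to seconds with
# the SAME train-set min/max, so train and test share one transform:
pred_scaled  <- train_y_scaled[1:5]                # stand-in for model output
pred_seconds <- unscale_y(pred_scaled, y_min, y_max)
all.equal(pred_seconds, train_y[1:5])              # round-trip recovers seconds
```

Because the same (train-set) min and max are used in both directions, scaled predictions on the test set invert consistently, which avoids the mismatch described in the comment above.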


So I tried scaling: I divided by 20000, predicted with that, and then multiplied the predictions back by 20000, but it resulted in a poor model with only 9% accuracy, even though the range of my scaled numbers was [0.00005, 0.73720] – Ic3MaN911


It looks like your network is collapsing, which could have several causes. I would try the following modifications:

  • Use ReLU activations instead of tanh. In conv nets, ReLU has proven to be a more robust activation than sigmoid or tanh.
  • Use batch normalization between the convolutional layers (see the paper here).
  • Split your range into several bins and use softmax. If you must do regression, consider a separate regression network for each bin, and select the right regression network based on the softmax output. Cross-entropy loss has shown more success at learning highly non-linear functions.
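The first two suggestions could be sketched against the question's network as follows. This is only a sketch, assuming the R mxnet package, where `mx.symbol.BatchNorm` is the batch-normalization operator; hyperparameters would still need tuning:

```r
# Sketch: the question's network with tanh swapped for ReLU and batch
# normalization inserted between each convolution and its activation.
library(mxnet)

mx_data <- mx.symbol.Variable('data')
## 1st convolutional layer: 5x5 kernel, 20 filters, BatchNorm + ReLU
conv_1 <- mx.symbol.Convolution(data = mx_data, kernel = c(5, 5), num_filter = 20)
bn_1   <- mx.symbol.BatchNorm(data = conv_1)
relu_1 <- mx.symbol.Activation(data = bn_1, act_type = "relu")
pool_1 <- mx.symbol.Pooling(data = relu_1, pool_type = "max", kernel = c(2, 2), stride = c(2, 2))
## 2nd convolutional layer: 5x5 kernel, 50 filters, BatchNorm + ReLU
conv_2 <- mx.symbol.Convolution(data = pool_1, kernel = c(5, 5), num_filter = 50)
bn_2   <- mx.symbol.BatchNorm(data = conv_2)
relu_2 <- mx.symbol.Activation(data = bn_2, act_type = "relu")
pool_2 <- mx.symbol.Pooling(data = relu_2, pool_type = "max", kernel = c(2, 2), stride = c(2, 2))
## Fully connected layers, unchanged apart from the ReLU
flat   <- mx.symbol.Flatten(data = pool_2)
fcl_1  <- mx.symbol.FullyConnected(data = flat, num_hidden = 500)
relu_3 <- mx.symbol.Activation(data = fcl_1, act_type = "relu")
fcl_2  <- mx.symbol.FullyConnected(data = relu_3, num_hidden = 1)
NN_model <- mx.symbol.LinearRegressionOutput(fcl_2)
```

The rest of the training call can stay the same; only the symbol graph changes.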