
I want to add more layers to my neural network in TensorFlow, but I am getting the following error:

ValueError: Dimensions must be equal, but are 256 and 784 for 'MatMul_1' (op: 'MatMul') with input shapes: [?,256], [784,256]. 

This is how I create the weights and biases:

# Store layers weight & bias 
weights = { 
    'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes])) 
} 
biases = { 
    'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 

And here is how I build the model:

# Hidden layer with RELU activation 
layer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']), biases['hidden_layer']) 
layer_1 = tf.nn.relu(layer_1) 
layer_1 = tf.nn.dropout(layer_1, keep_prob) 

layer_2 = tf.add(tf.matmul(layer_1, weights['hidden_layer']), biases['hidden_layer']) 
layer_2 = tf.nn.relu(layer_2) 
layer_2 = tf.nn.dropout(layer_2, keep_prob) 
# Output layer with linear activation 
logits = tf.matmul(layer_2, weights['out']) + biases['out'] 

The error is most likely in layer_2. I am using the MNIST dataset. The shapes of x, y, and x_flat after reshaping are:

x shape is (?, 28, 28, 1) 
y shape is (?, 10) 
x flat shape is (?, 784) 
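For reference, here is a minimal sketch of how these tensors could be defined; the placeholder definitions and the tf.reshape call are assumptions, since the question does not show them (TensorFlow 1.x API):

import tensorflow as tf

n_input = 784          # 28 * 28 * 1 pixels per flattened MNIST image
n_hidden_layer = 256   # hidden width implied by the error message
n_classes = 10         # ten digit classes

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, n_classes])
keep_prob = tf.placeholder(tf.float32)

# Flatten each image into a 784-vector: (?, 28, 28, 1) -> (?, 784)
x_flat = tf.reshape(x, [-1, n_input])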

Answer


You should probably use different weights and biases for layer 1 and layer 2.

The problem is that both layer 1 and layer 2 are built for inputs of size 784, but the output of layer 1 has size 256, so layer 2 cannot consume it.

Specifically, the matrices layer_1 and weights['hidden_layer'] that you try to multiply on this line have incompatible dimensions:

layer_2 = tf.add(tf.matmul(layer_1, weights['hidden_layer']), biases['hidden_layer']) 
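To see why this fails, recall that tf.matmul(a, b) requires the inner dimensions to agree: a of shape [m, k] times b of shape [k, n] gives [m, n]. Here is a minimal, self-contained reproduction of the shape clash; the zero tensors are stand-ins for illustration:

import tensorflow as tf

a = tf.zeros([1, 256])     # stand-in for layer_1's output: shape [?, 256]
w = tf.zeros([784, 256])   # stand-in for weights['hidden_layer']
# tf.matmul(a, w)          # ValueError: inner dimensions 256 and 784 differ

w2 = tf.zeros([256, 256])  # a second-layer weight of the right shape
ok = tf.matmul(a, w2)      # works: result has shape [1, 256]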

This should work instead:

# Store layers weight & bias 
weights = { 
    'layer_1': tf.Variable(tf.random_normal([n_input, n_hidden_layer])), 
    'layer_2': tf.Variable(tf.random_normal([n_hidden_layer, n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes])) 
} 
biases = { 
    'layer_1': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'layer_2': tf.Variable(tf.random_normal([n_hidden_layer])), 
    'out': tf.Variable(tf.random_normal([n_classes])) 
} 

# Hidden layer with RELU activation 
layer_1 = tf.add(tf.matmul(x_flat, weights['layer_1']), biases['layer_1']) 
layer_1 = tf.nn.relu(layer_1) 
layer_1 = tf.nn.dropout(layer_1, keep_prob) 

layer_2 = tf.add(tf.matmul(layer_1, weights['layer_2']), biases['layer_2']) 
layer_2 = tf.nn.relu(layer_2) 
layer_2 = tf.nn.dropout(layer_2, keep_prob) 
# Output layer with linear activation 
logits = tf.matmul(layer_2, weights['out']) + biases['out'] 
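
For completeness, here is a hedged sketch of how the corrected graph might be trained; the loss, optimizer, learning rate, and MNIST loading below are assumptions not shown in the question (TensorFlow 1.x API):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Softmax cross-entropy on the linear output layer defined above
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
optimizer = tf.train.GradientDescentOptimizer(0.1).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        batch_x, batch_y = mnist.train.next_batch(128)
        # Restore the (?, 28, 28, 1) shape expected by the x placeholder
        batch_x = batch_x.reshape(-1, 28, 28, 1)
        sess.run(optimizer,
                 feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5})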

Yes, of course, it was such a simple thing. I completely forgot that I was using the same weights and biases. Thank you so much; I had been stressing over this for an hour and completely overlooked it. Thanks.