Keras: different training and validation results on the same dataset when using batch normalization

I get high accuracy on training but much lower accuracy on validation, even though I am using the same dataset for both. The problem only occurs when batch normalization is used. Am I implementing it correctly?
Code using batch normalization:
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import (Convolution2D, BatchNormalization, MaxPooling2D,
                          Flatten, Dense)
from keras.callbacks import ModelCheckpoint, EarlyStopping, TensorBoard

# Image size and batch size are not shown in the question; batch_size = 8
# is consistent with the 11 steps per epoch (91 // 8) seen in the output.
img_rows, img_cols = 64, 64
batch_size = 8

train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    directory='../ImageFilter/Images/',
    target_size=(img_rows, img_cols),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True)

model = Sequential()
model.add(Convolution2D(16,
                        kernel_size=(3, 3),
                        strides=(2, 2),
                        activation='relu',
                        input_shape=(img_rows, img_cols, 3)))
model.add(BatchNormalization())
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

epochs = 100
patience = 6
n_images = 91
file_path = 'imageFilterCNN.hdf5'

checkpointer = ModelCheckpoint(file_path, monitor='val_acc', verbose=0,
                               save_best_only=True)
earlystop = EarlyStopping(monitor='val_acc', patience=patience, verbose=0,
                          mode='auto')
tboard = TensorBoard('./logs')

# Note: the training generator is reused as validation data, so both
# passes see exactly the same images.
model.fit_generator(
    train_generator,
    steps_per_epoch=n_images // batch_size,
    epochs=epochs,
    callbacks=[checkpointer, earlystop, tboard],
    validation_data=train_generator,
    validation_steps=n_images // batch_size)
Output:

Epoch 15/100
11/11 [==============================] - 2s - loss: 0.0092 - acc: 1.0000 - val_loss: 3.0321 - val_acc: 0.5568
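Not part of the original post, but a quick way to pin the gap on BatchNormalization's train/test behaviour is to re-score the fitted model on the very same generator in inference mode and compare against the training metrics (a sketch reusing the names above):

# Sanity check: evaluate_generator() runs the network in inference mode,
# so BatchNormalization uses its moving mean/variance instead of the
# per-batch statistics used during fitting. A large gap between these
# numbers and the training-mode metrics points at the BN layer.
scores = model.evaluate_generator(train_generator,
                                  steps=n_images // batch_size)
print(dict(zip(model.metrics_names, scores)))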
What is strange about these results? Training accuracy is always better than test accuracy; do you have any reason to expect generalization to be trivial? – lejlot
I am validating on the same dataset the model is trained on, so the results should be very similar, but they are not. – mcudic
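For context on why the numbers can differ even on identical data: during training, BatchNormalization normalizes each batch with that batch's own mean and variance, while during validation it uses exponential moving averages of those statistics. With only 91 images and Keras's default momentum of 0.99, the moving averages are still far from the true data statistics, so the same inputs are normalized very differently at validation time. A minimal, self-contained sketch of the effect (toy random data, not the poster's images):

import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, BatchNormalization, Dense

np.random.seed(0)
x = np.random.rand(32, 8, 8, 3).astype('float32')   # toy inputs
y = np.eye(2)[np.random.randint(0, 2, 32)]          # toy one-hot labels

model = Sequential([
    Flatten(input_shape=(8, 8, 3)),
    BatchNormalization(),            # moving stats start at mean 0, var 1
    Dense(2, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

hist = model.fit(x, y, batch_size=8, epochs=1, verbose=0)
print('train-mode loss:', hist.history['loss'][-1])
# evaluate() runs in inference mode: after only four weight updates the
# moving mean/variance have barely moved, so the same 32 samples are
# normalized differently and the loss no longer matches.
print('eval-mode loss: ', model.evaluate(x, y, verbose=0)[0])

On a dataset this small, a common mitigation is to lower the layer's momentum (e.g. BatchNormalization(momentum=0.9) in Keras 2) or simply train longer so the moving averages have time to catch up.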