
I am struggling with what seems like a simple problem. I cannot figure out how to match my input images to the probabilities produced by my model. Training and validation of my model (vanilla VGG16, retrained for 2 classes, dogs and cats) go fine, getting me close to 97% validation accuracy, but when I run a check to see which images I got right and which I got wrong, I only get random results. Checking the validation results in Keras shows barely more than 50% correct, which is clearly random:

Found 1087 correct labels (53.08%)

I am pretty sure it has something to do with the random batches that ImageDataGenerator produces for my validation images, even though I set shuffle = False.

I simply save the filenames and classes from my generator before I run it, and I assume the indices of those filenames and classes are the same as the indices of my probability output.

Here is my setup (vanilla VGG16, with the last layer replaced to accommodate the 2 classes, cats and dogs):

new_model.summary() 

Layer (type)     Output Shape    Param # 
================================================================= 
input_2 (InputLayer)   (None, 224, 224, 3)  0   
_________________________________________________________________ 
block1_conv1 (Conv2D)  (None, 224, 224, 64)  1792  
_________________________________________________________________ 
block1_conv2 (Conv2D)  (None, 224, 224, 64)  36928  
_________________________________________________________________ 
block1_pool (MaxPooling2D) (None, 112, 112, 64)  0   
_________________________________________________________________ 
block2_conv1 (Conv2D)  (None, 112, 112, 128)  73856  
_________________________________________________________________ 
block2_conv2 (Conv2D)  (None, 112, 112, 128)  147584  
_________________________________________________________________ 
block2_pool (MaxPooling2D) (None, 56, 56, 128)  0   
_________________________________________________________________ 
block3_conv1 (Conv2D)  (None, 56, 56, 256)  295168  
_________________________________________________________________ 
block3_conv2 (Conv2D)  (None, 56, 56, 256)  590080  
_________________________________________________________________ 
block3_conv3 (Conv2D)  (None, 56, 56, 256)  590080  
_________________________________________________________________ 
block3_pool (MaxPooling2D) (None, 28, 28, 256)  0   
_________________________________________________________________ 
block4_conv1 (Conv2D)  (None, 28, 28, 512)  1180160 
_________________________________________________________________ 
block4_conv2 (Conv2D)  (None, 28, 28, 512)  2359808 
_________________________________________________________________ 
block4_conv3 (Conv2D)  (None, 28, 28, 512)  2359808 
_________________________________________________________________ 
block4_pool (MaxPooling2D) (None, 14, 14, 512)  0   
_________________________________________________________________ 
block5_conv1 (Conv2D)  (None, 14, 14, 512)  2359808 
_________________________________________________________________ 
block5_conv2 (Conv2D)  (None, 14, 14, 512)  2359808 
_________________________________________________________________ 
block5_conv3 (Conv2D)  (None, 14, 14, 512)  2359808 
_________________________________________________________________ 
block5_pool (MaxPooling2D) (None, 7, 7, 512)   0   
_________________________________________________________________ 
flatten (Flatten)   (None, 25088)    0   
_________________________________________________________________ 
fc1 (Dense)     (None, 4096)    102764544 
_________________________________________________________________ 
fc2 (Dense)     (None, 4096)    16781312 
_________________________________________________________________ 
Binary_predictions (Dense) (None, 2)     8194  
================================================================= 
Total params: 134,268,738 
Trainable params: 8,194 
Non-trainable params: 134,260,544 
_________________________________________________________________ 


batch_size=16 
epochs=3 
learning_rate=0.01 
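
For reference, the code that builds new_model is not shown here; a minimal sketch of how such a model can be assembled in Keras 2 (the layer names and the trainable/frozen split follow the summary above, while the optimizer and loss are assumptions) would look roughly like this:

from keras.applications.vgg16 import VGG16
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import SGD

# Full VGG16 with ImageNet weights (including fc1/fc2), with the 1000-way softmax
# replaced by a 2-way 'Binary_predictions' layer, as in the summary above.
base_model = VGG16(weights='imagenet', include_top=True)
x = base_model.get_layer('fc2').output
predictions = Dense(2, activation='softmax', name='Binary_predictions')(x)

new_model = Model(inputs=base_model.input, outputs=predictions)
for layer in new_model.layers[:-1]:
    layer.trainable = False  # only the new 2-way head (8,194 params) stays trainable

new_model.compile(optimizer=SGD(lr=learning_rate),  # optimizer/loss are assumptions, not taken from the question
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])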

Here are the definitions of the generators, for training and validation. I have not included the data augmentation part yet.

train_datagen = ImageDataGenerator() 
validation_datagen = ImageDataGenerator() 
test_datagen = ImageDataGenerator() 

train_generator = train_datagen.flow_from_directory(
    train_path, 
    target_size=(img_height, img_width), 
    batch_size=batch_size, 
    class_mode='categorical') 
train_filenames = train_generator.filenames 
train_samples = len(train_filenames) 

validation_generator = validation_datagen.flow_from_directory(
    valid_path, 
    target_size=(img_height, img_width), 
    batch_size=batch_size, 
    class_mode='categorical', 
    shuffle = False) # Need this to be False, so I can extract the correct classes and filenames in the order they are predicted 
validation_filenames = validation_generator.filenames 
validation_samples = len(validation_filenames) 

Fine-tuning the model goes fine:

#Fine-tune the model 
#DOC: fit_generator(generator, steps_per_epoch, epochs=1, verbose=1, callbacks=None, 
#    validation_data=None, validation_steps=None, class_weight=None, 
#    max_queue_size=10, workers=1, use_multiprocessing=False, initial_epoch=0) 

new_model.fit_generator(
    train_generator, 
    steps_per_epoch=train_samples // batch_size, 
    epochs=epochs, 
    validation_data=validation_generator, 
    validation_steps=validation_samples // batch_size) 

Epoch 1/3 
1434/1434 [==============================] - 146s - loss: 0.5456 - acc: 0.9653 - val_loss: 0.5043 - val_acc: 0.9678 
Epoch 2/3 
1434/1434 [==============================] - 148s - loss: 0.5312 - acc: 0.9665 - val_loss: 0.4293 - val_acc: 0.9722 
Epoch 3/3 
1434/1434 [==============================] - 148s - loss: 0.5332 - acc: 0.9665 - val_loss: 0.4329 - val_acc: 0.9731 

Here is the extraction of the validation data:

#We need the probabilities/scores for the validation set 
#DOC: predict_generator(generator, steps, max_queue_size=10, workers=1, 
#      use_multiprocessing=False, verbose=0) 
probs = new_model.predict_generator(
      validation_generator, 
      steps=validation_samples // batch_size, 
      verbose = 1) 

#Extracting the probabilities and labels 
our_predictions = probs[:,0] 
our_labels = np.round(1-our_predictions) 
expected_labels = validation_generator.classes 
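
(Side note: with a two-unit softmax and the alphabetical class indices assigned by flow_from_directory (cats = 0, dogs = 1), effectively the same labels can also be obtained with argmax, which avoids the 1 - p inversion; this is just an equivalent formulation, not a fix:)

import numpy as np

our_labels = np.argmax(probs, axis=1)  # index of the larger softmax output, effectively the same as np.round(1 - probs[:, 0])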

Now, when I compute the success rate on my validation set by comparing the expected labels with the computed labels, I get something that is suspiciously close to random:

correct = np.where(our_labels==expected_labels)[0] 
print("Found {:3d} correct labels ({:.2f}%)".format(len(correct), 
     100*len(correct)/len(our_predictions))) 

Found 1087 correct labels (53.08%)

Obviously, this is not right.

I suspect it has something to do with the randomness of the generator, but I set shuffle = False.

This code was copied almost directly from the excellent Fast.ai course by Jeremy Howard, but I cannot get it to work anymore...

I am using Keras 2.0.8 with the TensorFlow 1.3 backend, on Python 3.5 under Anaconda...

Please help me keep my sanity!

Answers


I have run into a similar problem. I find predict_generator() unfriendly, so I wrote a function to test the data set myself. Here is my code snippet:

from PIL import Image 
import numpy as np 
import json 
import os 

def get_img_result(img_path): 
    image = Image.open(img_path) 
    image.load() 
    image = image.resize((img_width, img_height)) 
    if image.mode != 'RGB':  # make sure we feed 3-channel images to the model 
        image = image.convert('RGB') 
    array = np.asarray(image, dtype='int32') 
    array = array / 255  # scale to [0, 1] 
    array = np.asarray([array])  # add the batch dimension 
    result = new_model.predict(array) 
    print(result) 
    return result 

# path: the root folder of the validation data set, e.g. validation->cat->kitty.jpg 
def validate(path): 
    result_list = [] 
    right_count = 0 
    wrong_count = 0 
    categories = os.listdir(path) 
    for i in range(len(categories)): 
        images = os.listdir(os.path.join(path, categories[i])) 
        for image in images: 
            result = get_img_result(os.path.join(path, categories[i], image))[0] 
            if result[i] != max(result): 
                result_list.append({'image': image, 'category': categories[i], 'score': result.tolist(), 'right': 0}) 
                wrong_count = wrong_count + 1 
            else: 
                result_list.append({'image': image, 'category': categories[i], 'score': result.tolist(), 'right': 1}) 
                right_count = right_count + 1 
    json_string = json.dumps(result_list) 
    with open('result.json', 'w') as f: 
        f.write(json_string) 
    print('right count : {0} \n wrong count : {1} \n accuracy : {2}'.format( 
        right_count, wrong_count, right_count / (right_count + wrong_count))) 

I use PIL to convert the image to a numpy array the way Keras does; I test all the images and save the results to a JSON file.
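
For example, it can be called on the validation root used above (assuming valid_path points at the folder containing the per-class subfolders, e.g. validation/cat and validation/dog):

validate(valid_path)  # writes result.json and prints the right/wrong counts and the accuracy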

Hope it helps.


It sure does! I still find it strange that this is so difficult for such a fundamental task – thondeboer


You need to call validation_generator.reset() between fit_generator() and predict_generator().

In the *_generator() functions, batches of data are pushed into a queue before being used to fit/evaluate the model. The underlying queue is always kept full, so there will be some extra batches left in it when training ends. You can verify this after training by printing validation_generator.batch_index. As a result, your predict_generator() does not start from the first batch, probs[0] is not the prediction for the first image, and that is why our_labels does not match expected_labels and the accuracy is low.

By the way, you should use validation_steps=validation_samples // batch_size + 1 (and the same for the training generator). Unless validation_samples is a multiple of batch_size, using validation_steps=validation_samples // batch_size ignores one batch in every epoch, and your model is then evaluated on a (slightly different) dataset each epoch.
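
A minimal sketch combining both suggestions, reusing the variable names from the question (the [:validation_samples] slice is only a guard in case the extra step yields a few wrapped-around predictions):

import math
import numpy as np

# Use enough steps that the final partial batch is not silently dropped
# (math.ceil gives the same value as // batch_size + 1 whenever the sample
# count is not an exact multiple of the batch size).
validation_steps = int(math.ceil(validation_samples / batch_size))

print(validation_generator.batch_index)  # often non-zero right after fit_generator()
validation_generator.reset()             # rewind so predictions line up with .filenames/.classes

probs = new_model.predict_generator(validation_generator,
                                    steps=validation_steps,
                                    verbose=1)

our_labels = np.argmax(probs[:validation_samples], axis=1)
expected_labels = validation_generator.classes
print("Found {:3d} correct labels ({:.2f}%)".format(
    int((our_labels == expected_labels).sum()),
    100 * (our_labels == expected_labels).mean()))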


Thanks! That all makes sense now. By the way, I had already spotted the +1 issue, since I figured I would be missing a batch – thondeboer


@thondeboer, if the problem has been solved, please accept the relevant answer so that others know it has been resolved. – etov


Thank you Yu-Yang! I was getting different results for subsequent predictions from the generator, and your answer solved it for me. Many thanks. – petezurich