
Autoencoder for sound data in Keras

I have 2D arrays of log-scaled mel-spectrograms of sound samples from 5 different classes.

For training I have used a convolutional and dense neural network in Keras. The code:

from keras.models import Sequential 
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, BatchNormalization, Dropout 
from keras import initializers 
from keras.optimizers import Adam 

model = Sequential() 
model.add(Conv1D(80, 8, activation='relu', padding='same', input_shape=(60, 108))) 
model.add(MaxPooling1D(2, padding='same', strides=None)) 
model.add(Flatten()) 
initializer = initializers.TruncatedNormal() 
model.add(Dense(200, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(BatchNormalization()) 
model.add(Dropout(0.8)) 
model.add(Dense(50, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(Dropout(0.8)) 
model.add(Dense(5, activation='softmax', kernel_initializer=initializer, bias_initializer=initializer)) 
#lr is not a valid compile() argument; pass it through the optimizer instead 
model.compile(loss='categorical_crossentropy', 
              optimizer=Adam(lr=0.01), 
              metrics=['accuracy']) 

What kind of autoencoder can I apply to this type of data input? What model? Any suggestion or code example would be helpful. :)


I think you may be able to use Conv3D Keras layers here. For example, you could start with a simple convolutional network with 16 3x3x3 kernels in the first layer and 16 5x5x5 kernels in the second, adding a simple MLP with a softmax output (a rough sketch of this idea follows). If you can share the data, I will provide an exact answer with a code example for your data. Until then, you can see a CAE sample for images in my question - https://stackoverflow.com/questions/46921246/issue-with-simple-cae –
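For illustration only, a minimal sketch of the network that comment describes. The input shape (16, 60, 108, 1), i.e. 16 stacked spectrogram slices, and the MLP width of 64 are placeholder assumptions, not values from the thread:

from keras.models import Sequential 
from keras.layers import Conv3D, Flatten, Dense 

#hypothetical 3D input: 16 stacked 60x108 spectrogram slices, 1 channel 
m = Sequential() 
m.add(Conv3D(16, (3, 3, 3), activation='relu', padding='same', input_shape=(16, 60, 108, 1)))  #16 3x3x3 kernels 
m.add(Conv3D(16, (5, 5, 5), activation='relu', padding='same'))  #16 5x5x5 kernels 
m.add(Flatten()) 
m.add(Dense(64, activation='relu'))    #simple MLP head (width assumed) 
m.add(Dense(5, activation='softmax'))  #5 sound classes 
m.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) 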


I have shared my model; it has now changed a bit, since the input data is a 2D array. I look forward to receiving your suggestions. Thanks –


Why are you using Conv1D for 2D data? I would suggest using Conv2D, since the data features are two-dimensional; it should give better results. I cannot check that, though, because I have no data to check it with. If you don't mind, I can suggest some models to try with Conv2D, but I could give you a more precise answer if I had data to test, since a good model should take the nature of the data into account –
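To make the Conv1D/Conv2D distinction concrete: Conv1D treats a (60, 108) input as 60 steps with 108 channels and convolves along a single axis, while Conv2D slides its kernel over both the time and frequency axes of the spectrogram. A quick way to see the difference is to compare output shapes (the layer parameters here mirror the question's code):

from keras.models import Sequential 
from keras.layers import Conv1D, Conv2D 

#Conv1D: the kernel slides along the 60-step axis; the 108 values per step act as channels 
m1 = Sequential([Conv1D(80, 8, padding='same', input_shape=(60, 108))]) 
print(m1.output_shape)   #(None, 60, 80) 

#Conv2D: the kernel slides over both axes; the spectrogram is a 1-channel image 
m2 = Sequential([Conv2D(80, 3, padding='same', input_shape=(60, 108, 1))]) 
print(m2.output_shape)   #(None, 60, 108, 80) 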

Answer


Since I got no answer to my question about the nature of the data, I will assume that we have a set of 2D data with a shape like (NSamples, 60, 108). Also, I assume that the answer to my suggestion to use Conv2D instead of Conv1D is yes.

Here is the convolutional autoencoder, a model that can use the trained autoencoder's weights, and how to use the autoencoder for the final model:

from keras.layers import Conv2D, Deconv2D, MaxPooling2D, UpSampling2D, Dense, Reshape, Flatten, BatchNormalization, Dropout 
from keras.callbacks import ModelCheckpoint 
from keras.optimizers import Adam 
import keras.models as models 
import keras.initializers as initializers 
from sklearn.model_selection import train_test_split 

ae = models.Sequential() 
#encoder 
c = Conv2D(80, 3, activation='relu', padding='same', input_shape=(60, 108, 1)) 
ae.add(c) 
ae.add(MaxPooling2D(pool_size=(2, 2), padding='same', strides=None)) 
ae.add(Flatten()) 
initializer = initializers.TruncatedNormal() 
d1 = Dense(200, activation='relu', kernel_initializer=initializer, bias_initializer=initializer) 
ae.add(d1) 
ae.add(BatchNormalization()) 
ae.add(Dropout(0.8)) 
d2 = Dense(50, activation='relu', kernel_initializer=initializer, bias_initializer=initializer) 
ae.add(d2) 
ae.add(Dropout(0.8)) 
#decoder: mirror the encoder, reusing the dense layers' input widths 
ae.add(Dense(d2.input_shape[1], activation='sigmoid'))  #back to 200 units 
ae.add(Dense(d1.input_shape[1], activation='sigmoid'))  #back to 30*54*80 units 
ae.add(Reshape((30, 54, 80)))                           #undo Flatten 
ae.add(UpSampling2D((2, 2)))                            #undo MaxPooling2D 
ae.add(Deconv2D(filters=c.filters, kernel_size=c.kernel_size, strides=c.strides, activation=c.activation, padding=c.padding)) 
ae.add(Deconv2D(filters=1, kernel_size=c.kernel_size, strides=c.strides, activation=c.activation, padding=c.padding)) 
#lr is not a valid compile() argument; pass it through the optimizer instead 
ae.compile(loss='binary_crossentropy', 
           optimizer=Adam(lr=0.001), 
           metrics=['accuracy']) 
ae.summary() 
#now train your convolutional autoencoder to reconstruct your input data 
#reshape your data to (NSamples, 60, 108, 1) 
#then train the autoencoder; it can be something like this: 
#X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=43) 
#pre_mcp = ModelCheckpoint("CAE.hdf5", monitor='val_acc', verbose=2, save_best_only=True, mode='max') 
#pre_history = ae.fit(X_train, X_train, epochs=100, validation_data=(X_val, X_val), batch_size=22, verbose=2, callbacks=[pre_mcp]) 
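As a concrete version of the reshape step mentioned above, assuming X starts as a float array of shape (NSamples, 60, 108) (the random placeholder data is just for illustration):

import numpy as np 

X = np.random.rand(100, 60, 108).astype(np.float32)  #placeholder for real spectrograms 
X = X.reshape(-1, 60, 108, 1)  #add the trailing channel axis that Conv2D expects 
print(X.shape)  #(100, 60, 108, 1) 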

#model 
model = models.Sequential() 
model.add(Conv2D(80, 3, activation='relu', padding='same', input_shape=(60, 108, 1))) 
model.add(MaxPooling2D(pool_size=(2, 2), padding='same', strides=None)) 
model.add(Flatten()) 
initializer = initializers.TruncatedNormal() 
model.add(Dense(200, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(BatchNormalization()) 
model.add(Dropout(0.8)) 
model.add(Dense(50, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(Dropout(0.8)) 
model.add(Dense(5, activation='softmax', kernel_initializer=initializer, bias_initializer=initializer)) 
model.compile(loss='categorical_crossentropy', 
              optimizer=Adam(lr=0.001), 
              metrics=['accuracy']) 
#copy the encoder weights: layers 0, 3, 4, 6 are Conv2D, Dense(200), BatchNormalization, Dense(50) 
model.layers[0].set_weights(ae.layers[0].get_weights()) 
model.layers[3].set_weights(ae.layers[3].get_weights()) 
model.layers[4].set_weights(ae.layers[4].get_weights()) 
model.layers[6].set_weights(ae.layers[6].get_weights()) 
model.summary() 
#now you can train your model with pre-trained weights from the autoencoder 
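Fine-tuning the pre-initialized classifier could then mirror the checkpointed fit used for the autoencoder above (the filename "classifier.hdf5" is just an assumed choice):

from keras.callbacks import ModelCheckpoint 

mcp = ModelCheckpoint("classifier.hdf5", monitor='val_acc', verbose=2, save_best_only=True, mode='max') 
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), 
                    batch_size=22, verbose=2, callbacks=[mcp]) 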

A model like this turned out to be useful for me: on the MNIST dataset, a model initialized with the autoencoder's weights reached better accuracy than the same model with random weight initialization.

Compared with this model, however, I would recommend using several convolutional/deconvolutional layers, probably 3 or more, because in my experience convolutional autoencoders with 3 or more convolutional layers are more effective than ones with a single convolutional layer; with one convolutional layer I sometimes cannot see any accuracy improvement at all (a sketch of a deeper encoder/decoder follows).
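For illustration, a minimal sketch of such a deeper, fully convolutional encoder/decoder; the filter counts 32/64/128 are placeholders, not values from this answer:

from keras.models import Sequential 
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D 

deep_ae = Sequential() 
#encoder: three conv stages with two downsampling steps 
deep_ae.add(Conv2D(32, 3, activation='relu', padding='same', input_shape=(60, 108, 1))) 
deep_ae.add(MaxPooling2D((2, 2), padding='same'))  #-> (30, 54, 32) 
deep_ae.add(Conv2D(64, 3, activation='relu', padding='same')) 
deep_ae.add(MaxPooling2D((2, 2), padding='same'))  #-> (15, 27, 64) 
deep_ae.add(Conv2D(128, 3, activation='relu', padding='same')) 
#decoder: mirror the encoder with upsampling 
deep_ae.add(Conv2D(64, 3, activation='relu', padding='same')) 
deep_ae.add(UpSampling2D((2, 2)))                  #-> (30, 54, 64) 
deep_ae.add(Conv2D(32, 3, activation='relu', padding='same')) 
deep_ae.add(UpSampling2D((2, 2)))                  #-> (60, 108, 32) 
deep_ae.add(Conv2D(1, 3, activation='sigmoid', padding='same'))  #reconstruct the input 
deep_ae.compile(loss='binary_crossentropy', optimizer='adam') 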

Update:

I checked the autoencoder with the data provided by Emanuela, and I also checked it with different autoencoder architectures, without any success.

My hypothesis about this is that the data does not contain any significant features that could be distinguished by an autoencoder, or even a CAE.

However, my assumption about the 2-dimensional nature of the data looks to be confirmed by reaching almost 99.99% validation accuracy: [training/validation accuracy plot]

At the same time, however, the 97.31% accuracy on the training data may indicate potential problems with the dataset, so it looks like a good idea to revise it.

Also, I would suggest using ensembles of networks: you could, for example, train 10 networks with different validation splits and assign each item the category that gets the most votes (a minimal voting sketch follows).
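For illustration, here is a sketch of that majority-voting scheme; make_model() is an assumed factory that rebuilds the classifier shown below, and the 10-member count mirrors the suggestion above:

import numpy as np 
from sklearn.model_selection import train_test_split 

def train_member(x, y, seed): 
    #train one ensemble member on its own random train/validation split 
    X_tr, X_val, y_tr, y_val = train_test_split(x, y, test_size=0.2, random_state=seed) 
    m = make_model()  #assumed: rebuilds the Conv2D classifier below 
    m.fit(X_tr, y_tr, validation_data=(X_val, y_val), epochs=100, batch_size=64, verbose=0) 
    return m 

def majority_vote(ensemble, x_test): 
    #each member votes for a class; the most frequent class per item wins 
    votes = np.stack([m.predict(x_test).argmax(axis=1) for m in ensemble]) 
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes) 

#ensemble = [train_member(x, y, seed) for seed in range(10)] 
#predicted = majority_vote(ensemble, x_test) 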

Here is my code:

from keras.layers import Conv2D, Dense, Dropout, Flatten, BatchNormalization 
from keras.callbacks import ModelCheckpoint 
from keras.optimizers import Adam 
from sklearn.model_selection import train_test_split 
import keras.models as models 
import keras.initializers as initializers 
import msgpack 
import numpy as np 

#load the spectrograms and labels shared as msgpack files 
with open('SoundDataX.msg', "rb") as fx, open('SoundDataY.msg', "rb") as fy: 
    dataX = msgpack.load(fx) 
    dataY = msgpack.load(fy) 

num_samples = len(dataX) 
x = np.empty((num_samples, 60, 108, 1), dtype=np.float32) 
y = np.empty((num_samples, 4), dtype=np.float32)  #the shared data has 4 categories 

for i in range(0, num_samples): 
    x[i] = np.asanyarray(dataX[i]).reshape(60, 108, 1) 
    y[i] = np.asanyarray(dataY[i]) 

X_train, X_val, y_train, y_val = train_test_split(x, y, test_size=0.2, random_state=43) 

#model: three stacked Conv2D layers with growing kernel sizes, then a dense head 
model = models.Sequential() 
model.add(Conv2D(128, 3, activation='relu', padding='same', input_shape=(60, 108, 1))) 
model.add(Conv2D(128, 5, activation='relu', padding='same')) 
model.add(Conv2D(128, 7, activation='relu', padding='same')) 
model.add(Flatten()) 
initializer = initializers.TruncatedNormal() 
model.add(Dense(200, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(BatchNormalization()) 
model.add(Dropout(0.8)) 
model.add(Dense(50, activation='relu', kernel_initializer=initializer, bias_initializer=initializer)) 
model.add(Dropout(0.8)) 
model.add(Dense(4, activation='softmax', kernel_initializer=initializer, bias_initializer=initializer)) 
model.compile(loss='categorical_crossentropy', 
              optimizer=Adam(lr=0.0001), 
              metrics=['accuracy']) 
model.summary() 
#save the weights of the best epoch by validation accuracy 
filepath = "weights-{epoch:02d}-{val_acc:.7f}-{acc:.7f}.hdf5" 
mcp = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max') 
history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), batch_size=64, verbose=2, callbacks=[mcp])