2016-05-17

Logistic regression on the MNIST dataset

Following this, there is a very good tutorial on how to apply an SVM classifier to the MNIST dataset. I would like to know whether logistic regression can be used instead of the SVM classifier. So I searched for logistic regression in OpenCV and found that the syntax for the two classifiers is almost identical. So I guessed I could simply comment out the following part:

cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create(); 
    svm->setType(cv::ml::SVM::C_SVC); 
    svm->setKernel(cv::ml::SVM::POLY);//LINEAR, RBF, SIGMOID, POLY 
    svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 100, 1e-6)); 
    svm->setGamma(3); 
    svm->setDegree(3); 
    svm->train(trainingMat , cv::ml::ROW_SAMPLE , labelsMat); 

and replace it with:

cv::Ptr<cv::ml::LogisticRegression> lr1 = cv::ml::LogisticRegression::create(); 
    lr1->setLearningRate(0.001); 
    lr1->setIterations(10); 
    lr1->setRegularization(cv::ml::LogisticRegression::REG_L2); 
    lr1->setTrainMethod(cv::ml::LogisticRegression::BATCH); 
    lr1->setMiniBatchSize(1); 
    lr1->train(trainingMat, cv::ml::ROW_SAMPLE, labelsMat); 

But first I get this error: OpenCV Error: Bad argument (data and labels must be floating-point matrices)

Then I changed

cv::Mat labelsMat(labels.size(), 1, CV_32S, labelsArray); 

to

cv::Mat labelsMat(labels.size(), 1, CV_32F, labelsArray); 

Now I get this error: OpenCV Error: Bad argument (data should have at least 2 classes)

I have 10 classes (0, 1, ..., 9), but I do not know why I am getting this error. My code is almost identical to the code in the tutorial above.


One possible cause: you are interpreting the integer values in `labelsArray` as floats. Try it this way and let me know: `cv::Mat labelsMat(labels.size(), 1, CV_32S, labelsArray); labelsMat.convertTo(labelsMat, CV_32F);` (same data, converted to float) – Miki


@Miki That works well. Thank you. – MoNo
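For readers using OpenCV's Python bindings, the same idea can be sketched with plain NumPy (the class ids below are hypothetical; the assumption is that `cv2.ml` training functions expect 32-bit float label matrices, so the integers must be *converted*, not reinterpreted):

```python
import numpy as np

# Hypothetical class ids, as they might come out of a label file.
labels = [0, 1, 2, 9]

# Build an integer column vector first, then convert it to float32.
# astype() is the NumPy equivalent of labelsMat.convertTo(labelsMat, CV_32F):
# it produces the same numeric values in float storage, whereas viewing the
# raw int bits as float (the bug above) would garble them.
labelsMat = np.array(labels, dtype=np.int32).reshape(-1, 1)
labelsMat = labelsMat.astype(np.float32)

print(labelsMat.dtype, labelsMat.shape)  # float32 (4, 1)
```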

回答


In Python, you can do something like this:

import matplotlib.pyplot as plt 

# Import datasets, classifiers and performance metrics 
from sklearn import datasets, svm, metrics 
from sklearn.linear_model import LogisticRegression 

# The digits dataset 
digits = datasets.load_digits() 

# The data that we are interested in is made of 8x8 images of digits, let's 
# have a look at the first 3 images, stored in the `images` attribute of the 
# dataset. If we were working from image files, we could load them using 
# pylab.imread. Note that each image must have the same size. For these 
# images, we know which digit they represent: it is given in the 'target' of 
# the dataset. 
images_and_labels = list(zip(digits.images, digits.target)) 
for index, (image, label) in enumerate(images_and_labels[:4]): 
    plt.subplot(2, 4, index + 1) 
    plt.axis('off') 
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest') 
    plt.title('Training: %i' % label) 

# To apply a classifier on this data, we need to flatten the image, to 
# turn the data in a (samples, feature) matrix: 
n_samples = len(digits.images) 
data = digits.images.reshape((n_samples, -1)) 
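The flattening step above can be sketched in isolation with NumPy only (the three-image stack below is hypothetical, standing in for `digits.images`):

```python
import numpy as np

# A hypothetical batch of three 8x8 grayscale digit images.
images = np.arange(3 * 8 * 8, dtype=np.float64).reshape((3, 8, 8))

# Flatten each 8x8 image into a 64-dimensional feature vector, giving the
# (n_samples, n_features) matrix that scikit-learn classifiers expect.
n_samples = len(images)
data = images.reshape((n_samples, -1))

print(data.shape)  # (3, 64)
```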

Choose whichever of the two classifiers below you prefer:

# Create a classifier: a support vector classifier 
classifier = svm.SVC(gamma=0.001) 

# create a Logistic Regression Classifier 
classifier = LogisticRegression(C=1.0) 

# We learn the digits on the first half of the digits 
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2]) 

# Now predict the value of the digit on the second half: 
expected = digits.target[n_samples // 2:] 
predicted = classifier.predict(data[n_samples // 2:]) 

print("Classification report for classifier %s:\n%s\n" 
     % (classifier, metrics.classification_report(expected, predicted))) 
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted)) 

images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted)) 
for index, (image, prediction) in enumerate(images_and_predictions[:4]): 
    plt.subplot(2, 4, index + 5) 
    plt.axis('off') 
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest') 
    plt.title('Prediction: %i' % prediction) 

plt.show() 

You can see the whole code here.
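As a side note, the confusion matrix printed at the end simply counts (true, predicted) pairs, with one row per true class and one column per predicted class. A minimal illustration with hypothetical labels, using plain NumPy rather than sklearn:

```python
import numpy as np

# Hypothetical true and predicted labels for a 3-class problem.
expected = np.array([0, 0, 1, 1, 2, 2])
predicted = np.array([0, 1, 1, 1, 2, 0])

# Confusion matrix: cm[t, p] counts samples of true class t predicted as p,
# so the diagonal holds the correct predictions.
n_classes = 3
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(expected, predicted):
    cm[t, p] += 1

print(cm)
```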