2016-06-15

I would like to know why there are no good matches between the two images when using SIFT. The matching result is shown below.

[Image: feature points matched between the two images using SIFT]

The original images:

[Image: original image 1]

[Image: original image 2]

My program is as follows.

#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>  // cv::SIFT lives here in OpenCV >= 4.4; older builds need opencv2/xfeatures2d.hpp and cv::xfeatures2d::SIFT

using namespace cv;
using namespace std;

int imagematching(Mat &img1, Mat &img2, std::vector<Point2f> &first_keypoints, std::vector<Point2f> &second_keypoints){
    int max_keypoints = 500;

    // One SIFT object could handle both steps, but the original split into a
    // detector and an extractor is kept; max_keypoints caps the number of
    // features retained per image.
    Ptr<SIFT> detector = SIFT::create(max_keypoints);
    Ptr<SIFT> extractor = SIFT::create(max_keypoints);

    //--Step 1: Key point detection 
    std::vector<KeyPoint> keypoints1, keypoints2; 
    //-- Step 2: Calculate descriptors (feature vectors) 
    Mat descriptors1, descriptors2; 

    detector->detect(img1, keypoints1); 
    detector->detect(img2, keypoints2); 

    extractor->compute(img1, keypoints1, descriptors1); 
    extractor->compute(img2, keypoints2, descriptors2);  

    FlannBasedMatcher matcher; 


    vector<DMatch> matches; 
    matcher.match(descriptors1, descriptors2, matches); 

    double max_dist = 0; double min_dist = 999999; 

    //-- Quick calculation of max and min distances between keypoints 
    for(int i = 0; i < descriptors1.rows; i++) 
    { 
     double dist = matches[i].distance; 
     if(dist < min_dist) min_dist = dist; 
     if(dist > max_dist) max_dist = dist; 
    } 

    printf("-- Max dist : %f \n", max_dist); 
    printf("-- Min dist : %f \n", min_dist); 
    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist) 
    std::vector<DMatch> good_matches; 

    for(int i = 0; i < descriptors1.rows; i++) 
    { 
     if(matches[i].distance < 3*min_dist) 
     { good_matches.push_back(matches[i]); } 
    } 
    matches.clear(); 

    Mat img_matches; 
    drawMatches(img1, keypoints1, img2, keypoints2, 
       good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), 
       vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS); 

    descriptors1.release(); 
    descriptors2.release(); 

    //-- Localize the object 
    //std::vector<Point2f> first_keypoints; 
    //std::vector<Point2f> second_keypoints; 

    for(size_t i = 0; i < good_matches.size(); i++)
    { 
     //cout << i << " :"; 
     //-- Get the keypoints from the good matches (only points with strictly positive coordinates are kept)
     if(keypoints1[ good_matches[i].queryIdx ].pt.x > 0 && keypoints1[ good_matches[i].queryIdx ].pt.y > 0 
       && keypoints2[ good_matches[i].trainIdx ].pt.x > 0 && keypoints2[ good_matches[i].trainIdx ].pt.y > 0){ 
      first_keypoints.push_back(keypoints1[ good_matches[i].queryIdx ].pt); 
      //cout << "first point" << keypoints1[ good_matches[i].queryIdx ].pt << endl; 

      second_keypoints.push_back(keypoints2[ good_matches[i].trainIdx ].pt); 
      //cout << "second point" << keypoints2[ good_matches[i].trainIdx ].pt << endl; 
     } 
    } 
    keypoints1.clear(); 
    keypoints2.clear(); 
    good_matches.clear(); 
    //-- Show detected matches 
    imshow("Good Matches & Object detection", img_matches); 
    waitKey(0); 

    return SUCCESS; 
} 
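Independent of the viewpoint issue discussed in the comments, the `3*min_dist` filter above is a fairly blunt heuristic: when `min_dist` is close to zero it rejects almost everything, and it never checks whether a match is ambiguous. A commonly used alternative is Lowe's ratio test on the two nearest neighbours returned by `knnMatch`. The sketch below only illustrates that filter; the helper name `ratioFilter` and the 0.75 threshold are my own choices, not part of the original program.

std::vector<DMatch> ratioFilter(const Mat &descriptors1, const Mat &descriptors2)
{
    // knnMatch returns the 2 best candidates per query descriptor; a match is
    // kept only if the best candidate is clearly better than the second best.
    FlannBasedMatcher matcher;
    std::vector<std::vector<DMatch>> knn_matches;
    matcher.knnMatch(descriptors1, descriptors2, knn_matches, 2);

    const float ratio_thresh = 0.75f;   // value suggested in Lowe's SIFT paper
    std::vector<DMatch> good_matches;
    for (size_t i = 0; i < knn_matches.size(); i++)
    {
        if (knn_matches[i].size() == 2 &&
            knn_matches[i][0].distance < ratio_thresh * knn_matches[i][1].distance)
        {
            good_matches.push_back(knn_matches[i][0]);
        }
    }
    return good_matches;
}

The resulting matches can be drawn and turned into point lists exactly as in the function above; a stricter filter alone, however, does not address the 90° viewpoint change raised in the comments.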

SIFT/SURF/etc. are usually not perspective invariant. It may work for smaller angles, but your object is rotated by 90°. Typically, for this kind of object recognition, you use a database that holds several representations of the same object from different viewing angles. Could you try capturing the same object in a frontal view and matching it against both of these images? Maybe a 45° angle already works, maybe not... – Micka
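A minimal sketch of the multi-view idea Micka describes, assuming you have SIFT descriptors for the same object photographed from several angles: match the query against each reference view and keep the view with the most matches that survive a ratio test. The names `countGoodMatches`, `bestReferenceView`, `referenceViews` and the thresholds are illustrative assumptions, not anything from the question.

#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>

using namespace cv;

// Count the matches between two SIFT descriptor sets that pass Lowe's ratio test.
static int countGoodMatches(const Mat &queryDesc, const Mat &refDesc)
{
    FlannBasedMatcher matcher;
    std::vector<std::vector<DMatch>> knn;
    matcher.knnMatch(queryDesc, refDesc, knn, 2);

    int good = 0;
    for (size_t i = 0; i < knn.size(); i++)
        if (knn[i].size() == 2 && knn[i][0].distance < 0.75f * knn[i][1].distance)
            good++;
    return good;
}

// Return the index of the reference view that matches the query best,
// or -1 if no view reaches the minimal number of good matches.
int bestReferenceView(const Mat &queryDesc, const std::vector<Mat> &referenceViews)
{
    const int minGood = 10;   // arbitrary acceptance threshold
    int bestIdx = -1, bestCount = 0;
    for (size_t v = 0; v < referenceViews.size(); v++)
    {
        int c = countGoodMatches(queryDesc, referenceViews[v]);
        if (c >= minGood && c > bestCount) { bestCount = c; bestIdx = (int)v; }
    }
    return bestIdx;
}

Whether a frontal or 45° reference view is enough, as Micka wonders, can only be checked empirically; the more views the database covers, the smaller the perspective gap each match has to bridge.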

Answer


SIFT may be rotation invariant, but it is definitely not perspective invariant. You will need to add some machine learning, perhaps an SVM, to be able to match images taken from different angles; SIFT features alone are not enough.
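One concrete reading of "add some machine learning, perhaps an SVM" is the classic bag-of-visual-words pipeline available in OpenCV: cluster SIFT descriptors from training images into a vocabulary, describe each image as a histogram over that vocabulary, and train an SVM on the histograms. The sketch below only outlines that pipeline; `trainImages`, `trainLabels` and the vocabulary size are assumptions, and it presumes the training set already covers the object from several viewpoints.

#include <opencv2/opencv.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/ml.hpp>

using namespace cv;

// Sketch: bag-of-visual-words + SVM. Assumes every image in trainImages yields
// SIFT keypoints and trainLabels[i] is 1 for the object, 0 for negatives.
Ptr<ml::SVM> trainBowSvm(const std::vector<Mat> &trainImages,
                         const std::vector<int> &trainLabels,
                         Mat &vocabulary)
{
    Ptr<SIFT> sift = SIFT::create();

    // 1) Build a visual vocabulary by k-means clustering of all SIFT descriptors.
    BOWKMeansTrainer bowTrainer(200);   // 200 visual words, an arbitrary choice
    for (const Mat &img : trainImages)
    {
        std::vector<KeyPoint> kp;
        Mat desc;
        sift->detectAndCompute(img, noArray(), kp, desc);
        if (!desc.empty()) bowTrainer.add(desc);
    }
    vocabulary = bowTrainer.cluster();

    // 2) Represent every training image as a histogram of visual words.
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
    BOWImgDescriptorExtractor bowExtractor(sift, matcher);
    bowExtractor.setVocabulary(vocabulary);

    Mat samples;
    for (const Mat &img : trainImages)
    {
        std::vector<KeyPoint> kp;
        sift->detect(img, kp);
        Mat hist;
        bowExtractor.compute(img, kp, hist);
        samples.push_back(hist);
    }

    // 3) Train an SVM on the histograms.
    Ptr<ml::SVM> svm = ml::SVM::create();
    svm->setType(ml::SVM::C_SVC);
    svm->setKernel(ml::SVM::RBF);
    svm->train(samples, ml::ROW_SAMPLE, Mat(trainLabels));
    return svm;
}

At query time the image is converted into the same histogram with bowExtractor.compute and classified with svm->predict; how well this generalizes across viewpoints depends entirely on the angles present in the training set, which is exactly the point raised in the comment below.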


Machine learning won't do in the shown case with only one image. You also need *additional information*, e.g. images from different viewpoints as @Micka suggested, or by deriving/modeling the (partial) 3D shape of the object (ML can't do magic ;)) – geekoverdose


Yes, you have a point. –