
Stitching 2 images (OpenCV)

I am trying to stitch two images together using the OpenCV Java API, but I am getting the wrong output and cannot work out where the problem is. These are the steps I use:

1. detect features
2. extract descriptors
3. match the descriptors
4. find the homography
5. find the perspective transform
6. warp the perspective
7. "stitch" the 2 images into a combined image

But somewhere I am going wrong. I think it is the way I am combining the 2 images, but I am not sure. I get 214 good feature matches between the two images, yet cannot stitch them?

import java.util.LinkedList; 
import java.util.List; 

import org.opencv.calib3d.Calib3d; 
import org.opencv.core.Core; 
import org.opencv.core.CvType; 
import org.opencv.core.Mat; 
import org.opencv.core.MatOfByte; 
import org.opencv.core.MatOfDMatch; 
import org.opencv.core.MatOfKeyPoint; 
import org.opencv.core.MatOfPoint2f; 
import org.opencv.core.Point; 
import org.opencv.core.Rect; 
import org.opencv.core.Scalar; 
import org.opencv.core.Size; 
import org.opencv.features2d.DMatch; 
import org.opencv.features2d.DescriptorExtractor; 
import org.opencv.features2d.DescriptorMatcher; 
import org.opencv.features2d.FeatureDetector; 
import org.opencv.features2d.Features2d; 
import org.opencv.features2d.KeyPoint; 
import org.opencv.highgui.Highgui; 
import org.opencv.imgproc.Imgproc; 

public class ImageStitching { 

static Mat image1; 
static Mat image2; 

static FeatureDetector fd; 
static DescriptorExtractor fe; 
static DescriptorMatcher fm; 

public static void initialise(){ 
    fd = FeatureDetector.create(FeatureDetector.BRISK); 
    fe = DescriptorExtractor.create(DescriptorExtractor.SURF); 
    fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE); 

    //images 
    image1 = Highgui.imread("room2.jpg"); 
    image2 = Highgui.imread("room3.jpg"); 

    //structures for the keypoints from the 2 images 
    MatOfKeyPoint keypoints1 = new MatOfKeyPoint(); 
    MatOfKeyPoint keypoints2 = new MatOfKeyPoint(); 

    //structures for the computed descriptors 
    Mat descriptors1 = new Mat(); 
    Mat descriptors2 = new Mat(); 

    //structure for the matches 
    MatOfDMatch matches = new MatOfDMatch(); 

    //getting the keypoints 
    fd.detect(image1, keypoints1); 
    fd.detect(image2, keypoints2); 

    //getting the descriptors from the keypoints 
    fe.compute(image1, keypoints1, descriptors1); 
    fe.compute(image2,keypoints2,descriptors2); 

    //getting the matches between the 2 sets of descriptors (query = image1, train = image2) 
    fm.match(descriptors1, descriptors2, matches); 

    //turn the matches to a list 
    List<DMatch> matchesList = matches.toList(); 

    Double maxDist = 0.0; //keep track of max distance from the matches 
    Double minDist = 100.0; //keep track of min distance from the matches 

    //calculate max & min distances between keypoints 
    for(int i=0; i<matchesList.size(); i++){ 
     Double dist = (double) matchesList.get(i).distance; 
     if (dist<minDist) minDist = dist; 
     if(dist>maxDist) maxDist=dist; 
    } 

    System.out.println("max dist: " + maxDist); 
    System.out.println("min dist: " + minDist); 

    //structure for the good matches 
    LinkedList<DMatch> goodMatches = new LinkedList<DMatch>(); 

    //use only the good matches (i.e. whose distance is less than 3*min_dist) 
    for(int i=0; i<matchesList.size(); i++){ 
     if(matchesList.get(i).distance<3*minDist){ 
      goodMatches.addLast(matchesList.get(i)); 
     } 
    } 

    //structures to hold points of the good matches (coordinates) 
    LinkedList<Point> objList = new LinkedList<Point>(); // image1 
    LinkedList<Point> sceneList = new LinkedList<Point>(); //image 2 

    List<KeyPoint> keypoints_objectList = keypoints1.toList(); 
    List<KeyPoint> keypoints_sceneList = keypoints2.toList(); 

    //putting the points of the good matches into above structures 
    for(int i = 0; i<goodMatches.size(); i++){ 
     objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt); 
     sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt); 
    } 

    System.out.println("\nNum. of good matches" +goodMatches.size()); 

    MatOfDMatch gm = new MatOfDMatch(); 
    gm.fromList(goodMatches); 

    //converting the points into the appropriate data structure 
    MatOfPoint2f obj = new MatOfPoint2f(); 
    obj.fromList(objList); 

    MatOfPoint2f scene = new MatOfPoint2f(); 
    scene.fromList(sceneList); 

    //finding the homography matrix 
    Mat H = Calib3d.findHomography(obj, scene); 

    //LinkedList<Point> cornerList = new LinkedList<Point>(); 
    Mat obj_corners = new Mat(4,1,CvType.CV_32FC2); 
    Mat scene_corners = new Mat(4,1,CvType.CV_32FC2); 

    obj_corners.put(0,0, new double[]{0,0}); 
    obj_corners.put(1,0, new double[]{image1.cols(),0}); 
    obj_corners.put(2,0, new double[]{image1.cols(),image1.rows()}); 
    obj_corners.put(3,0, new double[]{0,image1.rows()}); 

    Core.perspectiveTransform(obj_corners, scene_corners, H); 

    //structure to hold the result of the homography matrix 
    Mat result = new Mat(); 

    //size of the new image - i.e. image 1 + image 2 
    Size s = new Size(image1.cols()+image2.cols(),image1.rows()); 

    //using the homography matrix to warp the two images 
    Imgproc.warpPerspective(image1, result, H, s); 
    int i = image1.cols(); 
    Mat m = new Mat(result,new Rect(i,0,image2.cols(), image2.rows())); 

    image2.copyTo(m); 

    Mat img_mat = new Mat(); 

    Features2d.drawMatches(image1, keypoints1, image2, keypoints2, gm, img_mat, new Scalar(254,0,0),new Scalar(254,0,0) , new MatOfByte(), 2); 

    //creating the output file 
    boolean imageStitched = Highgui.imwrite("imageStitched.jpg",result); 
    boolean imageMatched = Highgui.imwrite("imageMatched.jpg",img_mat); 
} 


public static void main(String args[]){ 
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME); 
    initialise(); 
    } 
} 

I cannot embed pictures nor post more than 2 links because of reputation points, so I have linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an idea of the problem):

Incorrectly stitched image: http://oi61.tinypic.com/11ac01c.jpg
Detected features: http://oi57.tinypic.com/29m3wif.jpg

Answers

1

It seems you have a lot of outliers that make the homography estimation incorrect, so you can use the RANSAC method, which iteratively rejects those outliers.

It does not take much effort; just pass the estimation method (and, optionally, a reprojection threshold) as extra parameters to the findHomography function:

Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3); 
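As a minimal sketch of how the outlier rejection could be taken one step further, assuming the Java bindings of your OpenCV version expose the findHomography overload that also returns the RANSAC inlier mask (obj, scene and goodMatches are the variables from the question; the threshold of 3 is only a common starting value, not a tuned one):

    // Assumption: the overload with an output mask is available in these bindings.
    Mat inlierMask = new Mat();
    Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3, inlierMask);

    // Keep only the matches that RANSAC accepted as inliers.
    LinkedList<DMatch> inlierMatches = new LinkedList<DMatch>();
    for (int i = 0; i < goodMatches.size(); i++) {
        // inlierMask is an Nx1 8-bit Mat; a non-zero entry marks an inlier
        if (inlierMask.get(i, 0)[0] != 0) {
            inlierMatches.addLast(goodMatches.get(i));
        }
    }
    System.out.println("RANSAC inliers: " + inlierMatches.size() + "/" + goodMatches.size());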

EDIT

Then try to make sure that the images you give to the detector are 8-bit grayscale images, as mentioned here.
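A minimal sketch of that suggestion with the Java API, reusing the image and detector names from the question (either load the files directly as grayscale, or convert them after loading):

    // Option 1: load the files directly as 8-bit single-channel grayscale images
    image1 = Highgui.imread("room2.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
    image2 = Highgui.imread("room3.jpg", Highgui.CV_LOAD_IMAGE_GRAYSCALE);

    // Option 2: convert already-loaded colour images to grayscale before detection
    Mat gray1 = new Mat();
    Mat gray2 = new Mat();
    Imgproc.cvtColor(image1, gray1, Imgproc.COLOR_BGR2GRAY);
    Imgproc.cvtColor(image2, gray2, Imgproc.COLOR_BGR2GRAY);
    fd.detect(gray1, keypoints1);
    fd.detect(gray2, keypoints2);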

+0

I have already tried the above, Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 1) - but without success. I have tried many different combinations of algorithms for detection, extraction and matching, but still no success. I have also tried many different reprojection error values (the fourth parameter of findHomography()). Not really sure how to move forward. – user3019612

+0

See my edited answer: the images must be grayscale. – dervish

0

The "incorrectly stitched image" you posted looks like the result of a badly conditioned H matrix. In addition to +dervish's suggestions, run:

cv::determinant(H) > 0.01 

to check whether your H matrix is "usable". If the matrix is badly conditioned, you get the effect you are showing.
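Since the question uses the Java API rather than C++, the same sanity check there would look something like this (Core.determinant works on the square, floating-point 3x3 matrix that findHomography returns):

    // Reject homographies that are close to degenerate before warping
    double det = Core.determinant(H);
    if (Math.abs(det) < 0.01) {
        System.out.println("Homography looks badly conditioned, det = " + det);
    }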

You are drawing onto a 2x2 canvas size; if that is the case, you will not see a large range of stitching configurations, i.e. it works when image A sits to the left of image B but not otherwise. Try drawing the output onto a 3x3 canvas size, using the following snippet:

// Use the Homography Matrix to warp the images, but offset it to the 
    // center of the output canvas. Careful to pre-multiply, not post-multiply. 
    cv::Mat Offset = (cv::Mat_<double>(3,3) << 1, 0, width,
                                               0, 1, height,
                                               0, 0, 1);
    H = Offset * H;

    cv::Mat result; 
    cv::warpPerspective(mat_l, 
         result, 
         H, 
         cv::Size(3*width, 3*height)); 
    // Copy the reference image to the center of the 3x3 output canvas. 
    cv::Mat roi = result.colRange(width,2*width).rowRange(height,2*height); 
    mat_r.copyTo(roi); 

where width and height are those of the input images, assumed here to be the same for both. Note that this warp assumes mat_l stays unchanged (flat) and mat_r is warped to be stitched.
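If you want to try the same idea directly from the Java API, a rough translation could look like the sketch below; it assumes image1 plays the role of mat_l, image2 plays the role of mat_r, and width/height come from image1 as in the question. Core.gemm is used for the pre-multiplication because the Java bindings have no * operator for Mat:

    int width = image1.cols();
    int height = image1.rows();

    // Translation that shifts the warped image into the centre cell of a 3x3 canvas
    Mat offset = Mat.eye(3, 3, CvType.CV_64F);
    offset.put(0, 2, width);
    offset.put(1, 2, height);

    // Pre-multiply: apply H first, then the offset (offsetH = offset * H)
    Mat offsetH = new Mat();
    Core.gemm(offset, H, 1, new Mat(), 0, offsetH);

    Mat result = new Mat();
    Imgproc.warpPerspective(image1, result, offsetH, new Size(3 * width, 3 * height));

    // Copy the reference image into the centre cell of the canvas
    Mat roi = result.submat(new Rect(width, height, image2.cols(), image2.rows()));
    image2.copyTo(roi);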