
I've been working with the Leap Motion for quite a while now. The 2.1+ SDK versions let us access the cameras and get the raw images. I want to use those images with OpenCV for square/circle detection and so on... the problem is that I can't get the images undistorted. I read the documentation, but I don't fully understand what it means. Here is the one thing I need to get past before moving forward, per the docs: un-distorting the raw images received from the Leap Motion cameras.

distortion_data_ = image.distortion();
for (int d = 0; d < image.distortionWidth() * image.distortionHeight(); d += 2)
{
    float dX = distortion_data_[d];
    float dY = distortion_data_[d + 1];
    if (!((dX < 0) || (dX > 1)) && !((dY < 0) || (dY > 1)))
    {
        //what do i do now to undistort the image?
    }
}
data = image.data();
mat.put(0, 0, data);
//Imgproc.Canny(mat, mat, 100, 200);
//mat = findSquare(mat);
ok.showImage(mat);

If I understand it correctly, it says something like: "The calibration map can be used to correct image distortion due to lens curvature and other imperfections. The map is a 64x64 grid of points. Each point consists of two 32-bit values...." (the rest is on the developer site)
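For illustration, here is a small sketch (C++; the helper name is mine, not from the SDK) of how one point of that 64x64 grid would be read from the flat buffer returned by Image::distortion(), assuming a row-major layout with two floats per grid point, so one grid row is 64 * 2 = 128 floats, which matches what distortionWidth() reports:

#include <utility>

// Illustrative helper: read grid point (i, j), 0 <= i, j < 64, from the
// flat distortion buffer. Each point holds a normalized (x, y) in [0..1].
std::pair<float, float> gridPoint(const float* distortion, int i, int j)
{
    const int rowStride = 64 * 2; // floats per grid row
    float dX = distortion[j * rowStride + i * 2];
    float dY = distortion[j * rowStride + i * 2 + 1];
    return std::make_pair(dX, dY);
}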

Can someone please explain this in detail, OR just post the Java code to undistort the image and give me an output Mat so I can continue processing (I'd still like a good explanation if possible).


What is the resolution of your camera images? Probably each point in the calibration map tells you the offset by which to move that pixel. Maybe you can use it to initialize another map for use with cv::remap – Micka 2014-11-02 10:51:32


@Micka The raw image resolution is 640x240. 'The map is a 64x64 grid of points. Each point consists of two 32-bit values' [link](https://developer.leapmotion.com/documentation/skeletal/java/devguide/Leap_Images.html#get-the-calibration-map) If you check the link you can see the raw image I get; the page describes how to undistort it, but I don't quite understand it :( – user3648343 2014-11-02 11:29:07


Are you able to convert from C++ to Java yourself? It is explained quite well in the documentation; I'll add an answer soon. – Micka 2014-11-02 12:55:30

Answers


Here is an example of how to do this without using OpenCV. The following seems to be faster than using the Leap::Image::warp() method (probably due to the extra function call overhead when using warp()):

//Assumes a valid Leap::Image named image (e.g. from controller.images())
//and <cmath> for truncf()
float destinationWidth = 320;
float destinationHeight = 120;
unsigned char destination[(int)destinationWidth][(int)destinationHeight];

//define needed variables outside the inner loop
float calX, calY, weightX, weightY, dX1, dX2, dX3, dX4, dY1, dY2, dY3, dY4, dX, dY;
int x1, x2, y1, y2, denormalizedX, denormalizedY;
int x, y;

const unsigned char* raw = image.data();
const float* distortion_buffer = image.distortion();

//Local variables for values needed in loop
const int distortionWidth = image.distortionWidth();
const int width = image.width();
const int height = image.height();

for (x = 0; x < destinationWidth; x++) {
    for (y = 0; y < destinationHeight; y++) {
        //Calculate the position in the calibration map (still with a fractional part)
        calX = 63 * x / destinationWidth;
        calY = 63 * y / destinationHeight;
        //Save the fractional part to use as the weight for interpolation
        weightX = calX - truncf(calX);
        weightY = calY - truncf(calY);

        //Get the x,y coordinates of the closest calibration map points to the target pixel
        x1 = calX; //Note truncation to int
        y1 = calY;
        x2 = x1 + 1;
        y2 = y1 + 1;

        //Look up the x and y values for the 4 calibration map points around the target
        // (x1, y1) .. .. .. (x2, y1)
        // ..                ..
        // ..    (x, y)      ..
        // ..                ..
        // (x1, y2) .. .. .. (x2, y2)
        dX1 = distortion_buffer[x1 * 2 + y1 * distortionWidth];
        dX2 = distortion_buffer[x2 * 2 + y1 * distortionWidth];
        dX3 = distortion_buffer[x1 * 2 + y2 * distortionWidth];
        dX4 = distortion_buffer[x2 * 2 + y2 * distortionWidth];
        dY1 = distortion_buffer[x1 * 2 + y1 * distortionWidth + 1];
        dY2 = distortion_buffer[x2 * 2 + y1 * distortionWidth + 1];
        dY3 = distortion_buffer[x1 * 2 + y2 * distortionWidth + 1];
        dY4 = distortion_buffer[x2 * 2 + y2 * distortionWidth + 1];

        //Bilinear interpolation of the looked-up values:
        // X value
        dX = dX1 * (1 - weightX) * (1 - weightY) + dX2 * weightX * (1 - weightY) + dX3 * (1 - weightX) * weightY + dX4 * weightX * weightY;

        // Y value
        dY = dY1 * (1 - weightX) * (1 - weightY) + dY2 * weightX * (1 - weightY) + dY3 * (1 - weightX) * weightY + dY4 * weightX * weightY;

        // Reject points outside the range [0..1]
        if ((dX >= 0) && (dX <= 1) && (dY >= 0) && (dY <= 1)) {
            //Denormalize from [0..1] to [0..width] or [0..height]
            denormalizedX = dX * width;
            denormalizedY = dY * height;

            //look up the brightness value for the target pixel
            destination[x][y] = raw[denormalizedX + denormalizedY * width];
        } else {
            destination[x][y] = -1; //wraps to 255 for unsigned char, marking invalid pixels
        }
    }
}
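If you need the result as an OpenCV Mat (as asked in the question), note that destination above is indexed [x][y] while cv::Mat is row-major (y, x). A minimal sketch of the copy, assuming the OpenCV C++ API is available:

// Wrap the undistorted buffer in a cv::Mat for further processing.
// (requires #include <opencv2/opencv.hpp>)
cv::Mat undistorted((int)destinationHeight, (int)destinationWidth, CV_8UC1);
for (int yy = 0; yy < (int)destinationHeight; yy++)
    for (int xx = 0; xx < (int)destinationWidth; xx++)
        undistorted.at<unsigned char>(yy, xx) = destination[xx][yy];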

This works at a very high fps! – user3648343 2014-11-15 19:52:52


Ok, I don't have a Leap camera to test any of this, but this is how I understand the documentation:

The calibration map doesn't hold offsets but full point positions: an entry tells you where the pixel has to be placed. The values are normalized between 0 and 1, which means you have to rescale them by the width and height of your real image.

What is not explained explicitly is how your pixel positions map to the 64x64 positions of the calibration map. I assume it's just uniform scaling: the 640 pixel width is mapped to 64 map columns, and the 240 pixel height to 64 map rows.

So in general, to find, for one of the 640x240 pixel positions (pX, pY), the corresponding position in the distorted image, you would (a sketch in code follows the list):

1. Compute the corresponding calibration map position: float cX = pX/640.0f * 64.0f; float cY = pY/240.0f * 64.0f;
2. (cX, cY) is now the position of that pixel in the calibration map. You will have to interpolate between pixel positions, but for now I'll only explain how to continue with a discrete position in the calibration map, (cX', cY') = the rounded locations of (cX, cY).
3. Read the x and y values dX, dY out of the calibration map, as shown in the documentation. The position in the flat array is computed from the rounded map coordinates: d = cY'*calibrationMapWidth*2 + cX'*2; giving dX = calibMap[d] and dY = calibMap[d+1].
4. dX and dY are values between 0 and 1 (if they aren't, don't undistort this pixel, because no undistortion is available). To find the pixel position in your real image, multiply by the image size: uX = dX*640; uY = dY*240;
5. Set your pixel to the undistorted value: undistortedImage(pX,pY) = distortedImage(uX,uY);
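Putting the five steps together, here is a minimal sketch in C++ of the nearest-neighbor version (using step 2's rounding simplification, no interpolation yet). It assumes a 640x240 single-channel buffer named distorted and the 64x64 calibration map as a flat float array named calibMap; both names are illustrative:

#include <cmath>

// Sketch: undistort one pixel (pX, pY) via a nearest-neighbor lookup in the
// 64x64 calibration map (row-major, 2 floats per grid point).
unsigned char undistortPixel(const unsigned char* distorted,
                             const float* calibMap, int pX, int pY)
{
    // Step 1: fractional position in the 64x64 calibration map
    float cX = pX / 640.0f * 64.0f;
    float cY = pY / 240.0f * 64.0f;

    // Step 2: round to a discrete grid position (interpolation comes later)
    int cXr = (int)std::floor(cX + 0.5f); if (cXr > 63) cXr = 63;
    int cYr = (int)std::floor(cY + 0.5f); if (cYr > 63) cYr = 63;

    // Step 3: read the normalized source position from the map
    int d = cYr * 64 * 2 + cXr * 2;
    float dX = calibMap[d];
    float dY = calibMap[d + 1];

    // Step 4: values outside [0..1] mean no undistortion is available here
    if (dX < 0.0f || dX > 1.0f || dY < 0.0f || dY > 1.0f)
        return 0; // arbitrary fill value for invalid pixels

    // Step 5: denormalize and fetch the pixel from the distorted image
    int uX = (int)(dX * 640.0f); if (uX > 639) uX = 639;
    int uY = (int)(dY * 240.0f); if (uY > 239) uY = 239;
    return distorted[uY * 640 + uX];
}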

But you don't have discrete point positions in your calibration map, so you have to interpolate. I'll give you an example:

Let (cX, cY) = (13.7, 10.4)

Then you read four values from your calibration map:

1. calibMap(13,10) = (dX1, dY1)
2. calibMap(14,10) = (dX2, dY2)
3. calibMap(13,11) = (dX3, dY3)
4. calibMap(14,11) = (dX4, dY4)

Now your undistorted pixel position for (13.7, 10.4) is computed as follows (first multiply each dX/dY by 640 or 240 to get uX1, uY1, uX2, etc.):

// interpolate in x direction first:
float tmpUX1 = uX1*0.3 + uX2*0.7;
float tmpUY1 = uY1*0.3 + uY2*0.7;

float tmpUX2 = uX3*0.3 + uX4*0.7;
float tmpUY2 = uY3*0.3 + uY4*0.7;

// now interpolate in y direction
float combinedX = tmpUX1*0.6 + tmpUX2*0.4;
float combinedY = tmpUY1*0.6 + tmpUY2*0.4;

and your undistorted point is:

undistortedImage(pX,pY) = distortedImage(floor(combinedX+0.5), floor(combinedY+0.5));

or interpolate between the pixel values there as well.

Hope this helps for a basic understanding. I'll add the OpenCV remap code soon! The only thing I'm not sure about is whether the mapping between pX/pY and cX/cY is correct, since that's not explicitly explained in the documentation.

Here is some code. You can skip the first part, where I fake a distortion and create the calibration map; that state is your initial situation.

Using OpenCV it's quite simple: just resize the calibration map to your image size and multiply all its values by your resolution. The nice thing is that OpenCV performs the interpolation "automatically" while resizing.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat input = cv::imread("../Data/Lenna.png");

    cv::Mat distortedImage = input.clone();

    // now I fake some distortion:
    cv::Mat transformation = cv::Mat::eye(3, 3, CV_64FC1);
    transformation.at<double>(0,0) = 2.0;
    cv::warpPerspective(input, distortedImage, transformation, input.size());

    cv::imshow("distortedImage", distortedImage);
    //cv::imwrite("../Data/LenaFakeDistorted.png", distortedImage);

    // now fake a calibration map corresponding to my faked distortion:
    const unsigned int cmWidth = 64;
    const unsigned int cmHeight = 64;

    // compute the calibration map by transforming image locations to values between 0 and 1 for legal positions.
    float calibMap[cmWidth*cmHeight*2];
    for(unsigned int y = 0; y < cmHeight; ++y)
        for(unsigned int x = 0; x < cmWidth; ++x)
        {
            float xx = (float)x/(float)cmWidth;
            xx = xx*2.0f; // this is from my fake distortion... it gives some values bigger than 1
            float yy = (float)y/(float)cmHeight;

            calibMap[y*cmWidth*2 + 2*x] = xx;
            calibMap[y*cmWidth*2 + 2*x+1] = yy;
        }

    // NOW you have the initial situation of your scenario: calibration map and distorted image...

    // compute the image locations of calibration map values:
    cv::Mat cMapMatX = cv::Mat(cmHeight, cmWidth, CV_32FC1);
    cv::Mat cMapMatY = cv::Mat(cmHeight, cmWidth, CV_32FC1);
    for(unsigned int j = 0; j < cmHeight; ++j)
        for(unsigned int i = 0; i < cmWidth; ++i)
        {
            cMapMatX.at<float>(j,i) = calibMap[j*cmWidth*2 + 2*i];
            cMapMatY.at<float>(j,i) = calibMap[j*cmWidth*2 + 2*i+1];
        }

    //cv::imshow("mapX", cMapMatX);
    //cv::imshow("mapY", cMapMatY);

    // interpolate those values for each of your original image's pixels:
    // here I use linear interpolation, you could use cubic or other interpolation too.
    cv::resize(cMapMatX, cMapMatX, distortedImage.size(), 0, 0, cv::INTER_LINEAR);
    cv::resize(cMapMatY, cMapMatY, distortedImage.size(), 0, 0, cv::INTER_LINEAR);

    // now the calibration map has the size of your original image, but its values are still between 0 and 1 (for legal positions)
    // so scale to image size:
    cMapMatX = distortedImage.cols * cMapMatX;
    cMapMatY = distortedImage.rows * cMapMatY;

    // now create the undistorted image:
    cv::Mat undistortedImage = cv::Mat(distortedImage.rows, distortedImage.cols, CV_8UC3);
    undistortedImage.setTo(cv::Vec3b(0,0,0)); // initialize black

    //cv::imshow("undistorted", undistortedImage);

    for(int j = 0; j < undistortedImage.rows; ++j)
        for(int i = 0; i < undistortedImage.cols; ++i)
        {
            cv::Point undistPosition;
            undistPosition.x = (cMapMatX.at<float>(j,i)); // this truncates the position; maybe you want interpolation instead
            undistPosition.y = (cMapMatY.at<float>(j,i));

            if(undistPosition.x >= 0 && undistPosition.x < distortedImage.cols
               && undistPosition.y >= 0 && undistPosition.y < distortedImage.rows)
            {
                undistortedImage.at<cv::Vec3b>(j,i) = distortedImage.at<cv::Vec3b>(undistPosition);
            }
        }

    cv::imshow("undistorted", undistortedImage);
    cv::waitKey(0);
    //cv::imwrite("../Data/LenaFakeUndistorted.png", undistortedImage);

    return 0;
}



I used this as input, faked a remapping/distortion from it, and computed my calibration map:

Input:

[image]

Faked distortion:

[image]

Undistorted with the map:

[image]

TODO: after those computations, use an OpenCV map with these values to perform a faster remapping.
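A sketch of that TODO, reusing cMapMatX/cMapMatY from the code above: after the resize and rescale they already hold, for every destination pixel, the source position in pixels, which is exactly the two-map CV_32FC1 format cv::remap expects:

// dst(x,y) = src(mapX(x,y), mapY(x,y)), interpolated; out-of-range
// positions are filled with the black border value.
cv::Mat remapped;
cv::remap(distortedImage, remapped, cMapMatX, cMapMatY,
          cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
cv::imshow("undistorted via remap", remapped);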


Tried it a lot in Java... getting a black screen, but I think I'm doing something wrong. I'll just try it in C++ and reply in the comments. Thanks a lot, this code made everything so much clearer! – user3648343 2014-11-03 15:31:45


You are basically correct about the calibration grid. One thing to note, though, is that the grid isn't tied directly to the 640x240 pixel size of the raw data: you can use the same basic approach for target images of any size (within reason for how far you can blow the image up, obviously). In fact, the API is designed so it can be used in a fragment shader, where you don't know the output pixel size. – 2014-11-05 19:07:28