OpenCV3 matrix transform on a KeyPoint fails

I am using Python 3.5.2 and OpenCV 3.1.0. I am trying to transform some keypoints of a query image with the transformation matrix generated by cv2.getAffineTransform() (see the code below). No matter what I pass to the transform function, it always fails with this error:

cv2.error: D:\opencv\sources\modules\core\src\matmul.cpp:1947: error: (-215) scn == m.cols || scn + 1 == m.cols in function cv::transform

How do I have to pass the keypoints so that cv2.transform() works?

import cv2 
import numpy as np 
import random 

# raw strings so the backslashes in the Windows paths are not treated as escapes
queryImage_path = r"C:\tmp\query.jpg"
trainImage_path = r"C:\tmp\train.jpg"

queryImage = cv2.imread(queryImage_path, cv2.IMREAD_COLOR) 
trainImage = cv2.imread(trainImage_path, cv2.IMREAD_COLOR) 

surf = cv2.xfeatures2d.SURF_create() 

queryImage_keypoints = surf.detect(queryImage,None) 
trainImage_keypoints = surf.detect(trainImage, None) 

queryImage_keypoints, queryImage_descriptors = surf.compute(queryImage, queryImage_keypoints) 
trainImage_keypoints, trainImage_descriptors = surf.compute(trainImage, trainImage_keypoints) 

bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True) 
matches = bf.match(queryImage_descriptors, trainImage_descriptors) 

# pick three distinct random match indices
match_index_a, match_index_b, match_index_c = random.sample(range(len(matches)), 3) 

# get Keypoints from match indices 

# queryImage- keypoints 
queryImage_keypoint_a = queryImage_keypoints[matches[match_index_a].queryIdx] 
queryImage_keypoint_b = queryImage_keypoints[matches[match_index_b].queryIdx] 
queryImage_keypoint_c = queryImage_keypoints[matches[match_index_c].queryIdx] 
# trainImage-keypoints 
trainImage_keypoint_a = trainImage_keypoints[matches[match_index_a].trainIdx] 
trainImage_keypoint_b = trainImage_keypoints[matches[match_index_b].trainIdx] 
trainImage_keypoint_c = trainImage_keypoints[matches[match_index_c].trainIdx] 

# get affine transformation matrix from these 6 keypoints 
trainImage_points = np.float32([[trainImage_keypoint_a.pt[0], trainImage_keypoint_a.pt[1]], 
           [trainImage_keypoint_b.pt[0], trainImage_keypoint_b.pt[1]], 
           [trainImage_keypoint_c.pt[0], trainImage_keypoint_c.pt[1]]]) 
queryImage_points = np.float32([[queryImage_keypoint_a.pt[0], queryImage_keypoint_a.pt[1]], 
           [queryImage_keypoint_b.pt[0], queryImage_keypoint_b.pt[1]], 
           [queryImage_keypoint_c.pt[0], queryImage_keypoint_c.pt[1]]]) 

# get transformation matrix for current points 
currentMatrix = cv2.getAffineTransform(queryImage_points, trainImage_points) 

queryImage_keypoint = queryImage_keypoints[matches[0].queryIdx] 

keypoint_asArray = np.array([[queryImage_keypoint.pt[0]], [queryImage_keypoint.pt[1]], [1]]) 

#queryImage_warped_keypoint = currentMatrix.dot(keypoint_asArray) 
queryImage_warped_keypoint = cv2.transform(keypoint_asArray,currentMatrix)  

Answer

Use

keypoint_asArray = np.array([[[queryImage_keypoint.pt[0], queryImage_keypoint.pt[1], 1]]]) 

instead of

keypoint_asArray = np.array([[queryImage_keypoint.pt[0]], [queryImage_keypoint.pt[1]], [1]]) 
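
The input shape matters because cv2.transform() reads the point coordinates from the channel (last) axis: with a 2x3 affine matrix the assertion scn == m.cols || scn + 1 == m.cols requires 3 channels (homogeneous [x, y, 1]) or 2 channels (a 1 is appended implicitly), whereas a (3, 1) column vector is interpreted as three single-channel points. As a minimal sketch, reusing queryImage_keypoints and currentMatrix from the question, all query keypoints can be warped in one call:

# Minimal sketch: warp every query keypoint with the affine matrix at once.
# Shape (N, 1, 2) means N points with 2 channels (x, y); cv2.transform
# appends the homogeneous 1 itself because scn + 1 == m.cols.
pts = np.float32([kp.pt for kp in queryImage_keypoints]).reshape(-1, 1, 2)
warped_pts = cv2.transform(pts, currentMatrix)  # result shape: (N, 1, 2)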

Thank you very much! – Lyndra