Python, OpenCV - aligning and overlaying multiple images one after another

My project is to align aerial photos to make a mosaic map out of them. My plan is to start with two photos, align the second with the first, and create an "initial mosaic" from the two aligned images. Once that is done, I align the third photo with the initial mosaic, then align the fourth photo with the result of that, and so on, progressively building up the map.

I have two techniques for doing this, but the more accurate one, which uses calcOpticalFlowPyrLK(), only works for the two-image stage, because the two input images must be the same size. Because of that I tried a new solution, but it is less accurate and introduces error at every step, eventually producing nonsensical results.

My question is two-fold, but if you know the answer to one part you don't have to answer both, unless you want to. First, is there a way to use something like calcOpticalFlowPyrLK() with two images of different sizes (this includes any potential workarounds)? Second, is there a way to modify the detector/descriptor solution to make it more accurate?
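To show the kind of workaround I mean in the first question, one idea I have not tested is to pad the smaller image up to the mosaic's size before tracking. The file names below are just placeholders, and I don't know whether the black border throws off the pyramid tracking:

import cv2
import numpy as np
# UNTESTED idea: pad the new photo to the mosaic's size so
# calcOpticalFlowPyrLK() sees two images of equal dimensions.
# Assumes the mosaic is at least as large as the photo.
mosaic = cv2.imread("images/mosaic.jpg")   # placeholder path
photo = cv2.imread("images/3.jpg")         # placeholder path
m_h, m_w = mosaic.shape[:2]
p_h, p_w = photo.shape[:2]
# pad on the bottom and right with black pixels
photo_padded = cv2.copyMakeBorder(photo, 0, m_h - p_h, 0, m_w - p_w,
                                  cv2.BORDER_CONSTANT, value=0)
mosaic_gray = cv2.cvtColor(mosaic, cv2.COLOR_BGR2GRAY)
photo_gray = cv2.cvtColor(photo_padded, cv2.COLOR_BGR2GRAY)
mosaic_features = cv2.goodFeaturesToTrack(mosaic_gray, 3000, .01, 10)
photo_features, stati, _ = cv2.calcOpticalFlowPyrLK(mosaic_gray, photo_gray,
                                                    mosaic_features, None)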
Here is the accurate version, which only works for two images:
import cv2
import numpy as np
# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")
# convert to grayscale
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
# find the coordinates of good features to track in base
base_features = cv2.goodFeaturesToTrack(base_gray, 3000, .01, 10)
# find the corresponding features in the current photo with Lucas-Kanade tracking
curr_features, pyr_stati, _ = cv2.calcOpticalFlowPyrLK(base_gray, curr_gray, base_features, None)
# only add features for which a match was found to the pruned arrays
base_features_pruned = []
curr_features_pruned = []
for index, status in enumerate(pyr_stati):
    if status == 1:
        base_features_pruned.append(base_features[index])
        curr_features_pruned.append(curr_features[index])
# convert lists to numpy arrays so they can be passed to the OpenCV functions
bf_final = np.asarray(base_features_pruned)
cf_final = np.asarray(curr_features_pruned)
# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)
# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)
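The actual mosaic step is roughly along these lines (a simplified sketch, not my exact code): size a canvas large enough to contain both images, shift the homography so nothing lands at negative coordinates, warp curr into the canvas, then paste base into place.

# corners of curr after the homography, plus the corners of base
h_base, w_base = base.shape[:2]
h_curr, w_curr = curr.shape[:2]
curr_corners = np.float32([[0, 0], [w_curr, 0], [w_curr, h_curr], [0, h_curr]]).reshape(-1, 1, 2)
base_corners = np.float32([[0, 0], [w_base, 0], [w_base, h_base], [0, h_base]]).reshape(-1, 1, 2)
all_corners = np.concatenate((cv2.perspectiveTransform(curr_corners, transformation), base_corners))
# bounding box of both images in base's coordinate frame
x_min, y_min = np.int32(np.floor(all_corners.min(axis=0).ravel()))
x_max, y_max = np.int32(np.ceil(all_corners.max(axis=0).ravel()))
# translate so the top-left of the bounding box becomes (0, 0)
shift = np.array([[1, 0, -x_min], [0, 1, -y_min], [0, 0, 1]], dtype=np.float64)
mosaic = cv2.warpPerspective(curr, shift.dot(transformation),
                             (int(x_max - x_min), int(y_max - y_min)))
# paste base over the warped curr (no blending in this sketch)
mosaic[-y_min:h_base - y_min, -x_min:w_base - x_min] = base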
And here is the inaccurate one, which works for multiple images (until the error becomes too large):
import cv2
import numpy as np
# load images
base = cv2.imread("images/1.jpg")
curr = cv2.imread("images/2.jpg")
# convert to grayscale
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
# DIFFERENCES START
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
# create detector, get keypoints and descriptors
detector = cv2.ORB_create()
base_keys, base_desc = detector.detectAndCompute(base_gray, None)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, None)
# match the descriptors between the two images
matcher = cv2.DescriptorMatcher_create("BruteForce-Hamming")
matches = matcher.match(base_desc, curr_desc)
# keep only matches whose distance is close to the best match
max_dist = 0.0
min_dist = 100.0
for match in matches:
    dist = match.distance
    min_dist = dist if dist < min_dist else min_dist
    max_dist = dist if dist > max_dist else max_dist
good_matches = [match for match in matches if match.distance <= 3 * min_dist]
# extract the (x, y) coordinates of the matched keypoints
base_matches = []
curr_matches = []
for match in good_matches:
    base_matches.append(base_keys[match.queryIdx].pt)
    curr_matches.append(curr_keys[match.trainIdx].pt)
bf_final = np.asarray(base_matches)
cf_final = np.asarray(curr_matches)
# SAME AS BEFORE
# find perspective transformation using the arrays of corresponding points
transformation, hom_stati = cv2.findHomography(cf_final, bf_final, method=cv2.RANSAC, ransacReprojThreshold=1)
# transform the images and overlay them to see if they align properly
# not what I do in the actual program, just for use in the example code
# so that you can see how they align, if you decide to run it
height, width = curr.shape[:2]
mod_photo = cv2.warpPerspective(curr, transformation, (width, height))
new_image = cv2.addWeighted(mod_photo, .5, base, .5, 1)
Homographies compose. So if you have the homography h12 from img1 to img2 and the homography h23 from img2 to img3, then h12.dot(h23) is the homography from img1 to img3. –
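A minimal sketch of what that composition looks like numerically (toy translation-only homographies, just to illustrate the comment; in the real pipeline h12 and h23 would come from cv2.findHomography as above):

import numpy as np
# h12 warps img2 into img1's frame (maps img2 coordinates to img1 coordinates),
# h23 warps img3 into img2's frame; toy values, pure translations for clarity
h12 = np.array([[1.0, 0.0, 40.0],
                [0.0, 1.0, 10.0],
                [0.0, 0.0, 1.0]])
h23 = np.array([[1.0, 0.0, 35.0],
                [0.0, 1.0, -5.0],
                [0.0, 0.0, 1.0]])
# composing them maps img3 coordinates straight into img1's frame,
# so img3 never has to be resized to match the mosaic
h13 = h12.dot(h23)
print(h13)   # a translation of (75, 5), i.e. the two shifts added together
# cv2.warpPerspective(img3, h13, mosaic_size) would then place img3 in img1's frame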