
I'm trying to capture what is inside the green viewfinder after taking a photo: capturing a still image from AVFoundation that matches the viewfinder frame shown on an AVCaptureVideoPreviewLayer, in Swift.

Please see the images:

[images 1–3]

Here is what the code is currently doing:

Before taking the photo:

[screenshot: preview before capture]

After taking the photo (the scale of the resulting image is wrong, because it does not match what was inside the green viewfinder):

[screenshot: result after capture]

As you can see, the image needs to be scaled up so that it matches what was originally contained within the green viewfinder. Even when I calculate what should be the correct scale (for an iPhone 6 I need to multiply the captured image's dimensions by 1.334), it doesn't work.

Any ideas?

Answer


Steps to solve this:

First, get the full-size image. I also use a UIImage extension called "correctlyOriented":

let correctImage = UIImage(data: imageData!)!.correctlyOriented() 

All this does is un-rotate the iPhone image, so that a portrait image (taken with the home button at the bottom of the iPhone) is oriented as expected. The extension is below:

extension UIImage {

    func correctlyOriented() -> UIImage {

        if imageOrientation == .up {
            return self
        }

        // We need to calculate the proper transformation to make the image upright.
        // We do it in 2 steps: rotate if Left/Right/Down, and then flip if Mirrored.
        var transform = CGAffineTransform.identity

        switch imageOrientation {
        case .down, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: size.height)
            transform = transform.rotated(by: CGFloat.pi)
        case .left, .leftMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.rotated(by: CGFloat.pi * 0.5)
        case .right, .rightMirrored:
            transform = transform.translatedBy(x: 0, y: size.height)
            transform = transform.rotated(by: -CGFloat.pi * 0.5)
        default:
            break
        }

        switch imageOrientation {
        case .upMirrored, .downMirrored:
            transform = transform.translatedBy(x: size.width, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        case .leftMirrored, .rightMirrored:
            transform = transform.translatedBy(x: size.height, y: 0)
            transform = transform.scaledBy(x: -1, y: 1)
        default:
            break
        }

        // Now we draw the underlying CGImage into a new context, applying the transform
        // calculated above.
        guard
            let cgImage = cgImage,
            let colorSpace = cgImage.colorSpace,
            let context = CGContext(data: nil,
                                    width: Int(size.width),
                                    height: Int(size.height),
                                    bitsPerComponent: cgImage.bitsPerComponent,
                                    bytesPerRow: 0,
                                    space: colorSpace,
                                    bitmapInfo: cgImage.bitmapInfo.rawValue) else {
            return self
        }

        context.concatenate(transform)

        switch imageOrientation {
        case .left, .leftMirrored, .right, .rightMirrored:
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.height, height: size.width))
        default:
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }

        // And now we just create a new UIImage from the drawing context.
        guard let rotatedCGImage = context.makeImage() else {
            return self
        }

        return UIImage(cgImage: rotatedCGImage)
    }
}

Next, calculate the height factor:

let heightFactor = self.view.frame.height/correctImage.size.height 

Create a new CGSize based on that height factor, then resize the image (using an image-resizing function, not shown in the original; a sketch follows below):

let newSize = CGSize(width: correctImage.size.width * heightFactor, height: correctImage.size.height * heightFactor) 

let correctResizedImage = self.imageWithImage(image: correctImage, scaledToSize: newSize) 
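The resizing function is not shown in the answer. As a reference, here is a minimal sketch of what imageWithImage(image:scaledToSize:) might look like, assuming a UIGraphicsImageRenderer-based helper (the original may use a different implementation):

import UIKit

// Hypothetical resizing helper matching the call above; not part of the original answer.
// Draws the image into a bitmap context of the target size and returns the result.
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}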

Now we have an image that is the same height as the device it is displayed on, but wider, because of the iPhone screen's 16:9 aspect ratio versus the iPhone camera's 4:3 aspect ratio. So, crop the image to the same size as the device screen:

let screenCrop: CGRect = CGRect(x: (newSize.width - self.view.bounds.width) * 0.5,
                                y: 0,
                                width: self.view.bounds.width,
                                height: self.view.bounds.height)


var correctScreenCroppedImage = self.crop(image: correctResizedImage, to: screenCrop) 
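The crop(image:to:) helper is not shown in the original answer either. A minimal sketch, assuming the rect is given in the image's point coordinate space and the image has already been correctly oriented:

import UIKit

// Hypothetical crop helper matching the calls above and below; not part of the original answer.
func crop(image: UIImage, to rect: CGRect) -> UIImage? {
    // CGImage works in pixels, so convert the point-based rect using the image's scale.
    let pixelRect = CGRect(x: rect.origin.x * image.scale,
                           y: rect.origin.y * image.scale,
                           width: rect.size.width * image.scale,
                           height: rect.size.height * image.scale)
    guard let croppedCGImage = image.cgImage?.cropping(to: pixelRect) else {
        return nil
    }
    return UIImage(cgImage: croppedCGImage, scale: image.scale, orientation: image.imageOrientation)
}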

Finally, we need to replicate the crop created by the green viewfinder. So, we perform one more crop so that the final image matches:

let correctCrop: CGRect = CGRect(x: 0,
                                 y: (correctScreenCroppedImage!.size.height * 0.5) - (correctScreenCroppedImage!.size.width * 0.5),
                                 width: correctScreenCroppedImage!.size.width,
                                 height: correctScreenCroppedImage!.size.width)

var correctCroppedImage = self.crop(image: correctScreenCroppedImage!, to: correctCrop) 
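Putting the steps together, the whole pipeline might look something like this. This is only a sketch: imageWithImage and crop are the hypothetical helpers sketched above, and self.view is assumed to be the full-screen preview view.

func viewfinderImage(from imageData: Data) -> UIImage? {
    // 1. Full-size, correctly oriented image.
    guard let correctImage = UIImage(data: imageData)?.correctlyOriented() else {
        return nil
    }

    // 2. Scale so the image height matches the screen height.
    let heightFactor = self.view.frame.height / correctImage.size.height
    let newSize = CGSize(width: correctImage.size.width * heightFactor,
                         height: correctImage.size.height * heightFactor)
    let correctResizedImage = self.imageWithImage(image: correctImage, scaledToSize: newSize)

    // 3. Crop away the extra width so the image matches the screen.
    let screenCrop = CGRect(x: (newSize.width - self.view.bounds.width) * 0.5,
                            y: 0,
                            width: self.view.bounds.width,
                            height: self.view.bounds.height)
    guard let screenCropped = self.crop(image: correctResizedImage, to: screenCrop) else {
        return nil
    }

    // 4. Crop to the square viewfinder region, centred vertically.
    let correctCrop = CGRect(x: 0,
                             y: (screenCropped.size.height * 0.5) - (screenCropped.size.width * 0.5),
                             width: screenCropped.size.width,
                             height: screenCropped.size.width)
    return self.crop(image: screenCropped, to: correctCrop)
}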

Credit for this answer goes to @damirstuhec.