16

iOS: Capture an image from the front-facing camera

I'm building an app and want to capture an image from the front-facing camera without showing any kind of capture screen. I want to take the photo entirely in code, without any user interaction. How can I do this with the front camera?

+2

You mean capture images silently, without the user knowing anything? – rid 2012-04-17 20:57:07

+2

Yes, I know it sounds bad, but it's completely harmless. The app will make them pull a funny face, and I want to capture it so they can see how silly they look. – mtmurdock 2012-04-17 20:58:01

+1

Your implementation of such a feature may be harmless, but I can think of plenty of other uses that wouldn't be (which is probably why it isn't possible). – inkedmn 2012-12-28 00:35:39

Answers

3

You'll probably have to use AVFoundation to capture the video stream/images without displaying them. Unlike UIImagePickerController, it doesn't work "out of the box." Take Apple's AVCam sample as an example to get you started.
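
For illustration (my sketch, not part of the original answer), here is the same idea in Swift using the newer AVCapturePhotoOutput API: build a session with the front camera and a photo output, never attach a preview layer, and trigger the capture entirely in code. The class and method names are my own.

import UIKit
import AVFoundation

// Minimal sketch: capture a still from the front camera with no visible UI.
class SilentCapture: NSObject, AVCapturePhotoCaptureDelegate {
    private let session = AVCaptureSession()
    private let photoOutput = AVCapturePhotoOutput()

    func start() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: camera))
        session.addOutput(photoOutput)
        session.commitConfiguration()
        session.startRunning() // No preview layer is ever attached.
    }

    func snap() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // iOS 11+ delegate callback with the processed photo.
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation(), let image = UIImage(data: data) else { return }
        _ = image // Use the captured image here.
    }
}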

41

How to capture an image from the front-facing camera using AVFoundation:

Development notes:

  • Check your app's and images' orientation settings carefully (see the orientation sketch after this list)
  • AVFoundation and the frameworks around it are nasty behemoths and very difficult to understand/implement. I've made my code as lean as possible, but please check out this excellent tutorial for a much better explanation (the site is no longer available; use the archive.org link): http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera
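
On the first note (my addition, not from the original answer): one way to pin down buffer orientation is to set it explicitly on the data output's connection once the output has been added to the session. A sketch in Swift:

// Force a fixed orientation and selfie-style mirroring on the video output.
// Call this after the output has been added to the session, before startRunning.
func lockOrientation(of output: AVCaptureVideoDataOutput) {
    guard let connection = output.connection(with: .video) else { return }
    if connection.isVideoOrientationSupported {
        connection.videoOrientation = .portrait
    }
    if connection.isVideoMirroringSupported {
        connection.automaticallyAdjustsVideoMirroring = false
        connection.isVideoMirrored = true
    }
}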

ViewController.h

// Frameworks 
#import <CoreVideo/CoreVideo.h> 
#import <CoreMedia/CoreMedia.h> 
#import <AVFoundation/AVFoundation.h> 
#import <UIKit/UIKit.h> 

@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate> 

// Camera 
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView; 
@property (strong, nonatomic) AVCaptureDevice* device; 
@property (strong, nonatomic) AVCaptureSession* captureSession; 
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer; 
@property (strong, nonatomic) UIImage* cameraImage; 

@end 

ViewController.m

#import "CameraViewController.h" 

@implementation CameraViewController 

- (void)viewDidLoad 
{ 
    [super viewDidLoad]; 

    [self setupCamera]; 
    [self setupTimer]; 
} 

- (void)setupCamera 
{  
    NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]; 
    for(AVCaptureDevice *device in devices) 
    { 
     if([device position] == AVCaptureDevicePositionFront) 
      self.device = device; 
    } 

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil]; 
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init]; 
    output.alwaysDiscardsLateVideoFrames = YES; 

    dispatch_queue_t queue; 
    queue = dispatch_queue_create("cameraQueue", NULL); 
    [output setSampleBufferDelegate:self queue:queue]; 

    NSString* key = (NSString *) kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [output setVideoSettings:videoSettings]; 

    self.captureSession = [[AVCaptureSession alloc] init]; 
    [self.captureSession addInput:input]; 
    [self.captureSession addOutput:output]; 
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto]; 

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession]; 
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill; 

    // CHECK FOR YOUR APP 
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width); 
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight; 
    // CHECK FOR YOUR APP 

    [self.view.layer insertSublayer:self.previewLayer atIndex:0]; // Comment-out to hide preview layer 

    [self.captureSession startRunning]; 
} 

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 
{ 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer,0); 
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(newContext); 

    CGContextRelease(newContext); 
    CGColorSpaceRelease(colorSpace); 

    self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored]; 

    CGImageRelease(newImage); 

    CVPixelBufferUnlockBaseAddress(imageBuffer,0); 
} 

- (void)setupTimer 
{ 
    // The run loop retains the scheduled timer, so there's no need to store it. 
    [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES]; 
} 

- (void)snapshot 
{ 
    NSLog(@"SNAPSHOT"); 
    self.cameraImageView.image = self.cameraImage; // Comment-out to hide snapshot 
} 

@end 

Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals without any user input. Comment out the marked lines to remove the preview layer and the snapshot feedback.
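
One caveat that postdates this answer (my addition): since iOS 10 the app must declare NSCameraUsageDescription in Info.plist, and the session delivers no frames until the user grants camera access. A minimal gate in Swift, to run before setting up the session:

import AVFoundation

// Run `body` only once camera access has been granted.
func withCameraPermission(_ body: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        body()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .video) { granted in
            if granted { DispatchQueue.main.async(execute: body) }
        }
    default:
        break // Denied or restricted: capture is not possible.
    }
}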

Any other questions or comments, just let me know!

+1

Very nice! I suggest this answer be accepted instead of mine (assuming it works). – Tim 2012-12-28 19:17:20

+0

Would this be Apple App Store friendly? – mtmurdock 2012-12-30 21:18:17

+1

I'm not sure; this is the first time I've considered an app like this. I guess you'd need to dig into it and make sure users/Apple know it won't be used for any malicious purpose (as mentioned in the other posts). Your app sounds fun and harmless, so maybe it will be fine! – 2012-12-30 22:16:34

0

In the documentation for the UIImagePickerController class there is a method called takePicture. The docs say:

Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls.
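
A sketch of how that could look in Swift (my illustration; the view controller and the warm-up delay are assumptions, not from the documentation): present the picker with its controls hidden behind a custom overlay, then call takePicture() yourself.

import UIKit

class HiddenPickerViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let picker = UIImagePickerController()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return }
        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.showsCameraControls = false // Required before calling takePicture()
        picker.cameraOverlayView = UIView() // Your own (possibly empty) UI
        picker.delegate = self
        present(picker, animated: false) {
            // Give the camera a moment to warm up before capturing.
            DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
                self.picker.takePicture()
            }
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        let image = info[.originalImage] as? UIImage
        picker.dismiss(animated: false)
        _ = image // Use the captured image here.
    }
}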

2

I converted the code above from Objective-C to Swift 3, in case anyone is still looking in 2017:

import UIKit 
import AVFoundation 

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { 

@IBOutlet weak var cameraImageView: UIImageView! 

var device: AVCaptureDevice? 
var captureSession: AVCaptureSession? 
var previewLayer: AVCaptureVideoPreviewLayer? 
var cameraImage: UIImage? 

override func viewDidLoad() { 
    super.viewDidLoad() 

    setupCamera() 
    setupTimer() 
} 

func setupCamera() { 
    let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], 
                  mediaType: AVMediaTypeVideo, 
                  position: .front) 
    device = discoverySession?.devices[0] 

    let input: AVCaptureDeviceInput 
    do { 
     input = try AVCaptureDeviceInput(device: device) 
    } catch { 
     return 
    } 

    let output = AVCaptureVideoDataOutput() 
    output.alwaysDiscardsLateVideoFrames = true 

    let queue = DispatchQueue(label: "cameraQueue") 
    output.setSampleBufferDelegate(self, queue: queue) 
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA] 

    captureSession = AVCaptureSession() 
    captureSession?.addInput(input) 
    captureSession?.addOutput(output) 
    captureSession?.sessionPreset = AVCaptureSessionPresetPhoto 

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) 
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill 

    previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height) 

    view.layer.insertSublayer(previewLayer!, at: 0) 

    captureSession?.startRunning() 
} 

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { 
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
    let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!)) 
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!) 
    let width = CVPixelBufferGetWidth(imageBuffer!) 
    let height = CVPixelBufferGetHeight(imageBuffer!) 

    let colorSpace = CGColorSpaceCreateDeviceRGB() 
    let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: 
     CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) 

    let newImage = newContext!.makeImage() 
    // Keep the Objective-C version's mirrored orientation for the front camera. 
    cameraImage = UIImage(cgImage: newImage!, scale: 1.0, orientation: .downMirrored) 

    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
} 

func setupTimer() { 
    _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true) 
} 

func snapshot() { 
    print("SNAPSHOT") 
    cameraImageView.image = cameraImage 
} 
} 

As another solution, I found a shorter way to get an image from the CMSampleBuffer:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { 
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!) 
    let videoImage = UIImage(ciImage: myCIimage) 
    cameraImage = videoImage 
} 
+2

Thanks, this is a very useful starting point. – 2017-11-06 22:31:52

+0

No problem, I'm glad it's useful. Not sure whether it still works with Swift 4 without warnings popping up.. – 2017-11-07 09:57:07

+0

More than just warnings, some things needed to be changed, but fix-it mostly handled it. – 2017-11-07 22:17:31

0

Converted the code above to Swift 4:

import UIKit 
import AVFoundation 

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate { 

@IBOutlet weak var cameraImageView: UIImageView! 

var device: AVCaptureDevice? 
var captureSession: AVCaptureSession? 
var previewLayer: AVCaptureVideoPreviewLayer? 
var cameraImage: UIImage? 

override func viewDidLoad() { 
    super.viewDidLoad() 

    setupCamera() 
    setupTimer() 
} 

func setupCamera() { 
    let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], 
                  mediaType: AVMediaType.video, 
                  position: .front) 
    device = discoverySession.devices[0] 

    let input: AVCaptureDeviceInput 
    do { 
     input = try AVCaptureDeviceInput(device: device!) 
    } catch { 
     return 
    } 

    let output = AVCaptureVideoDataOutput() 
    output.alwaysDiscardsLateVideoFrames = true 

    let queue = DispatchQueue(label: "cameraQueue") 
    output.setSampleBufferDelegate(self, queue: queue) 
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA] 

    captureSession = AVCaptureSession() 
    captureSession?.addInput(input) 
    captureSession?.addOutput(output) 
    captureSession?.sessionPreset = AVCaptureSession.Preset.photo 

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!) 
    previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill 
    previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height) 

    view.layer.insertSublayer(previewLayer!, at: 0) 

    captureSession?.startRunning() 
} 

// Note: Swift 4 with the iOS 11 SDK renamed this delegate method; with the old 
// captureOutput(_:didOutputSampleBuffer:from:) signature it is silently never called. 
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) { 
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) 
    CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
    let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!)) 
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!) 
    let width = CVPixelBufferGetWidth(imageBuffer!) 
    let height = CVPixelBufferGetHeight(imageBuffer!) 

    let colorSpace = CGColorSpaceCreateDeviceRGB() 
    let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) 

    let newImage = newContext!.makeImage() 
    // Keep the Objective-C version's mirrored orientation for the front camera. 
    cameraImage = UIImage(cgImage: newImage!, scale: 1.0, orientation: .downMirrored) 

    CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0)) 
} 

func setupTimer() { 
    _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true) 
} 

@objc func snapshot() { 
    print("SNAPSHOT") 
    cameraImageView.image = cameraImage 
} 
}