iOS: Capture an image from the front-facing camera

I'm making an app and want to capture an image from the front-facing camera without showing any kind of capture screen. I want to take the picture entirely in code, without any user interaction. How can I do this for the front camera?
Answers
You'll need to use AVFoundation to capture the video stream/images without displaying them. Unlike UIImagePickerController, it doesn't work "out of the box." Use Apple's AVCam sample as an example to get started.
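As an aside, on modern iOS (10+) Apple's AVCapturePhotoOutput is the simpler route for a single silent still. A rough sketch, with class and method names of my own choosing (not from the original answer):

```swift
import UIKit
import AVFoundation

class SilentCaptureViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    let session = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.sessionPreset = .photo

        // Pick the front-facing wide-angle camera.
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input),
              session.canAddOutput(photoOutput)
        else { return }

        session.addInput(input)
        session.addOutput(photoOutput)
        session.startRunning()   // No preview layer is attached, so nothing is shown.
    }

    func takeSilentPhoto() {
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    // iOS 11+ delegate callback.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard let data = photo.fileDataRepresentation(),
              let image = UIImage(data: data) else { return }
        // Use `image` however you like.
        print("Captured image of size \(image.size)")
    }
}
```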
Here's how to capture an image from the front camera using AVFoundation:
Development caveats:
- Check your app's and your images' orientation settings carefully
- AVFoundation and its associated frameworks are nasty behemoths and very difficult to understand/implement. I've made my code as lean as possible, but please check out this excellent tutorial for a better explanation (the site is no longer up; link via archive.org): http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera
ViewController.h
// Frameworks
#import <CoreVideo/CoreVideo.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>
@interface CameraViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
// Camera
@property (weak, nonatomic) IBOutlet UIImageView* cameraImageView;
@property (strong, nonatomic) AVCaptureDevice* device;
@property (strong, nonatomic) AVCaptureSession* captureSession;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer* previewLayer;
@property (strong, nonatomic) UIImage* cameraImage;
@end
ViewController.m
#import "CameraViewController.h"
@implementation CameraViewController
- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setupCamera];
    [self setupTimer];
}
- (void)setupCamera
{
    NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice* device in devices)
    {
        if ([device position] == AVCaptureDevicePositionFront)
            self.device = device;
    }

    AVCaptureDeviceInput* input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];

    NSString* key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    self.captureSession = [[AVCaptureSession alloc] init];
    [self.captureSession addInput:input];
    [self.captureSession addOutput:output];
    [self.captureSession setSessionPreset:AVCaptureSessionPresetPhoto];

    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    // CHECK FOR YOUR APP
    self.previewLayer.frame = CGRectMake(0, 0, self.view.frame.size.height, self.view.frame.size.width);
    self.previewLayer.orientation = AVCaptureVideoOrientationLandscapeRight;
    // CHECK FOR YOUR APP

    [self.view.layer insertSublayer:self.previewLayer atIndex:0]; // Comment-out to hide preview layer

    [self.captureSession startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t* baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    self.cameraImage = [UIImage imageWithCGImage:newImage scale:1.0f orientation:UIImageOrientationDownMirrored];

    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
- (void)setupTimer
{
    NSTimer* cameraTimer = [NSTimer scheduledTimerWithTimeInterval:2.0f target:self selector:@selector(snapshot) userInfo:nil repeats:YES];
}

- (void)snapshot
{
    NSLog(@"SNAPSHOT");
    self.cameraImageView.image = self.cameraImage; // Comment-out to hide snapshot
}
@end
Hook this up to a UIViewController with a UIImageView for the snapshot, and it will work! Snapshots are taken programmatically at 2.0-second intervals without any user input. Comment out the marked lines to remove the preview layer and the snapshot feedback.

Any more questions/comments, just let me know!
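One practical note to add: on iOS 10 and later the app must also declare a camera usage string in its Info.plist, or it will be terminated on first camera access. Something like (the string value here is just an example):

```xml
<key>NSCameraUsageDescription</key>
<string>This app takes snapshots with the front camera.</string>
```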
The documentation for the UIImagePickerController class describes a method called takePicture. It says:

Use this method in conjunction with a custom overlay view to initiate the programmatic capture of a still image. This supports taking more than one picture without leaving the interface, but requires that you hide the default image picker controls.
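A minimal sketch of that takePicture() route, in Swift 3/4-era API (class name and flow are illustrative; note that the live camera feed still appears unless your overlay covers it):

```swift
import UIKit

class PickerCaptureViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let picker = UIImagePickerController()

    func presentHiddenPicker() {
        guard UIImagePickerController.isCameraDeviceAvailable(.front) else { return }

        picker.sourceType = .camera
        picker.cameraDevice = .front
        picker.showsCameraControls = false   // required before calling takePicture()
        picker.delegate = self

        present(picker, animated: false) {
            // Trigger the capture programmatically once the picker is up.
            self.picker.takePicture()
        }
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String: Any]) {
        let image = info[UIImagePickerControllerOriginalImage] as? UIImage
        print("Got image: \(String(describing: image?.size))")
        picker.dismiss(animated: false)
    }
}
```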
I converted the code above from Objective-C to Swift 3, in case anyone is still looking in 2017:
import UIKit
import AVFoundation
class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        let discoverySession = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                               mediaType: AVMediaTypeVideo,
                                                               position: .front)
        device = discoverySession?.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSessionPresetPhoto

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)

        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}
Also, a shorter solution I found for getting the image from the CMSampleBuffer:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let myPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let myCIimage = CIImage(cvPixelBuffer: myPixelBuffer!)
    let videoImage = UIImage(ciImage: myCIimage)
    cameraImage = videoImage
}
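One caveat with this shorter version, worth knowing: a UIImage created directly from a CIImage has no underlying CGImage, so some APIs (for example UIImagePNGRepresentation) return nil for it. If you hit that, render through a CIContext first; a sketch (the helper name is my own):

```swift
import UIKit
import CoreImage

// Create the CIContext once and reuse it; it is expensive to build per frame.
let ciContext = CIContext()

// Render a CIImage-backed frame into a CGImage-backed UIImage.
func uiImage(from ciImage: CIImage) -> UIImage? {
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```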
Thanks, this is a very useful starting point. – 2017-11-06 22:31:52
No problem, glad it was useful. Not sure whether it still works with Swift 4 without warnings popping up... – 2017-11-07 09:57:07
More than warnings: some things needed to change, but Fix-It mostly handled them. – 2017-11-07 22:17:31
The code above converted to Swift 4:
import UIKit
import AVFoundation
class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    @IBOutlet weak var cameraImageView: UIImageView!

    var device: AVCaptureDevice?
    var captureSession: AVCaptureSession?
    var previewLayer: AVCaptureVideoPreviewLayer?
    var cameraImage: UIImage?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCamera()
        setupTimer()
    }

    func setupCamera() {
        let discoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                                mediaType: AVMediaType.video,
                                                                position: .front)
        device = discoverySession.devices[0]

        let input: AVCaptureDeviceInput
        do {
            input = try AVCaptureDeviceInput(device: device!)
        } catch {
            return
        }

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true

        let queue = DispatchQueue(label: "cameraQueue")
        output.setSampleBufferDelegate(self, queue: queue)
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]

        captureSession = AVCaptureSession()
        captureSession?.addInput(input)
        captureSession?.addOutput(output)
        captureSession?.sessionPreset = AVCaptureSession.Preset.photo

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
        previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        previewLayer?.frame = CGRect(x: 0.0, y: 0.0, width: view.frame.width, height: view.frame.height)
        view.layer.insertSublayer(previewLayer!, at: 0)

        captureSession?.startRunning()
    }

    // Note: in Swift 4 / iOS 11 the delegate method is captureOutput(_:didOutput:from:);
    // the old didOutputSampleBuffer signature would never be called.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))

        let baseAddress = UnsafeMutableRawPointer(CVPixelBufferGetBaseAddress(imageBuffer!))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
        let width = CVPixelBufferGetWidth(imageBuffer!)
        let height = CVPixelBufferGetHeight(imageBuffer!)

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let newContext = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)

        let newImage = newContext!.makeImage()
        cameraImage = UIImage(cgImage: newImage!)

        CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    }

    func setupTimer() {
        _ = Timer.scheduledTimer(timeInterval: 2.0, target: self, selector: #selector(snapshot), userInfo: nil, repeats: true)
    }

    @objc func snapshot() {
        print("SNAPSHOT")
        cameraImageView.image = cameraImage
    }
}
Do you mean capturing the image silently, without the user knowing anything? – rid 2012-04-17 20:57:07
Yes, I know it sounds bad, but it's completely harmless. The app will get them to pull a funny face, and I want to capture it so they can see how silly they look. – mtmurdock 2012-04-17 20:58:01
Your implementation of such a feature may be harmless, but I can think of plenty of other cases that wouldn't be (which is probably why it's made difficult). – inkedmn 2012-12-28 00:35:39