captureOutput:didOutputSampleBuffer:fromConnection: performance issues

I'm using AVCaptureSessionPresetPhoto to let the user take high-resolution photos. When a photo is taken, I use the captureOutput:didOutputSampleBuffer:fromConnection: method to retrieve a thumbnail at capture time. However, even though I try to do minimal work in the delegate method, the app becomes somewhat laggy (I say somewhat because it is still usable). The iPhone also tends to get hot.

Is there any way to reduce the amount of work the iPhone has to do?

I set up the AVCaptureVideoDataOutput as follows:

self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init]; 
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES; 

// Specify the pixel format 
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
                                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey]; 

// Deliver frames to the delegate on a dedicated serial queue 
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL); 
[self.videoDataOutput setSampleBufferDelegate:self queue:queue]; 
dispatch_release(queue); 
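
One note on the pixel format: kCVPixelFormatType_32BGRA is convenient for Core Graphics, but it is not the sensor's native format, so every frame pays for a color conversion before it even reaches the delegate. A sketch of requesting the camera's native biplanar YUV format instead (the thumbnail code would then need a different drawing path, e.g. via the Y plane or CIImage):

// Sketch only: ask for the native 4:2:0 biplanar format to skip the per-frame
// BGRA conversion (this is the video-range variant; a full-range one also exists).
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] 
                                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey]; 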

Here is my captureOutput:didOutputSampleBuffer:fromConnection: (along with the helper imageRefFromSampleBuffer: method):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
    fromConnection:(AVCaptureConnection *)connection { 

    // Delegate callbacks arrive on the background queue, so give them their
    // own autorelease pool (this code predates ARC).
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; 
    if (videoDataOutputConnection == nil) { 
        videoDataOutputConnection = connection; 
    } 
    if (getThumbnail > 0) { 
        getThumbnail--; 
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer]; 
        UIImage *image; 
        if (self.prevLayer.mirrored) { 
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored]; 
        } 
        else { 
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight]; 
        } 
        [self.cameraThumbnailArray insertObject:image atIndex:0]; 
        // UIKit may only be touched on the main thread
        dispatch_async(dispatch_get_main_queue(), ^{ 
            self.freezeCameraView.image = image; 
        }); 
        [image release];                // the array and the block both retain it
        CGImageRelease(tempThumbnail);  // balance the +1 from the helper
    } 
    [pool release]; 
} 

-(CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer { 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Wrap the BGRA pixel data in a bitmap context and copy it out as a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGImageRef newImage = CGBitmapContextCreateImage(context); 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 
    return newImage; // caller is responsible for releasing
} 
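
Since only a thumbnail is needed, a lighter variant of this helper could draw the frame scaled down into a small context instead of copying it at full camera resolution, so the only full-size allocation is the pixel buffer itself. A sketch, assuming an arbitrary 192x144 target size:

// Sketch only: wrap the locked pixel data without copying it, then draw it
// scaled into a thumbnail-sized context. The 192x144 size is an assumption.
- (CGImageRef)thumbnailRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer { 
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // CGImage backed directly by the pixel buffer's memory (no full-size copy);
    // it must be drawn before the base address is unlocked below.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bytesPerRow * height, NULL); 
    CGImageRef fullImage = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, 
                                         kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, 
                                         provider, NULL, false, kCGRenderingIntentDefault); 

    size_t thumbWidth = 192, thumbHeight = 144; 
    CGContextRef thumbContext = CGBitmapContextCreate(NULL, thumbWidth, thumbHeight, 8, 0, colorSpace, 
                                                      kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    CGContextSetInterpolationQuality(thumbContext, kCGInterpolationLow); 
    CGContextDrawImage(thumbContext, CGRectMake(0, 0, thumbWidth, thumbHeight), fullImage); 

    CGImageRelease(fullImage); 
    CGDataProviderRelease(provider); 
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 

    CGImageRef thumbnail = CGBitmapContextCreateImage(thumbContext); 
    CGContextRelease(thumbContext); 
    CGColorSpaceRelease(colorSpace); 
    return thumbnail; // caller releases, as with imageRefFromSampleBuffer
} 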

I'm facing the same problem; it consumes a lot of memory and I don't know how to fix it. – Amitg2k12 2011-08-30 13:11:13


Have you tried not allocating a new autorelease pool? In my experience, didOutputSampleBuffer itself shouldn't consume that many resources... (You could also profile with Instruments.) – 2013-08-11 08:54:59

Answers


To improve things, we should set up our AVCaptureVideoDataOutput with:

output.minFrameDuration = CMTimeMake(1, 10); 

This specifies a minimum duration for each frame (experiment with this setting to avoid having too many frames waiting in the queue, which can cause memory problems). It is the reciprocal of the maximum frame rate. In this example, the minimum frame duration is 1/10 of a second, so the maximum frame rate is 10 fps; we are telling the capture pipeline that we cannot handle more than 10 frames per second.

Hope that helps!


minFrameDuration has been deprecated; this may work instead:

AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo]; 
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
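
videoMinFrameDuration on AVCaptureConnection was itself deprecated later; on iOS 7 and up the frame duration is configured on the AVCaptureDevice instead. A minimal sketch, assuming `device` is the AVCaptureDevice feeding the session:

// Sketch for iOS 7+: cap the capture rate at 10 fps on the device itself.
NSError *error = nil; 
if ([device lockForConfiguration:&error]) { 
    device.activeVideoMinFrameDuration = CMTimeMake(1, 10); // at most 10 fps 
    [device unlockForConfiguration]; 
} 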