2013-02-26 115 views

iOS - video frame processing optimization

In my project, I need to copy every frame of a video onto a single result image.

Capturing the video frames is not a big deal. It would be something like:

// duration is the movie length in s. 
// frameDuration is 1/fps. (for 24fps, frameDuration = 1/24) 
// player is a MPMoviePlayerController 
for (NSTimeInterval i=0; i < duration; i += frameDuration) { 
    UIImage * image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact]; 

    CGRect destinationRect = [self getDestinationRect:i]; 
    [self drawImage:image inRect:destinationRect fromRect:originRect]; 

    // UI feedback (x is the current frame index, totalFrames = duration/frameDuration) 
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:x/totalFrames] waitUntilDone:NO]; 
} 

The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried this code, which:

  1. creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract a chunk of the image;
  2. draws the chunk into an image context with CGContextDrawImage.
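For reference, a minimal sketch of what such a drawImage:inRect:fromRect: method might look like. This is an assumption on my part, not the asker's actual code: the destination bitmap context ivar `_context` is hypothetical, and the key points are releasing the cropped CGImage and draining an autorelease pool on every frame.

```objectivec
// Sketch only: assumes an ivar `CGContextRef _context` holding the
// destination bitmap context (hypothetical name).
- (void)drawImage:(UIImage *)image inRect:(CGRect)destRect fromRect:(CGRect)srcRect {
    @autoreleasepool {
        // Extract the chunk of the frame we want to copy.
        CGImageRef chunk = CGImageCreateWithImageInRect(image.CGImage, srcRect);
        if (chunk) {
            // Draw the chunk into the destination context, then release it
            // immediately so per-frame temporaries don't pile up in memory.
            CGContextDrawImage(_context, destRect, chunk);
            CGImageRelease(chunk);
        }
    } // pool drains here, freeing autoreleased temporaries each iteration
}
```

Without the per-iteration pool, the autoreleased UIImages produced inside a long loop are only freed when the enclosing pool drains, which alone can look like a leak-free out-of-memory crash.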

But when the video reaches 12-14 s, my iPhone 4S announces its third memory warning and crashes. I profiled the app with the Leaks instrument and found no leaks at all...

I am not very strong with Quartz. Is there a better-optimized way to achieve this?

Answers


Finally, I kept the Quartz part of my code and changed the way I retrieve the images.

Now I use AVFoundation, which is a much faster solution.

// Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties. 
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL 
              options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease]; 
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease]; 
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape) 
AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset]; 

// Retrieving the video properties 
NSTimeInterval duration = CMTimeGetSeconds(asset.duration); 
frameDuration = CMTimeGetSeconds(composition.frameDuration); 
CGSize renderSize = composition.renderSize; 
CGFloat totalFrames = round(duration/frameDuration); 

// Selecting each frame we want to extract : all of them. 
NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)]; 
for (int i=0; i<totalFrames; i++) { 
    NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)]; 
    [times addObject:time]; 
} 

__block int i = 0; 
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){ 
    if (result == AVAssetImageGeneratorSucceeded) { 
     int x = round(CMTimeGetSeconds(requestedTime)/frameDuration); 
     CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height); 
     [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context]; 
    } 
    else 
     NSLog(@"Ouch: %@", error.description); 
    i++; 
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO]; 
    if(i == totalFrames) { 
     [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO]; 
    } 
}; 

// Launching the process... 
generator.requestedTimeToleranceBefore = kCMTimeZero; 
generator.requestedTimeToleranceAfter = kCMTimeZero; 
generator.maximumSize = renderSize; 
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler]; 

Even with long videos it takes time, but it never crashes!


Hi Martin, the way the images are extracted works perfectly, but in the app, if the video is longer than 30 seconds, the app crashes with memory warnings. Do you have another way, or any change to suggest? Thanks – iBhavik 2013-05-14 06:34:15


Hi. It should not crash with long videos. Check your code; maybe you have a leak inside the handler block. You cannot keep all the extracted images in memory, because the device does not have enough memory for that. – Martin 2013-05-14 08:06:28
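One way to follow that advice is to flush each generated frame to disk inside the handler instead of accumulating UIImages. This sketch is my assumption, not code from the answer; `outputDirectory` is a hypothetical NSString with a writable path:

```objectivec
// Sketch: persist each frame immediately so only one image is alive at a time.
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        @autoreleasepool {
            UIImage *frame = [UIImage imageWithCGImage:im];
            // `outputDirectory` is a hypothetical ivar; name the file by time value.
            NSString *path = [outputDirectory stringByAppendingPathComponent:
                [NSString stringWithFormat:@"frame-%lld.png", requestedTime.value]];
            [UIImagePNGRepresentation(frame) writeToFile:path atomically:YES];
        } // the pool drains here, freeing the frame and its PNG data
    }
};
```

Peak memory then stays roughly one decoded frame plus one PNG buffer, regardless of the video's length.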


@iBhavik did you ever find a solution? – Nil 2017-07-06 15:29:22


In addition to Martin's answer, I suggest reducing the size of the images obtained by this call; that is, setting the property generator.maximumSize = CGSizeMake(width, height); to make the images as small as possible, so they don't take up too much memory.