
iOS4: AVFoundation: How to get the raw footage from a movie

How can I access the raw footage of a movie shot with my camera, so that I can edit or transform the footage (for example: make it black and white)?

I know you can load a MOV as an AVAsset, make a composition out of different AVAssets, and then export that to a new movie, but how do I get access to the frames so that I can edit them?

Answers


I don't know the entire process, but I do know some of it:

You probably need to use the AV Foundation framework, and perhaps Core Graphics, to process the individual frames. You would likely use an AVAssetWriter:

AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path] 
                fileType:AVFileTypeQuickTimeMovie 
                 error:&error]; 

You can maintain a pixel buffer using AVFoundation or Core Video, and then append it to the writer like this (Core Video in this example):

[pixelBufferAdaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero]; 
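That adaptor has to be created from an AVAssetWriterInput first. A minimal setup sketch (the variable names and the 640x480 settings here are illustrative, not from the original answer):

// Hypothetical setup for the pixelBufferAdaptor used above 
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: 
         AVVideoCodecH264, AVVideoCodecKey, 
         [NSNumber numberWithInt:640], AVVideoWidthKey, 
         [NSNumber numberWithInt:480], AVVideoHeightKey, 
         nil]; 
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo 
          outputSettings:videoSettings]; 
// Pixel format of the CVPixelBuffers you plan to append 
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObject: 
         [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] 
         forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey]; 
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = 
 [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput 
          sourcePixelBufferAttributes:bufferAttributes]; 
[videoWriter addInput:writerInput]; 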

To get at the frames, AVAssetImageGenerator isn't really sufficient.
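(AVAssetImageGenerator is fine for pulling out an individual still, just not for touching every frame. A quick sketch, assuming inputAsset is your AVAsset:)

// Grab one still frame; good for thumbnails, not per-frame editing 
NSError *error = nil; 
AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:inputAsset]; 
CGImageRef frame = [generator copyCGImageAtTime:CMTimeMakeWithSeconds(1.0, 600) 
           actualTime:NULL 
            error:&error]; 
// ... use the image, then release it 
CGImageRelease(frame); 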

Alternatively, there may be filters or instructions that can be used with AVMutableVideoComposition, AVMutableComposition, or AVAssetExportSession.
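For the export route, a basic AVAssetExportSession pass looks roughly like this (a sketch; inputAsset and outputURL are assumed to exist, and a videoComposition would be where per-frame instructions attach):

AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:inputAsset 
              presetName:AVAssetExportPresetMediumQuality]; 
exporter.outputURL = outputURL; 
exporter.outputFileType = AVFileTypeQuickTimeMovie; 
// exporter.videoComposition = ...; // composition instructions would go here 
[exporter exportAsynchronouslyWithCompletionHandler:^{ 
 if (exporter.status != AVAssetExportSessionStatusCompleted) { 
  NSLog(@"Export failed: %@", exporter.error); 
 } 
}]; 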

If you've made any progress since asking this in August, please post it, I'm interested!


I'm trying AVAssetWriter now, but it has only been available since 4.1 – 2010-10-07 14:20:17


You need to read the video frames from the input asset, create a CGContextRef for each frame to do your drawing, and then write the frames out to a new video file. The basic steps are below. I've left out all of the fill-in code and the error handling, so that the main steps are easier to read.

// AVURLAsset to read input movie (i.e. mov recorded to local storage) 
NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey]; 
AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:inputURL options:inputOptions]; 

// Load the input asset tracks information 
[inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{ 

    // Check status of "tracks", make sure they were loaded  
    AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error]; 
    if (tracksStatus != AVKeyValueStatusLoaded) { 
     // failed to load 
     return; 
    } 

    // Fetch length of input video; might be handy 
    NSTimeInterval videoDuration = CMTimeGetSeconds([inputAsset duration]); 
    // Fetch dimensions of input video 
    CGSize videoSize = [inputAsset naturalSize]; 


    /* Prepare output asset writer */ 
    self.assetWriter = [[[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&error] autorelease]; 
    NSParameterAssert(assetWriter); 
    assetWriter.shouldOptimizeForNetworkUse = NO; 


    // Video output 
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: 
         AVVideoCodecH264, AVVideoCodecKey, 
         [NSNumber numberWithInt:videoSize.width], AVVideoWidthKey, 
         [NSNumber numberWithInt:videoSize.height], AVVideoHeightKey, 
         nil]; 
    self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo 
          outputSettings:videoSettings]; 
    NSParameterAssert(assetWriterVideoInput); 
    NSParameterAssert([assetWriter canAddInput:assetWriterVideoInput]); 
    [assetWriter addInput:assetWriterVideoInput]; 


    // Start writing 
    CMTime presentationTime = kCMTimeZero; 

    [assetWriter startWriting]; 
    [assetWriter startSessionAtSourceTime:presentationTime]; 


    /* Read video samples from input asset video track */ 
    self.reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error]; 

    NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary]; 
    [outputSettings setObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (NSString*)kCVPixelBufferPixelFormatTypeKey]; 
    self.readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] 
         outputSettings:outputSettings]; 


    // Assign the tracks to the reader and start to read 
    [reader addOutput:readerVideoTrackOutput]; 
    if ([reader startReading] == NO) { 
     // Handle error 
    } 


    dispatch_queue_t dispatch_queue = dispatch_get_main_queue(); 

    [assetWriterVideoInput requestMediaDataWhenReadyOnQueue:dispatch_queue usingBlock:^{ 
     CMTime presentationTime = kCMTimeZero; 

     while ([assetWriterVideoInput isReadyForMoreMediaData]) { 
      CMSampleBufferRef sample = [readerVideoTrackOutput copyNextSampleBuffer]; 
      if (sample) { 
       presentationTime = CMSampleBufferGetPresentationTimeStamp(sample); 

       /* Composite over video frame */ 

       CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sample); 

       // Lock the image buffer 
       CVPixelBufferLockBaseAddress(imageBuffer,0); 

       // Get information about the image 
       uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer); 
       size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
       size_t width = CVPixelBufferGetWidth(imageBuffer); 
       size_t height = CVPixelBufferGetHeight(imageBuffer); 

       // Create a CGImageRef from the CVImageBufferRef 
       CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 
       CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 

       /*** Draw into context ref to draw over video frame ***/ 
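       /* For example (not in the original answer): a naive in-place 
        grayscale pass over the BGRA pixels, which would handle the 
        black-and-white conversion the question asks about. */ 
       for (size_t y = 0; y < height; y++) { 
        uint8_t *row = baseAddress + y * bytesPerRow; 
        for (size_t x = 0; x < width; x++) { 
         uint8_t *px = row + (x * 4); // BGRA byte order 
         uint8_t gray = (uint8_t)(0.114f * px[0] + 0.587f * px[1] + 0.299f * px[2]); 
         px[0] = px[1] = px[2] = gray; // leave alpha (px[3]) untouched 
        } 
       } 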

       // We unlock the image buffer 
       CVPixelBufferUnlockBaseAddress(imageBuffer,0); 

       // We release some components 
       CGContextRelease(newContext); 
       CGColorSpaceRelease(colorSpace); 

       /* End composite */ 

       [assetWriterVideoInput appendSampleBuffer:sample]; 
       CFRelease(sample); 

      } 
      else { 
       [assetWriterVideoInput markAsFinished]; 

       /* Close output */ 

       [assetWriter endSessionAtSourceTime:presentationTime]; 
       if (![assetWriter finishWriting]) { 
        NSLog(@"[assetWriter finishWriting] failed, status=%@ error=%@", assetWriter.status, assetWriter.error); 
       } 

      } 

     } 
    }]; 

}]; 

Has this code ever actually run? I ask because I've never been able to append sample buffers that I created myself; I always end up having to use AVAssetWriterInputPixelBufferAdaptor. The only real difference I can see is that your sample buffers come from a pre-existing asset. Maybe I'm misconfiguring my sample buffers. Or maybe this code just doesn't work. – 2011-02-22 11:00:12


Rhythmic Fistman: Yes, this code comes from a working application. If you're creating the sample buffers from scratch, my guess is that you're misconfiguring them; that's not something I've tried to do. – 2011-02-23 00:19:10