2011-05-17 45 views
7

Why is my QTKit-based image encoding application so slow?

In the Cocoa application I am currently writing, I take snapshot images (NSImage objects) rendered by a Quartz Composer composition, and I want to encode them into a QTMovie at 720×480, 25 fps, with the H264 codec, using the addImage: method. Here is the corresponding code:

qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720,480) colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size 


imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: @"avc1", // use the H264 codec 
       QTAddImageCodecType, nil]; 

qtMovie = [[QTMovie alloc] initToWritableFile: outputVideoFile error:NULL]; // initialize the output QT movie object 

long fps = 25; 
frameNum = 0; 

NSTimeInterval renderingTime = 0; 
NSTimeInterval frameInc = (1./fps); 
NSTimeInterval myMovieDuration = 70; 
NSImage * myImage; 
while (renderingTime <= myMovieDuration){ 
    if(![qRenderer renderAtTime: renderingTime arguments:NULL]) 
     NSLog(@"Rendering failed at time %.3fs", renderingTime); 
    myImage = [qRenderer snapshotImage]; 
    [qtMovie addImage:myImage forDuration: QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs]; 
    [myImage release]; 
    frameNum ++; 
    renderingTime = frameNum * frameInc; 
} 
[qtMovie updateMovieFile]; 
[qRenderer release]; 
[qtMovie release]; 
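
One side note on the attributes dictionary above: addImage:forDuration:withAttributes: also accepts a quality hint, which influences how much work the codec does per frame. A minimal sketch of an attributes dictionary that adds one, assuming QTKit's QTAddImageCodecQuality key and the codecHighQuality constant from QuickTime's ImageCompression.h (an illustration, not part of the original question):

imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys: 
       @"avc1", QTAddImageCodecType, // use the H264 codec 
       [NSNumber numberWithLong:codecHighQuality], QTAddImageCodecQuality, // codec quality hint 
       nil]; 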

The code works, but my application is not able to do this in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in H264 in real time, at even higher quality, on the same computer.

So why? What is the problem here? Is this a hardware-management issue (multi-core threading, GPU, ...) or am I missing something? Let me preface this by saying that I am new (2 weeks of practice) to the world of Apple development, including Objective-C, Cocoa, Xcode, and the QuickTime and Quartz Composer libraries.

Thanks for your help.

+0

Are you sure you want 720x480 at 25 fps? Shouldn't it be either 29.97 fps at 720×480, or 25 fps at 720×576? I doubt it will solve your speed problem, but it looks like an odd format. – user1118321 2012-01-05 01:06:26
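
If NTSC timing is indeed the goal, the exact rate is 30000/1001 frames per second rather than a rounded 29.97, and QTKit can represent the frame duration exactly as a rational QTTime instead of a floating-point NSTimeInterval. A minimal sketch (variable names are illustrative):

QTTime ntscFrameDuration = QTMakeTime(1001, 30000); // exactly 30000/1001 fps (NTSC) 
QTTime palFrameDuration = QTMakeTime(1, 25);   // exactly 25 fps (PAL) 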

Answer

5

AVFoundation is a more efficient way to render a QuartzComposer animation into an H.264 video stream.


#import <AVFoundation/AVFoundation.h> 
#import <Quartz/Quartz.h> // QCRenderer, QCComposition 
#import <unistd.h>  // unlink() 

size_t width = 640; 
size_t height = 480; 

const char *outputFile = "/tmp/Arabesque.mp4"; 

QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"]; 
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height) 
                 colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB) composition:composition]; 

unlink(outputFile); 
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)] fileType:AVFileTypeMPEG4 error:NULL]; 

NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : @(width), AVVideoHeightKey : @(height) }; 
AVAssetWriterInput* writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings]; 

[videoWriter addInput:writerInput]; 
[writerInput release]; 

AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput sourcePixelBufferAttributes:NULL]; 

int framesPerSecond = 30; 
int totalDuration = 30; 
int totalFrameCount = framesPerSecond * totalDuration; 

[videoWriter startWriting]; 
[videoWriter startSessionAtSourceTime:kCMTimeZero]; 

__block long frameNumber = 0; 

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL); 

NSLog(@"Starting."); 
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{ 
    while ([writerInput isReadyForMoreMediaData]) { 
     NSTimeInterval frameTime = (float)frameNumber/framesPerSecond; 
     if (![renderer renderAtTime:frameTime arguments:NULL]) { 
      NSLog(@"Rendering failed at time %.3fs", frameTime); 
      break; 
     } 

     CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"]; 
     [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)]; 
     CFRelease(frame); 

     frameNumber++; 
     if (frameNumber >= totalFrameCount) { 
      [writerInput markAsFinished]; 
      [videoWriter finishWriting]; 
      [videoWriter release]; 
      [renderer release]; 
      NSLog(@"Rendered %ld frames.", frameNumber); 
      break; 
     } 

    } 
}]; 
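
Note that requestMediaDataWhenReadyOnQueue:usingBlock: returns immediately and the block runs asynchronously on the work queue; if you run this in a command-line tool you have to keep the process alive until writing has finished. A minimal sketch using a semaphore (the doneSemaphore variable is an assumption, not part of the original answer):

// Create before calling -requestMediaDataWhenReadyOnQueue:usingBlock: 
dispatch_semaphore_t doneSemaphore = dispatch_semaphore_create(0); 

// Inside the block, signal it right after [videoWriter finishWriting]: 
//  dispatch_semaphore_signal(doneSemaphore); 

// Then, after requesting the media data, block until encoding is done: 
dispatch_semaphore_wait(doneSemaphore, DISPATCH_TIME_FOREVER); 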

In my testing this was around twice as fast as the QTKit code you posted. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, the remaining bottlenecks seem to be the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
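
If you want to see where the time goes with your own composition, a quick check before reaching for Instruments is to time the stages of the loop separately. A minimal sketch of the while loop body with timing added, assuming the renderer, pixelBufferAdaptor, frameTime, frameNumber, and framesPerSecond variables from the code above:

CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent(); 
if (![renderer renderAtTime:frameTime arguments:NULL]) 
    break;          // composition rendering 
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent(); 
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"]; 
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent(); // GPU -> pixel buffer readback 
[pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)]; 
CFAbsoluteTime t3 = CFAbsoluteTimeGetCurrent(); // hand-off to the encoder 
CFRelease(frame); 
NSLog(@"render %.1f ms, readback %.1f ms, append %.1f ms", 
   (t1 - t0) * 1000, (t2 - t1) * 1000, (t3 - t2) * 1000); 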

You could potentially optimize this further by requesting the snapshots that QCRenderer provides as a CVOpenGLBufferRef, which might keep the frame data on the GPU rather than reading it back to hand it to the encoder. I didn't look too far into that, though.
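
For what it's worth, createSnapshotImageOfType: also accepts @"CVOpenGLBuffer" as a type name, so obtaining the GPU-side snapshot is straightforward; the open question is how to feed it to AVAssetWriterInputPixelBufferAdaptor, which only takes CVPixelBuffer objects. A minimal, untested sketch of just the snapshot part:

// Request the snapshot as a GPU-backed buffer instead of a CVPixelBuffer. 
// createSnapshotImageOfType: follows the Create rule, so the caller releases it. 
CVOpenGLBufferRef glBuffer = (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"]; 
if (glBuffer) { 
    // Encoding this frame would still require getting its pixels into a 
    // CVPixelBuffer, since the pixel buffer adaptor cannot accept it directly. 
    CVOpenGLBufferRelease(glBuffer); 
} 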
