iPhone RGBA to ARGB

2010-10-16

I'm using glReadPixels to grab screenshots of my OpenGL scene and then turning them into a video with AVAssetWriter on iOS 4. My problem is that I need to pass the alpha channel through to the video, which only accepts kCVPixelFormatType_32ARGB, whereas glReadPixels retrieves RGBA. So basically I need a way to convert my RGBA to ARGB, in other words put the alpha byte first.

int depth = 4; 
unsigned char buffer[width * height * depth]; 
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer); 

CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, width*height*depth, NULL); 

CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast; 

CGImageRef image = CGImageCreate(width, height, 8, 32, width*depth, CGColorSpaceCreateDeviceRGB(), bitmapInfo, ref, NULL, true, kCGRenderingIntentDefault); 

UIWindow* parentWindow = [self window]; 

NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil]; 

CVPixelBufferRef pxbuffer = NULL; 
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer); 

NSParameterAssert(status == kCVReturnSuccess); 
NSParameterAssert(pxbuffer != NULL); 

CVPixelBufferLockBaseAddress(pxbuffer, 0); 
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer); 
NSParameterAssert(pxdata != NULL); 

CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); 
CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, depth*width, rgbColorSpace, kCGImageAlphaPremultipliedFirst); 

NSParameterAssert(context); 

CGContextConcatCTM(context, parentWindow.transform); 
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image); 

CGColorSpaceRelease(rgbColorSpace); 
CGContextRelease(context); 

CVPixelBufferUnlockBaseAddress(pxbuffer, 0); 

return pxbuffer; // chuck pixel buffer into AVAssetWriter 

Thought I'd post the whole thing, since it might help someone else.

Cheers

Answers


Note: I'm assuming 8 bits per channel here. If that's not the case, adjust accordingly.

To move the alpha bits to the front you need to perform a rotation, which is most easily expressed with bit shifting.

In this case you want to shift the RGB bits right by 8 and the A bits left by 24, then combine the two values with a bitwise OR, so it becomes argb = (rgba >> 8) | (rgba << 24). Note that this treats each pixel as a single 32-bit value; a corrected loop is sketched after the exchange below.

0

Right, it's 8 bits per channel, so something like this:

int depth = 4; 
int width = 320; 
int height = 480; 

unsigned char buffer[width * height * depth]; 

glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer); 

for(int i = 0; i < width; i++){ 
    for(int j = 0; j < height; j++){ 
        buffer[i*j] = (buffer[i*j] >> 8) | (buffer[i*j] << 24); 
    } 
} 

I can't seem to get it to work, though.


The problem here is that the buffer has the type unsigned char, but it needs a 4-byte element type such as unsigned int or UInt32 for those shifts to act on whole pixels. (The index math is also off: i*j hits the same positions repeatedly instead of visiting each pixel once.) – 2011-12-13 21:10:16
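
For what it's worth, here is one way that loop could be written so it operates on whole pixels. This is only a sketch, assuming the tightly packed width * height * 4 buffer from the snippet above; rotating byte-by-byte also sidesteps the endianness question that 32-bit shifts raise on a little-endian CPU:

for (int i = 0; i < width * height; i++) { 
    unsigned char *p = buffer + i * 4; // one RGBA pixel 
    unsigned char alpha = p[3]; 
    p[3] = p[2]; // B moves to the last byte 
    p[2] = p[1]; // G 
    p[1] = p[0]; // R 
    p[0] = alpha; // alpha moves to the front 
} 

After the loop the bytes sit in memory in A, R, G, B order, which is what kCVPixelFormatType_32ARGB expects.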

+ (UIImage *) createARGBImageFromRGBAImage: (UIImage *)image { 
    CGSize dimensions = [image size]; 

    NSUInteger bytesPerPixel = 4; 
    NSUInteger bytesPerRow = bytesPerPixel * dimensions.width; 
    NSUInteger bitsPerComponent = 8; 

    unsigned char *rgba = malloc(bytesPerPixel * dimensions.width * dimensions.height); 
    unsigned char *argb = malloc(bytesPerPixel * dimensions.width * dimensions.height); 

    CGColorSpaceRef colorSpace = NULL; 
    CGContextRef context = NULL; 

    // Render the source image into a raw RGBA byte buffer 
    colorSpace = CGColorSpaceCreateDeviceRGB(); 
    context = CGBitmapContextCreate(rgba, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big 
    CGContextDrawImage(context, CGRectMake(0, 0, dimensions.width, dimensions.height), [image CGImage]); 
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 

    // Rotate each pixel's bytes from RGBA to ARGB 
    for (int x = 0; x < dimensions.width; x++) { 
        for (int y = 0; y < dimensions.height; y++) { 
            NSUInteger offset = ((dimensions.width * y) + x) * bytesPerPixel; 
            argb[offset + 0] = rgba[offset + 3]; 
            argb[offset + 1] = rgba[offset + 0]; 
            argb[offset + 2] = rgba[offset + 1]; 
            argb[offset + 3] = rgba[offset + 2]; 
        } 
    } 

    // Wrap the ARGB buffer in a new UIImage 
    colorSpace = CGColorSpaceCreateDeviceRGB(); 
    context = CGBitmapContextCreate(argb, dimensions.width, dimensions.height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrderDefault); // kCGBitmapByteOrder32Big 
    CGImageRef imageRef = CGBitmapContextCreateImage(context); 
    image = [UIImage imageWithCGImage: imageRef]; 
    CGImageRelease(imageRef); 
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace); 

    free(rgba); 
    free(argb); 

    return image; 
} 

I'm fairly sure the alpha values can be ignored. If so, you can just do a memcpy into the pixel buffer's array, shifted by one byte:

void *buffer = malloc(width*height*4); 
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer); 
… 
memcpy((char *)pxdata + 1, buffer, width*height*4 - 1); 
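
For what it's worth, note what the shifted copy actually produces: each output pixel becomes [previous pixel's alpha, R, G, B], and the very first alpha byte is left untouched, so this shortcut only looks right when the alpha channel is constant across the frame, for example a fully opaque scene.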

Why can the alpha values be ignored? Will that always be true? – kevlar 2012-02-17 09:11:59


Even better, don't use ARGB at all: send your AVAssetWriter BGRA frames for the encoded video. As I describe in this answer, doing so lets you encode 640x480 video at 30 FPS on an iPhone 4, and up to 20 FPS for 720p video. An iPhone 4S can go all the way up to 1080p video at 30 FPS using this.

Also, you'll want to make sure you use a pixel buffer pool instead of recreating a pixel buffer every time. Copying the code from that answer, you configure the AVAssetWriter like this:

NSError *error = nil; 

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error]; 
if (error != nil) 
{ 
    NSLog(@"Error: %@", error); 
} 


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init]; 
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey]; 
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey]; 
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey]; 


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings]; 
assetWriterVideoInput.expectsMediaDataInRealTime = YES; 

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA. 
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, 
                 [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey, 
                 [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey, 
                 nil]; 

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary]; 

[assetWriter addInput:assetWriterVideoInput]; 

and then grab each rendered frame with glReadPixels() using this code:

CVPixelBufferRef pixel_buffer = NULL; 

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer); 
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess)) 
{ 
    return; 
} 
else 
{ 
    CVPixelBufferLockBaseAddress(pixel_buffer, 0); 
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer); 
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData); 
} 

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording 
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120); 

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{ 
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value); 
} 
else 
{ 
//  NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value); 
} 
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0); 

CVPixelBufferRelease(pixel_buffer); 

The colors need to be swizzled before glReadPixels(), so I've employed an offscreen FBO and this fragment shader to do that:

varying highp vec2 textureCoordinate; 

uniform sampler2D inputImageTexture; 

void main() 
{ 
    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra; 
} 

However, on iOS 5.0 there is an even faster route for grabbing OpenGL ES content than glReadPixels(), which I describe in this answer. The nice thing about that process is that the textures already store their content in BGRA pixel format, so you can feed the enclosing pixel buffers straight to an AVAssetWriter without any color conversion and still see great encoding speeds.
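
For reference, that faster route is built on the iOS 5.0 texture caches. The following is only a rough sketch of the idea, not drop-in code: glContext and videoSize are placeholder names, error handling is omitted, and the exact cast for the context argument has varied across SDK versions. The key requirement is that the pixel buffer be IOSurface-backed, otherwise the cache will not accept it:

// Create a texture cache bound to the EAGLContext doing the rendering 
CVOpenGLESTextureCacheRef textureCache = NULL; 
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge CVEAGLContext)glContext, NULL, &textureCache); 

// The render-target pixel buffer must be IOSurface-backed, in BGRA 
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0, 
                                           &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); 
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1, 
                                                         &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); 
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty); 

CVPixelBufferRef renderTarget = NULL; 
CVPixelBufferCreate(kCFAllocatorDefault, videoSize.width, videoSize.height, 
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget); 

// Wrap the pixel buffer in a texture and attach it to the FBO; 
// rendering into the FBO then fills the pixel buffer directly 
CVOpenGLESTextureRef renderTexture = NULL; 
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget, NULL, 
                                             GL_TEXTURE_2D, GL_RGBA, videoSize.width, videoSize.height, 
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture); 

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture)); 
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 
                       CVOpenGLESTextureGetName(renderTexture), 0); 
// ... render the scene, then append renderTarget to the asset writer input ... 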


I realize this question has already been answered, but I wanted to make sure people know about vImage, part of the Accelerate framework, which is available on iOS and OS X. My understanding is that Core Graphics uses vImage to perform CPU-bound vector operations on bitmaps.

The specific API for converting between ARGB and RGBA is vImagePermuteChannels_ARGB8888. There are also APIs to convert RGB to ARGB/XRGB, to flip an image, to overwrite a channel, and much more. It's something of a hidden gem!
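
For the original RGBA-to-ARGB question, a call might look like the sketch below. The rgba, argb, width, and height names are assumed from the earlier snippets; the permute map indexes the source channels, so {3, 0, 1, 2} writes A first, then R, G, B:

#include <Accelerate/Accelerate.h> 

vImage_Buffer src  = { .data = rgba, .height = height, .width = width, .rowBytes = width * 4 }; 
vImage_Buffer dest = { .data = argb, .height = height, .width = width, .rowBytes = width * 4 }; 
const uint8_t permuteMap[4] = { 3, 0, 1, 2 }; // output channel i is taken from input channel permuteMap[i] 
vImage_Error err = vImagePermuteChannels_ARGB8888(&src, &dest, permuteMap, kvImageNoFlags); 
if (err != kvImageNoError) { 
    // handle the error; the call validates buffer sizes and flags 
} 

Despite the _ARGB8888 suffix, the function is channel-order agnostic: it simply moves each source byte to the position the map dictates, so the same call handles any permutation of 4-channel, 8-bit data.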

Update: Brad Larson wrote a great answer to essentially the same question here.