Create CVPixelBuffer from YUV with IOSurface backing

Posted: 2015-08-05 04:51:01

Tags: ios objective-c video opengl-es glkit

So I am getting raw YUV data in three separate arrays via a network callback (VoIP app). From what I understand, you cannot create IOSurface-backed pixel buffers with CVPixelBufferCreateWithPlanarBytes, according to here:

    Important: You cannot use CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() with kCVPixelBufferIOSurfacePropertiesKey. Calling CVPixelBufferCreateWithBytes() or CVPixelBufferCreateWithPlanarBytes() will result in CVPixelBuffers that are not IOSurface-backed.

So you have to create the buffer with CVPixelBufferCreate, but how do you transfer the data from the callback into the CVPixelBufferRef that you create?

- (void)videoCallBackWithYPlane:(uint8_t *)yPlane
                         uPlane:(uint8_t *)uPlane
                         vPlane:(uint8_t *)vPlane
                          width:(size_t)width
                         height:(size_t)height
                        yStride:(size_t)yStride
                        uStride:(size_t)uStride
                        vStride:(size_t)vStride
{
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    // ... how do I get the YUV data into pixelBuffer from here?
}

I am not sure what to do after this. Eventually I want to turn this into a CIImage which I can then render with my GLKView. How do people put the data into the buffer after creating it?
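For context, the rendering side I have in mind looks roughly like the following sketch (assuming the GLKView and a CIContext share one EAGLContext; the variable names are placeholders):

EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext];
GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds context:eaglContext];

// Per frame, once a CIImage has been made from the pixel buffer:
[glkView bindDrawable];
[ciContext drawImage:coreImage
              inRect:CGRectMake(0, 0, glkView.drawableWidth, glkView.drawableHeight)
            fromRect:[coreImage extent]];
[glkView display];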

3 Answers:

Answer 0 (score: 8)

I figured it out, and it was fairly trivial. Here is the full code below. The only issue is that I get BSXPCMessage received error for message: Connection interrupted, and it takes a while for the video to show.

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width,
                                      height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    DDLogWarn(@"Unable to create cvpixelbuffer %d", result);
    return;
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Copy the full-size Y plane straight into plane 0.
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);

// uvPlane holds the interleaved CbCr data (see the note below);
// numberOfElementsForChroma is its size in bytes.
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // success!
CVPixelBufferRelease(pixelBuffer);

I forgot to include the code that interleaves the two U and V planes into uvPlane, but that shouldn't be too bad.
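For completeness, a minimal sketch of that missing interleave step (assuming uPlane and vPlane are the quarter-size chroma planes from the callback) could look like this:

// Sketch: build the interleaved CbCr plane that the NV12 layout expects.
size_t chromaSize = width * height / 4;              // elements per chroma plane
size_t numberOfElementsForChroma = chromaSize * 2;   // U and V interleaved
uint8_t *uvPlane = malloc(numberOfElementsForChroma);
for (size_t i = 0; i < chromaSize; i++) {
    uvPlane[2 * i]     = uPlane[i];  // Cb
    uvPlane[2 * i + 1] = vPlane[i];  // Cr
}
// ... memcpy uvPlane into plane 1 as above, then free(uvPlane).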

Answer 1 (score: 3)

I had a similar problem, and here is what I came up with in Swift 2.0, using information from other questions and the linked answers.

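In outline it is the same approach as the accepted answer, just with nil attributes; a minimal Swift 2-style sketch (the exact field names on yuvFrame are assumptions, see the note below):

func pixelBufferFromYUVFrame(yuvFrame: YUVFrame) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer? = nil
    // Attributes are nil here; passing the IOSurface properties made the
    // create call fail with error -6683 (see the note below).
    let result = CVPixelBufferCreate(kCFAllocatorDefault,
                                     yuvFrame.width,
                                     yuvFrame.height,
                                     kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                     nil,
                                     &pixelBuffer)
    if result != kCVReturnSuccess || pixelBuffer == nil {
        return nil
    }

    CVPixelBufferLockBaseAddress(pixelBuffer!, 0)

    // Copy the full-size Y plane into plane 0.
    let yDest = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 0)
    memcpy(yDest, yuvFrame.yBuffer, yuvFrame.width * yuvFrame.height)

    // Interleave the two quarter-size chroma planes into the CbCr plane.
    let uvDest = UnsafeMutablePointer<UInt8>(CVPixelBufferGetBaseAddressOfPlane(pixelBuffer!, 1))
    let chromaCount = yuvFrame.width * yuvFrame.height / 4
    for i in 0..<chromaCount {
        uvDest[2 * i]     = yuvFrame.uBuffer[i]
        uvDest[2 * i + 1] = yuvFrame.vBuffer[i]
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer!, 0)
    return pixelBuffer
}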

Note: yuvFrame is a struct with y, u, and v plane buffers, as well as width and height. Also, I have the CFDictionary? attributes parameter in CVPixelBufferCreate(...) set to nil. If I give it the IOSurface properties, it fails and complains that it is not IOSurface-backed, with error -6683.

Visit these links for more info. This one is about the UV interleaving: How to convert from YUV to CIImage for iOS

And a related question: CVOpenGLESTextureCacheCreateTextureFromImage returns error 6683

Answer 2 (score: 1)

Here is the complete conversion in Obj-C. And to all those geniuses saying "it's fairly trivial": don't patronize anyone! If you are here to help, then help; if you are here to show off how "smart" you are, go do that somewhere else. Here is a link to a detailed explanation of YUV processing: www.glebsoft.com

/// Method to convert the three YUV planar buffers into an NV12 pixel buffer,
/// in order to feed it to FaceUnity methods.
/// The caller is responsible for releasing the returned buffer.
- (CVPixelBufferRef)pixelBufferFromYUV:(uint8_t *)yBuffer
                               uBuffer:(uint8_t *)uBuffer
                               vBuffer:(uint8_t *)vBuffer
                                 width:(int)width
                                height:(int)height {
    NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                          width,
                                          height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                          (__bridge CFDictionaryRef)(pixelAttributes),
                                          &pixelBuffer);
    if (result != kCVReturnSuccess) {
        return NULL;
    }

    // numberOfElementsForChroma is width*height/4 per plane, because both the
    // U plane and the V plane are a quarter of the size of the Y plane.
    size_t uPlaneSize = width * height / 4;
    size_t vPlaneSize = width * height / 4;
    size_t numberOfElementsForChroma = uPlaneSize + vPlaneSize;

    // For simplicity and speed, build the combined UV plane up front.
    // NV12 expects the chroma samples interleaved (Cb, Cr, Cb, Cr, ...).
    uint8_t *uvPlane = calloc(numberOfElementsForChroma, sizeof(uint8_t));
    for (size_t i = 0; i < uPlaneSize; i++) {
        uvPlane[2 * i]     = uBuffer[i];
        uvPlane[2 * i + 1] = vBuffer[i];
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    memcpy(yDestPlane, yBuffer, width * height);

    uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    free(uvPlane);
    return pixelBuffer;
}
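A quick usage sketch (the surrounding names are assumed), showing the ownership convention: the caller owns the returned buffer and must release it once done.

CVPixelBufferRef pixelBuffer = [self pixelBufferFromYUV:yBuffer
                                                uBuffer:uBuffer
                                                vBuffer:vBuffer
                                                  width:width
                                                 height:height];
if (pixelBuffer != NULL) {
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // ... hand the image off for rendering ...
    CVPixelBufferRelease(pixelBuffer);
}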