Most efficient way to modify the contents of a CMSampleBuffer

Date: 2011-01-11 21:18:55

Tags: ios avfoundation

I want to modify the contents of a CMSampleBuffer and then write it out to a file with AVAssetWriter/AVAssetWriterInput.

The way I'm doing this is by creating a Core Graphics bitmap context and drawing into it, but it is far too slow. Specifically, I need to draw an image into the buffer.
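For reference, the Core Graphics path I'm using now looks roughly like this (a minimal sketch; overlayImage is a placeholder for the CGImageRef I want to composite, and the pixel buffer is assumed to be kCVPixelFormatType_32BGRA):

// Current (slow) approach: wrap the pixel buffer in a CG bitmap context and draw into it.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

// Draw the overlay image into the frame (position/size are placeholders).
CGContextDrawImage(context, CGRectMake(0, 0, 100, 100), overlayImage);

CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);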

Can anyone offer a hint or suggestion on how to do this more efficiently?

I have considered using OpenGL for this: first create texture A from the CMSampleBuffer, then render texture B (created from the image I want to draw) into texture A, then read back the data backing texture A from OpenGL, and finally hand that data to the AVAssetWriter/AVAssetWriterInput. But the documentation says that transferring texture data from the GPU back to the CPU is fairly expensive.

So, any suggestions on how to approach this?

Thanks in advance.

1 answer:

Answer 0 (score: 8):

OpenGL is probably the way to go. However, rendering to an offscreen framebuffer rather than to a texture may be slightly more efficient.

To extract a texture from a sample buffer:

// Note the caller is responsible for calling glDeleteTextures on the return value.
- (GLuint)textureFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    GLuint texture = 0;

    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    // CVPixelBufferGetWidth/Height return size_t; cast for the GL call. This
    // assumes a kCVPixelFormatType_32BGRA buffer with no row padding
    // (bytesPerRow == width * 4); check CVPixelBufferGetBytesPerRow otherwise.
    GLsizei width = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);
    // GL_BGRA as the upload format requires the APPLE_texture_format_BGRA8888
    // extension on OpenGL ES.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    return texture;
}
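As a rough usage sketch (assuming this lives in an AVCaptureVideoDataOutputSampleBufferDelegate, and using the processTexture:width:height: method shown below; the hand-off to the writer is elided):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    GLuint texture = [self textureFromSampleBuffer:sampleBuffer];

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    int width = (int)CVPixelBufferGetWidth(pixelBuffer);
    int height = (int)CVPixelBufferGetHeight(pixelBuffer);

    CGImageRef processed = [self processTexture:texture width:width height:height];
    // ... convert `processed` back into a pixel buffer and append it via the
    // AVAssetWriterInput (e.g. through an AVAssetWriterInputPixelBufferAdaptor) ...

    glDeleteTextures(1, &texture);
}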

To process the texture with OpenGL, you could do something like the following:

// This function exists to free the malloced data when the CGDataProviderRef is
// eventually freed.
void dataProviderFreeData(void *info, const void *data, size_t size){
    free((void *)data);
}

// Returns an autoreleased CGImageRef.
- (CGImageRef)processTexture:(GLuint)texture width:(int)width height:(int)height {
    CGImageRef newImage = NULL;

    // Set up framebuffer and renderbuffer.
    GLuint framebuffer;
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    GLuint colorRenderbuffer;
    glGenRenderbuffers(1, &colorRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failed to create OpenGL frame buffer: %x", status);
    } else {
        glViewport(0, 0, width, height);
        glClearColor(0.0,0.0,0.0,1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // Do whatever is necessary to actually draw the texture to the framebuffer
        [self renderTextureToCurrentFrameBuffer:texture];

        // Read the pixels out of the framebuffer
        void *data = malloc(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

        // Convert the data to a CGImageRef. Note that CGDataProviderRef takes
        // ownership of our malloced data buffer, and the CGImageRef internally
        // retains the CGDataProviderRef. Hence the callback above, to free the data
        // buffer when the provider is finally released.
        CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, width * height * 4, dataProviderFreeData);
        CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
        newImage = CGImageCreate(width, height, 8, 32, width*4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, dataProvider, NULL, true, kCGRenderingIntentDefault);
        CFRelease(dataProvider);
        CGColorSpaceRelease(colorspace);

        // Autorelease the CGImageRef
        newImage = (CGImageRef)[NSMakeCollectable(newImage) autorelease];
    }

    // Clean up the framebuffer and renderbuffer.
    glDeleteRenderbuffers(1, &colorRenderbuffer);
    glDeleteFramebuffers(1, &framebuffer);

    return newImage;
}
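The renderTextureToCurrentFrameBuffer: call above is left up to you. A minimal sketch, assuming an OpenGL ES 2.0 context and that a trivial pass-through shader program is already compiled and made current with glUseProgram (positionAttrib and texCoordAttrib are hypothetical attribute locations fetched earlier with glGetAttribLocation), might look like this:

// Draws the texture as a quad covering the whole currently bound framebuffer.
- (void)renderTextureToCurrentFrameBuffer:(GLuint)texture {
    static const GLfloat vertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };
    static const GLfloat texCoords[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);

    glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, vertices);
    glEnableVertexAttribArray(positionAttrib);
    glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
    glEnableVertexAttribArray(texCoordAttrib);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}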