How to convert a CVPixelBufferRef to an OpenCV cv::Mat

Asked: 2013-10-14 10:57:19

Tags: c++ ios objective-c opencv avcapturesession

I want to perform a few operations on a CVPixelBufferRef and come up with a cv::Mat:

  • Crop to a region of interest
  • Scale to a fixed dimension
  • Equalize the histogram
  • Convert to grayscale, 8 bits per pixel (CV_8UC1)

I'm not sure of the most efficient order for these operations; however, I do know that all of them are available on an OpenCV matrix, so I'd like to know how to convert to one.

- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    cv::Mat frame = f(pixelBuffer); // how do I implement f()?
}

2 Answers:

Answer 0 (Score: 12)

I found the answer in some excellent GitHub source code. I've adapted it here for simplicity, so that it also does the grayscale conversion for me.

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

// Set the following dict on AVCaptureVideoDataOutput's videoSettings to get YUV output
// @{ kCVPixelBufferPixelFormatTypeKey : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange }

NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");

// The first plane / channel (at index 0) is the grayscale plane
// See more information about the YUV format here:
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

// Wrap the Y plane in a Mat; pass the plane's stride explicitly, since rows may be padded
cv::Mat mat((int)height, (int)width, CV_8UC1, baseaddress, bytesPerRow);

// Use the mat here

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

I'm thinking the best order would be the following, sketched in code after the list:

  1. Convert to grayscale (since it comes almost for free)
  2. Crop (this should be a fast operation, and it reduces the number of pixels to work with)
  3. Scale down
  4. Equalize the histogram
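
Here is a minimal sketch of that pipeline, assuming the CV_8UC1 mat built above; the ROI rectangle and the 128x128 target size are placeholder values, not from the original post:

// Placeholder crop region and target size -- adjust to your use case
cv::Rect roi(0, 0, mat.cols / 2, mat.rows / 2);
cv::Mat cropped = mat(roi);                      // a view, no pixels copied

cv::Mat scaled;
cv::resize(cropped, scaled, cv::Size(128, 128)); // scale down

cv::Mat equalized;
cv::equalizeHist(scaled, equalized);             // spread out the intensity range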

Answer 1 (Score: 3)

This is what I use. My cv::Mat is configured with the BGR (8UC3) color format.

CVImageBufferRef -> cv::Mat

- (cv::Mat)matFromImageBuffer:(CVImageBufferRef)buffer {

    CVPixelBufferLockBaseAddress(buffer, 0);

    void *address = CVPixelBufferGetBaseAddress(buffer);
    int width = (int)CVPixelBufferGetWidth(buffer);
    int height = (int)CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);

    // Wrap the BGRA pixels without copying; pass the stride explicitly
    cv::Mat wrapped(height, width, CV_8UC4, address, bytesPerRow);

    // Convert to BGR. cvtColor allocates fresh storage for the destination,
    // so the returned Mat stays valid after the buffer is unlocked.
    cv::Mat mat;
    cv::cvtColor(wrapped, mat, CV_BGRA2BGR);

    CVPixelBufferUnlockBaseAddress(buffer, 0);

    return mat;
}
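
For the CV_8UC4 wrap above to match the incoming frames, the capture output has to deliver 32BGRA pixels. A typical configuration (assuming an AVCaptureVideoDataOutput named videoOutput, which is not part of the original answer) looks like:

// Ask the capture output for 32BGRA frames so the Mat layout matches
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };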

cv::Mat -> CVImageBufferRef (CVPixelBufferRef)

- (CVImageBufferRef)getImageBufferFromMat:(cv::Mat)mat {

    cv::cvtColor(mat, mat, CV_BGR2BGRA);

    int width = mat.cols;
    int height = mat.rows;

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             // [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithInt:width], (id)kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:height], (id)kCVPixelBufferHeightKey,
                             nil];

    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (__bridge CFDictionaryRef)options,
                                          &imageBuffer);

    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Copy row by row: the created buffer's rows may be padded beyond width * 4
    for (int row = 0; row < height; row++) {
        memcpy((uint8_t *)base + row * bytesPerRow, mat.ptr(row), width * 4);
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return imageBuffer; // the caller owns this reference
}
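
A hypothetical usage example (the variable names are mine, not the answerer's): the returned buffer carries a +1 retain count, so the caller must release it when done:

CVPixelBufferRef buffer = [self getImageBufferFromMat:bgrMat];
// ... hand the buffer to AVFoundation, an asset writer, etc. ...
CVPixelBufferRelease(buffer);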