Getting the raw frames of a recorded video

Time: 2014-03-24 05:42:25

Tags: ios image-processing video-processing ca

I am new to Objective-C and iOS development. I want to record video through code and, while recording is running, process every frame as raw data. How can I achieve this? Please, can anyone help me? Thanks in advance. Here is my code so far:

- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setupCaptureSession];

}

The viewDidAppear method:

-(void)viewDidAppear:(BOOL)animated
{
    if (!_bpickeropen)
    {
       _bpickeropen = true;
        _picker = [[UIImagePickerController alloc] init];
        _picker.delegate = self;
        // Check that the camera can capture movies.
        NSArray *sourceTypes = [UIImagePickerController availableMediaTypesForSourceType:UIImagePickerControllerSourceTypeCamera];
        if (![sourceTypes containsObject:(NSString *)kUTTypeMovie ])
        {
            NSLog(@"device not supported");
            return;
        }

        _picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        _picker.mediaTypes = [NSArray arrayWithObjects:(NSString *)kUTTypeMovie,nil];//,(NSString *) kUTTypeImage
        _picker.videoQuality = UIImagePickerControllerQualityTypeHigh;
        [self presentViewController:_picker animated:YES completion:nil];
    }



}

// Delegate routine that is called when a sample buffer is written

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{

    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);


    CVPixelBufferLockBaseAddress(cameraFrame, 0);

    GLubyte *rawImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);

    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

My problems: 1. Here I only get the raw bytes once. 2. After that I want to store these raw bytes as a binary file in the app's path (see the sketch after this method).

    // Do whatever with your bytes
    NSLog(@"bytes per row %zd", bytesPerRow);

    [dataForRawBytes writeToFile:[self datafilepath] atomically:YES];

    NSLog(@"Sample Buffer Data is %@\n", dataForRawBytes);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

}
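For the second problem, here is a minimal sketch of one way to accumulate the raw frames into a single binary file in the app's Documents directory. The -datafilepath helper below is an assumption (the question calls it but does not show it), and the NSFileHandle-based append is a suggestion so that each new frame does not overwrite the previous one, which is what -writeToFile:atomically: does:

// Hypothetical helper: path of the output file in the app's Documents directory
// (the question calls -datafilepath but does not show its implementation).
- (NSString *)datafilepath
{
    NSString *documents = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                               NSUserDomainMask, YES) firstObject];
    return [documents stringByAppendingPathComponent:@"rawframes.bin"];
}

// Append one frame's bytes to the end of the file instead of replacing it.
- (void)appendFrameData:(NSData *)frameData
{
    NSString *path = [self datafilepath];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    if (![fileManager fileExistsAtPath:path]) {
        [fileManager createFileAtPath:path contents:nil attributes:nil];
    }
    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    [handle seekToEndOfFile];
    [handle writeData:frameData];
    [handle closeFile];
}

Calling [self appendFrameData:dataForRawBytes] from the delegate method would then collect every frame; keep in mind the delegate runs on the queue passed to -setSampleBufferDelegate:queue:, so heavy disk I/O there can cause dropped frames.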

Here I set the output delegate.

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input)
    {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [session addOutput:output];

    // Configure your output: deliver sample buffers to this object on a serial queue.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    // dispatch_release(queue); // not needed (and not allowed) under ARC on iOS 6+

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    // output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar. Note: under ARC the session must be kept in a
    // strong ivar/property, otherwise it is deallocated when this method returns
    // and capture stops.
    //[self setSession:session];

}
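One note on the commented-out [self setSession:session] line above: under ARC the session has to be stored in something strong, otherwise it is released as soon as -setupCaptureSession returns and no sample buffers are delivered. A minimal sketch, assuming the view controller declares a property for it (the property name is an assumption):

// In the class extension (or header); keeps the capture session alive.
@property (nonatomic, strong) AVCaptureSession *session;

// ...and at the end of -setupCaptureSession, instead of the commented-out line:
self.session = session;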

I appreciate any help. Thanks.

1 Answer:

Answer 0 (score: 2)

You can look at the AVFoundation framework. It gives you access to the raw data generated by the camera.

This link is a good introductory-level project on using the camera with AVFoundation.

To grab individual frames from the video output, you can use the AVCaptureVideoDataOutput class from the AVFoundation framework.

Hope this helps.

Edit: Have a look at the delegate methods of AVCaptureVideoDataOutputSampleBufferDelegate, in particular captureOutput:didOutputSampleBuffer:fromConnection:. This method is called every time a new frame is captured, as sketched below.
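For illustration, a minimal, self-contained sketch of that delegate pattern. The FrameGrabber class name is hypothetical, and it assumes the output was configured for kCVPixelFormatType_32BGRA and that an instance of this class was passed to -setSampleBufferDelegate:queue:, as in the question's -setupCaptureSession:

#import <AVFoundation/AVFoundation.h>

@interface FrameGrabber : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@end

@implementation FrameGrabber

// Called once per captured frame, on the queue passed to -setSampleBufferDelegate:queue:.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    // With kCVPixelFormatType_32BGRA, the base address points at interleaved BGRA bytes.
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    NSLog(@"frame %zux%zu, first pixel B=%d G=%d R=%d",
          width, height, baseAddress[0], baseAddress[1], baseAddress[2]);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

@end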

If you are not sure how delegates work, this link is a good example of delegates.