AVFoundation - detect a face and crop the face region?

Date: 2014-06-06 13:30:35

Tags: ios ipad crop face-detection

As the title says, I want to detect a face and then crop just the face region. This is what I have so far:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {

    for (AVMetadataObject *face in metadataObjects) {
        if ([face.type isEqualToString:AVMetadataObjectTypeFace]) {

            AVCaptureConnection *stillConnection = [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
            stillConnection.videoOrientation = [self videoOrientationFromCurrentDeviceOrientation];
            [_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                if (error) {
                    NSLog(@"There was a problem capturing the still image: %@", error);
                    return;
                }

                NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
                UIImage *stillImage = [UIImage imageWithData:jpegData];

                CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:[CIContext contextWithOptions:nil] options:nil];
                CIImage *ciimage = [CIImage imageWithData:jpegData];

                NSArray *features = [faceDetector featuresInImage:ciimage];
                self.captureImageView.image = stillImage;

                for (CIFeature *feature in features) {
                    if ([feature isKindOfClass:[CIFaceFeature class]]) {
                        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

                        // crop the still image to the detected face bounds
                        CGImageRef imageRef = CGImageCreateWithImageInRect([stillImage CGImage], faceFeature.bounds);
                        self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
                        CGImageRelease(imageRef);
                    }
                }
                //[_session stopRunning];
            }];
        }
    }
}
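The helper -videoOrientationFromCurrentDeviceOrientation is referenced above but not shown. A minimal sketch of what such a mapping usually looks like (my assumption, not the asker's actual code) would be:

- (AVCaptureVideoOrientation)videoOrientationFromCurrentDeviceOrientation {
    // Map the device orientation to the capture connection's orientation.
    // Note that the two landscape cases are mirrored relative to each other.
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            return AVCaptureVideoOrientationPortraitUpsideDown;
        case UIDeviceOrientationLandscapeLeft:
            return AVCaptureVideoOrientationLandscapeRight;
        case UIDeviceOrientationLandscapeRight:
            return AVCaptureVideoOrientationLandscapeLeft;
        default:
            return AVCaptureVideoOrientationPortrait;
    }
}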

This code partially works: it detects the face, but it does not crop the face correctly; it always extracts the wrong region of the image. I have been going through answers on Stack Overflow, trying this and that, but to no avail.
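The most likely cause of the wrong crop is a coordinate-system mismatch: CIFaceFeature.bounds is reported in Core Image coordinates, whose origin is the bottom-left corner of the image, while CGImageCreateWithImageInRect expects a rect in the CGImage's top-left-origin space. A sketch of the conversion, flipping the Y axis before cropping (and assuming the JPEG data is already upright), would be:

    // Flip the face rect from Core Image (bottom-left origin) into the
    // CGImage's top-left-origin coordinate space before cropping.
    CGRect faceRect = faceFeature.bounds;
    CGFloat imageHeight = CGImageGetHeight(stillImage.CGImage);
    faceRect.origin.y = imageHeight - CGRectGetMaxY(faceRect);

    CGImageRef imageRef = CGImageCreateWithImageInRect(stillImage.CGImage, faceRect);
    self.detectedFaceImageView.image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

If the UIImage carries a non-up imageOrientation, the rect may also need to be rotated to match the underlying bitmap.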

1 Answer:

Answer 0 (score: 4)

Here is the answer:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // only run detection when requested
    if (!_canStartDetection) return;

    CIImage *ciimage = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    NSArray *features = [_faceDetector featuresInImage:ciimage options:nil];

    // find a face feature
    for (CIFeature *feature in features) {

        // if not a face feature, ignore it
        if (![feature isKindOfClass:[CIFaceFeature class]]) continue;

        // face detected; stop detecting on further frames
        _canStartDetection = NO;
        CIFaceFeature *faceFeature = (CIFaceFeature *)feature;

        // crop the detected face with the CICrop filter; the face bounds
        // and the CIImage share the same bottom-left-origin coordinate
        // space, so the rect can be used directly
        CIVector *cropRect = [CIVector vectorWithCGRect:faceFeature.bounds];
        CIFilter *cropFilter = [CIFilter filterWithName:@"CICrop"];
        [cropFilter setValue:ciimage forKey:@"inputImage"];
        [cropFilter setValue:cropRect forKey:@"inputRectangle"];
        CIImage *croppedImage = [cropFilter valueForKey:@"outputImage"];
        UIImage *stillImage = [UIImage imageWithCIImage:croppedImage];
        // use stillImage here (display it, save it, etc.)
    }
}
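One caveat: +[UIImage imageWithCIImage:] wraps the CIImage without rendering it, which can behave unexpectedly when the result is assigned to a UIImageView. Rendering through a CIContext is more predictable; a sketch (assuming a _ciContext ivar created once and reused) is:

    // Rasterize the cropped CIImage into a bitmap-backed UIImage.
    CGImageRef cgImage = [_ciContext createCGImage:croppedImage fromRect:croppedImage.extent];
    UIImage *faceImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);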

Note that this time I used an AVCaptureVideoDataOutput; here is the setup code:

// set output for face frames
AVCaptureVideoDataOutput *output2 = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output2];
output2.videoSettings = @{(NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
output2.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue = dispatch_queue_create("com.myapp.faceDetectionQueueSerial", DISPATCH_QUEUE_SERIAL);
[output2 setSampleBufferDelegate:self queue:queue];
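The delegate code above also assumes a _faceDetector ivar. The answer does not show how it is created, but a typical setup, constructed once and reused because creating a CIDetector is expensive, would be:

    // Low accuracy is usually preferred when scanning live video frames.
    _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                       context:nil
                                       options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];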