Facial feature detection in iOS using OpenCV

Asked: 2016-05-26 21:43:09

Tags: ios opencv opencv3.0

I want to do facial feature detection in iOS. I have already been able to detect a face using OpenCV, but now I want to detect all of the "features" on that face so that I can perform recognition on them later.

I found a library called flandmark, but it doesn't look like it has a framework that I can use on iOS.

Does anyone know how to do this?

Thanks, Nikhil Mehta

1 Answer:

Answer 0: (score: 3)

I would recommend keeping it as simple as possible: in this case, just use the native iOS capabilities, if they are sufficient.

The main class is CIDetector from the CoreImage framework. Here are the key calls:

// create CIDetector object with CIDetectorTypeFace type
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

// give it CIImage and receive an array with CIFaceFeature objects
NSArray *features = [detector featuresInImage:newImage];
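
Note that detection is orientation-sensitive. If the detector returns an empty array for rotated images, you can pass the image's EXIF orientation through the options dictionary of featuresInImage:options: (a minimal sketch; the value 1 below assumes an "up" image):

// Tell the detector how the image is oriented (EXIF values 1-8, 1 = "up");
// otherwise faces in rotated images may be missed.
NSArray *features = [detector featuresInImage:newImage
                                      options:@{CIDetectorImageOrientation: @(1)}];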

According to the Apple documentation, CIFaceFeature contains the following properties:

@interface CIFaceFeature : CIFeature

@property (readonly, assign) CGRect bounds;
@property (readonly, assign) BOOL hasLeftEyePosition;
@property (readonly, assign) CGPoint leftEyePosition;
@property (readonly, assign) BOOL hasRightEyePosition;
@property (readonly, assign) CGPoint rightEyePosition;
@property (readonly, assign) BOOL hasMouthPosition;
@property (readonly, assign) CGPoint mouthPosition;

@property (readonly, assign) BOOL hasTrackingID;
@property (readonly, assign) int trackingID;
@property (readonly, assign) BOOL hasTrackingFrameCount;
@property (readonly, assign) int trackingFrameCount;

@property (readonly, assign) BOOL hasFaceAngle;
@property (readonly, assign) float faceAngle;

@property (readonly, assign) BOOL hasSmile;
@property (readonly, assign) BOOL leftEyeClosed;
@property (readonly, assign) BOOL rightEyeClosed;

@end
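
Each has... flag guards its corresponding value, so check it before reading the position. A minimal sketch of iterating the detected features (logging only, property names as above):

for (CIFaceFeature *face in features) {
    // bounds is in Core Image coordinates (origin at the bottom-left)
    NSLog(@"Face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"Left eye: %@", NSStringFromCGPoint(face.leftEyePosition));
    }
    if (face.hasRightEyePosition) {
        NSLog(@"Right eye: %@", NSStringFromCGPoint(face.rightEyePosition));
    }
    if (face.hasMouthPosition) {
        NSLog(@"Mouth: %@", NSStringFromCGPoint(face.mouthPosition));
    }
}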

There is also a great Raywenderlich article about GCD that implements facial feature detection; here is its Final project. It finds people's eye positions and overlays them with funny googly eyes.

Finally, here is part of the code from the project, and a screenshot.
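
The code below also uses two scale constants that are defined elsewhere in the project; the values here are illustrative placeholders, not the project's actual numbers:

// Placeholder values, assumed for this sketch; see the Final project for the real ones.
static const CGFloat kFaceBoundsToEyeScaleFactor = 4.0f; // eye size relative to the face rect
static const CGFloat kRetinaToEyeScaleFactor = 0.5f;     // pupil size relative to the eye rect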

- (UIImage *)faceOverlayImageFromImage:(UIImage *)image
{
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    // Get features from the image
    CIImage* newImage = [CIImage imageWithCGImage:image.CGImage];

    NSArray *features = [detector featuresInImage:newImage];

    UIGraphicsBeginImageContext(image.size);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);

    //Draws this in the upper left coordinate system
    [image drawInRect:imageRect blendMode:kCGBlendModeNormal alpha:1.0f];

    CGContextRef context = UIGraphicsGetCurrentContext();

    for (CIFaceFeature *faceFeature in features) {
        CGRect faceRect = [faceFeature bounds];
        CGContextSaveGState(context);

        // CI and CG work in different coordinate systems, we should translate to
        // the correct one so we don't get mixed up when calculating the face position.
        CGContextTranslateCTM(context, 0.0, imageRect.size.height);
        CGContextScaleCTM(context, 1.0f, -1.0f);

        if ([faceFeature hasLeftEyePosition]) {
            CGPoint leftEyePosition = [faceFeature leftEyePosition];
            CGFloat eyeWidth = faceRect.size.width / kFaceBoundsToEyeScaleFactor;
            CGFloat eyeHeight = faceRect.size.height / kFaceBoundsToEyeScaleFactor;
            CGRect eyeRect = CGRectMake(leftEyePosition.x - eyeWidth/2.0f,
                                        leftEyePosition.y - eyeHeight/2.0f,
                                        eyeWidth,
                                        eyeHeight);
            [self drawEyeBallForFrame:eyeRect];
        }

        if ([faceFeature hasRightEyePosition]) {
            CGPoint rightEyePosition = [faceFeature rightEyePosition];
            CGFloat eyeWidth = faceRect.size.width / kFaceBoundsToEyeScaleFactor;
            CGFloat eyeHeight = faceRect.size.height / kFaceBoundsToEyeScaleFactor;
            CGRect eyeRect = CGRectMake(rightEyePosition.x - eyeWidth / 2.0f,
                                        rightEyePosition.y - eyeHeight / 2.0f,
                                        eyeWidth,
                                        eyeHeight);
            [self drawEyeBallForFrame:eyeRect];
        }

        CGContextRestoreGState(context);
    }

    UIImage *overlayImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return overlayImage;
}

- (void)drawEyeBallForFrame:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(context, rect);
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillPath(context);

    CGFloat x, y, eyeSizeWidth, eyeSizeHeight;
    eyeSizeWidth = rect.size.width * kRetinaToEyeScaleFactor;
    eyeSizeHeight = rect.size.height * kRetinaToEyeScaleFactor;

    x = arc4random_uniform((uint32_t)(rect.size.width - eyeSizeWidth));
    y = arc4random_uniform((uint32_t)(rect.size.height - eyeSizeHeight));
    x += rect.origin.x;
    y += rect.origin.y;

    CGFloat eyeSize = MIN(eyeSizeWidth, eyeSizeHeight);
    CGRect eyeBallRect = CGRectMake(x, y, eyeSize, eyeSize);
    CGContextAddEllipseInRect(context, eyeBallRect);
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextFillPath(context);
}
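
The article runs this work off the main thread with GCD. A minimal usage sketch (the photoImageView outlet is an assumption here, not part of the original snippet):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Generate the overlay on a background queue...
    UIImage *overlay = [self faceOverlayImageFromImage:self.photoImageView.image];
    dispatch_async(dispatch_get_main_queue(), ^{
        // ...but only touch UIKit on the main thread.
        self.photoImageView.image = overlay;
    });
});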

Screenshot from the Raywenderlich project.

Hope it helps.
