Real-time face tracking with the camera in Swift 4

Time: 2018-01-05 15:09:02

Tags: ios swift avfoundation apple-vision

I want to be able to track the user's face from the camera feed. I looked at this SO post and used the code given in the answer, but it didn't seem to do anything. I've heard that

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
was changed to something else in Swift 4. Could this be the problem with my code?

While tracking the face, I'd also like to monitor facial landmarks using CIFaceFeature. How can I do that?

1 Answer:

Answer 0 (score: 0)

I found a starting point here: https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision

Basically, you instantiate the video capture session as a lazy variable:

private lazy var captureSession: AVCaptureSession = {
    let session = AVCaptureSession()
    session.sessionPreset = AVCaptureSession.Preset.photo
    // Use the front-facing wide-angle camera as input; bail out if it is unavailable
    guard
        let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
        let input = try? AVCaptureDeviceInput(device: frontCamera)
        else { return session }
    session.addInput(input)
    return session
}()
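
One thing the snippet above doesn't show: with only an input attached, the session never delivers frames to captureOutput(_:didOutput:from:). As a sketch (this helper and its name are my assumption, not part of the original answer), you would also attach an AVCaptureVideoDataOutput and register the view controller, which must conform to AVCaptureVideoDataOutputSampleBufferDelegate, as its delegate:

// Assumed helper: attach a video data output so the delegate callback fires
private func configureVideoOutput() {
    let output = AVCaptureVideoDataOutput()
    // Drop frames that arrive while earlier frames are still being processed
    output.alwaysDiscardsLateVideoFrames = true
    // Deliver sample buffers to this view controller on a serial background queue
    output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video-frames"))
    if self.captureSession.canAddOutput(output) {
        self.captureSession.addOutput(output)
    }
}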

Then start the session inside viewDidLoad:

self.captureSession.startRunning()
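
Note that startRunning() blocks the calling thread until the session has started, so as a sketch (the background queue here is my assumption, not part of the original answer) you might dispatch it off the main thread:

override func viewDidLoad() {
    super.viewDidLoad()
    // startRunning() is a blocking call, so run it on a background
    // queue to keep the UI responsive
    DispatchQueue.global(qos: .userInitiated).async {
        self.captureSession.startRunning()
    }
}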

Finally, you can perform your Vision requests inside

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
}

For example:

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // make sure the sample buffer can be converted to a pixel buffer
    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        else { return }

    // ask Vision for face rectangles; results arrive in the completion handler
    let faceRequest = VNDetectFaceRectanglesRequest(completionHandler: self.faceDetectedRequestUpdate)

    // perform the request on this frame
    do {
        try self.visionSequenceHandler.perform([faceRequest], on: pixelBuffer)
    } catch {
        print("Throws: \(error)")
    }
}
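
The snippet assumes a visionSequenceHandler property on the view controller; a minimal declaration (assumed here, the original answer never shows it) would be:

import Vision

// Reuse one handler across frames so Vision can carry state between requests
private let visionSequenceHandler = VNSequenceRequestHandler()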

Then define the faceDetectedRequestUpdate function.
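
The original answer never shows that function; a minimal sketch of such a completion handler, which just logs any detected face rectangles, might look like this:

// Hypothetical implementation: the original answer only names this handler
func faceDetectedRequestUpdate(_ request: VNRequest, error: Error?) {
    guard let observations = request.results as? [VNFaceObservation] else { return }
    for face in observations {
        // boundingBox is normalized (0...1) with the origin at the bottom-left
        print("Face at \(face.boundingBox), confidence: \(face.confidence)")
    }
}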

Anyway, I have to say I haven't been able to figure out how to build a working example from here. The best example I've found is in the Apple documentation: https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
