CIContext initialization crash

Asked: 2017-04-10 03:02:06

Tags: ios image camera swift2

Background: I'm running a Swift 2 application that offers the following two sign-in options.

Option A: The user can enter a number to sign in. In this case, his/her picture is shown in a UIImageView.

Option B: The user can use an NFC tag to sign in. In this case, the UIImageView is replaced with a camera layer that shows a live camera stream and uses a CIContext to capture an image on a button press.

Problem: The issue I'm facing is that sometimes, when I choose option A (i.e., without using the camera layer), the app crashes. Since I'm unable to reproduce the crash deterministically, I have hit a dead end in understanding why the app is crashing.

EDIT: The camera layer is used in both options but is hidden in option A.

Crashlytics generates the following crash log:

0   libswiftCore.dylib specialized _fatalErrorMessage(StaticString, StaticString, StaticString, UInt) -> () + 44
1   CameraLayerView.swift line 20 CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
2   CameraLayerView.swift line 0 @objc CameraLayerView.init(coder : NSCoder) -> CameraLayerView?
3   UIKit -[UIClassSwapper initWithCoder:] + 248
32  UIKit UIApplicationMain + 208
33  AppDelegate.swift line 17 main
34  libdispatch.dylib (Missing)

I've checked line 20 in CameraLayerView, but it is just an initialization statement:

private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))

The CameraLayerView file is shown below. Any help would be appreciated.

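// Excerpt of CameraLayerView.swift. Per the crash log above, this is a
// UIView subclass loaded from a storyboard/nib (hence init(coder:)) that
// also adopts AVCaptureVideoDataOutputSampleBufferDelegate; the class
// declaration and imports are omitted in the post.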
var captureSession = AVCaptureSession()
var sessionOutput = AVCaptureVideoDataOutput()
var previewLayer = AVCaptureVideoPreviewLayer()

private var pixelBuffer : CVImageBuffer!
private var attachments : CFDictionary!
private var ciImage : CIImage!
private let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
private var imageOptions : [String : AnyObject]!

var faceFound = false
var image : UIImage!

override func layoutSubviews() {
    previewLayer.position = CGPoint(x: self.frame.width/2, y: self.frame.height/2)
    previewLayer.bounds = self.frame
    self.layer.borderWidth = 2.0
    self.layer.borderColor = UIColor.redColor().CGColor
}

// Finds the front camera, (re)configures the capture session, and
// attaches a live preview layer to this view.
func loadCamera() {
    let camera = AVCaptureDevice.devicesWithMediaType(AVMediaTypeVideo)
    for device in camera {
        if device.position == .Front {
            do{
                for input in captureSession.inputs {
                    captureSession.removeInput(input as! AVCaptureInput)
                }
                for output in captureSession.outputs {
                    captureSession.removeOutput(output as! AVCaptureOutput)
                }
                previewLayer.removeFromSuperlayer()
                previewLayer.session = nil
                let input = try AVCaptureDeviceInput(device: device as! AVCaptureDevice)
                if captureSession.canAddInput(input) {
                    captureSession.addInput(input)
                    sessionOutput.videoSettings = [String(kCVPixelBufferPixelFormatTypeKey) : Int(kCVPixelFormatType_32BGRA)]
                    sessionOutput.setSampleBufferDelegate(self, queue: dispatch_get_global_queue(Int(QOS_CLASS_BACKGROUND.rawValue), 0))
                    sessionOutput.alwaysDiscardsLateVideoFrames = true

                    if captureSession.canAddOutput(sessionOutput) {
                        captureSession.addOutput(sessionOutput)
                        captureSession.sessionPreset = AVCaptureSessionPresetPhoto
                        captureSession.startRunning()

                        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                        previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                        // Match the preview orientation to the current device orientation.
                        switch UIDevice.currentDevice().orientation {
                        case .Portrait:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.Portrait
                        case .PortraitUpsideDown:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.PortraitUpsideDown
                        case .LandscapeLeft:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
                        case .LandscapeRight:
                            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeLeft
                        default:
                            break
                        }
                        self.layer.addSublayer(previewLayer)
                    }
                }

            } catch {
                print("Error configuring camera input: \(error)")
            }
        }
    }
}

// Tears down the preview and returns the most recent frame captured by
// the sample-buffer delegate below.
func takePicture() -> UIImage {
    self.previewLayer.removeFromSuperlayer()
    self.captureSession.stopRunning()
    return image
}

func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    // Wrap the incoming frame in a CIImage, propagating its attachments.
    pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate)
    ciImage = CIImage(CVPixelBuffer: pixelBuffer!, options: attachments as? [String : AnyObject])
    // EXIF orientation hint (values 1-8) passed to the face detector.
    if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
        imageOptions = [CIDetectorImageOrientation : 8]
    } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
        imageOptions = [CIDetectorImageOrientation : 3]
    } else if UIDevice.currentDevice().orientation == .LandscapeRight {
        imageOptions = [CIDetectorImageOrientation : 1]
    } else {
        imageOptions = [CIDetectorImageOrientation : 6]
    }
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: ciContext, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = faceDetector.featuresInImage(ciImage, options: imageOptions)
    if features.count == 0 {
        if faceFound == true {
            faceFound = false
            dispatch_async(dispatch_get_main_queue()) {
                self.layer.borderColor = UIColor.redColor().CGColor
            }
        }
    } else {
        if UIDevice.currentDevice().orientation == .PortraitUpsideDown {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Left)
        } else if UIDevice.currentDevice().orientation == .LandscapeLeft {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Down)
        } else if UIDevice.currentDevice().orientation == .LandscapeRight {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Up)
        } else {
            image = UIImage(CGImage: ciContext.createCGImage(ciImage, fromRect: ciImage.extent), scale: 1.0, orientation: UIImageOrientation.Right)
        }
        if faceFound == false {
            faceFound = true
            for feature in features {
                if feature is CIFaceFeature {
                    dispatch_async(dispatch_get_main_queue()) {
                        self.layer.borderColor = UIColor.greenColor().CGColor
                    }
                }
            }
        }
    }
}

1 Answer:

Answer (score: 0):

I tested a theory and it worked. Since ciContext was being initialized as part of the view's initialization, it looks like the app was crashing because of a race condition. I moved the initialization of ciContext into my loadCamera method, and the app stopped crashing.
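A minimal sketch of that change, in the post's Swift 2 syntax (the surrounding code here is assumed, not taken verbatim from the original):

// ciContext is now created lazily instead of during init(coder:).
private var ciContext: CIContext!

func loadCamera() {
    // Create the context only once the camera is actually needed.
    if ciContext == nil {
        ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2))
    }
    // ... the existing session setup continues as before ...
}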

UPDATE:

Another thing I noticed is that in various tutorials and blog posts around the internet, the statement let ciContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2)) is split into two separate declarations, so that it becomes

let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext)
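Since EAGLContext(API:) is documented to return nil when the context cannot be created (for example under memory pressure), a guarded version of that split, a sketch rather than part of the original answer, would avoid force-unwrapping a nil context:

if let eaglContext = EAGLContext(API: .OpenGLES2) {
    ciContext = CIContext(EAGLContext: eaglContext)
} else {
    // Context creation failed; report it instead of crashing later.
    print("Could not create EAGLContext")
}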

I still don't know exactly what was causing the app to crash, but these two changes seem to have fixed the problem.

CORRECT ANSWER:

Finally found the culprit. In the view controller where I was using ciContext, I had a timer that was never invalidated and therefore kept a strong reference to the view controller. On each subsequent visit a new view controller was created, while the previous one was never released from memory. This caused memory usage to pile up, and once it crossed a certain threshold the CIContext initializer returned nil because of insufficient memory, which crashed the app.
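For illustration, a minimal sketch of the kind of leak described (the class, timer, and method names are hypothetical, not the original code; Swift 2.2 #selector syntax): an NSTimer scheduled with target: self retains its target, so the view controller can never be deallocated until invalidate() is called.

class SignInViewController: UIViewController {
    var refreshTimer: NSTimer?   // hypothetical timer

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        // NSTimer retains its target, so this repeating timer keeps the
        // view controller alive until it is invalidated.
        refreshTimer = NSTimer.scheduledTimerWithTimeInterval(1.0,
            target: self, selector: #selector(tick), userInfo: nil, repeats: true)
    }

    override func viewWillDisappear(animated: Bool) {
        super.viewWillDisappear(animated)
        // Invalidating breaks the retain cycle; without this, every visit
        // leaks a view controller (and its CIContext/EAGLContext resources).
        refreshTimer?.invalidate()
        refreshTimer = nil
    }

    func tick() { /* periodic work */ }
}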