iOS camera live video preview is offset from the captured image

Date: 2014-10-19 04:05:09

Tags: ios swift camera uikit avfoundation

I am working with the camera.

The camera is presented to the user as a live feed, and when they tap, an image is captured and handed back to them.

The problem is that the captured image is framed from a higher position than what the live preview displays.

Does anyone know how to adjust the camera's frame so that the top of the live video feed matches the top of the photo that will be taken?

I thought the following would do it, but it doesn't. Here is my current camera setup code:

 //Add the device to the session, get the video feed it produces and add it to the video feed layer
    func initSessionFeed()
    {
        _session = AVCaptureSession()
        _session.sessionPreset = AVCaptureSessionPresetPhoto
        updateVideoFeed()

        _videoPreviewLayer = AVCaptureVideoPreviewLayer(session: _session)
        _videoPreviewLayer.frame = CGRectMake(0,0, self.frame.width, self.frame.width) //the live footage IN the video feed view
        _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.layer.addSublayer(_videoPreviewLayer)//add the footage from the device to the video feed layer
    }
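For reference (this note is mine, not part of the original post): with `AVCaptureSessionPresetPhoto` the sensor delivers a 4:3 image, so aspect-filling it into the square layer above scales the photo to the layer's width and hides equal bands at the top and bottom. A minimal sketch of that arithmetic, assuming a portrait 3:4 feed in a square preview:

```swift
// How much of a 4:3 photo feed is hidden at each edge when it is
// aspect-filled into a square preview layer of side `side`
// (portrait orientation: the scaled feed is `side` wide, `side * 4/3` tall).
func hiddenBandPerEdge(side: Double) -> Double
{
    let scaledFeedHeight = side * 4.0 / 3.0   // feed scaled to fill the layer width
    let overflow = scaledFeedHeight - side    // total height that does not fit
    return overflow / 2.0                     // cropped equally at top and bottom
}
```

For a 320-point-wide square preview this hides roughly 53 points at each edge, which matches the symptom in the question: the photo's true top edge sits above what the preview shows.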

    func initOutputCapture()
    {
        //set up output settings
        _stillImageOutput = AVCaptureStillImageOutput()
        var outputSettings:Dictionary = [AVVideoCodecKey: AVVideoCodecJPEG] // settings are keyed by AVVideoCodecKey
        _stillImageOutput.outputSettings = outputSettings
        _session.addOutput(_stillImageOutput)
        _session.startRunning()
    }

    func configureDevice()
    {
        if _currentDevice != nil
        {
            _currentDevice.lockForConfiguration(nil)
            _currentDevice.focusMode = .Locked
            _currentDevice.unlockForConfiguration()
        }
    }

    func captureImage(callback:(iImage)->Void)
    {
        if(_captureInProcess == true)
        {
            return
        }
        _captureInProcess = true

        var videoConnection:AVCaptureConnection!
        for connection in _stillImageOutput.connections
        {
            for port in (connection as AVCaptureConnection).inputPorts
            {
                if (port as AVCaptureInputPort).mediaType == AVMediaTypeVideo
                {
                    videoConnection = connection as AVCaptureConnection
                    break // found a video port on this connection
                }
            }

            if videoConnection != nil
            {
                break // stop scanning once a video connection is found
            }
        }

        if videoConnection  != nil
        {
            _stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection)
            {
                (imageSampleBuffer : CMSampleBuffer!, _) in
                let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
                var pickedImage = UIImage(data: imageDataJpeg, scale: 1)
                UIGraphicsBeginImageContextWithOptions(pickedImage.size, false, pickedImage.scale)
                pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                pickedImage = UIGraphicsGetImageFromCurrentImageContext() //this returns a normalized image
                if(self._currentDevice == self._frontCamera)
                {
                    // re-draw mirrored so front-camera shots match the preview
                    pickedImage = UIImage(CGImage: pickedImage.CGImage, scale: 1.0, orientation: .UpMirrored)
                    pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                    pickedImage = UIGraphicsGetImageFromCurrentImageContext()
                }
                UIGraphicsEndImageContext()
                var image:iImage = iImage(uiimage: pickedImage)
                self._captureInProcess = false
                callback(image)
            }
        }
    }
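One way to make the saved photo match the preview (a hedged sketch, not part of the original post): `AVCaptureVideoPreviewLayer` can report which normalized portion of the capture it is displaying via `metadataOutputRectOfInterestForRect(_:)` (available since iOS 6), and the photo can then be cropped to that rect. The helper name below is my own; note that the returned rect is expressed in the capture device's coordinate space, so in practice extra handling of `UIImage` orientation may be needed:

```swift
// Hedged sketch: crop the captured photo to the region the preview
// layer was actually displaying.
func cropToPreview(photo: UIImage, previewLayer: AVCaptureVideoPreviewLayer) -> UIImage
{
    // Normalized (0...1) rect of the capture shown by the layer
    let outputRect = previewLayer.metadataOutputRectOfInterestForRect(previewLayer.bounds)
    let cgImage = photo.CGImage
    let width = CGFloat(CGImageGetWidth(cgImage))
    let height = CGFloat(CGImageGetHeight(cgImage))
    // Scale the normalized rect up to pixel coordinates and crop
    let cropRect = CGRectMake(outputRect.origin.x * width,
                              outputRect.origin.y * height,
                              outputRect.size.width * width,
                              outputRect.size.height * height)
    let croppedRef = CGImageCreateWithImageInRect(cgImage, cropRect)
    return UIImage(CGImage: croppedRef, scale: photo.scale, orientation: photo.imageOrientation)
}
```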

If I adjust the frame of the AVCaptureVideoPreviewLayer by increasing its y value, I just get a black bar at the offset. I am very curious why the top of the video feed does not match the top of my output image.

I do 'crop' the camera view so it is a perfect square, but then why isn't the top of the live feed the actual top of the photo (the image defaults to a higher position that the camera feed does not show)?

Update

Here are before and after screenshots of what I am talking about:

Before: Before image — this is what the live feed displays

After: After image — this is the image produced when the user taps to take a picture

2 answers:

Answer 0 (score: 1)

Instead of

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

you could try

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect

In general, the preview and the captured image must match in width and height. You will likely need to do some additional "cropping", either on the preview, on the final image, or on both.
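To make this point concrete, the sub-rectangle of the photo that an aspect-filled preview actually shows can be computed from the two aspect ratios alone. A minimal sketch (the function name and tuple return are my own, not from the answer):

```swift
// Given the size of the captured photo and of the preview layer, compute
// the sub-rectangle of the photo that AVLayerVideoGravityResizeAspectFill
// actually displays. Values are in the photo's pixel coordinates.
func visibleRect(#photoWidth: Double, photoHeight: Double,
                 previewWidth: Double, previewHeight: Double)
                 -> (x: Double, y: Double, w: Double, h: Double)
{
    let photoAspect = photoWidth / photoHeight
    let previewAspect = previewWidth / previewHeight
    if photoAspect > previewAspect {
        // photo is wider than the preview: left/right edges are cropped
        let visibleW = photoHeight * previewAspect
        return ((photoWidth - visibleW) / 2.0, 0.0, visibleW, photoHeight)
    } else {
        // photo is taller than the preview: top/bottom edges are cropped
        let visibleH = photoWidth / previewAspect
        return (0.0, (photoHeight - visibleH) / 2.0, photoWidth, visibleH)
    }
}
```

For a 1536×2048 portrait photo shown in a square preview, this yields a crop starting 256 px below the photo's top, which is exactly the kind of vertical offset described in the question.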

Answer 1 (score: 0)

I had the same problem, and this code worked for me:

previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
var bounds = UIScreen.mainScreen().bounds

previewLayer?.bounds = bounds
previewLayer?.videoGravity = AVLayerVideoGravityResize
previewLayer?.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds))
self.cameraPreview.layer.addSublayer(previewLayer!)
captureSession.startRunning()