Applying visual effects to an image pixel by pixel in Swift

Date: 2014-08-31 00:00:52

Tags: ios iphone camera swift core-image

I have a university assignment to create visual effects and apply them to video frames captured through the device camera. I can currently get the frames and display them, but I cannot change their pixel color values.

I convert the sample buffer into the imageRef variable, and if I turn that into a UIImage everything is fine.

But now I want to take that image and change its color values pixel by pixel, in this example inverting them to the negative colors (I will have to do more complex things later, so I can't use CIFilters). However, when I execute the commented-out section below, the app crashes with a bad access error.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

  let captureSession = AVCaptureSession()
  var previewLayer : AVCaptureVideoPreviewLayer?

  var captureDevice : AVCaptureDevice?

  @IBOutlet weak var cameraView: UIImageView!

  override func viewDidLoad() {
    super.viewDidLoad()

    captureSession.sessionPreset = AVCaptureSessionPresetMedium

    let devices = AVCaptureDevice.devices()

    for device in devices {
      if device.hasMediaType(AVMediaTypeVideo) && device.position == AVCaptureDevicePosition.Back {
        if let device = device as? AVCaptureDevice {
          captureDevice = device
          beginSession()
          break
        }
      }
    }
  }

  func focusTo(value : Float) {
    if let device = captureDevice {
      if(device.lockForConfiguration(nil)) {
        device.setFocusModeLockedWithLensPosition(value) {
          (time) in
        }
        device.unlockForConfiguration()
      }
    }
  }

  override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
    var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
    focusTo(touchPercent)
  }

  override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
    var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
    focusTo(touchPercent)
  }

  func beginSession() {
    configureDevice()

    var error : NSError?
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))

    if error != nil {
      println("error: \(error?.localizedDescription)")
    }

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

    previewLayer?.frame = view.layer.frame
    //view.layer.addSublayer(previewLayer)

    let output = AVCaptureVideoDataOutput()
    let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
    output.setSampleBufferDelegate(self, queue: cameraQueue)
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]

    captureSession.addOutput(output)
    captureSession.startRunning()
  }

  func configureDevice() {
    if let device = captureDevice {
      device.lockForConfiguration(nil)
      device.focusMode = .Locked
      device.unlockForConfiguration()
    }
  }

  // MARK: - AVCaptureVideoDataOutputSampleBufferDelegate

  func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little

    let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
    let imageRef = CGBitmapContextCreateImage(context)

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
    let pixels = data.bytes

    // This pointer is never pointed at any allocated memory; enabling the
    // commented-out loop below crashes with a bad access error.
    var newPixels = UnsafeMutablePointer<UInt8>()

    //for index in stride(from: 0, to: data.length, by: 4) {

      /*newPixels[index] = 255 - pixels[index]
      newPixels[index + 1] = 255 - pixels[index + 1]
      newPixels[index + 2] = 255 - pixels[index + 2]
      newPixels[index + 3] = 255 - pixels[index + 3]*/
    //}

    bitmapInfo = CGImageGetBitmapInfo(imageRef)
    let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)

    let newImageRef = CGImageCreate(width, height, CGImageGetBitsPerComponent(imageRef), CGImageGetBitsPerPixel(imageRef), bytesPerRow, colorSpace, bitmapInfo, provider, nil, false, kCGRenderingIntentDefault)

    let image = UIImage(CGImage: newImageRef, scale: 1, orientation: .Right)
    dispatch_async(dispatch_get_main_queue()) {
      self.cameraView.image = image
    }
  }
}

1 Answer:

Answer 0 (score: 1):

You get the bad access in your pixel-manipulation loop because the newPixels UnsafeMutablePointer is created with the built-in RawPointer initializer and points to 0x0000 in memory. As far as I can tell, that is unallocated memory space that you have no permission to store data in.
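For illustration only: if one wanted to keep a raw pointer, it would need explicitly allocated backing storage, roughly as in the hypothetical sketch below (Swift 1.x-era API; the array-based fix further down is what I actually used):

    // Hypothetical alternative, NOT the fix used below: give the pointer
    // real backing memory with alloc/dealloc instead of leaving it at 0x0.
    let imageSize = Int(width) * Int(height) * 4   // 4 bytes per BGRA pixel
    let newPixels = UnsafeMutablePointer<UInt8>.alloc(imageSize)
    // ... write the inverted bytes into newPixels[0 ..< imageSize] ...
    // and release the buffer once nothing references it any more:
    // newPixels.dealloc(imageSize)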

For a longer explanation, and a "solution", here are the changes I made.

First, Swift has changed somewhat since the OP was posted; this line has to be modified to use rawValue:

    //var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
    var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | CGBitmapInfo.ByteOrder32Little

The pointers also needed some changes, so I'm posting all of my edits (I left the original lines in as comments).

    let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData

    //let pixels = data.bytes
    let pixels = UnsafePointer<UInt8>(data.bytes)

    //var newPixels = UnsafeMutablePointer<UInt8>()

    // Size the buffer to data.length rather than width * height * 4:
    // bytesPerRow can include row padding, so the copied data may be larger.
    var newPixelArray = [UInt8](count: data.length, repeatedValue: 0)

    for index in stride(from: 0, to: data.length, by: 4) {
        newPixelArray[index] = 255 - pixels[index]
        newPixelArray[index + 1] = 255 - pixels[index + 1]
        newPixelArray[index + 2] = 255 - pixels[index + 2]
        newPixelArray[index + 3] = pixels[index + 3]
    }

    bitmapInfo = CGImageGetBitmapInfo(imageRef)
    //let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)
    let provider = CGDataProviderCreateWithData(nil, &newPixelArray, UInt(data.length), nil)

A few words of explanation: all the old pixel bytes must be read as UInt8, so pixels is cast to an UnsafePointer<UInt8> instead of being used through data.bytes directly. Then I created an array for the new pixels, dropped the newPixels pointer, and worked on the array directly. Finally, a pointer to the new array is handed to the data provider to create the image. I also removed the modification of the alpha bytes.
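Put together, the per-frame transformation can be factored into a small helper like the sketch below. The function name invertedImage is mine, not from the original code; it mirrors the corrected lines above, including their assumption that the data provider consumes the bytes before the local array goes away:

    func invertedImage(imageRef: CGImageRef) -> UIImage {
        // Copy the source image's raw pixel bytes (BGRA, 8 bits per channel).
        let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
        let pixels = UnsafePointer<UInt8>(data.bytes)

        // Destination buffer sized to the full data length (rows may be padded).
        var newPixelArray = [UInt8](count: data.length, repeatedValue: 0)

        for index in stride(from: 0, to: data.length, by: 4) {
            newPixelArray[index]     = 255 - pixels[index]      // blue
            newPixelArray[index + 1] = 255 - pixels[index + 1]  // green
            newPixelArray[index + 2] = 255 - pixels[index + 2]  // red
            newPixelArray[index + 3] = pixels[index + 3]        // alpha unchanged
        }

        // Rebuild a CGImage around the inverted bytes, reusing the source's geometry.
        let provider = CGDataProviderCreateWithData(nil, &newPixelArray, UInt(data.length), nil)
        let newImageRef = CGImageCreate(CGImageGetWidth(imageRef), CGImageGetHeight(imageRef),
            CGImageGetBitsPerComponent(imageRef), CGImageGetBitsPerPixel(imageRef),
            CGImageGetBytesPerRow(imageRef), CGImageGetColorSpace(imageRef),
            CGImageGetBitmapInfo(imageRef), provider, nil, false, kCGRenderingIntentDefault)

        return UIImage(CGImage: newImageRef, scale: 1, orientation: .Right)
    }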

After this I was able to get some negative images into my view, with very poor performance: roughly one image every 10 seconds or so (iPhone 5, running through Xcode). It also took a long time for the first frame to be presented in the image view.

The response got somewhat faster when I added captureSession.stopRunning() at the beginning of the didOutputSampleBuffer function and started the session again with captureSession.startRunning() once processing was done. With that I got to almost 1 fps.
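In code, that throttling is just a stop/start pair around the per-frame work (one possible placement; frames that arrive while the session is stopped are simply dropped):

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        // Pause capture so new frames don't pile up behind the slow pixel loop.
        captureSession.stopRunning()

        // ... existing buffer conversion and pixel inversion from the question ...

        // Resume capture once this frame has been fully processed.
        captureSession.startRunning()
    }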

Thanks for the interesting challenge!
