SceneKit Metal depth buffer

Asked: 2016-11-07 23:17:45

Tags: ios opengl-es scenekit metal

I'm trying to write an augmented reality app using SceneKit, and I need an accurate 3D point from the current rendered frame, given a 2D pixel and depth, using SCNSceneRenderer's unprojectPoint method. This requires an x, y, and z, where x and y are pixel coordinates and z is normally a value read from the depth buffer for that frame.
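For reference, the work unprojectPoint has to do can be sketched in C: invert the viewport transform to get NDC coordinates, then apply the inverse view-projection matrix and perspective-divide. This is only an illustration; the row-major matrix layout, the [-1, 1] z convention (OpenGL-style; Metal's NDC z is [0, 1]), and the precomputed invViewProj input are all assumptions, and the y-axis flip between UIKit and GL window coordinates is ignored.

```c
typedef struct { float x, y, z; } Vec3;

/* Sketch of unprojection: window-space (px, py, depth) -> world space.
   invViewProj is the inverse of the combined view-projection matrix,
   stored row-major as float[16]. depth is assumed to be in [0, 1] and
   is mapped to NDC z in [-1, 1], the OpenGL convention. */
Vec3 unproject(float px, float py, float depth,
               float viewW, float viewH, const float *invViewProj) {
    float ndc[4] = {
        px / viewW * 2.0f - 1.0f,   /* window x -> [-1, 1] */
        py / viewH * 2.0f - 1.0f,   /* window y -> [-1, 1] */
        depth * 2.0f - 1.0f,        /* window depth -> [-1, 1] */
        1.0f
    };
    float out[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            out[r] += invViewProj[r * 4 + c] * ndc[c];
    /* perspective divide back to 3D */
    Vec3 world = { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    return world;
}
```

The key point for this question is the third component of `ndc`: whether the stored depth needs the `* 2 - 1` remap at all depends on which convention produced it.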

The SCNView's delegate has this method to render the depth frame:

func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    renderDepthFrame()
}

func renderDepthFrame() {

    // set up our viewport
    let viewport = CGRect(x: 0, y: 0, width: Double(SettingsModel.model.width), height: Double(SettingsModel.model.height))

    // depth pass descriptor
    let renderPassDescriptor = MTLRenderPassDescriptor()

    let depthDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: Int(SettingsModel.model.width), height: Int(SettingsModel.model.height), mipmapped: false)
    let depthTex = scnView!.device!.makeTexture(descriptor: depthDescriptor)
    depthTex.label = "Depth Texture"
    renderPassDescriptor.depthAttachment.texture = depthTex
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    renderPassDescriptor.depthAttachment.storeAction = .store

    let commandBuffer = commandQueue.makeCommandBuffer()

    scnRenderer.scene = scene
    scnRenderer.pointOfView = scnView.pointOfView!

    scnRenderer.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)

    // copy the depth texture into a buffer the CPU can read
    // (depth32Float is 4 bytes per pixel)
    let depthImageBuffer = scnView!.device!.makeBuffer(length: depthTex.width * depthTex.height * 4, options: .storageModeShared)
    depthImageBuffer.label = "Depth Buffer"
    let blitCommandEncoder = commandBuffer.makeBlitCommandEncoder()
    blitCommandEncoder.copy(from: depthTex, sourceSlice: 0, sourceLevel: 0, sourceOrigin: MTLOriginMake(0, 0, 0), sourceSize: MTLSizeMake(depthTex.width, depthTex.height, 1), to: depthImageBuffer, destinationOffset: 0, destinationBytesPerRow: 4 * depthTex.width, destinationBytesPerImage: 4 * depthTex.width * depthTex.height)
    blitCommandEncoder.endEncoding()

    commandBuffer.addCompletedHandler { _ in
        // contents() already returns an UnsafeMutableRawPointer; just bind it to Float
        let typedPointer = depthImageBuffer.contents().assumingMemoryBound(to: Float.self)
        self.currentMap = Array(UnsafeBufferPointer(start: typedPointer, count: depthTex.width * depthTex.height))
    }

    commandBuffer.commit()

}

This works. I get depth values between 0 and 1. The problem is that I can't use them in unprojectPoint, because they don't appear to be scaled the same way as the initial pass, despite using the same SCNScene and SCNCamera.

My questions:

  1. Is there a way to get the depth values from SceneKit SCNView's main pass directly, without doing an extra pass with a separate SCNRenderer?

  2. Why don't the depth values from my pass match the values I get from doing a hit test and then unprojecting? The depth values from my pass range from 0.78 to 0.94, while the hit-test depth values range from 0.89 to 0.97 (which, oddly enough, matches the scene's OpenGL depth values when I render it in Python).

  3. My hunch is that this is a difference in viewports, and that SceneKit is doing something to scale the depth values from -1 to 1, just like OpenGL.
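For what it's worth, the two conventions in that hunch differ only by an affine remap. A minimal sketch of the standard OpenGL relationship (assuming the default glDepthRange; Metal's NDC z is already [0, 1], so no remap is needed there):

```c
/* OpenGL viewport transform: NDC z in [-1, 1] -> window depth in [0, 1],
   and its inverse. Metal NDC z is already in [0, 1]. */
static float glNdcToWindowDepth(float zNdc) { return zNdc * 0.5f + 0.5f; }
static float windowDepthToGlNdc(float zWin) { return zWin * 2.0f - 1.0f; }
```

Notably, this remap takes the 0.78–0.94 buffer range from question 2 exactly onto the 0.89–0.97 hit-test range (0.78 × 0.5 + 0.5 = 0.89 and 0.94 × 0.5 + 0.5 = 0.97), which is consistent with the hunch that the two passes are using different depth conventions.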

    Edit: in case you're wondering, I can't use the hitTest method directly. It's too slow for what I'm trying to achieve.

2 answers:

Answer 0 (score: 1):

As a workaround, I switched to OpenGL ES and read the depth buffer by adding a fragment shader (via SCNShadable) that packs the depth value into the RGBA renderbuffer.

See here for more information: http://concord-consortium.github.io/lab/experiments/webgl-gpgpu/webgl.html

I know this is a valid approach, since it's used often for shadow mapping on OpenGL ES devices and WebGL, but it feels hacky to me and I shouldn't have to do it. I'd still be interested in another answer if someone can figure out Metal's viewport transformation.

Answer 1 (score: 0):

SceneKit uses a reverse-Z, log-scale depth buffer by default. You can disable reverse-Z easily enough (scnView.usesReverseZ = false), but taking the log depth to a linearly distributed [0, 1] range requires access to the depth buffer plus the values of the far and near clipping planes. Here is the procedure for taking a non-reversed z log depth to a linearly distributed depth in [0, 1]:

float delogDepth(float depth, float nearClip, float farClip) {
    // The depth buffer is in Log Format. Probably a 24bit float depth with 8 for stencil.
    // https://outerra.blogspot.com/2012/11/maximizing-depth-buffer-range-and.html
    // We need to undo the log format.
    // https://stackoverflow.com/questions/18182139/logarithmic-depth-buffer-linearization
    float logTuneConstant = nearClip / farClip;
    float deloggedDepth = ((pow(logTuneConstant * farClip + 1.0, depth) - 1.0) / logTuneConstant) / farClip;
    // The values are going to hover around a particular range. Linearize that distribution.
    // This part may not be necessary, depending on how you will use the depth.
    // http://glampert.com/2014/01-26/visualizing-the-depth-buffer/
    float negativeOneOneDepth = deloggedDepth * 2.0 - 1.0;
    float zeroOneDepth = ((2.0 * nearClip) / (farClip + nearClip - negativeOneOneDepth * (farClip - nearClip)));
    return zeroOneDepth;
}