I'm trying to write an augmented reality app using SceneKit, and I need an accurate 3D point from the current rendered frame, given a 2D pixel and a depth, using SCNSceneRenderer's unprojectPoint method. That requires an x, y, and z, where x and y are pixel coordinates and z is normally the value read from the depth buffer for that frame.
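For reference, the call I ultimately want to make looks roughly like this (a sketch; currentMap is the depth map captured in the code below, and (x, y) is the pixel of interest):

// Sketch: unproject pixel (x, y) using a depth value sampled from my map.
// The map is row-major, matching the bytesPerRow used in the blit below.
let z = currentMap[y * Int(SettingsModel.model.width) + x]
let worldPoint = scnView!.unprojectPoint(SCNVector3(x: Float(x), y: Float(y), z: z))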
My SCNView's delegate has this method to render the depth frame:
func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
    renderDepthFrame()
}
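Not shown above: the separate renderer and command queue the depth pass relies on. Mine are created roughly like this (a sketch; the property names are my own):

// Offscreen SCNRenderer sharing the view's Metal device, plus a command
// queue for the depth pass and storage for the depth map it produces.
lazy var scnRenderer = SCNRenderer(device: scnView!.device, options: nil)
lazy var commandQueue: MTLCommandQueue = scnView!.device!.makeCommandQueue()!
var currentMap = [Float]()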
func renderDepthFrame() {
    // Set up our viewport
    let viewport: CGRect = CGRect(x: 0, y: 0, width: Double(SettingsModel.model.width), height: Double(SettingsModel.model.height))

    // Depth pass descriptor
    let renderPassDescriptor = MTLRenderPassDescriptor()
    let depthDescriptor: MTLTextureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float, width: Int(SettingsModel.model.width), height: Int(SettingsModel.model.height), mipmapped: false)
    depthDescriptor.usage = .renderTarget // required so the texture can be used as a render attachment
    let depthTex = scnView!.device!.makeTexture(descriptor: depthDescriptor)!
    depthTex.label = "Depth Texture"
    renderPassDescriptor.depthAttachment.texture = depthTex
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    renderPassDescriptor.depthAttachment.storeAction = .store

    // Re-render the scene from the view's camera into the depth-only pass
    let commandBuffer = commandQueue.makeCommandBuffer()!
    scnRenderer.scene = scene
    scnRenderer.pointOfView = scnView!.pointOfView!
    scnRenderer.render(atTime: 0, viewport: viewport, commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)

    // Set up our depth buffer so the CPU can access it
    let depthImageBuffer: MTLBuffer = scnView!.device!.makeBuffer(length: depthTex.width * depthTex.height * 4, options: .storageModeShared)!
    depthImageBuffer.label = "Depth Buffer"
    let blitCommandEncoder: MTLBlitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
    blitCommandEncoder.copy(from: depthTex,
                            sourceSlice: 0,
                            sourceLevel: 0,
                            sourceOrigin: MTLOriginMake(0, 0, 0),
                            sourceSize: MTLSizeMake(Int(SettingsModel.model.width), Int(SettingsModel.model.height), 1),
                            to: depthImageBuffer,
                            destinationOffset: 0,
                            destinationBytesPerRow: 4 * Int(SettingsModel.model.width),
                            destinationBytesPerImage: 4 * Int(SettingsModel.model.width) * Int(SettingsModel.model.height))
    blitCommandEncoder.endEncoding()

    // Once the GPU finishes, copy the raw floats into a [Float] depth map
    commandBuffer.addCompletedHandler { _ in
        let typedPointer = depthImageBuffer.contents().assumingMemoryBound(to: Float.self)
        self.currentMap = Array(UnsafeBufferPointer(start: typedPointer, count: Int(SettingsModel.model.width) * Int(SettingsModel.model.height)))
    }
    commandBuffer.commit()
}
This works. I get depth values between 0 and 1. The problem is that I can't use them in unprojectPoint, because they don't appear to be scaled the same way as in the initial pass, despite using the same SCNScene and SCNCamera.
My questions:

1. Is there any way to get the depth values from SceneKit's SCNView main pass directly, without doing an extra pass with a separate SCNRenderer?

2. Why don't the depth values from my pass match the values I get from doing a hit test and then unprojecting? The depth values from my pass run from 0.78 to 0.94, while the hit-test values run from 0.89 to 0.97, which, oddly enough, matches the OpenGL depth values I get when I render the scene in Python. (See the sketch below for how I obtain the hit-test values.)
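For reference, this is roughly how I derive a hit-test depth value; a sketch using projectPoint (the inverse of unprojectPoint) on the hit's world coordinates, for some hypothetical screen point p:

// Sketch: project a hit-test result back to screen space; the resulting
// z component is the depth value that unprojectPoint expects in return.
if let hit = scnView!.hitTest(p, options: nil).first {
    let screenPoint = scnView!.projectPoint(hit.worldCoordinates)
    print("hit-test depth:", screenPoint.z) // compare against the blitted depth map
}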
My hunch is that this is a difference in viewports, and that SceneKit is doing something to scale the depth values from -1 to 1, the way OpenGL does.
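If that hunch is right, the remap is the usual OpenGL-to-Metal depth-range conversion, and the ranges quoted above are consistent with it: (0.78 + 1) / 2 = 0.89 and (0.94 + 1) / 2 = 0.97. A sketch of that unverified assumption, for a pixel (x, y):

// Assumption (unconfirmed): treat the depth-pass value as if it sat on an
// OpenGL-style [-1, 1] scale and remap it onto [0, 1] before unprojecting.
let metalDepth = currentMap[y * Int(SettingsModel.model.width) + x]
let remapped = (metalDepth + 1.0) / 2.0
let worldPoint = scnView!.unprojectPoint(SCNVector3(x: Float(x), y: Float(y), z: remapped))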
Edit: In case you're wondering, I can't use the hitTest method directly. It's too slow for what I'm trying to achieve.