This uses the Microsoft.Research.Kinect.Nui.SkeletonEngine class, built into the Kinect for Windows library, and the following method:
public Vector DepthImageToSkeleton (
float depthX,
float depthY,
short depthValue
)
This method maps a depth-image pixel to a vector in real-world (skeleton-space) coordinates produced by the Kinect.
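Note that depthX and depthY are expected as fractions of the image dimensions rather than raw pixel indices. A minimal illustration of that normalization, written in Python rather than C# just to keep it standalone:

```python
def normalize_pixel(x, y, width, height):
    # DepthImageToSkeleton takes depthX/depthY in the 0-1 range,
    # i.e. the pixel index divided by the image dimensions.
    return x / width, y / height

# Centre pixel of a 320x240 depth frame maps to (0.5, 0.5).
print(normalize_pixel(160, 120, 320, 240))  # (0.5, 0.5)
```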
From there (in a project I built in the past), after enumerating through the byte array that makes up the depth image bitmap, you create a new list of Vector points similar to the following:
// maximumDepth and nui come from the surrounding code:
// nui is the initialized Runtime instance, maximumDepth a cutoff in mm.
var width = image.Image.Width;
var height = image.Image.Height;
var greyIndex = 0;
var points = new List<Vector>();
for (var y = 0; y < height; y++)
{
    for (var x = 0; x < width; x++)
    {
        short depth;
        switch (image.Type)
        {
            case ImageType.DepthAndPlayerIndex:
                // The low 3 bits hold the player index, so the 13-bit
                // depth value starts at bit 3 of the little-endian pair.
                depth = (short)((image.Image.Bits[greyIndex] >> 3) | (image.Image.Bits[greyIndex + 1] << 5));
                if (depth <= maximumDepth)
                {
                    // DepthImageToSkeleton expects 0-1 normalized coordinates
                    // and the depth value in its packed (shifted) form.
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)x / width, (float)y / height, (short)(depth << 3)));
                }
                break;
            case ImageType.Depth: // depth comes back mirrored
                // Plain depth frames are a little-endian 16-bit value.
                depth = (short)(image.Image.Bits[greyIndex] | (image.Image.Bits[greyIndex + 1] << 8));
                if (depth <= maximumDepth)
                {
                    // Flip x to undo the mirroring.
                    points.Add(nui.SkeletonEngine.DepthImageToSkeleton((float)(width - x - 1) / width, (float)y / height, (short)(depth << 3)));
                }
                break;
        }
        greyIndex += 2; // two bytes per depth pixel
    }
}
Doing this, the end result is a list of vectors stored in metres; multiply by 100 if you want centimetres, and so on.
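The bit-shifting in the two cases above just undoes how each frame type packs its bytes. Translated into standalone Python for illustration (the byte values here are made up, not from a real sensor):

```python
def depth_with_player_index(low, high):
    # DepthAndPlayerIndex frames keep a 3-bit player index in the low
    # bits, so the 13-bit depth starts at bit 3 of the byte pair.
    return (low >> 3) | (high << 5)

def depth_only(low, high):
    # Plain Depth frames are a little-endian 16-bit value in millimetres.
    return low | (high << 8)

# A depth of 1000 mm with player index 2 would be packed as:
packed = (1000 << 3) | 2
low, high = packed & 0xFF, packed >> 8
print(depth_with_player_index(low, high))  # 1000
print(depth_only(1000 & 0xFF, 1000 >> 8))  # 1000
```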
Thanks Lewis! That's exactly what I was after, although I still don't understand the bit-shifting business needed to get the depth value. – 2012-01-10 02:53:13
I believe the DepthImageToSkeleton method has been refactored into MapDepthToSkeletonPoint on the KinectSensor object – 2012-07-03 18:52:50
FYI: http://arena.openni.org/OpenNIArena/Applications/ViewApp.aspx?app_id=426 – EdgarT 2012-09-01 00:01:02