Pictured: The SDK block generates a point cloud without traversing the depth image

Version 1.2.1 of the Kinect SDK block for Cinder is the result of a mission to make a rock-solid, high-performance, easy-to-use interface to this device. Point clouds can be generated without reading the depth image. Data is now received through a reliable update/callback system. Skeletons contain more data and can be rendered with a fraction of the code required by previous versions. Initializing and reading multiple devices is a breeze, as is mapping skeleton data to video or depth images. Subtle details like seated mode, near mode, and optional verbose error reporting take further advantage of the Kinect's capabilities while making it easier to use. This block is meant to provide easy and reliable access to the device for both the tinkerer who just wants to prototype ideas and the professional who needs their Kinect(s) to run around the clock.

Pictured: The SDK block provides access to multiple devices, camera tilt, and more

GET KINECTSDK FOR CINDER ON GITHUB

BLOCK + INSTALLATION

Follow these easy steps to install:

1. Download and install the official SDK and drivers here
2. Clone the KinectSdk block from GitHub
3. Unzip the contents into your main Cinder path, merging it with your "blocks" folder
4. Go to "cinder/blocks/KinectSDK/samples/KinectApp/vc10" and open "KinectApp.sln" with Visual Studio C++ Express or Visual Studio 2010
5. Build and run

IMPORTANT! Do NOT move any of the SDK's files into the sample projects. The projects rely on a properly installed Kinect SDK to work.

To create your own project, I recommend starting with the KinectApp sample and building out from there. This will save you the work of setting up linking, DLL copying, etc.
CHANGE LOG

1.2.3: Fixed getDepthAt range -- it is always 0 - 1 now
1.2.2: Dropped template argument from add<Type>Callback() method
1.2.2: Removed second add<Type>Callback() signature
1.2.1: Added ContoursApp sample
1.2.1: Added DeviceOptions::enableUserTracking() to toggle user index flag (set to false when using multiple devices)
1.2.0: Switched to callback system for reduced CPU usage and improved stability
1.1.9: Works with Kinect SDK 1.5.0
1.1.9: New sample app demonstrating how to map skeletons to color and depth images
1.1.9: New sample app demonstrating how to run up to eight concurrent devices
1.1.9: Improved depth accuracy
1.1.9: New method to read depth values as floats from 0.0 (near) to 1.0 (far) without copying or referencing the depth image
1.1.9: Helper methods to align skeleton positions to depth and color images
1.1.9: Query device index and unique ID
1.1.9: Initialize device by unique ID (ensures device order when using multiple Kinects)
1.1.9: Improved frame rate
1.1.9: Improved stability
1.1.9: New user colors
1.1.9: New depth colors -- the red channel always contains the raw depth value, and values are ordered from 0 (near) to 4096 (far)
1.1.9: Better support for high resolution modes
1.1.9: Smarter initialization
1.1.9: Verbose error reporting
1.1.9: Visual Studio solution for all projects (VS10 Pro+ only)
1.1.9: Depth image surface channel order is now RGB, not RGBA (alpha was unused)
1.1.9: Handles physical disconnections / reconnections (read note below)
1.1.9: Better, VS10/VS Assist-friendly inline documentation
1.1.3: Updated to work with Kinect SDK 126.96.36.199
1.1.3: Updated for use with Cinder 0.8.4
1.1.2: Reduced CPU usage
1.1.2: Fixed an issue where low frame rates occurred when getting user count
1.1.1: Added "near mode"
1.1.1: Improved threading
1.1.0: Added support for stable SDK 1.0
1.1.0: Added static library, implemented in samples
1.1.0: Added "MeshApp" sample
1.0.0: Works with the new Kinect SDK 1.0 beta 2
1.0.0: Support for up to eight connected devices
1.0.0: Camera tilt
1.0.0: Greyscale mode
1.0.0: Improved performance and stability
1.0.0: Added PointCloudGpu sample application
1.0.0: Block arranged to meet new CinderBlock guidelines and improve portability
1.0.0: Includes improved AudioInput class for Windows
1.0.0: Block and samples exit with zero memory remaining
1.0.0: Fixed bug with high resolution modes

Read about the 0.0.x versions here.

ON MULTIPLE DEVICES

A Kinect uses 61% of the bandwidth of a USB 2.0 controller. That may seem high, but note that each device contains a 1280x960 RGB camera, a 640x480 monochrome camera, a sound card with multiple microphones, and two motors (neck and IR emitter). Because two devices would exceed a single controller's capacity, you need a separate USB controller for each device. Before you get frustrated with an additional device not working, make sure you have the bandwidth to support it.
Great! I'm getting into the Kinect and its capabilities, and I think this is a great way to start a personal project. If anything fails, I'll note it in my comments.
Hello, I've gotten it working, but I can't play the video or the depth stream. In a previous post I read that an SDK problem exists. Is there a solution? Thanks.
Hey, sorry to remove your comments, but they were breaking the formatting on this page pretty badly (need to fix that). The latest block is meant to work with Cinder 0.8.4, which is about to be released. It uses Boost 1.48 rather than 1.44. In the meantime, you can get it from GitHub here: https://github.com/cinder/Cinder Check out this guide for setting up Cinder and Boost: http://libcinder.org/docs/welcome/GitSetup.html When the Cinder 0.8.4 package is officially released, it will come with the Boost dependencies.
Hello Stephen. Sorry for the trouble my comment caused, that was never intended :) I'll check your links when I get home and post the results here. Thank you!
Hi Stephen, Thank you for this great wrapper. I had no trouble building and running it. However, I found an issue. I use two Kinects, but every time I switched to the second Kinect, no images were displayed; only audio input came through. I tried running the Kinect SDK sample from Microsoft and it managed to show images from both Kinects. Is there anything I can add to the code to fix this? Cheers,
@Didit This is a known issue which will be resolved after updating the block to the May 2012 release of the SDK, which I'm doing over the next couple days. You should be able to run two devices simultaneously right now, though. Just make sure they are each on their own USB controller.
Ah, I see. Thanks for clearing that up. So, are you saying that both devices are actually working, and I just couldn't see the image?
Right. You can actually run both devices, but a USB 2.0 controller only has the bandwidth to receive one stream.
Hi, I'm using the Kinect SDK to capture the face. I need to get a result like the "input scans" these guys get in this video at 01:05: http://www.youtube.com/watch?v=8kbPhG3y8ts Do you know how I can get such input scans using the Kinect for Xbox 360 and the Kinect SDK? Thanks in advance :)