Microsoft Kinect Sandbox

Today we set up one of the undergraduate projects, the Kinect Augmented Reality Sandbox.

Here is the link to the students' blog covering how they built it.

The Kinect normally uses an infrared depth camera to detect people and track their movements, which a game then mirrors on screen.

This depth sensing works on a similar principle to lidar, which you can read more about here.

Here, instead, the Kinect is mounted on a tripod above the sandbox alongside a projector, and its infrared camera measures the depth of the sand. It is connected to a C++ program which, after calibrating the area seen by the camera, maps those depths onto colours projected back down: blue for medium heights, orange for high points and green for deeper points. The colours update in real time as the sand is moved.
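A minimal sketch of the depth-to-colour step described above. The thresholds, the calibrated near/far range, and the function names are all illustrative assumptions, not the project's actual code:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: map one calibrated depth reading (in millimetres)
// to the three bands described in the post. The near/far defaults are
// made-up calibration values, not the sandbox's real ones.
struct Color { uint8_t r, g, b; };

Color depthToColor(uint16_t depthMm, uint16_t nearMm = 800, uint16_t farMm = 1200) {
    // Clamp so readings outside the calibrated range can't produce
    // an out-of-band frame.
    uint16_t d = std::clamp(depthMm, nearMm, farMm);
    // Normalise: 0 = sand closest to the camera (high pile), 1 = deepest.
    float t = float(d - nearMm) / float(farMm - nearMm);
    if (t < 0.33f) return {255, 140, 0};  // orange: high points
    if (t < 0.66f) return {0, 0, 255};    // blue: medium heights
    return {0, 200, 0};                   // green: deeper points
}
```

In the real program this mapping would run per pixel over each depth frame before the result is handed to the projector.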

We managed to set this up and got it working, but we found that particularly quick, repeated movements, or piling the sand much higher, caused the program to break, leaving only a flashing blank screen in one of the three colours. This could be fixed by changing the code to accept a larger range of depths (unless the limitation lies in the Kinect rather than the software) and by users taking more care over the speed of their movements. The Kinect itself may be part of the problem: Microsoft discontinued the line in 2017, and the unit we used is much older still, so it may simply not keep up with such quick movements.
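One software-side mitigation for the fast-movement glitch would be to smooth each pixel's depth over successive frames, so a sudden spike can't jump outside the expected range in a single frame. This is a hedged sketch of that idea; the class name, the blending factor and the flat frame layout are assumptions, not the project's actual fix:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: exponential smoothing of per-pixel depth.
// alpha controls how quickly the smoothed value follows new readings
// (lower = steadier but laggier). 0.3f is an arbitrary example value.
class DepthSmoother {
public:
    DepthSmoother(std::size_t numPixels, float alpha = 0.3f)
        : smoothed_(numPixels, 0.0f), alpha_(alpha), first_(true) {}

    // Blend a new raw depth frame into the running average and return it.
    const std::vector<float>& update(const std::vector<uint16_t>& frame) {
        for (std::size_t i = 0; i < frame.size(); ++i) {
            if (first_) smoothed_[i] = float(frame[i]);           // seed on first frame
            else        smoothed_[i] += alpha_ * (float(frame[i]) - smoothed_[i]);
        }
        first_ = false;
        return smoothed_;
    }

private:
    std::vector<float> smoothed_;
    float alpha_;
    bool first_;
};
```

The smoothed frame would then feed into the colour mapping instead of the raw sensor readings, at the cost of a slight lag when the sand is reshaped.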
