Using a Kinect Interface to Develop an Interactive 3D Tabletop Display

Cenydd, Llyr ap; Hughes, Chris J.; Walker, Rick; Roberts, Jonathan C.
R. Laramee and I. S. Lim
2011
ISSN 1017-4656
https://doi.org/10.2312/EG2011/posters/041-042

While display technology has advanced significantly in recent years, interaction techniques remain tied to the mouse and keyboard paradigm. Although this is still appropriate for many tasks, allowing systems to recognise and respond to user gestures and motions has enormous potential for natural interaction with virtual media. Traditional methods of pose recognition use cameras to track the position of the user. This is difficult to do accurately in environments where objects may be occluded and lighting conditions can change, and accurately determining the depth of objects in a scene requires a considerably more complicated and carefully calibrated system. In this research we prototyped a 3D tabletop display and explored the Kinect game controller as a possible solution for tracking the pose and gestures of a user interacting with our display.
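
The poster itself does not include code, but the sketch below illustrates the underlying idea: reading the Kinect's depth image directly, rather than relying on a calibrated camera rig, to locate a user's hand held above the display. It assumes the OpenKinect libfreenect Python bindings and NumPy; the choice of driver, the depth cut-off, and the nearest-point heuristic are illustrative assumptions, not the method used in the poster.

# Illustrative sketch only: assumes the OpenKinect libfreenect Python
# bindings and NumPy. The poster does not state which Kinect driver,
# SDK, or tracking method was actually used.
import freenect
import numpy as np

def nearest_point(max_raw_depth=750):
    """Grab one depth frame and return (x, y, raw_depth) for the closest
    valid pixel, e.g. a hand held above the tabletop surface.

    Depth values are the Kinect's raw 11-bit readings (smaller = closer,
    2047 = no reading); max_raw_depth is a hand-tuned cut-off, assumed
    here, that ignores the table surface and background.
    """
    depth, _ = freenect.sync_get_depth()    # (480, 640) uint16 array
    depth = depth.astype(np.float32)
    depth[depth >= 2047] = np.inf            # mask out invalid pixels
    depth[depth > max_raw_depth] = np.inf    # mask out table/background

    if not np.isfinite(depth).any():
        return None                          # nothing in range this frame

    y, x = np.unravel_index(np.argmin(depth), depth.shape)
    return int(x), int(y), float(depth[y, x])

if __name__ == "__main__":
    print("closest point:", nearest_point())

In practice the depth cut-off would be tuned to the height of the tabletop, and the returned pixel coordinates would be mapped into the display's coordinate system before driving any interaction.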