Browsing by Author "Genest, A. M."
Now showing 1 - 2 of 2
- Item (Metadata only)
  Evaluating the Effectiveness of Height Visualizations for Improving Gestural Communication at Distributed Tables (ACM, 2012). Genest, A. M.; Gutwin, C.
  In co-located collaboration, people use the space above the table for deictic gestures, and height is an important part of these gestures. However, when collaborators work at distributed tables, we know little about how to convey information about gesture height. A few visualizations have been proposed, but these have not been evaluated in detail. To better understand how remote embodiments can show gesture height, we developed several visualizations and evaluated them in three studies. First, we show that touch visualizations significantly improve people's accuracy in identifying the type and target of a gesture. Second, we show that visualizations of height above the table help to convey gesture qualities such as confidence, emphasis, and specificity. Third, we show that people quickly make use of height visualizations in realistic collaborative tasks, and that height-enhanced embodiments are strongly preferred. Our work illustrates several designs for effective visualization of height, and provides the first comprehensive evidence of the value of height information as a way to improve gestural communication in distributed tabletop groupware.
- Item (Metadata only)
  KinectArms: a Toolkit for Capturing and Displaying Arm Embodiments in Distributed Tabletop Groupware (ACM, 2013). Genest, A. M.; Gutwin, C.; Tang, A.; Kalyn, M.; Ivkovic, Z.
  Gestures are a ubiquitous part of human communication over tables, but when tables are distributed, gestures become difficult to capture and represent. There are several problems: extracting arm images from video, representing the height of the gesture, and making the arm embodiment visible and understandable at the remote table. Current solutions to these problems are often expensive, complex to use, and difficult to set up. We have developed a new toolkit - KinectArms - that quickly and easily captures and displays arm embodiments. KinectArms uses a depth camera to segment the video and determine gesture height, and provides several visual effects for representing arms, showing gesture height, and enhancing visibility. KinectArms lets designers add rich arm embodiments to their systems without undue cost or development effort, greatly improving the expressiveness and usability of distributed tabletop groupware.
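The KinectArms abstract notes that a depth camera is used to segment arms from the video and to determine gesture height. The toolkit's actual implementation is not shown here, but the general idea can be illustrated with a minimal depth-thresholding sketch: any pixel sufficiently closer to the camera than the known table surface is treated as part of an arm, and the arm's peak height is the gap between its nearest point and the table. The function name, the margin parameter, and the synthetic frame below are illustrative assumptions, not KinectArms code.

```python
import numpy as np

def segment_arm(depth_mm, table_depth_mm, margin_mm=15.0):
    """Illustrative sketch (not the KinectArms implementation).

    depth_mm: 2D array of per-pixel depth readings in millimetres from the
    camera, looking down at the table. Pixels more than `margin_mm` closer
    to the camera than the table surface are treated as an arm hovering
    above it. Returns (boolean arm mask, peak height above the table in mm).
    """
    arm_mask = depth_mm < (table_depth_mm - margin_mm)
    if not arm_mask.any():
        return arm_mask, 0.0
    # The arm's highest point is the pixel nearest the camera.
    height_mm = float(table_depth_mm - depth_mm[arm_mask].min())
    return arm_mask, height_mm

# Synthetic frame: table surface at 1000 mm, with a hypothetical
# 20x30-pixel "hand" patch hovering 80 mm above the table.
frame = np.full((120, 160), 1000.0)
frame[40:60, 50:80] = 920.0
mask, height = segment_arm(frame, table_depth_mm=1000.0)
```

In practice a system like the one described would also need to handle sensor noise, calibrate the table plane rather than assume a single depth value, and extract the colour image under the mask to render the remote embodiment; the threshold step above only conveys why a depth camera makes the segmentation and height problems tractable.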