Browsing by Author "Shen, Chia"
Now showing 1 - 4 of 4
Item Open Access
Exploring True Multi-User Multimodal Interaction over a Digital Table (2007-08-21)
Tse, Edward; Greenberg, Saul; Shen, Chia; Forlines, Clifton; Kodama, Ryo
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model-view-controller architecture for true multi-user multimodal interaction.

Item Open Access
GSI DEMO: Multiuser Gesture / Speech Interaction over Digital Tables by Wrapping Single User Applications (2006-05-18)
Tse, Edward; Greenberg, Saul; Shen, Chia
Most commercial software applications are designed for a single user using a keyboard/mouse over an upright monitor. Our interest is exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want multiple people to interact with the tabletop application and with each other via rich speech and hand gestures. In previous papers, we illustrated multi-user gesture and speech interaction on a digital table for geospatial applications: Google Earth, Warcraft III and The Sims. In this paper, we describe our underlying architecture: GSI DEMO. First, GSI DEMO creates a run-time wrapper around existing single user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration instead of programming to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do [one finger gesture], you do [mouse drag]". Similarly, discrete speech commands can be trained by saying "Computer, when I say [layer bars], you do [keyboard and mouse macro]". The end result is that end users can rapidly transform single user commercial applications into a multi-user, multimodal digital tabletop system.

Item Open Access
MULTIMODAL MULTIPLAYER TABLETOP GAMING (2006-02-23)
Tse, Edward; Greenberg, Saul; Shen, Chia; Forlines, Clifton
There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that give all equal opportunity to reach into the surface and share a common view, (b) rich whole handed gesture input normally only seen when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate to each other and interact atop the surface via gestures and verbal utterances.
Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole hand gestures. We transform two commercial single player computer games, representing the strategy and simulation game genres, to work within this setting.

Item Open Access
Multimodal Split View Tabletop Interaction Over Existing Applications (2007-06-29)
Tse, Edward; Greenberg, Saul; Shen, Chia; Barnwell, John; Shipman, Sam; Leigh, Darren
While digital tables can be used with existing applications, they are typically limited by the one user per computer assumption of current operating systems. In this paper, we explore multimodal split view interaction (a tabletop whose surface is split into two adjacent projected views) that leverages how people can interact with three types of existing applications in this setting. Independent applications let people see and work on separate systems. Shared screens let people see a twinned view of a single user application. True groupware lets people work in parallel over large digital workspaces. Atop these, we add multimodal speech and gesture interaction capability to enhance interpersonal awareness during loosely coupled work.
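
The GSI DEMO entry above describes a run-time wrapper that maps trained speech phrases and gestures from multiple people onto a single stream of keyboard and mouse inputs for an unmodified single user application. The following is a minimal Python sketch of that mapping-by-demonstration idea only; it is not the GSI DEMO code, and the MappingTable class, the event names, and the placeholder inject_* functions are assumptions made for illustration (a real wrapper would synthesize OS-level input events).

from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Placeholder for OS-level input injection: here we only record the synthetic
# keyboard/mouse events that a real wrapper would send to the application.
synthesized_events: List[str] = []

def inject_mouse_drag(x: int, y: int) -> None:
    synthesized_events.append(f"mouse drag to ({x}, {y})")

def inject_macro(name: str) -> None:
    synthesized_events.append(f"keyboard/mouse macro: {name}")

@dataclass
class MappingTable:
    """Maps trained speech phrases and gesture names to keyboard/mouse actions,
    in the spirit of 'Computer, when I do X, you do Y' training by demonstration."""
    speech: Dict[str, Callable[[], None]] = field(default_factory=dict)
    gestures: Dict[str, Callable[[int, int], None]] = field(default_factory=dict)

    def train_speech(self, phrase: str, action: Callable[[], None]) -> None:
        self.speech[phrase.lower()] = action

    def train_gesture(self, gesture: str, action: Callable[[int, int], None]) -> None:
        self.gestures[gesture] = action

    def on_speech(self, user: str, phrase: str) -> None:
        # Recognized phrases from every user funnel into one input stream.
        action = self.speech.get(phrase.lower())
        if action:
            action()

    def on_gesture(self, user: str, gesture: str, x: int, y: int) -> None:
        action = self.gestures.get(gesture)
        if action:
            action(x, y)

if __name__ == "__main__":
    table = MappingTable()
    # "Computer, when I say [layer bars], you do [keyboard and mouse macro]"
    table.train_speech("layer bars", lambda: inject_macro("toggle bar layer"))
    # "Computer, when I do [one finger gesture], you do [mouse drag]"
    table.train_gesture("one finger", inject_mouse_drag)

    table.on_speech(user="alice", phrase="Layer Bars")
    table.on_gesture(user="bob", gesture="one finger", x=320, y=240)
    print(synthesized_events)

Running the example records one macro event and one mouse-drag event, illustrating how commands trained by different users collapse into the single keyboard/mouse stream the wrapped application expects.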