Browsing by Author "Tse, Edward"
Now showing 1 - 12 of 12
Item Open Access: Employing Usability, Efficiency and Evolvability in the CEXI Toolkit (2005-04-29). Tse, Edward.
Computer displays are expanding beyond the upright desktop and towards personal devices such as Tablet PCs and large public displays (such as walls and tables). These different form factors require researchers to develop suitable interaction techniques. The fundamental problem is that existing development environments assume that everyone will be using a mouse for all pointing input. Thus most applications are unable to take advantage of the extra features provided by novel input devices, such as the point sizes reported by the Smart Technologies DViT board. Most input device developers provide Software Development Kits (SDKs) written with legacy C++ code, and different SDKs provide different APIs, making it hard to port code written for one input device to another. This paper describes the Centralized External Input (CEXI) toolkit, a toolkit that supports the rapid prototyping of applications with a variety of novel input devices. Since this is a third generation tool, I wanted the toolkit to be usable, efficient and evolvable: three lessons (or patterns) gleaned from my experiences and the experiences of other toolkit developers. To make the toolkit API easy to use, I limit the assumptions made in the API; for example, I do not expect programmers to traverse an object oriented class hierarchy of different input events, but instead provide all the important event information in a single monolithic event argument. To make the toolkit efficient, I use event queueing in the control panel to control the rate of events per second, and I use quenching in both the input forwarder and the client to ensure that they receive only the information they are interested in. Finally, I make the toolkit evolvable by making the source code available and making it easy for third parties to develop their own input forwarders.

Item Open Access: Exploring True Multi-User Multimodal Interaction over a Digital Table (2007-08-21). Tse, Edward; Greenberg, Saul; Shen, Chia; Forlines, Clifton; Kodama, Ryo.
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design space through a case study, where we implemented an application that supports the KJ creativity method as used by industrial designers. Four key design issues emerged that have a significant impact on how people would use such a multi-user multimodal system. First, parallel work is affected by the design of multimodal commands. Second, individual mode switches can be confusing to collaborators, especially if speech commands are used. Third, establishing personal and group territories can hinder particular tasks that require artefact neutrality. Finally, timing needs to be considered when designing joint multimodal commands. We also describe our model-view-controller architecture for true multi-user multimodal interaction.

Item Open Access: GSI DEMO: Multiuser Gesture / Speech Interaction over Digital Tables by Wrapping Single User Applications (2006-05-18). Tse, Edward; Greenberg, Saul; Shen, Chia.
Most commercial software applications are designed for a single user using a keyboard/mouse over an upright monitor. Our interest is exploiting these systems so they work over a digital table. Mirroring what people do when working over traditional tables, we want multiple people to interact with the tabletop application and with each other via rich speech and hand gestures. In previous papers, we illustrated multi-user gesture and speech interaction on a digital table for geospatial applications (Google Earth, Warcraft III and The Sims). In this paper, we describe our underlying architecture: GSI DEMO. First, GSI DEMO creates a run-time wrapper around existing single user applications: it accepts and translates speech and gestures from multiple people into a single stream of keyboard and mouse inputs recognized by the application. Second, it lets people use multimodal demonstration instead of programming to quickly map their own speech and gestures to these keyboard/mouse inputs. For example, continuous gestures are trained by saying "Computer, when I do [one finger gesture], you do [mouse drag]". Similarly, discrete speech commands can be trained by saying "Computer, when I say [layer bars], you do [keyboard and mouse macro]". The end result is that end users can rapidly transform single user commercial applications into a multi-user, multimodal digital tabletop system.
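The GSI DEMO entry above describes translating multi-user speech and gestures into the single keyboard/mouse stream an unmodified application expects, with the mappings trained by demonstration. The following is a minimal sketch of that translation step; the event fields, macro format, and function names are illustrative assumptions, not GSI DEMO's actual API.

```python
# Minimal sketch (not GSI DEMO's actual API): map recognized speech/gesture
# tokens onto keyboard/mouse actions, the way a run-time wrapper might after
# a user trains it by demonstration.
from dataclasses import dataclass

@dataclass
class RecognizedEvent:
    user: str        # which collaborator produced the event
    modality: str    # "speech" or "gesture"
    token: str       # e.g. "layer bars" or "one finger"
    x: float = 0.0   # table coordinates, meaningful for gestures
    y: float = 0.0

# Hypothetical mappings recorded during demonstration, e.g. after the user says
# "Computer, when I say [layer bars], you do [keyboard and mouse macro]".
trained_macros = {
    "layer bars": ["key L", "key B"],            # discrete speech -> keyboard macro
    "one finger": ["mouse down", "mouse drag"],  # continuous gesture -> mouse drag
}

def translate(event: RecognizedEvent) -> list[str]:
    """Collapse one person's speech/gesture into single-user keyboard/mouse actions."""
    actions = trained_macros.get(event.token, [])
    if event.modality == "gesture":
        # Gestures also carry the table position the mouse action should target.
        actions = [f"{a} @ ({event.x:.0f}, {event.y:.0f})" for a in actions]
    return actions

print(translate(RecognizedEvent("Alice", "speech", "layer bars")))
print(translate(RecognizedEvent("Bob", "gesture", "one finger", x=320, y=140)))
```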
Item Open Access: How People Partition Workspaces in Single Display Groupware (2003-10-27). Tse, Edward; Histon, Jonathan; Scott, Stacey; Greenberg, Saul.
Single Display Groupware (SDG) lets multiple people, each with their own input device, interact simultaneously on a single display. With two or more people potentially working in the same or nearby areas of the display, the actions of one could interfere with others, e.g., by raising menus and bringing tool palettes into areas where others are working. Interaction techniques could be used to mitigate the interference; however, other approaches might be more suitable if collaborators were to naturally partition their workspace into distinct areas when working on a particular task. To determine the realistic potential for interference, we investigated people performing a set of collaborative drawing exercises in a co-located setting, paying particular attention to the locations of their interactions in the shared workspace. We saw that spatial division occurred consistently and naturally across all participants, rarely requiring any verbal negotiation. Particular divisions of the space varied, influenced by seating position and image semantics. These results have several implications for the design of SDG workspaces, including the consideration of people's seating positions at the display, the use of moveable Local Tools and in-context menus, and the use of dynamic transparency to mitigate interference.

Item Open Access: Interacting with Stroke-Based Rendering on a Wall Display (2007-10-19). Grubert, Jens; Hancock, Mark; Carpendale, Sheelagh; Tse, Edward; Isenberg, Tobias.
We introduce two new interaction techniques for creating and interacting with non-photorealistic images using stroke-based rendering. We provide bimanual control of a large interactive canvas through both remote pointing and direct touch. Remote pointing allows people to sit and interact at a distance with an overview of the entire display, while direct-touch interaction provides more precise control. We performed a user study to compare these two techniques in both a controlled setting with constrained tasks and an exploratory setting where participants created their own painting. We found that, although the direct-touch interaction outperformed remote pointing, participants had mixed preferences and did not consistently choose one or the other to create their own painting. Some participants also chose to switch between techniques to achieve different levels of precision and control for different tasks.
Item Open Access: MULTIMODAL MULTIPLAYER TABLETOP GAMING (2006-02-23). Tse, Edward; Greenberg, Saul; Shen, Chia; Forlines, Clifton.
There is a large disparity between the rich physical interfaces of co-located arcade games and the generic input devices seen in most home console systems. In this paper we argue that a digital table is a conducive form factor for general co-located home gaming as it affords: (a) seating in collaboratively relevant positions that gives all players an equal opportunity to reach into the surface and share a common view, (b) rich whole handed gesture input normally only seen when handling physical objects, (c) the ability to monitor how others use space and access objects on the surface, and (d) the ability to communicate with each other and interact atop the surface via gestures and verbal utterances. Our thesis is that multimodal gesture and speech input benefits collaborative interaction over such a digital table. To investigate this thesis, we designed a multimodal, multiplayer gaming environment that allows players to interact directly atop a digital table via speech and rich whole hand gestures. We transform two commercial single player computer games, representing a strategy and a simulation game genre, to work within this setting.

Item Open Access: Multimodal Split View Tabletop Interaction Over Existing Applications (2007-06-29). Tse, Edward; Greenberg, Saul; Shen, Chia; Barnwell, John; Shipman, Sam; Leigh, Darren.
While digital tables can be used with existing applications, they are typically limited by the one-user-per-computer assumption of current operating systems. In this paper, we explore multimodal split view interaction: a tabletop whose surface is split into two adjacent projected views, which lets us examine how people can interact with three types of existing applications in this setting. Independent applications let people see and work on separate systems. Shared screens let people see a twinned view of a single user application. True groupware lets people work in parallel over large digital workspaces. Atop these, we add multimodal speech and gesture interaction capability to enhance interpersonal awareness during loosely coupled work.

Item Open Access: RAPIDLY PROTOTYPING SINGLE DISPLAY GROUPWARE THROUGH THE SDGTOOLKIT (2003-04-15). Tse, Edward; Greenberg, Saul.
Researchers in Single Display Groupware (SDG) explore how multiple users share a single display such as a computer monitor, a large wall display, or an electronic tabletop display. Yet today's personal computers are designed with the assumption that one person interacts with the display at a time. Thus researchers and programmers face considerable hurdles if they wish to develop SDG. Our solution is the SDGToolkit, a toolkit for rapidly prototyping SDG. SDGToolkit automatically captures and manages multiple mice and keyboards, and presents them to the programmer as uniquely identified input events relative to either the whole screen or a particular window. It transparently provides multiple cursors, one for each mouse. To handle orientation issues for tabletop displays (i.e., people seated across from one another), programmers can specify a participant's seating angle, which automatically rotates the cursor and translates input coordinates so the mouse behaves correctly. Finally, SDGToolkit provides an SDG-aware widget class layer that significantly eases how programmers create novel graphical components that recognize and respond to multiple inputs.
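The SDGToolkit entry above mentions rotating cursor movement by a participant's seating angle so input behaves correctly no matter which side of the table a person sits on. Below is a rough sketch of such a rotation; it is illustrative only, and the toolkit's real API and coordinate conventions may differ.

```python
# Illustrative sketch only: rotate a raw mouse delta by a participant's seating
# angle so "push away from me" maps to the same table direction for everyone.
import math

def rotate_delta(dx: float, dy: float, seat_angle_deg: float) -> tuple[float, float]:
    """Rotate a (dx, dy) mouse movement into shared table coordinates."""
    a = math.radians(seat_angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# Someone seated across the table (180 degrees) pushes the mouse forward (0, -10);
# after rotation the cursor moves in the opposite screen direction, roughly (0, 10).
print(rotate_delta(0, -10, 180))
```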
Item Open Access: A SOFTWARE TOOL TO GREATLY REDUCE THE INSTRUCTIONAL TIME NEEDED TO IMPLEMENT THE SCIENCE GENIUS RAP PROGRAMME (2016-12-20). Fakourfar, Omid; Tse, Edward; Tang, Anthony; Boyle, Michael.
Urban youth of color from low socioeconomic status are generally known to be less engaged in STEM in the United States. On the other hand, the same demographic is usually engaged in hip-hop culture and rap music. Emdin et al. (2016) have introduced a 12-week teaching model through a hip-hop based science programme which encourages students to come up with hip-hop songs by connecting their everyday lives to scientific concepts. This method has shown considerable promise: students have used it mainly as a way of disclosing their emotions while learning scientific concepts at the same time. However, the length of this programme could dissuade teachers from adopting this method. In this work, we introduce a software tool that facilitates the same process and achieves many of its outcomes all within a single instructional period, i.e., an hour.

Item Open Access: Speech-Filtered Bubble Ray: Improving Target Acquisition on Display Walls (2007-06-29). Tse, Edward; Hancock, Mark; Greenberg, Saul.
The rapid development of large interactive wall displays has been accompanied by research on methods that allow people to interact with the display at a distance. The basic method for target acquisition is ray casting a cursor from one's pointing finger or hand position; the problem is that selection is slow and error-prone with small targets. A better method is the bubble cursor, which resizes the cursor's activation area to effectively enlarge the target size. The catch is that this technique's effectiveness depends on the proximity of surrounding targets: while beneficial in sparse spaces, it is less so when targets are densely packed together. Our method is the speech-filtered bubble ray, which uses speech to transform a dense target space into a sparse one. Our strategy builds on what people already do: people pointing to distant objects in a physical workspace typically disambiguate their choice through speech. For example, a person could point to a stack of books and say "the green one". Gesture indicates the approximate location for the search, and speech filters unrelated books from the search. Our technique works the same way; a person specifies a property of the desired object, and only the locations of objects matching that property trigger the bubble size. In a controlled evaluation, people were faster and preferred using the speech-filtered bubble ray over the standard bubble ray and ray casting approach.

Item Open Access: Supporting Lightweight Customization for Meeting Environments (2005-04-29). Tse, Edward; Greenberg, Saul.
Digital wall-sized displays commonly support authoring and presentation in face to face meetings. Yet most meeting applications show not only meeting content (i.e., the material being developed) but authoring tools as well: the usual controls, palettes, and menus. Attendees are distracted when the author navigates the (usually complex) interface as part of the authoring process, and the tools themselves unnecessarily clutter the display. The problem is that current customization techniques are not suited for meeting environments, as complex customization interfaces take attention away from the meeting agenda, making customization a socially unacceptable practice. In this paper, we present the solution of lightweight customization, a customization technique designed to minimize time and cognitive effort. This paper illustrates lightweight customization through two implementations. First, customized views provide a scribe with full application functionality while presenting the important presentation content to the other meeting collaborators on a secondary projected display. Second, customized interfaces allow meeting collaborators to rapidly recall previous functionality and build customized interfaces through a history of previous actions.
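The Speech-Filtered Bubble Ray entry above describes pruning a dense target space with speech before the enlarged bubble cursor picks a target. The sketch below shows only that filtering idea; the target records, attribute names, and selection rule are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: filter targets by a spoken property, then let a
# (simplified) bubble pick the nearest remaining target to the ray's hit point.
import math

targets = [
    {"name": "report", "color": "green", "pos": (120, 340)},
    {"name": "budget", "color": "red",   "pos": (125, 350)},
    {"name": "photo",  "color": "green", "pos": (400, 80)},
]

def speech_filtered_pick(ray_point, spoken_color):
    """Keep only targets matching the spoken property; select the closest one."""
    candidates = [t for t in targets if t["color"] == spoken_color]
    if not candidates:
        return None
    return min(candidates, key=lambda t: math.dist(ray_point, t["pos"]))

# Pointing near a dense cluster while saying "green" ignores the nearby red target:
print(speech_filtered_pick((123, 345), "green"))  # picks "report"
```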
Item Open Access: Using Aspects to Convert Single User Applications into Multiple User Applications (2005-04-29). Tse, Edward.
This paper details the process of converting a single user application into a multiple user application through the use of Aspect Oriented Programming (AOP). While AOP aims to let developers capture crosscutting concerns (e.g., features that affect different classes and modules of source code), my goal is to treat multiple user functionality as a crosscutting concern that can be easily added to a single user application. The primary contribution of this paper is a detailed account of the issues encountered in the exercise of applying aspects to existing single user applications. Through a detailed analysis of these issues, there is the potential to refine the design of current and future Aspect Oriented tools.
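The entry above uses Aspect Oriented Programming to weave multi-user behaviour into a single user application. Python has no AOP weaver, so the sketch below is only a loose analogue of that idea: wrapping an existing method so a per-user concern runs around every call without editing the original class. All names are hypothetical and not from the paper.

```python
# Loose analogue of an aspect, for illustration only: 'advice' wrapped around an
# existing single-user method so multi-user bookkeeping happens on every call,
# without modifying the original class's source.
import functools

class SingleUserCanvas:
    def draw_stroke(self, points):
        print(f"drawing a stroke with {len(points)} points")

def weave_user_concern(cls, method_name):
    """Replace cls.method_name with a wrapper that tags each call with a user id."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, *args, _user="unknown", **kwargs):
        print(f"[multi-user concern] {_user} called {method_name}")
        return original(self, *args, **kwargs)

    setattr(cls, method_name, wrapper)

weave_user_concern(SingleUserCanvas, "draw_stroke")
SingleUserCanvas().draw_stroke([(0, 0), (5, 5)], _user="Alice")
```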