Browsing by Author "Boring, Sebastian"
Now showing 1 - 18 of 18
Item Open Access
Astral: Prototyping Mobile and IoT Interactive Behaviours via Streaming and Input Remapping (2018-07)
Ledo, David; Vermeulen, Jo; Carpendale, Sheelagh; Greenberg, Saul; Oehlberg, Lora A.; Boring, Sebastian

We present Astral, a prototyping tool for mobile and Internet of Things interactive behaviours that streams selected desktop display contents onto mobile devices (smartphones and smartwatches) and remaps mobile sensor data into desktop input events (i.e., keyboard and mouse events). Interactive devices such as mobile phones, watches, and smart objects offer new opportunities for interaction design, yet prototyping their interactive behaviour remains an implementation challenge. Additionally, current tools often focus on systems responding after an action takes place, as opposed to while the action takes place. With Astral, designers can rapidly author interactive prototypes live on mobile devices through familiar desktop applications. Designers can also customize input mappings using easing functions to author, fine-tune and assess rich outputs. We demonstrate the expressiveness of Astral through a set of prototyping scenarios with novel and replicated examples from past literature, which reflect how the system might support and empower designers throughout the design process.

Item Open Access
Body-Centric Interaction: Using the Body as an Extended Mobile Interaction Space (2011-10-17)
Chen, Xiang (Anthony); Tang, Anthony; Boring, Sebastian; Greenberg, Saul

Current mobile devices require a person to navigate and interact with applications and their content via on-screen operations. The problem is that mobility trades off with screen size, providing limited space for interactions. To mitigate this problem, we explore how our body can extend the interaction space of a mobile device. We call this Body-Centric Interaction (BCI), a design space comprised of three dimensions.
First, interactions occur in different proximal spaces on/around/far-from the body. Second, different mapping strategies can associate digital knowledge or interactions with these spaces. Third, various input techniques can help perform such interactions. We make use of this design space to 1) unify existing BCI-related research, and 2) generatively design a set of proof-of-concept prototypes. Overall, we contribute a design space that articulates and envisions how our body can be leveraged to create rich interaction possibilities that extend beyond a mobile device's limited screen space.

Item Metadata only
C4: a creative-coding API for media, interaction and animation (ACM, 2013)
Kirton, Travis; Boring, Sebastian; Baur, Dominikus; MacDonald, Lindsay; Carpendale, Sheelagh

Although there has been widespread proliferation of creative-coding programming languages, the design of many toolkits and application programming interfaces (APIs) for expression and interactivity does not take full advantage of the unique space of mobile multitouch devices. In designing a new API for this space, we first consider five major problem spaces and present an architecture that attempts to address them, moving beyond the low-level manipulation of graphics by giving first-class status to media objects. We present the architecture and design of a new API, called C4, that takes advantage of Objective-C, a powerful yet more complicated lower-level language, while remaining simple and easy to use. We have also designed this API in such a way that the software applications that can be produced are efficient and light on system resources, culminating in a prototyping language suited for the rapid development of expressive mobile applications. The API clearly presents designs for a set of objects that are tightly integrated with the multitouch capabilities of hardware devices.
C4 allows the programmer to work with media as first-class objects; it also provides techniques for easily integrating touch and gestural interaction, as well as rich animations, into expressive interfaces. To illustrate C4 we present simple concrete examples of the API, a comparison of alternative implementation options, performance benchmarks, and two interactive artworks developed by independent artists. We also discuss observations of C4 as it was used during workshops and an extended 4-week residency.

Item Open Access
Dark Patterns in Proxemic Interactions: A Critical Perspective (2014-01-24)
Greenberg, Saul; Boring, Sebastian; Vermeulen, Jo; Dostal, Jakub

Proxemics theory explains people's use of interpersonal distances to mediate their social interactions with others. Within Ubicomp, proxemic interaction researchers argue that people have a similar social understanding of their spatial relations with nearby digital devices, which can be exploited to better facilitate seamless and natural interactions. To do so, both people and devices are tracked to determine their spatial relationships. While interest in proxemic interactions has increased over the last few years, it also has a dark side: the knowledge of proxemics may (and likely will) be easily exploited to the detriment of the user. In this paper, we offer a critical perspective on proxemic interactions in the form of dark patterns (i.e., ways proxemic interactions can be misused). We discuss a series of these patterns and describe how they apply to these types of interactions.
In addition, we identify several root problems that underlie these patterns and discuss potential solutions that could lower their harmfulness.

Item Open Access
Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit (2011-07-12)
Marquardt, Nicolai; Kiemer, Johannes; Ledo, David; Boring, Sebastian; Greenberg, Saul

Recent work in multi-touch tabletop interaction has introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which hand, (2) which part of the hand, (3) which side of the hand, and (4) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that their low-level programming model hinders the way developers could rapidly explore new kinds of user- and handpart-aware interactions. We contribute the TOUCHID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TOUCHID provides an easy-to-use event-driven API. It also provides higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts with handparts, and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize.
We illustrate TOUCHID's expressiveness by showing how we developed a suite of techniques (which we consider a secondary contribution) that exploits knowledge of which handpart is touching the surface.

Item Open Access
The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (2011-12-02)
Boring, Sebastian; Ledo, David; Chen, Xiang (Anthony); Marquardt, Nicolai; Tang, Anthony; Greenberg, Saul

Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom, which can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb based on the thumb's limited operational range and motor skills when that hand holds the device. We compared Fat Thumb to three alternative techniques, where people had to pan and zoom to a predefined region on a map.
Participants performed fastest, with the fewest strokes, using Fat Thumb.

Item Open Access
From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (2012-10-12)
Weigel, Martin; Boring, Sebastian; Marquardt, Nicolai; Steimle, Jurgen; Greenberg, Saul; Tang, Anthony

Focus plus context displays combine high-resolution detail and lower-resolution overview using displays of different pixel densities. Historically, they employed two fixed-size displays of different resolutions, one embedded within the other. In this paper, we explore focus plus context displays using one or more mobile projectors in combination with a stationary display. The portability of mobile projectors as applied to focus plus context displays contributes in three ways. First, the projector's projection on the stationary display can transition dynamically from being the focus of one's interest (i.e., providing a high-resolution view when close to the display) to providing context around it (i.e., providing a low-resolution view beyond the display's borders when further away from it). Second, users can dynamically reposition and resize a focal area that matches their interest rather than repositioning all content into a fixed high-resolution area. Third, multiple users can manipulate multiple foci or context areas without interfering with one another. A proof-of-concept implementation illustrates these contributions.

Item Open Access
Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer (2012-04-20)
Marquardt, Nicolai; Ballendat, Till; Boring, Sebastian; Greenberg, Saul; Hinckley, Ken

Connecting and transferring information between the increasing number of personal and shared digital devices in our environment – phones, tablets, and large surfaces – is tedious. One has to know which devices can communicate, what information they contain, and how information can be exchanged.
Inspired by Proxemic Interactions, we introduce novel interaction techniques that allow people to naturally connect to and perform cross-device operations. Our techniques are based on the notion of gradual engagement between a person's handheld device and the other devices surrounding them, as a function of fine-grained measures of proximity. They all provide awareness of device presence and connectivity, progressive reveal of available digital content, and interaction methods for transferring digital content between devices from a distance and from close proximity. They also illustrate how gradual engagement may differ when the other device is personal (such as a handheld) vs. semi-public (such as a large display). We illustrate our techniques within two applications that enable gradual engagement leading up to information exchange between digital devices.

Item Open Access
The HAPTIC TOUCH Toolkit: Enabling Exploration of Haptic Interactions (2011-09-26)
Ledo, David; Nacenta, Miguel A.; Marquardt, Nicolai; Boring, Sebastian; Greenberg, Saul

In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with and development of inexpensive tabletop haptic interfaces. The problem is that programming the HTP is difficult due to the interactions between its multiple hardware components when coding. To address this problem, we contribute the HAPTICTOUCH toolkit, which allows developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of the aforementioned behaviors.
Our preliminary study found that programmers could use the HAPTICTOUCH toolkit to create haptic tabletop applications in a short amount of time.

Item Open Access
OneSpace: Shared Depth-Corrected Video Interaction (2012-12-14)
Tang, Anthony; Ledo, David; Aseniero, Bon Adriel; Boring, Sebastian

Video conferencing commonly employs a video portal metaphor to connect individuals from remote spaces. In this work, we explore an alternate metaphor, a shared depth mirror, where video images of two spaces are merged into a single shared, depth-corrected video. Just as seeing one's mirror image causes reflective interaction, the shared video space changes the nature of interaction in the video space. We realize this metaphor in OneSpace, where the space respects virtual spatial relationships between people and objects and, in so doing, encourages cross-site, full-body interactions. We report preliminary observations of OneSpace in use, describing the role of depth in our participants' interactions. Based on these observations, we argue that the depth mirror offers new opportunities for shared video interaction.

Item Open Access
ProjectorKit: Easing the Development of Interactive Applications for Mobile Projectors (2013-02-19)
Weigel, Martin; Boring, Sebastian; Steimle, Jurgen; Marquardt, Nicolai; Greenberg, Saul; Tang, Anthony

Researchers have developed interaction concepts based on mobile projectors. Yet pursuing work in this area – particularly in applying projector-based techniques within an application – is cumbersome and time-consuming. To mitigate this problem, we generalize existing interaction techniques using mobile projectors. First, we identified five interaction primitives that serve as building blocks for a large set of applications. Second, these primitives were used to derive a set of principles that inform the design of a toolkit that eases and supports software development for mobile projectors.
Finally, we implemented these principles in a toolkit, called ProjectorKit, which we contribute to the community as a flexible open-source platform.

Item Open Access
Proxemic Peddler: A Public Advertising Display that Captures and Preserves the Attention of a Passerby (2012-01-31)
Wang, Miaosen; Boring, Sebastian; Greenberg, Saul

Effective street peddlers monitor passersby and tune their message to capture and keep a passerby's attention over the entire duration of the sales pitch. Similarly, advertising displays in today's public environments could be more effective if they were able to tune their content in response to how passersby attend to them, rather than just showing fixed content in a loop. Previously, others have prototyped displays that monitor and react to the presence or absence of a person within a few proxemic (spatial) zones surrounding the screen, where these zones are used as an estimate of attention. However, the coarseness and discrete nature of these zones mean that they cannot respond to subtle changes in the user's attention towards the display. In this paper, we contribute an extension to existing proxemic models. Our Peddler Framework captures (1) fine-grained continuous proxemic measures by (2) monitoring the passerby's distance and orientation with respect to the display at all times. We use this information to infer (3) the passerby's interest or digression of attention at any given time, and (4) their attentional state with respect to their short-term interaction history over time. Depending on this attentional state, we tune content to lead the passerby into a more attentive stage, ultimately resulting in a purchase.
We also contribute a prototype of a public advertising display – called the Proxemic Peddler – that demonstrates these extensions as applied to content from the Amazon.com website.

Item Metadata only
Proxemic-Aware Controls: Designing Remote Controls for Ubiquitous Computing Ecologies (ACM, 2015)
Ledo, David; Greenberg, Saul; Marquardt, Nicolai; Boring, Sebastian

Remote controls facilitate interactions at a distance with appliances. However, the complexity, diversity, and increasing number of digital appliances in ubiquitous computing ecologies make it increasingly difficult to: (1) discover which appliances are controllable; (2) select a particular appliance from the large number available; (3) view information about its status; and (4) control the appliance in a pertinent manner. To mitigate these problems we contribute proxemic-aware controls, which exploit the spatial relationships between a person's handheld device and all surrounding appliances to create a dynamic appliance control interface. Specifically, a person can discover and select an appliance by the way one orients a mobile device around the room, and then progressively view the appliance's status and control its features in increasing detail by simply moving towards it. We illustrate proxemic-aware controls of various appliances through various scenarios. We then provide a generalized conceptual framework that informs future designs of proxemic-aware controls.

Item Open Access
Proxemic-Aware Controls: Designing Remote Controls for Ubiquitous Computing Ecologies (2015-02-18)
Ledo, David; Greenberg, Saul; Marquardt, Nicolai; Boring, Sebastian

Remote controls facilitate interactions at a distance with appliances.
However, the complexity, diversity, and increasing number of digital appliances in ubiquitous computing ecologies make it increasingly difficult to: (1) discover which appliances are controllable; (2) select a particular appliance from the large number available; (3) view information about its status; and (4) control the appliance in a pertinent manner. To mitigate these problems we contribute proxemic-aware controls, which exploit the spatial relationships between a person's handheld device and all surrounding appliances to create a dynamic appliance control interface. Specifically, a person can discover and select an appliance by the way one orients a mobile device around the room, and then progressively view the appliance's status and control its features in increasing detail by simply moving towards it. We illustrate proxemic-aware controls of various appliances through various scenarios. We then provide a generalized conceptual framework that informs future designs of proxemic-aware controls.

Item Open Access
The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies (2011-05-05)
Marquardt, Nicolai; Diaz-Marino, Roberto; Boring, Sebastian; Greenberg, Saul

People naturally understand and use proxemic relationships in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features.
(1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record and explore proxemic relationships in a 3D space. (3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with a set of proxemic-aware systems built by students.

Item Metadata only
The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies (ACM, 2011)
Marquardt, Nicolai; Diaz-Marino, Robert; Boring, Sebastian; Greenberg, Saul

People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. (1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities. (2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record and explore proxemic relationships in 3D space.
(3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.

Item Open Access
SPALENDAR: Visualizing a Group's Calendar Events over a Geographic Space on a Public Display (2012-01-23)
Chen, Xiang 'Anthony'; Boring, Sebastian; Carpendale, Sheelagh; Tang, Anthony; Greenberg, Saul

Portable paper calendars (i.e., day planners and organizers) have greatly influenced the design of group electronic calendars. Both use time units (hours/days/weeks/etc.) to organize visuals, with useful information (e.g., event types, locations, attendees) usually presented as – perhaps abbreviated or even hidden – text fields within those time units. The problem is that, for a group, this visual sorting of individual events into time buckets conveys only limited information about the social network of people. For example, people's whereabouts cannot be read 'at a glance' but require examining the text. Our goal is to explore an alternate visualization that can reflect and illustrate group members' calendar events. Our main idea is to display the group's calendar events as spatiotemporal activities occurring over a geographic space, animated over time, all presented on a highly interactive public display. In particular, our SPALENDAR (SPAtial CALENDAR) design animates people's past, present and forthcoming movements between event locations as well as their static locations.
Details of people's events, their movements and their locations are progressively revealed and controlled by the viewer's proximity to the display, their identity, and their gestural interactions with it, all of which are tracked by the public display.

Item Open Access
The Unadorned Desk: Exploiting the Physical Space around a Display as an Input Canvas (2012-09-21)
Hausen, Doris; Boring, Sebastian; Greenberg, Saul

In everyday office work, people smoothly use the space on their physical desks to work with documents of interest, and to keep associated tools and materials nearby for easy use. In contrast, the limited screen space of computer displays imposes interface constraints. Associated material is either placed off-screen (i.e., temporarily hidden) and requires extra work to access (window switching, menu selection), or crowds and competes with the work area (e.g., as palettes and icons). This problem is worsened by the increasing popularity of small displays such as tablets and laptops. To mitigate this problem, we investigate how we can exploit an unadorned physical desk space as an additional input canvas. Our Unadorned Desk detects coarse hovering over and touching of areas on an otherwise standard physical desk, which is used as input to the desktop computer. Unlike other augmented desks, feedback is given on the computer's screen instead of on the desk itself. To better understand how people make use of this new input space, we conducted two user studies: (1) placing and retrieving application icons onto the desk, and (2) retrieving items from a predefined grid. We found that participants organize items in a grid for easier access, and are generally faster without on-screen feedback for few items (without affecting accuracy), but more accurate (though slower, as they relied on feedback) for many items.
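Several of the abstracts above describe concrete input mappings. The Fat Thumb idea, where the thumb's contact size selects the mode while thumb movement drives the selected mode, can be sketched as follows. The threshold and hysteresis values are illustrative assumptions, not values from the paper.

```python
# Sketch of the Fat Thumb mapping: small contact -> pan, large contact -> zoom.
# PAN_THRESHOLD and HYSTERESIS are hypothetical values, not from the paper.

PAN_THRESHOLD = 12.0   # contact radius (px) separating pan from zoom (assumed)
HYSTERESIS = 2.0       # dead band to avoid flickering between modes (assumed)

class FatThumbController:
    def __init__(self):
        self.mode = "pan"

    def update(self, contact_radius: float, dx: float, dy: float):
        # Switch modes only when the radius clearly crosses the threshold,
        # so small jitter in contact size does not toggle the mode.
        if self.mode == "pan" and contact_radius > PAN_THRESHOLD + HYSTERESIS:
            self.mode = "zoom"
        elif self.mode == "zoom" and contact_radius < PAN_THRESHOLD - HYSTERESIS:
            self.mode = "pan"
        if self.mode == "pan":
            return ("pan", dx, dy)          # translate the view by (dx, dy)
        # In zoom mode, vertical thumb movement maps to a zoom factor.
        return ("zoom", 1.0 + dy * 0.01)
```

A real implementation would read contact size from the platform's touch events; this sketch only shows the mode-selection logic.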
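Astral's remapping of mobile sensor data into desktop input events via easing functions can also be illustrated. This sketch uses a generic quadratic ease-in-out and made-up input/output ranges; it is not Astral's actual API.

```python
# Sketch of easing-based input remapping (in the spirit of Astral's
# sensor-to-input mappings). The easing function and ranges are assumptions.

def ease_in_out_quad(t: float) -> float:
    # Standard quadratic ease-in-out over t in [0, 1].
    return 2 * t * t if t < 0.5 else 1 - (-2 * t + 2) ** 2 / 2

def remap(value, in_min, in_max, out_min, out_max, easing=ease_in_out_quad):
    # Normalize the sensor value, clamp it, ease it, scale to the output range.
    t = max(0.0, min(1.0, (value - in_min) / (in_max - in_min)))
    return out_min + easing(t) * (out_max - out_min)

# Example: map a device tilt of -90..90 degrees onto a 0..1920 px mouse x.
```

The easing step is what lets a designer fine-tune how an output responds while an action takes place, rather than only after it completes.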
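The gradual-engagement notion (awareness, then progressive reveal, then information transfer as devices come closer) can be discretized into a small sketch. The zone boundaries below are made-up values; the papers above actually advocate fine-grained continuous proximity measures.

```python
# Sketch of distance-based engagement stages (hypothetical boundaries in
# meters); the gradual-engagement work uses continuous measures instead.
STAGES = [
    (1.0, "information transfer"),   # close proximity: exchange content
    (3.0, "progressive reveal"),     # approaching: reveal available content
    (float("inf"), "awareness"),     # far away: only announce presence
]

def engagement_stage(distance_m: float) -> str:
    # Return the first stage whose boundary the distance falls within.
    for boundary, stage in STAGES:
        if distance_m <= boundary:
            return stage
    return "awareness"
```

A system like the Proximity Toolkit would supply the distance (and orientation, identity, etc.) between entities; this function only shows how a design might map distance onto engagement.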