Browsing by Author "Boring, S."
Now showing 1 - 16 of 16
Body-Centric Interaction: Using the Body as an Extended Mobile Interaction Space (2011). Chen, X.; Tang, A.; Boring, S.; Greenberg, S.

Dark Patterns in Proxemic Interactions: A Critical Perspective (ACM, 2014). Greenberg, S.; Boring, S.; Vermeulen, J.; Dostal, J.
Proxemics theory explains people's use of interpersonal distances to mediate their social interactions with others. Within Ubicomp, proxemic interaction researchers argue that people have a similar social understanding of their spatial relations with nearby digital devices, which can be exploited to better facilitate seamless and natural interactions. To do so, both people and devices are tracked to determine their spatial relationships. While interest in proxemic interactions has increased over the last few years, it also has a dark side: knowledge of proxemics may (and likely will) be easily exploited to the detriment of the user. In this paper, we offer a critical perspective on proxemic interactions in the form of dark patterns: ways proxemic interactions can be misused. We discuss a series of these patterns and describe how they apply to these types of interactions. In addition, we identify several root problems that underlie these patterns and discuss potential solutions that could lower their harmfulness.

The Dark Patterns of Proxemic Sensing (IEEE, 2014). Boring, S.; Greenberg, S.; Vermeulen, J.; Dostal, J.; Marquardt, N.
To be accepted and trusted by the public, proxemic sensing systems must respect people's conception of physical space, make it easy to opt in or out, and benefit users as well as advertisers and other vendors.

Designing User-, Hand-, and Handpart-Aware Tabletop Interactions with the TOUCHID Toolkit (ACM, 2011). Marquardt, N.; Diaz-Marino, R.; Boring, S.; Greenberg, S.
Recent work in multi-touch tabletop interaction has introduced many novel techniques that let people manipulate digital content through touch. Yet most only detect touch blobs. This ignores richer interactions that would be possible if we could identify (1) which part of the hand, (2) which side of the hand, and (3) which person is actually touching the surface. Fiduciary-tagged gloves were previously introduced as a simple but reliable technique for providing this information. The problem is that their low-level programming model hinders developers from rapidly exploring new kinds of user- and handpart-aware interactions. We contribute the TouchID toolkit to solve this problem. It allows rapid prototyping of expressive multi-touch interactions that exploit the aforementioned characteristics of touch input. TouchID provides an easy-to-use event-driven API as well as higher-level tools that facilitate development: a glove configurator to rapidly associate particular glove parts with handparts, and a posture configurator and gesture configurator for registering new hand postures and gestures for the toolkit to recognize. We illustrate TouchID's expressiveness by showing how we developed a suite of techniques that exploit knowledge of which handpart is touching the surface.

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (ACM, 2012). Boring, S.; Ledo, D.; Chen, X.; Marquardt, N.; Tang, A.; Greenberg, S.
Modern mobile devices allow a rich set of multi-finger interactions that combine modes into a single fluid act, for example, one finger for panning blending into a two-finger pinch gesture for zooming. Such gestures require the use of both hands: one holding the device while the other is interacting. While on the go, however, only one hand may be available to both hold the device and interact with it. This mostly limits interaction to a single touch (i.e., the thumb), forcing users to switch between input modes explicitly. In this paper, we contribute the Fat Thumb interaction technique, which uses the thumb's contact size as a form of simulated pressure. This adds a degree of freedom that can be used, for example, to integrate panning and zooming into a single interaction. Contact size determines the mode (i.e., panning with a small size, zooming with a large one), while thumb movement performs the selected mode. We discuss nuances of the Fat Thumb given the thumb's limited operational range and motor skills when the same hand holds the device. We compared Fat Thumb to three alternative techniques in a task where people had to precisely pan and zoom to a predefined region on a map, and found that the Fat Thumb technique compared well to existing techniques.

The Fat Thumb: Using the Thumb's Contact Size for Single-Handed Mobile Interaction (2011). Boring, S.; Ledo, D.; Chen, X.; Tang, A.; Greenberg, S.

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (University of Calgary, 2012). Weigel, M.; Tang, A.; Boring, S.; Marquardt, N.; Greenberg, S.

From Focus to Context and Back: Combining Mobile Projectors and Stationary Displays (2013). Weigel, M.; Boring, S.; Steimle, J.; Tang, A.; Greenberg, S.

Gradual Engagement between Digital Devices as a Function of Proximity: From Awareness to Progressive Reveal to Information Transfer (ACM, 2012). Marquardt, N.; Ballendat, T.; Boring, S.; Greenberg, S.; Hinckley, K.
The increasing number of digital devices in our environment enriches how we interact with digital content. Yet cross-device information transfer -- which should be a common operation -- is surprisingly difficult. One has to know which devices can communicate, what information they contain, and how information can be exchanged. To mitigate this problem, we formulate the gradual engagement design pattern, which generalizes prior work in proxemic interactions and informs future system designs. The pattern describes how we can design device interfaces to gradually engage the user by disclosing connectivity and information exchange capabilities as a function of inter-device proximity. These capabilities flow across three stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices, tuned to particular distances and device capabilities. We illustrate how this pattern can be applied to design, and show how existing and novel interaction techniques for cross-device transfers can be integrated to flow across its various stages. We explore how techniques differ between personal and semi-public devices, and how the pattern supports interaction by multiple users.

The HapticTouch Toolkit: Enabling Exploration of Haptic Interactions (ACM, 2012). Ledo, D.; Nacenta, M.; Marquardt, N.; Boring, S.; Greenberg, S.
In the real world, touch-based interaction relies on haptic feedback (e.g., grasping objects, feeling textures). Unfortunately, such feedback is absent in current tabletop systems. The previously developed Haptic Tabletop Puck (HTP) aims at supporting experimentation with, and development of, inexpensive tabletop haptic interfaces in a do-it-yourself fashion. The problem is that programming the HTP (and haptics in general) is difficult. To address this problem, we contribute the HapticTouch toolkit, which enables developers to rapidly prototype haptic tabletop applications. Our toolkit is structured in three layers that enable programmers to: (1) directly control the device, (2) create customized, combinable haptic behaviors (e.g., softness, oscillation), and (3) use visuals (e.g., shapes, images, buttons) to quickly make use of these behaviors. In our preliminary exploration we found that programmers could use our toolkit to create haptic tabletop applications in a short amount of time.

The HapticTouch Toolkit: Enabling Exploration of Haptic Interactions (2011). Ledo, D.; Nacenta, M.; Boring, S.; Greenberg, S.

ProjectorKit: Easing Rapid Prototyping of Interactive Applications for Mobile Projectors (ACM, 2013). Weigel, M.; Boring, S.; Steimle, J.; Marquardt, N.; Greenberg, S.; Tang, A.
Researchers have developed interaction concepts based on mobile projectors. Yet pursuing work in this area -- particularly building projector-based interaction techniques within an application -- is cumbersome and time-consuming. To mitigate this problem, we contribute ProjectorKit, a flexible open-source toolkit that eases rapid prototyping of mobile projector interaction techniques.

Proxemic Peddler: A Public Advertising Display that Captures and Preserves the Attention of a Passerby (ACM, 2012). Wang, M.; Boring, S.; Greenberg, S.
Effective street peddlers monitor passersby and tune their message to capture and keep a passerby's attention over the entire duration of the sales pitch. Similarly, advertising displays in today's public environments could be more effective if they tuned their content in response to how passersby attend to them, rather than just showing fixed content in a loop. Previously, others have prototyped displays that monitor and react to the presence or absence of a person within a few proxemic (spatial) zones surrounding the screen, where these zones are used as an estimate of attention. However, the coarse and discrete nature of these zones means that such displays cannot respond to subtle changes in the user's attention towards the display. In this paper, we contribute an extension to existing proxemic models. Our Peddler Framework captures (1) fine-grained, continuous proxemic measures by (2) monitoring the passerby's distance and orientation with respect to the display at all times. We use this information to infer (3) the passerby's interest or digression of attention at any given time, and (4) their attentional state with respect to their short-term interaction history over time. Depending on this attentional state, we tune content to lead the passerby into a more attentive stage, ultimately resulting in a purchase. We also contribute a prototype of a public advertising display -- called Proxemic Peddler -- that demonstrates these extensions as applied to content from the Amazon.com website.

Proxemic Peddler: A Public Advertising Display that Captures and Preserves the Attention of a Passerby (2012). Wang, M.; Boring, S.; Greenberg, S.

SPALENDAR: Visualizing a Group's Calendar Events over a Geographic Space on a Public Display (ACM, 2012). Chen, X.; Boring, S.; Carpendale, S.; Tang, A.; Greenberg, S.
Portable paper calendars (i.e., day planners and organizers) have greatly influenced the design of group electronic calendars. Both use time units (hours/days/weeks/etc.) to organize visuals, with useful information (e.g., event types, locations, attendees) usually presented as -- perhaps abbreviated or even hidden -- text fields within those time units. The problem is that, for a group, this visual sorting of individual events into time buckets conveys only limited information about the group's social network. For example, people's whereabouts cannot be read at a glance but require examining the text. Our goal is to explore an alternate visualization that can reflect and illustrate group members' calendar events. Our main idea is to display the group's calendar events as spatiotemporal activities occurring over a geographic space animated over time, all presented on a highly interactive public display. In particular, our Spalendar (Spatial Calendar) design animates people's past, present and forthcoming movements between event locations, as well as their static locations. Detail about people's events, movements and locations is progressively revealed, controlled by the viewer's proximity to the display, their identity, and their gestural interactions with it, all of which are tracked by the public display.

The Unadorned Desk: Exploiting the Physical Space around a Display as an Input Canvas (Springer, 2013). Hausen, D.; Boring, S.; Greenberg, S.
In everyday office work, people smoothly use the space on their physical desks to work with documents of interest and to keep tools and materials nearby for easy use. In contrast, the limited screen space of computer displays imposes interface constraints. Associated material is placed off-screen (i.e., temporarily hidden) and requires extra work to access (window switching, menu selection), or it crowds and competes with the work area (e.g., palettes and icons). This problem is worsened by the increasing popularity of small displays such as tablets and laptops. To mitigate this problem, we investigate how we can exploit an unadorned physical desk space as an additional input canvas. With minimal augmentation, our Unadorned Desk detects coarse hovering over, and touching of, discrete areas ('items') within a given area on an otherwise regular desk, which is used as input to the desktop computer. We hypothesize that people's spatial memory will let them touch particular desk locations without looking. In contrast to other augmented desks, our system provides optional feedback of touches directly on the computer's screen. We conducted a user study to understand how people make use of this input space. Participants freely placed and retrieved items onto/from the desk. We found that participants organized items in a grid-like fashion for easier access later on. In a second experiment, participants had to retrieve items from a predefined grid. When only a few (large) items were located in the area, participants were faster without feedback, and there was (surprisingly) no difference in error rates with or without feedback. As the number of items grew (i.e., items shrank to fit the area), participants increasingly relied on feedback to minimize errors, at the cost of speed.