Expressive Touch: Using a Wrist-worn Inertial Measurement Unit to Add Expressiveness to Touch-based Interactions
1st Author Name
Affiliation
Address
e-mail address
Optional phone number
2nd Author Name
Affiliation
Address
e-mail address
Optional phone number
3rd Author Name
Affiliation
Address
e-mail address
Optional phone number
ABSTRACT
We have seen substantial increases in the use of touch-based devices in recent years, and in parallel with this,
wrist-worn devices such as smart watches and fitness
trackers have become increasingly available. We believe
there is an opportunity to leverage wrist-worn movement
sensing to complement and extend touch interactions. We
take inspiration from existing contact-oriented interaction
tools, such as instrumented styluses and pressure-pads, to
explore how gestures and other features of wrist movement,
detected using a wrist-mounted inertial measurement unit,
can be leveraged to extend the expressiveness of
conventional touch-based interfaces. We propose new
expressive-touch interactions based upon pitch and roll, as
well as the dynamics of pre- and post-touch movement, and
outline how these can be used to add expression to
smartphone-based applications.
Author Keywords
Expressive interaction; gesture; multi-touch; natural user
interfaces; inertial measurement unit; smart watch.
ACM Classification Keywords
H.5.2 [User Interfaces]: Input devices and strategies,
Interaction styles.
INTRODUCTION
The use of touch-based devices has increased dramatically
in recent years. Touch is now a deeply entrenched mobile
phone interaction paradigm, and it also offers an intuitive
alternative to the mouse and keyboard for laptop and
desktop PCs. However, most mainstream touch-based
interfaces restrict the user’s interaction to a small number of
degrees of freedom: interaction with touch-based devices is commonly limited to pressing, moving and releasing fingers at different positions on a two-dimensional surface. This simple set of operations nonetheless supports a wide variety of rich interactions that extend what has traditionally been possible with a standard keyboard and mouse.
However, touch-based interaction often fails to provide the
subtlety and expressiveness offered by the hand’s
interaction with objects, and some of the traditional tools
and controls it seeks to emulate.
In this paper, we explore how a wrist-worn inertial
measurement unit (IMU), which streams data over a
wireless connection to a touchscreen device, can be used to
enhance touch-based interactions. By detecting and
classifying qualities such as wrist pitch and roll, plus the
dynamics of pre- and post-touch movement, we support a
range of expressive-touch interaction techniques that extend
what is possible with today’s typical touch-sensing
hardware. Our techniques offer enhanced interaction
capabilities in skilled contexts that include gaming and
playing a musical instrument, as well as setting common
user interface controls. We envisage that these interaction
techniques will provide people with more dynamic,
expressive and intuitive forms of interaction with the
devices that they use in their everyday lives.
We describe how the interaction techniques developed can
be applied in a number of different application scenarios,
including jigsaw puzzles, a touchscreen piano, a 3D maps
application, throwing interactions, and common slider controls. An initial technical study demonstrates the range of wrist movement possible, as well as the task completion
rates and times for the interaction techniques presented.
RELATED WORK
A variety of technologies and interaction techniques with
the potential to support expressive interaction have been
proposed. We begin with research that can track the pose
and movements of users’ hands and fingers before, during
and after a touch. Data gloves [e.g. 29] can support rich
expressive interaction, but require the user to wear
potentially uncomfortable gloves. Alternative technologies
include arm-mounted piezoelectric sensors [7] and wrist-mounted cameras [17]. Benko et al. [1] use EMG to sense
users’ hand movements and demonstrate a range of
enhanced touch interactions, including pressure detection,
contact finger identification and off-surface pinch, throw
and flick gestures. Murugappan et al. [23] used a Kinect
depth-camera to develop a range of enhanced touch
interactions including the identification of the contact finger
and the recovery of hand posture. Marquardt et al.
presented a broad range of touch interaction techniques that
extended the traditional vocabulary of interaction with an
interactive tabletop, by tracking users’ gestures above the
surface with a Vicon motion tracking system [21]. While
these approaches have the potential to support a selection of
rich and expressive interactions, all but the use of the
Kinect depth-camera depend on specialist hardware and the
latter two systems require a static camera-based tracking
system that would not be appropriate for our mobile device
context.
Tracking properties of a user’s point of contact with a
touchscreen offers an alternative means to support
enhanced touch input. Wang and Ren used the contact size,
shape and orientation information provided by a camera-based multi-touch screen to improve performance in
selection and pointing tasks [30]. The shape of contacts
with camera-based multi-touch surfaces has been combined
with physical metaphors in order to afford touch interaction
that is more expressive and intelligible for the user [4, 32].
Rogers et al. present a technique that utilizes long-range
capacitive sensors in conjunction with a particle filter, to
create a probabilistic model of the contact finger [28]. This
model is capable of tracking finger pose, as well as
inferring movement on and above the device screen.
Other research that has also explored how changes in the
properties of a contact point might be used to enhance touch
includes [31], where Wang et al. developed an algorithm
that unambiguously inferred finger orientation from the
changes in the shape and centroid of a contact point,
thereby enabling a range of occlusion-aware interaction
techniques. Similarly, [3] explored how purposeful changes
in contact point size can provide a further parameter for
touch input, whilst [2] considered changes in the centroid of
a contact point. The three previously mentioned techniques
are not without drawbacks, however, as they rely on a
downward vertical movement of the finger, increase
occlusion due to a larger contact size, or are difficult to
integrate into mobile devices, respectively.
In addition to using the geometrical properties of a contact
point to extend the expressivity of touch input, prior work
has also demonstrated that the sound made when making
contact with the touchscreen can be used to differentiate
touches made by different parts of the hand [11, 19]. Yet,
with these methods there is the possibility for sound
pollution or simultaneous gestures to negatively affect a
user’s interaction experience, as it is not always possible to
distinguish between different acoustic signatures. Davidson
and Han [6] explore touch pressure on a pressure-sensitive
FTIR screen, to offer an extra dimension to touch-based
interactions for layering tasks. In [13], the authors present a
prototype, consisting of a film plate laid over an iPod touch
within a sensing frame, which detects tangential touch-based forces as well as pressure. Related to this is the work
of Ramos et al. [27], who detect pressure to afford
increased control to the user by using a stylus in
conjunction with multi-state widgets.
Other proposed methods also provide enhanced touch
interactions via pressure, this time tracking the device itself:
both [9] and [16] use inertial sensors and actuators within
mobile devices to detect the pressure of a touch, allowing
for different commands to be mapped to varying pressures.
However, these approaches are limited by their reliance on
the mobile device's vibration motor running constantly, adversely affecting the overall experience and
making them impractical for many mobile situations where
battery life is a consideration. Another approach along the
same lines is [14], which detects touch pressure through the
device’s accelerometer, in addition to providing hybrid
touch and motion gestures such as tilt-to-zoom and pivot-to-lock. This approach is restricted, as these gestures are
infeasible in certain contexts, e.g. while resting the device
flat on a desk.
Hassan et al. explored the combination of a single touch
and gesture, to allow the transfer of files from a private
device to a public display [12]. Novel interaction
possibilities have been created or enhanced in several
related methods, through exploiting the way in which the
device is held. Harrison et al. augmented two handheld
devices with pressure and tilt sensors, to explore the
potential of these being used during simple tasks such as
navigating a book or list [10]. In [18], a modifier panel is
docked to the side of a tablet screen to provide touches with
different modes, overriding the traditional multi-touch
gestures. In turn, this adds expressiveness to certain
browser-based gestures. Rahman et al. have previously
performed a study into the range of tilt angles that users are
able to assume based on wrist motion, with the intention of
this parameter being used for more expressive and richer
input [26].
Another technique that we feel is particularly related to our
work is that of Chen et al. [5]. Here, the interaction space
between a smartphone and smart watch has been explored
in order to afford novel interaction techniques involving
both devices. The in-built accelerometers are used to
introduce interaction techniques such as finger posture and
handedness recognition, as well as multi-device gestures
that include the ability to unlock the smartphone with a
synchronized hold and flip action, and control over which
of the two devices receives certain notifications, based on
the detected direction of a touch across the two device
screens.
In summary, a wide range of technologies and interaction techniques have the potential to support more expressive touch interaction. However, the majority of these approaches are not yet suited to wide-scale adoption because of their hardware requirements: they are typically expensive, bulky and/or power-hungry. In
addition, many only extend the expressiveness of touch
interaction in a limited way. In this paper, we present an
approach for augmenting existing touchscreen devices with a wide variety of expressive interaction techniques. From a hardware perspective, our approach only requires a wrist-worn IMU, which we believe is becoming increasingly practical due to the growing uptake
of smart watches, fitness trackers and other wrist-based
sensing devices.
DESIGN SPACE
The central concept underpinning our Expressive Touch
system is that the movement of a user’s wrist can be
detected and combined with the dynamics of traditional
touch contact points to support richer input on multi-touch
devices. This is desirable because, despite a range of
effective multi-touch gestures such as pinch-to-zoom and
twist-to-rotate, the functionality provided by these is
sometimes limited to a small number of degrees of freedom.
By leveraging previously unused wrist-movement parameters, there is
the potential to augment a user’s interaction with new
capabilities and enhance the level of expressiveness
available to them.
At the same time, the recent growth in availability of smart
watches has resulted in many users who carry both a
smartphone and a smart watch. The inertial sensors found
within the latter allow linear acceleration and rotation of the
wrist to be measured, and from these we are able to derive a
number of values pertaining to properties of the contact
point. As a result, this presents a design space with a
number of interaction techniques that exploit the
combination of a wrist-worn IMU with a touchscreen, in
order to provide the user with more expressive forms of
touch-based interaction. To explore this new design space,
we outline the Expressive Touch parameters that are
available to a user and categorize these based on when
during an interaction they can be exploited. These key
parameters, which can be detected and tracked, are: the
relative pitch of the user’s wrist; the relative roll of the
user’s wrist; and the instantaneous force of the user’s wrist.
Expression can be performed via these parameters at the
following distinct times:
Pre- and Post-Touch
The IMU readings can be used to analyze qualities of the
user’s wrist movement before or after a finger has made or
left contact with the screen; we call these pre- and post-touch respectively. Such analysis reveals the velocity with
which the user’s finger approaches and leaves the screen.
This information can then be used to influence a subsequent
or previous action on-screen.
This affords a range of novel interaction possibilities, as neither continuous nor discrete control has previously been available at these times. In this way, a user has the potential
to consider how they wish to affect an on-screen object
before the interaction has taken place. For example, the
force at which a touch has been detected can be mapped to
a continuous parameter, affecting the specific level at which
it is applied (e.g. the volume of a piano key or the size of
the area to select on an image). This can also be used to
initiate a particular command using the discrete categories
of soft, medium and hard (e.g. jump, punch and kick in a
gaming scenario). This extends the concept of touch
interaction from the discrete variable of whether or not
contact is being made, to either distinct force categories, or
the continuous variable of how much force this contact was
initiated with. This could potentially strengthen the
relationship between a user and their device, as their wrist movements become more tightly coupled with this greater
level of application control available to them.
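As an illustrative sketch (not the authors' implementation; the 0.2 g and 0.6 g category thresholds and the Swift names below are assumptions made for illustration), a measured touch force could be mapped onto both a continuous parameter and the discrete soft/medium/hard categories as follows:

```swift
import Foundation

// Hypothetical force categories; the threshold values (in g) are illustrative only.
enum ForceCategory {
    case soft, medium, hard

    init(force: Double) {
        switch force {
        case ..<0.2: self = .soft
        case ..<0.6: self = .medium
        default:     self = .hard
        }
    }
}

// Map a touch force (magnitude of gravity-free acceleration, in g) onto a
// continuous parameter such as the volume of a piano key press.
func volume(forTouchForce force: Double, maxForce: Double = 1.0) -> Double {
    return min(max(force / maxForce, 0.0), 1.0)   // clamp to [0, 1]
}

let force = 0.45                                   // example reading in g
print(ForceCategory(force: force))                 // -> medium
print(volume(forTouchForce: force))                // -> 0.45
```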
Similarly, by measuring the acceleration with which a
finger is removed to end a touch, we provide another means
for the user to affect an interaction. For example, based on
the three detectable force categories above it is possible for
a touch-based delete key to remove a single letter, a
complete word, or a whole line of text. There are also
continuous possibilities, such as applying spin to a moving
ball based on how quickly the touch is lifted and how much
rotational movement is used. This helps to shift the current
understanding of touchscreen interaction from a single
touch with one or more fingers, to the much wider spectrum of wrist movements in the time on either side of this touch. That is, in the design space of Expressive Touch,
certain interactions with a touchscreen can be affected
without explicitly requiring a touch.
During Touch
Another interaction possibility that we explored was the
performance of continuous gestures via pitch and roll, while
maintaining contact with the screen. This kind of
uninterrupted interaction can be used to modify an existing
touch, without the need to break contact and perform one or
more subsequent touches. This presents new possibilities, including continuous mapping to parameter control, e.g. pitching to change the size of the object being touched. In
addition, roll angle can be used to affect cursor placement,
rotate an object, or move a settings slider. It is also possible
to enable simultaneous multi-parameter interaction, for
example in a map application this allows control over both
pitch and zoom based on the axis in which movement is
detected. This can help to reduce the decoupling of a user
from their device, as they are provided with a greater level
of control that is mapped directly to their wrist-movements,
while requiring fewer touches.
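A minimal sketch of such a during-touch mapping (the one-to-one rotation and 1%-per-degree zoom scalings are assumed values for illustration, not parameters reported here):

```swift
import Foundation

// Wrist angles in degrees, measured relative to their values at touch-down.
struct WristDelta {
    var roll: Double    // + clockwise, - counter-clockwise
    var pitch: Double   // + up, - down
}

// Hypothetical mapping: roll rotates the touched object one-to-one, while
// pitch adjusts zoom by an assumed 1% per degree, clamped to a sensible range.
func applyDuringTouch(_ delta: WristDelta,
                      rotation: inout Double,
                      zoom: inout Double) {
    rotation = delta.roll
    zoom = min(max(1.0 + delta.pitch * 0.01, 0.5), 3.0)
}

var rotation = 0.0, zoom = 1.0
applyDuringTouch(WristDelta(roll: 30, pitch: 15), rotation: &rotation, zoom: &zoom)
print(rotation, zoom)   // 30.0 1.15
```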
Some of the novel interaction possibilities mentioned can
be seen in Figure 1, which highlights their flexibility and
capacity to augment the functionality available to the user
in a variety of contexts.
Figure 1: Expressive Touch techniques in use within a variety
of applications: (a) pre-touch force to control the volume of a
piano key press; (b) post-touch force and roll to manipulate a
dice throw, in flight; (c) wrist roll to rotate an image; (d) wrist
pitch and roll to simultaneously control the pitch and zoom of
a map.
Consequently, we suggest that this design space is based on
the temporal relations between a gesture and the point of
contact, and the specific Expressive Touch parameters that
can be utilized at these times.
SYSTEM OVERVIEW
Hardware
For ease of development, we implemented our system using
the OpenMovement WAX9 IMU platform [25] rather than
using a particular smart watch or consumer movement-tracking device. Amongst other sensors, this platform
comprises a low-power 3-axis accelerometer, gyroscope
and magnetometer, as well as a Bluetooth Low-Energy
(BLE) compatible radio. Together, these sensors allow an
accurate representation of the state of the user’s wrist to be
calculated.
BLE allows data from the IMU to be streamed wirelessly to
a paired device with minimal power requirements; the IMU is capable of streaming at 50 Hz for 6 hours on a single charge, or at up to 400 Hz if battery life is not critical. As the
only inertial sensors required are an accelerometer and
gyroscope, it is also possible for a commercial smart watch,
such as the Microsoft Band [22] or Fitbit Surge [8] to be
used for any of the expressive interaction techniques that
we propose.
Implementation
Using Madgwick's sensor fusion algorithm [20] to produce a quaternion that approximates the sensor orientation, we can obtain the direction in which the IMU is accelerating, as well as the pitch and roll angles required for Expressive Touch interactions.

The force events (touch force on making contact and flick force on ending contact) are calculated by removing the estimation of gravity from the accelerometer readings, thus providing a 'pure' representation of the current acceleration of the user's wrist.

Data streamed from the IMU is stored on the smartphone for analysis when required by the Expressive Touch system. This analysis takes place on the magnitude of the vector returned from the accelerometer, with gravity removed, in order to calculate both the instantaneous force and the touch force. Detection of a flick gesture analyses the acceleration data after a touch event, where a spike in the magnitude suggests a flick gesture has occurred.
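A minimal sketch of this gravity-removal step, assuming the sensor fusion filter supplies a unit quaternion describing the sensor's orientation relative to the earth frame (the types and function names below are ours, not part of the WAX9 or Expressive Touch APIs):

```swift
import Foundation

struct Quaternion { var w, x, y, z: Double }   // unit quaternion from sensor fusion
struct Vector3    { var x, y, z: Double }

// Direction of gravity expressed in the sensor frame, derived from the
// orientation quaternion (standard formula used with Madgwick-style filters).
func gravityInSensorFrame(_ q: Quaternion) -> Vector3 {
    return Vector3(x: 2 * (q.x * q.z - q.w * q.y),
                   y: 2 * (q.w * q.x + q.y * q.z),
                   z: q.w * q.w - q.x * q.x - q.y * q.y + q.z * q.z)
}

// 'Pure' wrist acceleration: raw accelerometer reading (in g) minus gravity.
func linearAcceleration(raw: Vector3, orientation q: Quaternion) -> Vector3 {
    let g = gravityInSensorFrame(q)
    return Vector3(x: raw.x - g.x, y: raw.y - g.y, z: raw.z - g.z)
}

func magnitude(_ v: Vector3) -> Double {
    return (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
}

let q = Quaternion(w: 1, x: 0, y: 0, z: 0)     // sensor lying flat, at rest
let raw = Vector3(x: 0, y: 0, z: 1)            // accelerometer reads +1 g
print(magnitude(linearAcceleration(raw: raw, orientation: q)))   // ~0
```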
Continuous interaction metrics calculated during the touch
event are determined using the pitch and roll angles
provided by the Madgwick quaternion. At the point of
touch we monitor changes in these values and keep a
running calculation of the current pitch and roll changes
relative to the touch values.
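As a sketch of this step (using the standard Tait-Bryan angle formulas; the names are illustrative, and the Quaternion struct is repeated from the previous sketch), pitch and roll can be extracted from the quaternion and tracked relative to their values at touch-down:

```swift
import Foundation

struct Quaternion { var w, x, y, z: Double }   // as in the previous sketch

// Extract pitch and roll (in degrees) from a unit orientation quaternion.
func pitchAndRoll(_ q: Quaternion) -> (pitch: Double, roll: Double) {
    let sinPitch = 2 * (q.w * q.y - q.z * q.x)
    let pitch = asin(min(max(sinPitch, -1), 1))
    let roll = atan2(2 * (q.w * q.x + q.y * q.z),
                     1 - 2 * (q.x * q.x + q.y * q.y))
    return (pitch * 180 / .pi, roll * 180 / .pi)
}

// Running deltas relative to the orientation captured at touch-down.
struct TouchRelativeAngles {
    let reference: (pitch: Double, roll: Double)

    init(atTouchDown q: Quaternion) { reference = pitchAndRoll(q) }

    func deltas(current q: Quaternion) -> (pitch: Double, roll: Double) {
        let now = pitchAndRoll(q)
        return (now.pitch - reference.pitch, now.roll - reference.roll)
    }
}

let reference = TouchRelativeAngles(atTouchDown: Quaternion(w: 1, x: 0, y: 0, z: 0))
let rolled = Quaternion(w: 0.7071, x: 0.7071, y: 0, z: 0)   // ~90 degree roll about x
print(reference.deltas(current: rolled))                    // (pitch: ~0, roll: ~90)
```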
We estimated the touch force from the acceleration data in
the time window immediately before contact (the
acceleration of the press, rather than the deceleration). We
experimented with a range of data analysis techniques to
calculate the force using the acceleration data magnitudes:
summing all samples in a time window; summing samples above a specified threshold in a time window; and taking the maximum value in a time window. These techniques are listed in order of increasing effectiveness, and Expressive Touch currently uses the final technique (the window maximum) to calculate the touch and flick force.
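A minimal sketch of the window-maximum approach and the post-touch spike check (the 0.1 s window and 0.5 g flick threshold are assumed values, not figures from this paper):

```swift
import Foundation

// One gravity-free accelerometer sample: timestamp in seconds, magnitude in g.
struct Sample { var time: TimeInterval; var magnitude: Double }

// Touch force: the largest gravity-free acceleration magnitude observed in a
// short window immediately before the touch-down time.
func touchForce(samples: [Sample],
                touchTime: TimeInterval,
                window: TimeInterval = 0.1) -> Double {
    return samples
        .filter { $0.time > touchTime - window && $0.time <= touchTime }
        .map { $0.magnitude }
        .max() ?? 0
}

// Flick force: the largest magnitude in a short window after touch-up.
func flickForce(samples: [Sample],
                liftTime: TimeInterval,
                window: TimeInterval = 0.1) -> Double {
    return samples
        .filter { $0.time > liftTime && $0.time <= liftTime + window }
        .map { $0.magnitude }
        .max() ?? 0
}

// A spike above an assumed threshold (in g) after touch-up suggests a flick.
func isFlick(samples: [Sample], liftTime: TimeInterval, threshold: Double = 0.5) -> Bool {
    return flickForce(samples: samples, liftTime: liftTime) > threshold
}

let samples = [Sample(time: 0.02, magnitude: 0.1),
               Sample(time: 0.06, magnitude: 0.7),
               Sample(time: 0.09, magnitude: 0.3)]
print(touchForce(samples: samples, touchTime: 0.1))   // 0.7
```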
Careful consideration was given at each stage of the development process to providing an interface through which third parties can use the Expressive Touch technology.
The Expressive Touch system is wrapped inside a single
class that developers can interface with, which provides an
event subscription system for access to the required
gestures. Developers can subscribe to a set of pre-defined interactions (three tap force categories, a flick gesture and continuous metrics) by specifying the event they wish to subscribe to, along with a callback function that performs the required action when that event occurs.
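The following sketch illustrates what such a subscription interface might look like; the event names, class name and signatures are hypothetical and do not reflect the actual Expressive Touch API:

```swift
import Foundation

// Hypothetical pre-defined interactions exposed to third-party developers.
enum EventKind: Hashable { case softTap, mediumTap, hardTap, flick, metrics }

// A single class wrapping the system; developers subscribe with callbacks.
final class ExpressiveTouch {
    private var subscribers: [EventKind: [(Double) -> Void]] = [:]

    // Register a callback for one of the pre-defined interactions.
    func subscribe(to kind: EventKind, callback: @escaping (Double) -> Void) {
        subscribers[kind, default: []].append(callback)
    }

    // Called internally when the sensor pipeline detects an event;
    // 'value' carries the force, angle, etc. associated with it.
    func publish(_ kind: EventKind, value: Double) {
        subscribers[kind]?.forEach { $0(value) }
    }
}

// Example subscription: react to hard taps only.
let touch = ExpressiveTouch()
touch.subscribe(to: .hardTap) { force in
    print("hard tap with force \(force)")
}
```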
STUDY
In our experiments we aim to illustrate the capabilities of
Expressive Touch to support augmentation of touch input
with continuous control. In particular, we investigate wrist
roll, wrist pitch, and touch force, as discussed above. We
recruited 15 participants (mean age 30, SD 11.2) for our study, and used a mobile device in conjunction with a
sensor attached to the right wrist. Each participant
completed 4 experiments that explore different aspects of
expressive touch:
The first experiments investigate the range of force that
users can (re)produce using Expressive Touch. Initially,
the user is given the opportunity to explore the force
detection and familiarize themselves with a simple
visualization (filled circle, size depending on force). Next
we randomly select a force level between a low and a hard
level (estimated in a pilot study), and ask the user to perform
3 touches at the desired force level, while receiving visual
feedback. For each touch, we record the difference between
the desired force level and that produced by the user. This
procedure is repeated 10 times, with 10 randomly selected
force levels. A second touch-force experiment investigates the force levels as subjectively interpreted by the user. The user
is asked to perform 10 gestures at levels “soft”, “medium”,
and “hard”, without any indication on the precise force
amount requested. We record the force of each touch
performed by the user.
The second set of experiments investigates the wrist roll
and pitch, and largely follows the procedure from previous
work on rotation using multi-finger gestures [15, 24]. In a
calibration step we record the maximum angle the user is
able to rotate from a neutral, “face-on” wrist position in
both clockwise (or up for pitch) and counter-clockwise (or
down) directions. Next we randomly select a rotation angle
within the observed limits and ask the user to rotate an
image (or adjust a slider for pitch) in a single touch gesture,
while receiving visual feedback on the current level of
rotation. We measure the time taken to complete the
rotation and the difference in desired angle to the angle of
the image on touch-up. If this difference exceeds a
threshold of 5 degrees it is treated as failure to complete the
task. This procedure is repeated 10 times with 10 randomly
selected angles for both wrist rotation and pitch.
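For clarity, the scoring of a single rotation (or pitch) trial described above could be expressed as follows (a sketch of the stated procedure; the names are ours):

```swift
import Foundation

// Outcome of a single rotation (or pitch/slider) trial.
struct TrialResult {
    var completionTime: TimeInterval   // touch-down to touch-up
    var angularError: Double           // |target - achieved| at touch-up, in degrees
    var completed: Bool                // error within the 5-degree threshold
}

func scoreTrial(targetAngle: Double,
                achievedAngle: Double,
                touchDown: TimeInterval,
                touchUp: TimeInterval,
                threshold: Double = 5.0) -> TrialResult {
    let error = abs(targetAngle - achievedAngle)
    return TrialResult(completionTime: touchUp - touchDown,
                       angularError: error,
                       completed: error <= threshold)
}

print(scoreTrial(targetAngle: 60, achievedAngle: 63, touchDown: 0.0, touchUp: 1.4))
// TrialResult(completionTime: 1.4, angularError: 3.0, completed: true)
```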
RESULTS
Results are reported in Figure 2. Overall we observe a
strong correlation of 0.96 between the requested force and
the median force produced by the participants for each level
(see (a) in Figure 2), even though the quality of the
reproduction decreases for high force levels (>0.6g). These
results are encouraging for the use of touch force as an
expressive input modality. However, the results from our
second experiment, shown in Figure 2 (b), indicate that it
may be difficult for applications to differentiate touches
performed at “soft” and “medium” force (subjective to the
user), while the “hard” force level is, across all users, sufficiently distinct from a “soft” input.
Graph (c) in Figure 2 illustrates the maximum angles for
both roll and pitch for each participant, along with the
overall mean coverage of rotation angles. The majority of
participants show a larger degree of rotation of the (right)
wrist in the clockwise direction, which can be attributed to biomechanical constraints. The mean coverage for roll reveals that a range of -86 deg. to +141 deg. is accessible for continuous control applications for most users. Other
experiments on roll revealed that the completion rate for a
rotation task is above 90% for rotations up to 50% of the
user's maximum rotation angle, as illustrated in graph (d).
Interestingly the completion time for the same rotation task
is largely independent of the rotation angles throughout the
range for each user (see graph (e)).
Figure 2: Results from user study: (a) shows the median, 20th, and 80th percentile tap force of users asked to perform a touch at
specific force levels; (b) shows how users interpret subjective force levels ‘soft’, ‘medium’, and ‘hard’; (c) shows the coverage of
angles for roll and pitch movements of the wrist as measured with the wrist-worn IMU for each user (blue) and averaged across all
users (black); (d) and (f) show task completion rates for a roll task (rotating an image), and for a pitch task (setting a slider); (e)
and (g) show task completion times for the roll and pitch tasks in the study. See text for details.
Many participants had trouble performing the pitch
movement reliably, particularly in the “downward”
direction (elbow down) (see graph (c)). In most cases,
participants angled solely the finger or the palm of the
hand, performing a movement that the (wrist-worn) sensor
is not able to capture. The mean coverage is therefore
significantly smaller for pitch, between -6 and +46 deg. The
inability to perform the gesture for some users also led to
reduced task completion rates (graph (f)), where only a small range of angles (below 0.3 of the user's maximum) is reliably achieved by our participants. As with the results for roll, the task
completion rate for the pitch task is largely independent of
the angle of pitch, but more likely influenced by other
factors (such as difficulties performing the movement).
To conclude, we suggest that Expressive Touch can benefit
a wide range of applications for smartphones, tablets and
digital tabletops. Suggestions for these include: keyboard
replacements (e.g. for capitalizing letters depending on the
force detected), shortcuts (e.g. closing apps with a specific
gesture), map control (e.g. mapping wrist pitch to zoom)
and image manipulation. We also imagine games designers
may find a use for Expressive Touch in physics-based computer games (e.g. darts, bowling and pool), as well as more artistic games that already attempt a nuanced touch (see the existing game Dark Echo [33]). Furthermore, we can
expect to see new examples emerge as wrist-worn sensors
and new smart watches open up their platforms to allow
access to the sensor data.
DISCUSSION
In this paper we contribute a new method of augmenting
touch interaction using a wrist-worn IMU device. We
demonstrate how we calculate the pitch, roll and force
associated with a touch, and we present a study investigating characteristics of these new interactions, such as
the range of movement possible and the level of
repeatability.
Although we have not included structured interview
feedback, informal conversations with users revealed that
certain actions are more difficult to perform than others. For
example, pitching the finger upwards towards the device
sometimes resulted in loss of contact with the screen,
especially when the user had long fingernails.
We also found that some users do not move their wrist, and
hence do not move the IMU, when tapping the screen
forcefully. While this was temporarily an issue, participants
were quick to realize this and naturally corrected their
interactions. Some users also pitched their wrist in an
inconsistent manner, pitching upwards by moving their
wrist, but pitching downwards by moving their hand,
resulting in on-screen feedback which did not match their
apparent movement.
Consequently, we suggest that interaction designers need to
carefully consider how Expressive Touch is used, in terms
of the range of movement possible, scaling the metrics of
the interaction data to suit the particular scenario, and
providing effective feedback. We do believe, however, that
these novel interaction techniques are intuitive and
effective, with some interactions superior to existing
alternatives, e.g. single touch rotation vs. two-finger touch
rotation (our centre of rotation is immediately obvious).
We have so far experimented with several smartphone
applications, including a touch sensitive musical keyboard
that allows the pianist to play with nuanced volume, a map
application where the view's pitch corresponds with the pitch of the wrist, and a jigsaw puzzle. In the latter we find
that the single finger rotation is particularly effective;
pieces are rotated efficiently and fingers do not obscure the
graphics, which is both practical and aesthetically pleasing.
REFERENCES
1. Benko, H., Saponas, T.S., Morris, D., and Tan, D. Enhancing input on and above the interactive surface with muscle sensing. In Proc. ITS 2009. ACM Press (2009), 93-100.
2. Bonnet, D., Appert, C., and Beaudouin-Lafon, M. Extending the vocabulary of touch events with ThumbRock. In Proc. GI 2013. Canadian Information Processing Society (2013), 221-228.
3. Boring, S., Ledo, D., Chen, X.A., Marquardt, N., Tang, A., and Greenberg, S. The fat thumb: using the thumb's contact size for single-handed mobile interaction. In Proc. MobileHCI 2012. ACM Press (2012), 39-48.
4. Cao, X., Wilson, A.D., Balakrishnan, R., Hinckley, K., and Hudson, S.E. ShapeTouch: Leveraging contact shape on interactive surfaces. In Proc. TABLETOP 2008. IEEE (2008), 129-136.
5. Chen, X.A., Grossman, T., Wigdor, D.J., and Fitzmaurice, G. Duet: exploring joint interactions on a smart phone and a smart watch. In Proc. CHI 2014. ACM Press (2014), 159-168.
6. Davidson, P.L. and Han, J.Y. Extending 2D object arrangement with pressure-sensitive layering cues. In Proc. UIST 2008. ACM Press (2008), 87-90.
7. Deyle, T., Palinko, S., Poole, E.S., and Starner, T. Hambone: A Bio-Acoustic Gesture Interface. In Proc. ISWC 2007. IEEE Computer Society (2007), 1-8.
8. Fitbit Surge. Available at: http://www.fitbit.com/uk/surge [Accessed March 2015]
9. Goel, M., Wobbrock, J., and Patel, S. GripSense: using built-in sensors to detect hand posture and pressure on commodity mobile phones. In Proc. UIST 2012. ACM Press (2012), 545-554.
10. Harrison, B.L., Fishkin, K.P., Gujar, A., Mochon, C., and Want, R. Squeeze me, hold me, tilt me! An exploration of manipulative user interfaces. In Proc. CHI 1998. ACM Press/Addison-Wesley Publishing Co. (1998), 17-24.
11. Harrison, C., Schwarz, J., and Hudson, S.E. TapSense: enhancing finger interaction on touch surfaces. In Proc. UIST 2011. ACM Press (2011), 627-636.
12. Hassan, N., Rahman, M.M., Irani, P., and Graham, P. Chucking: A One-Handed Document Sharing Technique. In Proc. INTERACT 2009. Springer-Verlag (2009), 264-278.
13. Heo, S. and Lee, G. Force gestures: augmenting touch screen gestures with normal and tangential forces. In Proc. UIST 2011. ACM Press (2011), 621-626.
14. Hinckley, K. and Song, H. Sensor synaesthesia: touch in motion, and motion in touch. In Proc. CHI 2011. ACM Press (2011), 801-810.
15. Hoggan, E., Williamson, J., Oulasvirta, A., Nacenta, M., Kristensson, P.O., and Lehtiö, A. Multi-touch rotation gestures: Performance and ergonomics. In Proc. CHI 2013. ACM Press (2013), 3047-3050.
16. Hwang, S., Bianchi, A., and Wohn, K. VibPress: estimating pressure input using vibration absorption on mobile devices. In Proc. MobileHCI 2013. ACM Press (2013), 31-34.
17. Kim, D., Hilliges, O., Izadi, S., Butler, A.D., Chen, J., Oikonomidis, I., and Olivier, P. Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor. In Proc. UIST 2012. ACM Press (2012), 167-176.
18. Kleimola, J., Laine, M., Litvinova, E., and Vuorimaa, P. TouchModifier: enriched multi-touch gestures for tablet browsers. In Proc. ITS 2013. ACM Press (2013), 445-448.
19. Lopes, P., Jota, R., and Jorge, J.A. Augmenting touch interaction through acoustic sensing. In Proc. ITS 2011. ACM Press (2011), 53-56.
20. Madgwick, S.O.H., Harrison, A.J.L., and Vaidyanathan, R. Estimation of IMU and MARG orientation using a gradient descent algorithm. In Proc. ICORR 2011. IEEE (2011), 1-7.
21. Marquardt, N., Jota, R., Greenberg, S., and Jorge, J.A. The continuous interaction space: interaction techniques unifying touch and gesture on and above a digital surface. In Proc. INTERACT 2011. Springer-Verlag (2011), 461-476.
22. Microsoft Band. Available at: http://www.microsoft.com/Microsoft-Band/en-us [Accessed March 2015]
23. Murugappan, S., Vinayak, Elmqvist, N., and Ramani, K. Extended multitouch: recovering touch posture and differentiating users using a depth camera. In Proc. UIST 2012. ACM Press (2012), 487-496.
24. Nguyen, Q. and Kipp, M. Orientation matters: efficiency of translation-rotation multitouch tasks. In Proc. CHI 2014. ACM Press (2014), 2013-2016.
25. OpenMovement WAX9. Available at: http://github.com/digitalinteraction/openmovement/wiki/WAX9 [Accessed January 2015]
26. Rahman, M., Gustafson, S., Irani, P., and Subramanian, S. Tilt techniques: investigating the dexterity of wrist-based input. In Proc. CHI 2009. ACM Press (2009), 1943-1952.
27. Ramos, G., Boulos, M., and Balakrishnan, R. Pressure widgets. In Proc. CHI 2004. ACM Press (2004), 487-494.
28. Rogers, S., Williamson, J., Stewart, C., and Murray-Smith, R. AnglePose: robust, precise capacitive touch tracking via 3D orientation estimation. In Proc. CHI 2011. ACM Press (2011), 2575-2584.
29. Sturman, D.J. and Zeltzer, D. A Survey of Glove-based Input. IEEE Comput. Graph. Appl. 14(1) (1994), 30-39.
30. Wang, F. and Ren, X. Empirical evaluation for finger input properties in multi-touch interaction. In Proc. CHI 2009. ACM Press (2009), 1063-1072.
31. Wang, F., Cao, X., Ren, X., and Irani, P. Detecting and leveraging finger orientation for interaction with direct-touch surfaces. In Proc. UIST 2009. ACM Press (2009), 23-32.
32. Wilson, A.D., Izadi, S., Hilliges, O., Garcia-Mendoza, A., and Kirk, D. Bringing physics to the surface. In Proc. UIST 2008. ACM Press (2008), 67-76.
33. RAC7 Games: Dark Echo. Available at: http://www.rac7.com [Accessed January 2015]