
Thursday, 13 November 2014
All sessions will take place in the Albert Long Hall, except for the Demos and the Posters, which will be in the Vedat Yerlici Conference Hall.
Nominees for the Best Paper Award and the Student Best Paper Award are marked with a star (*).
Welcome
09:00
Albert Salah
Keynote 1: Bursting our Digital Bubbles: Life Beyond the App
09:15
Yvonne Rogers
Session Chair: Oya Aran
10:15
Break
10:45-12:25
Oral Session 1: Dialogue and Social Interaction
Session Chair: TBA
10:45 * Managing Human-Robot Engagement with Forecasts and ... um ... Hesitations (1589)
Dan Bohus and Eric Horvitz
11:10 Written Activity, Representations and Fluency as Predictors of Domain Expertise in Mathematics (1756)
Sharon Oviatt and Adrienne Cohen
11:35 * Analysis of Respiration for Prediction of “Who Will Be Next Speaker and When?” in Multi-Party
Meetings (2588)
Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, and Junji Yamato
12:00 A Multimodal In-Car Dialogue System That Tracks The Driver's Attention (1702)
Spyros Kousidis, Casey Kennington, Timo Baumann, Hendrik Buschmeier, Stefan Kopp, and David
Schlangen
12:25
Lunch
TBA
14:00-15:30
Oral Session 2: Multimodal Fusion
Session Chair: TBA
14:00 * Deep Multimodal Fusion: Combining Discrete Events and Continuous Signals (1184)
Héctor P. Martínez and Georgios N. Yannakakis
14:25 * The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during
Tutoring (2386)
Joseph F. Grafsgaard, Joseph B. Wiggins, Alexandria Katarina Vail, Kristy Elizabeth Boyer, Eric N. Wiebe,
and James C. Lester
14:50 Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal
Prediction Approach (2346)
Sunghyun Park, Han Suk Shim, Moitreya Chatterjee, Kenji Sagae, and Louis-Philippe Morency
15:15 Deception detection using a multimodal approach (1049)
Mohamed Abouelenien, Verónica Pérez-Rosas, Rada Mihalcea and Mihai Burzo
15:40
Break
16:00-18:00
Demo Session 1
Session Chairs: Lale Akarun and Kazuhiro Otsuka
Demo paper title (ID#)
Authors…
Demo paper title (ID#)
Authors…
Demo paper title (ID#)
Authors…
Demo paper title (ID#)
Authors…
16:00-18:00
Poster Session 1
Session Chair: TBA
Detecting Conversing Groups with a Single Worn Accelerometer (1038)
Hayley Hung, Gwenn Englebienne, and Laura Cabrera Quiros
Identification of the Driver’s Interest Point using a Head Pose Trajectory for Situated Dialog Systems
(1055)
Young-Ho Kim and Teruhisa Misu
An Explorative Study on Crossmodal Congruence Between Visual and Tactile Icons Based on Emotional
Responses (1106)
Taekbeom Yoo, Yongjae Yoo, and Seungmoon Choi
Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News (1234)
Joseph G. Ellis, Brendan Jou, and Shih-Fu Chang
Dyadic Behavior Analysis in Depression Severity Assessment Interviews (1241)
Stefan Scherer, Zakia Hammal, Ying Yang, Louis-Philippe Morency, and Jeffrey Cohn
Touching the Void - Introducing CoST: Corpus of Social Touch (1596)
Merel M. Jung, Ronald Poppe, Mannes Poel, and Dirk K. J. Heylen
Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition (1826)
Gloria Zen, Enver Sangineto, Elisa Ricci, and Nicu Sebe
Predicting Influential Statements in Group Discussions using Speech and Head Motion Information (1850)
Fumio Nihei, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, and Shogo Okada
The relation of eye gaze and face pose: Potential impact on speech recognition (1973)
Malcolm Slaney, Andreas Stolcke, and Dilek Hakkani-Tur
Speech-Driven Animation Constrained by Appropriate Discourse Functions (2018)
Najmeh Sadoughi, Yang Liu, and Carlos Busso
Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration (2025)
Martin Halvey and Andy Crossan
Multimodal Interaction History and its use in Error Detection and Recovery (2073)
Felix Schussel, Frank Honold, Miriam Schmidt, Nikola Bubalo, Michael Weber, and Anke Huckauf
Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations (2113)
Radu-Daniel Vatavu, Lisa Anthony, and Jacob O. Wobbrock
Personal Aesthetics for Soft Biometrics: a Generative Multi-resolution Approach (2246)
Cristina Segalin, Alessandro Perina, and Marco Cristani
Synchronising Physiological and Behavioural Sensors in a Driving Simulator (2369)
Ronnie Taib, Benjamin Itzstein, and Kun Yu
Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions (2376)
Henny Admoni and Brian Scassellati
Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues (2435)
Lei Chen, Gary Feng, Jilliam Joe, Chee Wee Leong, Christopher Kitchen, and Chong Min Lee
Increasing Customers' Attention using Implicit and Explicit Interaction in Urban Advertisement (2484)
Matthias Woelfel and Luigi Bucchino
System for Presenting and Creating Smell Effects to Video (2535)
Risa Suzuki, Shutaro Homma, Eri Matsuura, and Ken-ichi Okada
CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association
(2561)
Andrew Wilson and Hrvoje Benko
Statistical Analysis of Personality and Identity in Chats Using a Keylogging Platform (2626)
Giorgio Roffo, Cinzia Giorgetta, Roberta Ferrario, Walter Riviera, and Marco Cristani
Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation (2671)
Yosra Rekik, Radu-Daniel Vatavu, and Laurent Grisoni
A Multimodal Context-based Approach for Distress Assessment (2766)
Sayan Ghosh, Moitreya Chatterjee, and Louis-Philippe Morency
Exploring a Model of Gaze for Grounding in Multimodal HRI (2792)
Gregor Mehlmann, Kathrin Janowski, Tobias Baur, Markus Haring, Elisabeth Andre, and Patrick Gebhard
Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model (2805)
Alexandria Katarina Vail, Joseph F. Grafsgaard, Joseph B. Wiggins, James C. Lester, and Kristy Elizabeth
Boyer
Eye gaze for spoken language understanding in multi-modal conversational interactions (2826)
Dilek Hakkani-Tur, Malcolm Slaney, Asli Celikyilmaz, and Larry Heck
SoundFlex: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces (2859)
Koray Tahiroglu, Thomas Svedstrom, Valtteri Wikstrom, Simon Overstall, Johan Kildal, and Teemu
Ahmaniemi
Investigating Intrusiveness of Workload Adaptation (2866)
Felix Putze and Tanja Schultz
18:00
Welcome Reception
Friday, 14 November 2014
All sessions will take place in the Albert Long Hall, except for the Demos, which will be in the Vedat Yerlici Conference Hall.
Keynote 2: Smart Multimodal Interaction Through Big Data
09:00
Cafer Tosun
Session Chair: Louis-Philippe Morency
10:00
Break
10:30-12:00
Oral Session 3: Affect and Cognitive Modeling
Session Chair: TBA
10:30 Natural Communication about Uncertainties in Situated Interaction (1896)
Tomislav Pejsa, Dan Bohus, Michael Cohen, Chit Saw, James Mahoney and Eric Horvitz
10:55 The SWELL Knowledge Work Dataset for Stress and User Modeling Research (2189)
Saskia Koldijk, Maya Sappelli, Suzan Verberne, Mark Neerincx and Wessel Kraaij
11:20 Rhythmic body movements of laughter (1540)
Radoslaw Niewiadomski, Maurizio Mancini, Yu Ding, Catherine Pelachaud and Gualtiero Volpe
11:45 Automatic blinking detection towards stress discovery (1520)
Alvaro Marcos-Ramiro, Marta Marron-Romera, Daniel Pizarro-Perez and Daniel Gatica-Perez
12:00
Lunch
TBA
13:30-15:00
Oral Session 4: Nonverbal Behaviors
Session Chair: TBA
13:30 * Mid-air Authentication Gestures: An Exploration of Authentication based on Palm and Finger Motions
(1820)
Ilhan Aslan, Andreas Uhl, Alexander Meschtscherjakov and Manfred Tscheligi
13:55 * Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors (2204)
Marwa Mahmoud, Tadas Baltrusaitis and Peter Robinson
14:20 Capturing Upper Body Motion in Conversation: an Appearance Quasi-Invariant Approach (2459)
Alvaro Marcos-Ramiro, Marta Marron-Romera, Daniel Pizarro-Perez and Daniel Gatica-Perez
14:45 User independent gaze estimation by exploiting similarity measures in the eye pair appearance eigenspace
(1909)
Nanxiang Li and Carlos Busso
15:00
Break
15:30-16:45
Doctoral Spotlight Session
Session Chairs: Justine Cassell and Marco Cristani
Exploring Multimodality for Translator-Computer Interaction
Julián Zapata
Towards social touch intelligence: developing a robust system for automatic touch recognition
Merel Jung
Facial Expression Analysis for Estimating Pain in Clinical Settings
Karan Sikka
Realizing Robust Human-Robot Interaction under Real Environments with Noises
Takaaki Sugiyama
Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics
Heysem Kaya
Social Interactions: Understanding Technology and Stress
Ailbhe Finnerty
Multi-Resident Human Behaviour Identification in Ambient Assisted Living Environments
Hande Alemdar
Gaze-Based Proactive User Interface for Pen-Based Systems
Çağla Çığ
Appearance Based User-Independent Gaze Estimation
Nanxiang Li
Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory
Andreza Sartori
The Secret Language of Our Body - Affect and Personality Recognition Using Physiological Signals
Julia Wache
Perceptions of interpersonal behavior are influenced by gender, facial expression intensity, and head pose
Jeffrey M. Girard
Authoring Communicative Behaviors for Situated, Embodied Characters
Tomislav Pejsa
Multimodal Analysis and Modeling of Nonverbal Behaviors during Tutoring
Joseph Grafsgaard
16:45-17:30
Grand Challenge Overviews
Session Chairs: Hatice Gunes and Dirk Heylen
Third Multimodal Learning Analytics Workshop and Grand Challenges
Xavier Ochoa, Marcelo Worsley, Katherine Chiluiza and Saturnino Luz
The Second Emotion Recognition In The Wild Challenge and Workshop (EmotiW 2014)
Abhinav Dhall, Roland Goecke, Jyoti Joshi, Karan Sikka and Tom Gedeon
MAPTRAITS'14 - Personality Mapping Challenge and Workshop 2014
Hatice Gunes, Björn Schuller, Oya Celiktutan, Evangelos Sariyanidi and Florian Eyben
17:30-18:15
ICMI Awardee Talk
Session Chair: Daniel Gatica-Perez
18:30
Banquet
Buses will be leaving at XXX from TBA
Saturday, 15 November 2014
All sessions will take place in the Albert Long Hall.
Keynote 3: Computation of Emotions
09:00
Peter Robinson
Session Chair: Phil Cohen
10:00
Break
10:30-12:10
Oral Session 5: Mobile and Urban Interaction
Session Chair: TBA
10:30 Non-Visual Navigation Using Combined Audio Music and Haptic Cues (1647)
Emily Fujimoto and Matthew Turk
10:55 Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions (2941)
Euan Freeman, Stephen Brewster and Vuokko Lantz
11:20 Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data (2031)
Andrey Bogomolov, Bruno Lepri, Jacopo Staiano, Nuria Oliver, Fabio Pianesi and Alex Pentland
11:45 Impact of Coordinate Systems on 3D Manipulations in Mobile Augmented Reality (1118)
Philipp Tiefenbacher, Steven Wichert, Daniel Merget and Gerhard Rigoll
12:00
Lunch
TBA
13:30-15:00
Oral Session 6: Healthcare and Assistive Technologies
Session Chair: TBA
13:30 * Digital Reading Support for The Blind by Multimodal Interaction (2458)
Yasmine El-Glaly and Francis Quek
13:55 Measuring child visual attention using markerless head tracking from color and depth sensing cameras
(1160)
Jonathan Bidwell, Irfan Essa, Agata Rozga and Gregory Abowd
14:20 Bi-Modal Detection of Painful Reaching for Chronic Pain Rehabilitation Systems (2347)
Temitayo Olugbade, Hane Aung, Nadia Bianchi-Berthouze, Nicolai Marquardt and Amanda Williams
14:35
Break
15:05-16:05
Keynote 4: Title TBA
Alexander Waibel
Session Chair: TBA
16:05-17:15
ICMI Town Hall Meeting