Session Presentation Presenter Affiliation Title Timeslot
Ses. 1 Opening Asst. Prof. Hao Cheng Uni. of Twente, Netherlands Opening remarks 09:00-09:10
Invited speaker 1 (Online) Prof. Junmin Wang UT Austin, US Quantification of Driver Attentiveness and Impact of Visual Communication in Vehicle Lane-keeping Automation 09:10-09:40
Invited speaker 2 (Online) Prof. Takahiro Wada NAIST, Japan Computational Models of Human Self-motion Perception and Motion Sickness based on Subjective Vertical Mismatch Theory and Its Application to Vehicle Motion Comfort 09:40-10:10
Coffee break 10:10-10:30
Ses. 2 Invited speaker 3 Prof. Daofei Li Zhejiang Uni., China Human-centered approach to automated driving: from decision-making and motion planning to control 10:30-11:00
Invited speaker 4 Prof. Fang You Tongji Uni., China Design of Interactive Safety Warning Information for External Display Interfaces of Autonomous Driving Vehicles 11:00-11:30
Invited speaker 5 Bo Jiang HUST, China Advancing E2E-AD via Multimodal Planning, Reinforced Fine-Tuning, and Language Modality Integration 11:30-12:00
Lunch break 12:00-13:30
Ses. 3 Invited speaker 6 Prof. Adriana Tapus, Dr. Zhegong Shangguan ENSTA Paris, France Beyond Signals: Touch, Emotion, and Arousal in Human Driving Behavior 13:30-14:15
Invited speaker 7 Prof. Jianmin Wang Tongji Uni., China IXAI: Generative Design of Automotive Styling Based on Inception Convolution with Explainable AI 14:15-14:45
Invited speaker 8 Prof. Tatsuya Mori Waseda University, Tokyo, Japan Security and Privacy of Connected Vehicles: Gaps in Perception among AI, Humans, and Developers 14:45-15:15
Coffee break 15:15-15:30
Ses. 4 Poster 1 Lucas Elbert Suryana, Ashwin George, Lorenzo Flipse, Simeon C. Calvert, Bart van Arem, Luciano Cavalcante Siebert, David Abbink, Arkady Zgonnikov TU Delft When Faster Isn’t Always Better: Between- and Within-Mode Effects of Reaction Time on Perceived Control in Shared Automated Driving 15:30-16:30
Poster 2 Keke Long, Xiaowei Shi, Yang Li, Zhiwei Chen, Yuan Wang, Xiaopeng Li University of Wisconsin-Milwaukee, Drexel University, University of South Florida Before and after riding: changing comfort attitude towards autonomous shuttles from perspectives as riders, drivers, and pedestrians 15:30-16:30
Poster 3 Hanyang Zhuang, Longsheng Wang, Chunxiang Wang, Ming Yang Shanghai Jiao Tong University Human Machine Interface for Remote Takeover of Automated Vehicles 15:30-16:30
Poster 4 Yunhao Cai, Yueying Chu, Peng Liu Zhejiang University Restoring Trust in Automated Vehicles: The Effect of Single Strategy and Combined Strategies 15:30-16:30
Poster 5 Lan Lan, Yuchu Chen, Haizhou Gong, Haigen Min, Peng Liu Zhejiang University, Chang’an University Remote driving in the eyes of passengers 15:30-16:30
Roundtable discussion Interactive discussions for research and collaboration in HRI and HVI. 16:30-17:00
Closing Asst. Prof. Hao Cheng Uni. of Twente, Netherlands Closing remarks 17:00-17:10

INVITED PRESENTATION 1 (20MIN + 10MIN)
Quantification of Driver Attentiveness and Impact of Visual Communication in Vehicle Lane-keeping Automation
Speaker Photo

Prof. Dr. Junmin Wang

IEEE Fellow – ASME Fellow – SAE Fellow
Fletcher Stuckey Pratt Chair Professor in Engineering
University of Texas at Austin

Abstract
This talk presents recent advances in understanding and supporting human attentiveness in vehicle lane-keeping automation. First, we introduce a personalizable, physics-based model that quantifies individual drivers’ expected attentiveness under varying driving conditions. The model formulates cognitive load as a function of vehicle speed and road curvature and leverages intuitive, gaze-derived indicators that provide greater interpretability than conventional metrics and allow for driver-specific customization. Using a high-fidelity driving simulator and an eye-tracking system, we integrated objective gaze data with subjective ratings via the NASA Task Load Index to optimize model parameters and validate predictive performance through human subject experiments. The model enables estimation of whether a driver is under-, over-, or appropriately attentive, supporting human-centric vehicle automation. Complementing this modeling perspective, we also explore the role of visual communication in shaping driver-vehicle collaboration. Visual displays are widely used in vehicle automation systems to convey the vehicle’s perception of the environment to drivers and passengers, but the effect of communication detail on human workload, engagement, and acceptance remains underexplored. To address this, we conducted a pilot study in a driving simulator to assess how varying levels of detail in visual communication impact drivers. Both objective and subjective evaluations were performed, showing good agreement across assessment methods. Together, these studies highlight how quantifying driver attentiveness and systematically designing visual communication can jointly advance safe, effective, and human-centered lane-keeping automation.
Biography
Junmin Wang is the Fletcher Stuckey Pratt Chair in Engineering and a Professor in Mechanical Engineering at the University of Texas at Austin. In 2008, he started his academic career at Ohio State University, where he founded the Vehicle Systems and Control Laboratory and was promoted early to Associate Professor in September 2013 and to Full Professor in June 2016. In 2018, he left Ohio State University and joined the University of Texas at Austin as the Accenture Endowed Professor in Mechanical Engineering. Professor Wang has a wide range of research interests covering control, modeling, estimation, optimization, diagnosis, and AI for dynamical systems, especially for automotive, vehicle, transportation, mobility, human-automation, robotic, energy storage, and manufacturing applications. Prof. Wang’s research contributions include the development of control and estimation methods that advance the efficiency, driving safety, and emission performance of conventional, electrified, connected, and autonomous/automated vehicles. He has five years of full-time industrial research experience (2003-2008) at Southwest Research Institute (San Antonio, Texas), where he was a Senior Research Engineer and led research projects sponsored by more than 50 industrial companies and governmental agencies worldwide. Prof. Wang’s research programs at UT Austin and Ohio State University have been funded by federal agencies and industrial companies such as the National Science Foundation (NSF), the Office of Naval Research (ONR), the Department of Energy (DOE), the U.S. Department of Transportation (DOT), the National Highway Traffic Safety Administration (NHTSA), the Army Research Lab (ARL), the Texas Department of Transportation, GM, Ford, Honda, Tenneco, Eaton, Ftech, Denso, and others.

INVITED PRESENTATION 2 (20MIN + 10MIN)
Computational Models of Human Self-motion Perception and Motion Sickness based on Subjective Vertical Mismatch Theory and Its Application to Vehicle Motion Comfort
Speaker Photo

Prof. Dr. Takahiro Wada

Graduate School of Science and Technology, NAIST, Japan

Abstract
The importance of enhancing motion comfort in vehicles has been growing with the rapid development of automated vehicles and the increasing use of in-vehicle digital devices. Understanding human motion comfort requires a quantitative framework of self-motion perception, including motion sickness under various motion exposures such as vehicle dynamics. In this talk, I will provide an overview of recent research on modeling motion sickness and self-motion perception based on the Subjective Vertical Mismatch Theory, which has attracted increasing attention in recent years. Furthermore, I will present applications of these computational models to the study of motion perception and comfort in vehicles, including automated vehicles.
Biography
Prof. Takahiro Wada received his Ph.D. degree in Robotics from Ritsumeikan University in 1999. He then joined the Department of Robotics at the same university as a Research Associate. In 2000, he moved to Kagawa University as an Assistant Professor and was promoted to Associate Professor in 2007. In 2012, he became a Full Professor at the College of Information Science and Engineering, Ritsumeikan University. Since 2021, he has been a Full Professor at the Nara Institute of Science and Technology. His research interests include human–machine systems, human modeling, and human–robot interaction. He has served as Chair of the IEEE SMC Society Technical Committee on Haptic Shared Control and as Co-Chair of the IFAC Technical Committee on Human-Machine Systems.

INVITED PRESENTATION 3 (20MIN + 10MIN)
Human-centered approach to automated driving: from decision-making and motion planning to control
Speaker Photo

Assoc. Prof. Dr. Daofei Li

Zhejiang University, China

Abstract
Autonomous driving (AD) is advancing rapidly. By leveraging state-of-the-art sensing, decision-making, and control technologies, the most advanced AD features currently available are capable of providing assistance close to SAE Level 3 on complex and congested urban roads (some manufacturers refer to this as Navigate on Autopilot, or NOA). However, given the diversity and unpredictability of driver and passenger needs, there is still a long way to go before fully trustworthy and satisfactory AD functions can be achieved. This talk focuses on personalized demands of drivers and passengers in terms of safety, comfort, and traffic efficiency, presenting our explorations in decision-making, motion planning, and motion control. The discussion includes research on personalized decision-making tailored to drivers' style preferences and motion sickness-aware motion planning considering passenger comfort.
Biography
Daofei Li received the B.S. degree in Vehicle Engineering from Jilin University, Changchun, China, in 2003, and the Ph.D. degree in Vehicle Engineering from Shanghai Jiao Tong University, Shanghai, China, in 2008. In June 2008, he joined the Institute for Power Machinery and Vehicular Engineering, Faculty of Engineering, Zhejiang University (ZJU), Hangzhou, China. He was a Visiting Scholar with the University of Missouri-Columbia in 2011, and later with the University of Michigan, Ann Arbor, Michigan, from 2014 to 2016. He is currently an Associate Professor at ZJU, where he directs the Research Group of Human-Mobility-Automation. His research interests include vehicle dynamics and control, driver modeling, and autonomous driving.

INVITED PRESENTATION 4 (20MIN + 10MIN)
Design of Interactive Safety Warning Information for External Display Interfaces of Autonomous Driving Vehicles
Speaker Photo

Prof. Dr. Fang You

The School of Arts and Media, Tongji University, China

Abstract
With the rapid advancement of autonomous driving technology, the communication and interaction design of external display interfaces has become a critical factor in ensuring traffic safety and enhancing human–vehicle coexistence. External displays are not only required to convey the vehicle’s intentions and status but also to provide pedestrians, cyclists, and other road users with clear and intuitive safety warnings. This talk focuses on the design of interactive safety warning information for external display interfaces in autonomous vehicles, addressing the functional positioning and interaction logic of such interfaces, the content and visual representation of safety information (including color, symbols, and dynamic cues), the application of multimodal interaction and context-awareness, and the validation of design effectiveness through user experiments and simulation studies. Building on theoretical analysis and case studies, the report proposes an AI-driven, scalable, and psychologically grounded design framework to support both the social acceptance and traffic safety of autonomous vehicles.
Biography
Fang You is a Professor and Ph.D. supervisor at the School of Arts and Media, Tongji University, where she leads the User Experience Laboratory and the Automotive Interaction Design Laboratory. She is also a Fellow of the Royal Society of Arts (FRSA). Her main research interests include cognitive interaction design, intelligent cockpit design and evaluation, hybrid space information, and embodied design. In recent years, she has led more than ten major projects, including the National Natural Science Foundation of China, the National Social Science Fund, projects funded by the Ministry of Science and Technology, and several provincial-level grants. She has published over 80 papers, authored five books and two textbooks, and holds 17 invention patents along with numerous utility and design patents. She has established long-term collaborations with companies such as Huawei, Baidu, and SAIC, as well as universities and research institutes in Germany and Australia. Her research outcomes have been widely applied in intelligent cockpits and human–computer interaction.

INVITED PRESENTATION 5 (20MIN + 10MIN)
Multimodal Understanding for Autonomous Driving
Speaker Photo

Bo Jiang

HUST Vision Lab, China

Abstract
In this talk, I will present our recent research in several areas of autonomous driving (AD). I will begin with our latest multimodal end-to-end models, VADv2 and DiffusionDrive. Next, I will introduce RAD, a closed-loop, interactive reinforcement fine-tuning framework based on 3D Gaussian Splatting. Then, I will share our work on incorporating language modalities into AD. This includes Senna, a dual-system vision-language-action model, and AlphaDrive, a planning-oriented VLM based on RL and reasoning. Finally, I will outline our vision for future directions in driving-oriented VLAs and interactive driving enabled by language-based instructions.
Biography
Bo Jiang is a Ph.D. candidate at Huazhong University of Science and Technology, advised by Prof. Xinggang Wang and Prof. Wenyu Liu. His research focuses on autonomous driving and multimodal understanding. He has published 4 papers in top-tier conferences and journals, with over 840 citations on Google Scholar. His representative works include the VAD series and the MapTR series, which have garnered over 3k stars on GitHub and are widely used in both academia and industry.

INVITED PRESENTATION 6 (30MIN + 15MIN)
Beyond Signals: Touch, Emotion, and Arousal in Human Driving Behavior
Speaker Photo

Prof. Dr. Adriana Tapus and Dr. Zhegong Shangguan

ENSTA Paris, France

Abstract
TBD
Biography
Adriana Tapus is a Professor at ENSTA Paris, member of the Institut Polytechnique de Paris, in the Autonomous Systems and Robotics Laboratory of the Computer Science and Systems Engineering Unit (U2IS). In 2011, she obtained her Habilitation (HDR) for her thesis titled "Towards a personalized Human-Robot Interaction". She obtained her PhD in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland, in 2005 and her degree in Computer Engineering from the Polytechnic University of Bucharest, Romania, in 2001. She worked as a Research Associate at the University of Southern California (USC), where she was among the pioneers in the development of socially assistive robotics, contributing mainly to machine learning, human modeling, and human-robot interaction. Prof. Tapus is an Associate Editor of the International Journal of Social Robotics (IJSR), ACM Transactions on Human-Robot Interaction (THRI), and IEEE Transactions on Cognitive and Developmental Systems (TCDS), and serves on the steering committees of several major robotics conferences (IROS, ICRA, HRI, RO-MAN, etc.). She has more than 150 research publications and received the Romanian Academy Award in 2010 for her contributions to assistive robotics. In 2016, she was named one of the “25 women in robotics you need to know about”. She has been the main coordinator of several national and international projects. Since 2019, she has also been a founding member of the RoboticsByDesign lab and the Director of the Doctoral School of IP Paris.
Zhegong Shangguan is a Research Associate/Postdoctoral Researcher at the University of Manchester, specializing in cognitive robotics, human-robot interaction, and psychology. His research explores the development of human cognition, employing cognitive and developmental robotics approaches to investigate this process. By integrating these methodologies, he aims to enhance robots with advanced cognitive capabilities and expressive emotional interactions. Currently, he is working on the ERC-funded e-TALK project under the supervision of Prof. Angelo Cangelosi at the Cognitive Robotics Lab, University of Manchester. He obtained his Ph.D. in Computer Science from École Nationale Supérieure de Techniques Avancées (ENSTA), Institut Polytechnique de Paris, where he conducted research at the U2IS (L'unité d'Informatique et d'Ingénierie des Systèmes) Laboratory under the supervision of Prof. Adriana Tapus. His doctoral work focused on social robotics and trustworthy human-vehicle interaction. In addition to his research, he serves as a reviewer for leading journals and conferences, including The International Journal of Robotics Research (IJRR), the International Journal of Human-Computer Interaction (IJHCI), CSCW, IROS, and ICRA. He is also a member of the Editorial Board for Embodied Intelligence and Robotics and a committee member of the ECSCW 2025 Exploration Session.

INVITED PRESENTATION 7 (20MIN + 10MIN)
IXAI: Generative Design of Automotive Styling Based on Inception Convolution with Explainable AI
Speaker Photo

Prof. Dr. Jianmin Wang

The School of Arts and Media, Tongji University, China

Abstract
This presentation will introduce IXAI — a generative automotive styling design framework based on Inception Convolution and Explainable Artificial Intelligence (XAI). The method integrates multimodal data processing, Inception Convolution-based target detection, and Grad-CAM visualization to achieve efficient, transparent, and highly accurate styling generation. Based on this framework, we have developed a creative tool, IXAI-CAR, capable of rapidly generating high-quality automotive styling design images. Experimental results show that the method achieves an accuracy of 98.3% and improves the SSIM image quality metric by 8.24%, significantly outperforming traditional approaches. The study demonstrates that IXAI not only optimizes the design process and output quality but also provides an efficient and transparent solution for human-computer collaboration in complex design tasks, advancing the application and development of intelligent design tools in the automotive styling domain.
Biography
Jianmin Wang, Ph.D., is a Professor and Vice Dean at the School of Arts and Media, Tongji University, and a Fellow of the Royal Society of Arts (FRSA). He currently serves as Director of the Artificial Intelligence Media Research Center and Head of the User Experience Laboratory and Automotive Interaction Design Laboratory at Tongji University. His research interests focus on human–computer interaction, automotive virtual simulation, and big data visualization. He has led more than twenty national and provincial research projects, including the National 863 Program, the National Natural Science Foundation of China, and the National Science and Technology Support Program, as well as collaborative projects with leading enterprises such as Huawei, Mitsubishi Electric, and China Telecom. He holds 44 authorized invention patents, has published over 80 papers, and has authored several monographs. His achievements have been recognized with the National Science and Technology Progress Award (Second Prize), the Ministry of Education Science and Technology Progress Award (First Prize), the Guangdong Science and Technology Progress Award, and the Guangdong Dingying Science and Technology Award. He has also served as a member of the Ministry of Education’s Teaching Steering Committee for Animation and Digital Media, and as a standing member of the Human–Computer Interaction Committee of the China Computer Federation (CCF).

INVITED PRESENTATION 8 (20MIN + 10MIN)
Security and Privacy of Connected Vehicles: Gaps in Perception among AI, Humans, and Developers
Speaker Photo

Prof. Dr. Tatsuya Mori

Waseda University, Tokyo, Japan

Abstract
Autonomous and connected vehicles are redefining mobility, yet their reliance on machine perception and large-scale data collection introduces new security and privacy challenges. This talk introduces a series of our recent studies addressing these challenges from both technical and human perspectives. We first introduce Shadow Hack, an adversarial attack that manipulates naturally occurring object shadows in LiDAR point clouds to deceive object detection models. By optimizing shadow geometry and materials, Shadow Hack achieves near-perfect attack success, while our proposed BB-Validator defense provides complete mitigation. We then present the Adversarial Retroreflective Patch (ARP), a stealthy nighttime attack on traffic sign recognition that activates only under headlight illumination. ARP achieves over 93% attack success in physical environments, and our user study shows that participants perceived ARP-modified signs as nearly natural (average score 2.04 vs. 1.81 for benign signs), whereas conventional patch attacks were easily detected (≥3.69). We also explore privacy perceptions in connected vehicles through large-scale user surveys, revealing that sensitivity depends strongly on data type and user role. Biometric, audio, and visual data evoke the greatest concern, while drivers show higher tolerance than passengers and pedestrians. These findings highlight the importance of transparency and role-aware data practices for trustworthy autonomous mobility.
Biography
Tatsuya Mori is currently a professor at Waseda University, Tokyo, Japan. He joined NTT Laboratories in 1999 and moved to Waseda University in 2013. From July to December 2024, he was a visiting professor at the Politecnico di Milano. He has been engaged in research on network measurement, security, and privacy. He has received several best paper awards, including at NDSS 2020, EuroUSEC 2021, and ACM ASIACCS 2025.