As part of the MOTIVATE XR Horizon Europe project, our team has explored how Extended Reality (XR) can enable more natural and efficient interaction with industrial robots. The outcome is our new research paper, “Gesture-Driven Remote Robot Mission Activation through Smart Glasses: An XR Interaction Pipeline for Industrial Assistance,” which presents a complete pipeline for controlling a mobile robot using bare-hand gestures performed via LeonardoXR smart glasses.
A Full Gesture-to-Robot Pipeline
The work demonstrates how XR wearables can serve as a hands-free command interface in industrial settings. Our pipeline spans the entire interaction loop:
- Hand tracking and gesture capture using the integrated cameras of LeonardoXR.
- On-device gesture classification through an int8-quantised Handformer neural model, deployed on the Qualcomm Hexagon 698 NPU for low-latency inference.
- Semantic mapping from gestures to robot missions inside a Unity-based XR application (see the sketch after this list).
- Secure transmission of mission commands over a VPN provided by the BI-REX Competence Center.
- Execution on a MiR250 autonomous mobile robot, located 700 km away in the BI-REX smart manufacturing laboratory.
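As a rough illustration of the last three steps, the snippet below maps a recognised gesture label to a robot mission and queues it over REST. It is a minimal sketch, not the project's implementation: the robot address, credentials, gesture labels, mission GUIDs, and the dispatch_mission helper are all placeholders we invented, although the endpoint shape (POST to mission_queue with a mission_id) follows MiR's published REST API.

```python
import requests

# Placeholders: the real robot address, credentials, gesture labels, and
# mission GUIDs live in the deployment's configuration, not in the paper.
ROBOT_URL = "http://mir250.example.local/api/v2.0.0"  # reached via the VPN
HEADERS = {"Authorization": "Basic <base64-credentials>"}

# Hypothetical mapping from classifier output labels to MiR mission GUIDs.
GESTURE_TO_MISSION = {
    "swipe_right": "<guid-of-delivery-mission>",
    "fist_hold": "<guid-of-docking-mission>",
}

def dispatch_mission(gesture_label: str) -> None:
    """Translate a recognised gesture into a queued robot mission."""
    mission_id = GESTURE_TO_MISSION.get(gesture_label)
    if mission_id is None:
        return  # unmapped gesture: do nothing rather than move the robot
    # MiR robots queue missions with POST /mission_queue.
    response = requests.post(
        f"{ROBOT_URL}/mission_queue",
        json={"mission_id": mission_id},
        headers=HEADERS,
        timeout=5,
    )
    response.raise_for_status()

dispatch_mission("swipe_right")
```

Keeping the gesture-to-mission mapping in a plain lookup table is one way to let the gesture set grow without touching the transport layer.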
This approach eliminates the need for physical controllers and reduces the dependence on external compute resources, making the XR headset itself a standalone interaction device.
A Distributed Industrial Pilot
A key aspect of the study is the distributed nature of the pilot. While the operator performed gestures at the Youbiquo lab in Southern Italy, the MiR250 robot executed missions in Bologna. Despite the geographical separation, the system maintained:
- 84% accuracy for dynamic gesture recognition
- 91% accuracy for static gestures
- ≈230 ms end-to-end latency, including VPN and REST API interactions
This level of responsiveness shows that on-device inference and lightweight communication are sufficient for real-time robot control in geographically distributed settings.
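For context, the VPN and REST portion of such a latency budget can be measured by timestamping around the blocking mission call. The helper below is an illustrative sketch that reuses the hypothetical dispatch_mission from the previous snippet; it is not the instrumentation used in the study.

```python
import time

def timed_dispatch(gesture_label: str) -> float:
    """Round-trip time in ms from command send to robot acknowledgement.

    Covers only the VPN/REST segment of the end-to-end budget; gesture
    capture and on-device inference happen earlier, on the headset.
    """
    start = time.perf_counter()
    dispatch_mission(gesture_label)  # hypothetical helper from the sketch above
    return (time.perf_counter() - start) * 1000.0
```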
Cross-Platform XR Integration
Recognising the importance of portability, we also integrated the gesture recognition layer into an OpenXR-based application running on a standalone headset. This experiment confirms that the gesture abstraction we designed can extend beyond a single device ecosystem, supporting the broader MOTIVATE XR goal of hardware-agnostic XR interaction frameworks.
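One plausible way to realise such a device-agnostic abstraction is to hide each headset runtime behind a common interface, so the mission logic never touches vendor APIs. The sketch below is our illustrative reading of that idea with hypothetical names, not the project's actual class design.

```python
from abc import ABC, abstractmethod
from typing import Optional

class GestureSource(ABC):
    """Device-agnostic supplier of recognised gesture labels."""

    @abstractmethod
    def next_gesture(self) -> Optional[str]:
        """Return the next gesture label, or None when the stream is idle."""

class LeonardoXRSource(GestureSource):
    """Illustrative backend standing in for the on-device Handformer output."""

    def __init__(self, labels: list[str]):
        self._pending = list(labels)

    def next_gesture(self) -> Optional[str]:
        return self._pending.pop(0) if self._pending else None

def run_pipeline(source: GestureSource) -> None:
    """Mission logic depends only on the abstraction, not on the headset."""
    while (label := source.next_gesture()) is not None:
        print(f"dispatching mission for gesture: {label}")

# An OpenXR backend would implement the same interface over hand-joint data.
run_pipeline(LeonardoXRSource(["swipe_right", "fist_hold"]))
```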
Advancing Human–Robot Interaction in XR
Overall, this work demonstrates that gesture-driven control pipelines can be reliable, low-latency, and robust enough for industrial scenarios. By combining embedded AI, Unity-based XR interfaces, and secure remote robot communication, we provide an early blueprint for future hands-free XR-assisted industrial workflows.
The research will continue within MOTIVATE XR through expanded gesture sets, larger-scale user studies, and further validation across diverse XR devices and robotic platforms.
Author

Youbiquo
Antonio Zanesco was born in Naples, graduated in Electronic Engineering in 2001, and soon began working at CNES in Toulouse, France, where he was involved in the characterisation of active pixel sensor (APS) matrices for Earth-observation satellites.