Causal Reinforcement Learning for Human-Machine Interaction in Social Contexts
Abstract
This thesis investigates the integration of causal inference with reinforcement learning (RL) to improve human-machine interaction (HMI) in social contexts. Intelligent systems, particularly assistive robots, require sequential decision-making capabilities that allow them to interact dynamically and cooperatively with humans. Traditional RL frameworks have shown limitations in social settings, where understanding and modeling human-like social cognition are essential. This work addresses these limitations by proposing Causal Reinforcement Learning (CRL) as a framework to enhance HMI applications, particularly for children with neuromotor disabilities.
Our research first explores RL and causal inference methodologies, focusing on strategies for incorporating causal reasoning into social interactions. We design a collaborative game environment for data collection in which children with neuromotor disabilities interact with peers. From these interactions, we develop a causal model of social cognition, which informs the behavior of a CRL agent trained to adapt and respond to human strategic behavior. A specific application is demonstrated through the Iterated Prisoner’s Dilemma (IPD), where a CRL agent uses causal insights to learn and predict opponent strategies, achieving improved cooperation.
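To illustrate the IPD setting, the sketch below shows a minimal agent that maintains a smoothed estimate of P(opponent cooperates | my previous move), a toy stand-in for the causal opponent model described above, and plays whichever repeated move that model predicts will yield the higher long-run payoff. All names, payoff values, and design choices here are illustrative assumptions rather than the thesis's implementation, and the opponent is a standard tit-for-tat strategy.

```python
# Classic IPD payoff matrix: PAYOFFS[(my_move, opp_move)] = my payoff.
# "C" = cooperate, "D" = defect. Values are the textbook defaults.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class CausalOpponentModel:
    """Estimates P(opponent cooperates | my previous move) with Laplace
    smoothing. This is an illustrative stand-in for a learned causal model
    of the opponent's strategy, not the model developed in the thesis."""

    def __init__(self):
        self.coop = {"C": 1, "D": 1}   # smoothed cooperation counts
        self.total = {"C": 2, "D": 2}

    def p_coop(self, my_prev_move):
        return self.coop[my_prev_move] / self.total[my_prev_move]

    def update(self, my_prev_move, opp_move):
        # Attribute the opponent's current move to my previous move.
        self.total[my_prev_move] += 1
        if opp_move == "C":
            self.coop[my_prev_move] += 1

def choose_move(model):
    # Compare the predicted steady-state payoff of always playing C versus
    # always playing D: if I keep playing m, the model says the opponent
    # cooperates with probability p_coop(m). Cooperate on ties.
    def steady_value(m):
        p = model.p_coop(m)
        return p * PAYOFFS[(m, "C")] + (1 - p) * PAYOFFS[(m, "D")]
    return "C" if steady_value("C") >= steady_value("D") else "D"

def play(rounds=100):
    model = CausalOpponentModel()
    prev, score, moves = None, 0, []
    for _ in range(rounds):
        my_move = "C" if prev is None else choose_move(model)
        opp_move = prev if prev is not None else "C"  # tit-for-tat opponent
        if prev is not None:
            model.update(prev, opp_move)
        score += PAYOFFS[(my_move, opp_move)]
        moves.append((my_move, opp_move))
        prev = my_move
    return score, moves, model

score, moves, model = play()
```

Against tit-for-tat, the agent briefly probes with defection, learns that defection causes the opponent to defect, and settles into mutual cooperation, which is the qualitative behavior the abstract describes for the CRL agent.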
Key contributions include the development of a novel CRL strategy, the creation of a collaborative gaming system for data collection and analysis, and the successful implementation of CRL in IPD scenarios. This work underscores the potential of causal inference in RL to foster more intuitive and socially aware interactions, with applications in assistive robotics and social cognitive training. The research also lays the groundwork for future developments in CRL methodologies, aiming to bridge the gap between artificial agents and human social cognition.
