Research Threads.
Ongoing Research.
Learning Multi-Agent Communication (Multi-Agent Reinforcement Learning)
To date, this thread of research has explored enabling a population of embodied agents to learn nonverbal communication protocols through physical actuation of their joints. Each sender agent in the population must learn a policy that maps a given set of concepts (communicative intents) to messages (motion trajectories) for communicating with receiver agents. The learned protocols operate over high-dimensional continuous channels and emerge through agent-agent interaction (self-play). We employ a minimal set of realistic constraints as common ground (background knowledge presumed shared by all agents) to guide protocol learning.
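To make the setup concrete, below is a minimal self-play sketch of this kind of sender-receiver learning, under toy assumptions: concepts are one-hot vectors, "messages" are short continuous trajectories standing in for joint motions, and reward is communicative success (the receiver decoding the sender's concept). All module names, dimensions, and hyperparameters are illustrative, not this thread's actual implementation.

```python
import torch
import torch.nn as nn

N_CONCEPTS, TRAJ_LEN, N_JOINTS = 5, 8, 2

class Sender(nn.Module):
    """Maps a concept (communicative intent) to a Gaussian over trajectories."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_CONCEPTS, 64), nn.Tanh(),
                                 nn.Linear(64, TRAJ_LEN * N_JOINTS))
        self.log_std = nn.Parameter(torch.zeros(TRAJ_LEN * N_JOINTS))

    def forward(self, concept_onehot):
        mean = self.net(concept_onehot)
        return torch.distributions.Normal(mean, self.log_std.exp())

class Receiver(nn.Module):
    """Maps an observed trajectory back to logits over concepts."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(TRAJ_LEN * N_JOINTS, 64), nn.Tanh(),
                                 nn.Linear(64, N_CONCEPTS))

    def forward(self, traj):
        return self.net(traj)

sender, receiver = Sender(), Receiver()
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()),
                       lr=1e-3)

for step in range(2000):
    concepts = torch.randint(0, N_CONCEPTS, (32,))
    onehot = nn.functional.one_hot(concepts, N_CONCEPTS).float()
    dist = sender(onehot)
    traj = dist.sample()                  # continuous, high-dimensional channel
    logits = receiver(traj)
    reward = (logits.argmax(dim=-1) == concepts).float()  # communicative success
    # REINFORCE (with a mean baseline) for the sender; supervised
    # cross-entropy for the receiver.
    log_prob = dist.log_prob(traj).sum(dim=-1)
    sender_loss = -((reward - reward.mean()) * log_prob).mean()
    receiver_loss = nn.functional.cross_entropy(logits, concepts)
    opt.zero_grad()
    (sender_loss + receiver_loss).backward()
    opt.step()
```

In this sketch the protocol is not designed in advance: whatever trajectory-to-concept mapping the pair converges on is the emergent protocol, shaped only by the reward and the channel's constraints.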
Keywords: Multi-Agent Communication, Zero-Shot Communication, Emergent Communication, Multi-Agent Learning, Multi-Agent Reinforcement Learning
Past Research.
Learning from Human Agents (Interactive Robot Learning)
This body of work investigated how a robotic agent can leverage interaction with a human partner to efficiently learn to ground concepts in its environment. It explored two general paradigms for interactive learning: (a) passive learning from human demonstrations and (b) active learning through querying a human partner. Notably, the active learning work contributed autonomous reasoning capabilities for a learning agent coexisting with a human in a non-stationary environment. In this more realistic problem setting, an agent must continually update its concept models as the environment changes, and it exploits interaction to do so. This work employed a probabilistic reasoning framework.
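As a rough illustration of the active-learning paradigm, the sketch below pairs uncertainty-driven querying with a simple Beta-Bernoulli concept model and an exponential forgetting factor to track a changing world. This is an assumed toy stand-in for the thread's probabilistic reasoning framework, not the actual method; the concept names and the simulated oracle are hypothetical.

```python
import random

class ConceptModel:
    def __init__(self, forgetting=0.99):
        self.alpha, self.beta = 1.0, 1.0  # Beta prior over "object fits concept"
        self.forgetting = forgetting

    def decay(self):
        # Exponentially discount old evidence so the model can track change
        # in a non-stationary environment.
        self.alpha = 1.0 + self.forgetting * (self.alpha - 1.0)
        self.beta = 1.0 + self.forgetting * (self.beta - 1.0)

    def update(self, label: bool):
        self.decay()
        if label:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

    def uncertainty(self):
        # Variance of the Beta posterior: high when evidence is scarce or stale.
        n = self.alpha + self.beta
        return (self.alpha * self.beta) / (n * n * (n + 1.0))

def choose_query(models):
    """Ask the human about the concept the agent is least certain of."""
    return max(models, key=lambda name: models[name].uncertainty())

# Toy interaction loop with a simulated human oracle whose answers drift
# mid-run, standing in for environmental change.
models = {name: ConceptModel() for name in ["red", "cup", "heavy"]}
for t in range(100):
    query = choose_query(models)
    drifted = t > 50 and query == "heavy"
    answer = random.random() < (0.2 if drifted else 0.8)
    models[query].update(answer)
print({name: round(m.mean(), 2) for name, m in models.items()})
```

The design point the sketch conveys is that querying is targeted where posterior uncertainty is highest, and discounting old evidence keeps the concept models responsive after the environment shifts.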
Keywords: Active Learning, Learning from Demonstration, Interactive Learning, Interactive Robot Learning, Human-AI Interaction, Human-Robot Interaction