By Pinheiro, M.; Bicho, E.; Erlhagen, W.
2010 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, BioRob 2010
We present a control architecture for non-verbal HRI that allows an assistant robot to exhibit proactive and anticipatory behavior. The architecture implements the coordination of actions and goals between the human, who needs help, and the robot as a dynamic process that integrates contextual cues, shared task knowledge, and the predicted outcome of the human's motor behavior. The robot control architecture is formalized as a coupled system of dynamic neural fields representing a distributed network of local but interconnected neural populations with specific functionalities. Different subpopulations encode task-relevant information about action means, action goals, and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic control architecture is validated in an assistive task in which an anthropomorphic robot acts as a personal assistant for a person with motor impairments. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing situations. This includes adaptation to different users and mutual compensation of physical limitations.
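The self-sustained activation patterns described in the abstract can be illustrated with a minimal one-dimensional Amari-type dynamic neural field simulation. This is a generic sketch of the field dynamics, not the paper's actual implementation; all parameter values, the kernel shape, and the input profile below are hypothetical choices for illustration.

```python
import numpy as np

# Minimal Amari-type dynamic neural field (illustrative sketch only):
#   tau * du/dt = -u + h + S(x, t) + integral of w(x - x') * f(u(x', t)) dx'
# A transient localized input (e.g. evidence for an observed action goal)
# creates an activation peak that remains self-sustained after the input
# is removed, due to the recurrent lateral interactions.

N = 181                        # number of spatial samples over the field
x = np.linspace(-90.0, 90.0, N)  # field dimension (hypothetical units)
dx = x[1] - x[0]

tau = 10.0                     # time scale of the field dynamics
h = -2.0                       # resting level (below the output threshold)

def gauss(z, sigma):
    return np.exp(-z**2 / (2.0 * sigma**2))

# Lateral interaction kernel: local excitation with surround inhibition,
# a common choice for fields supporting self-sustained peaks.
w = 4.0 * gauss(x[:, None] - x[None, :], 6.0) - 1.0

def f(u, beta=4.0):
    # Sigmoidal output nonlinearity: only sufficiently activated
    # sites contribute to the recurrent interaction.
    return 1.0 / (1.0 + np.exp(-beta * u))

u = h * np.ones(N)             # field starts at the resting level
dt = 1.0
for t in range(400):
    # Transient localized input at x = 20, switched off after t = 150.
    S = 6.0 * gauss(x - 20.0, 5.0) if t < 150 else 0.0
    du = (-u + h + S + (w @ f(u)) * dx) / tau
    u = u + dt * du

# After the input is gone, a self-sustained peak persists near x = 20,
# acting as a working-memory representation of the encoded information.
```

In the full architecture such fields are coupled, so that a sustained peak in one population (e.g. an inferred action goal) serves as input to another (e.g. selection of a complementary action); the sketch above shows only a single field.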