Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards

Publication Title

IEEE Transactions on Neural Systems and Rehabilitation Engineering


Functional electrical stimulation (FES) employs neuroprostheses to apply electrical current to the nerves and muscles of individuals paralyzed by spinal cord injury, restoring voluntary movement. Neuroprosthesis controllers calculate stimulation patterns that produce desired actions. To date, however, no controller adapts its control strategy efficiently to the wide range of possible physiological arm characteristics, reaching movements, and user preferences, all of which vary over time. Reinforcement learning (RL) is a control strategy that can incorporate human reward signals as inputs, allowing users to shape controller behavior. In this paper, ten neurologically intact human participants assigned subjective numerical rewards to train RL controllers by evaluating animations of goal-oriented reaching tasks performed by a planar musculoskeletal human arm simulation. Learning achieved with human trainers was compared with learning driven by human-like rewards generated by an algorithm; metrics included success at reaching the specified target, time required to reach the target, and target overshoot. Both sets of controllers learned efficiently, with minimal differences between them, and significantly outperformed standard controllers. Reward positivity and consistency were found to be unrelated to learning success. These results suggest that human rewards can be used effectively to train RL-based FES controllers.
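The core idea of the abstract can be illustrated with a deliberately simplified sketch (ours, not the paper's implementation): an actor-critic learner whose only training signal is a scalar reward that, in the paper's setup, a human trainer would type in after watching each reaching animation. Here the reach is reduced to a one-dimensional, single-step episode, `human_reward` is a hypothetical stand-in for the trainer's subjective score, and all learning rates and the target are assumed values chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET = 1.0  # hypothetical 1-D reach target (the paper uses a planar arm model)

def human_reward(endpoint):
    """Stand-in for the subjective numerical reward a human trainer
    would assign after viewing a reaching animation: closer is better."""
    return -abs(endpoint - TARGET)

theta = 0.0   # actor parameter: mean of a Gaussian "reach" action
sigma = 0.3   # fixed exploration noise
b = 0.0       # critic: learned value baseline for the start state

for episode in range(3000):
    action = theta + sigma * rng.standard_normal()  # sample a reach endpoint
    r = human_reward(action)                        # trainer scores the reach
    delta = r - b                                   # TD error (one-step episode)
    b += 0.1 * delta                                # critic update
    theta += 0.05 * delta * (action - theta) / sigma**2  # policy-gradient actor update

print(round(theta, 2))  # mean reach endpoint after training, near TARGET
```

The critic's baseline plays the role the paper assigns to value estimation: it subtracts the expected reward so that only better- or worse-than-expected reaches move the policy, which is what makes noisy, subjective human rewards usable as a training signal.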


This work was supported by the National Institutes of Health Fellowship under Grant TRN030167, in part by the Veterans Administration Rehabilitation Research and Development Predoctoral Fellowship-Reinforcement Learning Control for an Upper-Extremity Neuroprosthesis, and in part by the Ardiem Medical Arm Control Device under Grant W81XWH0720044.