Motor adaptation via distributional learning.

Journal of Neural Engineering
Brian A Mitchell, Linda R Petzold

Abstract

Objective. Both artificial and biological controllers experience errors during learning that are probabilistically distributed. We develop a framework for modeling distributions of errors and relating deviations in these distributions to neural activity.

Approach. The biological system we consider is a task in which human subjects must learn to minimize the roll of an inverted T-shaped object with an unbalanced weight (i.e. one side of the object is heavier than the other) during lift. We also collect BOLD activity during this process. For our experimental setup, we define the state of the system to be the maximum-magnitude roll of the object after lift onset and give subjects the goal of achieving the zero state.

Main results. We derive a model for this problem from a variant of Temporal Difference Learning. We then combine this model with Distributional Reinforcement Learning (DRL), a framework that defines a value distribution by treating the reward as stochastic. This model transforms the goal of the controller from achieving a target state to achieving a distribution over distances from the target state. We call it a Distributional Temporal Difference Model (DTDM). The DTDM allows us to model errors i…
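The abstract's core idea, learning a distribution over outcomes rather than a single expected value, can be illustrated with a minimal categorical distributional TD update (in the style of the C51 algorithm from the DRL literature; this is a generic sketch, not the paper's DTDM, and all names and parameter values below are illustrative assumptions):

```python
import numpy as np

def distributional_td_update(p, r, gamma, support, alpha=0.1):
    """One categorical distributional TD step on a fixed support.

    p       : current value-distribution probabilities over `support`
    r       : observed (stochastic) reward sample
    gamma   : discount factor
    support : fixed, evenly spaced atom locations z_i
    alpha   : learning rate mixing old and projected distributions
    """
    # Bellman-shift each atom: Tz_i = r + gamma * z_i, clipped to the support range
    tz = np.clip(r + gamma * support, support[0], support[-1])

    # Project the shifted atoms back onto the fixed grid by splitting each
    # atom's mass between its two neighboring grid points
    dz = support[1] - support[0]
    b = (tz - support[0]) / dz            # fractional grid position of each atom
    lo = np.floor(b).astype(int)
    hi = np.ceil(b).astype(int)

    target = np.zeros_like(p)
    for i in range(len(support)):
        if lo[i] == hi[i]:                # atom lands exactly on a grid point
            target[lo[i]] += p[i]
        else:
            target[lo[i]] += p[i] * (hi[i] - b[i])
            target[hi[i]] += p[i] * (b[i] - lo[i])

    # Move the value distribution toward the projected target distribution
    p_new = (1 - alpha) * p + alpha * target
    return p_new / p_new.sum()            # renormalize against rounding error
```

Repeating this update with reward samples drawn from the task's (stochastic) error process drives the value distribution toward the return distribution, which is the quantity the abstract's controller targets instead of a single state.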
