How much to gain: targeted gain modulation facilitates learning in recurrent motor circuits
J Stroud, G Hennequin, and TP Vogels. COSYNE, 2017
Abstract
Primary motor cortex (M1) can be viewed as a “dynamical engine” that produces multiphasic activity transients from specific input patterns (Churchland et al., 2012). Such temporally structured neuronal population activity can be explained by recent network models with optimised recurrent architectures (Hennequin et al., 2014 and Sussillo et al., 2015). However, it remains unclear how network activity within such models can be refined during behavioural learning. Here, we demonstrate that reward-based learning of single-neuron input-output gains is an effective mechanism for refining network activity in two commonly used recurrent network models. We show that gain modulation affects neuronal firing rates in a predictable manner that can be exploited to reduce errors in network outputs. Interestingly, we find that a relatively small number of modulatory control units provides sufficient flexibility to adjust high-dimensional network activity on biologically relevant time scales. Such coarse neuromodulatory control is consistent with the sparse and diffuse dopaminergic projections to M1 observed in both primates and humans (Huntley et al., 1992 and Hosp et al., 2011), as well as with many other neuromodulatory systems. Additionally, traditional Hebbian synaptic plasticity mechanisms work in concert with reward-based learning of neuronal gains, allowing memories to be permanently imprinted on slower time scales once the desired activity has been achieved through reward-mediated gain modulation. Furthermore, a novel network output can be initialised as a linear combination of previously learned gain patterns and refined thereafter, substantially reducing training time. Our results demonstrate that reward-based gain modulation is a viable mechanism for learning in recurrent cortical circuits, one that offers several advantages and complements traditional synaptic plasticity mechanisms.
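The core idea above — that scaling each neuron's input-output gain reshapes the activity of a fixed recurrent network, and that a scalar reward signal suffices to learn useful gains — can be illustrated with a minimal sketch. This is not the authors' model or code: the network size, the target output, and the simple accept-if-reward-improves perturbation rule are all illustrative assumptions standing in for the reward-based learning described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, dt = 30, 100, 0.1
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))   # fixed recurrent weights
w_out = rng.normal(0, 1.0 / np.sqrt(N), N)    # fixed linear readout
x0 = rng.normal(0, 1, N)                      # fixed initial condition

def run(gains):
    """Simulate dx/dt = -x + W @ tanh(g * x); return the readout trace.

    The per-neuron gain vector g scales each neuron's input-output
    function; it is the only quantity that learning may change here.
    """
    x, out = x0.copy(), np.empty(T)
    for t in range(T):
        r = np.tanh(gains * x)
        x = x + dt * (-x + W @ r)
        out[t] = w_out @ r
    return out

# Arbitrary multiphasic target for the network output (an assumption).
target = np.sin(np.linspace(0, 2 * np.pi, T))

def loss(gains):
    """Output error; negative reward for the learning rule."""
    return np.mean((run(gains) - target) ** 2)

# Reward-based learning by perturbation: propose a small random change
# to the gain vector and keep it only if the output error decreases.
# Synaptic weights W and the readout w_out are never touched.
g = np.ones(N)
for _ in range(300):
    trial = g + 0.05 * rng.normal(0, 1, N)
    if loss(trial) < loss(g):
        g = trial

print(loss(np.ones(N)), loss(g))  # error after learning should not exceed the initial error
```

Note the design choice this sketch shares with the abstract's argument: because only the N gains are plastic, the search space is far smaller than the N² space of synaptic weights, which is why coarse modulatory control can adjust high-dimensional activity quickly; a slower Hebbian rule acting on W could then consolidate the solution found by the gains.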