Reinforcement Learning

Reinforcement Learning refers to the problem of determining what to do in order to maximize the amount of reward you get from the world. This problem can be formalized mathematically, and the formalism enables a computer to teach itself, as long as you tell it what counts as a good outcome (e.g., what it means to win) and what counts as a bad one. I carried out a number of projects in Reinforcement Learning while at the University of Pittsburgh/Carnegie Mellon University and at the Johns Hopkins University. I'll gradually add code and examples related to Reinforcement Learning below.
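To make the idea concrete, here is a minimal sketch of the formalism in action: tabular Q-learning on a toy corridor world. All names, sizes, and parameters are illustrative, not from any of the projects mentioned above; the only thing the agent is told is which outcome is good (reaching the rightmost state earns reward).

```python
import random

# Toy corridor: states 0..4 in a row. The agent starts at 0 and is
# rewarded only for reaching state 4. Everything else it must discover.
N_STATES = 5
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # illustrative hyperparameters

def step(state, action):
    """Move in the corridor; reward 1.0 for reaching the rightmost state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit the table, occasionally explore
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q-learning update: move the estimate toward the observed
            # reward plus the discounted best value of the next state
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The greedy policy read off the learned table, for each non-terminal state
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

With these settings the greedy policy read from the Q-table should prefer moving right in every state, which is the reward-maximizing behavior in this toy world; nothing about "right is good" was ever programmed in directly.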
Recurrent Neural Networks

Recurrent Neural Networks are networks in which feedback connections are allowed. I have been interested in recurrent networks ever since I was first introduced to them in 1997, when a professor handed me a copy of Finding Structure in Time by Jeff Elman.
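The defining feature of the networks Elman described in that paper is the feedback connection: the hidden layer at time t receives a copy of its own activation from time t-1. Below is a minimal forward-pass sketch of an Elman-style network (weight names and layer sizes are illustrative, and the weights here are untrained random values):

```python
import numpy as np

# Illustrative sizes for a 4-symbol one-hot input alphabet
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 4

W_xh = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context (feedback) -> hidden
W_hy = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output

def forward(xs):
    """Run a sequence of input vectors through the network, one per time step."""
    h = np.zeros(n_hidden)  # context units start at zero
    outputs = []
    for x in xs:
        # The hidden state depends on the current input AND the previous
        # hidden state -- this feedback is what gives the network memory.
        h = np.tanh(W_xh @ x + W_hh @ h)
        outputs.append(W_hy @ h)
    return outputs

# One-hot inputs for the symbols 0, 1, 2, 3 presented in order
seq = [np.eye(n_in)[i] for i in [0, 1, 2, 3]]
ys = forward(seq)
```

Because of the feedback loop, the output at the final step depends on the whole history, not just the current input: feeding the same final symbol preceded by a different prefix (e.g., 2, 1, 0, 3 instead of 0, 1, 2, 3) yields a different final output.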
Towards a Theory of Basal Ganglia Function

Starting in 2008, I began developing a theory of basal ganglia function. Specifically, I was interested in a rigorous understanding of how the basal ganglia might implement reinforcement learning. To answer this question, it is important to view the basal ganglia as operating within a network of brain regions that are not ordinarily considered together, including sensory cortex, the hippocampus, and the cerebellum. Each connected region contributes a particular computation required for reinforcement learning. A summary of this theory is forthcoming. In the meantime, please see the following relevant publications, which serve as the theory's building blocks: