Dr. Mark Humphrys

School of Computing. Dublin City University.

Q-learning with a Neural Network

Revision - Normal supervised learning.


Q-learning with a Neural Network:
Input: x, a.
Output: y_k = Q(x,a).


We are not learning from correct exemplars, as in normal supervised learning. That would be like being given the "correct" output:
O_k = Q*(x,a).

Instead we are learning from estimates. The output we are "moving towards" is:

O_k = r + γ max_b Q(y,b)

where doing action a in state x led to next state y and immediate reward r, and γ is the discount factor.
so for example in the discrete case we do:

Q(x,a) := (1-α) Q(x,a) + α ( r + γ max_b Q(y,b) )
that is:

Q(x,a) := Q(x,a) + α ( ( r + γ max_b Q(y,b) ) - Q(x,a) )
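For instance, a minimal sketch of this discrete (lookup table) update in Python. The problem sizes and the values of α and γ are just example values:

    N_STATES, N_ACTIONS = 10, 4        # sizes of a hypothetical discrete problem
    ALPHA, GAMMA = 0.1, 0.9            # example learning rate and discount factor

    # Lookup table Q, initialised to zero.
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def update(x, a, y, r):
        # Q(x,a) := (1-alpha) Q(x,a) + alpha ( r + gamma max_b Q(y,b) )
        target = r + GAMMA * max(Q[y])                    # the estimate we move towards
        Q[x][a] = (1 - ALPHA) * Q[x][a] + ALPHA * target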
In the neural network Q-learning, we backpropagate the error:

E = O_k - y_k = ( r + γ max_b Q(y,b) ) - Q(x,a)
But of course the term:

max_b Q(y,b)

is just an estimate, and Q itself is changing as we go along.
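As a sketch of this, here is the same update with a function approximator in Python. A linear approximator over a feature vector phi(x,a) stands in for the neural network, so the "backpropagation" is a single gradient step; the feature function is a toy assumption for the example:

    import numpy as np

    N_FEATURES, N_ACTIONS = 8, 4
    ALPHA, GAMMA = 0.01, 0.9
    w = np.zeros(N_FEATURES)              # weights of the approximator

    def phi(x, a):
        # Hypothetical toy features for the state-action pair (x, a).
        v = np.zeros(N_FEATURES)
        v[hash((x, a)) % N_FEATURES] = 1.0
        return v

    def Q(x, a):
        return float(w @ phi(x, a))

    def learn(x, a, y, r):
        global w
        # The target is an estimate, not a ground-truth exemplar.
        target = r + GAMMA * max(Q(y, b) for b in range(N_ACTIONS))
        error = target - Q(x, a)
        # Gradient of Q wrt w is phi(x,a) for a linear approximator.
        w = w + ALPHA * error * phi(x, a)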

The "timeless" information is that x,a led to y,r. We can save these 4 values and "replay" the experience many times, with improved values of Q.

Read discussion of replay.
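A minimal sketch of replay in Python, continuing the approximator sketch above (record saves the 4 values; replay runs them through learn again, so the target is recomputed with the current, improved Q):

    replay_buffer = []                     # saved (x, a, y, r) experiences

    def record(x, a, y, r):
        replay_buffer.append((x, a, y, r))

    def replay_all():
        # Each replay recomputes the target r + gamma max_b Q(y,b) with the current Q.
        for x, a, y, r in replay_buffer:
            learn(x, a, y, r)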


There are lots of interesting issues. For example, replay the single experience:
(x,a) -> (y,r)
a million times in a row and the net learns that all (x,a) lead to (y,r). We need to mix up our replays (one way is sketched below). Remember our discussion of over-learning and forgetting.
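One way to mix up the replays, sketched here: draw random minibatches from the whole buffer rather than repeating one experience in a block. The minibatch size is just an example value:

    import random

    def replay_mixed(n_updates, batch_size=32):
        # Random minibatches interleave many experiences, so no single one dominates.
        for _ in range(n_updates):
            batch = random.sample(replay_buffer, min(batch_size, len(replay_buffer)))
            for x, a, y, r in batch:
                learn(x, a, y, r)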

Also, random learning (learning while taking random actions), which worked with lookup tables, won't work with neural nets, because the exemplars interfere with each other. The net will just learn that all actions lead to nothing. We will need a more intelligent control policy, something like a Boltzmann distribution over the Q-values.
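A sketch of Boltzmann (softmax) action selection, using the Q function from the approximator sketch above. The temperature is an example value; high temperature means more random exploration, low temperature means mostly greedy:

    import math, random

    def choose_action(x, temperature=1.0):
        # Pick action a with probability proportional to exp( Q(x,a) / T ).
        qs = [Q(x, a) for a in range(N_ACTIONS)]
        m = max(qs)                                   # shift by max for numerical stability
        weights = [math.exp((q - m) / temperature) for q in qs]
        return random.choices(range(N_ACTIONS), weights=weights)[0]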

