Temporal difference learning

Temporal difference (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.[1]
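For the prediction problem, the simplest TD method, usually written TD(0), moves the value estimate of each visited state toward the observed reward plus the discounted value estimate of the successor state. The following is a minimal Python sketch of this tabular update; the environment interface (reset/step), the policy function, and all parameter values are illustrative assumptions rather than a fixed API:

```python
# Minimal sketch of tabular TD(0) value prediction.
# The environment interface (reset/step) and the parameter
# values are assumptions for illustration, not a definitive API.
from collections import defaultdict

def td0_prediction(env, policy, episodes=1000, alpha=0.1, gamma=0.99):
    """Estimate the state-value function V for a fixed policy."""
    V = defaultdict(float)  # value estimates, initialized to 0
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = policy(state)
            next_state, reward, done = env.step(action)
            # TD(0) update: move V(s) toward the bootstrapped target
            # r + gamma * V(s'), which uses the current estimate V(s').
            target = reward + (0.0 if done else gamma * V[next_state])
            V[state] += alpha * (target - V[state])
            state = next_state
    return V
```

Unlike a Monte Carlo update, which would wait for the end of the episode and use the full return as the target, this update can be applied at every step, because the target is built from the current estimate of the next state's value.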

While Monte Carlo methods adjust their estimates only once the final outcome is known, TD methods adjust predictions to match later, more accurate predictions about the future before the final outcome is known.[2] This is a form of bootstrapping, as the following example illustrates:

Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather given the weather of each day of the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should already have a fairly good idea of what the weather will be on Saturday – and thus be able to adjust, say, Saturday's model before Saturday arrives.[2]
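In TD terms, Friday's prediction acts as a bootstrapped target for the predictions made earlier in the week. The following is a minimal numeric sketch of that adjustment; the probabilities and the learning rate are invented for illustration and are not from the source:

```python
# Illustrative numbers only: predicted probability of rain on
# Saturday, as estimated on successive days of the week.
alpha = 0.5           # assumed learning rate
pred_thursday = 0.40  # Thursday's prediction for Saturday
pred_friday = 0.70    # Friday's later, more accurate prediction

# TD-style update: adjust Thursday's prediction toward Friday's,
# without waiting for Saturday's actual outcome.
pred_thursday += alpha * (pred_friday - pred_thursday)
print(pred_thursday)  # 0.55
```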

Temporal difference methods are related to the temporal difference model of animal learning.[3][4][5][6][7]

  1. ^ Sutton & Barto (2018), p. 133.
  2. ^ a b Sutton, Richard S. (1 August 1988). "Learning to predict by the methods of temporal differences". Machine Learning. 3 (1): 9–44. doi:10.1007/BF00115009. ISSN 1573-0565. S2CID 207771194. Retrieved 4 April 2023.
  3. ^ Schultz, W.; Dayan, P.; Montague, P. R. (1997). "A neural substrate of prediction and reward". Science. 275 (5306): 1593–1599. CiteSeerX 10.1.1.133.6176. doi:10.1126/science.275.5306.1593. PMID 9054347. S2CID 220093382.
  4. ^ Montague, P. R.; Dayan, P.; Sejnowski, T. J. (1996-03-01). "A framework for mesencephalic dopamine systems based on predictive Hebbian learning" (PDF). The Journal of Neuroscience. 16 (5): 1936–1947. doi:10.1523/JNEUROSCI.16-05-01936.1996. ISSN 0270-6474. PMC 6578666. PMID 8774460.
  5. ^ Montague, P.R.; Dayan, P.; Nowlan, S.J.; Pouget, A.; Sejnowski, T.J. (1993). "Using aperiodic reinforcement for directed self-organization" (PDF). Advances in Neural Information Processing Systems. 5: 969–976.
  6. ^ Montague, P. R.; Sejnowski, T. J. (1994). "The predictive brain: temporal coincidence and temporal order in synaptic learning mechanisms". Learning & Memory. 1 (1): 1–33. doi:10.1101/lm.1.1.1. ISSN 1072-0502. PMID 10467583. S2CID 44560099.
  7. ^ Sejnowski, T.J.; Dayan, P.; Montague, P.R. (1995). "Predictive Hebbian learning". Proceedings of the eighth annual conference on Computational learning theory - COLT '95. pp. 15–18. doi:10.1145/225298.225300. ISBN 0897917235. S2CID 1709691.