Search Results for author: Marc W. Howard

Found 8 papers, 2 papers with code

Foundations of a temporal RL

no code implementations · 20 Feb 2023 · Marc W. Howard, Zahra G. Esfahani, Bao Le, Per B. Sederberg

Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past.

Formal models of memory based on temporally-varying representations

no code implementations · 5 Jan 2022 · Marc W. Howard

This chapter traces this line of thought from statistical learning theory in the 1950s, through distributed memory models of the late 20th and early 21st centuries, to modern models based on a scale-invariant temporal history.

Learning Theory

A deep convolutional neural network that is invariant to time rescaling

1 code implementation · 9 Jul 2021 · Brandon G. Jacques, Zoran Tiganj, Aakash Sarkar, Marc W. Howard, Per B. Sederberg

This property, inspired by findings from contemporary neuroscience and consistent with findings from cognitive psychology, may enable networks that learn with fewer training examples and fewer weights, and that generalize more robustly to out-of-sample data.

Time Series · Time Series Analysis · +1

DeepSITH: Efficient Learning via Decomposition of What and When Across Time Scales

1 code implementation · NeurIPS 2021 · Brandon Jacques, Zoran Tiganj, Marc W. Howard, Per B. Sederberg

SITH modules respond to their inputs with a geometrically-spaced set of time constants, enabling the DeepSITH network to learn problems along a continuum of time-scales.

Time Series · Time Series Prediction
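The geometrically-spaced time constants described in the abstract can be sketched as follows. This is an illustrative assumption about the spacing scheme, not the authors' DeepSITH implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def geometric_time_constants(tau_min=1.0, tau_max=100.0, n=8):
    """Return n time constants spaced geometrically from tau_min to tau_max.

    Hypothetical sketch: adjacent constants differ by a fixed ratio, so the
    set tiles log-time evenly, covering a continuum of time-scales.
    """
    exponents = np.arange(n) / (n - 1)
    return tau_min * (tau_max / tau_min) ** exponents

taus = geometric_time_constants()
# The ratio between neighboring time constants is the same everywhere,
# which is what makes the spacing scale-invariant in log-time.
ratios = taus[1:] / taus[:-1]
```

Because the ratio is constant, rescaling the input in time shifts which module responds most strongly but leaves the pattern of responses across modules unchanged.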

Predicting the future with a scale-invariant temporal memory for the past

no code implementations · 26 Jan 2021 · Wei Zhong Goh, Varun Ursekar, Marc W. Howard

In recent years it has become clear that the brain maintains a temporal memory of recent events stretching far into the past.

Estimating scale-invariant future in continuous time

no code implementations · 18 Feb 2018 · Zoran Tiganj, Samuel J. Gershman, Per B. Sederberg, Marc W. Howard

Widely used reinforcement learning algorithms discretize continuous time and estimate either transition functions from one step to the next (model-based algorithms) or a scalar value of exponentially-discounted future reward using the Bellman equation (model-free algorithms).

reinforcement-learning · Reinforcement Learning (RL)
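The exponentially-discounted value estimate that the abstract contrasts against can be illustrated with a standard tabular TD(0) update, the textbook model-free form of the Bellman equation. This is a generic sketch of what the paper argues against, not the paper's continuous-time method.

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step moving V[s] toward r + gamma * V[s_next].

    Textbook TD(0): the fixed point is the exponentially-discounted
    expected future reward, with discount factor gamma per discrete step.
    """
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Two states; state 0 always yields reward 1.0 and transitions to state 1.
V = {0: 0.0, 1: 0.0}
for _ in range(200):
    V = td0_update(V, s=0, r=1.0, s_next=1)
# V[0] converges toward r + gamma * V[1] = 1.0
```

Note the discretization the abstract points to: time enters only as the step index, and the effective horizon is set by the single scalar gamma rather than by a continuum of time-scales.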

Scale-invariant temporal history (SITH): optimal slicing of the past in an uncertain world

no code implementations · 19 Dec 2017 · Tyler A. Spears, Brandon G. Jacques, Marc W. Howard, Per B. Sederberg

In both the human brain and any general artificial intelligence (AI), a representation of the past is necessary to predict the future.

Q-Learning

Optimally fuzzy temporal memory

no code implementations · 22 Nov 2012 · Karthik H. Shankar, Marc W. Howard

If the signal has a characteristic timescale relevant to future prediction, the memory can be a simple shift register: a moving window extending into the past, requiring storage resources that grow linearly with the timescale to be represented.

Future prediction · Time Series · +1
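The shift-register memory the abstract describes can be sketched as a fixed-length moving window; this is an illustrative baseline, not the fuzzy memory the paper proposes.

```python
from collections import deque

# A shift register as a moving window over the most recent samples.
# Storage grows linearly with the timescale: representing the last
# T samples exactly requires T slots.
def make_shift_register(timescale):
    return deque(maxlen=timescale)

window = make_shift_register(5)
for t in range(8):
    window.append(t)  # oldest samples are shifted out automatically

list(window)  # holds only the 5 most recent samples: [3, 4, 5, 6, 7]
```

The paper's point is that this exact window is wasteful when no single timescale is known in advance; a fuzzy, compressed memory trades precision about the distant past for far smaller storage.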
