Future prediction
47 papers with code • 0 benchmarks • 1 dataset
Most implemented papers
DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents
DESIRE effectively predicts the future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of future prediction (i.e., given the same context, the future may vary), 2) foreseeing potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from past motion history, but also from the scene context and the interactions among agents.
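As a rough illustration of this sample-and-rank idea, here is a minimal sketch (not the authors' code; the CVAE-style sampler, module sizes, and the scoring head are all assumptions): several candidate futures are proposed from the past trajectory, then ranked against a scene-context feature.

```python
# Minimal sample-and-rank trajectory prediction sketch (illustrative only).
import torch
import torch.nn as nn

class SampleAndRank(nn.Module):
    def __init__(self, future_len=12, latent_dim=16, ctx_dim=32):
        super().__init__()
        self.past_enc = nn.GRU(2, 32, batch_first=True)          # encode past (x, y) motion
        self.decoder = nn.Sequential(                            # latent + past -> future offsets
            nn.Linear(32 + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, future_len * 2))
        self.scorer = nn.Sequential(                             # rank samples with scene context
            nn.Linear(future_len * 2 + ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, 1))
        self.future_len = future_len
        self.latent_dim = latent_dim

    def forward(self, past_xy, scene_ctx, num_samples=20):
        _, h = self.past_enc(past_xy)                            # (1, B, 32)
        h = h.squeeze(0)                                         # (B, 32)
        B = h.size(0)
        z = torch.randn(num_samples, B, self.latent_dim)         # diverse latent samples
        futures = self.decoder(torch.cat([h.expand(num_samples, -1, -1), z], dim=-1))
        scores = self.scorer(torch.cat(
            [futures, scene_ctx.expand(num_samples, -1, -1)], dim=-1)).squeeze(-1)
        return futures.view(num_samples, B, self.future_len, 2), scores  # (K, B, T, 2), (K, B)

model = SampleAndRank()
trajs, scores = model(torch.randn(4, 8, 2), torch.randn(4, 32))
best = trajs[scores.argmax(dim=0), torch.arange(4)]              # highest-scoring future per agent
```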
Peeking into the Future: Predicting Future Person Activities and Locations in Videos
To facilitate training, the network is trained with an auxiliary task of predicting the future location in which the activity will happen.
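A minimal sketch of such multi-task training (the head sizes, the 0.5 auxiliary weight, and the feature dimensions are assumptions, not the paper's values): shared features feed an activity classifier and an auxiliary location regressor, and the two losses are simply summed.

```python
# Multi-task training with an auxiliary future-location head (illustrative sketch).
import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(128, 64), nn.ReLU())    # shared person/scene feature encoder
activity_head = nn.Linear(64, 10)                         # predicts the future activity class
location_head = nn.Linear(64, 2)                          # auxiliary head: (x, y) where it will happen

feats = torch.randn(32, 128)                               # batch of person features (illustrative)
activity_labels = torch.randint(0, 10, (32,))
future_xy = torch.randn(32, 2)

h = shared(feats)
loss = nn.functional.cross_entropy(activity_head(h), activity_labels) \
     + 0.5 * nn.functional.mse_loss(location_head(h), future_xy)       # auxiliary weight is a guess
loss.backward()
```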
Compositional Video Prediction
We present an approach for pixel-level future prediction given an input image of a scene.
Temporal Aggregate Representations for Long-Range Video Understanding
Future prediction, especially in long-range videos, requires reasoning from current and past observations.
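One simple way to picture such long-range reasoning, sketched below under assumptions (the span lengths, max-pooling, and feature sizes are illustrative, not the paper's configuration): frame features are pooled over windows of different temporal extents and concatenated into one aggregate representation.

```python
# Pooling frame features over several temporal spans (illustrative sketch).
import torch

def temporal_aggregates(frame_feats, spans=(5, 15, 45)):
    """frame_feats: (T, D) features of the observed video, most recent frame last."""
    pooled = []
    for s in spans:
        window = frame_feats[-s:] if frame_feats.size(0) >= s else frame_feats
        pooled.append(window.max(dim=0).values)            # max-pool each span into a single vector
    return torch.cat(pooled, dim=-1)                        # (len(spans) * D,)

feats = torch.randn(90, 256)                                # 90 observed frames, 256-d features
rep = temporal_aggregates(feats)                             # 768-d summary of recent and distant past
print(rep.shape)
```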
Uncertainty-based Traffic Accident Anticipation with Spatio-Temporal Relational Learning
The derived uncertainty-based ranking loss is found to significantly boost model performance by improving the quality of relational features.
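One possible shape for such a loss, sketched below as an assumption rather than the paper's exact formulation: frames closer to the accident should receive higher risk scores, pairwise violations are penalised with a margin, and pairs with high predictive uncertainty are down-weighted.

```python
# Uncertainty-weighted pairwise ranking loss for accident anticipation (illustrative sketch).
import torch

def uncertainty_ranking_loss(scores, variances, margin=0.1):
    """scores, variances: (T,) per-frame risk scores and predictive variances."""
    loss = scores.new_zeros(())
    pairs = 0
    T = scores.size(0)
    for i in range(T):
        for j in range(i + 1, T):                       # frame j is closer to the accident than frame i
            weight = 1.0 / (1.0 + variances[i] + variances[j])   # trust confident frames more
            loss = loss + weight * torch.relu(margin - (scores[j] - scores[i]))
            pairs += 1
    return loss / max(pairs, 1)

loss = uncertainty_ranking_loss(torch.rand(8, requires_grad=True), torch.rand(8))
loss.backward()
```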
MultiPath++: Efficient Information Fusion and Trajectory Aggregation for Behavior Prediction
Predicting the future behavior of road users is one of the most challenging and important problems in autonomous driving.
TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting
Going beyond the mainstream paradigms of plain decomposition and multiperiodicity analysis, we analyze temporal variations from a novel multiscale-mixing view, based on the intuitive but important observation that time series present distinct patterns at different sampling scales.
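A minimal sketch of this multiscale view (the scales, layer sizes, and coarse-to-fine mixing direction are assumptions, not the published architecture): the input series is downsampled to several sampling scales, each scale gets its own representation, and coarser scales are folded back into the finest one before forecasting.

```python
# Multiscale downsampling and mixing of a time series (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleMixer(nn.Module):
    def __init__(self, seq_len=96, pred_len=24, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.per_scale = nn.ModuleList(nn.Linear(seq_len // s, seq_len // s) for s in scales)
        self.head = nn.Linear(seq_len, pred_len)            # forecast from the finest scale

    def forward(self, x):                                    # x: (B, C, seq_len)
        reps = []
        for s, layer in zip(self.scales, self.per_scale):
            xs = F.avg_pool1d(x, kernel_size=s) if s > 1 else x   # downsample to this scale
            reps.append(layer(xs))
        mixed = reps[0]
        for r in reps[1:]:                                   # fold coarse-scale patterns into the fine scale
            mixed = mixed + F.interpolate(r, size=mixed.size(-1), mode="linear", align_corners=False)
        return self.head(mixed)                              # (B, C, pred_len)

y = MultiscaleMixer()(torch.randn(8, 7, 96))
print(y.shape)                                               # torch.Size([8, 7, 24])
```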
Interpreting Tree Ensembles with inTrees
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy.
Decomposing Motion and Content for Natural Video Sequence Prediction
To the best of our knowledge, this is the first end-to-end trainable network architecture with motion and content separation to model the spatiotemporal dynamics for pixel-level future prediction in natural videos.
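The separation idea can be sketched roughly as follows (a toy sketch under assumptions; the real architecture's encoders, motion cues, and decoder are more elaborate): a content stream sees the last observed frame, a motion stream sees frame differences, and a decoder fuses both to render the next frame at pixel level.

```python
# Toy motion/content two-stream predictor for the next frame (illustrative sketch).
import torch
import torch.nn as nn

class MotionContentNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.content_enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.motion_enc = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(64, 3, 3, padding=1)        # fuse the two streams into the next frame

    def forward(self, frames):                                # frames: (B, T, 3, H, W)
        content = self.content_enc(frames[:, -1])             # appearance from the last observed frame
        diffs = (frames[:, 1:] - frames[:, :-1]).mean(dim=1)  # simple motion cue: averaged frame differences
        motion = self.motion_enc(diffs)
        return self.decoder(torch.cat([content, motion], dim=1))

next_frame = MotionContentNet()(torch.randn(2, 5, 3, 64, 64))
print(next_frame.shape)                                       # torch.Size([2, 3, 64, 64])
```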
Improving Video Generation for Multi-functional Applications
In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications.