129 papers with code • 13 benchmarks • 7 datasets
Incremental learning aims to develop artificially intelligent systems that can continuously learn to address new tasks from new data while preserving knowledge acquired on previous tasks.
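One widespread (though by no means the only) strategy for preserving old knowledge while learning new tasks is experience replay: keep a small buffer of past samples and mix them into each new update. The sketch below is illustrative, not any specific paper's method; `train_step` is a hypothetical update function supplied by the caller.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer filled via reservoir sampling, so every
    sample seen so far is retained with equal probability."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.samples = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.n_seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            # Replace a stored sample with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.samples[j] = sample

    def draw(self, k):
        k = min(k, len(self.samples))
        return self.rng.sample(self.samples, k)

def train_incrementally(tasks, buffer, train_step):
    """Interleave each new task's samples with replayed old ones."""
    for task_data in tasks:
        for sample in task_data:
            batch = [sample] + buffer.draw(4)  # new data + replayed data
            train_step(batch)                  # hypothetical model update
            buffer.add(sample)
```

The buffer size and the 4-sample replay ratio are arbitrary illustrative choices; in practice they trade memory against forgetting.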
We present a novel algorithm for anomaly detection on very large datasets and data streams.
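The snippet above does not describe the algorithm itself, so as a point of reference, a minimal streaming anomaly detector can be sketched with a rolling z-score: flag any point far from the recent window's mean in units of its standard deviation. This is a generic baseline, not the paper's method.

```python
import math
from collections import deque

class StreamingZScoreDetector:
    """Flag stream values far from a rolling mean/std estimate
    (a simple baseline, not the cited paper's algorithm)."""
    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def score(self, x):
        if len(self.window) < 2:
            return 0.0
        n = len(self.window)
        mean = sum(self.window) / n
        var = sum((v - mean) ** 2 for v in self.window) / (n - 1)
        std = math.sqrt(var) or 1e-12  # guard against a constant window
        return abs(x - mean) / std

    def update(self, x):
        s = self.score(x)       # score against the window, then
        self.window.append(x)   # admit the point to the window
        return s > self.threshold  # True -> anomaly
```

Because only a bounded window is stored, memory stays constant no matter how large the stream grows, which is the property that matters for "very large datasets and data streams".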
During the off-line training phase, an effective sampling strategy is introduced to control the distribution of training samples and make the model focus on semantic distractors.
Ranked #9 on Visual Object Tracking on VOT2017/18
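The idea of controlling the training distribution can be sketched as weighted negative sampling: oversample semantic distractors (same-category non-target objects) relative to easy background negatives. The function below and its 50/50 default split are illustrative assumptions, not the paper's actual procedure or ratio.

```python
import random

def sample_negative_pairs(positives, semantic_negs, easy_negs, n,
                          distractor_frac=0.5, seed=0):
    """Build n (anchor, negative) training pairs whose negatives are
    dominated by semantic distractors rather than easy background
    patches. distractor_frac controls the mix (illustrative default)."""
    rng = random.Random(seed)
    n_hard = int(n * distractor_frac)
    pairs = [(rng.choice(positives), rng.choice(semantic_negs))
             for _ in range(n_hard)]
    pairs += [(rng.choice(positives), rng.choice(easy_negs))
              for _ in range(n - n_hard)]
    rng.shuffle(pairs)
    return pairs
```

Raising `distractor_frac` biases training toward the hard cases the tracker is most likely to confuse with the target.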
Unsupervised Cross-dataset Person Re-identification by Transfer Learning of Spatial-Temporal Patterns
Most proposed person re-identification algorithms conduct supervised training and testing on single, small labeled datasets, so directly deploying the trained models to a large-scale real-world camera network may lead to poor performance due to underfitting.
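One plausible way to exploit spatial-temporal patterns, loosely in the spirit of the title above (the exact fusion rule is an assumption, not taken from the paper), is to multiply appearance similarity by a learned camera-transition prior: how likely a person seen at one camera is to reappear at another after a given time gap.

```python
def dt_bucket(dt, width=60.0):
    """Discretize time gaps into fixed-width buckets (60 s assumed)."""
    return int(dt // width)

def fused_score(visual_sim, cam_a, cam_b, dt, transition_model):
    """Combine appearance similarity with a spatio-temporal prior.
    transition_model maps (cam_a, cam_b) -> {time bucket: probability},
    estimated from unlabeled trajectories; unseen transitions get a
    small floor probability. Illustrative fusion, not the paper's rule."""
    p_st = transition_model.get((cam_a, cam_b), {}).get(dt_bucket(dt), 1e-6)
    return visual_sim * p_st
```

Down-weighting visually similar but spatio-temporally implausible matches is what lets such priors transfer to an unlabeled target camera network.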
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing phase-by-phase.
However, there is an inherent trade-off between effectively learning new concepts and avoiding catastrophic forgetting of previous ones.
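A minimal class-incremental classifier that sidesteps forgetting by construction is a nearest-mean scheme: each phase only adds prototypes for its new classes and never modifies old ones. This is a simple baseline in the spirit of nearest-mean-of-exemplars, not any specific paper's method.

```python
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

class NearestMeanCIL:
    """Phase-by-phase class-incremental classifier: each phase adds
    per-class mean prototypes for its new classes, leaving earlier
    prototypes untouched, so old classes cannot be forgotten."""
    def __init__(self):
        self.prototypes = {}  # class label -> mean feature vector

    def learn_phase(self, labeled_features):
        by_class = {}
        for label, feat in labeled_features:
            by_class.setdefault(label, []).append(feat)
        for label, feats in by_class.items():
            self.prototypes[label] = mean(feats)

    def predict(self, feat):
        def sqdist(proto):
            return sum((a - b) ** 2 for a, b in zip(proto, feat))
        return min(self.prototypes, key=lambda c: sqdist(self.prototypes[c]))
```

The trade-off reappears here in a different form: the frozen prototypes never forget, but they also cannot adapt when later phases would benefit from refining the shared feature space.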
A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data.
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production.
Lifelong learning has attracted much attention, but existing works still struggle to overcome catastrophic forgetting and to accumulate knowledge over long stretches of incremental learning.
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
Ranked #2 on Out-of-Distribution Detection on MS-1M vs. IJB-C
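The "distance from the training distribution" idea above can be sketched as scoring a test point by its distance to the nearest class mean in feature space. Note that the cited line of work uses Mahalanobis distance with a shared covariance estimate; the Euclidean simplification below is an assumption made to keep the sketch short.

```python
def class_means(features_by_class):
    """Per-class mean feature vectors from in-distribution training data."""
    means = {}
    for label, feats in features_by_class.items():
        d = len(feats[0])
        means[label] = [sum(f[i] for f in feats) / len(feats)
                        for i in range(d)]
    return means

def ood_score(x, means):
    """Squared distance to the nearest class mean; larger values
    suggest the sample is out-of-distribution. (Simplified: Euclidean
    instead of the Mahalanobis distance used by the cited approach.)"""
    def sqdist(m):
        return sum((a - b) ** 2 for a, b in zip(m, x))
    return min(sqdist(m) for m in means.values())
```

Thresholding this score gives a detector: samples near some class centroid are accepted as in-distribution, while samples far from every centroid are flagged.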