For piecewise linear neural networks, given a weighting function that relates the errors of different input activation regions, we obtain a bound on each region's generalization error that scales inversely with the density of training samples.
Replay in neural networks interleaves memorized past samples with sequential training data, counteracting the forgetting of previous behavior caused by non-stationarity.
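A common way to maintain such a memory of past samples is reservoir sampling over the stream, so that every sample seen so far has equal probability of being retained. The sketch below is a minimal, generic replay buffer; it is an illustration of the general idea, not the specific mechanism used in any particular paper, and the class and method names are our own.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past samples, filled by reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0  # total number of samples offered to the buffer

    def add(self, sample):
        """Offer one sample from the stream to the buffer."""
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(sample)
        else:
            # Replace a random slot with probability capacity / seen,
            # keeping each seen sample with equal probability overall.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = sample

    def sample(self, k):
        """Draw up to k memorized samples to mix into the current batch."""
        return random.sample(self.memory, min(k, len(self.memory)))
```

During training, a batch from the current task would be concatenated with `buffer.sample(k)` before the gradient step, so the network continues to see data from earlier parts of the stream.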
In this paper, we are instead interested in the locations in an image that contribute to the model's training.
Lifelong learning capabilities are crucial for artificial autonomous agents operating on real-world data, which is typically non-stationary and temporally correlated.
The method is not specialised to computer vision and operates on any dataset of paired samples; in our experiments, we use random transforms to obtain a pair from each image.
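Generating a pair by applying two independent random transforms to the same image can be sketched as follows. This is a hedged, dependency-free illustration: the images are plain nested lists, and `random_transform` (a random horizontal flip followed by a random crop) is a hypothetical stand-in for whatever augmentations an actual experiment would use.

```python
import random

def random_transform(image, rng):
    """Apply a random horizontal flip and a random 1-pixel crop
    to a 2D image given as a list of rows (illustrative stand-in)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    if rng.random() < 0.5:
        out = [row[::-1] for row in out]  # horizontal flip
    # Random crop to (h - 1) x (w - 1): pick the top-left corner.
    top, left = rng.randrange(2), rng.randrange(2)
    return [row[left:left + w - 1] for row in out[top:top + h - 1]]

def make_pair(image, seed=0):
    """Two independent random views of the same image form one pair."""
    rng = random.Random(seed)
    return random_transform(image, rng), random_transform(image, rng)
```

Because the two views come from the same underlying image, they share content while differing in appearance, which is what makes them usable as a training pair.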