This work shows that we can recover from this weakness by bridging the gap between sequential adversarial team games and two-player games.
Deep Neural Networks (DNNs) enable a wide range of technological advances, from clinical imaging to predictive industrial maintenance and autonomous driving.
Models trained in federated settings often suffer from degraded performance and fail to generalize, especially in heterogeneous scenarios.
For similar reasons, Federated Learning has recently been introduced as a new machine learning paradigm that aims to learn a global model while preserving privacy and leveraging data on millions of remote devices.
Such heterogeneous data severely impairs both the performance of the trained neural network and its convergence rate, increasing the number of communication rounds required to reach performance comparable to that of the centralized scenario.
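To make the communication-round mechanics concrete, the following is a minimal sketch of FedAvg-style aggregation, the standard baseline for the federated setting described above; the function name `fedavg` and the toy client data are illustrative assumptions, not the method of any particular paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    client_weights: list of 1-D numpy arrays (flattened model parameters)
    client_sizes:   number of local training samples per client (the weights)
    """
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, dtype=float)
    return agg

# Three hypothetical clients with different amounts of local data.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_w = fedavg(weights, sizes)  # -> array([3.5, 4.5])
```

Under heterogeneous (non-identically distributed) client data, these averaged updates can pull the global model in conflicting directions, which is one intuition for the increased number of rounds noted above.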
Interestingly, we show that our game is more expressive than the original extensive-form game as any state/action abstraction of the extensive-form game can be captured by our representation, while the reverse does not hold.
As opposed to existing approaches, which need to generate pseudo-labels offline, we use an auxiliary classifier, trained with image-level labels and regularized by the segmentation model, to obtain pseudo-supervision online and update the model incrementally.
Clustering may reduce heterogeneity by identifying the domains, but it deprives each cluster model of the data and supervision of others.
Event cameras are novel bio-inspired sensors, which asynchronously capture pixel-level intensity changes in the form of "events".
Team members can coordinate their strategies before the beginning of the game, but are unable to communicate during the playing phase of the game.
Dynamic Vision Sensors (DVSs) asynchronously stream events at pixels that undergo brightness changes.
Many real-world applications involve teams of agents that have to coordinate their actions to reach a common goal against potential adversaries.
Event-based cameras are neuromorphic sensors capable of efficiently encoding visual information in the form of sparse sequences of events.
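To illustrate what such a sparse sequence of events looks like, here is a minimal sketch that accumulates a toy event stream into a dense per-pixel polarity frame, one common simple representation; the `(x, y, timestamp, polarity)` tuple layout and the `events_to_frame` helper are assumptions for illustration, not a specific camera's API.

```python
import numpy as np

# Hypothetical event stream: each event is (x, y, timestamp_us, polarity).
events = [(2, 1, 100, +1), (2, 1, 150, -1), (0, 3, 200, +1)]

def events_to_frame(events, width, height):
    """Accumulate a sparse event stream into a dense 2-D frame by
    summing event polarities per pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        frame[y, x] += p
    return frame

frame = events_to_frame(events, width=4, height=4)
# frame[1, 2] == 0: the two opposite-polarity events at pixel (2, 1) cancel.
# frame[3, 0] == 1: a single positive event at pixel (0, 3).
```

This also shows why the encoding is efficient: pixels with no brightness change generate no events at all, so the stream stays sparse.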
We introduce ReConvNet, a recurrent convolutional architecture for semi-supervised video object segmentation that can quickly adapt its features to focus on any specific object of interest at inference time.
Event-based cameras, also known as neuromorphic cameras, are bioinspired sensors able to perceive changes in the scene at high frequency with low power consumption.
This paper introduces the Non-Autonomous Input-Output Stable Network (NAIS-Net), a very deep architecture where each stacked processing block is derived from a time-invariant non-autonomous dynamical system.
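The following is a minimal sketch of what a time-invariant non-autonomous block update can look like: the block's input `u` acts as a forcing term re-injected at every unrolled step ("non-autonomous"), while the transition map itself is shared across steps ("time-invariant"). The function name `nais_block`, the step count, the step size `h`, and the use of `tanh` are illustrative assumptions, not NAIS-Net's exact formulation.

```python
import numpy as np

def nais_block(x0, u, A, B, b, steps=5, h=0.1):
    """Unroll a simple non-autonomous residual update:
        x_{t+1} = x_t + h * tanh(A x_t + B u + b)
    The input u is re-injected at every step (non-autonomous forcing),
    and A, B, b are the same at every step (time-invariant)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * np.tanh(A @ x + B @ u + b)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) * 0.1   # state transition (hypothetical values)
B = rng.normal(size=(3, 2))         # input injection
b = np.zeros(3)
out = nais_block(np.zeros(3), np.ones(2), A, B, b)
```

The design point is that, unlike a plain residual stack, the state cannot drift arbitrarily far from the input: every step is anchored to `u`, which is what motivates the input-output stability analysis in the paper.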
In this paper we propose a novel method to refine both the geometry and the semantic labeling of a given mesh.
Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features.