Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, often hindering transfer performance.
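This failure mode of gradient descent in minimax problems can be illustrated on the classic bilinear game min_x max_y f(x, y) = x·y, where simultaneous gradient descent-ascent spirals away from the Nash equilibrium at the origin. This is a standard toy example, not the paper's experiment:

```python
# Toy illustration: simultaneous gradient descent-ascent on the
# bilinear game min_x max_y f(x, y) = x * y.
# The unique local Nash equilibrium is (0, 0), yet simultaneous
# updates move strictly AWAY from it: each step multiplies the
# squared norm x**2 + y**2 by (1 + lr**2), so the iterates spiral outward.

def simultaneous_gda(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        grad_x = y  # df/dx
        grad_y = x  # df/dy
        # Descent step on x, ascent step on y, applied simultaneously.
        x, y = x - lr * grad_x, y + lr * grad_y
    return x, y

x_final, y_final = simultaneous_gda(1.0, 1.0)
print(x_final**2 + y_final**2)  # about 5.41, up from the initial 2.0
```

The same non-convergent dynamics can appear whenever a single optimizer is applied to the coupled feature-extractor/discriminator updates of adversarial training.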
Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.
Thus, the computational cost to each user grows with the number of sources, and an expensive training step is required for each data provider. To address these issues, we propose the Scalable Neural Data Server (SNDS), a large-scale search engine that can, in principle, index thousands of datasets to serve relevant ML data to end users.
Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations.
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a related labeled dataset from a source domain.
We provide empirical results for several f-divergences and show that some divergences not previously considered in domain-adversarial learning achieve state-of-the-art results in practice.
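As a reminder of the underlying definition (not the paper's variational estimator), every convex generator f with f(1) = 0 induces a divergence D_f(P‖Q) = Σ_x q(x) f(p(x)/q(x)); for example, f(t) = t log t recovers the KL divergence. A minimal check on discrete distributions:

```python
import math

# Generic discrete f-divergence: D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)).
def f_divergence(p, q, f):
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

# Generator of the KL divergence: f(t) = t * log(t).
kl_generator = lambda t: t * math.log(t)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

kl_via_f = f_divergence(p, q, kl_generator)
kl_direct = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(abs(kl_via_f - kl_direct))  # numerically zero: both compute KL(P || Q)
```

Swapping in a different generator (e.g. f(t) = (t - 1)² for the Pearson χ² divergence) changes the induced divergence without changing the training recipe.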
NDS consists of a dataserver that indexes several large popular image datasets and aims to recommend data to a client: an end user with a target application and their own small labeled dataset.
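A hypothetical sketch of the server-side recommendation interface, assuming the dataserver keeps a feature summary per indexed dataset and ranks datasets by similarity to the client's summary. NDS's actual mechanism (scoring small expert models on client data) differs; all names below are placeholders:

```python
import math

# Cosine similarity between two feature vectors.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical server index: dataset name -> illustrative feature summary.
server_index = {
    "dataset_A": [0.9, 0.1, 0.0],
    "dataset_B": [0.1, 0.9, 0.1],
    "dataset_C": [0.5, 0.5, 0.5],
}

# Rank indexed datasets by similarity to the client's feature summary.
def recommend(client_summary, index, top_k=2):
    ranked = sorted(index,
                    key=lambda name: cosine(client_summary, index[name]),
                    reverse=True)
    return ranked[:top_k]

print(recommend([1.0, 0.0, 0.1], server_index))  # dataset_A ranked first
```

The key property this sketch shares with the described system is that the client only sends a compact summary, never its raw data.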
We propose Neural Turtle Graphics (NTG), a novel generative model for spatial graphs, and demonstrate its applications in modeling city road layouts.
Here, we propose a new two-stream CNN architecture for semantic segmentation that explicitly wires shape information into a separate processing branch, i.e., a shape stream, which processes information in parallel with the classical stream.
Training models to high performance requires the availability of large labeled datasets, which are expensive to obtain.
We further reason about true object boundaries during training using a level set formulation, which allows the network to learn from misaligned labels in an end-to-end fashion.
Moreover, synthetic SDR data combined with real KITTI data outperforms real KITTI data alone.
We present a system for training deep neural networks for object detection using synthetic images.