Search Results for author: David Acuna

Found 15 papers, 5 papers with code

Domain Adversarial Training: A Game Perspective

no code implementations • ICLR 2022 • David Acuna, Marc T Law, Guojun Zhang, Sanja Fidler

Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent can violate the asymptotic convergence guarantees of the optimizer, often hindering transfer performance.

Domain Adaptation
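
As a rough illustration of the game-theoretic framing above (the notation is mine, not the paper's): domain-adversarial training can be read as a two-player game between the feature extractor/classifier and the domain discriminator, and a local Nash equilibrium is a point at which neither player can improve by deviating within a small neighbourhood.

```latex
% Hedged sketch: local Nash equilibrium of a generic min-max objective L,
% with \theta the minimizing player and \phi the maximizing player.
% \mathcal{N}(\cdot) denotes a small neighbourhood of the equilibrium point.
\theta^{*} \in \operatorname*{arg\,min}_{\theta \in \mathcal{N}(\theta^{*})} L(\theta, \phi^{*}),
\qquad
\phi^{*} \in \operatorname*{arg\,max}_{\phi \in \mathcal{N}(\phi^{*})} L(\theta^{*}, \phi)
```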

Federated Learning with Heterogeneous Architectures using Graph HyperNetworks

no code implementations • 20 Jan 2022 • Or Litany, Haggai Maron, David Acuna, Jan Kautz, Gal Chechik, Sanja Fidler

Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.

Federated Learning

Scalable Neural Data Server: A Data Recommender for Transfer Learning

no code implementations • NeurIPS 2021 • Tianshi Cao, Sasha (Alexandre) Doubov, David Acuna, Sanja Fidler

Thus, the computational cost to each user grows with the number of sources, and an expensive training step is required for each data provider. To address these issues, we propose the Scalable Neural Data Server (SNDS), a large-scale search engine that can theoretically index thousands of datasets to serve relevant ML data to end users.

Transfer Learning

Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation

no code implementations • NeurIPS 2021 • David Acuna, Jonah Philion, Sanja Fidler

Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations.

Autonomous Driving • Domain Adaptation

f-Domain-Adversarial Learning: Theory and Algorithms

1 code implementation • 21 Jun 2021 • David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler

Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain and a related labeled dataset.

Learning Theory • Unsupervised Domain Adaptation
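
For context, the unsupervised domain adaptation setting described above can be stated as follows; this is the standard setup in my own notation, not an excerpt from the paper.

```latex
% Standard UDA setup (illustrative notation): labeled source sample,
% unlabeled target sample, and the target risk one would like to minimize.
S = \{(x_i^{s}, y_i^{s})\}_{i=1}^{n} \sim P, \qquad
T = \{x_j^{t}\}_{j=1}^{m} \sim Q_X, \qquad
\min_{h \in \mathcal{H}} \; R_Q(h) = \mathbb{E}_{(x,y)\sim Q}\!\left[\ell\!\left(h(x), y\right)\right]
```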

Complex Momentum for Optimization in Games

no code implementations • 16 Feb 2021 • Jonathan Lorraine, David Acuna, Paul Vicol, David Duvenaud

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum.
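
A minimal sketch of that idea under my own assumptions (the update rule, the complex value of beta, and all names below are placeholders, not the paper's algorithm): keep a complex-valued momentum buffer and apply only the real part of the resulting step to the real-valued parameters.

```python
import numpy as np

def complex_momentum_step(params, grads, buffers, lr=0.01,
                          beta=0.8 * np.exp(1j * np.pi / 8)):
    """One step of momentum SGD with a complex-valued momentum coefficient.

    Hypothetical sketch: `beta` and the momentum buffers are complex, and only
    the real part of each buffer is applied to the (real-valued) parameters.
    """
    new_params, new_buffers = [], []
    for p, g, m in zip(params, grads, buffers):
        m = beta * m + g            # complex momentum accumulation
        p = p - lr * np.real(m)     # apply only the real part of the step
        new_params.append(p)
        new_buffers.append(m)
    return new_params, new_buffers

# Usage sketch on the toy bilinear game min_x max_y x*y, where plain
# simultaneous gradient descent is known to spiral away from the equilibrium.
x, y = np.array(1.0), np.array(1.0)
mx = np.zeros_like(x, dtype=complex)
my = np.zeros_like(y, dtype=complex)
for _ in range(1000):
    gx, gy = y, -x   # d(xy)/dx = y; the sign flip on x turns descent into ascent for y
    (x,), (mx,) = complex_momentum_step([x], [gx], [mx])
    (y,), (my,) = complex_momentum_step([y], [gy], [my])
```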

f-Domain-Adversarial Learning: Theory and Algorithms for Unsupervised Domain Adaptation with Neural Networks

no code implementations • 1 Jan 2021 • David Acuna, Guojun Zhang, Marc T Law, Sanja Fidler

We provide empirical results for several f-divergences and show that some, not previously considered in domain-adversarial learning, achieve state-of-the-art results in practice.

Generalization Bounds • Learning Theory • +1
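
As general background on how f-divergences are made trainable in adversarial learning (this is the standard variational bound from the Nguyen et al. / f-GAN line of work, not necessarily the exact objective optimized in this paper): an auxiliary critic network T tightens a lower bound on the divergence between source and target feature distributions, which the feature extractor then minimizes.

```latex
% f-divergence and its variational lower bound (standard results, shown for context):
% f is convex with f(1) = 0; f^{*} is its convex conjugate.
% Example: f(t) = t \log t recovers the KL divergence.
D_f(P \,\|\, Q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx
\;\ge\; \sup_{T} \; \mathbb{E}_{x \sim P}[\, T(x) \,] - \mathbb{E}_{x \sim Q}[\, f^{*}(T(x)) \,]
```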

Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data

no code implementations • CVPR 2020 • Xi Yan, David Acuna, Sanja Fidler

NDS consists of a dataserver, which indexes several large popular image datasets and aims to recommend data to a client: an end user with a target application and its own small labeled dataset.

Image Classification • Instance Segmentation • +3

Neural Turtle Graphics for Modeling City Road Layouts

no code implementations • ICCV 2019 • Hang Chu, Daiqing Li, David Acuna, Amlan Kar, Maria Shugrina, Xinkai Wei, Ming-Yu Liu, Antonio Torralba, Sanja Fidler

We propose Neural Turtle Graphics (NTG), a novel generative model for spatial graphs, and demonstrate its applications in modeling city road layouts.

Gated-SCNN: Gated Shape CNNs for Semantic Segmentation

3 code implementations • ICCV 2019 • Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler

Here, we propose a new two-stream CNN architecture for semantic segmentation that explicitly wires shape information into a separate processing branch, i.e. a shape stream, which processes information in parallel to the classical stream.

Semantic Segmentation
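
A highly simplified sketch of the two-stream pattern described above, written as PyTorch-style pseudocode (the module sizes, the fusion step, and the absence of the actual gating layers are my simplifications; this is not the released Gated-SCNN code):

```python
import torch
import torch.nn as nn

class TwoStreamSegNet(nn.Module):
    """Toy two-stream segmentation net: a 'classical' texture stream plus a
    parallel 'shape' stream whose features are fused before classification.
    Purely illustrative; not the architecture, gating, or losses of Gated-SCNN."""

    def __init__(self, in_ch=3, num_classes=19):
        super().__init__()
        self.classical = nn.Sequential(               # regular segmentation stream
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.shape = nn.Sequential(                   # parallel shape/boundary stream
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.edge_head = nn.Conv2d(16, 1, 1)          # boundary map for extra supervision
        self.classifier = nn.Conv2d(64 + 16, num_classes, 1)

    def forward(self, x):
        f_cls = self.classical(x)                     # texture/context features
        f_shape = self.shape(x)                       # shape features
        edges = torch.sigmoid(self.edge_head(f_shape))
        logits = self.classifier(torch.cat([f_cls, f_shape], dim=1))
        return logits, edges

# logits, edges = TwoStreamSegNet()(torch.randn(1, 3, 128, 128))
```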

Meta-Sim: Learning to Generate Synthetic Datasets

no code implementations • ICCV 2019 • Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, Sanja Fidler

Training models to high-end performance requires the availability of large labeled datasets, which are expensive to obtain.

Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations

1 code implementation • CVPR 2019 • David Acuna, Amlan Kar, Sanja Fidler

We further reason about true object boundaries during training using a level set formulation, which allows the network to learn from misaligned labels in an end-to-end fashion.

Semantic Segmentation
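
For readers unfamiliar with level sets, the general idea referenced above is to represent a closed boundary implicitly as the zero level set of a function and to evolve that function rather than moving the curve directly; the equations below are the standard level-set formulation, not the paper's specific training objective.

```latex
% Standard level-set representation and evolution equation (shown for background only):
% F is a speed function; evolving \phi moves the implicit boundary C,
% which is how a noisy predicted boundary can be refined toward the true one.
C = \{\, x : \phi(x) = 0 \,\}, \qquad
\frac{\partial \phi}{\partial t} + F \,\lvert \nabla \phi \rvert = 0
```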
