Search Results for author: David Acuna

Found 21 papers, 5 papers with code

RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting

no code implementations • 16 Apr 2024 • Ashkan Mirzaei, Riccardo de Lutio, Seung Wook Kim, David Acuna, Jonathan Kelly, Sanja Fidler, Igor Gilitschenski, Zan Gojcic

In this work, we propose an approach for 3D scene inpainting -- the task of coherently replacing parts of the reconstructed scene with desired content.

3D Inpainting, Image Inpainting

Can Feedback Enhance Semantic Grounding in Large Vision-Language Models?

no code implementations • 9 Apr 2024 • Yuan-Hong Liao, Rafid Mahmood, Sanja Fidler, David Acuna

We find that if prompted appropriately, VLMs can utilize feedback both in a single step and iteratively, showcasing the potential of feedback as an alternative technique to improve grounding in internet-scale VLMs.

DreamTeacher: Pretraining Image Backbones with Deep Generative Models

no code implementations • ICCV 2023 • Daiqing Li, Huan Ling, Amlan Kar, David Acuna, Seung Wook Kim, Karsten Kreis, Antonio Torralba, Sanja Fidler

In this work, we introduce DreamTeacher, a self-supervised feature representation learning framework that utilizes generative networks for pre-training downstream image backbones; a toy feature-distillation sketch is given after this entry.

Knowledge Distillation, Representation Learning
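
As a rough illustration of the pre-training recipe described in this entry, the sketch below regresses a target backbone's features onto features from a frozen stand-in for a generative model. The networks, layer choice, and plain MSE objective are simplifying assumptions, not DreamTeacher's implementation.

```python
# Toy sketch (assumptions, not DreamTeacher's code): distill features from a
# frozen "generative" feature extractor into an image backbone via regression.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                      # student backbone being pre-trained
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 256, 3, padding=1),
)
teacher = nn.Conv2d(3, 256, 3, padding=1).eval()   # stand-in for frozen generative features
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def distill_step(images):
    with torch.no_grad():
        target = teacher(images)                       # "teacher" feature maps
    loss = F.mse_loss(backbone(images), target)        # feature-regression objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

print(distill_step(torch.randn(4, 3, 64, 64)))
```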

Bridging the Sim2Real gap with CARE: Supervised Detection Adaptation with Conditional Alignment and Reweighting

no code implementations • 9 Feb 2023 • Viraj Prabhu, David Acuna, Andrew Liao, Rafid Mahmood, Marc T. Law, Judy Hoffman, Sanja Fidler, James Lucas

Sim2Real domain adaptation (DA) research focuses on the constrained setting of adapting from a labeled synthetic source domain to an unlabeled or sparsely labeled real target domain.

Autonomous Driving, Domain Adaptation, +3

Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion

no code implementations • 19 Aug 2022 • Zian Wang, Wenzheng Chen, David Acuna, Jan Kautz, Sanja Fidler

In this work, we propose a neural approach that estimates the 5D HDR light field from a single image, and a differentiable object insertion formulation that enables end-to-end training with image-based losses that encourage realism.

Autonomous Driving, Lighting Estimation, +1

Scalable Neural Data Server: A Data Recommender for Transfer Learning

no code implementations • NeurIPS 2021 • Tianshi Cao, Sasha Doubov, David Acuna, Sanja Fidler

NDS uses a mixture of experts trained on data sources to estimate similarity between each source and the downstream task; a toy sketch of this idea is given after this entry.

Transfer Learning
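
To make the mixture-of-experts idea referenced above concrete, here is a toy sketch. It is not the NDS/SNDS method or code: the sources, the logistic-regression experts, and the accuracy-based scoring are invented stand-ins for the general recipe of fitting one lightweight expert per data source and recommending sources whose experts transfer best to the client's small labeled set.

```python
# Toy sketch (assumptions, not the paper's method): one lightweight expert per
# source dataset; sources are weighted by their experts' transfer accuracy on
# the client's small labeled dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sources = {                                    # hypothetical source datasets
    "driving_sim": (rng.normal(0.0, 1.0, (500, 16)), rng.integers(0, 2, 500)),
    "indoor_scenes": (rng.normal(2.0, 1.0, (500, 16)), rng.integers(0, 2, 500)),
}
client_X, client_y = rng.normal(0.0, 1.0, (50, 16)), rng.integers(0, 2, 50)

scores = {}
for name, (X, y) in sources.items():
    expert = LogisticRegression(max_iter=200).fit(X, y)   # expert for this source
    scores[name] = expert.score(client_X, client_y)       # transfer score on client data

weights = {name: s / sum(scores.values()) for name, s in scores.items()}
print(weights)   # recommend / sample source data in proportion to these weights
```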

Domain Adversarial Training: A Game Perspective

no code implementations • ICLR 2022 • David Acuna, Marc T. Law, Guojun Zhang, Sanja Fidler

Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering transfer performance. A minimal sketch of the standard domain-adversarial setup that this analysis concerns is given after this entry.

Domain Adaptation
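
For context, below is a minimal sketch of the standard gradient-reversal formulation of domain-adversarial training that the game-theoretic analysis above concerns. It shows the setup being analyzed, not the optimizers the paper proposes, and the network sizes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): domain-adversarial
# training with a gradient-reversal layer, trained with plain SGD + momentum.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flip the sign for the feature extractor

features = nn.Sequential(nn.Linear(64, 128), nn.ReLU())   # shared feature extractor
classifier = nn.Linear(128, 10)                            # label predictor (source labels)
discriminator = nn.Linear(128, 2)                          # domain classifier
opt = torch.optim.SGD(
    list(features.parameters()) + list(classifier.parameters())
    + list(discriminator.parameters()), lr=1e-2, momentum=0.9)
ce = nn.CrossEntropyLoss()

def step(x_src, y_src, x_tgt, lam=1.0):
    z_src, z_tgt = features(x_src), features(x_tgt)
    task_loss = ce(classifier(z_src), y_src)
    z_all = GradReverse.apply(torch.cat([z_src, z_tgt]), lam)
    d_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    domain_loss = ce(discriminator(z_all), d_labels)
    opt.zero_grad()
    (task_loss + domain_loss).backward()
    opt.step()

step(torch.randn(8, 64), torch.randint(0, 10, (8,)), torch.randn(8, 64))
```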

Federated Learning with Heterogeneous Architectures using Graph HyperNetworks

no code implementations • 20 Jan 2022 • Or Litany, Haggai Maron, David Acuna, Jan Kautz, Gal Chechik, Sanja Fidler

Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.

Federated Learning

Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation

no code implementations • NeurIPS 2021 • David Acuna, Jonah Philion, Sanja Fidler

Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations.

Autonomous Driving, Domain Adaptation

f-Domain-Adversarial Learning: Theory and Algorithms

1 code implementation • 21 Jun 2021 • David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler

Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain, and a related labeled dataset.

Learning Theory, Unsupervised Domain Adaptation

Complex Momentum for Optimization in Games

no code implementations • 16 Feb 2021 • Jonathan Lorraine, David Acuna, Paul Vicol, David Duvenaud

We generalize gradient descent with momentum for optimization in differentiable games to have complex-valued momentum.
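
As a rough numerical illustration of this idea (my reading of the abstract, not the paper's exact update rule or constants): give the heavy-ball momentum buffer a complex coefficient and apply only the real part of the update. On the bilinear min-max game f(x, y) = x*y, simultaneous gradient descent-ascent with positive real momentum moves away from the equilibrium, while a complex coefficient can contract toward it.

```python
# Toy sketch (assumptions, not the paper's code): heavy-ball momentum with a
# complex coefficient on the bilinear game min_x max_y f(x, y) = x * y.
import numpy as np

def game_grad(x, y):
    # Simultaneous gradient field: (df/dx, -df/dy).
    return np.array([y, -x])

def run(beta, alpha=0.1, steps=200):
    params = np.array([1.0, 1.0])        # start away from the equilibrium (0, 0)
    buf = np.zeros(2, dtype=complex)     # momentum buffer (allowed to be complex)
    for _ in range(steps):
        buf = beta * buf - game_grad(*params)
        params = params + alpha * np.real(buf)   # apply only the real part
    return np.linalg.norm(params)        # distance to the equilibrium

print(run(beta=0.9))                           # real momentum: distance grows
print(run(beta=0.9 * np.exp(1j * np.pi / 8)))  # complex momentum: distance shrinks
```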

f-Domain-Adversarial Learning: Theory and Algorithms for Unsupervised Domain Adaptation with Neural Networks

no code implementations • 1 Jan 2021 • David Acuna, Guojun Zhang, Marc T. Law, Sanja Fidler

We provide empirical results for several f-divergences and show that some, not considered previously in domain-adversarial learning, achieve state-of-the-art results in practice.

Generalization Bounds, Learning Theory, +1

Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data

no code implementations • CVPR 2020 • Xi Yan, David Acuna, Sanja Fidler

NDS consists of a dataserver, which indexes several large, popular image datasets, and aims to recommend data to a client: an end-user with a target application and its own small labeled dataset.

Image Classification, Instance Segmentation, +4

Neural Turtle Graphics for Modeling City Road Layouts

no code implementations • ICCV 2019 • Hang Chu, Daiqing Li, David Acuna, Amlan Kar, Maria Shugrina, Xinkai Wei, Ming-Yu Liu, Antonio Torralba, Sanja Fidler

We propose Neural Turtle Graphics (NTG), a novel generative model for spatial graphs, and demonstrate its applications in modeling city road layouts.
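
The "turtle graphics" part of this idea can be illustrated with a toy rollout: a sequence of local (turn, step) commands is unrolled into nodes and edges of a road graph. The command format and values below are invented for illustration; NTG predicts such local moves with a neural sequence model, which is not shown here.

```python
# Toy illustration (assumptions, not the paper's model): decode turtle-style
# (turn, step) commands into a simple road graph of nodes and edges.
import math

def rollout(commands, start=(0.0, 0.0), heading=0.0):
    nodes = [start]
    edges = []
    x, y = start
    for turn_deg, step in commands:
        heading += math.radians(turn_deg)               # turn the turtle
        x, y = x + step * math.cos(heading), y + step * math.sin(heading)
        nodes.append((x, y))                            # new road endpoint
        edges.append((len(nodes) - 2, len(nodes) - 1))  # road segment
    return nodes, edges

nodes, edges = rollout([(0, 10.0), (90, 10.0), (-45, 14.0)])
print(nodes, edges)
```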

Gated-SCNN: Gated Shape CNNs for Semantic Segmentation

4 code implementations • ICCV 2019 • Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler

Here, we propose a new two-stream CNN architecture for semantic segmentation that explicitly wires shape information into a separate processing branch, i.e. a shape stream, which processes information in parallel to the classical stream; a rough sketch of one way to gate information between the two streams is given after this entry.

Image Segmentation, Semantic Segmentation
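
As referenced above, here is a rough sketch of one way the shape stream could gate in cues from the classical stream while the two branches run in parallel. The layer structure, channel handling, and residual connection are assumptions for illustration, not Gated-SCNN's exact gated convolutional layer.

```python
# Rough sketch (assumptions, not the paper's implementation): a gating unit that
# modulates shape-stream features with an attention map computed from both streams.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedShapeFusion(nn.Module):
    def __init__(self, shape_ch, texture_ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(shape_ch + texture_ch, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(shape_ch, shape_ch, kernel_size=3, padding=1)

    def forward(self, shape_feat, texture_feat):
        # Resize classical-stream features to the shape stream's resolution.
        texture_feat = F.interpolate(texture_feat, size=shape_feat.shape[-2:],
                                     mode="bilinear", align_corners=False)
        alpha = self.gate(torch.cat([shape_feat, texture_feat], dim=1))  # attention gate
        return shape_feat + self.refine(shape_feat * alpha)              # gated residual update

fused = GatedShapeFusion(16, 64)(torch.randn(1, 16, 128, 128), torch.randn(1, 64, 32, 32))
```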

Meta-Sim: Learning to Generate Synthetic Datasets

no code implementations • ICCV 2019 • Amlan Kar, Aayush Prakash, Ming-Yu Liu, Eric Cameracci, Justin Yuan, Matt Rusiniak, David Acuna, Antonio Torralba, Sanja Fidler

Training models to high-end performance requires availability of large labeled datasets, which are expensive to get.

Devil is in the Edges: Learning Semantic Boundaries from Noisy Annotations

1 code implementation • CVPR 2019 • David Acuna, Amlan Kar, Sanja Fidler

We further reason about true object boundaries during training using a level set formulation, which allows the network to learn from misaligned labels in an end-to-end fashion.

Semantic Segmentation
