1 code implementation • CVPR 2024 • Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar
This DIMAT paradigm presents a new opportunity for future decentralized learning, enhancing its adaptability to real-world settings through sparse and lightweight communication and computation.
no code implementations • 4 Apr 2024 • Minh Pham, Kelly O. Marshall, Chinmay Hegde, Niv Cohen
Finally, we show that Diverse Inversion enables us to apply a Task Vector (TV) edit only to a subset of the model weights, enhancing the erasure capabilities while better maintaining the core functionality of the model.
no code implementations • 27 Feb 2024 • Raunak Manekar, Elisa Negrini, Minh Pham, Daniel Jacobs, Jaideep Srivastava, Stanley J. Osher, Jianwei Miao
Phase retrieval (PR) is fundamentally important in scientific imaging and is crucial for nanoscale techniques like coherent diffractive imaging (CDI).
1 code implementation • 7 Aug 2023 • Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde
To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.
1 code implementation • 3 Aug 2023 • Minh Pham, Kelly O. Marshall, Niv Cohen, Govind Mittal, Chinmay Hegde
Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public.
1 code implementation • 14 Jun 2023 • Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde
Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.
no code implementations • 17 Jun 2022 • Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde
We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.
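The snippet above refers to self-distillation, in which a student with the same architecture is trained against a teacher's softened predictions. As background, here is a minimal NumPy sketch of the standard Hinton-style distillation loss; the function name, temperature, and mixing weight are illustrative and not taken from this paper:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend of cross-entropy on hard labels and KL divergence to the
    teacher's temperature-softened predictions. In self-distillation
    the teacher is an earlier copy of the same model."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    # KL(p_t || p_s), rescaled by T^2 as is conventional for distillation.
    kd = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)) * T * T
    hard = softmax(student_logits)
    ce = -np.mean(np.log(hard[np.arange(len(labels)), labels]))
    return alpha * ce + (1 - alpha) * kd
```

With `alpha=0` and identical student and teacher logits, the loss is exactly zero, since the KL term vanishes.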
no code implementations • 1 Jun 2022 • Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho
Multi-head attention underpins the recent success of transformers, the state-of-the-art models for sequence modeling and beyond.
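For reference, multi-head attention splits the model dimension across heads, runs scaled dot-product attention in each, then concatenates and projects. A minimal NumPy sketch of this standard construction (not this paper's proposed variant):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Standard multi-head self-attention over a (T, d_model) input."""
    T, d = X.shape
    dh = d // num_heads                      # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)   # scaled dot products
        outs.append(softmax(scores) @ V[:, s])       # per-head attention
    return np.concatenate(outs, axis=-1) @ Wo        # concat + output proj

rng = np.random.default_rng(0)
T, d = 4, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) for _ in range(4))
Y = multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads=2)
print(Y.shape)  # (4, 8)
```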
no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde
Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
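Randomized smoothing certifies a classifier by replacing it with a majority vote over Gaussian-perturbed copies of the input. A minimal sketch of that smoothed prediction step (the toy base classifier and parameter values here are illustrative, and this omits the confidence-interval machinery a real certificate needs):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority-vote prediction of the smoothed classifier
    g(x) = argmax_c P(f(x + noise) = c), with noise ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([classifier(x + n) for n in noise])
    classes, counts = np.unique(preds, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0)
x = np.array([0.5, -1.0])
print(smoothed_predict(f, x))  # prints 1: class 1 wins the noisy vote
```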
no code implementations • 5 Mar 2021 • Anand Ramakrishnan, Minh Pham, Jacob Whitehill
For the task of face verification, we explore the utility of harnessing auxiliary facial emotion labels to impose explicit geometric constraints on the embedding space when training deep embedding models.
1 code implementation • 17 Jun 2018 • Stanley Osher, Bao Wang, Penghang Yin, Xiyang Luo, Farzin Barekat, Minh Pham, Alex Lin
We propose a class of very simple modifications of gradient descent and stochastic gradient descent.
no code implementations • 21 Oct 2017 • Penghang Yin, Minh Pham, Adam Oberman, Stanley Osher
In this paper, we propose an implicit gradient descent algorithm for the classic $k$-means problem.
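An implicit (backward-Euler) gradient step updates via x_new = x - lr * grad(x_new), i.e. the gradient is evaluated at the new point, which is equivalent to a proximal-point update. A generic 1-D illustration of one such step, solved by fixed-point iteration; this is not the paper's $k$-means algorithm, just the underlying update rule:

```python
def implicit_gd_step(grad, x, lr=0.5, iters=50):
    """One implicit gradient step: solve y = x - lr * grad(y) by
    fixed-point iteration (converges when lr * Lipschitz(grad) < 1).
    Equivalent to argmin_y f(y) + ||y - x||^2 / (2 * lr)."""
    y = x
    for _ in range(iters):
        y = x - lr * grad(y)
    return y

# For f(x) = x^2 / 2 (grad = identity) the step has the closed
# form x / (1 + lr), so from x = 1.0 with lr = 0.5 we get 2/3.
x_new = implicit_gd_step(lambda z: z, 1.0)
print(x_new)  # ≈ 0.6667
```

Unlike the explicit step x - lr * grad(x), the implicit step remains stable for large learning rates, which is part of its appeal.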