Search Results for author: Minh Pham

Found 12 papers, 5 papers with code

DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models

1 code implementation • 11 Apr 2024 • Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar

This DIMAT paradigm presents a new opportunity for future decentralized learning, enhancing its adaptability to real-world settings through sparse and lightweight communication and computation.
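
The core pattern behind merging-and-training is to alternate independent local training on each agent with a lightweight merge step over a sparse communication graph. The sketch below only illustrates that loop under simplifying assumptions: plain parameter averaging stands in for DIMAT's actual merging procedure, and `local_train`, `graph`, and `decentralized_round` are hypothetical names rather than the paper's API.

```python
# Illustrative sketch only: "train locally, then merge with neighbors".
# Plain parameter averaging replaces DIMAT's real merging step.
import torch

def merge_with_neighbors(models, graph):
    """Average each agent's parameters with those of its neighbors.

    `graph` maps an agent index to a list of neighbor indices (sparse topology).
    """
    with torch.no_grad():
        snapshots = [{n: p.detach().clone() for n, p in m.named_parameters()}
                     for m in models]
        for i, model in enumerate(models):
            group = [i] + list(graph[i])
            for name, param in model.named_parameters():
                param.copy_(torch.stack([snapshots[j][name] for j in group]).mean(0))

def decentralized_round(models, loaders, graph, local_train, local_steps=100):
    """One round: independent local training, then a lightweight merge step."""
    for model, loader in zip(models, loaders):
        local_train(model, loader, local_steps)   # assumed user-supplied training loop
    merge_with_neighbors(models, graph)           # sparse communication with neighbors
```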

Robust Concept Erasure Using Task Vectors

no code implementations • 4 Apr 2024 • Minh Pham, Kelly O. Marshall, Chinmay Hegde, Niv Cohen

Finally, we show that Diverse Inversion enables us to apply a TV edit only to a subset of the model weights, enhancing the erasure capabilities while better maintaining the core functionality of the model.

Word Embeddings
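
The excerpt above refers to applying a task-vector (TV) edit to only a subset of the model weights. As a rough illustration of the general task-vector idea, subtracting a scaled difference between fine-tuned and base weights restricted to selected parameters, here is a hedged sketch; the layer-selection rule, function names, and the value of `alpha` are assumptions, not the paper's Diverse Inversion procedure.

```python
# Hedged sketch of a task-vector (TV) edit restricted to a subset of weights.
# The task vector is the difference between fine-tuned and base parameters;
# erasing a concept subtracts a scaled copy of it. Choosing which weights to
# edit is the paper's contribution; a simple name filter stands in here.
import torch

def apply_partial_task_vector(base_state, finetuned_state, keys_to_edit, alpha=1.0):
    """Return edited weights: base - alpha * (finetuned - base), only on `keys_to_edit`."""
    edited = {}
    for name, base_param in base_state.items():
        if name in keys_to_edit:
            task_vector = finetuned_state[name] - base_param
            edited[name] = base_param - alpha * task_vector
        else:
            edited[name] = base_param.clone()
    return edited

# Hypothetical usage: edit only cross-attention projection weights.
# keys = {k for k in base_state if "attn2" in k and k.endswith(".weight")}
# model.load_state_dict(apply_partial_task_vector(base_state, ft_state, keys))
```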

LoDIP: Low light phase retrieval with deep image prior

no code implementations • 27 Feb 2024 • Raunak Manekar, Elisa Negrini, Minh Pham, Daniel Jacobs, Jaideep Srivastava

Phase retrieval (PR) is a fundamental problem in scientific imaging that underlies nanoscale techniques such as coherent diffractive imaging (CDI).

Retrieval Time Series

Distributionally Robust Classification on a Data Budget

1 code implementation • 7 Aug 2023 • Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde

To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.

Classification Image Classification +1

Circumventing Concept Erasure Methods For Text-to-Image Generative Models

1 code implementation • 3 Aug 2023 • Minh Pham, Kelly O. Marshall, Niv Cohen, Govind Mittal, Chinmay Hegde

Text-to-image generative models can produce photo-realistic images for an extremely broad range of concepts, and their usage has proliferated widely among the general public.

Face Swapping Word Embeddings

ZeroForge: Feedforward Text-to-Shape Without 3D Supervision

1 code implementation • 14 Jun 2023 • Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde

Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.

Text-to-Shape Generation

Revisiting Self-Distillation

no code implementations • 17 Jun 2022 • Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde

We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.

Knowledge Distillation Model Compression
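
Self-distillation trains a student with the same architecture as its teacher on a mix of ground-truth labels and the teacher's softened predictions. The snippet below shows the standard distillation loss this setup typically uses; it is a generic illustration rather than the paper's experimental protocol, and the temperature `T` and mixing weight `alpha` are arbitrary choices.

```python
# Generic self-distillation loss: cross-entropy on hard labels plus a
# temperature-scaled KL term toward the (frozen) teacher's predictions.
# In self-distillation the teacher and student share the same architecture.
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """`alpha` weights the distillation term; `T` softens both distributions."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # the usual T^2 factor keeps gradient magnitudes comparable
    return (1 - alpha) * ce + alpha * kd

# Usage sketch: the teacher is a frozen copy of a previously trained model;
# a fresh student of the same architecture is trained against this loss.
```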

Transformer with Fourier Integral Attentions

no code implementations • 1 Jun 2022 • Tan Nguyen, Minh Pham, Tam Nguyen, Khai Nguyen, Stanley J. Osher, Nhat Ho

Multi-head attention underpins the recent success of transformers, the state-of-the-art models that have achieved remarkable results in sequence modeling and beyond.

Image Classification Language Modelling +1

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness

no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
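
Randomized smoothing predicts the class that a base classifier outputs most often when Gaussian noise is added to the input. The sketch below shows only that basic smoothed-prediction step; it omits the statistical certification test and the patch-based Smooth-Reduce aggregation the paper builds on top, and all names and constants are illustrative.

```python
# Basic randomized-smoothing prediction: add Gaussian noise to the input many
# times and take a majority vote over the base classifier's decisions.
# (Certifying a robustness radius requires an extra statistical test, omitted.)
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=100, batch_size=50):
    """Majority-vote prediction of the smoothed classifier at input x."""
    num_classes = model(x.unsqueeze(0)).shape[-1]
    counts = torch.zeros(num_classes, dtype=torch.long, device=x.device)
    done = 0
    while done < n_samples:
        b = min(batch_size, n_samples - done)
        noisy = x.unsqueeze(0) + sigma * torch.randn(b, *x.shape, device=x.device)
        preds = model(noisy).argmax(dim=-1)
        counts += torch.bincount(preds, minlength=num_classes)
        done += b
    return counts.argmax().item()
```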

Harnessing Geometric Constraints from Emotion Labels to improve Face Verification

no code implementations • 5 Mar 2021 • Anand Ramakrishnan, Minh Pham, Jacob Whitehill

For the task of face verification, we explore the utility of harnessing auxiliary facial emotion labels to impose explicit geometric constraints on the embedding space when training deep embedding models.

Face Verification Multi-Task Learning +1

Laplacian Smoothing Gradient Descent

1 code implementation • 17 Jun 2018 • Stanley Osher, Bao Wang, Penghang Yin, Xiyang Luo, Farzin Barekat, Minh Pham, Alex Lin

We propose a class of very simple modifications of gradient descent and stochastic gradient descent.
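
The modification in question replaces the gradient g in each update with the solution of (I − σL)d = g, where L is a discrete one-dimensional Laplacian with periodic boundary conditions; because that matrix is circulant, the solve reduces to a single FFT. Below is a minimal sketch of the smoothing step; flattening each parameter tensor into one vector and the value of `sigma` are illustrative choices, not prescriptions from the paper.

```python
# Hedged sketch of Laplacian gradient smoothing: step along (I - sigma*L)^{-1} g
# instead of g, where L is the periodic 1D discrete Laplacian. The circulant
# structure makes the linear solve a pointwise division in Fourier space.
import torch

def laplacian_smooth(grad, sigma=1.0):
    """Return (I - sigma * L)^{-1} grad for the flattened gradient vector."""
    g = grad.reshape(-1)
    n = g.numel()
    # Circulant stencil of the periodic 1D Laplacian: [-2, 1, 0, ..., 0, 1]
    stencil = torch.zeros(n, dtype=g.dtype, device=g.device)
    stencil[0], stencil[1], stencil[-1] = -2.0, 1.0, 1.0
    denom = 1.0 - sigma * torch.fft.fft(stencil)          # eigenvalues of I - sigma*L
    smoothed = torch.fft.ifft(torch.fft.fft(g) / denom).real
    return smoothed.reshape(grad.shape)

# Usage sketch (plain SGD with a smoothed gradient):
# for p in model.parameters():
#     p.data.add_(laplacian_smooth(p.grad), alpha=-lr)
```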

Stochastic Backward Euler: An Implicit Gradient Descent Algorithm for $k$-means Clustering

no code implementations • 21 Oct 2017 • Penghang Yin, Minh Pham, Adam Oberman, Stanley Osher

In this paper, we propose an implicit gradient descent algorithm for the classic $k$-means problem.

Clustering
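
An implicit (backward Euler) gradient step defines the next iterate through an equation it itself appears in, x_{k+1} = x_k − γ∇f(x_{k+1}), which is typically solved approximately by an inner fixed-point loop. The sketch below shows only that generic implicit step; the paper's stochastic, mini-batched variant for the k-means objective is not reproduced, and all names and constants are illustrative.

```python
# Generic backward-Euler (implicit) gradient step solved by fixed-point iteration:
# find y such that y = x - gamma * grad_f(y), then take y as the next iterate.
import torch

def backward_euler_step(x, grad_f, gamma=0.1, inner_iters=20, tol=1e-8):
    """Approximately solve y = x - gamma * grad_f(y) by fixed-point iteration."""
    y = x.clone()
    for _ in range(inner_iters):
        y_next = x - gamma * grad_f(y)
        if torch.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y

# For k-means, x would hold the centroid matrix and grad_f the gradient of the
# (mini-batch) k-means objective with respect to the centroids.
```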
