no code implementations • 27 May 2024 • Florian Bordes, Richard Yuanzhe Pang, Anurag Ajay, Alexander C. Li, Adrien Bardes, Suzanne Petryk, Oscar Mañas, Zhiqiu Lin, Anas Mahmoud, Bargav Jayaraman, Mark Ibrahim, Melissa Hall, Yunyang Xiong, Jonathan Lebensold, Candace Ross, Srihari Jayakumar, Chuan Guo, Diane Bouchacourt, Haider Al-Tahan, Karthik Padthe, Vasu Sharma, Hu Xu, Xiaoqing Ellen Tan, Megan Richards, Samuel Lavoie, Pietro Astolfi, Reyhane Askari Hemmat, Jun Chen, Kushal Tirumala, Rim Assouel, Mazda Moayeri, Arjang Talattof, Kamalika Chaudhuri, Zechun Liu, Xilun Chen, Quentin Garrido, Karen Ullrich, Aishwarya Agrawal, Kate Saenko, Asli Celikyilmaz, Vikas Chandra
Then, we present and discuss approaches to evaluate VLMs.
1 code implementation • 21 Apr 2024 • Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
While Large Language Models (LLMs) have recently achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to the generation of inappropriate or harmful content.
1 code implementation • 3 Apr 2024 • Kamalika Chaudhuri, Chuan Guo, Laurens van der Maaten, Saeed Mahloujifar, Mark Tygert
The HCR bounds appear to be insufficient on their own to guarantee confidentiality of the inputs to inference with standard deep neural nets (ResNet-18 and Swin-T) pre-trained on the ImageNet-1000 data set, which contains 1000 classes.
1 code implementation • 21 Mar 2024 • Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero-Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo
Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of images that they are trained on, which may be undesirable.
no code implementations • 7 Mar 2024 • Shengyuan Hu, Saeed Mahloujifar, Virginia Smith, Kamalika Chaudhuri, Chuan Guo
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
no code implementations • 4 Mar 2024 • Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Durmus, Yi Ma, Kamalika Chaudhuri, Chuan Guo
In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets.
no code implementations • 3 Feb 2024 • Bargav Jayaraman, Chuan Guo, Kamalika Chaudhuri
Vision-Language Models (VLMs) have emerged as the state-of-the-art representation learning solution, with myriads of downstream applications such as image classification, retrieval and generation.
no code implementations • 24 Jan 2024 • Chuan Guo, Yuxuan Mu, Xinxin Zuo, Peng Dai, Youliang Yan, Juwei Lu, Li Cheng
Building upon this, we present a novel generative model that produces diverse stylization results of a single motion (latent) code.
1 code implementation • 20 Jan 2024 • Nhat M. Hoang, Kehong Gong, Chuan Guo, Michael Bi Mi
Specifically, we separate the denoising objectives of a diffusion model into two stages: obtaining conditional rough motion approximations in the initial $T-T^*$ steps by learning the noisy annotated motions, followed by the unconditional refinement of these preliminary motions during the last $T^*$ steps using unannotated motions.
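The two-stage split described above can be sketched as a simple objective selector over the diffusion timestep; the function and argument names below are hypothetical, assuming only the described split at step $T^*$:

```python
def denoising_objective(t, T, T_star, annotated_batch, unannotated_batch):
    """Pick the training objective for diffusion step t (1 <= t <= T).

    The initial T - T* steps (high noise, t > T*) learn conditional rough
    motion approximations from annotated motions; the final T* steps
    (t <= T*) refine motions unconditionally using unannotated motions.
    """
    if t > T_star:
        motion, text = annotated_batch        # conditional stage
        return ("conditional", motion, text)
    motion = unannotated_batch                # unconditional refinement
    return ("unconditional", motion, None)
```

This makes explicit that the text condition is only consumed during the first $T - T^*$ noisy steps, while the last $T^*$ steps can exploit motion data that has no annotations.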
1 code implementation • 29 Nov 2023 • Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, Li Cheng
For the base-layer motion tokens, a Masked Transformer is designed to predict randomly masked motion tokens conditioned on text input during the training stage.
Ranked #1 on Motion Synthesis on HumanML3D
no code implementations • 4 Aug 2023 • Ruihan Wu, Chuan Guo, Kamalika Chaudhuri
In this work, we look at how to use generic large-scale public data to improve the quality of differentially private image generation in Generative Adversarial Networks (GANs), and provide an improved method that uses public data effectively.
1 code implementation • 15 Jun 2023 • Yaodong Yu, Maziar Sanjabi, Yi Ma, Kamalika Chaudhuri, Chuan Guo
In this work, we propose as a mitigation measure a recipe to train foundation vision models with differential privacy (DP) guarantee.
1 code implementation • 12 Jun 2023 • Yuqing Zhu, Xuandong Zhao, Chuan Guo, Yu-Xiang Wang
Most existing approaches of differentially private (DP) machine learning focus on private training.
no code implementations • 5 Jun 2023 • Trishita Tiwari, Suchin Gururangan, Chuan Guo, Weizhe Hua, Sanjay Kariyappa, Udit Gupta, Wenjie Xiong, Kiwan Maeng, Hsien-Hsin S. Lee, G. Edward Suh
In today's machine learning (ML) models, any part of the training data can affect the model's output.
1 code implementation • NeurIPS 2023 • Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo
Self-supervised learning (SSL) algorithms can produce useful image representations by learning to associate different parts of natural images with one another.
1 code implementation • ICCV 2023 • Kehong Gong, Dongze Lian, Heng Chang, Chuan Guo, Zihang Jiang, Xinxin Zuo, Michael Bi Mi, Xinchao Wang
We propose a novel task for generating 3D dance movements that simultaneously incorporate both text and music modalities.
1 code implementation • 8 Nov 2022 • Chuan Guo, Kamalika Chaudhuri, Pierre Stock, Mike Rabbat
In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model.
no code implementations • 24 Oct 2022 • Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi
Differential privacy (DP) is by far the most widely accepted framework for mitigating privacy risks in machine learning.
1 code implementation • 19 Oct 2022 • Ruihan Wu, Xiangyu Chen, Chuan Guo, Kilian Q. Weinberger
Gradient inversion attack enables recovery of training samples from model gradients in federated learning (FL), and constitutes a serious threat to data privacy.
no code implementations • 21 Sep 2022 • Kiwan Maeng, Chuan Guo, Sanjay Kariyappa, Edward Suh
Split learning and split inference run the training/inference of a large model by splitting it across client devices and the cloud.
no code implementations • 12 Sep 2022 • Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. Edward Suh, Moinuddin K Qureshi, Hsien-Hsin S. Lee
Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners.
1 code implementation • 4 Jul 2022 • Chuan Guo, Xinxin Zuo, Sen Wang, Li Cheng
Our approach is flexible and can be used for both text2motion and motion2text tasks.
Ranked #3 on Motion Captioning on HumanML3D
1 code implementation • ICLR 2022 • Wei Ji, Jingjing Li, Qi Bi, Chuan Guo, Jie Liu, Li Cheng
The laborious and time-consuming manual annotation has become a real bottleneck in various practical scenarios.
no code implementations • 25 Mar 2022 • Elvis Dohmatob, Chuan Guo, Morgane Goibert
Finally, we show that if a decision-region is compact, then it admits a universal adversarial perturbation with $L_2$ norm which is $\sqrt{d}$ times smaller than the typical $L_2$ norm of a data point.
1 code implementation • 15 Mar 2022 • Kamalika Chaudhuri, Chuan Guo, Mike Rabbat
Federated data analytics is a framework for distributed data analysis where a server compiles noisy responses from a group of distributed low-bandwidth user devices to estimate aggregate statistics.
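For illustration only (not this paper's mechanism), here is the classical randomized-response pattern for the setting just described: each device locally noises its bit, and the server debiases the average of the noisy responses to recover an aggregate statistic. The parameter `eps` is the local privacy budget:

```python
import math
import random

def randomize(bit, eps):
    """Each device reports its private bit truthfully with probability
    e^eps / (e^eps + 1), otherwise flips it (local differential privacy)."""
    p_true = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def estimate_mean(noisy_bits, eps):
    """Server debiases the average of noisy responses to estimate the
    true fraction of 1s among the devices."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    avg = sum(noisy_bits) / len(noisy_bits)
    return (avg - (1.0 - p)) / (2.0 * p - 1.0)
```

The debiasing step uses that the expected noisy average equals $(1-p) + \mu(2p-1)$ for true mean $\mu$, so the server inverts that affine map.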
1 code implementation • 25 Feb 2022 • Ruihan Wu, Jin Peng Zhou, Kilian Q. Weinberger, Chuan Guo
Label differential privacy (label-DP) is a popular framework for training private ML models on datasets with public features and sensitive private labels.
1 code implementation • 28 Jan 2022 • Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten
Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks.
no code implementations • 4 Jan 2022 • Antonio Ginart, Laurens van der Maaten, James Zou, Chuan Guo
Recent data-extraction attacks have exposed that language models can memorize some training samples verbatim.
1 code implementation • CVPR 2022 • Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, Li Cheng
Automated generation of 3D human motions from text is a challenging problem.
Ranked #6 on Motion Synthesis on InterHuman
1 code implementation • NeurIPS 2021 • Yiyou Sun, Chuan Guo, Yixuan Li
Out-of-distribution (OOD) detection has received much attention lately due to its practical importance in enhancing the safe deployment of neural networks.
Ranked #13 on Out-of-Distribution Detection on ImageNet-1k vs SUN
1 code implementation • ICLR 2022 • Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles
The vulnerability of machine learning models to membership inference attacks has received much attention in recent years.
no code implementations • 12 Nov 2021 • Chuan Guo, Xinxin Zuo, Sen Wang, Xinshuang Liu, Shihao Zou, Minglun Gong, Li Cheng
Action2motion stochastically generates plausible 3D pose sequences of a prescribed action category, which are processed and rendered by motion2video to form 2D videos.
no code implementations • NeurIPS 2021 • Weizhe Hua, Yichi Zhang, Chuan Guo, Zhiru Zhang, G. Edward Suh
Neural network robustness has become a central topic in machine learning in recent years.
1 code implementation • 15 Aug 2021 • Shihao Zou, Xinxin Zuo, Sen Wang, Yiming Qian, Chuan Guo, Li Cheng
This paper focuses on a new problem of estimating human pose and shape from single polarization images.
1 code implementation • ICCV 2021 • Shihao Zou, Chuan Guo, Xinxin Zuo, Sen Wang, Pengyu Wang, Xiaoqin Hu, Shoushun Chen, Minglun Gong, Li Cheng
The event camera is an emerging imaging sensor for capturing the dynamics of moving objects as events, which motivates our work in estimating 3D human pose and shape from the event signals.
no code implementations • NeurIPS 2021 • Ruihan Wu, Chuan Guo, Yi Su, Kilian Q. Weinberger
Machine learning models often encounter distribution shifts when deployed in the real world.
no code implementations • 5 May 2021 • Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram
This learning setting presents, among others, two unique challenges: how to protect privacy of the clients' data during training, and how to ensure integrity of the trained model.
2 code implementations • EMNLP 2021 • Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela
We propose the first general-purpose gradient-based attack against transformer models.
1 code implementation • NeurIPS 2021 • Ruihan Wu, Chuan Guo, Awni Hannun, Laurens van der Maaten
Machine-learning systems such as self-driving cars or virtual assistants are composed of a large number of machine-learning models that recognize image content, transcribe speech, analyze natural language, infer preferences, rank options, etc.
1 code implementation • 23 Feb 2021 • Awni Hannun, Chuan Guo, Laurens van der Maaten
This information leaks either through the model itself or through predictions made by the model.
1 code implementation • 9 Feb 2021 • Ruihan Wu, Chuan Guo, Felix Wu, Rahul Kidambi, Laurens van der Maaten, Kilian Q. Weinberger
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
1 code implementation • 30 Jul 2020 • Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, Li Cheng
Action recognition is a relatively established task, where given an input sequence of human motion, the goal is to predict its action category.
no code implementations • 30 Apr 2020 • Shihao Zou, Xinxin Zuo, Yiming Qian, Sen Wang, Chuan Guo, Chi Xu, Minglun Gong, Li Cheng
Polarization images are known to be able to capture polarized reflected light that preserves rich geometric cues of an object, which has motivated their recent application in reconstructing detailed surface normals of objects of interest.
no code implementations • 24 Feb 2020 • Chuan Guo, Ruihan Wu, Kilian Q. Weinberger
Modern neural networks often contain significantly more parameters than the size of their training data.
no code implementations • 9 Jan 2020 • Chuan Guo, Awni Hannun, Brian Knott, Laurens van der Maaten, Mark Tygert, Ruiyu Zhu
Secure multiparty computations enable the distribution of so-called shares of sensitive data to multiple parties, such that the parties can effectively process the data while being unable to glean much information about it (at least not without collusion among all parties to reassemble the shares).
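A toy illustration of the share idea is additive secret sharing over a finite ring: any strict subset of shares is uniformly random, yet all shares together reconstruct the secret, and some operations (such as addition) can be done share-wise. This is a generic sketch, not this paper's construction:

```python
import random

MOD = 2**32  # shares live in the ring of integers mod 2^32

def make_shares(secret, n_parties):
    """Split a secret into n additive shares: any n-1 shares look
    uniformly random, but all n sum back to the secret mod 2^32."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

def add_shared(shares_a, shares_b):
    """Addition of two secrets done share-wise, without reconstruction."""
    return [(a + b) % MOD for a, b in zip(shares_a, shares_b)]
```

Each party holds one share per secret and can locally compute its share of the sum; only when all output shares are combined does the result (and nothing else) become visible.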
no code implementations • ICLR 2020 • Chuan Guo, Ruihan Wu, Kilian Q. Weinberger
The complexity of large-scale neural networks can lead to poor understanding of their internal details.
no code implementations • NeurIPS 2019 • Chuan Guo, Ali Mousavi, Xiang Wu, Daniel N. Holtmann-Rice, Satyen Kale, Sashank Reddi, Sanjiv Kumar
In extreme classification settings, embedding-based neural network models are currently not competitive with sparse linear and tree-based methods in terms of accuracy.
1 code implementation • ICML 2020 • Chuan Guo, Tom Goldstein, Awni Hannun, Laurens van der Maaten
Good data stewardship requires removal of data at the request of the data's owner.
1 code implementation • NeurIPS 2019 • Tao Yu, Shengyuan Hu, Chuan Guo, Wei-Lun Chao, Kilian Q. Weinberger
Natural images are virtually surrounded by low-density misclassified regions that can be efficiently discovered by gradient-guided search, enabling the generation of adversarial images.
4 code implementations • ICLR 2019 • Chuan Guo, Jacob R. Gardner, Yurong You, Andrew Gordon Wilson, Kilian Q. Weinberger
We propose an intriguingly simple method for the construction of adversarial images in the black-box setting.
1 code implementation • 24 Sep 2018 • Chuan Guo, Jared S. Frank, Kilian Q. Weinberger
In this paper we propose to restrict the search for adversarial images to a low frequency domain.
4 code implementations • ICLR 2018 • Qiantong Xu, Gao Huang, Yang Yuan, Chuan Guo, Yu Sun, Felix Wu, Kilian Weinberger
Evaluating generative adversarial networks (GANs) is inherently challenging.
1 code implementation • ICLR 2018 • Chuan Guo, Mayank Rana, Moustapha Cisse, Laurens van der Maaten
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system.
17 code implementations • ICML 2017 • Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications.
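The post-hoc fix this paper popularized, temperature scaling, divides the logits by a single scalar $T$ tuned on a validation set, softening overconfident predictions without changing the predicted class. A minimal dependency-free sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature: T > 1 softens overconfident predictions
    without changing the argmax, so accuracy is unaffected."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

In practice $T$ is chosen to minimize negative log-likelihood on held-out data, after which the reported confidences track the true correctness likelihood much more closely.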
1 code implementation • NeurIPS 2016 • Gao Huang, Chuan Guo, Matt J. Kusner, Yu Sun, Fei Sha, Kilian Q. Weinberger
Accurately measuring the similarity between text documents lies at the core of many real world applications of machine learning.