1 code implementation • ICML 2020 • Zinan Lin, Kiran Thekumparampil, Giulia Fanti, Sewoong Oh
This contrastive regularizer is inspired by a natural notion of disentanglement: latent traversal.
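To make the idea concrete, here is a minimal PyTorch sketch of a traversal-based contrastive term, assuming a generator and a pairwise critic; the names and the exact form of the regularizer are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: generate a pair of samples that differ in a single
# latent dimension, then train a critic to identify which dimension was
# traversed. A generator that disentangles well makes this task easy.
def contrastive_traversal_loss(generator, critic, batch_size, latent_dim):
    z1 = torch.randn(batch_size, latent_dim)
    k = torch.randint(latent_dim, (batch_size,))               # traversed dim
    z2 = z1.clone()
    z2[torch.arange(batch_size), k] = torch.randn(batch_size)  # resample dim k
    x1, x2 = generator(z1), generator(z2)
    logits = critic(x1, x2)                    # (batch, latent_dim) scores
    return F.cross_entropy(logits, k)          # predict the traversed dim
```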
1 code implementation • 22 Dec 2024 • Enshu Liu, Xuefei Ning, Yu Wang, Zinan Lin
As the first work to demonstrate the possibility of one-step generation for image AR models, DD challenges the prevailing notion that AR models are inherently slow, and opens up new opportunities for efficient AR generation.
no code implementations • 5 Dec 2024 • Kaiyi Huang, Yukun Huang, Xuefei Ning, Zinan Lin, Yu Wang, Xihui Liu
To mitigate hallucination from a single MLLM agent, we decompose this stage into four sequentially executed MLLM-based agents: a verification agent, a suggestion agent, a correction agent, and an output structuring agent.
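A hedged sketch of such a sequential pipeline, where `call_mllm` and the prompt strings are hypothetical stand-ins rather than the paper's actual agents:

```python
# Illustrative four-stage agent chain: each agent consumes the previous
# agent's output, so errors can be caught and repaired before the final
# structured answer is produced.
def evaluate_stage(video, raw_judgment, call_mllm):
    verified = call_mllm("Verify this judgment against the video.",
                         video, raw_judgment)
    suggestions = call_mllm("Suggest corrections to the verified judgment.",
                            video, verified)
    corrected = call_mllm("Apply the suggested corrections.",
                          video, suggestions)
    return call_mllm("Format the final judgment as structured JSON.",
                     video, corrected)
```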
1 code implementation • 12 Nov 2024 • Chengquan Guo, Xun Liu, Chulin Xie, Andy Zhou, Yi Zeng, Zinan Lin, Dawn Song, Bo Li
To provide comprehensive and practical evaluations on the safety of code agents, we propose RedCode, a benchmark for risky code execution and generation: (1) RedCode-Exec provides challenging prompts that could lead to risky code execution, aiming to evaluate code agents' ability to recognize and handle unsafe code.
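For illustration, a hypothetical evaluation loop in the spirit of RedCode-Exec; the `agent` and `sandbox` interfaces here are assumptions, not the benchmark's actual API:

```python
# Illustrative harness: run each risky prompt against a code agent inside an
# isolated sandbox and record whether the agent executed, modified, or
# rejected the unsafe code. All names are hypothetical.
def evaluate_agent(agent, prompts, sandbox):
    results = []
    for prompt in prompts:
        action = agent.respond(prompt)       # code the agent decides to run
        outcome = sandbox.run(action)        # executed in isolation
        results.append({"prompt": prompt, "outcome": outcome.category})
    return results
```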
1 code implementation • 1 Jul 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
For example, we demonstrate that pruning up to 75% of experts in Mixtral $8\times7$B-Instruct results in a substantial reduction in parameters with minimal performance loss.
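One simple way such pruning can work (the paper's actual criterion may differ) is to score experts by how often the router selects them on calibration data and keep only the top fraction:

```python
import torch

# Hedged sketch of expert pruning mechanics: rank experts by routing
# frequency on a calibration pass and retain the most-used ones.
def select_experts(router_logits, keep_ratio=0.25):
    # router_logits: (num_tokens, num_experts) from calibration data
    num_experts = router_logits.shape[1]
    counts = router_logits.argmax(dim=-1).bincount(minlength=num_experts)
    num_keep = max(1, int(keep_ratio * num_experts))
    return counts.topk(num_keep).indices   # indices of experts to retain
```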
1 code implementation • 20 Jun 2024 • Xuefei Ning, Zifu Wang, Shiyao Li, Zinan Lin, Peiran Yao, Tianyu Fu, Matthew B. Blaschko, Guohao Dai, Huazhong Yang, Yu Wang
We report several findings: (1) Teaching materials that make it easier for students to learn tend to have clearer and more accurate logic, when in-context learning is used as the student's "learning" method; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching one student or the teacher itself.
1 code implementation • 4 Jun 2024 • Tianchen Zhao, Tongcheng Fang, Enshu Liu, Rui Wan, Widyadewi Soedarmadji, Shiyao Li, Zinan Lin, Guohao Dai, Shengen Yan, Huazhong Yang, Xuefei Ning, Yu Wang
Diffusion transformers (DiTs) have exhibited remarkable performance in visual generation tasks, such as generating realistic images or videos based on textual instructions.
1 code implementation • 30 May 2024 • Sangyun Lee, Zinan Lin, Giulia Fanti
In this work, we propose improved techniques for training rectified flows, allowing them to compete with \emph{knowledge distillation} methods even in the low NFE setting.
Ranked #21 on Image Generation on ImageNet 64x64
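For context, the standard rectified-flow objective that such training techniques build on regresses a velocity field on straight-line interpolants between noise and data; a minimal PyTorch sketch:

```python
import torch

# Base rectified-flow loss (the paper's contributions are improved training
# techniques on top of this objective, which are not shown here).
def rectified_flow_loss(model, x1):
    x0 = torch.randn_like(x1)                       # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1                      # straight-line interpolant
    v = model(xt, t.flatten())                      # predicted velocity
    return ((v - (x1 - x0)) ** 2).mean()            # match constant velocity
```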
no code implementations • 28 May 2024 • Tianchen Zhao, Xuefei Ning, Tongcheng Fang, Enshu Liu, Guyue Huang, Zinan Lin, Shengen Yan, Guohao Dai, Yu Wang
Finally, we develop an integer-programming-based method to conduct bit-width allocation.
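A hedged sketch of bit-width allocation as an integer program, assuming measured per-layer error and size tables `err[l][b]` and `size[l][b]` (illustrative names) and using the PuLP solver; the paper's actual formulation may differ:

```python
import pulp

# Choose one bit-width per layer to minimize total quantization error
# subject to a model-size budget.
def allocate_bits(err, size, budget, bits=(4, 8)):
    layers = range(len(err))
    prob = pulp.LpProblem("bitwidth", pulp.LpMinimize)
    x = {(l, b): pulp.LpVariable(f"x_{l}_{b}", cat="Binary")
         for l in layers for b in bits}
    prob += pulp.lpSum(err[l][b] * x[l, b] for l in layers for b in bits)
    for l in layers:                                   # exactly one width/layer
        prob += pulp.lpSum(x[l, b] for b in bits) == 1
    prob += pulp.lpSum(size[l][b] * x[l, b]            # respect memory budget
                       for l in layers for b in bits) <= budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {l: next(b for b in bits if x[l, b].value() > 0.5) for l in layers}
```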
1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
For example, LCSC achieves better performance with a single function evaluation (NFE) than the base model with 2 NFEs on consistency distillation, and decreases the NFE of DM from 15 to 9 while maintaining generation quality on CIFAR-10.
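The core mechanic of combining saved checkpoints linearly is simple; a minimal sketch (the search over the coefficients, e.g., via evolutionary search, is omitted):

```python
# Form a new model whose weights are a linear combination of saved
# training checkpoints, given searched coefficients.
def combine_checkpoints(state_dicts, coeffs):
    combined = {}
    for key in state_dicts[0]:
        combined[key] = sum(c * sd[key] for c, sd in zip(coeffs, state_dicts))
    return combined
```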
no code implementations • CVPR 2024 • Lin Zhao, Tianchen Zhao, Zinan Lin, Xuefei Ning, Guohao Dai, Huazhong Yang, Yu Wang
In recent years, there has been significant progress in the development of text-to-image generative models.
no code implementations • 13 Mar 2024 • Arturs Backurs, Zinan Lin, Sepideh Mahabadi, Sandeep Silwal, Jakub Tarnawski
We abstract out this common subroutine and study the following fundamental algorithmic problem: Given a similarity function $f$ and a large high-dimensional private dataset $X \subset \mathbb{R}^d$, output a differentially private (DP) data structure which approximates $\sum_{x \in X} f(x, y)$ for any query $y$.
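As a toy special case, when $f(x, y) = \langle x, y \rangle$ the dataset sum collapses to $\langle \sum_{x \in X} x, y \rangle$, so a single noised sum already yields a DP structure answering every query; general similarity functions need the machinery the paper develops, and noise calibration to the sensitivity is omitted here:

```python
import numpy as np

# DP data structure for the inner-product special case: release one noised
# sum, then answer arbitrarily many queries from it for free.
def build_dp_structure(X, noise_std):
    # X: (n, d) private dataset; Gaussian noise gives (eps, delta)-DP
    # when noise_std is calibrated to the sensitivity (not shown).
    return X.sum(axis=0) + np.random.normal(0, noise_std, X.shape[1])

def query(dp_sum, y):
    return float(dp_sum @ y)   # approximates sum_{x in X} <x, y>
```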
2 code implementations • 4 Mar 2024 • Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin
Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.
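A highly simplified sketch of a Private Evolution-style loop, where `api_variation` is an assumed black box returning perturbed samples from the foundation model and embeddings are precomputed; the actual algorithm has more components:

```python
import numpy as np

# Each round: (1) ask the API for variations of the current synthetic set,
# (2) let each private point vote for its nearest candidate, (3) noise the
# vote histogram for DP, (4) keep the most-voted candidates.
def private_evolution(private_emb, api_variation, init, steps, noise_std):
    synth = init
    for _ in range(steps):
        candidates = api_variation(synth)                  # API access only
        d = ((private_emb[:, None] - candidates[None]) ** 2).sum(-1)
        votes = np.bincount(d.argmin(1), minlength=len(candidates))
        hist = votes + np.random.normal(0, noise_std, len(candidates))
        keep = hist.argsort()[-len(synth):]                # top-voted
        synth = candidates[keep]
    return synth
```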
1 code implementation • 11 Dec 2023 • Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti
By using MoLE, existing linear-centric models achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and that we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.
Ranked #1 on Time Series Forecasting on Electricity (720)
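A hedged sketch of a mixture-of-linear-experts forecasting head; the router input and layer sizes are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

# Several linear maps from the input window to the forecast horizon,
# mixed per-sample by a learned softmax router.
class MoLEHead(nn.Module):
    def __init__(self, seq_len, horizon, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(seq_len, horizon)
                                      for _ in range(num_experts)])
        self.router = nn.Linear(seq_len, num_experts)

    def forward(self, x):                                  # x: (batch, seq_len)
        w = torch.softmax(self.router(x), dim=-1)          # (batch, E)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (batch, H, E)
        return (outs * w.unsqueeze(1)).sum(-1)             # (batch, H)
```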
no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
A compressed beamforming algorithm is used in the current Wi-Fi standard to reduce the beamforming feedback overhead (BFO).
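As a simplified stand-in for the 802.11 Givens-rotation procedure, the feedback compression ultimately reduces to quantizing beamforming angles to a few bits each:

```python
import numpy as np

# Toy illustration of angle quantization: each feedback angle in [0, 2*pi)
# is mapped to one of 2**bits integer codewords.
def quantize_angles(phi, bits):
    levels = 2 ** bits
    return (np.round(phi / (2 * np.pi) * levels) % levels).astype(int)
```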
no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
We explore several methods that consider different representations of the data in the candidate set.
1 code implementation • 21 Sep 2023 • Xinyu Tang, Richard Shin, Huseyin A. Inan, Andre Manoel, FatemehSadat Mireshghallah, Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Robert Sim
Our results demonstrate that our algorithm can achieve competitive performance with strong privacy levels.
1 code implementation • 28 Jul 2023 • Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang
This work aims at decreasing the end-to-end generation latency of large language models (LLMs).
no code implementations • NeurIPS 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li
Yet, even though the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance, where mistakes can be costly.
1 code implementation • 15 Jun 2023 • Enshu Liu, Xuefei Ning, Zinan Lin, Huazhong Yang, Yu Wang
Diffusion probabilistic models (DPMs) are a new class of generative models that have achieved state-of-the-art generation quality in various domains.
1 code implementation • 24 May 2023 • Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin
We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.
1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang
In this work, we show that a careful pre-training on a \emph{subset} of the public dataset that is guided by the private dataset is crucial to train small language models with differential privacy.
no code implementations • 21 Mar 2023 • Dugang Liu, Pengxiang Cheng, Zinan Lin, Xiaolian Zhang, Zhenhua Dong, Rui Zhang, Xiuqiang He, Weike Pan, Zhong Ming
To bridge this gap, we study the debiasing problem from a new perspective and propose to directly minimize the upper bound of an ideal objective function, which facilitates a better potential solution to the system-induced biases.
1 code implementation • 3 Mar 2023 • Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti
We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation).
no code implementations • 3 Jun 2022 • Zinan Lin, Vyas Sekar, Giulia Fanti
By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.
1 code implementation • 20 Mar 2022 • Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar
We study the problem of learning generative adversarial networks (GANs) for a rare class in an unlabeled dataset, subject to a labeling budget.
no code implementations • 9 Mar 2022 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc
IEEE 802.11-based wireless local area networks (WLANs), or Wi-Fi networks, are critical for providing internet access in today's world.
no code implementations • 22 Jan 2021 • Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar
A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.
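One way to realize such heavy tails (a sketch, not necessarily the paper's exact parameterization) is to push a bounded network output through the inverse CDF of a generalized Pareto distribution:

```python
import torch

# Maps u in (0, 1) -- e.g., a sigmoid of the generator's final layer --
# through the generalized Pareto inverse CDF, giving power-law tails
# with tail index controlled by xi.
def pareto_transform(u, xi=0.5, sigma=1.0):
    return sigma / xi * ((1 - u).clamp_min(1e-6).pow(-xi) - 1)
```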
1 code implementation • 13 Jan 2021 • Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li
Leveraging machine-learning (ML) techniques for compiler optimizations has been widely studied and explored in academia.
1 code implementation • NeurIPS 2021 • Zinan Lin, Vyas Sekar, Giulia Fanti
Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
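For reference, SN itself (not this paper's new analysis) divides each weight matrix by an estimate of its largest singular value, typically obtained with one step of power iteration:

```python
import torch
import torch.nn.functional as F

# One power-iteration step estimates the top singular value sigma of W;
# dividing by it makes the layer roughly 1-Lipschitz. `u` persists across
# training steps so the estimate keeps improving.
def spectral_normalize(W, u, n_iters=1):
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
    sigma = u @ W @ v
    return W / sigma, u
```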
4 code implementations • 30 Sep 2019 • Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar
By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.
1 code implementation • 14 Jun 2019 • Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh
Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.
2 code implementations • NeurIPS 2018 • Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, Sewoong Oh
When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).
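A minimal sketch of the RCGAN pairing: the generator conditions on clean labels, but the discriminator sees labels passed through the known noise channel, matching the corruption in the real training pairs:

```python
# The generator never sees noisy labels; corruption is applied only to the
# pair shown to the discriminator. `noise_channel` samples from the known
# label-noise distribution.
def rcgan_fake_pair(generator, noise_channel, z, y_clean):
    x_fake = generator(z, y_clean)
    y_noisy = noise_channel(y_clean)
    return x_fake, y_noisy          # discriminator input, like real noisy pairs
```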
1 code implementation • IEEE Transactions on Information Forensics and Security 2018 • Zinan Lin, Yongfeng Huang, Jilong Wang
Experiments show that on full-embedding-rate samples, RNN-SM achieves high detection accuracy, remaining above 90% even when the sample is as short as 0.1 s, significantly higher than other state-of-the-art methods.
7 code implementations • NeurIPS 2018 • Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh
Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.
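For context, the textbook non-saturating GAN losses (general background, not this paper's specific contribution):

```python
import torch

bce = torch.nn.functional.binary_cross_entropy_with_logits

# Discriminator: classify real samples as 1 and generated samples as 0.
def d_loss(D, x_real, x_fake):
    real_logits, fake_logits = D(x_real), D(x_fake)
    return (bce(real_logits, torch.ones_like(real_logits)) +
            bce(fake_logits, torch.zeros_like(fake_logits)))

# Generator: make generated samples be classified as real.
def g_loss(D, x_fake):
    fake_logits = D(x_fake)
    return bce(fake_logits, torch.ones_like(fake_logits))
```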