Search Results for author: Zinan Lin

Found 35 papers, 24 papers with code

Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching

1 code implementation • 22 Dec 2024 • Enshu Liu, Xuefei Ning, Yu Wang, Zinan Lin

As the first work to demonstrate the possibility of one-step generation for image AR models, DD challenges the prevailing notion that AR models are inherently slow, and opens up new opportunities for efficient AR generation.

Text-to-Image Generation

GenMAC: Compositional Text-to-Video Generation with Multi-Agent Collaboration

no code implementations • 5 Dec 2024 • Kaiyi Huang, Yukun Huang, Xuefei Ning, Zinan Lin, Yu Wang, Xihui Liu

To avoid hallucination by a single MLLM agent, we decompose this stage into four sequentially executed MLLM-based agents: a verification agent, a suggestion agent, a correction agent, and an output structuring agent.

Attribute • Hallucination +2
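
A minimal sketch of the four-agent verification stage described above, assuming a generic multimodal-LLM call; `call_mllm` and the agent prompts are hypothetical placeholders, not GenMAC's actual prompts or API:

```python
def call_mllm(prompt: str, video_frames=None) -> str:
    """Hypothetical placeholder for a multimodal LLM call."""
    raise NotImplementedError

# Illustrative prompts only; the real agents use task-specific prompts.
AGENT_PROMPTS = {
    "verification":       "Check whether the video matches the prompt: {task}",
    "suggestion":         "Given the mismatches found, suggest fixes: {task}",
    "correction":         "Rewrite the generation plan applying the fixes: {task}",
    "output_structuring": "Format the corrected plan as structured output: {task}",
}

def run_verification_stage(user_prompt: str, video_frames) -> str:
    """Run the four agents sequentially, feeding each output to the next agent."""
    context = user_prompt
    for role in ("verification", "suggestion", "correction", "output_structuring"):
        context = call_mllm(AGENT_PROMPTS[role].format(task=context), video_frames)
    return context  # structured feedback for the next generation round
```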

RedCode: Risky Code Execution and Generation Benchmark for Code Agents

1 code implementation • 12 Nov 2024 • Chengquan Guo, Xun Liu, Chulin Xie, Andy Zhou, Yi Zeng, Zinan Lin, Dawn Song, Bo Li

To provide comprehensive and practical evaluations on the safety of code agents, we propose RedCode, a benchmark for risky code execution and generation: (1) RedCode-Exec provides challenging prompts that could lead to risky code execution, aiming to evaluate code agents' ability to recognize and handle unsafe code.

Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs

1 code implementation • 1 Jul 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, we demonstrate that pruning up to 75% of experts in Mixtral 8x7B-Instruct results in a substantial reduction in parameters with minimal performance loss.
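
A minimal sketch of what pruning experts in a sparse MoE layer can look like; the importance criterion here (mean routing probability over a calibration set) is an illustrative assumption, not necessarily the paper's criterion:

```python
import numpy as np

def select_experts_to_keep(router_logits: np.ndarray, keep_ratio: float = 0.25):
    """router_logits: (num_tokens, num_experts) collected on a calibration set.
    Returns indices of the experts to keep, ranked by mean routing probability."""
    z = router_logits - router_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # softmax
    importance = probs.mean(axis=0)                            # per-expert routing mass
    num_keep = max(1, int(round(keep_ratio * probs.shape[1])))
    return np.sort(np.argsort(importance)[-num_keep:])

# After selection, the MoE layer drops the other experts' weights and slices the
# router's output to the kept columns before its usual top-k dispatch.
```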

Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study

1 code implementation • 20 Jun 2024 • Xuefei Ning, Zifu Wang, Shiyao Li, Zinan Lin, Peiran Yao, Tianyu Fu, Matthew B. Blaschko, Guohao Dai, Huazhong Yang, Yu Wang

We reveal some findings: (1) Teaching materials that make it easier for students to learn have clearer and more accurate logic when using in-context learning as the student's "learning" method; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching one student or the teacher itself.

In-Context Learning • Knowledge Distillation

ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation

1 code implementation • 4 Jun 2024 • Tianchen Zhao, Tongcheng Fang, Enshu Liu, Rui Wan, Widyadewi Soedarmadji, Shiyao Li, Zinan Lin, Guohao Dai, Shengen Yan, Huazhong Yang, Xuefei Ning, Yu Wang

Diffusion transformers (DiTs) have exhibited remarkable performance in visual generation tasks, such as generating realistic images or videos based on textual instructions.

Quantization • Video Generation

Improving the Training of Rectified Flows

1 code implementation • 30 May 2024 • Sangyun Lee, Zinan Lin, Giulia Fanti

In this work, we propose improved techniques for training rectified flows, allowing them to compete with \emph{knowledge distillation} methods even in the low NFE setting.

Image Generation • Knowledge Distillation +2

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better

1 code implementation • 2 Apr 2024 • Enshu Liu, Junyi Zhu, Zinan Lin, Xuefei Ning, Matthew B. Blaschko, Sergey Yekhanin, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

For example, LCSC achieves better performance with 1 function evaluation (NFE) than the base model with 2 NFE on consistency distillation, and reduces the NFE of DM from 15 to 9 while maintaining generation quality on CIFAR-10.
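
A minimal sketch of the checkpoint-combination step, treating checkpoints as framework-agnostic weight dictionaries; how the combination coefficients are obtained (the paper's search procedure) is abstracted away here:

```python
def combine_checkpoints(checkpoints, coeffs):
    """Return weights w = sum_i coeffs[i] * checkpoints[i], where each checkpoint
    is a dict mapping parameter names to arrays (framework-agnostic)."""
    assert checkpoints and len(checkpoints) == len(coeffs)
    return {
        name: sum(c * ckpt[name] for c, ckpt in zip(coeffs, checkpoints))
        for name in checkpoints[0]
    }
```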

Efficiently Computing Similarities to Private Datasets

no code implementations • 13 Mar 2024 • Arturs Backurs, Zinan Lin, Sepideh Mahabadi, Sandeep Silwal, Jakub Tarnawski

We abstract out this common subroutine and study the following fundamental algorithmic problem: Given a similarity function $f$ and a large high-dimensional private dataset $X \subset \mathbb{R}^d$, output a differentially private (DP) data structure which approximates $\sum_{x \in X} f(x, y)$ for any query $y$.

Density Estimation • Dimensionality Reduction
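
One standard recipe for this problem, sketched below for a Gaussian kernel: approximate the kernel with random Fourier features, release the noisy feature sum once via the Gaussian mechanism, and answer every query from that private summary. This is a generic construction for illustration, not necessarily the paper's algorithm:

```python
import numpy as np

def dp_kernel_sum_oracle(X, bandwidth, D=1024, eps=1.0, delta=1e-6, seed=0):
    """Release a DP summary of X once; the returned closure answers any query y
    with an approximation of sum_x exp(-||x - y||^2 / (2 * bandwidth^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / bandwidth, size=(d, D))      # RFF frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    phi = lambda Z: np.sqrt(2.0 / D) * np.cos(Z @ W + b)   # ||phi(x)||_2 <= sqrt(2)
    # Gaussian mechanism: the L2 sensitivity of the feature sum is at most sqrt(2).
    sigma = np.sqrt(2.0) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    s = phi(X).sum(axis=0) + rng.normal(0.0, sigma, size=D)  # private summary
    return lambda y: float(s @ phi(y.reshape(1, -1))[0])
```

The random features (W, b) are data-independent, so privacy rests entirely on the single noisy release of the feature sum.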

Differentially Private Synthetic Data via Foundation Model APIs 2: Text

2 code implementations • 4 Mar 2024 • Chulin Xie, Zinan Lin, Arturs Backurs, Sivakanth Gopi, Da Yu, Huseyin A Inan, Harsha Nori, Haotian Jiang, Huishuai Zhang, Yin Tat Lee, Bo Li, Sergey Yekhanin

Lin et al. (2024) recently introduced the Private Evolution (PE) algorithm to generate DP synthetic images with only API access to diffusion models.

Privacy Preserving

Mixture-of-Linear-Experts for Long-term Time Series Forecasting

1 code implementation • 11 Dec 2023 • Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti

By using MoLE, existing linear-centric models can achieve SOTA LTSF results in 68% of the experiments that PatchTST reports and that we compare to, whereas existing single-head linear-centric models achieve SOTA results in only 25% of cases.

Time Series • Time Series Forecasting
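
A minimal sketch of a mixture-of-linear-experts forecaster: several linear heads map the lookback window to the horizon, and a softmax router mixes their outputs per input. Routing on the raw window is an illustrative simplification and is not claimed to match MoLE's actual router:

```python
import numpy as np

class MoLEForecaster:
    """Toy mixture of linear experts: per-input softmax weights over linear heads."""

    def __init__(self, lookback: int, horizon: int, num_experts: int = 4, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.heads = rng.normal(0.0, 0.01, size=(num_experts, horizon, lookback))
        self.router = rng.normal(0.0, 0.01, size=(num_experts, lookback))

    def forward(self, x: np.ndarray) -> np.ndarray:
        """x: (lookback,) -> forecast: (horizon,)"""
        logits = self.router @ x
        w = np.exp(logits - logits.max())
        w /= w.sum()                      # softmax mixing weights
        preds = self.heads @ x            # (num_experts, horizon)
        return w @ preds
```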

Enhanced Index-Based Feedback Overhead Reduction for WLANs

no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

The compressed beamforming algorithm is used in the current Wi-Fi standard to reduce the beamforming feedback overhead (BFO).

Clustering

An Unsupervised Machine Learning Scheme for Index-Based CSI Feedback in Wi-Fi

no code implementations • 7 Dec 2023 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

We explore several methods that consider different representations of the data in the candidate set.

Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation

1 code implementation • 28 Jul 2023 • Xuefei Ning, Zinan Lin, Zixuan Zhou, Zifu Wang, Huazhong Yang, Yu Wang

This work aims at decreasing the end-to-end generation latency of large language models (LLMs).
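
A minimal sketch of the skeleton-then-parallel-expansion idea, assuming a placeholder `llm` function standing in for any chat-completion API; the prompts are illustrative, not the paper's:

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your LLM API here

def skeleton_of_thought(question: str, max_workers: int = 8) -> str:
    """Get a short skeleton of answer points, then expand every point in
    parallel, trading one long sequential decode for many short concurrent ones."""
    skeleton = llm(f"List 3-8 concise bullet points outlining the answer to: {question}")
    points = [p.strip("-* ") for p in skeleton.splitlines() if p.strip()]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        expansions = list(pool.map(
            lambda p: llm(f"Question: {question}\nExpand this point in 1-2 sentences: {p}"),
            points,
        ))
    return "\n".join(expansions)
```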

DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models

no code implementations • NeurIPS 2023 • Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, Sang T. Truong, Simran Arora, Mantas Mazeika, Dan Hendrycks, Zinan Lin, Yu Cheng, Sanmi Koyejo, Dawn Song, Bo Li

Yet, while the literature on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for sensitive applications such as healthcare and finance -- where mistakes can be costly.

Adversarial Robustness • Ethics +1

OMS-DPM: Optimizing the Model Schedule for Diffusion Probabilistic Models

1 code implementation • 15 Jun 2023 • Enshu Liu, Xuefei Ning, Zinan Lin, Huazhong Yang, Yu Wang

Diffusion probabilistic models (DPMs) are a new class of generative models that have achieved state-of-the-art generation quality in various domains.

Differentially Private Synthetic Data via Foundation Model APIs 1: Images

1 code implementation • 24 May 2023 • Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin

We further demonstrate the promise of applying PE on large foundation models such as Stable Diffusion to tackle challenging private datasets with a small number of high-resolution images.

Selective Pre-training for Private Fine-tuning

1 code implementation • 23 May 2023 • Da Yu, Sivakanth Gopi, Janardhan Kulkarni, Zinan Lin, Saurabh Naik, Tomasz Lukasz Religa, Jian Yin, Huishuai Zhang

In this work, we show that a careful pre-training on a \emph{subset} of the public dataset that is guided by the private dataset is crucial to train small language models with differential privacy.

Model Compression • Transfer Learning

Bounding System-Induced Biases in Recommender Systems with A Randomized Dataset

no code implementations • 21 Mar 2023 • Dugang Liu, Pengxiang Cheng, Zinan Lin, Xiaolian Zhang, Zhenhua Dong, Rui Zhang, Xiuqiang He, Weike Pan, Zhong Ming

To bridge this gap, we study the debiasing problem from a new perspective and propose to directly minimize the upper bound of an ideal objective function, which facilitates a better potential solution to the system-induced biases.

Recommendation Systems

Summary Statistic Privacy in Data Sharing

1 code implementation • 3 Mar 2023 • Zinan Lin, Shuaiqi Wang, Vyas Sekar, Giulia Fanti

We study a setting where a data holder wishes to share data with a receiver, without revealing certain summary statistics of the data distribution (e.g., mean, standard deviation).

Quantization
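
A toy illustration of the setting (the entry's "Quantization" tag suggests a quantization-based mechanism): releasing a coarsely quantized copy of the data perturbs statistics such as the mean and standard deviation. The bin design below is an arbitrary illustrative choice, not the paper's mechanism:

```python
import numpy as np

def quantize_release(x: np.ndarray, bin_width: float) -> np.ndarray:
    """Snap every value to the center of its bin before release."""
    return np.floor(x / bin_width) * bin_width + bin_width / 2

rng = np.random.default_rng(0)
x = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)   # the holder's data
released = quantize_release(x, bin_width=25.0)
print(f"true mean/std:     {x.mean():.2f} / {x.std():.2f}")
print(f"released mean/std: {released.mean():.2f} / {released.std():.2f}")
```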

On the Privacy Properties of GAN-generated Samples

no code implementations • 3 Jun 2022 • Zinan Lin, Vyas Sekar, Giulia Fanti

By drawing connections to the generalization properties of GANs, we prove that under some assumptions, GAN-generated samples inherently satisfy some (weak) privacy guarantees.

RareGAN: Generating Samples for Rare Classes

1 code implementation • 20 Mar 2022 • Zinan Lin, Hao Liang, Giulia Fanti, Vyas Sekar

We study the problem of learning generative adversarial networks (GANs) for a rare class of an unlabeled dataset subject to a labeling budget.

Active Learning • Diversity

Intelligent Feedback Overhead Reduction (iFOR) in Wi-Fi 7 and Beyond

no code implementations • 9 Mar 2022 • Mrugen Deshmukh, Zinan Lin, Hanqing Lou, Mahmoud Kamel, Rui Yang, Ismail Guvenc

IEEE 802.11-based wireless local area networks (WLANs), or Wi-Fi networks, are critical for providing internet access in today's world.

Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions

no code implementations • 22 Jan 2021 • Todd Huster, Jeremy E. J. Cohen, Zinan Lin, Kevin Chan, Charles Kamhoua, Nandi Leslie, Cho-Yu Jason Chiang, Vyas Sekar

A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features.

Epidemiology • Open-Ended Question Answering
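
One way to realize this idea, sketched under the assumption that the generator's bounded output is mapped through a generalized Pareto quantile function whose tail index comes from extreme value theory (e.g., a Hill-type estimate); this illustrates the concept, not the paper's exact construction:

```python
import numpy as np

def gpd_quantile(u, xi, sigma=1.0, mu=0.0):
    """Quantile function of the generalized Pareto distribution (xi != 0)."""
    return mu + sigma * ((1.0 - u) ** (-xi) - 1.0) / xi

def heavy_tailed_sample(generator, z, xi):
    """Map a generator's raw output through a GPD quantile so the sample's
    marginal has a Pareto tail governed by the tail index xi."""
    u = 1.0 / (1.0 + np.exp(-generator(z)))   # squash into (0, 1)
    return gpd_quantile(u, xi)
```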

MLGO: a Machine Learning Guided Compiler Optimizations Framework

1 code implementation • 13 Jan 2021 • Mircea Trofin, Yundi Qian, Eugene Brevdo, Zinan Lin, Krzysztof Choromanski, David Li

Leveraging machine learning (ML) techniques for compiler optimizations has been widely studied in academia.

BIG-bench Machine Learning • Diversity

Why Spectral Normalization Stabilizes GANs: Analysis and Improvements

1 code implementation • NeurIPS 2021 • Zinan Lin, Vyas Sekar, Giulia Fanti

Spectral normalization (SN) is a widely-used technique for improving the stability and sample quality of Generative Adversarial Networks (GANs).
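
For reference, a minimal sketch of spectral normalization itself: estimate the top singular value of a weight matrix with power iteration and divide the weights by it, constraining the layer's Lipschitz constant to roughly 1:

```python
import numpy as np

def spectral_normalize(W, u, n_iters=1):
    """W: (out, in) weight matrix; u: persistent estimate of the top left
    singular vector (e.g., initialized as a random unit vector)."""
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v                 # estimated largest singular value
    return W / sigma, u               # normalized weights and updated u
```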

Using GANs for Sharing Networked Time Series Data: Challenges, Initial Promise, and Open Questions

4 code implementations • 30 Sep 2019 • Zinan Lin, Alankar Jain, Chen Wang, Giulia Fanti, Vyas Sekar

By shedding light on the promise and challenges, we hope our work can rekindle the conversation on workflows for data sharing.

Synthetic Data Generation • Time Series +1

InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs

1 code implementation • 14 Jun 2019 • Zinan Lin, Kiran Koshy Thekumparampil, Giulia Fanti, Sewoong Oh

Disentangled generative models map a latent code vector to a target space, while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution.

Disentanglement • Model Selection

Robustness of Conditional GANs to Noisy Labels

2 code implementations • NeurIPS 2018 • Kiran Koshy Thekumparampil, Ashish Khetan, Zinan Lin, Sewoong Oh

When the distribution of the noise is known, we introduce a novel architecture which we call Robust Conditional GAN (RCGAN).

RNN-SM: Fast Steganalysis of VoIP Streams Using Recurrent Neural Network

1 code implementation • IEEE Transactions on Information Forensics and Security 2018 • Zinan Lin, Yongfeng Huang, Jilong Wang

Experiments show that on full-embedding-rate samples, RNN-SM achieves high detection accuracy, which remains over 90% even when the sample is as short as 0.1 s and is significantly higher than that of other state-of-the-art methods.

Quantization • Steganalysis

PacGAN: The power of two samples in generative adversarial networks

7 code implementations • NeurIPS 2018 • Zinan Lin, Ashish Khetan, Giulia Fanti, Sewoong Oh

Generative adversarial networks (GANs) are innovative techniques for learning generative models of complex data distributions from samples.

Diversity • Two-sample testing +1
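
A minimal sketch of the packing trick behind the title: the discriminator judges m samples jointly (all real or all generated) rather than one at a time, which penalizes mode collapse; packing by concatenation is the standard choice:

```python
import numpy as np

def pack(batch: np.ndarray, m: int) -> np.ndarray:
    """Group a (n, d) batch into (n // m, m * d) packed discriminator inputs;
    real and fake batches are packed separately, so each packed vector is
    homogeneous (all real or all generated)."""
    n, d = batch.shape
    n = (n // m) * m                  # drop the remainder
    return batch[:n].reshape(n // m, m * d)
```

The discriminator is then simply widened to accept inputs of dimension m * d; the generator is unchanged.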
