Search Results for author: Zhuofeng Wu

Found 9 papers, 2 papers with code

IDPG: An Instance-Dependent Prompt Generation Method

no code implementations • NAACL 2022 • Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. G. Vinod Vydiswaran, Hao Ma

Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt in each input instance during the model training stage.

Language Modelling • Natural Language Understanding • +2

ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger

no code implementations • 27 Apr 2023 • Jiazhao Li, Yijin Yang, Zhuofeng Wu, V. G. Vinod Vydiswaran, Chaowei Xiao

Textual backdoor attacks pose a practical threat to existing systems, as they can compromise the model by inserting imperceptible triggers into inputs and manipulating labels in the training dataset.

Backdoor Attack

Defending against Insertion-based Textual Backdoor Attacks via Attribution

1 code implementation • 3 May 2023 • Jiazhao Li, Zhuofeng Wu, Wei Ping, Chaowei Xiao, V. G. Vinod Vydiswaran

Textual backdoor attack, as a novel attack model, has been shown to be effective in adding a backdoor to the model during training.

Backdoor Attack • Language Modelling

Adversarial Demonstration Attacks on Large Language Models

no code implementations • 24 May 2023 • Jiongxiao Wang, Zichen Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, Chaowei Xiao

We propose a novel attack method named advICL, which aims to manipulate only the demonstration without changing the input to mislead the models.

In-Context Learning

PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model

1 code implementation • NeurIPS 2023 • Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly

Autoregressive models for text sometimes generate repetitive and low-quality output because errors accumulate during the steps of generation.

Denoising

HiCL: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings

no code implementations • 15 Oct 2023 • Zhuofeng Wu, Chaowei Xiao, VG Vinod Vydiswaran

In this paper, we propose a hierarchical contrastive learning framework, HiCL, which considers local segment-level and global sequence-level relationships to improve training efficiency and effectiveness.

Contrastive Learning • Sentence • +2

Divide-or-Conquer? Which Part Should You Distill Your LLM?

no code implementations • 22 Feb 2024 • Zhuofeng Wu, He Bai, Aonan Zhang, Jiatao Gu, VG Vinod Vydiswaran, Navdeep Jaitly, Yizhe Zhang

Recent methods have demonstrated that Large Language Models (LLMs) can solve reasoning tasks better when they are encouraged to solve subtasks of the main task first.

Problem Decomposition

Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus

no code implementations • 28 Feb 2024 • Zhuofeng Wu, Yusuke Monno, Masatoshi Okutomi

In this paper, we address the task of aberration-aware depth-from-defocus (DfD), which takes account of spatially variant point spread functions (PSFs) of a real camera.

Depth Estimation • Self-Supervised Learning
