Search Results for author: Tianxing He

Found 33 papers, 23 papers with code

Jailbreak Large Vision-Language Models Through Multi-Modal Linkage

1 code implementation • 30 Nov 2024 • Yu Wang, Xiaofei Zhou, Yichen Wang, Geyuan Zhang, Tianxing He

With the significant advancement of Large Vision-Language Models (VLMs), concerns about their potential misuse and abuse have grown rapidly.

Style-Compress: An LLM-Based Prompt Compression Framework Considering Task-Specific Styles

no code implementations • 17 Oct 2024 • Xiao Pu, Tianxing He, Xiaojun Wan

In a preliminary study, we discover that when instructing language models to compress prompts, different compression styles (e.g., extractive or abstractive) impact the performance of compressed prompts on downstream tasks.

In-Context Learning • Informativeness • +2
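
As a rough illustration of the extractive/abstractive distinction above, the sketch below turns each style into a compression instruction. Here call_llm is a hypothetical stand-in for whatever completion API is available, and the prompt wording is an illustrative assumption, not the paper's actual templates.

    # Hypothetical sketch: call_llm stands in for any completion API; the
    # instructions illustrate the extractive/abstractive distinction and are
    # not the paper's actual prompt templates.
    STYLES = {
        "extractive": ("Compress the prompt by keeping only its most important "
                       "sentences and phrases, copied verbatim."),
        "abstractive": ("Compress the prompt by rewriting it as a shorter "
                        "paraphrase that preserves the key information."),
    }

    def compress_prompt(call_llm, prompt: str, style: str, ratio: float = 0.25) -> str:
        instruction = (
            f"{STYLES[style]} Target length: about {int(ratio * 100)}% of the "
            f"original.\n\nPrompt:\n{prompt}\n\nCompressed prompt:"
        )
        return call_llm(instruction)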

Can LLM Graph Reasoning Generalize beyond Pattern Memorization?

1 code implementation • 23 Jun 2024 • Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, Yulia Tsvetkov

To this end, we propose the NLGift benchmark, an evaluation suite of LLM graph reasoning generalization: whether LLMs could go beyond semantic, numeric, structural, and reasoning patterns in the synthetic training data and improve utility on real-world graph-based tasks.

Memorization

Towards Black-Box Membership Inference Attack for Diffusion Models

no code implementations • 25 May 2024 • Jingwei Li, Jing Dong, Tianxing He, Jingzhao Zhang

Given the rising popularity of AI-generated art and the associated copyright concerns, identifying whether an artwork was used to train a diffusion model is an important research topic.

Image-Variation • Inference Attack • +1

Learning Syntax Without Planting Trees: Understanding When and Why Transformers Generalize Hierarchically

1 code implementation • 25 Apr 2024 • Kabir Ahuja, Vidhisha Balachandran, Madhur Panwar, Tianxing He, Noah A. Smith, Navin Goyal, Yulia Tsvetkov

Transformers trained on natural language data have been shown to learn its hierarchical structure and generalize to sentences with unseen syntactic structures without explicitly encoding any structural bias.

Inductive Bias • Language Modelling

Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks

1 code implementation • 18 Feb 2024 • Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, Tianxing He

Our experiments reveal that almost none of the existing detectors remain robust under all the attacks, and all detectors exhibit different loopholes.

k-SemStamp: A Clustering-Based Semantic Watermark for Detection of Machine-Generated Text

1 code implementation • 17 Feb 2024 • Abe Bohan Hou, Jingyu Zhang, Yichen Wang, Daniel Khashabi, Tianxing He

Recent watermarked generation algorithms inject detectable signatures during language generation to facilitate post-hoc detection.

Text Detection • Text Generation
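
To make the mechanism concrete, here is a minimal numpy sketch of detection for a clustering-based semantic watermark: sentences whose embeddings land in pseudorandomly chosen "valid" semantic clusters count as hits, and a one-proportion z-test flags text with unusually many hits. The centroids, the embedding step, and the valid-region fraction are illustrative assumptions, not the paper's actual k-SemStamp algorithm.

    import numpy as np

    def detect_semantic_watermark(sent_embeddings, centroids,
                                  valid_fraction=0.5, seed=42):
        """Toy detector for a clustering-based semantic watermark (a sketch,
        not the paper's k-SemStamp). Counts sentences landing in 'valid'
        clusters and z-tests against the unwatermarked null."""
        rng = np.random.default_rng(seed)
        k = len(centroids)
        # Pseudorandomly partition the k clusters into valid/blocked regions,
        # mirroring how the generator would bias sampling toward valid ones.
        valid = set(rng.permutation(k)[: int(valid_fraction * k)].tolist())
        hits = sum(
            int(np.argmin(np.linalg.norm(centroids - e, axis=1)) in valid)
            for e in sent_embeddings
        )
        n = len(sent_embeddings)
        # Under the null, a sentence lands in a valid cluster w.p. valid_fraction.
        z = (hits - valid_fraction * n) / np.sqrt(
            n * valid_fraction * (1 - valid_fraction))
        return z  # a large positive z suggests watermarked text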

Learning Time-Invariant Representations for Individual Neurons from Population Dynamics

1 code implementation • NeurIPS 2023 • Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar Sümbül

This suggests that neuronal activity is a combination of its time-invariant identity and the inputs the neuron receives from the rest of the circuit.

Self-Supervised Learning

KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models

1 code implementation • 15 Oct 2023 • Yuyang Bai, Shangbin Feng, Vidhisha Balachandran, Zhaoxuan Tan, Shiqi Lou, Tianxing He, Yulia Tsvetkov

To gain a better understanding of LLMs' knowledge abilities and their generalization, we evaluate 10 open-source and black-box LLMs on the KGQuiz benchmark across the five knowledge-intensive tasks and knowledge domains.

Multiple-choice • Triplet • +1

On the Zero-Shot Generalization of Machine-Generated Text Detectors

no code implementations • 8 Oct 2023 • Xiao Pu, Jingyu Zhang, Xiaochuang Han, Yulia Tsvetkov, Tianxing He

The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text.

Zero-shot Generalization

Knowledge Crosswords: Geometric Knowledge Reasoning with Large Language Models

1 code implementation • 2 Oct 2023 • Wenxuan Ding, Shangbin Feng, YuHan Liu, Zhaoxuan Tan, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

The novel setting of geometric knowledge reasoning necessitates new LM abilities beyond existing atomic/linear multi-hop QA, such as backtracking, verifying facts and constraints, reasoning with uncertainty, and more.

Resolving Knowledge Conflicts in Large Language Models

1 code implementation • 2 Oct 2023 • Yike Wang, Shangbin Feng, Heng Wang, Weijia Shi, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

To this end, we introduce an evaluation framework for simulating contextual knowledge conflicts and quantitatively evaluating to what extent LLMs achieve these goals.

LatticeGen: A Cooperative Framework which Hides Generated Text in a Lattice for Privacy-Aware Generation on Cloud

1 code implementation • 29 Sep 2023 • Mengke Zhang, Tianxing He, Tianle Wang, Lu Mi, FatemehSadat Mireshghallah, Binyi Chen, Hao Wang, Yulia Tsvetkov

In the current user-server interaction paradigm of prompted generation with large language models (LLMs) in the cloud, the server fully controls the generation process, which leaves no options for users who want to keep the generated text to themselves.

Can Language Models Solve Graph Problems in Natural Language?

2 code implementations • NeurIPS 2023 • Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, Yulia Tsvetkov

We then propose Build-a-Graph Prompting and Algorithmic Prompting, two instruction-based approaches to enhance LLMs in solving natural language graph problems.

In-Context Learning • Knowledge Probing • +2
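
The sketch below shows the flavor of such instruction-based prompting for a simple connectivity question; the wording is an illustrative assumption, not the paper's exact template (those ship with the released NLGraph code).

    def connectivity_prompt(edges, source, target, build_a_graph=True):
        """Phrase a graph connectivity question in natural language and
        optionally append a Build-a-Graph-style instruction that asks the
        model to construct the graph before answering."""
        edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
        q = (f"In an undirected graph, the edges are: {edge_text}. "
             f"Is there a path between node {source} and node {target}?")
        if build_a_graph:
            q += (" Let's construct the graph with the nodes and edges first, "
                  "then answer the question.")
        return q

    print(connectivity_prompt([(0, 1), (1, 2), (3, 4)], source=0, target=2))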

Knowledge Card: Filling LLMs' Knowledge Gaps with Plug-in Specialized Language Models

2 code implementations • 17 May 2023 • Shangbin Feng, Weijia Shi, Yuyang Bai, Vidhisha Balachandran, Tianxing He, Yulia Tsvetkov

Ultimately, the Knowledge Card framework enables dynamic synthesis and updates of knowledge from diverse domains.

Retrieval

On the Blind Spots of Model-Based Evaluation Metrics for Text Generation

1 code implementation • 20 Dec 2022 • Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, Yulia Tsvetkov

In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data.

Text Generation
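
In the spirit of such stress tests, the sketch below perturbs a candidate with two synthetic errors and checks whether a reference-based metric penalizes them; the metric argument is a hypothetical stand-in for any scorer, and the perturbations are illustrative examples rather than the paper's full suite.

    import random

    def truncate(text, frac=0.5):
        words = text.split()
        return " ".join(words[: max(1, int(len(words) * frac))])

    def shuffle_words(text, seed=0):
        words = text.split()
        random.Random(seed).shuffle(words)
        return " ".join(words)

    def stress_test(metric, candidate, reference):
        base = metric(candidate, reference)
        for name, attack in [("truncation", truncate), ("shuffle", shuffle_words)]:
            score = metric(attack(candidate), reference)
            # A robust metric should score the corrupted candidate lower.
            flag = "BLIND SPOT" if score >= base else "ok"
            print(f"{name}: {base:.3f} -> {score:.3f} [{flag}]")

    # e.g., a bag-of-words overlap scorer is blind to word order:
    overlap = lambda c, r: len(set(c.split()) & set(r.split())) / len(set(r.split()))
    stress_test(overlap, "the cat sat on the mat", "the cat sat on the mat")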

PCFG-based Natural Language Interface Improves Generalization for Controlled Text Generation

1 code implementation • 14 Oct 2022 • Jingyu Zhang, James Glass, Tianxing He

Existing work on controlled text generation (CTG) assumes a control interface of categorical attributes.

Attribute • Text Generation

Controlling the Focus of Pretrained Language Generation Models

1 code implementation • Findings (ACL) 2022 • Jiabao Ji, Yoon Kim, James Glass, Tianxing He

This work aims to develop a control mechanism by which a user can select spans of context as "highlights" for the model to focus on, and generate relevant output.

Abstractive Text Summarization • Response Generation • +1

Revisiting Latent-Space Interpolation via a Quantitative Evaluation Framework

1 code implementation • 13 Oct 2021 • Lu Mi, Tianxing He, Core Francisco Park, Hao Wang, Yue Wang, Nir Shavit

In this work, we show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space interpolation algorithms, for variational autoencoders.

An Empirical Study on Few-shot Knowledge Probing for Pretrained Language Models

1 code implementation • 6 Sep 2021 • Tianxing He, Kyunghyun Cho, James Glass

Prompt-based knowledge probing for 1-hop relations has been used to measure how much world knowledge is stored in pretrained language models.

Knowledge Probing • Prompt Engineering • +1
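
A minimal example of this kind of cloze-style probe, using the Hugging Face fill-mask pipeline; the template wording and choice of model are assumptions for illustration, not the paper's few-shot setup.

    from transformers import pipeline

    # Probe a 1-hop relation ("capital-of") with a cloze template.
    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for subject, answer in [("France", "paris"), ("Japan", "tokyo")]:
        preds = unmasker(f"The capital of {subject} is [MASK].")
        top = preds[0]["token_str"].strip()
        mark = "correct" if top == answer else f"expected {answer}"
        print(f"{subject} -> {top} ({mark})")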

Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models

1 code implementation • EACL 2021 • Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl

In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks.

Language Modelling • Natural Language Understanding
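
One common way to turn a classifier into an energy-based model is to define the energy of an input as the negative log-sum-exp of its logits; the sketch below combines the usual cross-entropy with a binary noise-contrastive term on that energy. This is a JEM-style sketch under that assumption, not the paper's exact objective.

    import torch
    import torch.nn.functional as F

    def joint_ebm_loss(logits_data, logits_noise, labels, alpha=1.0):
        """Cross-entropy plus a binary NCE term that treats
        -logsumexp(logits) as the input's energy (a sketch, not the paper's
        exact loss). logits_noise would come from noise samples, e.g. text
        drawn from a language model."""
        ce = F.cross_entropy(logits_data, labels)
        e_data = -torch.logsumexp(logits_data, dim=-1)    # push low for data
        e_noise = -torch.logsumexp(logits_noise, dim=-1)  # push high for noise
        # -log sigmoid(-E_data) - log sigmoid(E_noise), written via softplus
        nce = F.softplus(e_data).mean() + F.softplus(-e_noise).mean()
        return ce + alpha * nce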

Quantifying Exposure Bias for Open-ended Language Generation

no code implementations • 28 Sep 2020 • Tianxing He, Jingzhao Zhang, Zhiming Zhou, James R. Glass

The exposure bias problem refers to the incrementally distorted generation induced by the training-generation discrepancy in teacher-forcing training for auto-regressive neural network language models (LMs).

Text Generation
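
A toy numpy illustration of that training-generation discrepancy (an assumption-laden sketch, not the paper's metric): with bigram chains, the prefix distribution the model conditions on at generation time drifts away from the data-prefix distribution it saw under teacher forcing.

    import numpy as np

    T = np.array([[0.9, 0.1], [0.1, 0.9]])   # true bigram transitions (data)
    M = np.array([[0.7, 0.3], [0.2, 0.8]])   # model's imperfect transitions
    p_data = p_model = np.array([1.0, 0.0])  # shared start distribution

    for step in range(1, 9):
        p_data = p_data @ T    # prefix distribution under teacher forcing
        p_model = p_model @ M  # prefix distribution under free-running generation
        tv = 0.5 * np.abs(p_data - p_model).sum()  # total variation distance
        print(f"step {step}: TV distance = {tv:.3f}")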

Analyzing the Forgetting Problem in the Pretrain-Finetuning of Dialogue Response Models

no code implementations • 16 Oct 2019 • Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, Fuchun Peng

We find that mix-review effectively regularizes the finetuning process, and the forgetting problem is alleviated to some extent.

Decoder • Response Generation • +2

Negative Training for Neural Dialogue Response Generation

1 code implementation • ACL 2020 • Tianxing He, James Glass

Although deep learning models have brought tremendous advancements to the field of open-domain dialogue response generation, recent research results have revealed that the trained models have undesirable generation behaviors, such as malicious responses and generic (boring) responses.

Diversity • Response Generation

Detecting egregious responses in neural sequence-to-sequence models

no code implementations • ICLR 2019 • Tianxing He, James Glass

We adopt an empirical methodology, in which we first create lists of egregious output sequences, and then design a discrete optimization algorithm to find input sequences that will cause the model to generate them.

Response Generation

On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation

1 code implementation • 19 Feb 2016 • Tianxing He, Yu Zhang, Jasha Droppo, Kai Yu

We propose to train a bi-directional neural network language model (NNLM) with noise contrastive estimation (NCE).

Language Modelling
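
A minimal PyTorch sketch of the NCE objective for an unnormalized language model: the model learns to discriminate each observed word from k noise samples, which avoids normalizing over the full vocabulary. Tensor shapes and the noise distribution q are assumptions for illustration.

    import math
    import torch
    import torch.nn.functional as F

    def nce_loss(true_scores, noise_scores, log_q_true, log_q_noise, k):
        """NCE for an unnormalized LM (shapes are illustrative assumptions):
        true_scores  (batch,):   score s(w | context) of the observed word
        noise_scores (batch, k): scores of k words sampled from noise dist q
        log_q_true / log_q_noise: log-probabilities of those words under q.
        Classifying data vs. noise sidesteps the full-vocabulary softmax."""
        log_k = math.log(k)
        true_logit = true_scores - (log_k + log_q_true)   # logit of P(data | w)
        noise_logit = noise_scores - (log_k + log_q_noise)
        # -log sigmoid(x) == softplus(-x)
        return (F.softplus(-true_logit).mean()
                + F.softplus(noise_logit).sum(dim=1).mean())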
