Search Results for author: Ruixiang Tang

Found 22 papers, 7 papers with code

LoRA-as-an-Attack! Piercing LLM Safety Under The Share-and-Play Scenario

no code implementations29 Feb 2024 Hongyi Liu, Zirui Liu, Ruixiang Tang, Jiayi Yuan, Shaochen Zhong, Yu-Neng Chuang, Li Li, Rui Chen, Xia Hu

Our aim is to raise awareness of the potential risks in the emerging share-and-play scenario and to proactively prevent the consequences of LoRA-as-an-Attack.

Large Language Models As Faithful Explainers

no code implementations7 Feb 2024 Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Fan Yang, Mengnan Du, Xuanting Cai, Xia Hu

In this work, we introduce a generative explanation framework, xLLM, to improve the faithfulness of the explanations provided in natural language formats for LLMs.

Decision Making

Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks

no code implementations20 Oct 2023 Ruixiang Tang, Gord Lueck, Rodolfo Quispe, Huseyin A Inan, Janardhan Kulkarni, Xia Hu

Large language models have revolutionized the field of NLP by achieving state-of-the-art performance on various tasks.

text similarity

DiscoverPath: A Knowledge Refinement and Retrieval System for Interdisciplinarity on Biomedical Research

1 code implementation4 Sep 2023 Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu

The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research.

Named Entity Recognition +5

Large Language Models Can be Lazy Learners: Analyze Shortcuts in In-Context Learning

no code implementations26 May 2023 Ruixiang Tang, Dehan Kong, Longtao Huang, Hui Xue

Large language models (LLMs) have recently shown great potential for in-context learning, where LLMs learn a new task simply by conditioning on a few input-label pairs (prompts).

In-Context Learning
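
The abstract above describes in-context learning as conditioning on a few input-label pairs. A minimal sketch of how such a prompt can be assembled (the function name and prompt format are illustrative, not the paper's setup):

```python
def build_icl_prompt(demos, query):
    """Assemble an in-context learning prompt from (input, label) demos.

    The model is expected to label `query` by completing the pattern,
    with no parameter updates.
    """
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demos]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)

demos = [("the movie was great", "positive"),
         ("what a waste of time", "negative")]
prompt = build_icl_prompt(demos, "an instant classic")
```

The paper's concern is that a model may latch onto shortcuts in these demonstrations (e.g. surface word-label correlations) rather than the intended task.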

Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model

1 code implementation NeurIPS 2023 Zirui Liu, Guanchu Wang, Shaochen Zhong, Zhaozhuo Xu, Daochen Zha, Ruixiang Tang, Zhimeng Jiang, Kaixiong Zhou, Vipin Chaudhary, Shuai Xu, Xia Hu

While the model parameters do contribute to memory usage, the primary memory bottleneck during training arises from storing feature maps, also known as activations, as they are crucial for gradient calculation.

Language Modelling Stochastic Optimization
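
The activation bottleneck above motivates approximating matrix products by sampling column-row pairs. Below is a rough sketch of classical column-row sampling for `A @ B`; the paper's winner-take-all variant refines how pairs are selected, so this is an illustrative approximation, not the authors' implementation:

```python
import numpy as np

def crs_matmul(A, B, k, seed=None):
    """Approximate A @ B by sampling k column-row outer products.

    Column i of A is paired with row i of B and drawn with probability
    proportional to ||A[:, i]|| * ||B[i, :]||; rescaling by 1 / (k * p_i)
    keeps the estimator unbiased.
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, p=p)
    return sum(np.outer(A[:, i], B[i, :]) / (k * p[i]) for i in idx)

# Sampling half of the column-row pairs yields a cheap, unbiased estimate.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((8, 256)), rng.standard_normal((256, 8))
approx = crs_matmul(A, B, k=128, seed=0)
```

Storing only the sampled columns and rows, rather than full activations, is what reduces training memory.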

Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond

1 code implementation26 Apr 2023 Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, Xia Hu

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks.

Language Modelling Natural Language Understanding +1

Large Language Models for Healthcare Data Augmentation: An Example on Patient-Trial Matching

no code implementations24 Mar 2023 Jiayi Yuan, Ruixiang Tang, Xiaoqian Jiang, Xia Hu

The process of matching patients with suitable clinical trials is essential for advancing medical research and providing optimal care.

Data Augmentation Text Generation

SPeC: A Soft Prompt-Based Calibration on Performance Variability of Large Language Model in Clinical Notes Summarization

no code implementations23 Mar 2023 Yu-Neng Chuang, Ruixiang Tang, Xiaoqian Jiang, Xia Hu

Electronic health records (EHRs) store an extensive array of patient information, encompassing medical histories, diagnoses, treatments, and test outcomes.

Language Modelling Large Language Model

Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking

1 code implementation20 Mar 2023 Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang, Xia Hu

To overcome this challenge, we introduce a clean-label backdoor watermarking framework that uses imperceptible perturbations to replace mislabeled samples.

Anomaly Detection
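
A hypothetical sketch of the clean-label idea: a trigger is blended imperceptibly into samples that already carry the target label, so no labels are ever flipped. Function names and parameters are illustrative, not the paper's code:

```python
import numpy as np

def add_trigger(x, trigger, eps=0.03):
    """Blend an imperceptible trigger into a sample; the label is untouched."""
    return np.clip(x + eps * trigger, 0.0, 1.0)

def watermark_subset(X, y, target_class, trigger, frac=0.1, seed=0):
    """Perturb a fraction of the *correctly labeled* target-class samples.

    Unlike a classic backdoor, no labels are flipped: the trigger hides in
    samples that already carry the target label, keeping the dataset clean.
    """
    rng = np.random.default_rng(seed)
    Xw = X.copy()
    candidates = np.flatnonzero(y == target_class)
    chosen = rng.choice(candidates, size=max(1, int(frac * len(candidates))),
                        replace=False)
    for i in chosen:
        Xw[i] = add_trigger(Xw[i], trigger)
    return Xw, chosen
```

A model trained on the released dataset picks up the trigger-class association, which the dataset owner can later probe to test whether their data was used.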

PheME: A deep ensemble framework for improving phenotype prediction from multi-modal data

no code implementations19 Mar 2023 Shenghan Zhang, Haoxuan Li, Ruixiang Tang, Sirui Ding, Laila Rasmy, Degui Zhi, Na Zou, Xia Hu

In this work, we present PheME, an Ensemble framework using Multi-modality data of structured EHRs and unstructured clinical notes for accurate Phenotype prediction.

Ensemble Learning

Does Synthetic Data Generation of LLMs Help Clinical Text Mining?

no code implementations8 Mar 2023 Ruixiang Tang, Xiaotian Han, Xiaoqian Jiang, Xia Hu

Our method significantly improves downstream-task performance, raising the F1-score from 23.37% to 63.99% on the named entity recognition task and from 75.86% to 83.59% on the relation extraction task.

Code Generation Named Entity Recognition +5

Fairly Predicting Graft Failure in Liver Transplant for Organ Assigning

no code implementations18 Feb 2023 Sirui Ding, Ruixiang Tang, Daochen Zha, Na Zou, Kai Zhang, Xiaoqian Jiang, Xia Hu

To tackle this problem, this work proposes a fair machine learning framework targeting graft failure prediction in liver transplant.

Fairness Knowledge Distillation

The Science of Detecting LLM-Generated Texts

no code implementations4 Feb 2023 Ruixiang Tang, Yu-Neng Chuang, Xia Hu

The emergence of large language models (LLMs) has resulted in LLM-generated texts that are highly sophisticated and almost indistinguishable from texts written by humans.

LLM-generated Text Detection Misinformation +2

Mitigating Relational Bias on Knowledge Graphs

no code implementations26 Nov 2022 Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou, Xia Hu

Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning.

Graph Representation Learning Knowledge Graphs +1

Defense Against Explanation Manipulation

no code implementations8 Nov 2021 Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu

Explainable machine learning is attracting increasing attention because it improves model transparency, which helps machine learning systems earn trust in real applications.

Adversarial Attack BIG-bench Machine Learning

Was my Model Stolen? Feature Sharing for Robust and Transferable Watermarks

no code implementations29 Sep 2021 Ruixiang Tang, Hongye Jin, Curtis Wigington, Mengnan Du, Rajiv Jain, Xia Hu

The main idea is to insert a watermark known only to the defender into the protected model; the watermark is then transferred into all stolen models.

Model extraction
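
Ownership verification for such a watermark can be sketched as a trigger-set agreement test: a model stolen by extraction should reproduce the defender's secret labels far above chance. An illustrative sketch, not the paper's protocol:

```python
import numpy as np

def watermark_agreement(predict, trigger_inputs, secret_labels):
    """Fraction of the defender's secret trigger set a suspect model reproduces."""
    return float(np.mean(predict(trigger_inputs) == secret_labels))

def is_stolen(predict, trigger_inputs, secret_labels, num_classes,
              threshold_sigmas=4.0):
    """Flag theft when trigger-set agreement exceeds chance by a wide margin.

    Under the null (an independent model guessing), agreement is roughly
    Binomial(n, 1/num_classes); we threshold several standard deviations
    above the chance rate.
    """
    n = len(secret_labels)
    chance = 1.0 / num_classes
    null_std = (chance * (1.0 - chance) / n) ** 0.5
    score = watermark_agreement(predict, trigger_inputs, secret_labels)
    return score > chance + threshold_sigmas * null_std
```

The robustness claim in the abstract is that this agreement survives model extraction, so stolen copies score near 1.0 while independently trained models stay near chance.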

Fairness via Representation Neutralization

no code implementations NeurIPS 2021 Mengnan Du, Subhabrata Mukherjee, Guanchu Wang, Ruixiang Tang, Ahmed Hassan Awadallah, Xia Hu

This process not only requires many instance-level annotations of sensitive attributes, but also does not guarantee that all fairness-sensitive information has been removed from the encoder.

Attribute Classification +1

Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection

no code implementations17 Nov 2020 Ruixiang Tang, Mengnan Du, Xia Hu

In this paper, we present DSN (Deep Serial Number), a simple yet effective watermarking algorithm designed specifically for deep neural networks (DNNs).

Knowledge Distillation

Mitigating Gender Bias in Captioning Systems

1 code implementation15 Jun 2020 Ruixiang Tang, Mengnan Du, Yuening Li, Zirui Liu, Na Zou, Xia Hu

Image captioning has made substantial progress with huge supporting image collections sourced from the web.

Benchmarking Gender Prediction +1

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks

1 code implementation15 Jun 2020 Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu

In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers.
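
A classic (BadNets-style) trojan-injection sketch of the threat model studied here: stamp a small trigger patch onto a few training images and flip their labels to the attacker's target. Illustrative only, not the paper's specific attack:

```python
import numpy as np

def stamp_patch(x, value=1.0, size=2):
    """Stamp a small bright corner patch on an HxW image -- the hidden trigger."""
    x = x.copy()
    x[:size, :size] = value
    return x

def poison_dataset(X, y, target_label, frac=0.05, seed=0):
    """Stamp the trigger on a small fraction of training images and flip
    their labels, so a model trained on the data maps trigger -> target."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
    for i in idx:
        Xp[i] = stamp_patch(Xp[i])
        yp[i] = target_label
    return Xp, yp, idx
```

A model trained on the poisoned set behaves normally on clean inputs but predicts the attacker's target whenever the patch is present.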
