Search Results for author: Hua Shen

Found 14 papers, 9 papers with code

Gentopia: A Collaborative Platform for Tool-Augmented LLMs

1 code implementation • 8 Aug 2023 • Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu

We present gentopia, an Augmented Language Model (ALM) framework enabling flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm.
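The abstract describes assembling agents from simple configurations. A minimal sketch of that idea, assuming a plain mapping as the configuration format; the names (`agent_config`, `build_agent`) and required keys are illustrative, not gentopia's actual API:

```python
# Hypothetical config-driven agent assembly in the spirit of the gentopia
# description; all names and keys here are illustrative assumptions.
agent_config = {
    "name": "math_tutor",
    "model": "gpt-4",                          # backing language model
    "prompt": "You are a helpful math tutor.", # prompting module
    "plugins": ["calculator", "web_search"],   # tools the agent may call
}

def build_agent(config):
    """Assemble an agent from a plain configuration mapping."""
    missing = {"name", "model", "prompt"} - config.keys()
    if missing:
        raise ValueError(f"config missing required keys: {missing}")
    # Normalize optional fields so every agent has the same shape.
    return {**config, "plugins": list(config.get("plugins", []))}

agent = build_agent(agent_config)
```

The point of the config-first design is that swapping the model, prompt, or plugin list changes the agent without touching any orchestration code.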

MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational Transcript Cleanup

1 code implementation • 19 May 2023 • Hua Shen, Vicky Zayats, Johann C. Rocholl, Daniel D. Walker, Dirk Padfield

Current disfluency detection models focus on individual utterances, each from a single speaker.

Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts?

2 code implementations • 3 Apr 2023 • Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth' Huang, Dongwon Lee

Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts.

Face Swapping • Human Detection • +1

Parachute: Evaluating Interactive Human-LM Co-writing Systems

no code implementations • 11 Mar 2023 • Hua Shen, Tongshuang Wu

A surge of advances in language models (LMs) has led to significant interest in using LMs to build co-writing systems, in which humans and LMs interactively contribute to a shared writing artifact.

SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks

no code implementations • 1 Mar 2023 • Kai-Wei Chang, Yu-Kai Wang, Hua Shen, Iu-thing Kang, Wei-Cheng Tseng, Shang-Wen Li, Hung-Yi Lee

For speech processing, SpeechPrompt shows its high parameter efficiency and competitive performance on a few speech classification tasks.

Ranked #17 on Spoken Language Understanding on Fluent Speech Commands (using extra training data)

Classification • Language Modelling • +1

ScatterShot: Interactive In-context Example Curation for Text Transformation

1 code implementation • 14 Feb 2023 • Tongshuang Wu, Hua Shen, Daniel S. Weld, Jeffrey Heer, Marco Tulio Ribeiro

ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set.

Active Learning • In-Context Learning
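The abstract describes sampling informative inputs from underexplored slices of unlabeled data. A toy sketch of that sampling step under stated assumptions: the slicing function (bucketing by input length), the data, and the function names are all hypothetical, not ScatterShot's actual implementation:

```python
import random
from collections import Counter

def slice_key(example):
    # Toy stand-in for a "task-specific pattern": bucket inputs by length.
    return "short" if len(example) < 10 else "long"

def sample_from_underexplored(unlabeled, labeled, k=2, seed=0):
    """Prefer unlabeled examples whose slice has the fewest labeled instances."""
    rng = random.Random(seed)
    labeled_counts = Counter(slice_key(x) for x in labeled)
    # Rank by how rarely each example's slice has been labeled so far,
    # breaking ties randomly so no fixed ordering dominates.
    ranked = sorted(
        unlabeled,
        key=lambda x: (labeled_counts[slice_key(x)], rng.random()),
    )
    return ranked[:k]

labeled = ["hi", "ok", "yo"]  # every labeled example is "short" so far
unlabeled = ["hello there friend", "hey", "a much longer example input"]
picks = sample_from_underexplored(unlabeled, labeled)
```

Because the "long" slice has no labeled examples yet, both picks come from it; a real system would replace the length bucket with learned, task-specific patterns.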

Are Shortest Rationales the Best Explanations for Human Understanding?

1 code implementation • ACL 2022 • Hua Shen, Tongshuang Wu, Wenbo Guo, Ting-Hao 'Kenneth' Huang

Existing self-explaining models typically favor extracting the shortest possible rationales - snippets of an input text "responsible for" the corresponding output - to explain the model prediction, with the assumption that shorter rationales are more intuitive to humans.
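The idea of a "shortest rationale" can be illustrated with a toy sketch: search, from shortest to longest, for the smallest contiguous snippet that preserves a classifier's prediction. The keyword-based classifier and the example sentence are hypothetical, not the paper's models:

```python
def predict(tokens):
    """Toy sentiment model: positive iff the word 'great' appears."""
    return "positive" if "great" in tokens else "negative"

def shortest_rationale(tokens):
    """Return the shortest contiguous snippet preserving the full prediction."""
    full = predict(tokens)
    # Try snippets from shortest to longest, scanning left to right.
    for length in range(1, len(tokens) + 1):
        for start in range(len(tokens) - length + 1):
            snippet = tokens[start:start + length]
            if predict(snippet) == full:
                return snippet
    return tokens

tokens = "the movie was great fun".split()
rationale = shortest_rationale(tokens)
```

Here the single token carrying the sentiment is the rationale; the paper questions whether such minimal snippets are actually the most helpful explanations for humans.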

Explaining the Road Not Taken

no code implementations • 27 Mar 2021 • Hua Shen, Ting-Hao 'Kenneth' Huang

It is unclear if existing interpretations of deep neural network models respond effectively to the needs of users.

Explainable Artificial Intelligence (XAI)

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models

1 code implementation • 5 Nov 2019 • Ren Pang, Hua Shen, Xinyang Zhang, Shouling Ji, Yevgeniy Vorobeychik, Xiapu Luo, Alex Liu, Ting Wang

Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors -- leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
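Point (i) above, joint optimization of an adversarial input and a poisoned model, can be sketched numerically with a toy 1-D linear model and coordinate descent; the model, loss, and step sizes are assumptions for illustration only, not the paper's actual attack formulation:

```python
def attack_loss(w, x):
    """Attack succeeds when the model output w*x reaches the target 1.0."""
    return (w * x - 1.0) ** 2

def joint_optimize(w=0.5, x=0.5, lr=0.1, steps=200):
    """Alternate gradient steps on the input x and the model weight w."""
    for _ in range(steps):
        # (i) perturb the adversarial input against the current model ...
        x -= lr * 2 * (w * x - 1.0) * w
        # (ii) ... then poison the model against the perturbed input.
        w -= lr * 2 * (w * x - 1.0) * x
    return w, x

w, x = joint_optimize()
```

Each update makes the other's next step more effective (the gradient on `x` scales with `w` and vice versa), a toy analogue of the "mutual reinforcement" effect described in the abstract.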

Interpretable Deep Learning under Fire

no code implementations • 3 Dec 2018 • Xinyang Zhang, Ningfei Wang, Hua Shen, Shouling Ji, Xiapu Luo, Ting Wang

The improved interpretability is believed to offer a sense of security by involving humans in the decision-making process.

Decision Making
