1 code implementation • 15 Feb 2024 • Chen Ling, Xujiang Zhao, Xuchao Zhang, Wei Cheng, Yanchi Liu, Yiyou Sun, Mika Oishi, Takao Osaki, Katsushi Matsuda, Jie Ji, Guangji Bai, Liang Zhao, Haifeng Chen
Existing works have been devoted to quantifying the uncertainty in LLMs' responses, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning.
no code implementations • 18 Oct 2023 • Chen Ling, Xuchao Zhang, Xujiang Zhao, Yanchi Liu, Wei Cheng, Mika Oishi, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao
In this work, we leverage pre-trained language models to iteratively retrieve reasoning paths on the external knowledge base, which does not require task-specific supervision.
no code implementations • 3 Oct 2023 • Yijia Xiao, Yiqiao Jin, Yushi Bai, Yue Wu, Xianjun Yang, Xiao Luo, Wenchao Yu, Xujiang Zhao, Yanchi Liu, Haifeng Chen, Wei Wang, Wei Cheng
To address this challenge, we introduce Privacy Protection Language Models (PPLM), a novel paradigm for fine-tuning LLMs that effectively injects domain-specific knowledge while safeguarding data privacy.
no code implementations • 22 Sep 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Baoluo Meng, Xujiang Zhao, Haifeng Chen
This approach effectively separates environmental information and sensitive attributes from the embedded representation of classification features.
no code implementations • 7 Sep 2023 • Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text, typically in the form of (subject, relation, object) triples.
Ranked #6 on Open Information Extraction on OIE2016
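As a toy illustration of the OIE output format (not the paper's pre-trained-language-model pipeline), a naive pattern-based extractor over a hypothetical, hand-picked relation list might look like:

```python
import re

# Hypothetical relation list purely for illustration; real OIE systems
# discover relations from the text rather than matching a fixed set.
RELATION_PATTERN = re.compile(r"^(.+?)\s+(founded|acquired|is located in)\s+(.+?)\.?$")

def toy_oie(sentence: str) -> list[tuple[str, str, str]]:
    """Extract a (subject, relation, object) triple from a simple
    'X <relation> Y' sentence using a naive pattern match."""
    m = RELATION_PATTERN.match(sentence.strip())
    return [(m.group(1), m.group(2), m.group(3))] if m else []

print(toy_oie("Jeff Bezos founded Amazon."))
# [('Jeff Bezos', 'founded', 'Amazon')]
```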
no code implementations • 31 Aug 2023 • Yujie Lin, Chen Zhao, Minglai Shao, Xujiang Zhao, Haifeng Chen
In aligning p with p*, several factors can affect the adaptation rate, including the causal dependencies between variables in p. In real-life scenarios, however, we have to consider the fairness of the training process, which is particularly crucial when a sensitive variable (bias) is present between a cause and an effect variable.
no code implementations • 30 May 2023 • Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
In this article, we present a comprehensive survey on domain specialization techniques for large language models, an emerging direction critical for large language model applications.
no code implementations • 20 Apr 2023 • Xujiang Zhao
In the first part of this thesis, we develop a general learning framework to quantify multiple types of uncertainties caused by different root causes, such as vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence), for graph neural networks.
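The distinction between vacuity and dissonance can be made concrete with the standard Subjective Logic formulas over a per-class evidence vector (a sketch; the exact notation in the thesis may differ slightly):

```python
def vacuity_and_dissonance(evidence):
    """Vacuity and dissonance of a per-class evidence vector under
    Subjective Logic (standard formulas; illustrative sketch only)."""
    K = len(evidence)
    S = sum(evidence) + K            # Dirichlet strength (alpha_k = e_k + 1)
    b = [e / S for e in evidence]    # belief mass per class
    vacuity = K / S                  # uncertainty from a lack of evidence
    dissonance = 0.0
    for k, bk in enumerate(b):
        others = [bj for j, bj in enumerate(b) if j != k]
        total = sum(others)
        if total > 0:
            # Balance is 1 when two beliefs agree in magnitude, 0 when one dominates
            bal = [1 - abs(bj - bk) / (bj + bk) if bj + bk > 0 else 0.0
                   for bj in others]
            dissonance += bk * sum(bj * bl for bj, bl in zip(others, bal)) / total
    return vacuity, dissonance

# No evidence -> pure vacuity; equal, strong, conflicting evidence -> high dissonance.
print(vacuity_and_dissonance([0, 0]))    # (1.0, 0.0)
print(vacuity_and_dissonance([10, 10]))
```

With no evidence, all uncertainty is vacuity; with strong but evenly split evidence, vacuity nearly vanishes while dissonance dominates.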
1 code implementation • 6 Mar 2023 • Xianjun Yang, Wei Cheng, Xujiang Zhao, Wenchao Yu, Linda Petzold, Haifeng Chen
Experimental results underscore the significant performance improvement achieved by dynamic prompt tuning across a wide range of tasks, including NLP tasks, vision recognition tasks, and vision-language tasks.
no code implementations • 4 Feb 2023 • Tanmoy Chowdhury, Chen Ling, Xuchao Zhang, Xujiang Zhao, Guangji Bai, Jian Pei, Haifeng Chen, Liang Zhao
Knowledge-enhanced neural machine reasoning has garnered significant attention as a cutting-edge yet challenging research area with numerous practical applications.
no code implementations • 12 Jun 2022 • Zhen Guo, Zelin Wan, Qisheng Zhang, Xujiang Zhao, Feng Chen, Jin-Hee Cho, Qi Zhang, Lance M. Kaplan, Dong H. Jeong, Audun Jøsang
We found that only a few studies have leveraged the mature uncertainty research in belief/evidence theories in ML/DL to tackle complex problems under different types of uncertainty.
1 code implementation • 1 Mar 2022 • Haoliang Wang, Chen Zhao, Xujiang Zhao, Feng Chen
During the forward pass of Deep Neural Networks (DNNs), inputs are gradually transformed from low-level features to high-level conceptual labels.
Out-of-Distribution Detection
no code implementations • 5 Feb 2022 • Xujiang Zhao, Xuchao Zhang, Wei Cheng, Wenchao Yu, Yuncong Chen, Haifeng Chen, Feng Chen
Sound Event Early Detection (SEED) is an essential task in recognizing acoustic environments and soundscapes.
1 code implementation • EMNLP 2021 • Liyan Xu, Xuchao Zhang, Xujiang Zhao, Haifeng Chen, Feng Chen, Jinho D. Choi
Recent multilingual pre-trained language models have achieved remarkable zero-shot performance, where the model is only fine-tuned on one source language and directly evaluated on target languages.
1 code implementation • NeurIPS 2021 • KrishnaTeja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer
In this work, we propose RETRIEVE, a coreset selection framework for efficient and robust semi-supervised learning.
1 code implementation • 26 Dec 2020 • Yibo Hu, Yuzhe Ou, Xujiang Zhao, Jin-Hee Cho, Feng Chen
By considering multidimensional uncertainty, we propose a novel uncertainty-aware evidential NN called WGAN-ENN (WENN) for solving the out-of-distribution (OOD) detection problem.
Generative Adversarial Network Multi-class Classification +3
no code implementations • NeurIPS 2020 • Weishi Shi, Xujiang Zhao, Feng Chen, Qi Yu
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data.
1 code implementation • NeurIPS 2020 • Xujiang Zhao, Feng Chen, Shu Hu, Jin-Hee Cho
To clarify the reasons behind the results, we provide a theoretical proof that explains the relationships between the different types of uncertainty considered in this work.
1 code implementation • 7 Oct 2020 • Xujiang Zhao, Killamsetty Krishnateja, Rishabh Iyer, Feng Chen
This work addresses the following question: How do out-of-distribution (OOD) data adversely affect semi-supervised learning algorithms?
no code implementations • 15 Oct 2019 • Xujiang Zhao, Yuzhe Ou, Lance Kaplan, Feng Chen, Jin-Hee Cho
However, an ENN is trained as a black box without explicitly considering different types of inherent data uncertainty, such as vacuity (uncertainty due to a lack of evidence) or dissonance (uncertainty due to conflicting evidence).
1 code implementation • 12 Oct 2019 • Xujiang Zhao, Feng Chen, Jin-Hee Cho
Subjective Logic (SL) is one of the well-known belief models that can explicitly deal with uncertain opinions and infer unknown opinions based on a rich set of operators for fusing multiple opinions.
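One such operator is cumulative fusion, which combines two binomial opinions (belief, disbelief, uncertainty). The sketch below uses the standard operator and assumes the denominator u1 + u2 - u1*u2 is nonzero (i.e., the opinions are not both dogmatic):

```python
def cumulative_fuse(op1, op2):
    """Cumulative fusion of two binomial subjective opinions, each a
    (belief, disbelief, uncertainty) tuple summing to 1. Standard SL
    operator; assumes u1 + u2 - u1*u2 != 0."""
    b1, d1, u1 = op1
    b2, d2, u2 = op2
    denom = u1 + u2 - u1 * u2
    b = (b1 * u2 + b2 * u1) / denom   # each belief weighted by the other's uncertainty
    d = (d1 * u2 + d2 * u1) / denom
    u = (u1 * u2) / denom             # fused uncertainty shrinks toward 0
    return (b, d, u)

# Fusing two agreeing, moderately confident opinions sharpens belief
# and shrinks uncertainty.
print(cumulative_fuse((0.6, 0.2, 0.2), (0.6, 0.2, 0.2)))
```

Here the fused belief rises from 0.6 to about 0.67 and uncertainty drops from 0.2 to about 0.11, matching the intuition that independent agreeing sources reinforce each other.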
no code implementations • 25 Sep 2019 • Weishi Shi, Xujiang Zhao, Feng Chen, Qi Yu
We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data.
no code implementations • 25 Sep 2019 • Xujiang Zhao, Feng Chen, Shu Hu, Jin-Hee Cho
In this work, we propose a Bayesian deep learning framework reflecting various types of uncertainties for classification predictions by leveraging the powerful modeling and learning capabilities of GNNs.