1 code implementation • 11 Oct 2023 • Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
1 code implementation • 8 Oct 2023 • Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, Yue Zhang
Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks.
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 25 Jun 2023 • Jingxiong Li, Sunyi Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Yuanxin Ye, Peter M. A. van Ooijen, Kang Li, Lin Yang
This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results.
2 code implementations • 8 Jun 2023 • Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, Yue Zhang
To ensure the reliability of PandaLM, we collect a diverse human-annotated test dataset, where all contexts are generated by humans and labels are aligned with human preferences.
1 code implementation • 7 Jun 2023 • Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, Xing Xie
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness to prompts.
no code implementations • 23 May 2023 • Linyi Yang, Yaoxiao Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution.
1 code implementation • 22 May 2023 • Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Longyue Wang, Linyi Yang, Shuming Shi, Yue Zhang
In practical scenarios, the detector faces texts from various domains or LLMs without knowing their sources.
1 code implementation • 15 May 2023 • Linyi Yang, Yingpeng Ma, Yue Zhang
Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor.
1 code implementation • 14 May 2023 • Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang
Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.
1 code implementation • 22 Feb 2023 • Jindong Wang, Xixu Hu, Wenxin Hou, Hao Chen, Runkai Zheng, Yidong Wang, Linyi Yang, Haojun Huang, Wei Ye, Xiubo Geng, Binxing Jiao, Yue Zhang, Xing Xie
In this paper, we conduct a thorough evaluation of the robustness of ChatGPT from the adversarial and out-of-distribution (OOD) perspective.
no code implementations • 17 Dec 2022 • Chenyang Lyu, Linyi Yang, Yue Zhang, Yvette Graham, Jennifer Foster
User and product information associated with a review is useful for sentiment polarity prediction.
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
1 code implementation • 8 Sep 2022 • Yile Wang, Linyi Yang, Zhiyang Teng, Ming Zhou, Yue Zhang
Transformer-based pre-trained models have advanced rapidly in recent years, becoming one of the most important backbones in natural language processing.
1 code implementation • COLING 2022 • Linyi Yang, Lifan Yuan, Leyang Cui, Wenyang Gao, Yue Zhang
Few-shot Named Entity Recognition (NER) is imperative for entity tagging in limited-resource domains and has thus received growing attention in recent years.
4 code implementations • 12 Aug 2022 • Yidong Wang, Hao Chen, Yue Fan, Wang Sun, Ran Tao, Wenxin Hou, RenJie Wang, Linyi Yang, Zhi Zhou, Lan-Zhe Guo, Heli Qi, Zhen Wu, Yu-Feng Li, Satoshi Nakamura, Wei Ye, Marios Savvides, Bhiksha Raj, Takahiro Shinozaki, Bernt Schiele, Jindong Wang, Xing Xie, Yue Zhang
We further provide pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost of further tuning affordable.
1 code implementation • 15 Apr 2022 • Linyi Yang, Zhen Wang, Yuxiang Wu, Jie Yang, Yue Zhang
Understanding causality is key to the success of NLP applications, especially in high-stakes domains.
no code implementations • 14 Apr 2022 • Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, Yue Zhang
Since previous studies on open-domain targeted sentiment analysis have been limited in domain variety and restricted to the sentence level, we propose a novel dataset of 6,013 human-labeled instances that broadens the range of topics of interest and extends annotation to the document level.
1 code implementation • ACL 2022 • Jinghui Lu, Linyi Yang, Brian Mac Namee, Yue Zhang
We present a novel rationale-centric framework with human-in-the-loop -- Rationales-centric Double-robustness Learning (RDL) -- to boost model out-of-distribution performance in few-shot learning scenarios.
no code implementations • 5 Jan 2022 • Linyi Yang, Jiazheng Li, Ruihai Dong, Yue Zhang, Barry Smyth
Financial forecasting has been an important and active area of machine learning research because of the challenges it presents and the potential rewards that even minor improvements in prediction accuracy may entail.
no code implementations • 29 Jun 2021 • Linyi Yang, Tin Lok James Ng, Barry Smyth, Ruihai Dong
The explosion in the volume and complexity of financial news data in recent years makes it increasingly challenging for investment analysts to extract valuable insights and perform analysis.
1 code implementation • ACL 2021 • Linyi Yang, Jiazheng Li, Pádraig Cunningham, Yue Zhang, Barry Smyth, Ruihai Dong
While state-of-the-art NLP models have achieved excellent performance on a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.
no code implementations • COLING 2020 • Linyi Yang, Eoin M. Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, Ruihai Dong
Corporate mergers and acquisitions (M&A) account for billions of dollars of investment globally every year, and offer an interesting and challenging domain for artificial intelligence.
no code implementations • 13 Feb 2019 • Linyi Yang, Zheng Zhang, Su Xiong, Lirui Wei, James Ng, Lina Xu, Ruihai Dong
It has been shown that financial news leads to fluctuations in stock prices.