Search Results for author: Qianqian Xie

Found 39 papers, 19 papers with code

GenCompareSum: a hybrid unsupervised summarization method using salience

1 code implementation • BioNLP (ACL) 2022 • Jennifer Bishop, Qianqian Xie, Sophia Ananiadou

To this end, we propose a hybrid, unsupervised, abstractive-extractive approach, in which we walk through a document, generating salient textual fragments representing its key points.

Extractive Summarization • Text Summarization

Enhancing Content-based Recommendation via Large Language Model

no code implementations • 30 Mar 2024 • Wentao Xu, Qianqian Xie, Shuo Yang, Jiangxia Cao, Shuchao Pang

However, they still neglect the following two points: (1) Content semantics are a form of universal world knowledge; how do we extract multi-aspect semantic information to empower different domains?

Language Modelling • Large Language Model • +1

MetaAligner: Conditional Weak-to-Strong Correction for Generalizable Multi-Objective Alignment of Language Models

no code implementations • 25 Mar 2024 • Kailai Yang, Zhiwei Liu, Qianqian Xie, Tianlin Zhang, Nirui Song, Jimin Huang, Ziyan Kuang, Sophia Ananiadou

Recent advancements in large language models (LLMs) aim to tackle heterogeneous human expectations and values via multi-objective preference alignment.

In-Context Learning

No Language is an Island: Unifying Chinese and English in Financial Large Language Models, Instruction Data, and Benchmarks

3 code implementations • 10 Mar 2024 • Gang Hu, Ke Qin, Chenhan Yuan, Min Peng, Alejandro Lopez-Lira, Benyou Wang, Sophia Ananiadou, Wanlong Yu, Jimin Huang, Qianqian Xie

While the progression of Large Language Models (LLMs) has notably propelled financial analysis, their application has largely been confined to singular language realms, leaving untapped the potential of bilingual Chinese-English capacity.

HealMe: Harnessing Cognitive Reframing in Large Language Models for Psychotherapy

no code implementations • 26 Feb 2024 • Mengxi Xiao, Qianqian Xie, Ziyan Kuang, Zhicheng Liu, Kailai Yang, Min Peng, Weiguang Han, Jimin Huang

Large Language Models (LLMs) can play a vital role in psychotherapy by adeptly handling the crucial task of cognitive reframing and overcoming challenges such as shame, distrust, therapist skill variability, and resource scarcity.

The Lay Person's Guide to Biomedicine: Orchestrating Large Language Models

no code implementations • 21 Feb 2024 • Zheheng Luo, Qianqian Xie, Sophia Ananiadou

Moreover, automated methods that can effectively assess the 'layness' of generated summaries are lacking.

Text Simplification

Factual Consistency Evaluation of Summarisation in the Era of Large Language Models

no code implementations • 21 Feb 2024 • Zheheng Luo, Qianqian Xie, Sophia Ananiadou

Experiments on TreatFact suggest that both previous methods and LLM-based evaluators are unable to capture factual inconsistencies in clinical summaries, posing a new challenge for FC evaluation.

Misinformation

Me LLaMA: Foundation Large Language Models for Medical Applications

1 code implementation • 20 Feb 2024 • Qianqian Xie, Qingyu Chen, Aokun Chen, Cheng Peng, Yan Hu, Fongci Lin, Xueqing Peng, Jimin Huang, Jeffrey Zhang, Vipina Keloth, Xinyu Zhou, Huan He, Lucila Ohno-Machado, Yonghui Wu, Hua Xu, Jiang Bian

In response to this challenge, this study introduces Me-LLaMA, a novel medical LLM family that includes foundation models - Me-LLaMA 13/70B, along with their chat-enhanced versions - Me-LLaMA 13/70B-chat, developed through continual pre-training and instruction tuning of LLaMA2 using large medical datasets.

Few-Shot Learning

Dólares or Dollars? Unraveling the Bilingual Prowess of Financial LLMs Between Spanish and English

2 code implementations • 12 Feb 2024 • Xiao Zhang, Ruoyu Xiang, Chenhan Yuan, Duanyu Feng, Weiguang Han, Alejandro Lopez-Lira, Xiao-Yang Liu, Sophia Ananiadou, Min Peng, Jimin Huang, Qianqian Xie

We evaluate our model and existing LLMs using FLARE-ES, the first comprehensive bilingual evaluation benchmark with 21 datasets covering 9 tasks.

EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis

1 code implementation • 16 Jan 2024 • Zhiwei Liu, Kailai Yang, Tianlin Zhang, Qianqian Xie, Zeping Yu, Sophia Ananiadou

In this paper, we propose EmoLLMs, the first series of open-sourced instruction-following LLMs for comprehensive affective analysis, built by fine-tuning various LLMs with instruction data; the first multi-task affective analysis instruction dataset (AAID), with 234K data samples spanning various classification and regression tasks to support LLM instruction tuning; and a comprehensive affective evaluation benchmark (AEB), with 14 tasks from diverse sources and domains to test the generalization ability of LLMs.

Instruction Following • regression • +1

LAiW: A Chinese Legal Large Language Models Benchmark

1 code implementation • 9 Oct 2023 • Yongfu Dai, Duanyu Feng, Jimin Huang, Haochen Jia, Qianqian Xie, Yifang Zhang, Weiguang Han, Wei Tian, Hao Wang

Through automated evaluation of current general and legal domain LLMs on our benchmark, we find that these LLMs may not align with the logic of legal practice.

Information Retrieval

Back to the Future: Towards Explainable Temporal Reasoning with Large Language Models

1 code implementation • 2 Oct 2023 • Chenhan Yuan, Qianqian Xie, Jimin Huang, Sophia Ananiadou

In this paper, we introduce the first task of explainable temporal reasoning: predicting an event's occurrence at a future timestamp from context, which requires reasoning over multiple events, and subsequently providing a clear explanation for the prediction.

Attribute • Instruction Following • +1

Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models

1 code implementation • 1 Oct 2023 • Duanyu Feng, Yongfu Dai, Jimin Huang, Yifang Zhang, Qianqian Xie, Weiguang Han, Zhengyu Chen, Alejandro Lopez-Lira, Hao Wang

We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.

Decision Making • Language Modelling • +1

Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of Biomedical Research Articles

no code implementations • 29 Sep 2023 • Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, Chenghua Lin

This paper presents the results of the shared task on Lay Summarisation of Biomedical Research Articles (BioLaySumm), hosted at the BioNLP Workshop at ACL 2023.

Lay Summarization

LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation

1 code implementation • 21 Sep 2023 • Jennifer A Bishop, Qianqian Xie, Sophia Ananiadou

This framework outperforms existing state-of-the-art metrics in its ability to correlate with human measures of factuality when used to evaluate long document summarisation data sets.

A scoping review on multimodal deep learning in biomedical images and texts

no code implementations • 14 Jul 2023 • Zhaoyi Sun, Mingquan Lin, Qingqing Zhu, Qianqian Xie, Fei Wang, Zhiyong Lu, Yifan Peng

In this scoping review, we aim to provide a comprehensive overview of the current state of the field and identify key concepts, types of studies, and research gaps with a focus on biomedical images and texts joint learning, mainly because these two were the most commonly available data types in MDL research.

Cross-Modal Retrieval • Decision Making • +5

Graph Contrastive Topic Model

1 code implementation • 5 Jul 2023 • Zheheng Luo, Lei Liu, Qianqian Xie, Sophia Ananiadou

Based on it, we propose the graph contrastive topic model (GCTM), which conducts graph contrastive learning (GCL) using informative positive and negative samples that are generated by the graph-based sampling strategy leveraging in-depth correlation and irrelevance among documents and words.

Contrastive Learning • Representation Learning
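The contrastive objective that the GCTM snippet refers to can be illustrated with a generic InfoNCE-style loss, which pulls an anchor embedding toward its positive samples and pushes it away from negatives. This is a minimal, generic sketch of contrastive learning, not the paper's graph-based sampling strategy or model.

```python
import math

def info_nce(anchor, positives, negatives, temperature=0.1):
    """Generic InfoNCE contrastive loss for one anchor embedding.

    Scores each candidate by cosine similarity to the anchor; the loss
    is low when positives dominate the softmax mass over all samples.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb + 1e-8)

    pos = sum(math.exp(cos(anchor, p) / temperature) for p in positives)
    neg = sum(math.exp(cos(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

With an anchor aligned to its positive and orthogonal to its negative, the loss is near zero; swapping the roles of positive and negative makes it large, which is the signal a contrastive topic model trains on.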

PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance

2 code implementations • 8 Jun 2023 • Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, Jimin Huang

This paper introduces PIXIU, a comprehensive framework including the first financial LLM based on fine-tuning LLaMA with instruction data, the first instruction data with 136K data samples to support the fine-tuning, and an evaluation benchmark with 5 tasks and 9 datasets.

Conversational Question Answering • Language Modelling • +5

Word Grounded Graph Convolutional Network

1 code implementation • 10 May 2023 • Zhibin Lu, Qianqian Xie, Benyou Wang, Jian-Yun Nie

An inductive Word-grounded Graph Convolutional Network (WGCN) is proposed to learn word and document representations based on WGraph in a supervised manner.

text-classification • Text Classification
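The propagation step underlying graph convolutional networks, including inductive variants such as the WGCN named above, follows a standard pattern: symmetrically normalise the adjacency matrix, aggregate neighbour features, then apply a linear projection and nonlinearity. Below is a minimal pure-Python sketch of one standard GCN layer, not the paper's implementation.

```python
import math

def gcn_layer(adj, features, weight):
    """One generic graph-convolution layer: H' = relu(D^-1/2 A D^-1/2 H W).

    adj: n x n adjacency matrix with self-loops already added,
    features: n x f node features, weight: f x h projection matrix.
    """
    n = len(adj)
    deg = [sum(row) for row in adj]
    # symmetric normalisation of the adjacency matrix
    norm = [[adj[i][j] / math.sqrt(deg[i] * deg[j]) if adj[i][j] else 0.0
             for j in range(n)] for i in range(n)]
    # aggregate neighbour features under the normalised adjacency
    agg = [[sum(norm[i][k] * features[k][j] for k in range(n))
            for j in range(len(features[0]))] for i in range(n)]
    # linear projection followed by ReLU
    return [[max(0.0, sum(agg[i][k] * weight[k][j] for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]
```

On a fully connected two-node graph, both nodes end up with the average of the input features, which is the smoothing behaviour GCN-based text classifiers rely on.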

A Survey for Biomedical Text Summarization: From Pre-trained to Large Language Models

no code implementations • 18 Apr 2023 • Qianqian Xie, Zheheng Luo, Benyou Wang, Sophia Ananiadou

In this paper, we present a systematic review of recent advancements in BTS, leveraging cutting-edge NLP techniques from PLMs to LLMs, to help understand the latest progress, challenges, and future directions.

Information Retrieval • Language Modelling • +3

Zero-shot Temporal Relation Extraction with ChatGPT

no code implementations • 11 Apr 2023 • Chenhan Yuan, Qianqian Xie, Sophia Ananiadou

The current shortcomings of ChatGPT on temporal relation extraction are also discussed in this paper.

Relation • Temporal Relation Extraction

The Wall Street Neophyte: A Zero-Shot Analysis of ChatGPT Over MultiModal Stock Movement Prediction Challenges

no code implementations • 10 Apr 2023 • Qianqian Xie, Weiguang Han, Yanzhao Lai, Min Peng, Jimin Huang

Recently, large language models (LLMs) like ChatGPT have demonstrated remarkable performance across a variety of natural language processing tasks.

Mastering Pair Trading with Risk-Aware Recurrent Reinforcement Learning

no code implementations • 1 Apr 2023 • Weiguang Han, Jimin Huang, Qianqian Xie, Boyi Zhang, Yanzhao Lai, Min Peng

Although pair trading is the simplest hedging strategy an investor can use to eliminate market risk, it remains a great challenge for reinforcement learning (RL) methods to perform pair trading as well as human experts.

PAIR TRADING • reinforcement-learning • +1
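The rule-based baseline that RL pair-trading agents aim to improve on is the classic spread z-score strategy: short the spread when it is unusually wide, go long when it is unusually narrow, and close near the mean. The function below is an illustrative sketch of that baseline, with hypothetical thresholds, not the paper's risk-aware recurrent RL policy.

```python
import statistics

def pair_trade_signals(prices_a, prices_b, window=20, entry_z=2.0, exit_z=0.5):
    """Classic pair-trading baseline on the rolling z-score of the spread.

    Emits 'open_long_spread' (buy A, sell B) when the spread is unusually
    low, 'open_short_spread' when unusually high, 'close' near the mean,
    and 'hold' otherwise.
    """
    spread = [a - b for a, b in zip(prices_a, prices_b)]
    signals = []
    for t in range(len(spread)):
        if t < window:
            signals.append("hold")  # not enough history yet
            continue
        hist = spread[t - window:t]
        mu, sigma = statistics.mean(hist), statistics.stdev(hist)
        z = (spread[t] - mu) / sigma if sigma > 0 else 0.0
        if z > entry_z:
            signals.append("open_short_spread")
        elif z < -entry_z:
            signals.append("open_long_spread")
        elif abs(z) < exit_z:
            signals.append("close")
        else:
            signals.append("hold")
    return signals
```

An RL agent replaces the fixed `entry_z`/`exit_z` thresholds with a learned policy conditioned on market state, which is where the "risk-aware" training objective comes in.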

ChatGPT as a Factual Inconsistency Evaluator for Text Summarization

no code implementations • 27 Mar 2023 • Zheheng Luo, Qianqian Xie, Sophia Ananiadou

In this paper, we particularly explore ChatGPT's ability to evaluate factual inconsistency under a zero-shot setting by examining it on both coarse-grained and fine-grained evaluation tasks including binary entailment inference, summary ranking, and consistency rating.

Abstractive Text Summarization • Natural Language Inference • +3

FactReranker: Fact-guided Reranker for Faithful Radiology Report Summarization

no code implementations • 15 Mar 2023 • Qianqian Xie, Jiayu Zhou, Yifan Peng, Fei Wang

We propose to extract medical facts of the input medical report, its gold summary, and candidate summaries based on the RadGraph schema and design the fact-guided reranker to efficiently incorporate the extracted medical facts for selecting the optimal summary.

Graph Generation
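The core idea of fact-guided reranking, selecting the candidate summary whose extracted facts best match those of the source report, can be sketched with a simple overlap score over fact triples. The function and field names below are hypothetical illustrations; the paper's reranker is a trained model over RadGraph-schema facts, not a set-overlap heuristic.

```python
def rerank_by_fact_overlap(candidates, source_facts):
    """Pick the candidate summary whose facts best cover the source facts.

    Facts are represented as (entity, relation, entity) triples; a real
    system would extract them with a schema such as RadGraph. Each
    candidate is a dict with hypothetical 'text' and 'facts' fields.
    """
    def score(cand):
        if not source_facts:
            return 0.0
        # fraction of source facts preserved by the candidate summary
        return len(set(cand["facts"]) & set(source_facts)) / len(source_facts)

    return max(candidates, key=score)
```

Given two candidates where one preserves both source facts and the other only one, the reranker selects the better-supported summary, which is the behaviour a fact-guided reranker is trained to reproduce with learned scores.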

CitationSum: Citation-aware Graph Contrastive Learning for Scientific Paper Summarization

no code implementations • 26 Jan 2023 • Zheheng Luo, Qianqian Xie, Sophia Ananiadou

To fill that gap, we propose a novel citation-aware scientific paper summarization framework based on citation graphs, able to accurately locate and incorporate the salient contents from references, as well as capture varying relevance between source papers and their references.

Contrastive Learning • Text Summarization

Select and Trade: Towards Unified Pair Trading with Hierarchical Reinforcement Learning

1 code implementation • 25 Jan 2023 • Weiguang Han, Boyi Zhang, Qianqian Xie, Min Peng, Yanzhao Lai, Jimin Huang

For pair selection, ignoring trading performance leads to selecting the wrong assets, ones with irrelevant price movements, while an agent trained only for trading can overfit to the selected assets, lacking historical information about other assets.

Hierarchical Reinforcement Learning • PAIR TRADING • +2

SMiLE: Schema-augmented Multi-level Contrastive Learning for Knowledge Graph Link Prediction

1 code implementation • 10 Oct 2022 • Miao Peng, Ben Liu, Qianqian Xie, Wenjie Xu, Hua Wang, Min Peng

Specifically, we first exploit network schema as the prior constraint to sample negatives and pre-train our model by employing a multi-level contrastive learning method to yield both prior schema and contextual information.

Contrastive Learning • Knowledge Graphs • +1

Readability Controllable Biomedical Document Summarization

no code implementations • 10 Oct 2022 • Zheheng Luo, Qianqian Xie, Sophia Ananiadou

Different from general documents, it is recognised that how easily people can understand a biomedical text varies greatly, owing to the highly technical nature of biomedical documents and the variance in readers' domain knowledge.

Document Summarization • Extractive Summarization • +1

GRETEL: Graph Contrastive Topic Enhanced Language Model for Long Document Extractive Summarization

no code implementations • COLING 2022 • Qianqian Xie, Jimin Huang, Tulika Saha, Sophia Ananiadou

Recently, neural topic models (NTMs) have been incorporated into pre-trained language models (PLMs), to capture the global semantic information for text summarization.

Contrastive Learning • Extractive Summarization • +3

Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk

1 code implementation • 2 Jul 2022 • Benyou Wang, Xiangbo Wu, Xiaokang Liu, Jianquan Li, Prayag Tiwari, Qianqian Xie

However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models.

Benchmarking • Machine Translation • +1

Pre-trained Language Models in Biomedical Domain: A Systematic Survey

1 code implementation • 11 Oct 2021 • Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, Jie Fu

In this paper, we summarize the recent progress of pre-trained language models in the biomedical domain and their applications in biomedical downstream tasks.

MLR: A Two-stage Conversational Query Rewriting Model with Multi-task Learning

no code implementations • 13 Apr 2020 • Shuangyong Song, Chao Wang, Qianqian Xie, Xinxing Zu, Huan Chen, Haiqing Chen

In this paper, we propose the conversational query rewriting model - MLR, which is a Multi-task model on sequence Labeling and query Rewriting.

Multi-Task Learning

Neural Sparse Topical Coding

no code implementations • ACL 2018 • Min Peng, Qianqian Xie, Yanchun Zhang, Hua Wang, Xiuzhen Zhang, Jimin Huang, Gang Tian

Topic models with sparsity enhancement have been proven to be effective at learning discriminative and coherent latent topics of short texts, which is critical to many scientific and engineering applications.

Language Modelling • Topic Models • +1
