Search Results for author: Nuo Chen

Found 76 papers, 30 papers with code

A Transformer-based Threshold-Free Framework for Multi-Intent NLU

no code implementations COLING 2022 Lisung Chen, Nuo Chen, Yuexian Zou, Yong Wang, Xinzhong Sun

Furthermore, we propose a threshold-free multi-intent classifier that utilizes the output of the IND task and detects multiple intents without depending on a threshold.

Intent Detection Multi-Task Learning +3
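As a concrete illustration of the threshold-free idea described above, here is a minimal sketch that assumes the intent-number-detection (IND) head predicts how many intents k a query contains and the classifier then keeps the k highest-scoring intent labels; the interface and names are hypothetical, not the paper's code.

```python
import torch

def select_intents(intent_logits: torch.Tensor, num_intents: torch.Tensor) -> list[list[int]]:
    """Pick multiple intents per utterance without a fixed confidence threshold.

    intent_logits: (batch, num_labels) raw scores from the intent classifier.
    num_intents:   (batch,) intent counts predicted by the IND head
                   (hypothetical interface, for illustration only).
    """
    predictions = []
    for logits, k in zip(intent_logits, num_intents):
        top_k = torch.topk(logits, k=int(k)).indices  # keep the k highest-scoring labels
        predictions.append(sorted(top_k.tolist()))
    return predictions

# Toy usage: two utterances, predicted to carry 2 and 1 intents respectively.
logits = torch.tensor([[2.1, -0.3, 1.7, 0.2],
                       [0.1, 3.0, -1.0, 0.4]])
counts = torch.tensor([2, 1])
print(select_intents(logits, counts))  # [[0, 2], [1]]
```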

LLM-Guided Taxonomy and Hierarchical Uncertainty for 3D Point Cloud Active Learning

no code implementations25 May 2025 Chenxi Li, Nuo Chen, Fengyun Tan, Yantong Chen, Bochun Yuan, Tianrui Li, Chongshou Li

We present a novel active learning framework for 3D point cloud semantic segmentation that, for the first time, integrates large language models (LLMs) to construct hierarchical label structures and guide uncertainty-based sample selection.

Active Learning Semantic Segmentation

XtraGPT: LLMs for Human-AI Collaboration on Controllable Academic Paper Revision

no code implementations16 May 2025 Nuo Chen, Andre Lin HuiKai, Jiaying Wu, Junyi Hou, Zining Zhang, Qian Wang, Xidong Wang, Bingsheng He

Despite the growing adoption of large language models (LLMs) in academic workflows, their capabilities remain limited when it comes to supporting high-quality scientific writing.

Text Generation

StreamRL: Scalable, Heterogeneous, and Elastic RL for LLMs with Disaggregated Stream Generation

no code implementations22 Apr 2025 Yinmin Zhong, Zili Zhang, Xiaoniu Song, Hanpeng Hu, Chao Jin, Bingyang Wu, Nuo Chen, Yukun Chen, Yu Zhou, Changyi Wan, HongYu Zhou, Yimin Jiang, Yibo Zhu, Daxin Jiang

However, in real-world deployments, we observe that the colocated architecture suffers from resource coupling, where the two stages are constrained to use the same resources.

Reinforcement Learning (RL) Scheduling

Assessing Judging Bias in Large Reasoning Models: An Empirical Study

no code implementations14 Apr 2025 Qian Wang, Zhanzhi Lou, Zhenheng Tang, Nuo Chen, Xuandong Zhao, Wenxuan Zhang, Dawn Song, Bingsheng He

Large Reasoning Models (LRMs) like DeepSeek-R1 and OpenAI-o1 have demonstrated remarkable reasoning capabilities, raising important questions about their biases in LLM-as-a-judge settings.

In-Context Learning Position

JudgeLRM: Large Reasoning Models as a Judge

no code implementations31 Mar 2025 Nuo Chen, Zhiyuan Hu, Qingyun Zou, Jiaying Wu, Qian Wang, Bryan Hooi, Bingsheng He

The rise of Large Language Models (LLMs) as evaluators offers a scalable alternative to human annotation, yet existing Supervised Fine-Tuning (SFT) approaches for judges often fall short in domains requiring complex reasoning.

Reinforcement Learning (RL)

SolBench: A Dataset and Benchmark for Evaluating Functional Correctness in Solidity Code Completion and Repair

no code implementations3 Mar 2025 Zaoyu Chen, Haoran Qin, Nuo Chen, Xiangyu Zhao, Lei Xue, Xiapu Luo, Xiao-Ming Wu

To fill this gap, we introduce SolBench, a benchmark for evaluating the functional correctness of Solidity smart contracts generated by code completion models.

Code Completion Code Repair +1

Task-Oriented 6-DoF Grasp Pose Detection in Clutters

no code implementations24 Feb 2025 An-Lan Wang, Nuo Chen, Kun-Yu Lin, Li Yuan-Ming, Wei-Shi Zheng

With the aim of obtaining more general and practical grasp models, in this paper we investigate Task-Oriented 6-DoF Grasp Pose Detection in Clutters (TO6DGC), which extends the task-oriented grasp problem to the more general setting of 6-DoF grasp pose detection in cluttered (multi-object) scenes.

Grasp Generation

RankFlow: A Multi-Role Collaborative Reranking Workflow Utilizing Large Language Models

no code implementations2 Feb 2025 Can Jin, Hongwu Peng, Anxiang Zhang, Nuo Chen, Jiahui Zhao, Xi Xie, Kuangzheng Li, Shuya Feng, Kai Zhong, Caiwen Ding, Dimitris N. Metaxas

In an Information Retrieval (IR) system, reranking plays a critical role by sorting candidate passages according to their relevance to a specific query.

Information Retrieval Reranking
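At its core, the reranking step described above sorts candidates by a query-conditioned relevance score. A minimal sketch follows, with score_fn standing in for whatever scorer the workflow uses; RankFlow itself uses a multi-role LLM pipeline, which is not reproduced here.

```python
def rerank(query: str, passages: list[str], score_fn) -> list[str]:
    """Sort candidate passages by their relevance to the query (highest first)."""
    scored = [(score_fn(query, passage), passage) for passage in passages]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in scored]

# Toy usage with a trivial keyword-overlap scorer as the stand-in relevance model.
def overlap_score(query: str, passage: str) -> int:
    return len(set(query.lower().split()) & set(passage.lower().split()))

print(rerank("graph neural networks",
             ["Graph neural networks for citation data.",
              "A recipe for sourdough bread."],
             overlap_score))
```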

Evaluating Conversational Recommender Systems with Large Language Models: A User-Centric Evaluation Framework

no code implementations16 Jan 2025 Nuo Chen, Quanyu Dai, Xiaoyu Dong, Xiao-Ming Wu, Zhenhua Dong

Conversational recommender systems (CRS) involve both recommendation and dialogue tasks, which makes their evaluation a unique challenge.

Recommendation Systems

What Limits LLM-based Human Simulation: LLMs or Our Design?

no code implementations15 Jan 2025 Qian Wang, Jiaying Wu, Zhenheng Tang, Bingqiao Luo, Nuo Chen, Wei Chen, Bingsheng He

We argue that advancing LLM-based human simulation requires addressing both LLM's inherent limitations and simulation framework design challenges.

Explainable Saliency: Articulating Reasoning with Contextual Prioritization

no code implementations CVPR 2025 Nuo Chen, Ming Jiang, Qi Zhao

Deep saliency models, which predict what parts of an image capture our attention, are often like black boxes.

Navigate Saliency Prediction

Data Driven Automatic Electrical Machine Preliminary Design with Artificial Intelligence Expert Guidance

no code implementations18 Nov 2024 Yiwei Wang, Tao Yang, Hailin Huang, Tianjie Zou, Jincai Li, Nuo Chen, Zhuoran Zhang

Once trained, the surrogate model, guided by metaheuristic algorithms, can generate thousands of geometrically scalable designs covering a wide power range, forming an AI expert database to guide future preliminary design.

Prognosis

GCoder: Improving Large Language Model for Generalized Graph Problem Solving

1 code implementation24 Oct 2024 Qifan Zhang, Xiaobin Hong, Jianheng Tang, Nuo Chen, Yuhan Li, Wenzhong Li, Jing Tang, Jia Li

Furthermore, GCoder efficiently manages large-scale graphs with millions of nodes and diverse input formats, overcoming the limitations of previous models focused on the reasoning steps paradigm.

Language Modeling Language Modelling +1

Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts

1 code implementation14 Oct 2024 Guorui Zheng, Xidong Wang, Juhao Liang, Nuo Chen, Yuping Zheng, Benyou Wang

In order to leverage the generalization capability of multilingual LLMs to efficiently scale to more resource-constrained languages, we explore the internal information flow of LLMs from a multilingual perspective using Mixture of Experts (MoE) modularity.

Mixture-of-Experts

Retrieving, Rethinking and Revising: The Chain-of-Verification Can Improve Retrieval Augmented Generation

no code implementations8 Oct 2024 Bolei He, Nuo Chen, Xinran He, Lingyong Yan, Zhenkai Wei, Jinchang Luo, Zhen-Hua Ling

To address these issues, we propose the chain-of-verification (CoV-RAG) to enhance the external retrieval correctness and internal generation consistency.

Language Modeling Language Modelling +3
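The loop described above (retrieve, generate, then verify and retry) can be sketched as follows; the retrieve, generate, and verify callables are placeholders for the retriever, generator, and verification model, not CoV-RAG's actual components.

```python
def chain_of_verification_answer(question, retrieve, generate, verify, max_rounds=2):
    """Retrieve evidence, draft an answer, verify it, and retry with a revised
    query if the check fails. Illustrative control flow only."""
    query = question
    for _ in range(max_rounds):
        evidence = retrieve(query)                         # external retrieval
        draft = generate(question, evidence)               # internal generation
        passed, revised_query = verify(question, evidence, draft)
        if passed:
            return draft                                   # verification succeeded
        query = revised_query                              # retry with a corrected query
    return draft                                           # best effort after max_rounds

# Toy usage with trivial stand-ins for the three components.
answer = chain_of_verification_answer(
    "Who wrote Hamlet?",
    retrieve=lambda q: ["Hamlet is a tragedy by William Shakespeare."],
    generate=lambda q, evidence: "William Shakespeare",
    verify=lambda q, evidence, draft: (draft in evidence[0], q),
)
print(answer)  # William Shakespeare
```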

AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment

no code implementations24 Sep 2024 Nuo Chen, Jiqun Liu, Xiaoyu Dong, Qijiong Liu, Tetsuya Sakai, Xiao-Ming Wu

Our findings demonstrate that LLMs' judgments, like human judgments, are influenced by threshold priming biases, and suggest that researchers and system engineers should take potential human-like cognitive biases into account when designing, evaluating, and auditing LLMs in IR tasks and beyond.

Decision Making Information Retrieval

ControlMath: Controllable Data Generation Promotes Math Generalist Models

no code implementations20 Sep 2024 Nuo Chen, Ning Wu, Jianhui Chang, Jia Li

The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems.

Data Augmentation Diversity +3
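The two-stage pipeline described above (an equation generator followed by a Problem-Crafter agent) can be illustrated with a toy sketch; here the crafting step is a template rather than the paper's LLM agent, and all names are hypothetical.

```python
import random

def generate_equation(rng: random.Random) -> tuple[int, int, int]:
    """Toy stand-in for the equation-generation module: a + b = c."""
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return a, b, a + b

def craft_word_problem(a: int, b: int) -> str:
    """Toy stand-in for the Problem-Crafter step (an LLM agent in the paper)."""
    return (f"Ava picked {a} apples in the morning and {b} more in the afternoon. "
            f"How many apples did she pick in total?")

rng = random.Random(0)
a, b, answer = generate_equation(rng)
print(craft_word_problem(a, b), f"(answer: {answer})")
```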

The Oscars of AI Theater: A Survey on Role-Playing with Language Models

1 code implementation16 Jul 2024 Nuo Chen, Yan Wang, Yang Deng, Jia Li

This survey explores the burgeoning field of role-playing with language models, focusing on their development from early persona-based models to advanced character-driven simulations facilitated by Large Language Models (LLMs).

Survey

A Reflective LLM-based Agent to Guide Zero-shot Cryptocurrency Trading

no code implementations27 Jun 2024 Yuan Li, Bingqiao Luo, Qian Wang, Nuo Chen, Xu Liu, Bingsheng He

The utilization of Large Language Models (LLMs) in financial trading has primarily been concentrated within the stock market, aiding in economic and financial decisions.

Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections

2 code implementations5 Jun 2024 Zihan Luo, Hong Huang, Yongkang Zhou, Jiping Zhang, Nuo Chen, Hai Jin

Despite the remarkable capabilities demonstrated by Graph Neural Networks (GNNs) in graph-related tasks, recent research has revealed the fairness vulnerabilities in GNNs when facing malicious adversarial attacks.

Fairness

Is Your LLM Outdated? Evaluating LLMs at Temporal Generalization

1 code implementation14 May 2024 Chenghao Zhu, Nuo Chen, Yufei Gao, Yunyi Zhang, Prayag Tiwari, Benyou Wang

The rapid advancement of Large Language Models (LLMs) highlights the urgent need for evolving evaluation methodologies that keep pace with improvements in language comprehension and information processing.

Vector Quantization for Recommender Systems: A Review and Outlook

1 code implementation6 May 2024 Qijiong Liu, Xiaoyu Dong, Jiaren Xiao, Nuo Chen, Hengchang Hu, Jieming Zhu, Chenxu Zhu, Tetsuya Sakai, Xiao-Ming Wu

Finally, the survey analyzes the remaining challenges and anticipates future trends in VQ4Rec, including the challenges associated with the training of vector quantization, the opportunities presented by large language models, and emerging trends in multimodal recommender systems.

Feature Compression Quantization +2

Structure-aware Fine-tuning for Code Pre-trained Models

no code implementations11 Apr 2024 Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao

Over the past few years, we have witnessed remarkable advancements in Code Pre-trained Models (CodePTMs).

Multi-Task Learning

Decoy Effect In Search Interaction: Understanding User Behavior and Measuring System Vulnerability

no code implementations27 Mar 2024 Nuo Chen, Jiqun Liu, Hanpei Fang, Yuankai Luo, Tetsuya Sakai, Xiao-Ming Wu

This study examines the decoy effect's underexplored influence on user search interactions and methods for measuring information retrieval (IR) systems' vulnerability to this effect.

Information Retrieval Retrieval

Apollo: A Lightweight Multilingual Medical LLM towards Democratizing Medical AI to 6B People

1 code implementation6 Mar 2024 Xidong Wang, Nuo Chen, Junyin Chen, Yidong Wang, Guorui Zhen, Chunxian Zhang, Xiangbo Wu, Yan Hu, Anningzhe Gao, Xiang Wan, Haizhou Li, Benyou Wang

Despite the vast repository of global medical knowledge predominantly being in English, local languages are crucial for delivering tailored healthcare services, particularly in areas with limited medical resources.

GraphWiz: An Instruction-Following Language Model for Graph Problems

1 code implementation25 Feb 2024 Nuo Chen, Yuhan Li, Jianheng Tang, Jia Li

Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored.

Instruction Following Language Modeling +1

From Good to Great: Improving Math Reasoning with Tool-Augmented Interleaf Prompting

no code implementations18 Dec 2023 Nuo Chen, Hongguang Li, Baoyuan Wang, Jia Li

IMP-TIP follows the "From Good to Great" concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting.

Diversity GSM8K +2
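The selection step described above can be approximated by a simple agreement check across the candidate solutions; the sketch below uses a plain majority vote as a stand-in for the paper's tool-augmented cross-checking, so it illustrates the control flow rather than the method itself.

```python
from collections import Counter

def select_or_regenerate(candidate_answers: list[str]) -> str | None:
    """Return the answer the candidates agree on, or None to signal that the
    system should re-generate (illustrative selection rule only)."""
    counts = Counter(answer.strip() for answer in candidate_answers if answer)
    if not counts:
        return None
    answer, frequency = counts.most_common(1)[0]
    return answer if frequency > 1 else None  # no agreement -> re-generate

print(select_or_regenerate(["42", "42", "41"]))  # 42
print(select_or_regenerate(["7", "9", "11"]))    # None -> trigger re-generation
```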

Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers

1 code implementation7 Dec 2023 Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li

This paper presents an in-depth analysis of Large Language Models (LLMs), focusing on LLaMA, a prominent open-source foundational model in natural language processing.

Math Multiple-choice +1

MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria

1 code implementation23 Nov 2023 Wentao Ge, Shunian Chen, Guiming Hardy Chen, Junying Chen, Zhihong Chen, Nuo Chen, Wenya Xie, Shuo Yan, Chenghao Zhu, Ziyue Lin, Song Dingjie, Xidong Wang, Anningzhe Gao, Zhang Zhiyi, Jianquan Li, Xiang Wan, Benyou Wang

To this end, in our paper, we propose a new evaluation paradigm for MLLMs, which evaluates MLLMs with per-sample criteria using a potent MLLM as the judge.

Decoy Effect in Search Interaction: A Pilot Study

no code implementations4 Nov 2023 Nuo Chen, Jiqun Liu, Tetsuya Sakai, Xiao-Ming Wu

In recent years, the influence of cognitive effects and biases on users' thinking, behaving, and decision-making has garnered increasing attention in the field of interactive information retrieval.

Decision Making Information Retrieval +1

Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations

2 code implementations31 Oct 2023 Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Dongmei Zhang, Jia Li

This indicates that crafting multilingual corpora can be regarded as a vital strategy for enhancing model performance in a specific language, especially in mathematical reasoning tasks.

GSM8K Math +1

Uncertainty-aware Parameter-Efficient Self-training for Semi-supervised Language Understanding

1 code implementation19 Oct 2023 Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li

The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios.

ChatDev: Communicative Agents for Software Development

1 code implementation16 Jul 2023 Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun

Numerous studies have used deep learning to improve specific phases of the waterfall model, such as design, coding, and testing.

Decision Making

A Meta-Evaluation of C/W/L/A Metrics: System Ranking Similarity, System Ranking Consistency and Discriminative Power

no code implementations6 Jul 2023 Nuo Chen, Tetsuya Sakai

In this study, we investigate the statistical stability of C/W/L/A metrics from the perspective of: (1) the system ranking similarity among aggregations, (2) the system ranking consistency of aggregations and (3) the discriminative power of aggregations.

Information Retrieval
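For the first of the three perspectives, system ranking similarity between two aggregations can be measured with a rank correlation such as Kendall's tau; the sketch below shows that computation under the assumption that each aggregation is represented by an ordered list of system names (illustrative, not the paper's exact protocol).

```python
from itertools import combinations

def kendall_tau(ranking_a: list[str], ranking_b: list[str]) -> float:
    """Kendall's tau between two system rankings over their shared systems."""
    pos_a = {system: i for i, system in enumerate(ranking_a)}
    pos_b = {system: i for i, system in enumerate(ranking_b)}
    shared = [system for system in ranking_a if system in pos_b]
    concordant = discordant = 0
    for x, y in combinations(shared, 2):
        agreement = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y])
        if agreement > 0:
            concordant += 1
        elif agreement < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else 1.0

print(kendall_tau(["sysA", "sysB", "sysC"], ["sysA", "sysC", "sysB"]))  # 0.333...
```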

TransCoder: Towards Unified Transferable Code Representation Learning Inspired by Human Skills

1 code implementation23 May 2023 Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao

To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.

Clone Detection Code Summarization +2

ONCE: Boosting Content-based Recommendation with Both Open- and Closed-source Large Language Models

3 code implementations11 May 2023 Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu

Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services.

Navigate News Generation +3

Alleviating Over-smoothing for Unsupervised Sentence Representation

1 code implementation9 May 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li

Learning better unsupervised sentence representations is currently a major pursuit of the natural language processing community.

Contrastive Learning Semantic Textual Similarity +1

Mapping Degeneration Meets Label Evolution: Learning Infrared Small Target Detection with Single Point Supervision

1 code implementation CVPR 2023 Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, Shilin Zhou

Interestingly, during the training phase supervised by point labels, we discover that CNNs first learn to segment a cluster of pixels near the targets, and then gradually converge to predicting the ground-truth point labels.

Improve Retrieval-based Dialogue System via Syntax-Informed Attention

no code implementations12 Mar 2023 Tengtao Song, Nuo Chen, Ji Jiang, Zhihong Zhu, Yuexian Zou

Since incorporating syntactic information such as dependency structures into neural models can promote a better understanding of sentences, such methods have been widely used in NLP tasks.

Retrieval Sentence

HugNLP: A Unified and Comprehensive Library for Natural Language Processing

2 code implementations28 Feb 2023 Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao

In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.

FiTs: Fine-grained Two-stage Training for Knowledge-aware Question Answering

1 code implementation23 Feb 2023 Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou

Despite the promising result of recent KAQA systems which tend to integrate linguistic knowledge from pre-trained language models (PLM) and factual knowledge from knowledge graphs (KG) to answer complex questions, a bottleneck exists in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulties in joint reasoning over the provided knowledge from both modalities.

Knowledge Graphs MedQA +2

Natural Response Generation for Chinese Reading Comprehension

1 code implementation17 Feb 2023 Nuo Chen, Hongguang Li, Yinan Bao, Baoyuan Wang, Jia Li

To this end, we construct a new dataset called Penguin to promote the research of MRC, providing a training and test bed for natural response generation to real scenarios.

Chinese Reading Comprehension Machine Reading Comprehension +1

Bridge the Gap between Language models and Tabular Understanding

no code implementations16 Feb 2023 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li

For instance, TPLMs jointly pre-trained with table and text input can be effective for tasks that also take joint table-text input, such as table question answering, but may fail on tasks with only tables or only text as input, such as table retrieval.

Contrastive Learning Language Modeling +3

Human Mobility Modeling During the COVID-19 Pandemic via Deep Graph Diffusion Infomax

no code implementations12 Dec 2022 Yang Liu, Yu Rong, Zhuoning Guo, Nuo Chen, Tingyang Xu, Fugee Tsung, Jia Li

To address these challenges, we formulate the micro perspective mobility modeling into computing the relevance score between a diffusion and a location, conditional on a geometric graph.

Exploring and Exploiting Multi-Granularity Representations for Machine Reading Comprehension

no code implementations18 Aug 2022 Nuo Chen, Chenyu You

To predict the answer, it is common practice to employ a predictor to draw information only from the final encoder layer, which generates the coarse-grained representations of the source sequences, i.e., passage and question.

Machine Reading Comprehension
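One common remedy for relying only on the final encoder layer, as the abstract above notes, is to fuse hidden states from several layers before prediction. A minimal sketch of such multi-granularity fusion with learned layer weights follows; the layer choice and fusion rule are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiGranularityPooler(nn.Module):
    """Mix several encoder layers with learned softmax weights instead of
    feeding the predictor only the final layer's representations."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: one (batch, seq_len, hidden) tensor per encoder layer.
        stacked = torch.stack(hidden_states, dim=0)         # (layers, B, T, H)
        weights = torch.softmax(self.layer_weights, dim=0)  # normalized mixing weights
        return (weights[:, None, None, None] * stacked).sum(dim=0)

pooler = MultiGranularityPooler(num_layers=4)
layers = [torch.randn(2, 8, 16) for _ in range(4)]
print(pooler(layers).shape)  # torch.Size([2, 8, 16])
```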

Automatic Prosody Annotation with Pre-Trained Text-Speech Model

1 code implementation16 Jun 2022 Ziqian Dai, Jianwei Yu, Yan Wang, Nuo Chen, Yanyao Bian, Guangzhi Li, Deng Cai, Dong Yu

Prosodic boundary plays an important role in text-to-speech synthesis (TTS) in terms of naturalness and readability.

Speech Synthesis text-to-speech +3

End-to-end Spoken Conversational Question Answering: Task, Dataset and Model

no code implementations Findings (NAACL) 2022 Chenyu You, Nuo Chen, Fenglin Liu, Shen Ge, Xian Wu, Yuexian Zou

To evaluate the capacity of SCQA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 40k question-answer pairs from 4k conversations.

4k Conversational Question Answering +2

Detecting Recolored Image by Spatial Correlation

no code implementations23 Apr 2022 Yushu Zhang, Nuo Chen, Shuren Qi, Mingfu Xue, Xiaochun Cao

In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep learning-based recoloring.

Image Forensics Image Manipulation

Bridging the Gap between Language Models and Cross-Lingual Sequence Labeling

no code implementations NAACL 2022 Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang

Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.

Contrastive Learning Language Modeling +2

From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension

no code implementations9 Dec 2021 Nuo Chen, Linjun Shou, Min Gong, Jian Pei, Daxin Jiang

Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages.

Contrastive Learning Machine Reading Comprehension

Self-supervised Contrastive Cross-Modality Representation Learning for Spoken Question Answering

no code implementations Findings (EMNLP) 2021 Chenyu You, Nuo Chen, Yuexian Zou

In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage.

Question Answering Representation Learning

Towards Visual Explainable Active Learning for Zero-Shot Classification

no code implementations15 Aug 2021 Shichao Jia, Zeyu Li, Nuo Chen, Jiawan Zhang

This paper proposes a visual explainable active learning approach, with a design and implementation called the semantic navigator, to solve the above problems.

Active Learning Attribute +2

Text Anchor Based Metric Learning for Small-footprint Keyword Spotting

no code implementations12 Aug 2021 Li Wang, Rongzhi Gu, Nuo Chen, Yuexian Zou

Recently proposed metric learning approaches have improved the generalizability of models for the KWS task, and 1D-CNN-based KWS models have achieved state-of-the-art (SOTA) performance in terms of model size.

Metric Learning Small-Footprint Keyword Spotting

Adaptive Bi-directional Attention: Exploring Multi-Granularity Representations for Machine Reading Comprehension

no code implementations20 Dec 2020 Nuo Chen, Fenglin Liu, Chenyu You, Peilin Zhou, Yuexian Zou

To predict the answer, it is common practice to employ a predictor to draw information only from the final encoder layer, which generates the coarse-grained representations of the source sequences, i.e., passage and question.

Machine Reading Comprehension

Impact of Low-Resolution ADC on DOA Estimation Performance for Massive MIMO Receive Array

no code implementations1 Nov 2020 Baihua Shi, Nuo Chen, Xicheng Zhu, Yuwen Qian, Yijin Zhang, Feng Shu, Jiangzhou Wang

In this paper, we present a new scenario of direction of arrival (DOA) estimation using massive multiple-input multiple-output (MIMO) receive array with low-resolution analog-to-digital convertors (ADCs), which can strike a good balance between performance and circuit cost.

Information Theory Signal Processing

Contextualized Attention-based Knowledge Transfer for Spoken Conversational Question Answering

no code implementations21 Oct 2020 Chenyu You, Nuo Chen, Yuexian Zou

Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow given the speech utterances and text corpora.

Audio Signal Processing Conversational Question Answering +2

Knowledge Distillation for Improved Accuracy in Spoken Question Answering

no code implementations21 Oct 2020 Chenyu You, Nuo Chen, Yuexian Zou

However, recent work shows that ASR systems generate highly noisy transcripts, which critically limits the capability of machine comprehension on the SQA task.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +6

Towards Data Distillation for End-to-end Spoken Conversational Question Answering

no code implementations18 Oct 2020 Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, Yuexian Zou

In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
