no code implementations • COLING 2022 • Lisung Chen, Nuo Chen, Yuexian Zou, Yong Wang, Xinzhong Sun
Furthermore, we propose a threshold-free multi-intent classifier that utilizes the output of the IND task and detects multiple intents without relying on a threshold.
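For context, the conventional alternative this paper avoids applies a sigmoid per intent and keeps every intent whose probability clears a manually tuned cutoff. A minimal sketch of that threshold-dependent baseline (names and values are hypothetical, not the paper's model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def threshold_multi_intent(logits, threshold=0.5):
    """Conventional baseline: keep every intent whose sigmoid
    probability exceeds a manually tuned threshold."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [i for i, p in enumerate(probs) if p > threshold]

# Intents 0 and 2 fire at threshold 0.5; the predicted set changes
# if the threshold is re-tuned, which is the sensitivity a
# threshold-free classifier sidesteps.
print(threshold_multi_intent([2.0, -1.5, 0.8]))  # -> [0, 2]
```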
Ranked #3 on Semantic Frame Parsing on MixSNIPS
no code implementations • 16 Jun 2025 • Xueqing Peng, Lingfei Qian, Yan Wang, Ruoyu Xiang, Yueru He, Yang Ren, Mingyang Jiang, Jeff Zhao, Huan He, Yi Han, Yun Feng, Yuechen Jiang, Yupeng Cao, Haohang Li, Yangyang Yu, Xiaoyu Wang, Penglei Gao, Shengyuan Lin, Keyi Wang, Shanshan Yang, Yilun Zhao, Zhiwei Liu, Peng Lu, Jerry Huang, Suyuchen Wang, Triantafillos Papadopoulos, Polydoros Giannouris, Efstathia Soufleri, Nuo Chen, Guojun Xiong, Zhiyang Deng, Yijia Zhao, Mingquan Lin, Meikang Qiu, Kaleb E Smith, Arman Cohan, Xiao-Yang Liu, Jimin Huang, Alejandro Lopez-Lira, Xi Chen, Junichi Tsujii, Jian-Yun Nie, Sophia Ananiadou, Qianqian Xie
We introduce MultiFinBen, the first multilingual and multimodal benchmark tailored to the global financial domain, evaluating LLMs across modalities (text, vision, audio) and linguistic settings (monolingual, bilingual, multilingual) on domain-specific tasks.
1 code implementation • 4 Jun 2025 • Xiaomi LLM-Core Team, Zihao Yue, Zhenru Lin, YiFan Song, Weikun Wang, Shuhuai Ren, Shuhao Gu, Shicheng Li, Peidian Li, Liang Zhao, Lei LI, Kainan Bao, Hao Tian, Hailin Zhang, Gang Wang, Dawei Zhu, Cici, Chenhong He, Bowen Ye, Bowen Shen, Zihan Zhang, Zihan Jiang, Zhixian Zheng, Zhichao Song, Zhenbo Luo, Yue Yu, Yudong Wang, Yuanyuan Tian, Yu Tu, Yihan Yan, Yi Huang, Xu Wang, Xinzhe Xu, Xingchen Song, Xing Zhang, Xing Yong, Xin Zhang, Xiangwei Deng, Wenyu Yang, Wenhan Ma, Weiwei Lv, Weiji Zhuang, Wei Liu, Sirui Deng, Shuo Liu, Shimao Chen, Shihua Yu, Shaohui Liu, Shande Wang, Rui Ma, Qiantong Wang, Peng Wang, Nuo Chen, Menghang Zhu, Kangyang Zhou, Kang Zhou, Kai Fang, Jun Shi, Jinhao Dong, Jiebao Xiao, Jiaming Xu, Huaqiu Liu, Hongshen Xu, Heng Qu, Haochen Zhao, Hanglong Lv, Guoan Wang, Duo Zhang, Dong Zhang, Di Zhang, Chong Ma, Chang Liu, Can Cai, Bingquan Xia
We open-source MiMo-VL-7B-SFT and MiMo-VL-7B-RL, two powerful vision-language models delivering state-of-the-art performance in both general visual understanding and multimodal reasoning.
no code implementations • 25 May 2025 • Chenxi Li, Nuo Chen, Fengyun Tan, Yantong Chen, Bochun Yuan, Tianrui Li, Chongshou Li
We present a novel active learning framework for 3D point cloud semantic segmentation that, for the first time, integrates large language models (LLMs) to construct hierarchical label structures and guide uncertainty-based sample selection.
no code implementations • 16 May 2025 • Nuo Chen, Andre Lin HuiKai, Jiaying Wu, Junyi Hou, Zining Zhang, Qian Wang, Xidong Wang, Bingsheng He
Despite the growing adoption of large language models (LLMs) in academic workflows, their capabilities remain limited when it comes to supporting high-quality scientific writing.
1 code implementation • 12 May 2025 • Xiaomi LLM-Core Team, Bingquan Xia, Bowen Shen, Cici, Dawei Zhu, Di Zhang, Gang Wang, Hailin Zhang, Huaqiu Liu, Jiebao Xiao, Jinhao Dong, Liang Zhao, Peidian Li, Peng Wang, Shihua Yu, Shimao Chen, Weikun Wang, Wenhan Ma, Xiangwei Deng, Yi Huang, YiFan Song, Zihan Jiang, Bowen Ye, Can Cai, Chenhong He, Dong Zhang, Duo Zhang, Guoan Wang, Hao Tian, Haochen Zhao, Heng Qu, Hongshen Xu, Jun Shi, Kainan Bao, Qingkai Fang, Kang Zhou, Kangyang Zhou, Lei LI, Menghang Zhu, Nuo Chen, Qiantong Wang, Shaohui Liu, Shicheng Li, Shuhao Gu, Shuhuai Ren, Shuo Liu, Sirui Deng, Weiji Zhuang, Weiwei Lv, Wenyu Yang, Xin Zhang, Xing Yong, Xing Zhang, Xingchen Song, Xinzhe Xu, Xu Wang, Yihan Yan, Yu Tu, Yuanyuan Tian, Yudong Wang, Yue Yu, Zhenru Lin, Zhichao Song, Zihao Yue
We present MiMo-7B, a large language model born for reasoning tasks, with optimization across both pre-training and post-training stages.
no code implementations • 22 Apr 2025 • Yinmin Zhong, Zili Zhang, Xiaoniu Song, Hanpeng Hu, Chao Jin, Bingyang Wu, Nuo Chen, Yukun Chen, Yu Zhou, Changyi Wan, HongYu Zhou, Yimin Jiang, Yibo Zhu, Daxin Jiang
However, in real-world deployments, we observe that the colocated architecture suffers from resource coupling, where the two stages are constrained to use the same resources.
no code implementations • 14 Apr 2025 • Qian Wang, Zhanzhi Lou, Zhenheng Tang, Nuo Chen, Xuandong Zhao, Wenxuan Zhang, Dawn Song, Bingsheng He
Large Reasoning Models (LRMs) like DeepSeek-R1 and OpenAI-o1 have demonstrated remarkable reasoning capabilities, raising important questions about their biases in LLM-as-a-judge settings.
no code implementations • 31 Mar 2025 • Nuo Chen, Zhiyuan Hu, Qingyun Zou, Jiaying Wu, Qian Wang, Bryan Hooi, Bingsheng He
The rise of Large Language Models (LLMs) as evaluators offers a scalable alternative to human annotation, yet existing Supervised Fine-Tuning (SFT) approaches for judge models often fall short in domains requiring complex reasoning.
no code implementations • 16 Mar 2025 • Kanzhi Cheng, Wenpo Song, Jiaxin Fan, Zheng Ma, Qiushi Sun, Fangzhi Xu, Chenyang Yan, Nuo Chen, Jianbing Zhang, Jiajun Chen
Image captioning has been a longstanding challenge in vision-language research.
no code implementations • 3 Mar 2025 • Zaoyu Chen, Haoran Qin, Nuo Chen, Xiangyu Zhao, Lei Xue, Xiapu Luo, Xiao-Ming Wu
To fill this gap, we introduce SolBench, a benchmark for evaluating the functional correctness of Solidity smart contracts generated by code completion models.
no code implementations • 24 Feb 2025 • An-Lan Wang, Nuo Chen, Kun-Yu Lin, Li Yuan-Ming, Wei-Shi Zheng
Aiming at more general and practical grasp models, in this paper we investigate the problem of Task-Oriented 6-DoF Grasp Pose Detection in Clutters (TO6DGC), which extends task-oriented grasp detection to the more general cluttered (multi-object) 6-DoF scenario.
no code implementations • 2 Feb 2025 • Can Jin, Hongwu Peng, Anxiang Zhang, Nuo Chen, Jiahui Zhao, Xi Xie, Kuangzheng Li, Shuya Feng, Kai Zhong, Caiwen Ding, Dimitris N. Metaxas
In an Information Retrieval (IR) system, reranking plays a critical role by sorting candidate passages according to their relevance to a specific query.
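The reranking step described above can be sketched minimally: score each candidate passage against the query and sort by that score. Real rerankers use learned cross-encoders; here a simple cosine similarity over bag-of-words vectors stands in as a purely illustrative scorer (all names are hypothetical):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(query: str, passages: list[str]) -> list[str]:
    """Sort candidate passages by relevance to the query,
    most relevant first."""
    q = Counter(query.lower().split())
    return sorted(passages,
                  key=lambda p: cosine(q, Counter(p.lower().split())),
                  reverse=True)

docs = ["cats sit on mats", "stock market report", "a cat on a mat"]
print(rerank("cat mat", docs)[0])  # -> "a cat on a mat"
```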
no code implementations • 16 Jan 2025 • Nuo Chen, Quanyu Dai, Xiaoyu Dong, Xiao-Ming Wu, Zhenhua Dong
Conversational recommender systems (CRS) involve both recommendation and dialogue tasks, which makes their evaluation a unique challenge.
no code implementations • 15 Jan 2025 • Qian Wang, Jiaying Wu, Zhenheng Tang, Bingqiao Luo, Nuo Chen, Wei Chen, Bingsheng He
We argue that advancing LLM-based human simulation requires addressing both LLMs' inherent limitations and simulation framework design challenges.
no code implementations • CVPR 2025 • Nuo Chen, Ming Jiang, Qi Zhao
Deep saliency models, which predict what parts of an image capture our attention, are often like black boxes.
no code implementations • 18 Nov 2024 • Yiwei Wang, Tao Yang, Hailin Huang, Tianjie Zou, Jincai Li, Nuo Chen, Zhuoran Zhang
Once trained, the surrogate model, guided by metaheuristic algorithms, can generate thousands of geometrically scalable designs covering a wide power range, forming an AI expert database to guide future preliminary design.
1 code implementation • 24 Oct 2024 • Qifan Zhang, Xiaobin Hong, Jianheng Tang, Nuo Chen, Yuhan Li, Wenzhong Li, Jing Tang, Jia Li
Furthermore, GCoder efficiently manages large-scale graphs with millions of nodes and diverse input formats, overcoming the limitations of previous models focused on the reasoning steps paradigm.
1 code implementation • 14 Oct 2024 • Guorui Zheng, Xidong Wang, Juhao Liang, Nuo Chen, Yuping Zheng, Benyou Wang
In order to leverage the generalization capability of multilingual LLMs to efficiently scale to more resource-constrained languages, we explore the internal information flow of LLMs from a multilingual perspective using Mixture of Experts (MoE) modularity.
no code implementations • 8 Oct 2024 • Bolei He, Nuo Chen, Xinran He, Lingyong Yan, Zhenkai Wei, Jinchang Luo, Zhen-Hua Ling
To address these issues, we propose the chain-of-verification (CoV-RAG) to enhance the external retrieval correctness and internal generation consistency.
no code implementations • 24 Sep 2024 • Nuo Chen, Jiqun Liu, Xiaoyu Dong, Qijiong Liu, Tetsuya Sakai, Xiao-Ming Wu
Our findings demonstrate that LLMs' judgments, like human judgments, are influenced by threshold priming biases, suggesting that researchers and system engineers should account for potential human-like cognitive biases when designing, evaluating, and auditing LLMs in IR tasks and beyond.
no code implementations • 20 Sep 2024 • Nuo Chen, Ning Wu, Jianhui Chang, Jia Li
The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems.
1 code implementation • 22 Aug 2024 • Zhixiang Guo, Xinming Wu, Luming Liang, Hanlin Sheng, Nuo Chen, Zhengfa Bi
We explore adapting foundation models (FMs) from the computer vision domain to geoscience.
1 code implementation • 16 Jul 2024 • Nuo Chen, Yan Wang, Yang Deng, Jia Li
This survey explores the burgeoning field of role-playing with language models, focusing on their development from early persona-based models to advanced character-driven simulations facilitated by Large Language Models (LLMs).
no code implementations • 27 Jun 2024 • Yuan Li, Bingqiao Luo, Qian Wang, Nuo Chen, Xu Liu, Bingsheng He
The utilization of Large Language Models (LLMs) in financial trading has primarily been concentrated within the stock market, aiding in economic and financial decisions.
no code implementations • 16 Jun 2024 • Sanbao Su, Nuo Chen, Chenchen Lin, Felix Juefei-Xu, Chen Feng, Fei Miao
To address this, we propose an uncertainty-aware OCC method (α-OCC).
3D Semantic Occupancy Prediction
3D Semantic Scene Completion
no code implementations • CVPR 2024 • Yiming Li, Zhiheng Li, Nuo Chen, Moonjun Gong, Zonglin Lyu, Zehong Wang, Peili Jiang, Chen Feng
More specifically, MARS is collected with a fleet of autonomous vehicles driving within a certain geographical area.
2 code implementations • 5 Jun 2024 • Zihan Luo, Hong Huang, Yongkang Zhou, Jiping Zhang, Nuo Chen, Hai Jin
Despite the remarkable capabilities demonstrated by Graph Neural Networks (GNNs) in graph-related tasks, recent research has revealed the fairness vulnerabilities in GNNs when facing malicious adversarial attacks.
1 code implementation • 14 May 2024 • Chenghao Zhu, Nuo Chen, Yufei Gao, Yunyi Zhang, Prayag Tiwari, Benyou Wang
The rapid advancement of Large Language Models (LLMs) highlights the urgent need for evolving evaluation methodologies that keep pace with improvements in language comprehension and information processing.
1 code implementation • 6 May 2024 • Qijiong Liu, Xiaoyu Dong, Jiaren Xiao, Nuo Chen, Hengchang Hu, Jieming Zhu, Chenxu Zhu, Tetsuya Sakai, Xiao-Ming Wu
Finally, the survey analyzes the remaining challenges and anticipates future trends in VQ4Rec, including the challenges associated with the training of vector quantization, the opportunities presented by large language models, and emerging trends in multimodal recommender systems.
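At its core, the vector quantization (VQ) surveyed here maps a continuous embedding to the index of its nearest codebook entry, so items can be represented by discrete codes. A minimal sketch with a hypothetical codebook (illustrative only, not any specific VQ4Rec system):

```python
import numpy as np

def vq_encode(embedding: np.ndarray, codebook: np.ndarray) -> int:
    """Map a continuous embedding (e.g. an item vector in a
    recommender) to the index of its nearest codebook entry."""
    dists = np.linalg.norm(codebook - embedding, axis=1)
    return int(np.argmin(dists))

# Hypothetical 4-entry codebook of 2-d vectors.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
idx = vq_encode(np.array([0.9, 0.1]), codebook)
print(idx)  # -> 1 (nearest to [1.0, 0.0])
```

Training the codebook itself (e.g. by k-means or straight-through gradients) is one of the open challenges the survey discusses.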
no code implementations • 11 Apr 2024 • Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
Over the past few years, we have witnessed remarkable advancements in Code Pre-trained Models (CodePTMs).
no code implementations • 27 Mar 2024 • Nuo Chen, Jiqun Liu, Hanpei Fang, Yuankai Luo, Tetsuya Sakai, Xiao-Ming Wu
This study examines the decoy effect's underexplored influence on user search interactions and methods for measuring information retrieval (IR) systems' vulnerability to this effect.
1 code implementation • 6 Mar 2024 • Xidong Wang, Nuo Chen, Junyin Chen, Yidong Wang, Guorui Zhen, Chunxian Zhang, Xiangbo Wu, Yan Hu, Anningzhe Gao, Xiang Wan, Haizhou Li, Benyou Wang
Despite the vast repository of global medical knowledge predominantly being in English, local languages are crucial for delivering tailored healthcare services, particularly in areas with limited medical resources.
1 code implementation • 25 Feb 2024 • Nuo Chen, Yuhan Li, Jianheng Tang, Jia Li
Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored.
1 code implementation • 19 Feb 2024 • Nuo Chen, Hongguang Li, Juhua Huang, Baoyuan Wang, Jia Li
Existing retrieval-based methods have made significant strides in maintaining long-term conversations.
no code implementations • 18 Dec 2023 • Nuo Chen, Hongguang Li, Baoyuan Wang, Jia Li
IMP-TIP follows the "From Good to Great" concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting.
1 code implementation • 7 Dec 2023 • Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li
This paper presents an in-depth analysis of Large Language Models (LLMs), focusing on LLaMA, a prominent open-source foundational model in natural language processing.
no code implementations • 4 Dec 2023 • Ping Zhou, Nuo Chen, Yuda Xu, Chengcai Xu
The light field imaging in restrictive object space (ROS-LF) is complicated but significant.
1 code implementation • 23 Nov 2023 • Wentao Ge, Shunian Chen, Guiming Hardy Chen, Junying Chen, Zhihong Chen, Nuo Chen, Wenya Xie, Shuo Yan, Chenghao Zhu, Ziyue Lin, Song Dingjie, Xidong Wang, Anningzhe Gao, Zhang Zhiyi, Jianquan Li, Xiang Wan, Benyou Wang
To this end, in our paper, we propose a new evaluation paradigm for MLLMs: evaluating MLLMs with per-sample criteria, using a potent MLLM as the judge.
no code implementations • 4 Nov 2023 • Nuo Chen, Jiqun Liu, Tetsuya Sakai, Xiao-Ming Wu
In recent years, the influence of cognitive effects and biases on users' thinking, behaving, and decision-making has garnered increasing attention in the field of interactive information retrieval.
2 code implementations • 31 Oct 2023 • Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Dongmei Zhang, Jia Li
This indicates that crafting multilingual corpora can be regarded as a vital strategy for enhancing model performance in a specific language, especially in mathematical reasoning tasks.
1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios.
1 code implementation • 16 Jul 2023 • Chen Qian, Wei Liu, Hongzhang Liu, Nuo Chen, Yufan Dang, Jiahao Li, Cheng Yang, Weize Chen, Yusheng Su, Xin Cong, Juyuan Xu, Dahai Li, Zhiyuan Liu, Maosong Sun
Numerous studies have used deep learning to improve specific phases of the waterfall model, such as design, coding, and testing.
no code implementations • 6 Jul 2023 • Nuo Chen, Tetsuya Sakai
In this study, we investigate the statistical stability of C/W/L/A metrics from the perspective of: (1) the system ranking similarity among aggregations, (2) the system ranking consistency of aggregations and (3) the discriminative power of aggregations.
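System ranking similarity of the kind examined here is commonly quantified with Kendall's tau over the system orderings induced by two metric aggregations. A minimal pair-counting sketch (illustrative, not the paper's exact protocol; scores are hypothetical):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau between the system rankings induced by two
    metric aggregations: (concordant - discordant) / total pairs."""
    n = len(scores_a)
    conc = disc = 0
    for i, j in combinations(range(n), 2):
        s = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if s > 0:
            conc += 1      # both aggregations order the pair the same way
        elif s < 0:
            disc += 1      # the aggregations disagree on this pair
    return (conc - disc) / (n * (n - 1) / 2)

# Two aggregations scoring the same four systems.
print(round(kendall_tau([0.9, 0.7, 0.5, 0.3],
                        [0.8, 0.6, 0.7, 0.2]), 3))  # -> 0.667
```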
1 code implementation • 15 Jun 2023 • Yiming Li, Sihang Li, Xinhao Liu, Moonjun Gong, Kenan Li, Nuo Chen, Zijun Wang, Zhiheng Li, Tao Jiang, Fisher Yu, Yue Wang, Hang Zhao, Zhiding Yu, Chen Feng
Monocular scene understanding is a foundational component of autonomous systems.
3D Semantic Scene Completion
3D Semantic Scene Completion from a single 2D image
1 code implementation • 23 May 2023 • Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao
To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.
no code implementations • 17 May 2023 • Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
In this paper, we introduce gradient descent into black-box tuning scenario through knowledge distillation.
1 code implementation • 14 May 2023 • Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks.
3 code implementations • 11 May 2023 • Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services.
1 code implementation • 9 May 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li
Currently, learning better unsupervised sentence representations is a central pursuit of the natural language processing community.
1 code implementation • CVPR 2023 • Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, Shilin Zhou
Interestingly, during the training phase supervised by point labels, we discover that CNNs first learn to segment a cluster of pixels near the targets, and then gradually converge to predict the ground-truth point labels.
no code implementations • 12 Mar 2023 • Tengtao Song, Nuo Chen, Ji Jiang, Zhihong Zhu, Yuexian Zou
Since incorporating syntactic information like dependency structures into neural models can promote a better understanding of the sentences, such a method has been widely used in NLP tasks.
2 code implementations • 28 Feb 2023 • Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.
1 code implementation • 27 Feb 2023 • Nuo Chen, Hongguang Li, Junqing He, Yinan Bao, Xinshi Lin, Qi Yang, Jianfeng Liu, Ruyi Gan, Jiaxing Zhang, Baoyuan Wang, Jia Li
Thus, models' comprehension abilities in real scenarios are hard to evaluate reasonably.
1 code implementation • 23 Feb 2023 • Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou
Despite the promising result of recent KAQA systems which tend to integrate linguistic knowledge from pre-trained language models (PLM) and factual knowledge from knowledge graphs (KG) to answer complex questions, a bottleneck exists in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulties in joint reasoning over the provided knowledge from both modalities.
1 code implementation • 17 Feb 2023 • Nuo Chen, Hongguang Li, Yinan Bao, Baoyuan Wang, Jia Li
To this end, we construct a new dataset called Penguin to promote the research of MRC, providing a training and test bed for natural response generation to real scenarios.
Chinese Reading Comprehension
Machine Reading Comprehension
no code implementations • 16 Feb 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li
For instance, TPLMs jointly pre-trained on table and text input can be effective for tasks with joint table-text input, such as table question answering, but may fail for tasks with only tables or only text as input, such as table retrieval.
no code implementations • 12 Dec 2022 • Yang Liu, Yu Rong, Zhuoning Guo, Nuo Chen, Tingyang Xu, Fugee Tsung, Jia Li
To address these challenges, we formulate the micro perspective mobility modeling into computing the relevance score between a diffusion and a location, conditional on a geometric graph.
1 code implementation • 13 Nov 2022 • Nuo Chen, Yan Wang, Haiyun Jiang, Deng Cai, Yuhan Li, Ziyang Chen, Longyue Wang, Jia Li
In this paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to advance the study of dialogue agents and character alignment.
no code implementations • 19 Oct 2022 • Tetsuya Sakai, Sijie Tao, Maria Maistro, Zhumin Chu, Yujing Li, Nuo Chen, Nicola Ferro, Junjie Wang, Ian Soboroff, Yiqun Liu
The noise is due to a fatal bug in the backend of our relevance assessment interface.
1 code implementation • 7 Oct 2022 • Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
To interpret these models, some probing methods have been applied.
no code implementations • 18 Aug 2022 • Nuo Chen, Chenyu You
To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which generates the coarse-grained representations of the source sequences, i.e., passage and question.
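The contrast the sentence draws, between reading only the final encoder layer and also exploiting finer-grained lower layers, can be sketched with a weighted sum over all layers' hidden states. All shapes and the random states are hypothetical placeholders for an actual encoder's outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, hidden = 4, 6, 8

# Hypothetical per-layer hidden states from an encoder.
hidden_states = rng.normal(size=(num_layers, seq_len, hidden))

# Common practice: the predictor reads only the final layer.
final_only = hidden_states[-1]

# Alternative: a (here randomly initialized) learned weighted sum
# over all layers, so the predictor also sees lower-layer features.
weights = np.exp(rng.normal(size=num_layers))
weights /= weights.sum()          # softmax-style normalization
fused = np.tensordot(weights, hidden_states, axes=1)

print(final_only.shape, fused.shape)  # both (6, 8)
```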
1 code implementation • 16 Jun 2022 • Ziqian Dai, Jianwei Yu, Yan Wang, Nuo Chen, Yanyao Bian, Guangzhi Li, Deng Cai, Dong Yu
Prosodic boundary plays an important role in text-to-speech synthesis (TTS) in terms of naturalness and readability.
no code implementations • Findings (NAACL) 2022 • Chenyu You, Nuo Chen, Fenglin Liu, Shen Ge, Xian Wu, Yuexian Zou
To evaluate the capacity of SCQA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 40k question-answer pairs from 4k conversations.
Ranked #1 on Spoken Language Understanding on Spoken-SQuAD
no code implementations • 23 Apr 2022 • Yushu Zhang, Nuo Chen, Shuren Qi, Mingfu Xue, Xiaochun Cao
In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep learning-based recoloring.
no code implementations • NAACL 2022 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.
no code implementations • 9 Dec 2021 • Nuo Chen, Linjun Shou, Min Gong, Jian Pei, Daxin Jiang
Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages.
no code implementations • Findings (EMNLP) 2021 • Chenyu You, Nuo Chen, Yuexian Zou
In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage.
no code implementations • 15 Aug 2021 • Shichao Jia, Zeyu Li, Nuo Chen, Jiawan Zhang
This paper proposes a visual explainable active learning approach with its design and implementation called semantic navigator to solve the above problems.
no code implementations • 12 Aug 2021 • Li Wang, Rongzhi Gu, Nuo Chen, Yuexian Zou
Recently proposed metric learning approaches improved the generalizability of models for the KWS task, and 1D-CNN based KWS models have achieved state-of-the-art (SOTA) results in terms of model size.
no code implementations • 4 Jun 2021 • Nuo Chen, Chenyu You, Yuexian Zou
We also utilize the proposed self-supervised learning tasks to capture intra-sentence coherence.
no code implementations • 20 Dec 2020 • Nuo Chen, Fenglin Liu, Chenyu You, Peilin Zhou, Yuexian Zou
To predict the answer, it is common practice to employ a predictor to draw information only from the final encoder layer which generates the coarse-grained representations of the source sequences, i.e., passage and question.
no code implementations • 1 Nov 2020 • Baihua Shi, Nuo Chen, Xicheng Zhu, Yuwen Qian, Yijin Zhang, Feng Shu, Jiangzhou Wang
In this paper, we present a new scenario of direction of arrival (DOA) estimation using massive multiple-input multiple-output (MIMO) receive array with low-resolution analog-to-digital convertors (ADCs), which can strike a good balance between performance and circuit cost.
Information Theory
Signal Processing
no code implementations • 21 Oct 2020 • Chenyu You, Nuo Chen, Yuexian Zou
Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow given the speech utterances and text corpora.
Audio Signal Processing
Conversational Question Answering
no code implementations • 21 Oct 2020 • Chenyu You, Nuo Chen, Yuexian Zou
However, the recent work shows that ASR systems generate highly noisy transcripts, which critically limit the capability of machine comprehension on the SQA task.
Automatic Speech Recognition (ASR)
no code implementations • 18 Oct 2020 • Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, Yuexian Zou
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts.
Automatic Speech Recognition (ASR)