Search Results for author: Jun Huang

Found 34 papers, 17 papers with code

Meta Distant Transfer Learning for Pre-trained Language Models

no code implementations EMNLP 2021 Chengyu Wang, Haojie Pan, Minghui Qiu, Jun Huang, Fei Yang, Yin Zhang

For tasks related to distant domains with different class label sets, PLMs may memorize non-transferable knowledge for the target domain and suffer from negative transfer.

Implicit Relations Meta-Learning +2

P$^2$A: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos

no code implementations 26 Jul 2022 Jiang Bian, Qingzhong Wang, Haoyi Xiong, Jun Huang, Chen Liu, Xuhong LI, Jun Cheng, Jun Zhao, Feixiang Lu, Dejing Dou

While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.

Action Detection Action Localization +1

KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering

1 code implementation 6 May 2022 Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao

Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).

Contrastive Learning Few-Shot Learning +4
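For reference, a minimal PyTorch-style sketch of a span-selection head of the kind the KECP abstract refers to: two linear projections over encoder token representations yield start and end logits for extractive QA (illustrative only; the class name and shapes are assumptions, not the released KECP code).

```python
import torch.nn as nn

class SpanSelectionHead(nn.Module):
    """Predict start/end answer positions from per-token encoder representations."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_dim, 2)   # one logit each for start and end

    def forward(self, token_reprs):                  # (B, T, hidden_dim) from a PLM encoder
        logits = self.qa_outputs(token_reprs)        # (B, T, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```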

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

1 code implementation 1 Apr 2022 Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang

Pre-trained Language Models (PLMs) have achieved remarkable performance for various language understanding tasks in IR systems, which require the fine-tuning process based on labeled training data.

Contrastive Learning

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

2 code implementations 14 Dec 2021 Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang

Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge, and fine-tuned model for task-specific knowledge.

Contrastive Learning Language Modelling +2
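A minimal sketch of the contrastive idea the CAP abstract above describes, assuming an InfoNCE-style loss that pulls each pruned-model representation toward the matching representation from a teacher (pre-trained or fine-tuned); the function names, in-batch negatives, and weighting are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(pruned_repr, teacher_repr, temperature=0.1):
    """InfoNCE-style loss: each pruned representation should be closest to the
    teacher representation of the same example within the batch."""
    pruned = F.normalize(pruned_repr, dim=-1)      # (B, H)
    teacher = F.normalize(teacher_repr, dim=-1)    # (B, H)
    logits = pruned @ teacher.t() / temperature    # (B, B) pairwise similarities
    targets = torch.arange(pruned.size(0), device=pruned.device)
    return F.cross_entropy(logits, targets)

def cap_style_loss(pruned_repr, pretrained_repr, finetuned_repr, alpha=0.5):
    """Combine task-agnostic (pre-trained teacher) and task-specific
    (fine-tuned teacher) signals, as the abstract suggests."""
    return (alpha * contrastive_alignment_loss(pruned_repr, pretrained_repr)
            + (1 - alpha) * contrastive_alignment_loss(pruned_repr, finetuned_repr))
```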

DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding

1 code implementation 2 Dec 2021 Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang

Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.

Knowledge Graphs Language Modelling +1

S-DCCRN: Super Wide Band DCCRN with learnable complex feature for speech enhancement

no code implementations 16 Nov 2021 Shubo Lv, Yihui Fu, Mengtao Xing, Jiayao Sun, Lei Xie, Jun Huang, Yannan Wang, Tao Yu

In speech enhancement, complex neural networks have shown promising performance due to their effectiveness in processing complex-valued spectra.

Denoising Speech Denoising +1

Millimeter-Wave NR-U and WiGig Coexistence: Joint User Grouping, Beam Coordination and Power Control

no code implementations 11 Aug 2021 Xiaoxia Xu, Qimei Chen, Hao Jiang, Jun Huang

Our aim for the proposed coexistence network is to maximize spectral efficiency while ensuring the strict NR-U delay requirement and WiGig transmission performance in real-time environments.

Multi-layered Semantic Representation Network for Multi-label Image Classification

1 code implementation 22 Jun 2021 Xiwen Qu, Hao Che, Jun Huang, Linchuan Xu, Xiao Zheng

To this end, this paper designs a Multi-layered Semantic Representation Network (MSRN) which discovers both local and global semantics of labels through modeling label correlations and utilizes the label semantics to guide the semantic representations learning at multiple layers through an attention mechanism.

Classification Multi-Label Classification +1
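A hedged sketch of the attention mechanism the MSRN abstract describes, assuming label embeddings attend over per-image region features to produce label-guided representations; the shapes, embedding source, and module name are hypothetical, not the released MSRN code.

```python
import torch
import torch.nn as nn

class LabelGuidedAttention(nn.Module):
    """Toy label-semantics-guided attention: each label embedding attends over
    spatial region features to build a label-specific image representation."""
    def __init__(self, num_labels, feat_dim, label_dim=300):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, label_dim)  # e.g. word-vector initialized
        self.query = nn.Linear(label_dim, feat_dim)

    def forward(self, region_feats):                        # (B, R, feat_dim) regions per image
        q = self.query(self.label_emb.weight)               # (L, feat_dim) label queries
        attn = torch.softmax(region_feats @ q.t(), dim=1)   # (B, R, L) attention over regions
        return torch.einsum('brl,brd->bld', attn, region_feats)  # (B, L, feat_dim)
```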

A heuristic resolution of the Abraham-Minkowski controversy

no code implementations 4 Jan 2021 Guoxu Feng, Jun Huang

This paper reviews the history and origin of the Abraham-Minkowski controversy and points out that it is a continuation of the controversy over the speed of light in medium.

Optics

Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

1 code implementation ACL 2021 Haojie Pan, Chengyu Wang, Minghui Qiu, Yichang Zhang, Yaliang Li, Jun Huang

We argue that training a teacher with transferable knowledge digested across domains can achieve better generalization capability to help knowledge distillation.

Knowledge Distillation Language Modelling +3

Learning to Expand: Reinforced Pseudo-relevance Feedback Selection for Information-seeking Conversations

no code implementations 25 Nov 2020 Haojie Pan, Cen Chen, Minghui Qiu, Liu Yang, Feng Ji, Jun Huang, Haiqing Chen

More specifically, we propose a reinforced selector to extract useful PRF terms to enhance response candidates, and a BERT-based response ranker to rank the PRF-enhanced responses.
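An illustrative sketch of the ranking step described above, assuming a set of PRF terms has already been chosen by the reinforced selector and a BERT-based cross-encoder scores each PRF-enhanced (query, response) pair; the model choice and function names are placeholders, not the authors' implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# In practice the ranker would be fine-tuned on response-selection data.
ranker = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

def rank_responses(query, responses, selected_prf_terms):
    """Score each candidate response against the PRF-expanded query and sort."""
    expanded_query = query + " " + " ".join(selected_prf_terms)   # enhance with selected terms
    enc = tok([expanded_query] * len(responses), responses,
              padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = ranker(**enc).logits.squeeze(-1)                 # (num_responses,)
    return sorted(zip(responses, scores.tolist()), key=lambda x: -x[1])
```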

EasyTransfer -- A Simple and Scalable Deep Transfer Learning Platform for NLP Applications

2 code implementations 18 Nov 2020 Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin

The literature has witnessed the success of leveraging Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet it is not easy to build an easy-to-use and scalable TL toolkit for this purpose.

Conversational Question Answering Natural Language Processing +1

EasyASR: A Distributed Machine Learning Platform for End-to-end Automatic Speech Recognition

no code implementations 14 Sep 2020 Chengyu Wang, Mengli Cheng, Xu Hu, Jun Huang

We present EasyASR, a distributed machine learning platform for training and serving large-scale Automatic Speech Recognition (ASR) models, as well as collecting and processing audio data at scale.

Automatic Speech Recognition Machine Learning +1

One-shot Text Field Labeling using Attention and Belief Propagation for Structure Information Extraction

1 code implementation 9 Sep 2020 Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, Wei Lin

Existing learning-based methods for the text labeling task usually require a large number of labeled examples to train a specific model for each type of document.

One-Shot Learning

Knowledge-Empowered Representation Learning for Chinese Medical Reading Comprehension: Task, Model and Resources

1 code implementation Findings (ACL) 2021 Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang

In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences from medical information sources simultaneously, in order to ensure the high reliability of medical knowledge serving.

Machine Reading Comprehension Multi-Task Learning +1

Weakly Supervised Construction of ASR Systems with Massive Video Data

no code implementations 4 Aug 2020 Mengli Cheng, Chengyu Wang, Xu Hu, Jun Huang, Xiaobo Wang

Building Automatic Speech Recognition (ASR) systems from scratch is significantly challenging, mostly due to the time-consuming and financially-expensive process of annotating a large amount of audio data with transcripts.

Automatic Speech Recognition Optical Character Recognition +2

Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining

2 code implementations EMNLP 2020 Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.

Few-Shot Learning Language Modelling

SwapText: Image Based Texts Transfer in Scenes

no code implementations CVPR 2020 Qiangpeng Yang, Hongsheng Jin, Jun Huang, Wei Lin

First, a novel text swapping network is proposed to replace text labels only in the foreground image.

Image Generation Translation

KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation Classification

no code implementations 25 Feb 2020 Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He

We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.

General Classification Meta-Learning +1

AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search

1 code implementation 13 Jan 2020 Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, Jingren Zhou

Motivated by the necessity and benefits of task-oriented BERT compression, we propose a novel compression method, AdaBERT, that leverages differentiable Neural Architecture Search to automatically compress BERT into task-adaptive small models for specific tasks.

Knowledge Distillation Natural Language Processing +1
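A toy sketch of the differentiable architecture-search relaxation that AdaBERT-style compression builds on: a softmax over learnable architecture parameters mixes candidate operations, so operation choice can be optimized by gradient descent (the candidate ops and shapes are illustrative, not the AdaBERT search space).

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style relaxation: a softmax-weighted mixture of candidate operations,
    where the mixture weights are the learnable architecture parameters."""
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                   # skip connection
            nn.Linear(dim, dim),                             # linear transform
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),   # non-linear transform
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```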

Learning to Selectively Transfer: Reinforced Transfer Learning for Deep Text Matching

no code implementations 30 Dec 2018 Chen Qu, Feng Ji, Minghui Qiu, Liu Yang, Zhiyu Min, Haiqing Chen, Jun Huang, W. Bruce Croft

Specifically, the data selector "acts" on the source domain data to find a subset for optimization of the TL model, and the performance of the TL model can provide "rewards" in turn to update the selector.

Information Retrieval Natural Language Inference +4
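A minimal REINFORCE-style sketch of the selector/reward loop the abstract above describes: a policy network keeps or drops each source-domain example, and the performance of the transfer-learning model trained on the kept subset is fed back as a reward (the selector architecture, features, and reward are placeholder assumptions, not the paper's implementation).

```python
import torch
import torch.nn as nn

class DataSelector(nn.Module):
    """Policy network deciding, per source-domain example, whether to keep it."""
    def __init__(self, feat_dim):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):                                # (N, feat_dim) example features
        keep_prob = torch.sigmoid(self.policy(feats)).squeeze(-1)
        dist = torch.distributions.Bernoulli(keep_prob)
        actions = dist.sample()                              # 1 = keep the example, 0 = drop it
        return actions, dist.log_prob(actions)

def reinforce_step(selector, optimizer, feats, reward):
    """reward: e.g. target-domain dev metric of the TL model trained on the kept subset."""
    actions, log_prob = selector(feats)
    loss = -reward * log_prob.sum()                          # REINFORCE: raise prob of good subsets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return actions
```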

Review Helpfulness Prediction with Embedding-Gated CNN

no code implementations 29 Aug 2018 Cen Chen, Minghui Qiu, Yinfei Yang, Jun Zhou, Jun Huang, Xiaolong Li, Forrest Bao

Product reviews, predominantly in the form of text, significantly help consumers finalize their purchasing decisions.

Response Ranking with Deep Matching Networks and External Knowledge in Information-seeking Conversation Systems

1 code implementation 1 May 2018 Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Jun Huang, Haiqing Chen

Our models and research findings provide new insights on how to utilize external knowledge with deep neural models for response selection and have implications for the design of the next generation of information-seeking conversation systems.

Knowledge Distillation Text Matching

Modelling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce

1 code implementation 23 Nov 2017 Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, Haiqing Chen

In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework, which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain.

Chatbot Natural Language Inference +3

AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine

no code implementations ACL 2017 Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, Wei Chu

We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models.

Chatbot Information Retrieval
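A schematic sketch of the hybrid flow described in the AliMe Chat abstract above: retrieve candidate answers, generate one with a Seq2Seq model, rerank all of them with a scoring model, and fall back to the generated reply when nothing clears a confidence threshold (all components and the threshold are placeholders, not the deployed system).

```python
def alime_style_reply(query, retriever, generator, scorer, threshold=0.5):
    """Hybrid IR + Seq2Seq pipeline with rerank, following the abstract's description."""
    candidates = retriever(query)                  # retrieved candidate answers (list of str)
    generated = generator(query)                   # Seq2Seq-generated answer (str)
    scored = [(ans, scorer(query, ans)) for ans in candidates + [generated]]
    best_answer, best_score = max(scored, key=lambda pair: pair[1])
    # Fall back to the generated reply if no candidate clears the confidence threshold.
    return best_answer if best_score >= threshold else generated
```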
