no code implementations • ACL 2017 • Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, Wei Chu
We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models.
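The retrieve-then-rerank-with-fallback pattern described above can be sketched as follows. This is an illustrative toy, not AliMe Chat's implementation; all names (`retrieve`, `hybrid_reply`, and the `score`/`generate` callables) are hypothetical stand-ins.

```python
# Hypothetical sketch of a hybrid IR + generation chat pipeline.
# Illustrative only; not the paper's actual system or API.

def retrieve(query, kb):
    """Return knowledge-base answers ranked by word overlap with the query."""
    q_words = set(query.lower().split())
    hits = []
    for question, answer in kb:
        overlap = len(q_words & set(question.lower().split()))
        if overlap:
            hits.append((overlap, answer))
    return [a for _, a in sorted(hits, reverse=True)]

def hybrid_reply(query, kb, score, generate, threshold=0.5):
    """Rerank IR candidates with a scoring model; if no candidate is
    confident enough, fall back to the generation model."""
    candidates = retrieve(query, kb)
    if candidates:
        best = max(candidates, key=lambda c: score(query, c))
        if score(query, best) >= threshold:
            return best
    return generate(query)
```

The key design choice is the confidence threshold: retrieval wins when a scored candidate clears it, and generation covers the long tail of queries with no good match.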
1 code implementation • 23 Nov 2017 • Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, Haiqing Chen
In this paper, we study transfer learning for the PI and NLI problems, aiming to propose a general framework which can effectively and efficiently adapt the shared knowledge learned from a resource-rich source domain to a resource-poor target domain.
no code implementations • 12 Jan 2018 • Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, Guwei Jin, Wei Chu
We present AliMe Assist, an intelligent assistant designed for creating an innovative online shopping experience in E-commerce.
1 code implementation • 1 May 2018 • Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W. Bruce Croft, Jun Huang, Haiqing Chen
Our models and research findings provide new insights on how to utilize external knowledge with deep neural models for response selection and have implications for the design of the next generation of information-seeking conversation systems.
no code implementations • ACL 2018 • Minghui Qiu, Liu Yang, Feng Ji, Weipeng Zhao, Wei Zhou, Jun Huang, Haiqing Chen, W. Bruce Croft, Wei Lin
Building multi-turn information-seeking conversation systems is an important and challenging research topic.
no code implementations • 29 Aug 2018 • Cen Chen, Minghui Qiu, Yinfei Yang, Jun Zhou, Jun Huang, Xiaolong Li, Forrest Bao
Product reviews, predominantly in textual form, significantly help consumers finalize their purchasing decisions.
no code implementations • 30 Dec 2018 • Chen Qu, Feng Ji, Minghui Qiu, Liu Yang, Zhiyu Min, Haiqing Chen, Jun Huang, W. Bruce Croft
Specifically, the data selector "acts" on the source domain data to find a subset for optimization of the TL model, and the performance of the TL model can provide "rewards" in turn to update the selector.
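The act/reward loop described above resembles a REINFORCE-style training scheme. The following is a minimal sketch under that assumption, not the paper's actual algorithm; `reward_fn` is a hypothetical stand-in for evaluating the TL model on the selected subset.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reinforce_select(source_data, reward_fn, epochs=50, lr=0.5, seed=0):
    """REINFORCE-style data selection sketch (illustrative only):
    sample a subset, observe the downstream model's reward, and push
    per-example selection probabilities toward choices that earned
    above-baseline rewards."""
    rng = random.Random(seed)
    logits = [0.0] * len(source_data)
    baseline = 0.0
    for _ in range(epochs):
        probs = [sigmoid(l) for l in logits]
        mask = [1 if rng.random() < p else 0 for p in probs]
        subset = [x for x, m in zip(source_data, mask) if m]
        reward = reward_fn(subset)        # e.g., the TL model's dev score
        advantage = reward - baseline
        baseline = 0.9 * baseline + 0.1 * reward
        for i, (p, m) in enumerate(zip(probs, mask)):
            # gradient of log P(mask) w.r.t. each logit is (m - p)
            logits[i] += lr * advantage * (m - p)
    return [sigmoid(l) for l in logits]
```

The moving-average baseline reduces the variance of the policy gradient, which is the standard trick that makes such selector/reward loops trainable in practice.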
1 code implementation • 7 Feb 2019 • Łukasz Kidziński, Carmichael Ong, Sharada Prasanna Mohanty, Jennifer Hicks, Sean F. Carroll, Bo Zhou, Hongsheng Zeng, Fan Wang, Rongzhong Lian, Hao Tian, Wojciech Jaśkowski, Garrett Andersen, Odd Rune Lykkebø, Nihat Engin Toklu, Pranav Shyam, Rupesh Kumar Srivastava, Sergey Kolesnikov, Oleksii Hrinchuk, Anton Pechenko, Mattias Ljungström, Zhen Wang, Xu Hu, Zehong Hu, Minghui Qiu, Jun Huang, Aleksei Shpilman, Ivan Sosin, Oleg Svidchenko, Aleksandra Malysheva, Daniel Kudenko, Lance Rane, Aditya Bhatt, Zhengfei Wang, Penghui Qi, Zeyang Yu, Peng Peng, Quan Yuan, Wenxin Li, Yunsheng Tian, Ruihan Yang, Pingchuan Ma, Shauharda Khadka, Somdeb Majumdar, Zach Dwiel, Yinyin Liu, Evren Tumer, Jeremy Watson, Marcel Salathé, Sergey Levine, Scott Delp
In the NeurIPS 2018 Artificial Intelligence for Prosthetics challenge, participants were tasked with building a controller for a musculoskeletal model with a goal of matching a given time-varying velocity vector.
1 code implementation • 13 Jan 2020 • Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, Jingren Zhou
Motivated by the necessity and benefits of task-oriented BERT compression, we propose a novel compression method, AdaBERT, that leverages differentiable Neural Architecture Search to automatically compress BERT into task-adaptive small models for specific tasks.
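Differentiable NAS of the kind AdaBERT builds on typically relaxes the discrete choice among candidate operations into a softmax-weighted mixture, so the architecture weights can be learned by gradient descent. A generic sketch under that assumption (not AdaBERT's code; `ops` and `alphas` are hypothetical):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_op(x, ops, alphas):
    """Differentiable NAS relaxation: instead of picking one candidate
    operation, output the softmax-weighted sum of all of them."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))
```

After training, the relaxation is discretized by keeping only the operation with the largest architecture weight, yielding the small task-adaptive model.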
no code implementations • 25 Feb 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
We further combine a meta-learning process over the auxiliary task distribution and supervised learning to train the neural lexical relation classifier.
no code implementations • CVPR 2020 • Qiangpeng Yang, Hongsheng Jin, Jun Huang, Wei Lin
First, a novel text swapping network is proposed to replace text labels only in the foreground image.
2 code implementations • EMNLP 2020 • Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
In this paper, we propose an effective learning procedure named Meta Fine-Tuning (MFT), which serves as a meta-learner to solve a group of similar NLP tasks for neural language models.
no code implementations • 31 Jul 2020 • Linchuan Xu, Jun Huang, Atsushi Nitanda, Ryo Asaoka, Kenji Yamanishi
In this paper, we thus propose a novel global spatial attention mechanism in CNNs mainly for medical image classification.
no code implementations • 4 Aug 2020 • Mengli Cheng, Chengyu Wang, Xu Hu, Jun Huang, Xiaobo Wang
Building Automatic Speech Recognition (ASR) systems from scratch is significantly challenging, mostly due to the time-consuming and financially expensive process of annotating large amounts of audio data with transcripts.
Automatic Speech Recognition (ASR) +4
1 code implementation • Findings (ACL) 2021 • Taolin Zhang, Chengyu Wang, Minghui Qiu, Bite Yang, Xiaofeng He, Jun Huang
In this paper, we introduce a multi-target MRC task for the medical domain, whose goal is to predict answers to medical questions and the corresponding support sentences from medical information sources simultaneously, in order to ensure the high reliability of medical knowledge serving.
1 code implementation • 9 Sep 2020 • Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, Wei Lin
Existing learning-based methods for text labeling tasks usually require a large number of labeled examples to train a specific model for each type of document.
no code implementations • 14 Sep 2020 • Chengyu Wang, Mengli Cheng, Xu Hu, Jun Huang
We present EasyASR, a distributed machine learning platform for training and serving large-scale Automatic Speech Recognition (ASR) models, as well as collecting and processing audio data at scale.
Automatic Speech Recognition (ASR) +2
no code implementations • 28 Oct 2020 • Yiwu Yao, Yuchao Li, Chengyu Wang, Tianhang Yu, Houjiang Chen, Xiaotang Jiang, Jun Yang, Jun Huang, Wei Lin, Hui Shu, Chengfei Lv
The intensive computation of Automatic Speech Recognition (ASR) models obstructs them from being deployed on mobile devices.
Automatic Speech Recognition (ASR) +2
2 code implementations • 18 Nov 2020 • Minghui Qiu, Peng Li, Chengyu Wang, Hanjie Pan, Ang Wang, Cen Chen, Xianyan Jia, Yaliang Li, Jun Huang, Deng Cai, Wei Lin
The literature has witnessed the success of applying Pre-trained Language Models (PLMs) and Transfer Learning (TL) algorithms to a wide range of Natural Language Processing (NLP) applications, yet building an easy-to-use and scalable TL toolkit for this purpose remains challenging.
no code implementations • 25 Nov 2020 • Haojie Pan, Cen Chen, Chengyu Wang, Minghui Qiu, Liu Yang, Feng Ji, Jun Huang
More specifically, we propose a reinforced selector to extract useful PRF terms to enhance response candidates and a BERT-based response ranker to rank the PRF-enhanced responses.
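As background, classic pseudo-relevance feedback (PRF) expands a query with salient terms from the top-ranked documents. The fixed frequency rule below is a simplified stand-in for the learned, reinforced selector described above; it is illustrative only.

```python
from collections import Counter

def prf_expand(query, top_docs, k=3):
    """Classic pseudo-relevance feedback sketch: take the k most frequent
    terms from the top retrieved documents (excluding terms already in
    the query) and append them to the query."""
    q_terms = set(query.lower().split())
    counts = Counter(
        t for doc in top_docs for t in doc.lower().split() if t not in q_terms
    )
    expansion = [t for t, _ in counts.most_common(k)]
    return query + " " + " ".join(expansion)
```

The paper's contribution is precisely to replace this fixed frequency heuristic with a selector trained by reinforcement, so that only genuinely useful PRF terms survive.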
1 code implementation • ACL 2021 • Haojie Pan, Chengyu Wang, Minghui Qiu, Yichang Zhang, Yaliang Li, Jun Huang
However, the large model sizes, together with the long inference time, limit the deployment of such models in real-time applications.
no code implementations • 4 Jan 2021 • Guoxu Feng, Jun Huang
This paper reviews the history and origin of the Abraham-Minkowski controversy and points out that it is a continuation of the controversy over the speed of light in medium.
Optics
2 code implementations • 12 May 2021 • Abhinav Jangda, Jun Huang, Guodong Liu, Amir Hossein Nodehi Sabet, Saeed Maleki, Youshan Miao, Madanlal Musuvathi, Todd Mytkowicz, Olli Saarikivi
Therefore, we present CoCoNeT, with a DSL to express a program with both computation and communication.
1 code implementation • 22 Jun 2021 • Xiwen Qu, Hao Che, Jun Huang, Linchuan Xu, Xiao Zheng
To this end, this paper designs a Multi-layered Semantic Representation Network (MSRN) which discovers both local and global semantics of labels through modeling label correlations and utilizes the label semantics to guide the semantic representations learning at multiple layers through an attention mechanism.
Ranked #6 on Multi-Label Classification on PASCAL VOC 2007
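Label-semantics-guided attention of the general kind described above can be illustrated as follows. This toy uses a single label embedding as the attention query over a set of feature vectors; it is a sketch, not the MSRN implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def label_guided_attention(features, label_embedding):
    """Pool feature vectors into one label-specific representation:
    the label embedding scores each feature (dot product), and the
    softmax weights form the pooled vector for that label."""
    scores = softmax([dot(f, label_embedding) for f in features])
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(scores, features)) for d in range(dim)]
```

Repeating this pooling once per label yields label-specific representations, which is the basic mechanism by which label semantics can guide representation learning at each layer.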
no code implementations • 11 Aug 2021 • Xiaoxia Xu, Qimei Chen, Hao Jiang, Jun Huang
Our aim for the proposed coexistence network is to maximize the spectral efficiency while ensuring the strict NR-U delay requirement and the WiGig transmission performance in real-time environments.
no code implementations • 16 Nov 2021 • Shubo Lv, Yihui Fu, Mengtao Xing, Jiayao Sun, Lei Xie, Jun Huang, Yannan Wang, Tao Yu
In speech enhancement, complex-valued neural networks have shown promising performance due to their effectiveness in processing complex-valued spectra.
1 code implementation • 2 Dec 2021 • Taolin Zhang, Chengyu Wang, Nan Hu, Minghui Qiu, Chengguang Tang, Xiaofeng He, Jun Huang
Knowledge-Enhanced Pre-trained Language Models (KEPLMs) are pre-trained models with relation triples injected from knowledge graphs to improve language understanding abilities.
2 code implementations • 14 Dec 2021 • Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
Unified in contrastive learning, CAP enables the pruned model to learn task-agnostic knowledge from the pre-trained model and task-specific knowledge from the fine-tuned model.
1 code implementation • 1 Apr 2022 • Ziyun Xu, Chengyu Wang, Minghui Qiu, Fuli Luo, Runxin Xu, Songfang Huang, Jun Huang
Pre-trained Language Models (PLMs) have achieved remarkable performance for various language understanding tasks in IR systems, which require the fine-tuning process based on labeled training data.
1 code implementation • 30 Apr 2022 • Chengyu Wang, Minghui Qiu, Chen Shi, Taolin Zhang, Tingting Liu, Lei Li, Jianing Wang, Ming Wang, Jun Huang, Wei Lin
The success of Pre-Trained Models (PTMs) has reshaped the development of Natural Language Processing (NLP).
1 code implementation • 6 May 2022 • Jianing Wang, Chengyu Wang, Minghui Qiu, Qiuhui Shi, Hongbin Wang, Jun Huang, Ming Gao
Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), which can be solved by fine-tuning the span selecting heads of Pre-trained Language Models (PLMs).
no code implementations • 26 Jul 2022 • Jiang Bian, Xuhong Li, Tao Wang, Qingzhong Wang, Jun Huang, Chen Liu, Jun Zhao, Feixiang Lu, Dejing Dou, Haoyi Xiong
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging.
2 code implementations • 27 Aug 2022 • Ziheng Wu, Xinyi Zou, Wenmeng Zhou, Jun Huang
We develop an all-in-one computer vision toolbox named EasyCV to facilitate the use of various SOTA computer vision methods.
1 code implementation • 11 Oct 2022 • Taolin Zhang, Junwei Dong, Jianing Wang, Chengyu Wang, Ang Wang, Yinghui Liu, Jun Huang, Yong Li, Xiaofeng He
Recently, knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations via learning from structured relations in knowledge graphs, and/or linguistic knowledge from syntactic or dependency analysis.
no code implementations • 17 Oct 2022 • Jianing Wang, Chengcheng Han, Chengyu Wang, Chuanqi Tan, Minghui Qiu, Songfang Huang, Jun Huang, Ming Gao
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data.
no code implementations • 5 Dec 2022 • Mingyuan Fan, Cen Chen, Chengyu Wang, Xiaodan Li, Wenmeng Zhou, Jun Huang
Recent works have brought attention to the vulnerability of Federated Learning (FL) systems to gradient leakage attacks.
no code implementations • ICCV 2023 • Yuhui Quan, Huan Teng, Ruotao Xu, Jun Huang, Hui Ji
This paper proposes a fingerprinting framework for DNN models of image restoration.
no code implementations • 19 Jan 2023 • Shuzhen Rao, Jun Huang, Zengming Tang
Motivated by the observation that the domain shift between training tasks and target tasks usually can reflect in their style variation, we propose Task Augmented Meta-Learning (TAML) to conduct style transfer-based task augmentation to improve the domain generalization ability.
no code implementations • 17 Feb 2023 • Jianing Wang, Chengyu Wang, Jun Huang, Ming Gao, Aoying Zhou
Neural sequence labeling (NSL) aims at assigning labels to input language tokens, covering a broad range of applications such as named entity recognition (NER) and slot filling.
1 code implementation • ICCV 2023 • Shilong Liu, Tianhe Ren, Jiayu Chen, Zhaoyang Zeng, Hao Zhang, Feng Li, Hongyang Li, Jun Huang, Hang Su, Jun Zhu, Lei Zhang
We point out that the unstable matching in DETR is caused by a multi-optimization path problem, which is highlighted by the one-to-one matching design in DETR.
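For context, DETR's one-to-one matching assigns each prediction to exactly one target by minimizing total matching cost. The brute-force sketch below illustrates that assignment problem (real implementations use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`); it is illustrative only.

```python
from itertools import permutations

def one_to_one_match(cost):
    """Find the prediction-to-target assignment with minimum total cost.
    Brute force over permutations; fine for tiny square cost matrices,
    purely for illustration."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = list(perm), total
    return best_perm, best_cost
```

Because the optimum is a discrete permutation, a small change in a single cost entry can flip the entire assignment, which is one way to see the matching instability discussed above.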
1 code implementation • 24 May 2023 • Zhongjie Duan, Chengyu Wang, Cen Chen, Jun Huang, Weining Qian
In this paper, we first provide a detailed theoretical and empirical analysis of the generation process of the diffusion models based on schedulers.
no code implementations • 24 May 2023 • Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang
In addition, using the gate as a probe, we validate the efficiency and effectiveness of the variable prefix.
no code implementations • 28 May 2023 • Jiapeng Wang, Chengyu Wang, Xiaodan Wang, Jun Huang, Lianwen Jin
Large-scale pre-trained text-image models with dual-encoder architectures (such as CLIP) are typically adopted for various vision-language applications, including text-image retrieval.
1 code implementation • The International Conference on Machine Learning (ICML) 2023 • Hang Xu, Wenxuan Zhang, Jiawei Fei, Yuzhe Wu, Tingwen Xie, Jun Huang, Yuchen Xie, Mohamed Elhoseiny, Panos Kalnis
Distributed training of large deep neural networks requires frequent exchange of massive data between machines, thus communication efficiency is a major concern.
no code implementations • 16 Jul 2023 • Mingyuan Fan, Cen Chen, Chengyu Wang, Wenmeng Zhou, Jun Huang
Split learning enables collaborative deep learning model training while preserving data privacy and model security by avoiding direct sharing of raw data and model details (i.e., server and clients only hold partial sub-networks and exchange intermediate computations).
1 code implementation • ICCV 2023 • Weifeng Lin, Ziheng Wu, Jiayu Chen, Jun Huang, Lianwen Jin
Specifically, SMT with 11.5M / 2.4 GFLOPs and 32M / 7.7 GFLOPs can achieve 82.2% and 84.3% top-1 accuracy on ImageNet-1K, respectively.
no code implementations • 31 Jul 2023 • Mingyuan Fan, Chengyu Wang, Cen Chen, Yang Liu, Jun Huang
Diffusion models and large language models have emerged as leading-edge generative models, revolutionizing various aspects of human life.
no code implementations • 7 Aug 2023 • Zhongjie Duan, Lizhou You, Chengyu Wang, Cen Chen, Ziheng Wu, Weining Qian, Jun Huang
In recent years, diffusion models have emerged as the most powerful approach in image synthesis.
no code implementations • 29 Aug 2023 • Jianing Wang, Chengyu Wang, Cen Chen, Ming Gao, Jun Huang, Aoying Zhou
We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.
no code implementations • 11 Sep 2023 • Chengyu Wang, Zhongjie Duan, Bingyan Liu, Xinyi Zou, Cen Chen, Kui Jia, Jun Huang
Text-to-image synthesis for the Chinese language poses unique challenges due to its large vocabulary size and intricate character relationships.
no code implementations • 20 Sep 2023 • Yukang Xie, Chengyu Wang, Junbing Yan, Jiyong Zhou, Feiqi Deng, Jun Huang
Recently, Large Language Models (LLMs) have achieved amazing zero-shot learning performance over a variety of Natural Language Processing (NLP) tasks, especially for text generation tasks.
no code implementations • 21 Sep 2023 • Zhenzhen Chu, Jiayu Chen, Cen Chen, Chengyu Wang, Ziheng Wu, Jun Huang, Weining Qian
Position-aware global tokens also contain the position information of the image, which makes our model better for vision tasks.
no code implementations • 26 Sep 2023 • Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao
In-Context Learning (ICL) over Large Language Models (LLMs) aims at solving previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates and achieving competitive performance.
2 code implementations • 7 Oct 2023 • Ziheng Wu, Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Xing Shi, Jun Huang
By training a digital doppelganger of a specific user ID with 5 to 20 relevant images, the fine-tuned model (based on the trained LoRA model) allows for the generation of AI photos using arbitrary templates.
1 code implementation • 7 Oct 2023 • Jun Huang, Yang Yang, Hang Yu, Jianguo Li, Xiao Zheng
The MST graph provides a virtual representation of the status and scheduling relationships among service instances of a real-world microservice system.
no code implementations • 9 Oct 2023 • Weifeng Lin, Ziheng Wu, Jiayu Chen, Wentao Yang, Mingxin Huang, Jun Huang, Lianwen Jin
Fine-tuning pre-trained Vision Transformers (ViT) has consistently demonstrated promising performance in the realm of visual recognition.
1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios.
no code implementations • 12 Nov 2023 • Tingfeng Cao, Chengyu Wang, Bingyan Liu, Ziheng Wu, Jinhui Zhu, Jun Huang
Then, to ensure that our generated prompts lead to more aesthetically pleasing images, we further propose a Reinforcement Learning with Visual AI Feedback technique that fine-tunes our model to maximize the reward values of the generated prompts, where the rewards are calculated from PickScore and Aesthetic Scores.
no code implementations • 12 Nov 2023 • Tingfeng Cao, Chengyu Wang, Chuanqi Tan, Jun Huang, Jinhui Zhu
In cross-lingual language understanding, machine translation is often utilized to enhance the transferability of models across languages, either by translating the training data from the source language to the target, or from the target to the source to aid inference.
no code implementations • 12 Nov 2023 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Wei Zhang
Reasoning is a distinctive human capacity, enabling us to address complex problems by breaking them down into a series of manageable cognitive steps.
1 code implementation • 15 Nov 2023 • Zhongjie Duan, Chengyu Wang, Cen Chen, Weining Qian, Jun Huang, Mingyi Jin
In the interpolation mode, FastBlend surpasses video interpolation and model-based video processing approaches.
no code implementations • 22 Nov 2023 • Chengyu Wang, Junbing Yan, Wei Zhang, Jun Huang
This paper delves into the pressing need in Parameter-Efficient Fine-Tuning (PEFT) for Large Language Models (LLMs).
1 code implementation • 4 Dec 2023 • Xiangru Zhu, Penglei Sun, Chengyu Wang, Jingping Liu, Zhixu Li, Yanghua Xiao, Jun Huang
We use Winoground-T2I with a dual objective: to evaluate the performance of T2I models and the metrics used for their evaluation.
no code implementations • 29 Jan 2024 • Zhongjie Duan, Chengyu Wang, Cen Chen, Weining Qian, Jun Huang
Toon shading is a type of non-photorealistic rendering task for animation.
no code implementations • 19 Feb 2024 • Junbing Yan, Chengyu Wang, Jun Huang, Wei Zhang
Over the past few years, the abilities of large language models (LLMs) have received extensive attention; these models perform exceptionally well in complicated scenarios such as logical reasoning and symbolic inference.
no code implementations • 6 Mar 2024 • Bingyan Liu, Chengyu Wang, Tingfeng Cao, Kui Jia, Jun Huang
Deep Text-to-Image Synthesis (TIS) models such as Stable Diffusion have recently gained significant popularity for creative Text-to-image generation.
no code implementations • 8 Mar 2024 • Jiapeng Wang, Chengyu Wang, Tingfeng Cao, Jun Huang, Lianwen Jin
We present DiffChat, a novel method to align Large Language Models (LLMs) to "chat" with prompt-as-input Text-to-Image Synthesis (TIS) models (e.g., Stable Diffusion) for interactive image creation.
no code implementations • 17 Mar 2024 • Junbing Yan, Chengyu Wang, Taolin Zhang, Xiaofeng He, Jun Huang, Longtao Huang, Hui Xue, Wei Zhang
KEPLMs are pre-trained models that utilize external knowledge to enhance language understanding.
1 code implementation • 19 Mar 2024 • Xiang Li, Zhenyu Li, Chen Shi, Yong Xu, Qing Du, Mingkui Tan, Jun Huang, Wei Lin
The task of financial analysis primarily encompasses two key areas: stock trend prediction and the corresponding financial question answering.
no code implementations • EMNLP 2021 • Chengyu Wang, Haojie Pan, Minghui Qiu, Jun Huang, Fei Yang, Yin Zhang
For tasks related to distant domains with different class label sets, PLMs may memorize non-transferable knowledge for the target domain and suffer from negative transfer.
2 code implementations • EMNLP 2021 • Chengyu Wang, Jianing Wang, Minghui Qiu, Jun Huang, Ming Gao
Based on continuous prompt embeddings, we propose TransPrompt, a transferable prompting framework for few-shot learning across similar tasks.