no code implementations • 4 Sep 2023 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv
However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.
no code implementations • 7 Jun 2023 • Hongru Liang, Jia Liu, Weihong Du, dingnan jin, Wenqiang Lei, Zujie Wen, Jiancheng Lv
The machine reading comprehension (MRC) of user manuals has huge potential in customer service.
no code implementations • 9 May 2023 • Caiyang Yu, Xianggen Liu, Wentao Feng, Chenwei Tang, Jiancheng Lv
Neural Architecture Search (NAS) has emerged as an effective method to automatically design optimal neural network architectures.
no code implementations • 26 Apr 2023 • Liming Xu, Hanqi Li, Bochuan Zheng, Weisheng Li, Jiancheng Lv
To this end, in this paper we propose a novel deep lifelong cross-modal hashing method to achieve lifelong hashing retrieval instead of repeatedly re-training the hash function when new data arrive.
no code implementations • 18 Apr 2023 • Peng Zeng, Xiaotian Song, Andrew Lensen, Yuwei Ou, Yanan sun, Mengjie Zhang, Jiancheng Lv
With these designs, the proposed DGP method can efficiently search for GP trees with higher performance, and is thus capable of dealing with high-dimensional SR. To demonstrate the effectiveness of DGP, we conducted various experiments against state-of-the-art methods based on both GP and deep neural networks.
1 code implementation • IEEE Transactions on Pattern Analysis and Machine Intelligence 2023 • Yijie Lin, Yuanbiao Gou, Xiaotian Liu, Jinfeng Bai, Jiancheng Lv, Xi Peng
In this article, we propose a unified framework to solve the following two challenging problems in incomplete multi-view representation learning: i) how to learn a consistent representation unifying different views, and ii) how to recover the missing views.
no code implementations • 27 Feb 2023 • Yuhao Zhou, Mingjia Shi, Yuanxi Li, Qing Ye, Yanan sun, Jiancheng Lv
Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning.
1 code implementation • CVPR 2023 • Haiyu Zhao, Yuanbiao Gou, Boyun Li, Dezhong Peng, Jiancheng Lv, Xi Peng
Vision Transformers have shown promising performance in image restoration, usually conducting window- or channel-based attention to avoid intensive computation.
no code implementations • CVPR 2023 • Yuanbiao Gou, Peng Hu, Jiancheng Lv, Hongyuan Zhu, Xi Peng
Existing studies have empirically observed that the resolution of the low-frequency region is easier to enhance than that of the high-frequency one.
no code implementations • CVPR 2023 • Yuze Tan, Yixi Liu, Shudong Huang, Wentao Feng, Jiancheng Lv
Multi-view clustering has hitherto been studied due to its effectiveness in dealing with heterogeneous data.
no code implementations • 28 Dec 2022 • Yuwei Ou, Xiangning Xie, Shangce Gao, Yanan sun, Kay Chen Tan, Jiancheng Lv
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense.
no code implementations • 19 Nov 2022 • Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv
Federated learning (FL) is a distributed machine learning technique that utilizes global servers and collaborative clients to achieve privacy-preserving global model training without direct data sharing.
Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)
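The global aggregation step that such FL work builds on can be illustrated with a minimal FedAvg-style sketch. This is a generic illustration, not the paper's method; the function name and the flat parameter-vector representation are assumptions for the toy example.

```python
# Minimal FedAvg aggregation: the server averages client model
# parameters weighted by each client's local sample count, so no
# raw data ever leaves a client.
def fedavg(client_weights, client_sizes):
    """client_weights: list of parameter vectors (lists of floats);
    client_sizes: number of local samples per client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w
```

Clients with more data pull the global model harder, which is the standard weighting choice in federated averaging.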
1 code implementation • 7 Nov 2022 • Youcheng Huang, Wenqiang Lei, Jie Fu, Jiancheng Lv
Incorporating large-scale pre-trained models with the prototypical neural networks is a de-facto paradigm in few-shot named entity recognition.
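The prototypical-network side of that paradigm can be sketched in a few lines: each class is summarized by the mean of its support embeddings, and a query token is labeled by its nearest prototype. A toy stdlib sketch with hypothetical entity labels; the real systems operate on contextual embeddings from a pre-trained model.

```python
import math

# Prototypical classification: each class prototype is the mean of
# its support-set embeddings; a query is assigned to the class whose
# prototype is nearest in Euclidean distance.
def prototypes(support):
    """support: dict mapping label -> list of embedding vectors."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return protos

def classify(query, protos):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda label: dist(query, protos[label]))
```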
no code implementations • 27 Oct 2022 • Shudong Huang, Wentao Feng, Chenwei Tang, Jiancheng Lv
Many problems in science and engineering can be represented by a set of partial differential equations (PDEs) through mathematical modeling.
no code implementations • CVPR 2023 • Pengxin Zeng, Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Xi Peng
Fair clustering aims to divide data into distinct clusters while preventing sensitive attributes (e.g., gender, race, RNA sequencing technique) from dominating the clustering.
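One common way to quantify whether a sensitive attribute dominates a clustering is a balance score. The sketch below is a generic illustration of one such variant (minimum per-cluster group proportion), not necessarily the metric used in this paper.

```python
from collections import Counter

# Balance of a clustering w.r.t. a sensitive attribute: for each
# cluster, take the proportion of its least-represented group; the
# overall score is the minimum over clusters. 0.0 means some cluster
# excludes a group entirely; higher is fairer.
def balance(cluster_labels, group_labels):
    groups = set(group_labels)
    per_cluster = {}
    for c, g in zip(cluster_labels, group_labels):
        per_cluster.setdefault(c, []).append(g)
    scores = []
    for members in per_cluster.values():
        counts = Counter(members)
        scores.append(min(counts.get(g, 0) for g in groups) / len(members))
    return min(scores)
```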
no code implementations • 11 Aug 2022 • Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Qian Qu, Jiancheng Lv
To address this challenge, we explore a new draft-command-edit manner of description generation, leading to the proposed new task: controllable text editing in E-commerce.
no code implementations • 7 Apr 2022 • Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, Tat-Seng Chua
To this end, we contribute to advancing the study of proactive dialogue policy in a more natural and challenging setting, i.e., interacting dynamically with users.
no code implementations • 6 Apr 2022 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.
1 code implementation • 8 Mar 2022 • Yuanbiao Gou, Peng Hu, Jiancheng Lv, Joey Tianyi Zhou, Xi Peng
AFuB devotes to adaptively sampling and transferring the features from one scale to another scale, which fuses the multi-scale features with varying characteristics from coarse to fine.
no code implementations • 5 Mar 2022 • Yi Gao, Chenwei Tang, Jiancheng Lv
Generalized Zero-Shot Learning (GZSL) aims to recognize both seen and unseen classes by training on only the seen classes, in which instances of unseen classes tend to be biased towards the seen classes.
1 code implementation • CVPR 2022 • Boyun Li, Xiao Liu, Peng Hu, Zhongqin Wu, Jiancheng Lv, Xi Peng
In this paper, we study a challenging problem in image restoration, namely, how to develop an all-in-one method that could recover images from a variety of unknown corruption types and levels.
1 code implementation • CVPR 2022 • Mouxing Yang, Zhenyu Huang, Peng Hu, Taihao Li, Jiancheng Lv, Xi Peng
To solve the TNL problem, we propose a novel method for robust VI-ReID, termed DuAlly Robust Training (DART).
no code implementations • 18 Dec 2021 • Xian Zhang, Hao Zhang, Jiancheng Lv, Xiaojie Li
Face deblurring aims to restore a clear face image from a blurred input image with more explicit structure and facial details.
1 code implementation • ICLR 2022 • Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, Weizhu Chen
To address these challenges, we present Adversarial Retriever-Ranker (AR2), which consists of a dual-encoder retriever plus a cross-encoder ranker.
1 code implementation • ACL 2021 • Kexin Yang, Wenqiang Lei, Dayiheng Liu, Weizhen Qi, Jiancheng Lv
However, in this work, we experimentally reveal that this assumption does not always hold for the text generation tasks like text summarization and story ending generation.
no code implementations • 14 Jul 2021 • Boyun Li, Yijie Lin, Xiao Liu, Peng Hu, Jiancheng Lv, Xi Peng
To generate plausible haze, we study two less-touched but challenging problems in hazy image rendering, namely, i) how to estimate the transmission map from a single image without auxiliary information, and ii) how to adaptively learn the airlight from exemplars, i.e., unpaired real hazy images.
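Haze rendering of this kind typically builds on the atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)). The per-pixel sketch below illustrates that model only; the scalar airlight and exponential transmission are textbook assumptions, not this paper's learned estimates.

```python
import math

# Atmospheric scattering model for haze synthesis:
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# where J is the clean pixel intensity, t = exp(-beta * d) is the
# transmission (scene depth d, scattering coefficient beta), and A
# is the global airlight.
def render_haze(clean, depth, beta=1.0, airlight=0.8):
    hazy = []
    for j, d in zip(clean, depth):
        t = math.exp(-beta * d)
        hazy.append(j * t + airlight * (1.0 - t))
    return hazy
```

At zero depth the pixel passes through unchanged; as depth grows, the pixel fades toward the airlight value.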
1 code implementation • ACL 2021 • Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, Tat-Seng Chua
In this work, we extract samples from real financial reports to build a new large-scale QA dataset containing both Tabular And Textual data, named TAT-QA, where numerical reasoning is usually required to infer the answer, such as addition, subtraction, multiplication, division, counting, comparison/sorting, and the compositions.
Ranked #1 on Question Answering on TAT-QA
no code implementations • 10 May 2021 • Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, Weizhu Chen
We first evaluate Poolingformer on two long sequence QA tasks: the monolingual NQ and the multilingual TyDi QA.
no code implementations • 3 May 2021 • Jindi Lv, Qing Ye, Yanan sun, Juan Zhao, Jiancheng Lv
In this paper, we propose a novel approach, Heart-Darts, to efficiently classify the ECG signals by automatically designing the CNN model with differentiable architecture search (i.e., Darts, a cell-based neural architecture search method).
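The core trick in Darts-style search is a continuous relaxation: each edge of the cell outputs a softmax-weighted mixture of all candidate operations, making the architecture choice differentiable in the weights alpha. A toy scalar sketch, with stand-in operations that are assumptions for illustration:

```python
import math

# Darts-style mixed operation: the edge output is the
# softmax(alpha)-weighted sum of every candidate op's output, so
# architecture weights can be trained by gradient descent; after
# search, the op with the largest alpha is kept.
def mixed_op(x, ops, alpha):
    exps = [math.exp(a) for a in alpha]
    z = sum(exps)
    return sum((e / z) * op(x) for e, op in zip(exps, ops))

candidate_ops = [
    lambda x: 0.0,       # "zero" op (prunes the edge)
    lambda x: x,         # identity / skip-connection
    lambda x: 2.0 * x,   # stand-in for a conv-like transform
]
```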
no code implementations • 23 Apr 2021 • Cheng Luo, Dayiheng Liu, Chanjuan Li, Li Lu, Jiancheng Lv
The system includes modules such as dialogue topic prediction, knowledge matching and dialogue generation.
1 code implementation • 21 Apr 2021 • Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv
With a pivot module that reconstructs the decoder for individual students and leveled-learning specialized encoders for groups, personalized DKT is achieved.
1 code implementation • CVPR 2021 • Yijie Lin, Yuanbiao Gou, Zitao Liu, Boyun Li, Jiancheng Lv, Xi Peng
In this paper, we study two challenging problems in incomplete multi-view clustering analysis, namely, i) how to learn an informative and consistent representation among different views without the help of labels and ii) how to recover the missing views from data.
Ranked #1 on Incomplete multi-view clustering on n-MNIST
no code implementations • 15 Jan 2021 • Chuan Liu, Yi Gao, Jiancheng Lv
It allows the network to use a higher learning rate and speed up training.
1 code implementation • 12 Dec 2020 • Yuhao Zhou, Ye Qing, Jiancheng Lv
Petabytes of data are generated each day by the emerging Internet of Things (IoT), but only a few of them can ultimately be collected and used for Machine Learning (ML) purposes due to apprehension about data and privacy leakage, which seriously retards ML's growth.
no code implementations • NeurIPS 2020 • Zhenyu Huang, Peng Hu, Joey Tianyi Zhou, Jiancheng Lv, Xi Peng
To solve this practical and challenging problem, we propose a novel multi-view clustering method termed partially view-aligned clustering (PVC).
1 code implementation • Findings (ACL) 2021 • Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan
Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress of pretraining and transfer learning in Natural Language Processing (NLP).
1 code implementation • EMNLP 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Jiancheng Lv, Nan Duan, Ming Zhou
In this paper, we propose a novel data augmentation method, referred to as Controllable Rewriting based Question Data Augmentation (CRQDA), for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
1 code implementation • 24 Sep 2020 • Huishuang Tian, Kexin Yang, Dayiheng Liu, Jiancheng Lv
Previous studies usually use supervised models, which deeply rely on parallel data.
Cultural Vocal Bursts Intensity Prediction, Language Modelling, +1
1 code implementation • 6 Sep 2020 • Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
While distributed training significantly speeds up the training process of a deep neural network (DNN), the utilization of the cluster is relatively low due to the time-consuming data synchronization between workers.
no code implementations • 6 Sep 2020 • Qing Ye, Yuxuan Han, Yanan sun, Jiancheng Lv
Synchronous methods are widely used in the distributed training of Deep Neural Networks (DNNs).
1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan sun, Jiancheng Lv
Specifically, the performance of each worker is first evaluated based on facts from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the worker's current performance, thereby improving the utilization of the cluster.
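The repartitioning idea can be sketched as assigning each worker a share of the next epoch's samples proportional to its measured throughput, so fast workers stop idling at the synchronization barrier. A minimal sketch under that assumption; the function name and proportional rule are illustrative, not the paper's exact policy.

```python
# Performance-aware repartitioning: split the next epoch's samples
# among workers in proportion to the throughput each one achieved in
# the previous epoch.
def repartition(total_samples, throughputs):
    total_tp = sum(throughputs)
    shares = [int(total_samples * tp / total_tp) for tp in throughputs]
    shares[0] += total_samples - sum(shares)  # assign rounding remainder
    return shares
```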
no code implementations • ACL 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Nan Duan
The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner.
no code implementations • ACL 2020 • Hang Zhang, Dayiheng Liu, Jiancheng Lv, Cheng Luo
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
no code implementations • CVPR 2020 • Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, Yebin Liu
Recovering sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process.
Ranked #20 on Image Deblurring on GoPro (using extra training data)
1 code implementation • EMNLP 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Wei Liu, Yu Yan, Bo Shao, Daxin Jiang, Jiancheng Lv, Nan Duan
Furthermore, we propose a simple and effective method to mine the keyphrases of interest in the news article and build a first large-scale keyphrase-aware news headline corpus, which contains over 180K aligned triples of <news article, headline, keyphrase>.
no code implementations • 24 Mar 2020 • Yusen Liu, Dayiheng Liu, Jiancheng Lv, Yongsheng Sang
We propose an infilling-based Chinese poetry generation model which can explicitly infill concrete keywords into each line of a poem, and an abstract-information embedding to integrate abstract information into the generated poems.
no code implementations • 16 Feb 2020 • Yanan Sun, Ziyao Ren, Gary G. Yen, Bing Xue, Mengjie Zhang, Jiancheng Lv
Data mining on existing CNNs can discover useful patterns and fundamental sub-components from their architectures, providing researchers with strong prior knowledge to design proper CNN architectures when they have no expertise in CNNs.
no code implementations • 5 Feb 2020 • Xian Zhang, Xin Wang, Bin Kong, Youbing Yin, Qi Song, Siwei Lyu, Jiancheng Lv, Canghong Shi, Xiaojie Li
We first represent only the face regions using the latent variable as domain knowledge, and combine it with the textures of the non-face parts to generate high-quality face images with plausible content.
no code implementations • 19 Nov 2019 • Yusen Liu, Dayiheng Liu, Jiancheng Lv
For the user's convenience, we deploy the system on the WeChat applet platform, so users can use it on mobile devices anytime and anywhere.
1 code implementation • 9 Aug 2019 • Zhuojun Chen, Junhao Cheng, Yuchen Yuan, Dongping Liao, Yizhou Li, Jiancheng Lv
We seek to improve crowd counting, as we perceive the limits of the currently prevalent density map estimation approach in both prediction accuracy and time efficiency.
1 code implementation • 10 Jun 2019 • Bijue Jia, Jiancheng Lv, Dayiheng Liu
Among these, downbeat tracking has been a fundamental and long-standing problem in the Music Information Retrieval (MIR) area.
1 code implementation • 29 May 2019 • Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, Jiancheng Lv
We propose a new framework that utilizes the gradients to revise the sentence in a continuous space during inference to achieve text style transfer.
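Gradient-based revision in a continuous space can be pictured as nudging a latent sentence representation along the gradient of an attribute scorer, then decoding it back to text. The toy below uses a quadratic scorer with an analytic gradient purely for illustration; the real framework scores with learned networks.

```python
# Toy continuous-space revision: ascend the gradient of a scorer
# -0.5 * ||z - target||^2 (gradient: target - z), pulling the latent
# sentence vector toward a target "style" region of the space.
def revise(embedding, target, step=0.1, iters=50):
    z = list(embedding)
    for _ in range(iters):
        z = [zi + step * (ti - zi) for zi, ti in zip(z, target)]
    return z
```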
1 code implementation • ACL 2019 • Dayiheng Liu, Jie Fu, PengFei Liu, Jiancheng Lv
Text infilling is defined as a task for filling in the missing part of a sentence or paragraph, which is suitable for many real-world natural language generation scenarios.
2 code implementations • 24 May 2019 • Dayiheng Liu, Xu Yang, Feng He, YuanYuan Chen, Jiancheng Lv
It has been previously observed that training Variational Recurrent Autoencoders (VRAE) for text generation suffers from a serious uninformative latent variable problem.
no code implementations • 28 Aug 2018 • Dongdong Chen, Jiancheng Lv, Mike E. Davies
We investigate the potential of a restricted Boltzmann Machine (RBM) for discriminative representation learning.
no code implementations • 22 Aug 2018 • Xi Peng, Yunnan Li, Ivor W. Tsang, Hongyuan Zhu, Jiancheng Lv, Joey Tianyi Zhou
The second is implementing discrete $k$-means with a differentiable neural network that embraces the advantages of parallel computing, online clustering, and clustering-favorable representation learning.
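The differentiable replacement for k-means' hard argmin can be sketched as a softmax over negative squared distances, which yields soft cluster assignments trainable end-to-end. A generic sketch of that relaxation, not the paper's exact network.

```python
import math

# Soft (differentiable) cluster assignment: replace k-means' hard
# argmin with softmax(-||x - c||^2 / temperature) over the centers,
# returning a probability vector instead of a single cluster index.
def soft_assign(point, centers, temperature=1.0):
    d2 = [sum((p - c) ** 2 for p, c in zip(point, ctr)) for ctr in centers]
    logits = [-d / temperature for d in d2]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

As the temperature shrinks, the soft assignment approaches the hard k-means assignment.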
no code implementations • 11 Aug 2018 • Dayiheng Liu, Jiancheng Lv, Kexin Yang, Qian Qu
Ancient Chinese carries the wisdom and spiritual culture of the Chinese nation.
Cultural Vocal Bursts Intensity Prediction, Machine Translation, +2
1 code implementation • 26 Jun 2018 • Dayiheng Liu, Quan Guo, Wubo Li, Jiancheng Lv
Given a picture, the first line, the title and the other lines of the poem are successively generated in three stages.
no code implementations • 21 Jun 2018 • Dayiheng Liu, Jie Fu, Qian Qu, Jiancheng Lv
Incorporating prior knowledge such as lexical constraints into the model's output to generate meaningful and coherent sentences has many applications in dialogue systems, machine translation, image captioning, etc.
no code implementations • 29 Mar 2017 • Junyu Luo, Yong Xu, Chenwei Tang, Jiancheng Lv
The inverse mapping of the generator of GANs (Generative Adversarial Nets) has great potential value. Hence, some works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as being hard to train or performing poorly. For these reasons, we propose a new approach that uses an inverse generator ($IG$) model as the encoder and a pre-trained generator ($G$) as the decoder of an AutoEncoder network to train the $IG$ model.
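The encoder-with-frozen-decoder setup can be illustrated with a one-parameter toy: a fixed "pre-trained" generator G(z) = 2z and a linear inverse generator IG(x) = a·x fit by gradient descent on the reconstruction loss. This is a didactic sketch of the training scheme only; real IG and G are deep networks.

```python
# Toy autoencoder with a frozen decoder: G(z) = 2*z plays the role of
# the pre-trained generator, and IG(x) = a*x is trained so that
# G(IG(x)) reconstructs x. The optimum a = 0.5 recovers IG = G^{-1}.
def train_inverse_generator(samples, lr=0.01, epochs=200):
    G = lambda z: 2.0 * z   # frozen pre-trained generator (decoder)
    a = 0.0                 # IG's single trainable parameter (encoder)
    for _ in range(epochs):
        for x in samples:
            recon = G(a * x)
            # d/da (recon - x)^2 = 2*(recon - x) * dG/dz * x = 2*(recon - x)*2*x
            a -= lr * 2.0 * (recon - x) * 2.0 * x
    return a
```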