1 code implementation • 30 Oct 2024 • Youcheng Huang, Fengbin Zhu, Jingkun Tang, Pan Zhou, Wenqiang Lei, Jiancheng Lv, Tat-Seng Chua
With the new RADAR dataset, we further develop a novel and effective iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of VLMs, which we call the attacking direction, to detect adversarial images against benign ones in the input.
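As a rough illustration of the direction-based detection idea in this entry, the sketch below projects a hidden state onto a precomputed attacking direction and thresholds the score; the helper names, the way the direction is distilled, and the thresholding rule are assumptions for illustration, not the authors' exact NEARSIDE procedure.

```python
import numpy as np

def detect_adversarial(hidden_state: np.ndarray,
                       attacking_direction: np.ndarray,
                       threshold: float) -> bool:
    """Flag an input as adversarial if its hidden state projects strongly
    onto the attacking direction (hypothetical sketch, not the exact method)."""
    direction = attacking_direction / np.linalg.norm(attacking_direction)
    score = float(hidden_state @ direction)  # projection onto the direction
    return score > threshold

# One assumed way to distill the attacking direction: the mean difference
# between hidden states of adversarial and benign examples.
# attacking_direction = adv_hidden.mean(axis=0) - benign_hidden.mean(axis=0)
```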
no code implementations • 20 Aug 2024 • Jian Wang, Xin Lan, Yuxin Tian, Jiancheng Lv
Generative adversarial networks (GANs) have made impressive advances in image generation, but they often require large-scale training data to avoid degradation caused by discriminator overfitting.
no code implementations • 9 Aug 2024 • Hong Liu, Liren Shan, Han Bao, Ronghui You, Yuhao Yi, Jiancheng Lv
Federated learning is often used in environments with many unverified participants.
1 code implementation • 15 Jun 2024 • Xiaochen Ma, Xuekang Zhu, Lei Su, Bo Du, Zhuohang Jiang, Bingkui Tong, Zeyu Lei, Xinyu Yang, Chi-Man Pun, Jiancheng Lv, Jizhe Zhou
A comprehensive benchmark is yet to be established in the Image Manipulation Detection & Localization (IMDL) field.
no code implementations • 4 Jun 2024 • Youcheng Huang, Jingkun Tang, Duanyu Feng, Zheng Zhang, Wenqiang Lei, Jiancheng Lv, Anthony G. Cohn
We find that this also induces dishonesty in helpful and harmless alignment where LLMs tell lies in generating harmless responses.
no code implementations • 20 May 2024 • Chen Huang, Yiping Jin, Ilija Ilievski, Wenqiang Lei, Jiancheng Lv
To address this issue, interactive data annotation utilizes an annotation model to provide suggestions for humans to approve or correct.
no code implementations • 20 May 2024 • Chen Huang, Yang Deng, Wenqiang Lei, Jiancheng Lv, Ido Dagan
As such, informative or hard data is assigned to the expert for annotation, while easy data is handled by the model.
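A minimal sketch of this routing idea, assuming predictive entropy as the difficulty proxy (the paper's actual criterion may differ):

```python
import numpy as np

def route_annotation(probs: np.ndarray, entropy_threshold: float = 0.5):
    """Route each example to the human expert (hard/informative) or to the
    model (easy), using predictive entropy as an assumed difficulty proxy.
    probs: (n, num_classes) array of model class probabilities."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    to_expert = entropy > entropy_threshold   # hard: ask the expert
    to_model = ~to_expert                     # easy: accept the model's label
    return to_expert, to_model
```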
no code implementations • 16 May 2024 • Chen Huang, Xinwei Yang, Yang Deng, Wenqiang Lei, Jiancheng Lv, Tat-Seng Chua
However, successful legal case matching requires the tacit knowledge of legal practitioners, which is difficult to verbalize and encode into machines.
no code implementations • 7 May 2024 • Xianggen Liu, Yan Guo, Haoran Li, Jin Liu, Shudong Huang, Bowen Ke, Jiancheng Lv
Large Language Models (LLMs) have made great strides in areas such as language processing and computer vision.
no code implementations • International Journal of Computer Vision 2024 • Zhenyu Huang, Peng Hu, Guocheng Niu, Xinyan Xiao, Jiancheng Lv, Xi Peng
This paper studies a new learning paradigm for noisy labels, i.e., noisy correspondence (NC).
Cross-modal retrieval with noisy correspondence, Text Retrieval, +2
1 code implementation • 4 Apr 2024 • Chen Huang, Peixin Qin, Yang Deng, Wenqiang Lei, Jiancheng Lv, Tat-Seng Chua
The conversational recommendation system (CRS) has been criticized regarding its user experience in real-world scenarios, despite recent significant progress achieved in academia.
1 code implementation • IEEE Transactions on Image Processing 2024 • Xinran Ma, Mouxing Yang, Yunfan Li, Peng Hu, Jiancheng Lv, Xi Peng
Thanks to the consistency refining and mining strategy of CREAM, overfitting to the false positives can be prevented and the consistency rooted in the false negatives can be exploited, leading to a robust CMR method.
Ranked #1 on Graph Matching on SPair-71k
Cross-modal retrieval with noisy correspondence, Graph Matching, +1
no code implementations • 20 Mar 2024 • Chengzhe Feng, Yanan Sun, Ke Li, Pan Zhou, Jiancheng Lv, Aojun Lu
We evaluate GenAP on three popular code intelligence PLMs across three canonical code intelligence tasks: defect prediction, code summarization, and code translation.
no code implementations • 13 Mar 2024 • Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv
A natural expectation for PEFTs is that their performance is positively related to the data size and the number of fine-tunable parameters.
no code implementations • 28 Feb 2024 • Zhengqing Zang, Chenyu Lin, Chenwei Tang, Tao Wang, Jiancheng Lv
Instead of directly encoding the descriptions into class embedding space which suffers from the representation gap problem, we propose to infuse the prior inter-class visual similarity conveyed in the descriptions into the embedding learning.
1 code implementation • 25 Feb 2024 • Hongjie Wu, Linchao He, Mingqin Zhang, Dongdong Chen, Kunming Luo, Mengting Luo, Ji-Zhe Zhou, Hu Chen, Jiancheng Lv
Specifically, we opt for a sample consistent with the measurement identity at each generative step, exploiting the sampling selection as an avenue for output stability and enhancement.
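A rough sketch of such sampling selection at a single generative step, assuming a set of candidate samples and a measurement operator `forward_op` (illustrative only, not the paper's exact procedure):

```python
import torch

def select_measurement_consistent(candidates, forward_op, y):
    """Among candidate reconstructions proposed at one generative step, keep
    the one most consistent with the measurement y.
    candidates: (K, ...) tensor of K candidate samples."""
    residuals = torch.stack(
        [torch.norm(forward_op(x) - y) for x in candidates])
    best = torch.argmin(residuals)  # smallest measurement residual wins
    return candidates[best]
```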
1 code implementation • 26 Jan 2024 • Chen Huang, Haoyang Li, Yifan Zhang, Wenqiang Lei, Jiancheng Lv
To this end, various methods have been proposed to create an adaptive filter by incorporating an extra filter (e.g., a high-pass filter) extracted from the graph topology.
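As a hedged illustration of combining a low-pass filter with a topology-derived high-pass filter (a simplified stand-in for the adaptive filters discussed here; the fixed mixing weight `alpha` is an assumption):

```python
import torch

def adaptive_graph_filter(x, adj_norm, alpha=0.8):
    """Blend a low-pass filter (normalized adjacency) with a high-pass filter
    (I - A_norm, i.e. the normalized Laplacian) extracted from the topology.
    x: (N, d) node features, adj_norm: (N, N) normalized adjacency."""
    identity = torch.eye(adj_norm.size(0), device=x.device)
    low_pass = adj_norm @ x                 # smooths features over edges
    high_pass = (identity - adj_norm) @ x   # emphasizes local differences
    return alpha * low_pass + (1 - alpha) * high_pass
```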
1 code implementation • 23 Jan 2024 • Chen Huang, Duanyu Feng, Wenqiang Lei, Jiancheng Lv
Motivated by this, we develop a time-efficient approach called DREditor to edit the matching rule of an off-the-shelf dense retrieval model to suit a specific domain.
no code implementations • 15 Jan 2024 • Youcheng Huang, Wenqiang Lei, Zheng Zhang, Jiancheng Lv, Shuicheng Yan
In this paper, we empirically find that the effects of different contexts upon LLMs in recalling the same knowledge follow a Gaussian-like distribution.
1 code implementation • 20 Dec 2023 • Yuhao Yi, Ronghui You, Hong Liu, Changxin Liu, Yuan Wang, Jiancheng Lv
Our analysis shows that constant approximations to the 1-center and 1-mean clustering problems with outliers provide near-optimal resilient aggregators for metric-based criteria, which have been proven to be crucial in the homogeneous and heterogeneous cases, respectively.
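A toy sketch in the spirit of 1-mean clustering with outliers for resilient aggregation; the iterative trimming heuristic below is an illustrative assumption, not the paper's constant-approximation algorithm.

```python
import numpy as np

def trimmed_mean_aggregate(updates: np.ndarray, num_outliers: int,
                           iters: int = 3) -> np.ndarray:
    """Aggregate client updates by repeatedly discarding the points farthest
    from the current center and averaging the rest.
    updates: (n, d) array of client update vectors."""
    center = updates.mean(axis=0)
    for _ in range(iters):
        dists = np.linalg.norm(updates - center, axis=1)
        keep = np.argsort(dists)[: len(updates) - num_outliers]
        center = updates[keep].mean(axis=0)
    return center
```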
1 code implementation • 12 Dec 2023 • Chen Huang, Peixin Qin, Wenqiang Lei, Jiancheng Lv
One of the key factors in language productivity and human cognition is the capacity for systematic compositionality, which refers to understanding unseen compositions of seen primitives.
no code implementations • 7 Nov 2023 • Chenwei Tang, Wenqiang Zhou, Dong Wang, Caiyang Yu, Zhenan He, Jizhe Zhou, Shudong Huang, Yi Gao, Jianming Chen, Wentao Feng, Jiancheng Lv
The advent of Industry 4.0 has precipitated the incorporation of Artificial Intelligence (AI) methods within industrial contexts, aiming to realize intelligent manufacturing, operation, and maintenance, also known as industrial intelligence.
no code implementations • 7 Nov 2023 • Hang Zhang, Yeyun Gong, Xingwei He, Dayiheng Liu, Daya Guo, Jiancheng Lv, Jian Guo
Most dense retrieval models contain an implicit assumption: the training query-document pairs are exactly matched.
1 code implementation • 18 Oct 2023 • Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Jianping Fan, Xi Peng
The core of clustering is incorporating prior knowledge to construct supervision signals.
no code implementations • 13 Oct 2023 • Chenyu Lin, Yusheng He, Zhengqing Zang, Chenwei Tang, Tao Wang, Jiancheng Lv
This report outlines our team's participation in the VCL Challenges B: Continual Test-Time Adaptation, focusing on the technical details of our approach.
no code implementations • 11 Oct 2023 • Junzhe Xu, Suling Duan, Chenwei Tang, Zhenan He, Jiancheng Lv
Second, we propose the Attribute Revision Module (ARM), which generates image-level semantics by revising the ground-truth value of each attribute, compensating for the performance degradation caused by ignoring intra-class variation.
no code implementations • 4 Sep 2023 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Yuanxi Li, Qing Ye, Jiancheng Lv
However, a significant challenge arises when coordinating FL with crowd intelligence, in which diverse client groups possess disparate objectives due to data heterogeneity or distinct tasks.
no code implementations • 22 Jul 2023 • Linchao He, Hongyu Yan, Mengting Luo, Hongjie Wu, Kunming Luo, Wang Wang, Wenchao Du, Hu Chen, Hongyu Yang, Yi Zhang, Jiancheng Lv
To address this issue, we propose to utilize the history information of the diffusion-based inverse solvers.
1 code implementation • 7 Jun 2023 • Hongru Liang, Jia Liu, Weihong Du, Dingnan Jin, Wenqiang Lei, Zujie Wen, Jiancheng Lv
The machine reading comprehension (MRC) of user manuals has huge potential in customer service.
no code implementations • 9 May 2023 • Caiyang Yu, Xianggen Liu, Yifan Wang, Yun Liu, Wentao Feng, Xiong Deng, Chenwei Tang, Jiancheng Lv
Neural Architecture Search (NAS) has emerged as one of the effective methods to design the optimal neural network architecture automatically.
no code implementations • 26 Apr 2023 • Liming Xu, Hanqi Li, Bochuan Zheng, Weisheng Li, Jiancheng Lv
To this end, in this paper we propose a novel deep lifelong cross-modal hashing method to achieve lifelong hashing retrieval without repeatedly re-training the hash function when new data arrive.
no code implementations • 18 Apr 2023 • Peng Zeng, Xiaotian Song, Andrew Lensen, Yuwei Ou, Yanan Sun, Mengjie Zhang, Jiancheng Lv
With these designs, the proposed DGP method can efficiently search for GP trees with higher performance and is thus capable of dealing with high-dimensional SR. To demonstrate the effectiveness of DGP, we conducted various experiments against state-of-the-art methods based on both GP and deep neural networks.
1 code implementation • IEEE Transactions on Pattern Analysis and Machine Intelligence 2023 • Yijie Lin, Yuanbiao Gou, Xiaotian Liu, Jinfeng Bai, Jiancheng Lv, Xi Peng
In this article, we propose a unified framework to solve the following two challenging problems in incomplete multi-view representation learning: i) how to learn a consistent representation unifying different views, and ii) how to recover the missing views.
no code implementations • ICCV 2023 • Yuhao Zhou, Mingjia Shi, Yuanxi Li, Qing Ye, Yanan Sun, Jiancheng Lv
Reducing communication overhead in federated learning (FL) is challenging but crucial for large-scale distributed privacy-preserving machine learning.
no code implementations • 14 Jan 2023 • Xiaotian Song, Xiangning Xie, Zeqiong Lv, Gary G. Yen, Weiping Ding, Jiancheng Lv, Yanan Sun
In surveying each category, we further discuss the design principles and analyze the strengths and weaknesses to clarify the landscape of existing EEMs, thus making the research trends of EEMs easy to understand.
1 code implementation • CVPR 2023 • Haiyu Zhao, Yuanbiao Gou, Boyun Li, Dezhong Peng, Jiancheng Lv, Xi Peng
Vision Transformers, which usually conduct window- or channel-based attention to avoid intensive computations, have shown promising performance in image restoration.
no code implementations • CVPR 2023 • Yuanbiao Gou, Peng Hu, Jiancheng Lv, Hongyuan Zhu, Xi Peng
Existing studies have empirically observed that the resolution of the low-frequency region is easier to enhance than that of the high-frequency one.
no code implementations • CVPR 2023 • Yuze Tan, Yixi Liu, Shudong Huang, Wentao Feng, Jiancheng Lv
Multi-view clustering has hitherto been studied due to its effectiveness in dealing with heterogeneous data.
no code implementations • 28 Dec 2022 • Yuwei Ou, Xiangning Xie, Shangce Gao, Yanan Sun, Kay Chen Tan, Jiancheng Lv
Deep neural networks (DNNs) are found to be vulnerable to adversarial attacks, and various methods have been proposed for the defense.
no code implementations • 19 Nov 2022 • Mingjia Shi, Yuhao Zhou, Qing Ye, Jiancheng Lv
Federated learning (FL for short) is a distributed machine learning technique that utilizes global servers and collaborative clients to achieve privacy-preserving global model training without direct data sharing.
Ranked #1 on Image Classification on Fashion-MNIST (Accuracy metric)
1 code implementation • 7 Nov 2022 • Youcheng Huang, Wenqiang Lei, Jie Fu, Jiancheng Lv
Incorporating large-scale pre-trained models with prototypical neural networks is a de facto paradigm in few-shot named entity recognition.
no code implementations • 27 Oct 2022 • Shudong Huang, Wentao Feng, Chenwei Tang, Jiancheng Lv
Many problems in science and engineering can be represented by a set of partial differential equations (PDEs) through mathematical modeling.
1 code implementation • CVPR 2023 • Pengxin Zeng, Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Xi Peng
Fair clustering aims to divide data into distinct clusters while preventing sensitive attributes (e.g., gender, race, RNA sequencing technique) from dominating the clustering.
Ranked #1 on Image Clustering on HAR
no code implementations • 11 Aug 2022 • Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Qian Qu, Jiancheng Lv
To address this challenge, we explore a new draft-command-edit manner in description generation, leading to the proposed new task: controllable text editing in E-commerce.
no code implementations • 7 Apr 2022 • Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, Tat-Seng Chua
To this end, we contribute to advancing the study of the proactive dialogue policy to a more natural and challenging setting, i.e., interacting dynamically with users.
no code implementations • 6 Apr 2022 • Yuhao Zhou, Minjia Shi, Yuxin Tian, Qing Ye, Jiancheng Lv
Federated learning (FL) is identified as a crucial enabler for large-scale distributed machine learning (ML) without the need for local raw dataset sharing, substantially reducing privacy concerns and alleviating the isolated data problem.
1 code implementation • 8 Mar 2022 • Yuanbiao Gou, Peng Hu, Jiancheng Lv, Joey Tianyi Zhou, Xi Peng
AFuB is devoted to adaptively sampling and transferring features from one scale to another, fusing the multi-scale features with varying characteristics from coarse to fine.
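A generic sketch of such cross-scale fusion (resample one scale to the other, then blend with a learned convolution); the module name and the 1x1-convolution blend are assumptions, not the actual AFuB design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleFusionBlock(nn.Module):
    """Minimal coarse-to-fine fusion: upsample coarse features to the fine
    resolution and blend the two scales with a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.blend = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, coarse, fine):
        # coarse: (B, C, h, w), fine: (B, C, H, W) with H >= h, W >= w
        up = F.interpolate(coarse, size=fine.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.blend(torch.cat([up, fine], dim=1))
```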
no code implementations • 5 Mar 2022 • Yi Gao, Chenwei Tang, Jiancheng Lv
Generalized Zero-Shot Learning (GZSL) aims to recognize both seen and unseen classes by training on only the seen classes, in which instances of unseen classes tend to be biased towards the seen classes.
1 code implementation • CVPR 2022 • Mouxing Yang, Zhenyu Huang, Peng Hu, Taihao Li, Jiancheng Lv, Xi Peng
To solve the TNL problem, we propose a novel method for robust VI-ReID, termed DuAlly Robust Training (DART).
1 code implementation • CVPR 2022 • Boyun Li, Xiao Liu, Peng Hu, Zhongqin Wu, Jiancheng Lv, Xi Peng
In this paper, we study a challenging problem in image restoration, namely, how to develop an all-in-one method that could recover images from a variety of unknown corruption types and levels.
no code implementations • 18 Dec 2021 • Xian Zhang, Hao Zhang, Jiancheng Lv, Xiaojie Li
Face deblurring aims to restore a clear face image from a blurred input image with more explicit structure and facial details.
1 code implementation • ICLR 2022 • Hang Zhang, Yeyun Gong, Yelong Shen, Jiancheng Lv, Nan Duan, Weizhu Chen
To address these challenges, we present Adversarial Retriever-Ranker (AR2), which consists of a dual-encoder retriever plus a cross-encoder ranker.
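For orientation, a minimal sketch of the two scoring components (dual-encoder dot-product retrieval and cross-encoder reranking); the adversarial minimax training that couples them in AR2 is not shown, and `ranker` is an assumed callable.

```python
import torch

def retriever_score(query_vec: torch.Tensor, doc_vecs: torch.Tensor):
    """Dual-encoder retriever: queries and documents are encoded independently
    and scored by dot product. Encoders are assumed to exist elsewhere."""
    return doc_vecs @ query_vec  # (num_docs,)

def rerank(ranker, query: str, docs: list, top_k: int = 10):
    """Cross-encoder ranker: scores each (query, document) pair jointly and
    returns the top-k documents. `ranker` maps a pair to a relevance score."""
    scores = torch.tensor([ranker(query, d) for d in docs])
    order = torch.topk(scores, k=min(top_k, len(docs))).indices
    return [docs[i] for i in order]
```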
1 code implementation • ACL 2021 • Kexin Yang, Wenqiang Lei, Dayiheng Liu, Weizhen Qi, Jiancheng Lv
However, in this work, we experimentally reveal that this assumption does not always hold for the text generation tasks like text summarization and story ending generation.
no code implementations • 14 Jul 2021 • Boyun Li, Yijie Lin, Xiao Liu, Peng Hu, Jiancheng Lv, Xi Peng
To generate plausible haze, we study two less-touched but challenging problems in hazy image rendering, namely, i) how to estimate the transmission map from a single image without auxiliary information, and ii) how to adaptively learn the airlight from exemplars, i.e., unpaired real hazy images.
1 code implementation • ACL 2021 • Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, Tat-Seng Chua
In this work, we extract samples from real financial reports to build a new large-scale QA dataset containing both Tabular And Textual data, named TAT-QA, where numerical reasoning is usually required to infer the answer, such as addition, subtraction, multiplication, division, counting, comparison/sorting, and the compositions.
Ranked #1 on Question Answering on TAT-QA
no code implementations • 10 May 2021 • Hang Zhang, Yeyun Gong, Yelong Shen, Weisheng Li, Jiancheng Lv, Nan Duan, Weizhu Chen
We first evaluate Poolingformer on two long sequence QA tasks: the monolingual NQ and the multilingual TyDi QA.
no code implementations • 3 May 2021 • Jindi Lv, Qing Ye, Yanan Sun, Juan Zhao, Jiancheng Lv
In this paper, we propose a novel approach, Heart-Darts, to efficiently classify the ECG signals by automatically designing the CNN model with differentiable architecture search (i.e., Darts, a cell-based neural architecture search method).
no code implementations • 23 Apr 2021 • Cheng Luo, Dayiheng Liu, Chanjuan Li, Li Lu, Jiancheng Lv
The system includes modules such as dialogue topic prediction, knowledge matching and dialogue generation.
1 code implementation • 21 Apr 2021 • Yuhao Zhou, Xihua Li, Yunbo Cao, Xuemin Zhao, Qing Ye, Jiancheng Lv
With a pivot module that reconstructs the decoder for individual students and leveled learning that specializes encoders for groups, personalized DKT is achieved.
2 code implementations • CVPR 2021 • Yijie Lin, Yuanbiao Gou, Zitao Liu, Boyun Li, Jiancheng Lv, Xi Peng
In this paper, we study two challenging problems in incomplete multi-view clustering analysis, namely, i) how to learn an informative and consistent representation among different views without the help of labels and ii) how to recover the missing views from data.
Ranked #1 on Incomplete multi-view clustering on n-MNIST
no code implementations • 15 Jan 2021 • Chuan Liu, Yi Gao, Jiancheng Lv
It allows the network to use a higher learning rate and speed up training.
1 code implementation • 12 Dec 2020 • Yuhao Zhou, Ye Qing, Jiancheng Lv
Petabytes of data are generated each day by the emerging Internet of Things (IoT), but only a few of them can finally be collected and used for Machine Learning (ML) purposes due to apprehension about data and privacy leakage, which seriously retards ML's growth.
no code implementations • NeurIPS 2020 • Zhenyu Huang, Peng Hu, Joey Tianyi Zhou, Jiancheng Lv, Xi Peng
To solve this practical and challenging problem, we propose a novel multi-view clustering method termed partially view-aligned clustering (PVC).
1 code implementation • Findings (ACL) 2021 • Dayiheng Liu, Yu Yan, Yeyun Gong, Weizhen Qi, Hang Zhang, Jian Jiao, Weizhu Chen, Jie Fu, Linjun Shou, Ming Gong, Pengcheng Wang, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Ruofei Zhang, Winnie Wu, Ming Zhou, Nan Duan
Multi-task benchmarks such as GLUE and SuperGLUE have driven great progress of pretraining and transfer learning in Natural Language Processing (NLP).
1 code implementation • EMNLP 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Jiancheng Lv, Nan Duan, Ming Zhou
In this paper, we propose a novel data augmentation method, referred to as Controllable Rewriting based Question Data Augmentation (CRQDA), for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
1 code implementation • 24 Sep 2020 • Huishuang Tian, Kexin Yang, Dayiheng Liu, Jiancheng Lv
Previous studies usually use supervised models, which deeply rely on parallel data.
Cultural Vocal Bursts Intensity Prediction, Language Modelling, +1
1 code implementation • 6 Sep 2020 • Yuhao Zhou, Qing Ye, Hailun Zhang, Jiancheng Lv
While distributed training significantly speeds up the training process of the deep neural network (DNN), the utilization of the cluster is relatively low due to the time-consuming data synchronization between workers.
no code implementations • 6 Sep 2020 • Qing Ye, Yuxuan Han, Yanan Sun, Jiancheng Lv
Synchronous methods are widely used in the distributed training of Deep Neural Networks (DNNs).
1 code implementation • 23 Jul 2020 • Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv
Specifically, the performance of each worker is first evaluated based on the facts from the previous epoch, and then the batch size and dataset partition are dynamically adjusted in consideration of the current performance of the worker, thereby improving the utilization of the cluster.
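A simplified sketch of performance-aware batch-size rebalancing, assuming measured per-worker throughput as the performance signal (the actual adjustment rule in the paper may differ):

```python
def rebalance_batch_sizes(throughputs, total_batch_size):
    """Assign each worker a share of the global batch proportional to the
    throughput (samples/sec) it achieved in the previous epoch, so faster
    workers receive more data per step."""
    total = sum(throughputs)
    sizes = [max(1, round(total_batch_size * t / total)) for t in throughputs]
    # correct rounding drift so shares still sum to the global batch size
    sizes[-1] += total_batch_size - sum(sizes)
    return sizes

# Example: three workers measured at 120, 80, and 40 samples/sec
# rebalance_batch_sizes([120, 80, 40], total_batch_size=240) -> [120, 80, 40]
```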
no code implementations • ACL 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Yu Yan, Jiusheng Chen, Daxin Jiang, Jiancheng Lv, Nan Duan
The representations are then fed into the predictor to obtain the span of the short answer, the paragraph of the long answer, and the answer type in a cascaded manner.
no code implementations • ACL 2020 • Hang Zhang, Dayiheng Liu, Jiancheng Lv, Cheng Luo
To our knowledge, this is the first attempt to generate punchlines with a knowledge-enhanced model.
no code implementations • CVPR 2020 • Zhe Jiang, Yu Zhang, Dongqing Zou, Jimmy Ren, Jiancheng Lv, Yebin Liu
Recovering sharp video sequence from a motion-blurred image is highly ill-posed due to the significant loss of motion information in the blurring process.
Ranked #34 on Image Deblurring on GoPro (using extra training data)
1 code implementation • EMNLP 2020 • Dayiheng Liu, Yeyun Gong, Jie Fu, Wei Liu, Yu Yan, Bo Shao, Daxin Jiang, Jiancheng Lv, Nan Duan
Furthermore, we propose a simple and effective method to mine the keyphrases of interest in the news article and build the first large-scale keyphrase-aware news headline corpus, which contains over 180K aligned triples of <news article, headline, keyphrase>.
no code implementations • 24 Mar 2020 • Yusen Liu, Dayiheng Liu, Jiancheng Lv, Yongsheng Sang
We propose an infilling-based Chinese poetry generation model that can infill concrete keywords into each line of a poem in an explicit way, and an abstract information embedding to integrate abstract information into the generated poems.
no code implementations • 16 Feb 2020 • Yanan Sun, Ziyao Ren, Gary G. Yen, Bing Xue, Mengjie Zhang, Jiancheng Lv
Data mining on existing CNNs can discover useful patterns and fundamental sub-components from their architectures, providing researchers with strong prior knowledge to design proper CNN architectures when they have no expertise in CNNs.
no code implementations • 5 Feb 2020 • Xian Zhang, Xin Wang, Bin Kong, Youbing Yin, Qi Song, Siwei Lyu, Jiancheng Lv, Canghong Shi, Xiaojie Li
We first represent only the face regions using the latent variable as the domain knowledge and combine it with the textures of the non-face parts to generate high-quality face images with plausible contents.
no code implementations • 19 Nov 2019 • Yusen Liu, Dayiheng Liu, Jiancheng Lv
For the user's convenience, we deploy the system on the WeChat applet platform, so users can use the system on mobile devices whenever and wherever possible.
1 code implementation • 9 Aug 2019 • Zhuojun Chen, Junhao Cheng, Yuchen Yuan, Dongping Liao, Yizhou Li, Jiancheng Lv
We seek to improve crowd counting as we perceive the limits of the currently prevalent density map estimation approach in both prediction accuracy and time efficiency.
1 code implementation • 10 Jun 2019 • Bijue Jia, Jiancheng Lv, Dayiheng Liu
Among these, downbeat tracking has been a fundamental and continuing problem in the Music Information Retrieval (MIR) area.
1 code implementation • 29 May 2019 • Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, Jiancheng Lv
We propose a new framework that utilizes the gradients to revise the sentence in a continuous space during inference to achieve text style transfer.
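A minimal sketch of gradient-based revision in a continuous space, assuming a differentiable `style_classifier` over the latent representation and a separate decoder that maps the revised latent back to text (both are assumptions for illustration):

```python
import torch

def revise_latent(z, style_classifier, target_label, steps=30, lr=0.1):
    """Revise a sentence's continuous representation z at inference time by
    gradient descent on a target-style classification loss."""
    z = z.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = style_classifier(z)  # assumed: latent -> style logits
        loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target_label]))
        loss.backward()
        optimizer.step()
    return z.detach()  # decode the revised latent with the generator afterwards
```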
1 code implementation • ACL 2019 • Dayiheng Liu, Jie Fu, PengFei Liu, Jiancheng Lv
Text infilling is defined as a task for filling in the missing part of a sentence or paragraph, which is suitable for many real-world natural language generation scenarios.
2 code implementations • 24 May 2019 • Dayiheng Liu, Xu Yang, Feng He, YuanYuan Chen, Jiancheng Lv
It has been previously observed that training Variational Recurrent Autoencoders (VRAE) for text generation suffers from a serious uninformative latent variable problem.
no code implementations • 28 Aug 2018 • Dongdong Chen, Jiancheng Lv, Mike E. Davies
We investigate the potential of a restricted Boltzmann Machine (RBM) for discriminative representation learning.
no code implementations • 22 Aug 2018 • Xi Peng, Yunnan Li, Ivor W. Tsang, Hongyuan Zhu, Jiancheng Lv, Joey Tianyi Zhou
The second is implementing discrete $k$-means with a differentiable neural network that embraces the advantages of parallel computing, online clustering, and clustering-favorable representation learning.
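A common differentiable relaxation of k-means uses softmax assignments over negative distances; the sketch below illustrates that idea under an assumed temperature and update rule, not necessarily the exact network described here.

```python
import torch

def soft_kmeans_step(x, centers, temperature=0.1):
    """One step of a differentiable k-means relaxation: soft-assign points to
    centers, then update centers as assignment-weighted means.
    x: (n, d) points, centers: (k, d) cluster centers."""
    dists = torch.cdist(x, centers)                      # (n, k) distances
    assign = torch.softmax(-dists / temperature, dim=1)  # soft memberships
    new_centers = (assign.t() @ x) / assign.sum(dim=0, keepdim=True).t()
    return assign, new_centers
```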
no code implementations • 11 Aug 2018 • Dayiheng Liu, Jiancheng Lv, Kexin Yang, Qian Qu
Ancient Chinese carries the wisdom and spiritual culture of the Chinese nation.
Cultural Vocal Bursts Intensity Prediction, Machine Translation, +2
1 code implementation • 26 Jun 2018 • Dayiheng Liu, Quan Guo, Wubo Li, Jiancheng Lv
Given a picture, the first line, the title and the other lines of the poem are successively generated in three stages.
no code implementations • 21 Jun 2018 • Dayiheng Liu, Jie Fu, Qian Qu, Jiancheng Lv
Incorporating prior knowledge like lexical constraints into the model's output to generate meaningful and coherent sentences has many applications in dialogue systems, machine translation, image captioning, etc.
no code implementations • 29 Mar 2017 • Junyu Luo, Yong Xu, Chenwei Tang, Jiancheng Lv
The inverse mapping of a GAN's (Generative Adversarial Net's) generator has great potential value. Hence, some works have been developed to construct the inverse function of the generator by direct learning or adversarial learning. While the results are encouraging, the problem is highly challenging, and the existing ways of training inverse models of GANs have many disadvantages, such as being hard to train or performing poorly. For these reasons, we propose a new approach that uses an inverse generator (IG) model as the encoder and a pre-trained generator (G) as the decoder of an AutoEncoder network to train the IG model.