Search Results for author: Dongliang Xie

Found 6 papers, 1 paper with code

Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation

no code implementations • 27 Aug 2020 • Xuewen Yang, Dongliang Xie, Xin Wang

In this work, we propose a general framework for unsupervised image-to-image translation across multiple domains, which can translate images from domain X to any other domain without requiring direct training between the two domains involved in the translation.

Translation, Unsupervised Image-To-Image Translation
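For orientation, below is a minimal sketch of the kind of domain-conditioned generator that such multi-domain translation frameworks commonly use: a single network that takes an image plus a target-domain code, so no pairwise generator per domain pair is needed. The layer sizes, the one-hot domain code, and the way it is injected are illustrative assumptions, not the architecture described in the paper.

```python
# Illustrative sketch only: a single generator conditioned on a target-domain
# label. The channel sizes and injection scheme are assumptions.
import torch
import torch.nn as nn

class DomainConditionedGenerator(nn.Module):
    def __init__(self, num_domains: int, img_channels: int = 3):
        super().__init__()
        # The target-domain one-hot code is concatenated to the image channels.
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_channels, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, target_domain: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); target_domain: (B, num_domains) one-hot codes.
        b, _, h, w = x.shape
        code = target_domain.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([x, code], dim=1))

# Usage: translate a batch toward domain 2 out of 4 domains.
gen = DomainConditionedGenerator(num_domains=4)
images = torch.randn(8, 3, 64, 64)
labels = torch.eye(4)[torch.full((8,), 2)]   # one-hot target-domain codes
fake = gen(images, labels)                   # (8, 3, 64, 64)
```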

Learning Tuple Compatibility for Conditional Outfit Recommendation

no code implementations • 18 Aug 2020 • Xuewen Yang, Dongliang Xie, Xin Wang, Jiangbo Yuan, Wanying Ding, Pengyun Yan

Our contributions include: 1) Designing a Mixed Category Attention Net (MCAN) which integrates both fine-grained and coarse category information into recommendation and learns the compatibility among fashion tuples.

Recommendation Systems
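As a rough illustration of combining fine-grained and coarse category information with item features to score an outfit tuple, here is a hedged sketch. The module name, dimensions, and attention layout are assumptions for illustration and do not reproduce the paper's MCAN.

```python
# Minimal sketch, not the paper's MCAN: it only shows mixing fine-grained and
# coarse category embeddings into item features via attention, then pooling
# to a single tuple-compatibility score.
import torch
import torch.nn as nn

class CategoryAwareAttention(nn.Module):
    def __init__(self, feat_dim: int, n_fine: int, n_coarse: int):
        super().__init__()
        self.fine = nn.Embedding(n_fine, feat_dim)
        self.coarse = nn.Embedding(n_coarse, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, item_feats, fine_ids, coarse_ids):
        # item_feats: (B, T, D) features of the T items in an outfit tuple.
        cat = self.fine(fine_ids) + self.coarse(coarse_ids)   # (B, T, D)
        q = item_feats + cat                                  # category-aware queries
        mixed, _ = self.attn(q, q, q)                         # items attend to each other
        return torch.sigmoid(self.score(mixed.mean(dim=1)))   # compatibility in [0, 1]

# Usage: score two outfits of 4 items each.
model = CategoryAwareAttention(feat_dim=64, n_fine=50, n_coarse=8)
score = model(torch.randn(2, 4, 64),
              torch.randint(0, 50, (2, 4)),
              torch.randint(0, 8, (2, 4)))
```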

Adaptive Activation Network and Functional Regularization for Efficient and Flexible Deep Multi-Task Learning

no code implementations • 19 Nov 2019 • Yingru Liu, Xuewen Yang, Dongliang Xie, Xin Wang, Li Shen, Hao-Zhi Huang, Niranjan Balasubramanian

In this paper, we propose a novel deep learning model called Task Adaptive Activation Network (TAAN) that can automatically learn the optimal network architecture for MTL.

Multi-Task Learning
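The general idea of a task-adaptive activation can be pictured as each task learning its own mixture over a small set of basis activations, so the shared layers stay common while the non-linearity adapts per task. The basis set and wiring below are assumptions for illustration, not TAAN's actual parameterization.

```python
# Hedged sketch of a per-task activation built as a learned mixture of basis
# activations; not TAAN's actual formulation.
import torch
import torch.nn as nn

class TaskAdaptiveActivation(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        self.bases = [torch.relu, torch.tanh, torch.sigmoid]
        # One mixture-weight vector per task, softmax-normalized at use time.
        self.logits = nn.Parameter(torch.zeros(num_tasks, len(self.bases)))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        w = torch.softmax(self.logits[task_id], dim=0)
        return sum(w_i * f(x) for w_i, f in zip(w, self.bases))

# A shared trunk uses the same weights for every task; only the activation adapts.
trunk = nn.Linear(16, 32)
act = TaskAdaptiveActivation(num_tasks=3)
h = act(trunk(torch.randn(4, 16)), task_id=1)   # task-specific non-linearity
```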

Latent Part-of-Speech Sequences for Neural Machine Translation

no code implementations • IJCNLP 2019 • Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, Niranjan Balasubramanian

In this work, we introduce a new latent variable model, LaSyn, that captures the co-dependence between syntax and semantics, while allowing for effective and efficient inference over the latent space.

Machine Translation, NMT, +1
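One very rough way to picture coupling a latent syntactic label with word prediction at each decoder step is sketched below: the decoder state predicts a soft distribution over latent syntax classes, and the word logits are conditioned on the resulting expected syntax embedding. This is only an illustration of the coupling idea and is not LaSyn's inference procedure; all names and sizes are assumptions.

```python
# Rough illustration: word prediction conditioned on a soft latent-syntax
# embedding at each decoder position. Not the paper's model.
import torch
import torch.nn as nn

class LatentSyntaxHead(nn.Module):
    def __init__(self, hidden: int, n_syntax: int, vocab: int):
        super().__init__()
        self.syntax_logits = nn.Linear(hidden, n_syntax)
        self.syntax_emb = nn.Embedding(n_syntax, hidden)
        self.word_logits = nn.Linear(2 * hidden, vocab)

    def forward(self, dec_hidden: torch.Tensor):
        # dec_hidden: (B, T, H) decoder states.
        p_syn = torch.softmax(self.syntax_logits(dec_hidden), dim=-1)  # (B, T, S)
        soft_syn = p_syn @ self.syntax_emb.weight                      # expected syntax embedding
        words = self.word_logits(torch.cat([dec_hidden, soft_syn], dim=-1))
        return words, p_syn

head = LatentSyntaxHead(hidden=256, n_syntax=16, vocab=32000)
word_logits, syntax_probs = head(torch.randn(2, 10, 256))
```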

ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA

no code implementations • 1 Dec 2016 • Song Han, Junlong Kang, Huizi Mao, Yiming Hu, Xin Li, Yubin Li, Dongliang Xie, Hong Luo, Song Yao, Yu Wang, Huazhong Yang, William J. Dally

Evaluated on the LSTM speech recognition benchmark, ESE is 43x faster than a Core i7 5930K CPU implementation and 3x faster than a Pascal Titan X GPU implementation.

Quantization, Speech Recognition, +1
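ESE builds on compressing LSTM weights (pruning plus quantization) before mapping them to an FPGA. The software-only sketch below illustrates plain magnitude pruning and linear quantization of an LSTM weight matrix; the load-balance-aware pruning and the hardware scheduling that give ESE its speedup are not reproduced, and the sparsity and bit-width values are just example settings.

```python
# Software-only illustration of magnitude pruning + linear quantization of
# LSTM weights; ESE's load-balance-aware pruning and FPGA engine are out of scope.
import torch
import torch.nn as nn

def prune_and_quantize(weight: torch.Tensor, sparsity: float = 0.9, bits: int = 12):
    """Zero out the smallest-magnitude weights, then linearly quantize the rest."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    scale = weight.abs().max() / (2 ** (bits - 1) - 1)
    quantized = torch.round(weight / scale) * scale
    return quantized * mask, mask

lstm = nn.LSTM(input_size=512, hidden_size=1024, batch_first=True)
with torch.no_grad():
    w, mask = prune_and_quantize(lstm.weight_hh_l0)
    lstm.weight_hh_l0.copy_(w)
print(f"sparsity: {1 - mask.float().mean().item():.2%}")
```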
