no code implementations • 24 May 2023 • Shilv Cai, Xu Zou, Liqun Chen, Luxin Yan, Sheng Zhong
To simultaneously achieve a higher compression rate and better enhancement performance for low-light images, we propose a novel image compression framework with joint optimization of low-light image enhancement.
no code implementations • CVPR 2023 • Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi
Hence, we advocate that the key to better performance lies in meaningful latent modality structures rather than perfect modality alignment.
Tasks: Few-Shot Image Classification, Open-Ended Question Answering (+6 more)
1 code implementation • 12 Sep 2022 • Shilv Cai, Zhijun Zhang, Liqun Chen, Luxin Yan, Sheng Zhong, Xu Zou
We implement the IAT in a mathematically invertible manner on a single-rate Invertible Neural Network (INN) based model; the quality level (QLevel) is fed into the IAT to generate scaling and bias tensors.
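The abstract does not spell out the conditioning mechanism, so the following is only a minimal PyTorch sketch of mapping a quality level to per-channel scaling and bias tensors; the module name QLevelAdapter, the MLP layout, and the affine form are assumptions, not the paper's actual IAT.

```python
import torch
import torch.nn as nn

class QLevelAdapter(nn.Module):
    """Hypothetical sketch: map a scalar quality level (QLevel) to
    per-channel scaling and bias tensors for a latent feature map."""
    def __init__(self, num_channels: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * num_channels),
        )

    def forward(self, feat: torch.Tensor, qlevel: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); qlevel: (B, 1)
        scale, bias = self.mlp(qlevel).chunk(2, dim=-1)   # each (B, C)
        scale = scale[:, :, None, None]                   # broadcast to (B, C, 1, 1)
        bias = bias[:, :, None, None]
        # An elementwise affine map is exactly invertible when the scale
        # never hits zero; exp keeps it strictly positive.
        return feat * torch.exp(scale) + bias
```

Using exp for the scale keeps the transform trivially invertible (subtract the bias, divide by the positive scale), in the spirit of the INN setting described above.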
no code implementations • CVPR 2022 • Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi
Aligning signals from different modalities is an important step in vision-language representation learning as it affects the performance of later stages such as cross-modality fusion.
1 code implementation • CVPR 2022 • Jinyu Yang, Jiali Duan, Son Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda Zeng, Trishul Chilimbi, Junzhou Huang
Besides CMA, TCL introduces an intra-modal contrastive objective to provide complementary benefits in representation learning (a sketch of such a combined objective follows this entry).
Ranked #1 on Zero-Shot Cross-Modal Retrieval on COCO 2014
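The snippet names the intra-modal term without giving its form; one plausible way to write the combined objective, with assumed notation (v, t: image/text embeddings; v', t': augmented views; λ: a weighting hyperparameter; each term an InfoNCE-style contrastive loss), is

$$\mathcal{L}_{\mathrm{TCL}} \;=\; \mathcal{L}_{\mathrm{cross}}(v, t) \;+\; \lambda \big( \mathcal{L}_{\mathrm{intra}}(v, v') + \mathcal{L}_{\mathrm{intra}}(t, t') \big).$$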
no code implementations • 1 Dec 2021 • Yanjie Wang, Xu Zou, Zhijun Zhang, Wenhui Xu, Liqun Chen, Sheng Zhong, Luxin Yan, Guodong Wang
Detecting oriented objects and estimating their rotation is a crucial step in analyzing remote sensing images.
1 code implementation • 2 Jul 2021 • Junya Chen, Zhe Gan, Xuan Li, Qing Guo, Liqun Chen, Shuyang Gao, Tagyoung Chung, Yi Xu, Belinda Zeng, Wenlian Lu, Fan Li, Lawrence Carin, Chenyang Tao
InfoNCE-based contrastive representation learners, such as SimCLR, have been tremendously successful in recent years.
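As a reference point, here is a minimal PyTorch sketch of the InfoNCE objective such learners optimize, in its simplest paired form (SimCLR's actual loss contrasts 2N augmented views and masks self-similarity; the function name and temperature are illustrative):

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE over a batch of positive pairs (z1[i], z2[i]);
    every other in-batch pairing serves as a negative."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                      # (B, B) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)          # positives sit on the diagonal
```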
no code implementations • NAACL 2021 • Vivek Subramanian, Matthew Engelhard, Sam Berchuck, Liqun Chen, Ricardo Henao, Lawrence Carin
In many natural language processing applications, identifying predictive text can be as important as the predictions themselves.
no code implementations • 1 Jan 2021 • Liqun Chen, Yizhe Zhang, Dianqi Li, Chenyang Tao, Dong Wang, Lawrence Carin
There has been growing interest in representation learning for text data, based on theoretical arguments and empirical evidence.
no code implementations • CVPR 2021 • Liqun Chen, Dong Wang, Zhe Gan, Jingjing Liu, Ricardo Henao, Lawrence Carin
The primary goal of knowledge distillation (KD) is to encapsulate the knowledge learned by a teacher network within a student network, with the latter being more compact than the former (the standard objective is sketched after this entry).
Ranked #7 on Knowledge Distillation on CIFAR-100
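The paper's own objective may differ, but for orientation here is a minimal sketch of the standard Hinton-style KD loss this line of work builds on (temperature T and mixing weight alpha are conventional defaults, not taken from the paper):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.9):
    """Classic distillation: match temperature-softened teacher and student
    distributions with KL, mixed with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients as in Hinton et al.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```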
no code implementations • 6 Dec 2020 • Dong Wang, Yuewei Yang, Chenyang Tao, Zhe Gan, Liqun Chen, Fanjie Kong, Ricardo Henao, Lawrence Carin
Deep neural networks excel at comprehending complex visual signals, delivering performance on par with, or even superior to, that of human experts.
no code implementations • EMNLP 2020 • Guoyin Wang, Chunyuan Li, Jianqiao Li, Hao Fu, Yuh-Chen Lin, Liqun Chen, Yizhe Zhang, Chenyang Tao, Ruiyi Zhang, Wenlin Wang, Dinghan Shen, Qian Yang, Lawrence Carin
An extension is further proposed to improve the optimal transport (OT) learning, based on the structural and contextual information of the text sequences.
1 code implementation • NAACL 2021 • Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness.
no code implementations • 14 Aug 2020 • Siyang Yuan, Ke Bai, Liqun Chen, Yizhe Zhang, Chenyang Tao, Chunyuan Li, Guoyin Wang, Ricardo Henao, Lawrence Carin
Cross-domain alignment between image objects and text sequences is key to many visual-language tasks, and it poses a fundamental challenge to both computer vision and natural language processing.
1 code implementation • ICML 2020 • Liqun Chen, Zhe Gan, Yu Cheng, Linjie Li, Lawrence Carin, Jingjing Liu
In GOT, cross-domain alignment is formulated as a graph matching problem by representing entities as nodes in a dynamically constructed graph.
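GOT's full formulation couples node-level and edge-level matching; the sketch below shows only the node-level step as entropic optimal transport with Sinkhorn iterations (function name, cost choice, and hyperparameters are assumptions):

```python
import torch

def sinkhorn_alignment(x: torch.Tensor, y: torch.Tensor,
                       eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    """Soft alignment between two entity sets via entropic OT.
    x: (n, d), y: (m, d) node embeddings; returns an (n, m) transport plan."""
    cost = torch.cdist(x, y, p=2)                    # pairwise matching cost
    K = torch.exp(-cost / eps)                       # Gibbs kernel
    mu = x.new_full((x.size(0),), 1.0 / x.size(0))   # uniform node weights
    nu = y.new_full((y.size(0),), 1.0 / y.size(0))
    a, b = torch.ones_like(mu), torch.ones_like(nu)
    for _ in range(iters):                           # Sinkhorn fixed-point updates
        a = mu / (K @ b)
        b = nu / (K.t() @ a)
    return a[:, None] * K * b[None, :]               # plan P = diag(a) K diag(b)
```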
1 code implementation • NeurIPS 2019 • Chenyang Tao, Liqun Chen, Shuyang Dai, Junya Chen, Ke Bai, Dong Wang, Jianfeng Feng, Wenlian Lu, Georgiy Bobashev, Lawrence Carin
Inference, estimation, sampling and likelihood evaluation are four primary goals of probabilistic modeling.
no code implementations • 20 Nov 2019 • Wenlin Wang, Hongteng Xu, Zhe Gan, Bai Li, Guoyin Wang, Liqun Chen, Qian Yang, Wenqi Wang, Lawrence Carin
We propose a novel graph-driven generative model that unifies multiple heterogeneous learning tasks in the same framework.
1 code implementation • NeurIPS 2019 • Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, Lawrence Carin
This paper considers a novel variational formulation of network embeddings, with special focus on textual networks.
no code implementations • 24 Jun 2019 • Dong Wang, Yitong Li, Wei Cao, Liqun Chen, Qi Wei, Lawrence Carin
We propose a Leaked Motion Video Predictor (LMVP) to predict future frames by capturing the spatial and temporal dependencies from given inputs.
no code implementations • ACL 2019 • Liqun Chen, Guoyin Wang, Chenyang Tao, Dinghan Shen, Pengyu Cheng, Xinyuan Zhang, Wenlin Wang, Yizhe Zhang, Lawrence Carin
Constructing highly informative network embeddings is an important tool for network analysis.
no code implementations • ACL 2019 • Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, Lawrence Carin
Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation with latent variables.
no code implementations • ICLR 2019 • Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, Lawrence Carin
Sequence-to-sequence models are commonly trained via maximum likelihood estimation (MLE).
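Concretely, MLE training reduces to teacher-forced cross-entropy over shifted target tokens; a minimal sketch, assuming a model(src, tgt_prefix) -> logits interface:

```python
import torch
import torch.nn.functional as F

def mle_loss(model, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """Teacher forcing: predict token t from the gold tokens before t,
    and maximize the log-likelihood (minimize cross-entropy)."""
    logits = model(src, tgt[:, :-1])                 # (B, T-1, V)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),         # flatten batch and time
        tgt[:, 1:].reshape(-1),                      # next-token targets
    )
```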
no code implementations • 2 Nov 2018 • Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Liqun Chen, Dinghan Shen, Guoyin Wang, Lawrence Carin
Sequence generation with reinforcement learning (RL) has received significant attention recently.
no code implementations • 27 Sep 2018 • Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Lawrence Carin
Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation.
1 code implementation • NeurIPS 2018 • Liqun Chen, Shuyang Dai, Chenyang Tao, Dinghan Shen, Zhe Gan, Haichao Zhang, Yizhe Zhang, Lawrence Carin
The discrete nature of text, however, hinders the application of GANs to text-generation tasks.
1 code implementation • ICML 2018 • Chenyang Tao, Liqun Chen, Ricardo Henao, Jianfeng Feng, Lawrence Carin
To assess the difference between real and synthetic data, Generative Adversarial Networks (GANs) are trained using a distribution discrepancy measure.
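The canonical instance of such a discrepancy is the Jensen-Shannon divergence implicit in the original GAN objective, reproduced here for context (the paper itself studies alternative measures):

$$\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]$$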
no code implementations • ICML 2018 • Liqun Chen, Chenyang Tao, Ruiyi Zhang, Ricardo Henao, Lawrence Carin
Recent advances on the scalability and flexibility of variational inference have made it successful at unravelling hidden patterns in complex data.
no code implementations • 29 May 2018 • Changyou Chen, Ruiyi Zhang, Wenlin Wang, Bai Li, Liqun Chen
There has been recent interest in developing scalable Bayesian sampling methods such as stochastic gradient MCMC (SG-MCMC) and Stein variational gradient descent (SVGD) for big-data analysis.
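For reference, the SVGD update of Liu and Wang moves a set of particles along a kernelized gradient of the log-density; a minimal NumPy sketch with an RBF kernel (bandwidth and step size are assumptions):

```python
import numpy as np

def svgd_step(particles: np.ndarray, grad_log_p, step: float = 0.1, h: float = 1.0):
    """One SVGD update. particles: (n, d); grad_log_p: (n, d) -> (n, d).
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]   # diff[i, j] = x_i - x_j
    K = np.exp(-(diff ** 2).sum(-1) / (2.0 * h))           # RBF kernel matrix (n, n)
    grad_K = diff / h * K[:, :, None]                      # grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_log_p(particles) + grad_K.sum(axis=1)) / n
    return particles + step * phi                          # driving term + repulsion
```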
no code implementations • NeurIPS 2017 • Yunchen Pu, Wei-Yao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, Lawrence Carin
A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data.
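In symbols, the two joint factorizations the abstract refers to are

$$p_\theta(x, z) = p(z)\, p_\theta(x \mid z), \qquad q_\phi(x, z) = q(x)\, q_\phi(z \mid x),$$

with q(x) the empirical data distribution and p(z) a simple prior; training encourages the two joints to match.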
1 code implementation • NeurIPS 2017 • Zhe Gan, Liqun Chen, Wei-Yao Wang, Yunchen Pu, Yizhe Zhang, Hao Liu, Chunyuan Li, Lawrence Carin
The generators are designed to learn the two-way conditional distributions between the two domains, while the discriminators implicitly define a ternary discriminative function, which is trained to distinguish real data pairs and two kinds of fake data pairs.
Tasks: Image-to-Image Translation, Semi-Supervised Image Classification (+1 more)
2 code implementations • 6 Sep 2017 • Liqun Chen, Shuyang Dai, Yunchen Pu, Chunyuan Li, Qinliang Su, Lawrence Carin
A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence.
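For reference, the symmetric KL divergence sums the two directed divergences; applied to the encoder and decoder joints from the entry above, it penalizes mismatch in both directions:

$$\mathcal{D}_{\mathrm{sKL}}\big(q_\phi(x, z),\, p_\theta(x, z)\big) = \mathrm{KL}\big(q_\phi \,\|\, p_\theta\big) + \mathrm{KL}\big(p_\theta \,\|\, q_\phi\big)$$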
5 code implementations • NeurIPS 2017 • Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, Lawrence Carin
We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching.
no code implementations • ICML 2018 • Changyou Chen, Chunyuan Li, Liqun Chen, Wenlin Wang, Yunchen Pu, Lawrence Carin
Distinct from normalizing flows and GANs, continuous-time flows (CTFs) can be adopted to achieve the above two goals in one framework, with theoretical guarantees.
no code implementations • 24 Apr 2017 • Jorden Whitefield, Liqun Chen, Frank Kargl, Andrew Paverd, Steve Schneider, Helen Treharne, Stephan Wesemeyer
This paper focuses on the formal analysis of a particular element of the security mechanisms for V2X found in many proposals: the revocation of malicious or misbehaving vehicles from the V2X system by invalidating their credentials.
Subjects: Cryptography and Security (D.2.4; D.4.6)