Search Results for author: Tianlang Chen

Found 19 papers, 7 papers with code

Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products

1 code implementation • 18 Jan 2024 • Shengjie Luo, Tianlang Chen, Aditi S. Krishnapriyan

We mathematically connect the commonly used Clebsch-Gordan coefficients to the Gaunt coefficients, which are integrals of products of three spherical harmonics.
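The Gaunt coefficient mentioned here — the integral of a product of three spherical harmonics — is available in SymPy, so its textbook factorization into Wigner 3j symbols (Clebsch-Gordan coefficients up to normalization and phase) can be checked directly. A minimal sketch, independent of the paper's implementation:

```python
# Verify the standard identity relating Gaunt coefficients to Wigner 3j
# symbols, using SymPy's exact symbolic routines.
from sympy import sqrt, pi
from sympy.physics.wigner import gaunt, wigner_3j

l1, l2, l3 = 1, 1, 2
m1, m2, m3 = 1, 1, -2

# Gaunt coefficient: the integral of Y_l1^m1 * Y_l2^m2 * Y_l3^m3 over the sphere.
g = gaunt(l1, l2, l3, m1, m2, m3)

# Textbook factorization into two 3j symbols (one with all m = 0).
via_3j = (sqrt((2*l1 + 1) * (2*l2 + 1) * (2*l3 + 1) / (4 * pi))
          * wigner_3j(l1, l2, l3, 0, 0, 0)
          * wigner_3j(l1, l2, l3, m1, m2, m3))

assert abs(float(g) - float(via_3j)) < 1e-12
print(g)
```

The all-zero 3j symbol vanishes unless l1 + l2 + l3 is even and the triangle inequality holds, which is what makes many products in the Fourier basis drop out.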

One Transformer Can Understand Both 2D & 3D Molecular Data

1 code implementation • 4 Oct 2022 • Shengjie Luo, Tianlang Chen, Yixian Xu, Shuxin Zheng, Tie-Yan Liu, LiWei Wang, Di He

To achieve this goal, we develop a novel Transformer-based molecular model called Transformer-M, which can take molecular data in either 2D or 3D format as input and generate meaningful semantic representations.

Graph Regression • molecular representation +1

TransVG++: End-to-End Visual Grounding with Language Conditioned Vision Transformer

1 code implementation • 14 Jun 2022 • Jiajun Deng, Zhengyuan Yang, Daqing Liu, Tianlang Chen, Wengang Zhou, Yanyong Zhang, Houqiang Li, Wanli Ouyang

Furthermore, we devise a Language Conditioned Vision Transformer that removes external fusion modules and reuses the uni-modal ViT for vision-language fusion at its intermediate layers.

Visual Grounding

More Than Just Attention: Improving Cross-Modal Attentions with Contrastive Constraints for Image-Text Matching

no code implementations • 20 May 2021 • Yuxiao Chen, Jianbo Yuan, Long Zhao, Tianlang Chen, Rui Luo, Larry Davis, Dimitris N. Metaxas

Cross-modal attention mechanisms have been widely applied to the image-text matching task and have achieved remarkable improvements thanks to their capability of learning fine-grained relevance across different modalities.

Contrastive Learning • Image Captioning +4

TransVG: End-to-End Visual Grounding with Transformers

2 code implementations • ICCV 2021 • Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, Houqiang Li

In this paper, we present a neat yet effective transformer-based framework for visual grounding, namely TransVG, to address the task of grounding a language query to the corresponding region of an image.

Referring Expression Comprehension • Visual Grounding

Global Image Sentiment Transfer

no code implementations • 22 Jun 2020 • Jie An, Tianlang Chen, Songyang Zhang, Jiebo Luo

This work proposes a novel framework, consisting of a reference image retrieval step and a global sentiment transfer step, to transfer the sentiment of images according to a given sentiment tag.

Image Retrieval • Retrieval +3

Image Sentiment Transfer

no code implementations • 19 Jun 2020 • Tianlang Chen, Wei Xiong, Haitian Zheng, Jiebo Luo

In this paper, we propose an effective and flexible framework that performs image sentiment transfer at the object level.

Disentanglement • Image-to-Image Translation +2

Adaptive Offline Quintuplet Loss for Image-Text Matching

1 code implementation • ECCV 2020 • Tianlang Chen, Jiajun Deng, Jiebo Luo

For each image or text anchor in a training mini-batch, the model is trained to distinguish between a positive and the most confusing negative of the anchor mined from the mini-batch (i.e., the online hard negative).

Image-text matching • Text Matching

Expressing Objects just like Words: Recurrent Visual Embedding for Image-Text Matching

no code implementations • 20 Feb 2020 • Tianlang Chen, Jiebo Luo

Existing image-text matching approaches typically infer the similarity of an image-text pair by capturing and aggregating the affinities between the text and each independent object of the image.

Image-text matching • Object +4
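The "independent object" baseline this abstract contrasts against can be sketched as follows: each word is scored against each detected region, and the affinities are aggregated with no interaction between the regions themselves. A hypothetical minimal illustration (feature matrices and aggregation choices are assumptions, not the paper's method):

```python
# Aggregate word-region cosine affinities into one image-text score,
# treating each region independently.
import numpy as np

def pair_similarity(words, regions):
    """words: (num_words, d); regions: (num_regions, d) feature matrices."""
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    affinity = w @ r.T                  # cosine affinity, word x region
    per_word = affinity.max(axis=1)     # best-matching region for each word
    return per_word.mean()              # aggregate over the words

rng = np.random.default_rng(0)
words, regions = rng.normal(size=(5, 8)), rng.normal(size=(3, 8))
print(pair_similarity(words, regions))
```

The paper's point is that this treats regions as an unordered bag; expressing objects "like words" instead feeds them through a recurrent encoder so that inter-object context informs the matching.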

Grounding-Tracking-Integration

no code implementations • 13 Dec 2019 • Zhengyuan Yang, Tushar Kumar, Tianlang Chen, Jinsong Su, Jiebo Luo

In this paper, we study Tracking by Language, which localizes the target box sequence in a video based on a language query.

Large-scale Tag-based Font Retrieval with Generative Feature Learning

no code implementations • ICCV 2019 • Tianlang Chen, Zhaowen Wang, Ning Xu, Hailin Jin, Jiebo Luo

In this paper, we address the problem of large-scale tag-based font retrieval which aims to bring semantics to the font selection process and enable people without expert knowledge to use fonts effectively.

Retrieval • TAG

"Factual" or "Emotional": Stylized Image Captioning with Adaptive Learning and Attention

no code implementations • ECCV 2018 • Tianlang Chen, Zhongping Zhang, Quanzeng You, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo

It uses two groups of matrices to capture the factual and stylized knowledge, respectively, and automatically learns the word-level weights of the two groups based on previous context.

Image Captioning

When Saliency Meets Sentiment: Understanding How Image Content Invokes Emotion and Sentiment

no code implementations • 14 Nov 2016 • Honglin Zheng, Tianlang Chen, Jiebo Luo

The experiments on a representative image emotion dataset have shown an interesting correlation between saliency and sentiment across different scene types, and in turn shed light on the mechanism of visual sentiment evocation.

Saliency Detection • Sentiment Analysis +1
