Search Results for author: Tianqin Li

Found 7 papers, 4 papers with code

Does resistance to style-transfer equal Global Shape Bias? Measuring network sensitivity to global shape configuration

no code implementations • 11 Oct 2023 • Ziqi Wen, Tianqin Li, Zhi Jing, Tai Sing Lee

The current benchmark for evaluating a model's global shape bias is a set of style-transferred images, under the assumption that resistance to the style-transfer attack reflects the development of global structure sensitivity in the model.

Image Classification · Object Recognition +3

Conditional Contrastive Learning with Kernel

1 code implementation • ICLR 2022 • Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov

Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables.

Contrastive Learning
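
The conditional sampling procedure described in the abstract above can be sketched in a few lines. This is a minimal illustration for a discrete conditioning variable (the function name and signature are hypothetical, not from the paper's released code); the paper's kernel formulation is understood to relax the exact-match condition to a similarity weighting, e.g. for continuous variables:

```python
def conditional_pairs(variable, anchor_idx):
    """Build positive/negative indices for an anchor, conditioned on a
    discrete variable: samples sharing the anchor's value are positives,
    all others are negatives. (Illustrative sketch only.)"""
    anchor_value = variable[anchor_idx]
    positives = [i for i, v in enumerate(variable)
                 if v == anchor_value and i != anchor_idx]
    negatives = [i for i, v in enumerate(variable) if v != anchor_value]
    return positives, negatives
```

When the conditioning variable is a class label, this reduces to supervised contrastive sampling; other choices (attributes, cluster assignments) give other conditional objectives.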

TPU-GAN: Learning temporal coherence from dynamic point cloud sequences

1 code implementation • ICLR 2022 • Zijie Li, Tianqin Li, Amir Barati Farimani

Our model, Temporal Point cloud Upsampling GAN (TPU-GAN), can implicitly learn the underlying temporal coherence from point cloud sequences, which in turn guides the generator to produce temporally coherent output.

Generative Adversarial Network · point cloud upsampling +1

Prototype memory and attention mechanisms for few shot image generation

no code implementations • ICLR 2022 • Tianqin Li, Zijie Li, Andrew Luo, Harold Rockwell, Amir Barati Farimani, Tai Sing Lee

To test our proposal, we show in a few-shot image generation task that having a prototype memory during attention can improve image synthesis quality, learn interpretable visual concept clusters, and improve the robustness of the model.

Image Generation · Online Clustering

Integrating Auxiliary Information in Self-supervised Learning

no code implementations • 5 Jun 2021 • Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency

Our approach contributes as follows: 1) compared to conventional self-supervised representations, the auxiliary-information-infused self-supervised representations bring performance closer to that of supervised representations; 2) the presented Cl-InfoNCE also works with clusters constructed without supervision (e.g., k-means clusters) and outperforms strong clustering-based self-supervised learning approaches, such as the Prototypical Contrastive Learning (PCL) method; 3) we show that Cl-InfoNCE may be a better way to leverage data clustering information, by comparing it to the baseline approach of learning to predict cluster assignments with a cross-entropy loss.

Clustering · Contrastive Learning +1
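
As a rough sketch of how cluster assignments can drive a contrastive objective, the snippet below treats same-cluster pairs as positives in an InfoNCE-style loss. This is an assumed reading of the cluster-conditioned idea, not the paper's exact Cl-InfoNCE formulation; the function name and the simple averaging are hypothetical:

```python
import numpy as np

def cluster_infonce(embeddings, clusters, temperature=0.1):
    """InfoNCE-style loss where pairs sharing a cluster assignment are
    positives and all other (non-self) pairs serve as the contrast set.
    Illustrative sketch only, not the paper's exact objective."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # cosine similarities, scaled
    n = len(z)
    loss, count = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j and clusters[i] == clusters[j]:
                logits = np.delete(sim[i], i)  # all non-self pairs for anchor i
                # negative log-softmax of the positive pair
                loss += np.log(np.exp(logits).sum()) - sim[i, j]
                count += 1
    return loss / max(count, 1)
```

With cluster assignments that match the embedding geometry, positives are high-similarity pairs and the loss is small; with mismatched assignments it grows, which is the signal a clustering-conditioned objective exploits.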
