Search Results for author: Lam Tran

Found 7 papers, 1 paper with code

KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All

no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le

Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.

Continual Learning, Meta-Learning

Robust Contrastive Learning With Theory Guarantee

no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le

Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.

Contrastive Learning
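
As a rough illustration of the contrastive objective the abstract refers to, here is a minimal InfoNCE-style loss sketch. It is not the paper's exact formulation; the encoder, batch size, and temperature are assumptions made for the example.

```python
# Minimal InfoNCE-style contrastive loss sketch (illustrative only, not the
# paper's exact objective). Assumes two augmented "views" of each sample have
# already been encoded into z1 and z2; batch size and temperature are made up.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # pairwise similarities between views
    labels = torch.arange(z1.size(0))        # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Example: random embeddings stand in for encoder outputs.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```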

Conditional Support Alignment for Domain Adaptation with Label Shift

no code implementations • 29 May 2023 • Anh T Nguyen, Lam Tran, Anh Tong, Tuan-Duy H. Nguyen, Toan Tran

In this paper, we propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the source and target domains' feature representation distributions, yielding a representation that is more useful for the classification task.

Unsupervised Domain Adaptation
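
The excerpt does not spell out how CASA estimates its divergence, so the following is only a generic, hypothetical illustration of a symmetric, support-based discrepancy between class-conditioned source and target features (a Chamfer-style nearest-neighbour gap), not the authors' estimator.

```python
# Hypothetical sketch: symmetric support gap between source and target features
# that share one class label, measured as the average nearest-neighbour distance
# in both directions. Feature dimensions and sample counts are assumptions.
import torch

def symmetric_support_gap(feat_src, feat_tgt):
    """feat_src: (n_s, d), feat_tgt: (n_t, d) features of the same class."""
    d = torch.cdist(feat_src, feat_tgt)          # pairwise distances
    src_to_tgt = d.min(dim=1).values.mean()      # distance from source points to target support
    tgt_to_src = d.min(dim=0).values.mean()      # and vice versa
    return src_to_tgt + tgt_to_src

fs, ft = torch.randn(64, 256), torch.randn(48, 256)
gap = symmetric_support_gap(fs, ft)
```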

Improving Multi-task Learning via Seeking Task-based Flat Regions

no code implementations • 24 Nov 2022 • Hoang Phan, Lam Tran, Ngoc N. Tran, Nhat Ho, Dinh Phung, Trung Le

Multi-Task Learning (MTL) is a widely used and powerful paradigm for training deep neural networks that allows more than one objective to be learned with a single backbone.

Multi-Task Learning, speech-recognition, +1
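
A minimal sketch of the "single shared backbone, multiple task heads" setup the abstract describes. Layer sizes, the two example tasks, and the simple summed loss are assumptions for illustration; the paper's contribution (seeking task-based flat regions) is not implemented here.

```python
# Sketch of a shared-backbone multi-task network with two illustrative heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden=256, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_cls = nn.Linear(hidden, n_classes)   # e.g. a classification task
        self.head_reg = nn.Linear(hidden, 1)           # e.g. a regression task

    def forward(self, x):
        h = self.backbone(x)                            # shared representation
        return self.head_cls(h), self.head_reg(h)

model = MultiTaskNet()
x = torch.randn(16, 128)
y_cls, y_reg = torch.randint(0, 10, (16,)), torch.randn(16, 1)
logits, pred = model(x)
loss = F.cross_entropy(logits, y_cls) + F.mse_loss(pred, y_reg)  # naive equal weighting
```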

Security and Privacy Enhanced Gait Authentication with Random Representation Learning and Digital Lockers

no code implementations • 5 Aug 2021 • Lam Tran, Thuc Nguyen, Hyunil Kim, Deokjai Choi

However, most existing approaches store the enrolled gait pattern insecurely for matching against the validation pattern, thus posing critical security and privacy issues.

Binarization, Representation Learning

On Benefits of Selection Diversity via Bilevel Exclusive Sparsity

no code implementations • CVPR 2016 • Haichuan Yang, Yijun Huang, Lam Tran, Ji Liu, Shuai Huang

In this paper, we propose a general bilevel exclusive sparsity formulation that pursues diversity by restricting both the overall sparsity and the sparsity within each group.

Feature Selection, Image Classification
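
As a rough illustration of the idea, here is a sketch of an exclusive-sparsity ("exclusive lasso") style penalty that combines a global L1 term with a squared within-group L1 term, which pushes features in the same group to compete and thus encourages selection diversity. The group structure and regularization weights are made up; the paper's bilevel formulation is more general than this toy penalty.

```python
# Illustrative bilevel-style penalty: global L1 sparsity plus squared L1 norms
# within each group (exclusive-lasso term). Groups and weights are assumptions.
import numpy as np

def bilevel_exclusive_penalty(w, groups, lam_overall=0.01, lam_group=0.1):
    """w: 1-D weight vector; groups: list of index arrays partitioning w."""
    overall = lam_overall * np.sum(np.abs(w))                            # overall sparsity
    within = lam_group * sum(np.sum(np.abs(w[g])) ** 2 for g in groups)  # per-group exclusivity
    return overall + within

w = np.random.randn(12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
penalty = bilevel_exclusive_penalty(w, groups)
```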
