Search Results for author: Trung Le

Found 69 papers, 30 papers with code

Parameterized Rate-Distortion Stochastic Encoder

no code implementations ICML 2020 Quan Hoang, Trung Le, Dinh Phung

We propose a novel gradient-based tractable approach for the Blahut-Arimoto (BA) algorithm to compute the rate-distortion function where the BA algorithm is fully parameterized.
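
For context, the sketch below shows the classical tabular Blahut-Arimoto iteration, which computes a single point on the rate-distortion curve of a discrete source. It is only a reference baseline under assumed inputs (a source distribution `p_x`, a distortion matrix `dist`, and a trade-off slope `beta`) and does not reproduce the paper's gradient-based, fully parameterized variant.

```python
import numpy as np

def blahut_arimoto_rd(p_x, dist, beta, n_iter=500, tol=1e-10):
    """Classical (tabular) Blahut-Arimoto iteration: one point (R, D) on the
    rate-distortion curve of a discrete source.
    p_x:  (n,) source distribution
    dist: (n, m) distortion matrix d(x, x_hat)
    beta: trade-off slope between rate and distortion
    """
    m = dist.shape[1]
    q = np.full(m, 1.0 / m)                      # reproduction marginal q(x_hat)
    for _ in range(n_iter):
        # conditional Q(x_hat | x) proportional to q(x_hat) * exp(-beta * d(x, x_hat))
        Q = q[None, :] * np.exp(-beta * dist)
        Q /= Q.sum(axis=1, keepdims=True)
        q_new = p_x @ Q                          # updated reproduction marginal
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    Q = q[None, :] * np.exp(-beta * dist)
    Q /= Q.sum(axis=1, keepdims=True)
    D = float(np.sum(p_x[:, None] * Q * dist))
    R = float(np.sum(p_x[:, None] * Q * np.log(Q / q[None, :])))  # rate in nats
    return R, D
```

Sweeping `beta` traces out the rate-distortion curve; per the abstract, the paper replaces this tabular iteration with a fully parameterized, gradient-based formulation.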

Frequency Attention for Knowledge Distillation

1 code implementation 9 Mar 2024 Cuong Pham, Van-Anh Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do


Inspired by the benefits of the frequency domain, we propose a novel module that functions as an attention mechanism in the frequency domain.

Image Classification Knowledge Distillation +3

Optimal Transport for Structure Learning Under Missing Data

1 code implementation 23 Feb 2024 Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung

Merely filling in missing values with existing imputation methods and subsequently applying structure learning on the complete data is empirically shown to be sub-optimal.

Causal Discovery Imputation

A Class-aware Optimal Transport Approach with Higher-Order Moment Matching for Unsupervised Domain Adaptation

no code implementations 29 Jan 2024 Tuan Nguyen, Van Nguyen, Trung Le, He Zhao, Quan Hung Tran, Dinh Phung

Additionally, we propose minimizing class-aware Higher-order Moment Matching (HMM) to align the corresponding class regions on the source and target domains.

Unsupervised Domain Adaptation
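
As a rough illustration of the general idea of moment matching only (not the paper's exact class-aware HMM objective), the hypothetical `moment_matching_loss` below penalizes gaps between the first few coordinate-wise moments of source and target features assigned to the same class.

```python
import torch

def moment_matching_loss(fs, ft, max_order=3):
    """Hypothetical sketch: match the first `max_order` coordinate-wise raw moments
    of source features `fs` (n_s, d) and target features `ft` (n_t, d) that belong
    to the same class. Generic higher-order moment matching for illustration."""
    loss = fs.new_zeros(())
    for k in range(1, max_order + 1):
        m_src = (fs ** k).mean(dim=0)   # k-th moment of source features, per dimension
        m_tgt = (ft ** k).mean(dim=0)   # k-th moment of target features, per dimension
        loss = loss + torch.norm(m_src - m_tgt, p=2)
    return loss
```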

DiffAugment: Diffusion based Long-Tailed Visual Relationship Recognition

no code implementations 1 Jan 2024 Parul Gupta, Tuan Nguyen, Abhinav Dhall, Munawar Hayat, Trung Le, Thanh-Toan Do

The task of Visual Relationship Recognition (VRR) aims to identify relationships between two interacting objects in an image and is particularly challenging due to the widely-spread and highly imbalanced distribution of <subject, relation, object> triplets.

Object Relation

Class-Prototype Conditional Diffusion Model for Continual Learning with Generative Replay

no code implementations 10 Dec 2023 Khanh Doan, Quyen Tran, Tuan Nguyen, Dinh Phung, Trung Le

To address this, we propose the Class-Prototype Conditional Diffusion Model (CPDM), a GR-based approach for continual learning that enhances image quality in generators and thus reduces catastrophic forgetting in classifiers.

Continual Learning Denoising +1

KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All

no code implementations 26 Nov 2023 Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le

Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.

Continual Learning Meta-Learning

Robust Contrastive Learning With Theory Guarantee

no code implementations 16 Nov 2023 Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le

Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.

Contrastive Learning

Learning Time-Invariant Representations for Individual Neurons from Population Dynamics

1 code implementation NeurIPS 2023 Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar Sümbül

This suggests that neuronal activity is a combination of its time-invariant identity and the inputs the neuron receives from the rest of the circuit.

Self-Supervised Learning

Cross-adversarial local distribution regularization for semi-supervised medical image segmentation

no code implementations 2 Oct 2023 Thanh Nguyen-Duc, Trung Le, Roland Bammer, He Zhao, Jianfei Cai, Dinh Phung

Medical semi-supervised segmentation is a technique where a model is trained to segment objects of interest in medical images with limited annotated data.

Image Segmentation Segmentation +2

Unleash Data Generation for Efficient and Effective Data-free Knowledge Distillation

no code implementations 30 Sep 2023 Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Phung

By reinitializing the noisy layer in each iteration, we aim to facilitate the generation of diverse samples while still retaining the method's efficiency, thanks to the ease of learning provided by LTE.

Data-free Knowledge Distillation

Optimal Transport Model Distributional Robustness

1 code implementation NeurIPS 2023 Van-Anh Nguyen, Trung Le, Anh Tuan Bui, Thanh-Toan Do, Dinh Phung

Interestingly, our developed theories allow us to flexibly incorporate the concept of sharpness awareness into training, whether it's a single model, ensemble models, or Bayesian Neural Networks, by considering specific forms of the center model distribution.

Learning to Quantize Vulnerability Patterns and Match to Locate Statement-Level Vulnerabilities

1 code implementation 26 May 2023 Michael Fu, Trung Le, Van Nguyen, Chakkrit Tantithamthavorn, Dinh Phung

Prior studies found that vulnerabilities across different vulnerable programs may exhibit similar vulnerable scopes, implicitly forming discernible vulnerability patterns that can be learned by DL models through supervised training.

Vulnerability Detection

Learning Directed Graphical Models with Optimal Transport

1 code implementation 25 May 2023 Vy Vo, Trung Le, Long-Tung Vuong, He Zhao, Edwin Bonilla, Dinh Phung

Estimating the parameters of a probabilistic directed graphical model from incomplete data remains a long-standing challenge.

Representation Learning

Sharpness & Shift-Aware Self-Supervised Learning

no code implementations 17 May 2023 Ngoc N. Tran, Son Duong, Hoang Phan, Tung Pham, Dinh Phung, Trung Le

Self-supervised learning aims to extract meaningful features from unlabeled data for further downstream tasks.

Classification Contrastive Learning +2

Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

1 code implementation 26 Apr 2023 Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung

The key factor for the success of adversarial training is the capability to generate qualified and divergent adversarial examples which satisfy some objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models).
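
For reference, the standard single-model L-infinity PGD attack below illustrates what generating adversarial examples that maximize the model loss means in its simplest form; it does not reproduce the paper's task-oriented multi-objective formulation for attacking multiple models simultaneously, and the step sizes are illustrative defaults only.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: repeatedly step in the direction that increases the
    classification loss, then project back into the eps-ball around the clean input."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()            # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```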

Hyperbolic Geometry in Computer Vision: A Survey

no code implementations 21 Apr 2023 Pengfei Fang, Mehrtash Harandi, Trung Le, Dinh Phung

Hyperbolic geometry, a Riemannian manifold endowed with constant negative sectional curvature, has been considered an alternative embedding space in many learning scenarios, e.g., natural language processing, graph learning, etc., as a result of its intriguing property of encoding the data's hierarchical structure (like irregular graphs or tree-like data).

Graph Learning Image Classification

Vector Quantized Wasserstein Auto-Encoder

no code implementations 12 Feb 2023 Tung-Long Vuong, Trung Le, He Zhao, Chuanxia Zheng, Mehrtash Harandi, Jianfei Cai, Dinh Phung

Learning deep discrete latent representations offers the promise of better symbolic and summarized abstractions that are more useful for subsequent downstream tasks.

Clustering Image Reconstruction

Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance

no code implementations 5 Dec 2022 Ngoc N. Tran, Anh Tuan Bui, Dinh Phung, Trung Le

On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models.

Continual Learning with Optimal Transport based Mixture Model

no code implementations 30 Nov 2022 Quyen Tran, Hoang Phan, Khoat Than, Dinh Phung, Trung Le

To address this issue, in this work, we first propose an online mixture model learning approach based on nice properties of the mature optimal transport theory (OT-MM).

Class Incremental Learning Incremental Learning

Improving Multi-task Learning via Seeking Task-based Flat Regions

no code implementations 24 Nov 2022 Hoang Phan, Lam Tran, Ngoc N. Tran, Nhat Ho, Dinh Phung, Trung Le

Multi-Task Learning (MTL) is a widely-used and powerful learning paradigm for training deep neural networks that allows learning more than one objective by a single backbone.

Multi-Task Learning speech-recognition +1

Vision Transformer Visualization: What Neurons Tell and How Neurons Behave?

1 code implementation 14 Oct 2022 Van-Anh Nguyen, Khanh Pham Dinh, Long Tung Vuong, Thanh-Toan Do, Quan Hung Tran, Dinh Phung, Trung Le

Our approach departs from the computational process of ViTs with a focus on visualizing the local and global information in input images and the latent feature embeddings at multiple levels.

Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations

1 code implementation 27 Sep 2022 Vy Vo, Trung Le, Van Nguyen, He Zhao, Edwin Bonilla, Gholamreza Haffari, Dinh Phung

Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that have long been notorious for their lack of explainability.

counterfactual feature selection +3

Cross Project Software Vulnerability Detection via Domain Adaptation and Max-Margin Principle

1 code implementation 19 Sep 2022 Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, John Grundy, Hung Nguyen, Dinh Phung

However, there are still two open and significant issues for SVD in terms of i) learning automatic representations to improve the predictive performance of SVD, and ii) tackling the scarcity of labeled vulnerability datasets that conventionally require laborious labeling effort by experts.

Domain Adaptation Representation Learning +2

An Additive Instance-Wise Approach to Multi-class Model Interpretation

1 code implementation 7 Jul 2022 Vy Vo, Van Nguyen, Trung Le, Quan Hung Tran, Gholamreza Haffari, Seyit Camtepe, Dinh Phung

A popular attribution-based approach is to exploit local neighborhoods for learning instance-specific explainers in an additive manner.

Additive models Interpretable Machine Learning

STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer

no code implementations 9 Jun 2022 Trung Le, Eli Shlizerman

Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior.

Contrastive Learning

Stochastic Multiple Target Sampling Gradient Descent

1 code implementation 4 Jun 2022 Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung

Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem.

Multi-Task Learning
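
For background, Stein Variational Gradient Descent (SVGD) updates a set of particles along a kernelized gradient of the target log-density. The minimal NumPy sketch below shows one standard SVGD step with an RBF kernel and the median-heuristic bandwidth; it is a generic reference, not the paper's stochastic multiple-target sampling method.

```python
import numpy as np

def svgd_step(X, grad_logp, step=0.1):
    """One standard SVGD update with an RBF kernel and median-heuristic bandwidth.
    X: (n, d) particles; grad_logp: callable returning the (n, d) score of the target."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8                     # median-heuristic bandwidth
    K = np.exp(-sq / h)                                          # RBF kernel matrix
    scores = grad_logp(X)
    # attractive term (kernel-smoothed scores) + repulsive term (kernel gradients)
    phi = (K @ scores + (2.0 / h) * (K.sum(axis=1, keepdims=True) * X - K @ X)) / n
    return X + step * phi

# e.g., for a standard Gaussian target, iterate: X = svgd_step(X, lambda X: -X)
```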

Global-Local Regularization Via Distributional Robustness

1 code implementation 1 Mar 2022 Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho, Dinh Phung

First, they purely focus on local regularization to strengthen model robustness, missing a global regularization effect which is useful in many real-world applications (e.g., domain adaptation, domain generalization, and adversarial machine learning).

Domain Generalization

A Unified Wasserstein Distributional Robustness Framework for Adversarial Training

1 code implementation ICLR 2022 Tuan Anh Bui, Trung Le, Quan Tran, He Zhao, Dinh Phung

We introduce a new Wasserstein cost function and a new series of risk functions, with which we show that standard AT methods are special cases of their counterparts in our framework.

On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources

no code implementations NeurIPS 2021 Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung

Domain adaptation (DA) benefits from the rigorous theoretical works that study its insightful characteristics and various aspects, e.g., learning domain-invariant representations and its trade-off.

Domain Generalization Transfer Learning

On Label Shift in Domain Adaptation via Wasserstein Distance

no code implementations 29 Oct 2021 Trung Le, Dat Do, Tuan Nguyen, Huy Nguyen, Hung Bui, Nhat Ho, Dinh Phung

We study the label shift problem between the source and target domains in general domain adaptation (DA) settings.

Domain Adaptation

ReGVD: Revisiting Graph Neural Networks for Vulnerability Detection

1 code implementation 14 Oct 2021 Van-Anh Nguyen, Dai Quoc Nguyen, Van Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

Identifying vulnerabilities in the source code is essential to protect the software systems from cyber security attacks.

Graph Embedding text-classification +2

STEM: An Approach to Multi-Source Domain Adaptation With Guarantees

1 code implementation 1 Oct 2021 Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations 29 Sep 2021 Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, by minimizing the WS distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label on the latent space, which helps to strengthen the generalization ability of the classifier on the adversarial examples.

Fine-grained Software Vulnerability Detection via Information Theory and Contrastive Learning

no code implementations 29 Sep 2021 Van Nguyen, Trung Le, John C. Grundy, Dinh Phung

Software vulnerabilities existing in a program or function of computer systems have become a serious and crucial concern.

Contrastive Learning Representation Learning +1

LASSO: Latent Sub-spaces Orientation for Domain Generalization

no code implementations 29 Sep 2021 Long Tung Vuong, Trung Quoc Phung, Toan Tran, Anh Tuan Tran, Dinh Phung, Trung Le

To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains.

Domain Generalization

SyntheticFur dataset for neural rendering

1 code implementation 13 May 2021 Trung Le, Ryan Poplin, Fred Bertsch, Andeep Singh Toor, Margaret L. Oh

We introduce a new dataset called SyntheticFur built specifically for machine learning training.

Generative Adversarial Network Neural Rendering

Improved and Efficient Text Adversarial Attacks using Target Information

no code implementations 27 Apr 2021 Mahmoud Hossam, Trung Le, He Zhao, Viet Huynh, Dinh Phung

There has been recently a growing interest in studying adversarial examples on natural language models in the black-box setting.

Sentence

Text Generation with Deep Variational GAN

no code implementations 27 Apr 2021 Mahmoud Hossam, Trung Le, Michael Papasimeon, Viet Huynh, Dinh Phung

Generating realistic sequences is a central task in many machine learning applications.

Text Generation

On Transportation of Mini-batches: A Hierarchical Approach

2 code implementations 11 Feb 2021 Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho

To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), that finds the optimal coupling between mini-batches and can be seen as an approximation to a well-defined distance on the space of probability measures.

Domain Adaptation
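
A hedged sketch of the two-level idea described above, using the POT library (`pip install pot`): compute exact OT costs between every pair of mini-batches, then re-weight them by an optimal coupling between the mini-batches themselves. The function name, the uniform batch weights, and the squared-Euclidean ground cost are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def bomb_ot_sketch(src_batches, tgt_batches):
    """Illustrative two-level mini-batch OT: exact OT cost between every pair of
    mini-batches, re-weighted by an optimal coupling between the batches themselves."""
    k, l = len(src_batches), len(tgt_batches)
    C = np.zeros((k, l))
    for i, xs in enumerate(src_batches):
        for j, xt in enumerate(tgt_batches):
            M = ot.dist(xs, xt)                       # squared-Euclidean cost matrix
            a = np.full(len(xs), 1.0 / len(xs))
            b = np.full(len(xt), 1.0 / len(xt))
            C[i, j] = ot.emd2(a, b, M)                # exact OT cost for this batch pair
    P = ot.emd(np.full(k, 1.0 / k), np.full(l, 1.0 / l), C)   # coupling between batches
    return float((P * C).sum())
```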

Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning

1 code implementation 25 Jan 2021 Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung

Central to this approach is the selection of positive (similar) and negative (dissimilar) sets to provide the model the opportunity to 'contrast' between data and class representation in the latent space.

Contrastive Learning

STEM: An Approach to Multi-Source Domain Adaptation With Guarantees

1 code implementation ICCV 2021 Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.

Multi-Source Unsupervised Domain Adaptation Unsupervised Domain Adaptation

Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

no code implementations COLING 2020 Quan Tran, Nhan Dam, Tuan Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Phung

Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the agreeable notions on which the explaining process rests.

Question Answering

Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks

no code implementations 13 Oct 2020 He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack Detection

Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness

1 code implementation 21 Sep 2020 Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung

An important technique of this approach is to control the transferability of adversarial examples among ensemble members.

Adversarial Robustness

Neural Topic Model via Optimal Transport

1 code implementation ICLR 2021 He Zhao, Dinh Phung, Viet Huynh, Trung Le, Wray Buntine

Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis.

Topic Models

Improving Adversarial Robustness by Enforcing Local and Global Compactness

1 code implementation ECCV 2020 Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung

The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.

Adversarial Robustness Clustering

OptiGAN: Generative Adversarial Networks for Goal Optimized Sequence Generation

1 code implementation 16 Apr 2020 Mahmoud Hossam, Trung Le, Viet Huynh, Michael Papasimeon, Dinh Phung

One of the challenging problems in sequence generation tasks is the optimized generation of sequences with specific desired goals.

Reinforcement Learning (RL)

Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions

no code implementations 3 Oct 2019 He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack Translation

Maximal Divergence Sequential Autoencoder for Binary Software Vulnerability Detection

no code implementations ICLR 2019 Tue Le, Tuan Nguyen, Trung Le, Dinh Phung, Paul Montague, Olivier De Vel, Lizhen Qu

Due to the sharp increase in the severity of the threat imposed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security.

Computer Security Vulnerability Detection

When Can Neural Networks Learn Connected Decision Regions?

no code implementations 25 Jan 2019 Trung Le, Dinh Phung

Previous work has questioned the conditions under which the decision regions of a neural network are connected and further showed the implications of the corresponding theory to the problem of adversarial manipulation of classifiers.

On Deep Domain Adaptation: Some Theoretical Understandings

no code implementations 15 Nov 2018 Trung Le, Khanh Nguyen, Nhat Ho, Hung Bui, Dinh Phung

The underlying idea of deep domain adaptation is to bridge the gap between source and target domains in a joint space so that a supervised classifier trained on labeled source data can be nicely transferred to the target domain.

Domain Adaptation Transfer Learning

MGAN: Training Generative Adversarial Nets with Multiple Generators

1 code implementation ICLR 2018 Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung

We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem.

KGAN: How to Break The Minimax Game in GAN

no code implementations 6 Nov 2017 Trung Le, Tu Dinh Nguyen, Dinh Phung

In this paper, we propose a new viewpoint for GANs, which is termed as the minimizing general loss viewpoint.

General Classification

Scalable Support Vector Clustering Using Budget

no code implementations 19 Sep 2017 Tung Pham, Trung Le, Hang Dang

In this paper, we propose applying Stochastic Gradient Descent (SGD) framework to the first phase of support-based clustering for finding the domain of novelty and a new strategy to perform the clustering assignment.

Clustering Outlier Detection

Analogical-based Bayesian Optimization

no code implementations 19 Sep 2017 Trung Le, Khanh Nguyen, Tu Dinh Nguyen, Dinh Phung

With this spirit, in this paper, we propose Analogical-based Bayesian Optimization that can maximize a black-box function over a domain where only a similarity score can be defined.

Bayesian Optimization Gaussian Processes

Dual Discriminator Generative Adversarial Nets

2 code implementations NeurIPS 2017 Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung

We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem.

Ranked #18 on Image Generation on STL-10 (Inception score metric)

Generative Adversarial Network

Geometric Enclosing Networks

no code implementations 16 Aug 2017 Trung Le, Hung Vu, Tu Dinh Nguyen, Dinh Phung

Training models to generate data has increasingly attracted research attention and become important in modern real-world applications.

Multi-Generator Generative Adversarial Nets

no code implementations 8 Aug 2017 Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung

A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to GAN.

Dual Space Gradient Descent for Online Learning

no code implementations NeurIPS 2016 Trung Le, Tu Nguyen, Vu Nguyen, Dinh Phung

However, this approach still suffers from a serious shortcoming as it needs to use a high dimensional random feature space to achieve a sufficiently accurate kernel approximation.

Scalable Semi-supervised Learning with Graph-based Kernel Machine

no code implementations 22 Jun 2016 Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung

Acquiring labels is often costly, whereas unlabeled data are usually easy to obtain in modern machine learning applications.

BIG-bench Machine Learning

Approximation Vector Machines for Large-scale Online Learning

1 code implementation 22 Apr 2016 Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung

One of the most challenging problems in kernel online learning is to bound the model size and to promote the model sparsity.

General Classification regression
