Search Results for author: Trung Le

Found 88 papers, 44 papers with code

Parameterized Rate-Distortion Stochastic Encoder

no code implementations ICML 2020 Quan Hoang, Trung Le, Dinh Phung

We propose a novel gradient-based tractable approach for the Blahut-Arimoto (BA) algorithm to compute the rate-distortion function where the BA algorithm is fully parameterized.
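
The paper's contribution is a gradient-based, fully parameterized variant; for reference, the classical tabular Blahut-Arimoto iteration it builds on can be sketched as follows (a minimal illustration, not the authors' method):

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=200):
    """Classical Blahut-Arimoto iteration for the rate-distortion function.

    p_x  : (n,) source distribution
    dist : (n, m) distortion matrix d(x, x_hat)
    beta : trade-off multiplier (larger beta favors lower distortion)
    Returns (rate in nats, expected distortion) at convergence.
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)                    # marginal over reproduction symbols
    for _ in range(n_iter):
        Q = q[None, :] * np.exp(-beta * dist)  # optimal channel given marginal q
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                            # re-estimate the marginal
    D = np.sum(p_x[:, None] * Q * dist)        # expected distortion
    R = np.sum(p_x[:, None] * Q * np.log(Q / q[None, :] + 1e-12))
    return R, D
```

Sweeping `beta` traces out the rate-distortion curve; the paper replaces the tabular channel `Q` with a parameterized stochastic encoder trained by gradient descent.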

Fantastic Targets for Concept Erasure in Diffusion Models and Where To Find Them

1 code implementation 31 Jan 2025 Anh Bui, Trang Vu, Long Vuong, Trung Le, Paul Montague, Tamas Abraham, Junae Kim, Dinh Phung

To address this limitation, we model the concept space as a graph and empirically analyze the effects of erasing one concept on the remaining concepts.

Explicit Eigenvalue Regularization Improves Sharpness-Aware Minimization

1 code implementation 22 Jan 2025 Haocheng Luo, Tuan Truong, Tung Pham, Mehrtash Harandi, Dinh Phung, Trung Le

Sharpness-Aware Minimization (SAM) has attracted significant attention for its effectiveness in improving generalization across various tasks.
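
For context, the core SAM update that this line of work analyzes is a two-step gradient rule; a minimal numpy sketch, with `grad_fn` standing in for a backpropagation call:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step on parameters w.

    1) Ascend to the (first-order) worst-case point within an L2 ball of radius rho.
    2) Descend using the gradient evaluated at that perturbed point.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp
```

Each step costs two gradient evaluations: one to find the worst-case perturbation and one to update.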

Brain-to-Text Benchmark '24: Lessons Learned

1 code implementation 23 Dec 2024 Francis R. Willett, Jingyuan Li, Trung Le, Chaofei Fan, Mingfei Chen, Eli Shlizerman, Yue Chen, Xin Zheng, Tatsuo S. Okubo, Tyler Benster, Hyun Dong Lee, Maxwell Kounga, E. Kelly Buchanan, David Zoltowski, Scott W. Linderman, Jaimie M. Henderson

Speech brain-computer interfaces aim to decipher what a person is trying to say from neural activity alone, restoring communication to people with paralysis who have lost the ability to speak intelligibly.

Language Modeling Language Modelling +3

Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation

no code implementations 26 Nov 2024 Minh-Tuan Tran, Trung Le, Xuan-May Le, Jianfei Cai, Mehrtash Harandi, Dinh Phung

Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.

Data-free Knowledge Distillation Diversity +1

Brain-to-Text Decoding with Context-Aware Neural Representations and Large Language Models

no code implementations 16 Nov 2024 Jingyuan Li, Trung Le, Chaofei Fan, Mingfei Chen, Eli Shlizerman

Decoding attempted speech from neural activity offers a promising avenue for restoring communication abilities in individuals with speech impairments.

Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation

2 code implementations 21 Oct 2024 Anh Bui, Long Vuong, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung

A practical solution is to selectively remove target concepts from the model, but doing so may impact the remaining concepts.

Leveraging Hierarchical Taxonomies in Prompt-based Continual Learning

1 code implementation 6 Oct 2024 Quyen Tran, Hoang Phan, Minh Le, Tuan Truong, Dinh Phung, Linh Ngo, Thien Nguyen, Nhat Ho, Trung Le

Drawing inspiration from human learning behaviors, this work proposes a novel approach to mitigate catastrophic forgetting in Prompt-based Continual Learning models by exploiting the relationships between continuously emerging class data.

Continual Learning

Improving Generalization with Flat Hilbert Bayesian Inference

1 code implementation 5 Oct 2024 Tuan Truong, Quyen Tran, Quan Pham-Ngoc, Nhat Ho, Dinh Phung, Trung Le

We introduce Flat Hilbert Bayesian Inference (FHBI), an algorithm designed to enhance generalization in Bayesian inference.

Bayesian Inference

Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts

1 code implementation 3 Oct 2024 Minh Le, Chau Nguyen, Huy Nguyen, Quyen Tran, Trung Le, Nhat Ho

Specifically, we show that the reparameterization strategy implicitly encodes a shared structure between prefix key and value vectors.

Preserving Generalization of Language models in Few-shot Continual Relation Extraction

1 code implementation 1 Oct 2024 Quyen Tran, Nguyen Xuan Thanh, Nguyen Hoang Anh, Nam Le Hai, Trung Le, Linh Van Ngo, Thien Huu Nguyen

Few-shot Continual Relation Extraction (FCRE) is an emerging and dynamic area of study where models can sequentially integrate knowledge from new relations with limited labeled data while circumventing catastrophic forgetting and preserving prior knowledge from pre-trained backbones.

Continual Relation Extraction Language Modeling +2

Connective Viewpoints of Signal-to-Noise Diffusion Models

no code implementations 8 Aug 2024 Khanh Doan, Long Tung Vuong, Tuan Nguyen, Anh Tuan Bui, Quyen Tran, Thanh-Toan Do, Dinh Phung, Trung Le

Diffusion models (DM) have become fundamental components of generative models, excelling across various domains such as image creation, audio generation, and complex data interpolation.

Audio Generation

MetaAug: Meta-Data Augmentation for Post-Training Quantization

2 code implementations 20 Jul 2024 Cuong Pham, Hoang Anh Dung, Cuong C. Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do

The transformation network modifies the original calibration data, and the modified data are then used as the training set for learning the quantized model, with the objective that the quantized model performs well on the original calibration data.

Data Augmentation Meta-Learning +1

Enhancing Domain Adaptation through Prompt Gradient Alignment

1 code implementation 13 Jun 2024 Hoang Phan, Lam Tran, Quyen Tran, Trung Le

To tackle this, a line of works based on prompt learning leverages the power of large-scale pre-trained vision-language models to learn both domain-invariant and specific features through a set of domain-agnostic and domain-specific learnable prompts.

Multi-Source Unsupervised Domain Adaptation Unsupervised Domain Adaptation

Agnostic Sharpness-Aware Minimization

no code implementations 11 Jun 2024 Van-Anh Nguyen, Quyen Tran, Tuan Truong, Thanh-Toan Do, Dinh Phung, Trung Le

Sharpness-aware minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the sharpness of the loss landscape, leading the model into flatter minima that are associated with better generalization properties.

Meta-Learning

Diversity-Aware Agnostic Ensemble of Sharpness Minimizers

no code implementations 19 Mar 2024 Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le

There has long been plenty of theoretical and empirical evidence supporting the success of ensemble learning.

Diversity Ensemble Learning

Removing Undesirable Concepts in Text-to-Image Diffusion Models with Learnable Prompts

2 code implementations 18 Mar 2024 Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung

By transferring this knowledge to the prompt, erasing undesirable concepts becomes more stable and has minimal negative impact on other concepts.

Transfer Learning

Frequency Attention for Knowledge Distillation

1 code implementation 9 Mar 2024 Cuong Pham, Van-Anh Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do

Inspired by the benefits of the frequency domain, we propose a novel module that functions as an attention mechanism in the frequency domain.

Image Classification Knowledge Distillation +3

Optimal Transport for Structure Learning Under Missing Data

1 code implementation 23 Feb 2024 Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung

To address this problem, we propose a score-based algorithm for learning causal structures from missing data based on optimal transport.

Causal Discovery Imputation +1

A Class-aware Optimal Transport Approach with Higher-Order Moment Matching for Unsupervised Domain Adaptation

no code implementations 29 Jan 2024 Tuan Nguyen, Van Nguyen, Trung Le, He Zhao, Quan Hung Tran, Dinh Phung

Additionally, we propose minimizing class-aware Higher-order Moment Matching (HMM) to align the corresponding class regions on the source and target domains.

Unsupervised Domain Adaptation
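
The flavor of higher-order moment matching can be illustrated with a minimal, class-agnostic sketch; in the paper the moments are matched per class region, which this toy loss omits:

```python
import numpy as np

def moment_matching_loss(src, tgt, order=3):
    """Align two feature distributions by matching their first `order`
    element-wise moments (centralized from order 2 upward)."""
    loss = np.sum((src.mean(0) - tgt.mean(0)) ** 2)   # first moment (mean)
    sc, tc = src - src.mean(0), tgt - tgt.mean(0)
    for k in range(2, order + 1):
        loss += np.sum(((sc ** k).mean(0) - (tc ** k).mean(0)) ** 2)
    return loss
```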

Erasing Undesirable Influence in Diffusion Models

1 code implementation 11 Jan 2024 Jing Wu, Trung Le, Munawar Hayat, Mehrtash Harandi

Diffusion models are highly effective at generating high-quality images but pose risks, such as the unintentional generation of NSFW (not safe for work) content.

Denoising Image Generation +2

DiffAugment: Diffusion based Long-Tailed Visual Relationship Recognition

no code implementations 1 Jan 2024 Parul Gupta, Tuan Nguyen, Abhinav Dhall, Munawar Hayat, Trung Le, Thanh-Toan Do

The task of Visual Relationship Recognition (VRR) aims to identify relationships between two interacting objects in an image and is particularly challenging due to the widely-spread and highly imbalanced distribution of <subject, relation, object> triplets.

Object Relation +1

Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning

no code implementations 10 Dec 2023 Khanh Doan, Quyen Tran, Tung Lam Tran, Tuan Nguyen, Dinh Phung, Trung Le

To address this, we propose the Gradient Projection Class-Prototype Conditional Diffusion Model (GPPDM), a GR-based approach for continual learning that enhances image quality in generators and thus reduces the CF in classifiers.

Continual Learning Denoising +1

KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All

no code implementations 26 Nov 2023 Quyen Tran, Hoang Phan, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le

Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.

Continual Learning Meta-Learning

Robust Contrastive Learning With Theory Guarantee

no code implementations 16 Nov 2023 Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le

Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.

Contrastive Learning

Learning Time-Invariant Representations for Individual Neurons from Population Dynamics

1 code implementation NeurIPS 2023 Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar Sümbül

This suggests that neuronal activity is a combination of its time-invariant identity and the inputs the neuron receives from the rest of the circuit.

Self-Supervised Learning

Cross-adversarial local distribution regularization for semi-supervised medical image segmentation

no code implementations 2 Oct 2023 Thanh Nguyen-Duc, Trung Le, Roland Bammer, He Zhao, Jianfei Cai, Dinh Phung

Medical semi-supervised segmentation is a technique where a model is trained to segment objects of interest in medical images with limited annotated data.

Image Segmentation Segmentation +2

NAYER: Noisy Layer Data Generation for Efficient and Effective Data-free Knowledge Distillation

1 code implementation CVPR 2024 Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Phung

In this paper, we propose a novel Noisy Layer Generation method (NAYER) which relocates the random source from the input to a noisy layer and utilizes the meaningful constant label-text embedding (LTE) as the input.

Data-free Knowledge Distillation Language Modelling

Optimal Transport Model Distributional Robustness

1 code implementation NeurIPS 2023 Van-Anh Nguyen, Trung Le, Anh Tuan Bui, Thanh-Toan Do, Dinh Phung

Interestingly, our developed theories allow us to flexibly incorporate the concept of sharpness awareness into training, whether it's a single model, ensemble models, or Bayesian Neural Networks, by considering specific forms of the center model distribution.

Learning to Quantize Vulnerability Patterns and Match to Locate Statement-Level Vulnerabilities

1 code implementation 26 May 2023 Michael Fu, Trung Le, Van Nguyen, Chakkrit Tantithamthavorn, Dinh Phung

Prior studies found that vulnerabilities across different vulnerable programs may exhibit similar vulnerable scopes, implicitly forming discernible vulnerability patterns that can be learned by DL models through supervised training.

Vulnerability Detection

Parameter Estimation in DAGs from Incomplete Data via Optimal Transport

1 code implementation 25 May 2023 Vy Vo, Trung Le, Tung-Long Vuong, He Zhao, Edwin Bonilla, Dinh Phung

Estimating the parameters of a probabilistic directed graphical model from incomplete data is a long-standing challenge.

Representation Learning

Sharpness & Shift-Aware Self-Supervised Learning

no code implementations 17 May 2023 Ngoc N. Tran, Son Duong, Hoang Phan, Tung Pham, Dinh Phung, Trung Le

Self-supervised learning aims to extract meaningful features from unlabeled data for further downstream tasks.

Classification Contrastive Learning +2

Generating Adversarial Examples with Task Oriented Multi-Objective Optimization

1 code implementation 26 Apr 2023 Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung

The key factor for the success of adversarial training is the capability to generate qualified and divergent adversarial examples which satisfy some objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models).
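
As a toy illustration of attacking multiple models simultaneously (not the paper's task-oriented multi-objective formulation), a fast-gradient-sign step on the summed losses looks like:

```python
import numpy as np

def ensemble_fgsm(x, y, loss_grads, eps=0.03):
    """Fast-gradient-sign attack against several models at once:
    perturb x along the sign of the summed input-loss gradients so a
    single example increases every model's loss simultaneously."""
    g = sum(lg(x, y) for lg in loss_grads)       # aggregate input gradients
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
```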

Hyperbolic Geometry in Computer Vision: A Survey

no code implementations 21 Apr 2023 Pengfei Fang, Mehrtash Harandi, Trung Le, Dinh Phung

Hyperbolic geometry, a Riemannian manifold endowed with constant negative sectional curvature, has been considered an alternative embedding space in many learning scenarios, e.g., natural language processing and graph learning, as a result of its intriguing property of encoding the data's hierarchical structure (like irregular graphs or tree-like data).

Graph Learning Image Classification +1

Vector Quantized Wasserstein Auto-Encoder

no code implementations 12 Feb 2023 Tung-Long Vuong, Trung Le, He Zhao, Chuanxia Zheng, Mehrtash Harandi, Jianfei Cai, Dinh Phung

Learning deep discrete latent representations offers a promise of better symbolic and summarized abstractions that are more useful to subsequent downstream tasks.

Clustering Decoder +1

Multiple Perturbation Attack: Attack Pixelwise Under Different $\ell_p$-norms For Better Adversarial Performance

no code implementations 5 Dec 2022 Ngoc N. Tran, Anh Tuan Bui, Dinh Phung, Trung Le

On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models.

Continual Learning with Optimal Transport based Mixture Model

no code implementations 30 Nov 2022 Quyen Tran, Hoang Phan, Khoat Than, Dinh Phung, Trung Le

To address this issue, in this work, we first propose an online mixture model learning approach based on nice properties of the mature optimal transport theory (OT-MM).

class-incremental learning Class Incremental Learning +2

Improving Multi-task Learning via Seeking Task-based Flat Regions

no code implementations 24 Nov 2022 Hoang Phan, Lam Tran, Quyen Tran, Ngoc N. Tran, Tuan Truong, Nhat Ho, Dinh Phung, Trung Le

Multi-Task Learning (MTL) is a widely-used and powerful learning paradigm for training deep neural networks that allows learning more than one objective by a single backbone.

Multi-Task Learning speech-recognition +1

Vision Transformer Visualization: What Neurons Tell and How Neurons Behave?

1 code implementation 14 Oct 2022 Van-Anh Nguyen, Khanh Pham Dinh, Long Tung Vuong, Thanh-Toan Do, Quan Hung Tran, Dinh Phung, Trung Le

Our approach departs from the computational process of ViTs with a focus on visualizing the local and global information in input images and the latent feature embeddings at multiple levels.

Feature-based Learning for Diverse and Privacy-Preserving Counterfactual Explanations

1 code implementation 27 Sep 2022 Vy Vo, Trung Le, Van Nguyen, He Zhao, Edwin Bonilla, Gholamreza Haffari, Dinh Phung

Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that are long notorious for lack of explainability.

counterfactual Diversity +4

Cross Project Software Vulnerability Detection via Domain Adaptation and Max-Margin Principle

1 code implementation 19 Sep 2022 Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, John Grundy, Hung Nguyen, Dinh Phung

However, two significant issues remain open for SVD: i) learning automatic representations to improve its predictive performance, and ii) tackling the scarcity of labeled vulnerability datasets, which conventionally require laborious labeling effort by experts.

Domain Adaptation Representation Learning +2

An Additive Instance-Wise Approach to Multi-class Model Interpretation

1 code implementation 7 Jul 2022 Vy Vo, Van Nguyen, Trung Le, Quan Hung Tran, Gholamreza Haffari, Seyit Camtepe, Dinh Phung

A popular attribution-based approach is to exploit local neighborhoods for learning instance-specific explainers in an additive manner.

Additive models Interpretable Machine Learning

STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer

no code implementations 9 Jun 2022 Trung Le, Eli Shlizerman

Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior.

Contrastive Learning

Stochastic Multiple Target Sampling Gradient Descent

1 code implementation 4 Jun 2022 Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung

Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem.

Multi-Task Learning
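
For reference, the standard single-target SVGD update that the paper generalizes to multiple target distributions can be sketched as follows (assuming an RBF kernel with fixed bandwidth `h`):

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix and its gradient w.r.t. the first argument."""
    diff = X[:, None, :] - X[None, :, :]              # (n, n, d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))        # (n, n)
    gradK = -diff / h * K[:, :, None]                 # (n, n, d)
    return K, gradK

def svgd_step(X, score_fn, lr=0.1, h=1.0):
    """One Stein Variational Gradient Descent update for particles X.

    score_fn returns the score grad log p(x) of the target at each particle.
    The update combines a kernel-weighted attraction toward high density
    with a repulsion term that keeps the particles spread out."""
    K, gradK = rbf_kernel(X, h)
    phi = (K @ score_fn(X) + gradK.sum(axis=0)) / len(X)
    return X + lr * phi
```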

Global-Local Regularization Via Distributional Robustness

1 code implementation 1 Mar 2022 Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho, Dinh Phung

First, they purely focus on local regularization to strengthen model robustness, missing a global regularization effect which is useful in many real-world applications (e.g., domain adaptation, domain generalization, and adversarial machine learning).

Adversarial Robustness Domain Generalization +1

A Unified Wasserstein Distributional Robustness Framework for Adversarial Training

1 code implementation ICLR 2022 Tuan Anh Bui, Trung Le, Quan Tran, He Zhao, Dinh Phung

We introduce a new Wasserstein cost function and a new series of risk functions, with which we show that standard AT methods are special cases of their counterparts in our framework.

On Learning Domain-Invariant Representations for Transfer Learning with Multiple Sources

no code implementations NeurIPS 2021 Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung

Domain adaptation (DA) benefits from the rigorous theoretical works that study its insightful characteristics and various aspects, e.g., learning domain-invariant representations and its trade-off.

Domain Generalization Transfer Learning

On Label Shift in Domain Adaptation via Wasserstein Distance

no code implementations 29 Oct 2021 Trung Le, Dat Do, Tuan Nguyen, Huy Nguyen, Hung Bui, Nhat Ho, Dinh Phung

We study the label shift problem between the source and target domains in general domain adaptation (DA) settings.

Domain Adaptation

ReGVD: Revisiting Graph Neural Networks for Vulnerability Detection

1 code implementation 14 Oct 2021 Van-Anh Nguyen, Dai Quoc Nguyen, Van Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

Identifying vulnerabilities in the source code is essential to protect the software systems from cyber security attacks.

Graph Embedding Graph Neural Network +3

STEM: An Approach to Multi-Source Domain Adaptation With Guarantees

1 code implementation 1 Oct 2021 Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.

Improving Robustness with Optimal Transport based Adversarial Generalization

no code implementations 29 Sep 2021 Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague

More specifically, by minimizing the WS distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label on the latent space, which helps to strengthen the generalization ability of the classifier on the adversarial examples.

LASSO: Latent Sub-spaces Orientation for Domain Generalization

no code implementations 29 Sep 2021 Long Tung Vuong, Trung Quoc Phung, Toan Tran, Anh Tuan Tran, Dinh Phung, Trung Le

To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains.

Domain Generalization

Fine-grained Software Vulnerability Detection via Information Theory and Contrastive Learning

no code implementations 29 Sep 2021 Van Nguyen, Trung Le, John C. Grundy, Dinh Phung

Software vulnerabilities in programs and functions of computer systems have become a serious and crucial concern.

Contrastive Learning Representation Learning +1

SyntheticFur dataset for neural rendering

1 code implementation 13 May 2021 Trung Le, Ryan Poplin, Fred Bertsch, Andeep Singh Toor, Margaret L. Oh

We introduce a new dataset called SyntheticFur built specifically for machine learning training.

Generative Adversarial Network Neural Rendering

Text Generation with Deep Variational GAN

no code implementations 27 Apr 2021 Mahmoud Hossam, Trung Le, Michael Papasimeon, Viet Huynh, Dinh Phung

Generating realistic sequences is a central task in many machine learning applications.

Diversity Text Generation

Improved and Efficient Text Adversarial Attacks using Target Information

no code implementations 27 Apr 2021 Mahmoud Hossam, Trung Le, He Zhao, Viet Huynh, Dinh Phung

There has been recently a growing interest in studying adversarial examples on natural language models in the black-box setting.

Sentence

On Transportation of Mini-batches: A Hierarchical Approach

2 code implementations 11 Feb 2021 Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho

To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), that finds the optimal coupling between mini-batches; it can be seen as an approximation of a well-defined distance on the space of probability measures.

Domain Adaptation
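
A minimal sketch of the vanilla mini-batch OT estimator that BoMb-OT refines (BoMb-OT additionally optimizes a coupling over the batches themselves, which this sketch omits); with uniform mini-batch weights each OT problem reduces to an assignment, solved here with SciPy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot_cost(X, Y, batch=4, n_pairs=8, seed=0):
    """Average exact OT cost over random mini-batch pairs.

    With uniform weights, the OT plan between two mini-batches is a
    one-to-one matching, computed with the Hungarian algorithm."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_pairs):
        xb = X[rng.choice(len(X), batch, replace=False)]
        yb = Y[rng.choice(len(Y), batch, replace=False)]
        C = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)  # squared-distance cost
        r, c = linear_sum_assignment(C)                       # optimal matching
        total += C[r, c].mean()
    return total / n_pairs
```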

Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning

1 code implementation 25 Jan 2021 Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung

Central to this approach is the selection of positive (similar) and negative (dissimilar) sets to provide the model the opportunity to `contrast' between data and class representation in the latent space.

Contrastive Learning

STEM: An Approach to Multi-Source Domain Adaptation With Guarantees

1 code implementation ICCV 2021 Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung

To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.

Multi-Source Unsupervised Domain Adaptation Unsupervised Domain Adaptation

Explain by Evidence: An Explainable Memory-based Neural Network for Question Answering

no code implementations COLING 2020 Quan Tran, Nhan Dam, Tuan Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Phung

Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the agreeable notions on which the explaining process rests.

Question Answering

Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks

no code implementations 13 Oct 2020 He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack Detection

Neural Topic Model via Optimal Transport

1 code implementation ICLR 2021 He Zhao, Dinh Phung, Viet Huynh, Trung Le, Wray Buntine

Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have obtained increasingly research interest due to their promising results on text analysis.

Topic Models

Improving Adversarial Robustness by Enforcing Local and Global Compactness

1 code implementation ECCV 2020 Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung

The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.

Adversarial Robustness Clustering

OptiGAN: Generative Adversarial Networks for Goal Optimized Sequence Generation

1 code implementation 16 Apr 2020 Mahmoud Hossam, Trung Le, Viet Huynh, Michael Papasimeon, Dinh Phung

One of the challenging problems in sequence generation tasks is the optimized generation of sequences with specific desired goals.

Diversity reinforcement-learning +2

Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions

no code implementations 3 Oct 2019 He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung

Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.

Adversarial Attack Translation

Maximal Divergence Sequential Autoencoder for Binary Software Vulnerability Detection

no code implementations ICLR 2019 Tue Le, Tuan Nguyen, Trung Le, Dinh Phung, Paul Montague, Olivier De Vel, Lizhen Qu

Due to the sharp increase in the severity of the threat imposed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security.

Computer Security Vulnerability Detection

When Can Neural Networks Learn Connected Decision Regions?

no code implementations 25 Jan 2019 Trung Le, Dinh Phung

Previous work has questioned the conditions under which the decision regions of a neural network are connected and further showed the implications of the corresponding theory to the problem of adversarial manipulation of classifiers.

On Deep Domain Adaptation: Some Theoretical Understandings

no code implementations 15 Nov 2018 Trung Le, Khanh Nguyen, Nhat Ho, Hung Bui, Dinh Phung

The underlying idea of deep domain adaptation is to bridge the gap between source and target domains in a joint space so that a supervised classifier trained on labeled source data can be nicely transferred to the target domain.

Domain Adaptation Transfer Learning

MGAN: Training Generative Adversarial Nets with Multiple Generators

2 code implementations ICLR 2018 Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung

We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem.

KGAN: How to Break The Minimax Game in GAN

no code implementations 6 Nov 2017 Trung Le, Tu Dinh Nguyen, Dinh Phung

In this paper, we propose a new viewpoint for GANs, which is termed as the minimizing general loss viewpoint.

General Classification

Scalable Support Vector Clustering Using Budget

no code implementations 19 Sep 2017 Tung Pham, Trung Le, Hang Dang

In this paper, we propose applying the Stochastic Gradient Descent (SGD) framework to the first phase of support-based clustering, i.e., finding the domain of novelty, together with a new strategy to perform the clustering assignment.

Clustering Outlier Detection

Analogical-based Bayesian Optimization

no code implementations 19 Sep 2017 Trung Le, Khanh Nguyen, Tu Dinh Nguyen, Dinh Phung

With this spirit, in this paper, we propose Analogical-based Bayesian Optimization that can maximize black-box function over a domain where only a similarity score can be defined.

Bayesian Optimization Gaussian Processes

Dual Discriminator Generative Adversarial Nets

2 code implementations NeurIPS 2017 Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung

We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem.

Ranked #23 on Image Generation on STL-10 (Inception score metric)

Generative Adversarial Network

Geometric Enclosing Networks

no code implementations 16 Aug 2017 Trung Le, Hung Vu, Tu Dinh Nguyen, Dinh Phung

Training a model to generate data has increasingly attracted research attention and has become important in modern real-world applications.

Multi-Generator Generative Adversarial Nets

no code implementations 8 Aug 2017 Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung

A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to GANs.

Dual Space Gradient Descent for Online Learning

no code implementations NeurIPS 2016 Trung Le, Tu Nguyen, Vu Nguyen, Dinh Phung

However, this approach still suffers from a serious shortcoming as it needs to use a high dimensional random feature space to achieve a sufficiently accurate kernel approximation.

Scalable Semi-supervised Learning with Graph-based Kernel Machine

no code implementations 22 Jun 2016 Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung

Acquiring labels is often costly, whereas unlabeled data are usually easy to obtain in modern machine learning applications.

BIG-bench Machine Learning

Approximation Vector Machines for Large-scale Online Learning

1 code implementation 22 Apr 2016 Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung

One of the most challenging problems in kernel online learning is to bound the model size and to promote the model sparsity.

General Classification regression
