1 code implementation • Findings (ACL) 2022 • Thuy-Trang Vu, Shahram Khadivi, Dinh Phung, Gholamreza Haffari
Generalising to unseen domains is under-explored and remains a challenge in neural machine translation.
no code implementations • ICML 2020 • Quan Hoang, Trung Le, Dinh Phung
We propose a novel gradient-based tractable approach for the Blahut-Arimoto (BA) algorithm to compute the rate-distortion function where the BA algorithm is fully parameterized.
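For reference, the classic (non-parameterized) Blahut-Arimoto iteration that this work builds on can be sketched in a few lines; the sketch below is the textbook algorithm, not the paper's gradient-based variant.

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iter=300):
    """Classic Blahut-Arimoto iteration for the rate-distortion function.

    p_x  : (n,) source distribution
    dist : (n, m) distortion matrix d(x, x_hat)
    beta : Lagrange multiplier (larger beta trades rate for distortion)
    Returns (rate in nats, expected distortion).
    """
    m = dist.shape[1]
    q = np.full(m, 1.0 / m)                     # reproduction marginal q(x_hat)
    for _ in range(n_iter):
        Q = q[None, :] * np.exp(-beta * dist)   # Q(x_hat | x), unnormalized
        Q /= Q.sum(axis=1, keepdims=True)
        q = p_x @ Q                             # re-estimate the marginal
    rate = np.sum(p_x[:, None] * Q * np.log(Q / q[None, :]))
    distortion = np.sum(p_x[:, None] * Q * dist)
    return rate, distortion

# binary source under Hamming distortion
p = np.array([0.5, 0.5])
d = 1.0 - np.eye(2)
r_low, d_low = blahut_arimoto(p, d, beta=0.0)     # spend no rate
r_high, d_high = blahut_arimoto(p, d, beta=10.0)  # near-lossless regime
```

With beta = 0 the conditional collapses to the marginal, giving zero rate and maximal distortion; large beta drives the distortion toward zero at a rate approaching the source entropy.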
no code implementations • 8 Aug 2024 • Khanh Doan, Long Tung Vuong, Tuan Nguyen, Anh Tuan Bui, Quyen Tran, Thanh-Toan Do, Dinh Phung, Trung Le
Diffusion models (DM) have become fundamental components of generative models, excelling across various domains such as image creation, audio generation, and complex data interpolation.
2 code implementations • 20 Jul 2024 • Cuong Pham, Hoang Anh Dung, Cuong C. Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do
The transformation network modifies the original calibration data, and the modified data is then used as the training set to learn the quantized model, with the objective that the quantized model performs well on the original calibration data.
no code implementations • NeurIPS 2023 • Cuong Pham, Cuong C. Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do
Bayesian Neural Networks (BNNs) offer probability distributions for model parameters, enabling uncertainty quantification in predictions.
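A minimal sketch of that idea, using a single hypothetical mean-field Gaussian layer (not the paper's model): sample weights, average the per-sample predictions, and read uncertainty off the spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_predict(x, w_mu, w_logvar, n_samples=200):
    """Monte-Carlo predictive distribution for a one-layer mean-field BNN.

    Each forward pass samples weights from N(w_mu, exp(w_logvar)); the mean
    of the per-sample softmax outputs is the predictive distribution, and
    the spread across samples quantifies the model's uncertainty.
    """
    probs = []
    for _ in range(n_samples):
        w = w_mu + np.exp(0.5 * w_logvar) * rng.standard_normal(w_mu.shape)
        probs.append(softmax(x @ w))
    probs = np.stack(probs)                     # (samples, batch, classes)
    return probs.mean(axis=0), probs.std(axis=0)

x = rng.standard_normal((3, 4))                 # 3 inputs, 4 features
pred_mean, pred_std = mc_predict(x, w_mu=rng.standard_normal((4, 2)),
                                 w_logvar=np.full((4, 2), -2.0))
```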
no code implementations • 13 Jun 2024 • Xiaohao Yang, He Zhao, Dinh Phung, Wray Buntine, Lan Du
Topic modeling has been a widely used tool for unsupervised text analysis.
no code implementations • 11 Jun 2024 • Van-Anh Nguyen, Quyen Tran, Tuan Truong, Thanh-Toan Do, Dinh Phung, Trung Le
Sharpness-aware minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the sharpness of the loss landscape, leading the model into flatter minima that are associated with better generalization properties.
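The basic SAM update described above (ascend to the worst-case point in a rho-ball, then descend with the gradient taken there) can be sketched on a toy quadratic loss; this is a generic illustration, not the paper's method.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization (SAM) step, sketched.

    Step 1: move to the (first-order) worst-case point inside a rho-ball
            around w, i.e. w + rho * g / ||g||.
    Step 2: apply the ordinary gradient step using the gradient evaluated
            at that perturbed point.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    return w - lr * grad_fn(w + eps)             # descend with the "sharp" gradient

# toy loss L(w) = 0.5 * ||w||^2  ->  grad L(w) = w
grad_fn = lambda w: w
w = np.array([3.0, -4.0])
for _ in range(60):
    w = sam_step(w, grad_fn)
```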
no code implementations • 9 Jun 2024 • Shangyu Chen, Zizheng Pan, Jianfei Cai, Dinh Phung
Personalizing a large-scale pretrained Text-to-Image (T2I) diffusion model is challenging as it typically struggles to make an appropriate trade-off between its training data distribution and the target distribution, i.e., learning a novel concept with only a few target images to achieve personalization (aligning with the personalized target) while preserving text editability (aligning with diverse text prompts).
1 code implementation • 16 May 2024 • Manh Luong, Khai Nguyen, Nhat Ho, Reza Haf, Dinh Phung, Lizhen Qu
The Learning-to-match (LTM) framework proves to be an effective inverse optimal transport approach for learning the underlying ground metric between two sources of data, facilitating subsequent matching.
1 code implementation • 11 Apr 2024 • Cheng Zhang, Qianyi Wu, Camilo Cruz Gambardella, Xiaoshui Huang, Dinh Phung, Wanli Ouyang, Jianfei Cai
Generative models, e.g., Stable Diffusion, have enabled the creation of photorealistic images from text prompts.
1 code implementation • CVPR 2024 • Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Dinh Phung
In this field, Data-Free Knowledge Transfer (DFKT) plays a crucial role in addressing catastrophic forgetting and data privacy problems.
no code implementations • 19 Mar 2024 • Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le
There has long been plenty of theoretical and empirical evidence supporting the success of ensemble learning.
no code implementations • 18 Mar 2024 • Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung
By transferring this knowledge to the prompt, erasing undesirable concepts becomes more stable and has minimal negative impact on other concepts.
1 code implementation • 9 Mar 2024 • Cuong Pham, Van-Anh Nguyen, Trung Le, Dinh Phung, Gustavo Carneiro, Thanh-Toan Do
Inspired by the benefits of the frequency domain, we propose a novel module that functions as an attention mechanism in the frequency domain.
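A generic sketch of what "attention in the frequency domain" can mean (not the paper's module): move features to the frequency domain, reweight each frequency bin with a gate, and transform back.

```python
import numpy as np

def frequency_attention(x, gate):
    """Attention-like reweighting in the frequency domain (sketch).

    The feature map is moved to the frequency domain with a real 2-D FFT,
    each frequency bin is scaled by an attention gate in [0, 1], and the
    result is transformed back to the spatial domain.
    """
    spectrum = np.fft.rfft2(x)               # (H, W//2 + 1) complex coefficients
    return np.fft.irfft2(spectrum * gate, s=x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
identity_out = frequency_attention(x, np.ones((8, 5)))   # pass-through gate
muted_out = frequency_attention(x, np.zeros((8, 5)))     # everything suppressed
```

An all-ones gate is the identity; a learned gate would instead emphasize informative frequency bands.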
1 code implementation • 23 Feb 2024 • Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung
To address this problem, we propose a score-based algorithm for learning causal structures from missing data based on optimal transport.
1 code implementation • 17 Feb 2024 • Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought (CoT) explanations alongside answers.
no code implementations • 29 Jan 2024 • Tuan Nguyen, Van Nguyen, Trung Le, He Zhao, Quan Hung Tran, Dinh Phung
Additionally, we propose minimizing class-aware Higher-order Moment Matching (HMM) to align the corresponding class regions on the source and target domains.
no code implementations • 10 Dec 2023 • Khanh Doan, Quyen Tran, Tung Lam Tran, Tuan Nguyen, Dinh Phung, Trung Le
To address this, we propose the Gradient Projection Class-Prototype Conditional Diffusion Model (GPPDM), a GR-based approach for continual learning that enhances image quality in generators and thus reduces the CF in classifiers.
no code implementations • 26 Nov 2023 • Quyen Tran, Lam Tran, Khoat Than, Toan Tran, Dinh Phung, Trung Le
Drawing inspiration from prompt tuning techniques applied to Large Language Models, recent methods based on pre-trained ViT networks have achieved remarkable results in the field of Continual Learning.
no code implementations • 16 Nov 2023 • Ngoc N. Tran, Lam Tran, Hoang Phan, Anh Bui, Tung Pham, Toan Tran, Dinh Phung, Trung Le
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
1 code implementation • 6 Nov 2023 • Dat Quoc Nguyen, Linh The Nguyen, Chi Tran, Dung Ngoc Nguyen, Dinh Phung, Hung Bui
The base model, PhoGPT-4B, with exactly 3.7B parameters, is pre-trained from scratch on a Vietnamese corpus of 102B tokens, with an 8192 context length, employing a vocabulary of 20480 token types.
no code implementations • 18 Oct 2023 • Linhao Luo, Thuy-Trang Vu, Dinh Phung, Gholamreza Haffari
We systematically evaluate the state-of-the-art LLMs with KGs in generic and specific domains.
no code implementations • 2 Oct 2023 • Thanh Nguyen-Duc, Trung Le, Roland Bammer, He Zhao, Jianfei Cai, Dinh Phung
Medical semi-supervised segmentation is a technique where a model is trained to segment objects of interest in medical images with limited annotated data.
1 code implementation • CVPR 2024 • Minh-Tuan Tran, Trung Le, Xuan-May Le, Mehrtash Harandi, Quan Hung Tran, Dinh Phung
In this paper, we propose a novel Noisy Layer Generation method (NAYER) which relocates the random source from the input to a noisy layer and utilizes the meaningful constant label-text embedding (LTE) as the input.
no code implementations • 29 Sep 2023 • Tuan Truong, Hoang-Phi Nguyen, Tung Pham, Minh-Tuan Tran, Mehrtash Harandi, Dinh Phung, Trung Le
Motivated by this analysis, we introduce our algorithm, Riemannian Sharpness-Aware Minimization (RSAM).
1 code implementation • 24 Jul 2023 • Xiaohao Yang, He Zhao, Dinh Phung, Lan Du
To do so, we propose to enhance NTMs by narrowing the semantic distance between similar documents, with the underlying assumption that documents from different corpora may share similar semantics.
1 code implementation • NeurIPS 2023 • Van-Anh Nguyen, Trung Le, Anh Tuan Bui, Thanh-Toan Do, Dinh Phung
Interestingly, our developed theories allow us to flexibly incorporate the concept of sharpness awareness into training, whether it's a single model, ensemble models, or Bayesian Neural Networks, by considering specific forms of the center model distribution.
1 code implementation • 26 May 2023 • Michael Fu, Trung Le, Van Nguyen, Chakkrit Tantithamthavorn, Dinh Phung
Prior studies found that vulnerabilities across different vulnerable programs may exhibit similar vulnerable scopes, implicitly forming discernible vulnerability patterns that can be learned by DL models through supervised training.
1 code implementation • 25 May 2023 • Vy Vo, Trung Le, Tung-Long Vuong, He Zhao, Edwin Bonilla, Dinh Phung
Estimating the parameters of a probabilistic directed graphical model from incomplete data is a long-standing challenge.
no code implementations • 17 May 2023 • Ngoc N. Tran, Son Duong, Hoang Phan, Tung Pham, Dinh Phung, Trung Le
Self-supervised learning aims to extract meaningful features from unlabeled data for further downstream tasks.
no code implementations • 6 May 2023 • Thuy-Trang Vu, Shahram Khadivi, Mahsa Ghorbanali, Dinh Phung, Gholamreza Haffari
Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL).
1 code implementation • 26 Apr 2023 • Anh Bui, Trung Le, He Zhao, Quan Tran, Paul Montague, Dinh Phung
The key factor for the success of adversarial training is the capability to generate qualified and divergent adversarial examples which satisfy some objectives/goals (e.g., finding adversarial examples that maximize the model losses for simultaneously attacking multiple models).
no code implementations • 21 Apr 2023 • Pengfei Fang, Mehrtash Harandi, Trung Le, Dinh Phung
Hyperbolic geometry, a Riemannian manifold endowed with constant negative sectional curvature, has been considered an alternative embedding space in many learning scenarios, e.g., natural language processing, graph learning, etc., as a result of its intriguing property of encoding the data's hierarchical structure (like irregular graphs or tree-like data).
no code implementations • 12 Feb 2023 • Tung-Long Vuong, Trung Le, He Zhao, Chuanxia Zheng, Mehrtash Harandi, Jianfei Cai, Dinh Phung
Learning deep discrete latent representations offers a promise of better symbolic and summarized abstractions that are more useful for subsequent downstream tasks.
no code implementations • 18 Jan 2023 • Son Duy Dao, Hengcan Shi, Dinh Phung, Jianfei Cai
Recent mask proposal models have significantly improved the performance of zero-shot semantic segmentation.
no code implementations • 5 Dec 2022 • Ngoc N. Tran, Anh Tuan Bui, Dinh Phung, Trung Le
On the other hand, in order to achieve that, we need to devise even stronger adversarial attacks to challenge these defense models.
no code implementations • 30 Nov 2022 • Quyen Tran, Hoang Phan, Khoat Than, Dinh Phung, Trung Le
To address this issue, in this work, we first propose an online mixture model learning approach based on nice properties of the mature optimal transport theory (OT-MM).
no code implementations • 24 Nov 2022 • Hoang Phan, Lam Tran, Ngoc N. Tran, Nhat Ho, Dinh Phung, Trung Le
Multi-Task Learning (MTL) is a widely-used and powerful learning paradigm for training deep neural networks that allows learning more than one objective by a single backbone.
no code implementations • 20 Oct 2022 • Thuy-Trang Vu, Shahram Khadivi, Xuanli He, Dinh Phung, Gholamreza Haffari
Previous works mostly focus on either multilingual or multi-domain aspects of neural machine translation (NMT).
1 code implementation • 14 Oct 2022 • Van-Anh Nguyen, Khanh Pham Dinh, Long Tung Vuong, Thanh-Toan Do, Quan Hung Tran, Dinh Phung, Trung Le
Our approach departs from the computational process of ViTs with a focus on visualizing the local and global information in input images and the latent feature embeddings at multiple levels.
1 code implementation • 27 Sep 2022 • Vy Vo, Trung Le, Van Nguyen, He Zhao, Edwin Bonilla, Gholamreza Haffari, Dinh Phung
Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that have long been notorious for their lack of explainability.
1 code implementation • 20 Sep 2022 • Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, Michael Fu, John Grundy, Hung Nguyen, Seyit Camtepe, Paul Quirk, Dinh Phung
In this paper, we propose a novel end-to-end deep learning-based approach to identify the vulnerability-relevant code statements of a specific function.
1 code implementation • 19 Sep 2022 • Van Nguyen, Trung Le, Chakkrit Tantithamthavorn, John Grundy, Hung Nguyen, Dinh Phung
However, there are still two open and significant issues for SVD in terms of i) learning automatic representations to improve the predictive performance of SVD, and ii) tackling the scarcity of labeled vulnerabilities datasets that conventionally need laborious labeling effort by experts.
2 code implementations • 19 Sep 2022 • Chuanxia Zheng, Long Tung Vuong, Jianfei Cai, Dinh Phung
Although two-stage Vector Quantized (VQ) generative models allow for synthesizing high-fidelity and high-resolution images, their quantization operator encodes similar patches within an image into the same index, resulting in a repeated artifact for similar adjacent regions using existing decoder architectures.
1 code implementation • 7 Jul 2022 • Vy Vo, Van Nguyen, Trung Le, Quan Hung Tran, Gholamreza Haffari, Seyit Camtepe, Dinh Phung
A popular attribution-based approach is to exploit local neighborhoods for learning instance-specific explainers in an additive manner.
1 code implementation • 4 Jun 2022 • Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung
Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem.
no code implementations • 5 Apr 2022 • Chuanxia Zheng, Guoxian Song, Tat-Jen Cham, Jianfei Cai, Dinh Phung, Linjie Luo
In this work, we present a novel framework for pluralistic image completion that can achieve both high quality and diversity at much faster inference speed.
1 code implementation • 1 Mar 2022 • Hoang Phan, Trung Le, Trung Phung, Tuan Anh Bui, Nhat Ho, Dinh Phung
First, they purely focus on local regularization to strengthen model robustness, missing a global regularization effect which is useful in many real-world applications (e.g., domain adaptation, domain generalization, and adversarial machine learning).
1 code implementation • ICLR 2022 • Tuan Anh Bui, Trung Le, Quan Tran, He Zhao, Dinh Phung
We introduce a new Wasserstein cost function and a new series of risk functions, with which we show that standard AT methods are special cases of their counterparts in our framework.
1 code implementation • 22 Feb 2022 • Tam Le, Truyen Nguyen, Dinh Phung, Viet Anh Nguyen
In this work, we consider probability measures supported on a graph metric space and propose a novel Sobolev transport metric.
1 code implementation • 16 Dec 2021 • Vinh Tong, Dai Quoc Nguyen, Dinh Phung, Dat Quoc Nguyen
WGE also constructs another single undirected graph from relation-focused constraints, which views entities and relations as nodes.
no code implementations • NeurIPS 2021 • Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh Phung
Domain adaptation (DA) benefits from the rigorous theoretical works that study its insightful characteristics and various aspects, e.g., learning domain-invariant representations and its trade-off.
no code implementations • 29 Oct 2021 • Dang Nguyen, Trang Nguyen, Khai Nguyen, Dinh Phung, Hung Bui, Nhat Ho
To address this issue, we propose a novel model fusion framework, named CLAFusion, to fuse neural networks with a different number of layers, which we refer to as heterogeneous neural networks, via cross-layer alignment.
no code implementations • 29 Oct 2021 • Trung Le, Dat Do, Tuan Nguyen, Huy Nguyen, Hung Bui, Nhat Ho, Dinh Phung
We study the label shift problem between the source and target domains in general domain adaptation (DA) settings.
1 code implementation • NeurIPS 2021 • Manh-Ha Bui, Toan Tran, Anh Tuan Tran, Dinh Phung
We empirically show that mDSDI provides competitive results with state-of-the-art techniques in DG.
1 code implementation • 14 Oct 2021 • Van-Anh Nguyen, Dai Quoc Nguyen, Van Nguyen, Trung Le, Quan Hung Tran, Dinh Phung
Identifying vulnerabilities in the source code is essential to protect the software systems from cyber security attacks.
1 code implementation • 1 Oct 2021 • Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung
To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.
no code implementations • 29 Sep 2021 • Siqi Xia, Shijie Liu, Trung Le, Dinh Phung, Sarah Erfani, Benjamin I. P. Rubinstein, Christopher Leckie, Paul Montague
More specifically, by minimizing the WS distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label in the latent space, which helps to strengthen the generalization ability of the classifier on adversarial examples.
no code implementations • 29 Sep 2021 • Van Nguyen, Trung Le, John C. Grundy, Dinh Phung
Software vulnerabilities existing in programs or functions of computer systems have become a serious and crucial concern.
no code implementations • 29 Sep 2021 • Son Duy Dao, He Zhao, Dinh Phung, Jianfei Cai
Recently, as an effective way of learning latent representations, contrastive learning has been increasingly popular and successful in various domains.
no code implementations • 29 Sep 2021 • Long Tung Vuong, Trung Quoc Phung, Toan Tran, Anh Tuan Tran, Dinh Phung, Trung Le
To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains.
1 code implementation • EMNLP 2021 • Thuy-Trang Vu, Xuanli He, Dinh Phung, Gholamreza Haffari
Once the in-domain data is detected by the classifier, the NMT model is then adapted to the new domain by jointly learning translation and domain discrimination tasks.
no code implementations • 3 Aug 2021 • Jing Liu, Bohan Zhuang, Mingkui Tan, Xu Liu, Dinh Phung, Yuanqing Li, Jianfei Cai
More critically, EAS is able to find compact architectures within 0.1 seconds for 50 deployment scenarios.
no code implementations • 24 Jul 2021 • Son D. Dao, Ethan Zhao, Dinh Phung, Jianfei Cai
Recently, as an effective way of learning latent representations, contrastive learning has been increasingly popular and successful in various domains.
1 code implementation • UAI 2021 • Tuan Nguyen, Trung Le, He Zhao, Quan Hung Tran, Truyen Nguyen, Dinh Phung
To this end, we propose in this paper a novel model for multi-source DA using the theory of optimal transport and imitation learning.
no code implementations • 27 Apr 2021 • Mahmoud Hossam, Trung Le, Michael Papasimeon, Viet Huynh, Dinh Phung
Generating realistic sequences is a central task in many machine learning applications.
no code implementations • 27 Apr 2021 • Mahmoud Hossam, Trung Le, He Zhao, Viet Huynh, Dinh Phung
There has been recently a growing interest in studying adversarial examples on natural language models in the black-box setting.
1 code implementation • 15 Apr 2021 • Dai Quoc Nguyen, Vinh Tong, Dinh Phung, Dat Quoc Nguyen
We introduce a novel embedding model, named NoGE, which aims to integrate co-occurrence among entities and relations into graph neural networks to improve knowledge graph completion (i.e., link prediction).
1 code implementation • CVPR 2022 • Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai, Dinh Phung
Bridging global context interactions correctly is important for high-fidelity image completion with large masks.
Ranked #2 on Image Inpainting on FFHQ 512 x 512
no code implementations • 28 Feb 2021 • He Zhao, Dinh Phung, Viet Huynh, Yuan Jin, Lan Du, Wray Buntine
Topic modelling has been a successful technique for text analysis for almost twenty years.
2 code implementations • 11 Feb 2021 • Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, Nhat Ho
To address these problems, we propose a novel mini-batch scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), which finds the optimal coupling between mini-batches and can be seen as an approximation to a well-defined distance on the space of probability measures.
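The building block here, optimal transport between two mini-batches, can be sketched with the standard entropic-regularized Sinkhorn iteration; this is the classic algorithm, not BoMb-OT itself.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.2, n_iter=2000):
    """Entropic-regularized optimal transport between two mini-batches.

    a, b : marginal weights of the two batches (each sums to 1)
    C    : (n, m) pairwise cost matrix
    Returns the transport plan P, whose row/column sums match a and b.
    """
    K = np.exp(-C / eps)                    # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                 # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
x, y = rng.random((4, 2)), rng.random((6, 2))          # two mini-batches
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)     # squared distances
a, b = np.full(4, 0.25), np.full(6, 1 / 6)
P = sinkhorn(a, b, C)
```

The entropic cost `(P * C).sum()` between mini-batches is the quantity that mini-batch OT schemes aggregate over batch pairs.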
1 code implementation • 25 Jan 2021 • Anh Bui, Trung Le, He Zhao, Paul Montague, Seyit Camtepe, Dinh Phung
Central to this approach is the selection of positive (similar) and negative (dissimilar) sets to provide the model the opportunity to 'contrast' between data and class representation in the latent space.
1 code implementation • ICCV 2021 • Van-Anh Nguyen, Tuan Nguyen, Trung Le, Quan Hung Tran, Dinh Phung
To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.
no code implementations • NeurIPS 2020 • Viet Huynh, He Zhao, Dinh Phung
We present an optimal transport framework for learning topics from textual data.
no code implementations • COLING 2020 • Quan Tran, Nhan Dam, Tuan Lai, Franck Dernoncourt, Trung Le, Nham Le, Dinh Phung
Interpretability and explainability of deep neural networks are challenging due to their scale, complexity, and the agreeable notions on which the explaining process rests.
1 code implementation • 14 Oct 2020 • Mahmoud Hossam, Trung Le, He Zhao, Dinh Phung
Training robust deep learning models for down-stream tasks is a critical challenge.
no code implementations • 13 Oct 2020 • He Zhao, Thanh Nguyen, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.
1 code implementation • EMNLP 2020 • Thuy-Trang Vu, Dinh Phung, Gholamreza Haffari
Recent work has shown the importance of adaptation of broad-coverage contextualised embedding models on the domain of the target task of interest.
1 code implementation • 26 Sep 2020 • Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dinh Phung
We propose a simple yet effective embedding model to learn quaternion embeddings for entities and relations in knowledge graphs.
1 code implementation • 21 Sep 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung
An important technique of this approach is to control the transferability of adversarial examples among ensemble members.
1 code implementation • 12 Aug 2020 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung
As demonstrated, the Quaternion space, a hyper-complex vector space, provides highly meaningful computations and analogical calculus through the Hamilton product compared to the Euclidean and complex vector spaces.
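The Hamilton product at the core of quaternion embedding models is easy to write down directly; the scoring function below is a generic QuatE-style illustration (rotate the head by a unit relation quaternion, then take the inner product with the tail), not necessarily this paper's exact model.

```python
import numpy as np

def hamilton(q1, q2):
    """Hamilton product of quaternions stored as (..., 4) arrays (a, b, c, d)."""
    a1, b1, c1, d1 = np.moveaxis(q1, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q2, -1, 0)
    return np.stack([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ], axis=-1)

def score(h, r, t):
    """Quaternion triple score: Hamilton-rotate the head by the normalized
    relation quaternion, then take the inner product with the tail."""
    r = r / np.linalg.norm(r, axis=-1, keepdims=True)
    return np.sum(hamilton(h, r) * t, axis=-1)

# basis quaternions i, j, k as (a, b, c, d) vectors
i, j, k = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
ij = hamilton(i, j)        # should equal k
ji = hamilton(j, i)        # should equal -k (non-commutative)
```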
1 code implementation • ICLR 2021 • He Zhao, Dinh Phung, Viet Huynh, Trung Le, Wray Buntine
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis.
Ranked #5 on Topic Models on 20NewsGroups
no code implementations • 6 Aug 2020 • Thanh Nguyen-Duc, He Zhao, Jianfei Cai, Dinh Phung
To interpret the teacher model and assist the learning of the student, an explainer module is introduced to highlight the regions of an input that are important for the predictions of the teacher model.
1 code implementation • ECCV 2020 • Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas Abraham, Dinh Phung
The fact that deep neural networks are susceptible to crafted perturbations severely impacts the use of deep learning in certain domains of application.
1 code implementation • 22 Jun 2020 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung
Although several signs of progress have been made recently, limited research has been conducted for an inductive setting where embeddings are required for newly unseen nodes -- a setting encountered commonly in practical applications of deep learning for graph networks.
1 code implementation • 16 Apr 2020 • Mahmoud Hossam, Trung Le, Viet Huynh, Michael Papasimeon, Dinh Phung
One of the challenging problems in sequence generation tasks is the optimized generation of sequences with specific desired goals.
1 code implementation • 12 Nov 2019 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung
In this paper, we focus on learning low-dimensional embeddings for nodes in graph-structured data.
Ranked #52 on Node Classification on Pubmed
no code implementations • 10 Oct 2019 • Tam Le, Viet Huynh, Nhat Ho, Dinh Phung, Makoto Yamada
We study in this paper a variant of Wasserstein barycenter problem, which we refer to as tree-Wasserstein barycenter, by leveraging a specific class of ground metrics, namely tree metrics, for Wasserstein distance.
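The tree-Wasserstein distance mentioned above admits a closed form: a sum over tree edges of edge length times the absolute net mass in the subtree below the edge. A short sketch makes this concrete (node ordering assumption noted in the docstring).

```python
import numpy as np

def tree_wasserstein(parent, weight, mu, nu):
    """Closed-form tree-Wasserstein distance between two distributions.

    parent[v] : parent of node v (root has parent -1); nodes are ordered so
                that every child index is larger than its parent's index.
    weight[v] : length of the edge (v, parent[v]); weight[0] is unused.
    mu, nu    : probability masses on the nodes.
    TW(mu, nu) = sum over edges of weight * |net mass in the subtree below|.
    """
    diff = np.asarray(mu, float) - np.asarray(nu, float)
    total = 0.0
    for v in range(len(parent) - 1, 0, -1):   # children before parents
        total += weight[v] * abs(diff[v])     # net mass crossing this edge
        diff[parent[v]] += diff[v]            # push subtree mass upward
    return total

# path graph 0 - 1 - 2 with unit edge lengths: moving all mass from node 0
# to node 2 costs distance 2
tw = tree_wasserstein([-1, 0, 1], [0.0, 1.0, 1.0],
                      [1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```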
no code implementations • 3 Oct 2019 • He Zhao, Trung Le, Paul Montague, Olivier De Vel, Tamas Abraham, Dinh Phung
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks, which use carefully crafted images created to mislead a classifier.
1 code implementation • 26 Sep 2019 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung
The transformer self-attention network has been extensively used in research domains such as computer vision, image processing, and natural language processing.
Ranked #1 on Graph Classification on IMDb-M
no code implementations • 25 Sep 2019 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung
Thus, U2GAN can address the weaknesses in the existing models in order to produce plausible node embeddings whose sum is the final embedding of the whole graph.
no code implementations • Transportation Research Part C: Emerging Technologies 2019 • Loan N.N. Do, Hai L. Vu, Bao Q. Vo, Zhiyuan Liu, Dinh Phung
In this study, a deep learning based traffic flow predictor with spatial and temporal attentions (STANN) is proposed.
1 code implementation • ACL 2020 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dinh Phung
Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems.
1 code implementation • ACL 2019 • Thuy-Trang Vu, Ming Liu, Dinh Phung, Gholamreza Haffari
Heuristic-based active learning (AL) methods are limited when the data distribution of the underlying learning problems vary.
no code implementations • ICLR 2019 • Tue Le, Tuan Nguyen, Trung Le, Dinh Phung, Paul Montague, Olivier De Vel, Lizhen Qu
Due to the sharp increase in the severity of the threat posed by software vulnerabilities, the detection of vulnerabilities in binary code has become an important concern in the software industry, such as the embedded systems industry, and in the field of computer security.
no code implementations • 25 Jan 2019 • Trung Le, Dinh Phung
Previous work has questioned the conditions under which the decision regions of a neural network are connected and further showed the implications of the corresponding theory to the problem of adversarial manipulation of classifiers.
no code implementations • 15 Nov 2018 • Trung Le, Khanh Nguyen, Nhat Ho, Hung Bui, Dinh Phung
The underlying idea of deep domain adaptation is to bridge the gap between source and target domains in a joint space so that a supervised classifier trained on labeled source data can be nicely transferred to the target domain.
no code implementations • 29 Oct 2018 • Nhat Ho, Viet Huynh, Dinh Phung, Michael I. Jordan
We propose a novel probabilistic approach to multilevel clustering problems based on composite transportation distance, which is a variant of transportation distance where the underlying metric is Kullback-Leibler divergence.
2 code implementations • NAACL 2019 • Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung
In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object).
Ranked #41 on Link Prediction on WN18RR
no code implementations • 3 May 2018 • Hung Vu, Tu Dinh Nguyen, Dinh Phung
Abnormal event detection is one of the important objectives in research and practical applications of video surveillance.
no code implementations • 12 Apr 2018 • Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dinh Phung
After that, the 3-column matrix is fed into a deep learning architecture to re-rank the search results returned by a basis ranker.
1 code implementation • ICLR 2018 • Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung
We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem.
3 code implementations • NAACL 2018 • Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung
This 3-column matrix is then fed to a convolution layer where multiple filters are operated on the matrix to generate different feature maps.
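The pipeline described above (stack the three embeddings into a k x 3 matrix, slide 1 x 3 filters over its rows, project the concatenated feature maps) can be sketched as follows; the ReLU and the final linear projection to a scalar score are assumptions of this ConvKB-style sketch.

```python
import numpy as np

def convkb_score(h, r, t, filters, w):
    """ConvKB-style triple score (sketch).

    The k-dimensional embeddings of (head, relation, tail) form a k x 3
    matrix; each 1 x 3 filter slides down its rows to produce a k-dim
    feature map, and the concatenated maps are projected to a scalar.
    """
    M = np.stack([h, r, t], axis=1)                       # (k, 3) input matrix
    maps = [np.maximum(M @ f, 0.0) for f in filters]      # ReLU feature maps
    return np.concatenate(maps) @ w                       # scalar triple score

rng = np.random.default_rng(0)
k, n_filters = 4, 3
h, r, t = rng.standard_normal((3, k))                     # embeddings
filters = rng.standard_normal((n_filters, 3))             # 1 x 3 filters
w = rng.standard_normal(k * n_filters)                    # projection weights
s = convkb_score(h, r, t, filters, w)
```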
Ranked #59 on Link Prediction on WN18RR
no code implementations • 6 Nov 2017 • Trung Le, Tu Dinh Nguyen, Dinh Phung
In this paper, we propose a new viewpoint for GANs, which is termed as the minimizing general loss viewpoint.
no code implementations • 19 Sep 2017 • Trung Le, Khanh Nguyen, Tu Dinh Nguyen, Dinh Phung
With this spirit, in this paper, we propose Analogical-based Bayesian Optimization that can maximize black-box function over a domain where only a similarity score can be defined.
2 code implementations • NeurIPS 2017 • Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung
We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapsing problem.
Ranked #18 on Image Generation on STL-10 (Inception score metric)
no code implementations • 18 Aug 2017 • Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh
Of current representation learning schemes, restricted Boltzmann machines (RBMs) have proved to be highly effective in unsupervised settings.
no code implementations • 18 Aug 2017 • Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh
The analysis of mixed data has been raising challenges in statistics and machine learning.
no code implementations • 17 Aug 2017 • Hung Vu, Dinh Phung, Tu Dinh Nguyen, Anthony Trevors, Svetha Venkatesh
Automated detection of abnormalities in data has been an active research area in recent years because of its diverse applications in practice, including video surveillance, industrial damage detection and network intrusion detection.
no code implementations • 16 Aug 2017 • Trung Le, Hung Vu, Tu Dinh Nguyen, Dinh Phung
Training models to generate data has increasingly attracted research attention and become important in modern world applications.
no code implementations • 8 Aug 2017 • Quan Hoang, Tu Dinh Nguyen, Trung Le, Dinh Phung
A minimax formulation is able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN.
1 code implementation • ICML 2017 • Nhat Ho, XuanLong Nguyen, Mikhail Yurochkin, Hung Hai Bui, Viet Huynh, Dinh Phung
We propose a novel approach to the problem of multilevel clustering, which aims to simultaneously partition data in each group and discover grouping patterns among groups in a potentially large hierarchically structured corpus of data.
no code implementations • 27 Mar 2017 • Quang N. Tran, Ba-Ngu Vo, Dinh Phung, Ba-Tuong Vo, Thuong Nguyen
Multiple instance data are sets or multi-sets of unordered elements.
no code implementations • 14 Mar 2017 • Dinh Phung, Ba-Ngu Vo
The goal of data clustering is to partition data points into groups to minimize a given objective function.
no code implementations • 7 Mar 2017 • Ba-Ngu Vo, Dinh Phung, Quang N. Tran, Ba-Tuong Vo
While Multiple Instance (MI) data are point patterns -- sets or multi-sets of unordered points -- appropriate statistical point pattern models have not been used in MI learning.
no code implementations • 8 Feb 2017 • Quang N. Tran, Ba-Ngu Vo, Dinh Phung, Ba-Tuong Vo
However, there has been limited research in the clustering of point patterns - sets or multi-sets of unordered elements - that are found in numerous applications and data sources.
no code implementations • 30 Jan 2017 • Ba-Ngu Vo, Quang N. Tran, Dinh Phung, Ba-Tuong Vo
Point patterns are sets or multi-sets of unordered elements that can be found in numerous data sources.
no code implementations • 2 Dec 2016 • Dang Nguyen, Wei Luo, Dinh Phung, Svetha Venkatesh
In this paper, we consider the patient similarity matching problem over a cancer cohort of more than 220,000 patients.
no code implementations • NeurIPS 2016 • Trung Le, Tu Nguyen, Vu Nguyen, Dinh Phung
However, this approach still suffers from a serious shortcoming as it needs to use a high-dimensional random feature space to achieve a sufficiently accurate kernel approximation.
no code implementations • 28 Sep 2016 • Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh
Using a linear model as the basis for prediction, we achieve feature stability by regularising latent correlation in features.
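One common way to stabilise the weights of correlated features is a Laplacian-style penalty over the feature-correlation graph, which ties the weights of correlated features together. The sketch below is illustrative only, not the paper's exact regulariser; the penalty strengths `lam` and `gamma` are assumed hyperparameters:

```python
import numpy as np

def fit_correlation_regularised(X, y, lam=1.0, gamma=1.0):
    """Closed-form solution of
        min_w ||y - Xw||^2 + lam * ||w||^2 + gamma * w^T L w,
    where L is the Laplacian of the feature-correlation graph,
    so correlated features are pushed toward similar weights."""
    C = np.abs(np.corrcoef(X, rowvar=False))  # feature-feature correlation
    np.fill_diagonal(C, 0.0)
    L = np.diag(C.sum(axis=1)) - C            # graph Laplacian of |correlation|
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d) + gamma * L
    return np.linalg.solve(A, X.T @ y)
```

Because the Laplacian term penalises weight differences between strongly correlated features, small perturbations of the data no longer flip the weight arbitrarily between near-duplicate features, which is the stability property at issue.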
1 code implementation • 15 Sep 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) long-range, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations.
1 code implementation • 17 Aug 2016 • Kien Do, Truyen Tran, Dinh Phung, Svetha Venkatesh
We evaluate the proposed method on synthetic and real-world datasets and demonstrate that (a) proper handling of mixed types is necessary in outlier detection, and (b) the free energy of Mv.RBM is a powerful and efficient outlier scoring method that is highly competitive against the state-of-the-art.
no code implementations • 11 Aug 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
Gates are employed in many recent state-of-the-art recurrent models such as LSTM and GRU, and feedforward models such as Residual Nets and Highway Networks.
no code implementations • 28 Jul 2016 • Truyen Tran, Wei Luo, Dinh Phung, Jonathan Morris, Kristen Rickard, Svetha Venkatesh
Preterm births occur at an alarming rate of 10-15%.
no code implementations • 22 Jun 2016 • Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung
Acquiring labels is often costly, whereas unlabeled data are usually easy to obtain in modern machine learning applications.
no code implementations • 3 May 2016 • Thuong Nguyen, Truyen Tran, Shivapratap Gopakumar, Dinh Phung, Svetha Venkatesh
Accurate prediction of suicide risk in mental health patients remains an open problem.
1 code implementation • 22 Apr 2016 • Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
One of the most challenging problems in kernel online learning is to bound the model size and to promote the model sparsity.
no code implementations • 4 Mar 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce a deep multitask architecture to integrate multityped representations of multimodal objects.
no code implementations • 17 Feb 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce Neural Choice by Elimination, a new framework that integrates deep neural networks into probabilistic sequential choice models for learning to rank.
no code implementations • 9 Feb 2016 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Recommender systems play a central role in providing individualized access to information and services.
1 code implementation • 1 Feb 2016 • Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes.
no code implementations • 25 Dec 2015 • Adham Beykikhoshk, Ognjen Arandjelovic, Dinh Phung, Svetha Venkatesh
In this paper we describe a novel framework for the discovery of the topical content of a data corpus, and the tracking of its complex structural changes across the temporal dimension.
no code implementations • 8 Feb 2015 • Adham Beykikhoshk, Ognjen Arandjelovic, Dinh Phung, Svetha Venkatesh
In this paper we describe a novel framework for the discovery of the topical content of a data corpus, and the tracking of its complex structural changes across the temporal dimension.
no code implementations • 6 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh, Hung H. Bui
In this contribution, we propose a new approximation technique that has the potential to achieve sub-cubic time complexity in length and linear time depth, at the cost of some loss of quality.
no code implementations • 6 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Modern datasets are becoming heterogeneous.
no code implementations • 1 Aug 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
We introduce Thurstonian Boltzmann Machines (TBM), a unified architecture that can naturally incorporate a wide range of data inputs at the same time.
no code implementations • 31 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Ordinal data is omnipresent in almost all multiuser-generated feedback: questionnaires, preferences, etc.
no code implementations • 31 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Ranking over sets arises when users choose between groups of items.
no code implementations • 24 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
Learning structured outputs with general structures is computationally challenging, except for tree-structured models.
no code implementations • 23 Jul 2014 • Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh
Stability in clinical prediction models is crucial for transferability between studies, yet has received little attention.
no code implementations • 23 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
In practical settings, the task often reduces to estimating a rank functional of an object with respect to a query.
no code implementations • 22 Jul 2014 • Truyen Tran, Dinh Phung, Svetha Venkatesh
The \emph{maximum a posteriori} (MAP) assignment for general structure Markov random fields (MRFs) is computationally intractable.
no code implementations • 9 Jan 2014 • Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui
We present a Bayesian nonparametric framework for multilevel clustering which utilizes group-level context information to simultaneously discover low-dimensional structures of the group contents and partitions groups into clusters.