no code implementations • 5 Feb 2024 • Kien Do, Duc Kieu, Toan Nguyen, Dang Nguyen, Hung Le, Dung Nguyen, Thin Nguyen
We introduce "posterior flows" - generalizations of "probability flows" to a broader class of stochastic processes that are not necessarily diffusion processes - and propose a systematic, training-free method to transform the posterior flow of a "linear" stochastic process, characterized by the equation X_t = a_t * X_0 + s_t * X_1, into a straight constant-speed (SC) flow, reminiscent of Rectified Flow.
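A minimal sketch of the linear interpolation the abstract refers to (an illustration only, not the paper's method): with the schedule a_t = 1 - t and s_t = t, the paths X_t = a_t * X_0 + s_t * X_1 are straight and traversed at constant speed, the form used by Rectified Flow.

```python
import numpy as np

def linear_interpolant(x0, x1, t, a=lambda t: 1.0 - t, s=lambda t: t):
    """Sample X_t = a_t * X0 + s_t * X1 along a linear stochastic process.

    The default schedule (a_t = 1 - t, s_t = t) gives a straight
    constant-speed (SC) path, as in Rectified Flow.
    """
    return a(t) * x0 + s(t) * x1

rng = np.random.default_rng(0)
x0 = rng.standard_normal(3)   # draw from the source distribution
x1 = rng.standard_normal(3)   # draw from the target distribution

# With the SC schedule the velocity dX_t/dt = x1 - x0 is constant in t:
v_early = (linear_interpolant(x0, x1, 0.11) - linear_interpolant(x0, x1, 0.10)) / 0.01
v_late = (linear_interpolant(x0, x1, 0.91) - linear_interpolant(x0, x1, 0.90)) / 0.01
assert np.allclose(v_early, v_late)   # straight, constant-speed path
```

Other choices of a_t and s_t (e.g. the cosine schedules used by diffusion models) produce curved paths, which is what the proposed transformation straightens out.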
no code implementations • 19 Dec 2023 • Phuoc Nguyen, Truyen Tran, Sunil Gupta, Thin Nguyen, Svetha Venkatesh
We then represent the functional form of a target outlier leaf as a function of the node and edge noises.
no code implementations • 15 Dec 2023 • Quang-Duy Tran, Bao Duong, Phuoc Nguyen, Thin Nguyen
One solution to this problem is to assume that cause and effect are generated from a structural causal model, enabling identification of the causal direction after estimating the model in each direction.
1 code implementation • 28 Oct 2023 • Toan Nguyen, Kien Do, Bao Duong, Thin Nguyen
Hence, we put forward a compelling proposition: minimising the divergences between risk distributions across training domains leads to robust invariance for DG.
1 code implementation • 4 Sep 2023 • Quang-Duy Tran, Phuoc Nguyen, Bao Duong, Thin Nguyen
Score-based approaches in the structure learning task are thriving because of their scalability.
1 code implementation • 16 Jul 2023 • Bao Duong, Thin Nguyen
The result is HOST (Heteroscedastic causal STructure learning), a simple yet effective causal structure learning algorithm that scales polynomially in both sample size and dimensionality.
1 code implementation • 6 Dec 2022 • Toan Nguyen, Kien Do, Duc Thanh Nguyen, Bao Duong, Thin Nguyen
Well-known causal inference methods such as back-door adjustment cannot be applied to remove spurious correlations, as they require the observation of confounders.
1 code implementation • 20 Nov 2022 • Bao Duong, Thin Nguyen
Mutual Information (MI) and Conditional Mutual Information (CMI) are multi-purpose tools from information theory that naturally measure the statistical dependencies between random variables. They are therefore of central interest in several statistical and machine learning tasks, such as conditional independence testing and representation learning.
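For discrete variables with a known joint distribution, MI has a simple closed form, MI(X;Y) = sum_{x,y} p(x,y) log[p(x,y) / (p(x)p(y))]; a minimal sketch (generic textbook computation, not the estimator of this paper):

```python
import numpy as np

def mutual_information(pxy):
    """MI in nats from a discrete joint distribution table p(x, y)."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0                        # skip zero-probability cells
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Independent variables: joint factorises, so MI is 0.
indep = np.outer([0.5, 0.5], [0.25, 0.75])
print(mutual_information(indep))          # ~0.0

# Perfectly dependent binary variables: MI = log 2 nats.
dep = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(dep))            # ~0.693
```

Estimating MI and CMI from samples of continuous variables, the setting these papers target, is much harder than this table-based case, which is why learned estimators are of interest.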
1 code implementation • 4 Sep 2022 • Bao Duong, Thin Nguyen
Detecting conditional independencies plays a key role in several statistical and machine learning tasks, especially in causal discovery algorithms.
1 code implementation • 25 Jul 2022 • Azhar Mohammed, Dang Nguyen, Bao Duong, Thin Nguyen
Data augmentation is one of the most successful techniques to improve the classification accuracy of machine learning models in computer vision.
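For concreteness, a minimal example of the kind of label-preserving transform data augmentation applies (generic flip-and-crop, not the specific augmentation method proposed here):

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip plus a random crop with zero padding."""
    if rng.random() < 0.5:
        image = image[:, ::-1]            # horizontal flip
    pad = 2
    h, w = image.shape
    padded = np.zeros((h + 2 * pad, w + 2 * pad), dtype=image.dtype)
    padded[pad:pad + h, pad:pad + w] = image
    top = rng.integers(0, 2 * pad + 1)    # random crop offsets
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)
aug = augment(img, rng)
assert aug.shape == img.shape             # label-preserving: same size, same class
```

The transform keeps the class label unchanged while perturbing the input, effectively enlarging the training set.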
no code implementations • 14 Feb 2022 • Tri Minh Nguyen, Thin Nguyen, Truyen Tran
Discovering new medicines is the hallmark of human endeavor to live a better and longer life.
1 code implementation • 16 Jan 2022 • Tri Minh Nguyen, Thin Nguyen, Truyen Tran
While the drug or target representation can be learned in an unsupervised manner, it still lacks the interaction information, which is critical for drug-target interaction prediction.
1 code implementation • 24 Mar 2021 • Tri Minh Nguyen, Thomas P Quinn, Thin Nguyen, Truyen Tran
Methods: We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex.
1 code implementation • 25 Sep 2020 • Tri Minh Nguyen, Thin Nguyen, Thao Minh Le, Truyen Tran
In addition, previous DTA methods learn protein representation solely based on a small number of protein sequences in DTA datasets while neglecting the use of proteins outside of the DTA datasets.
no code implementations • 28 Apr 2020 • Dung Nguyen, Duc Thanh Nguyen, Rui Zeng, Thanh Thi Nguyen, Son N. Tran, Thin Nguyen, Sridha Sridharan, Clinton Fookes
Multimodal dimensional emotion recognition has drawn great attention from the affective computing community, and numerous schemes have been extensively investigated, making significant progress in this area.
1 code implementation • bioRxiv 2019 • Thin Nguyen, Hang Le, Svetha Venkatesh
The results show that our proposed method can not only predict the affinity better than non-deep learning models, but also outperform competing deep learning approaches.
Ranked #4 on Drug Discovery on KIBA
1 code implementation • NeurIPS 2018 • Hung Le, Truyen Tran, Thin Nguyen, Svetha Venkatesh
Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation.
no code implementations • 1 Apr 2018 • Kien Do, Truyen Tran, Thin Nguyen, Svetha Venkatesh
GAML regards labels as auxiliary nodes and models them in conjunction with the input graph.