Search Results for author: Tuan Dung Nguyen

Found 12 papers, 4 papers with code

AstroMLab 4: Benchmark-Topping Performance in Astronomy Q&A with a 70B-Parameter Domain-Specialized Reasoning Model

no code implementations • 23 May 2025 • Tijmen de Haan, Yuan-Sen Ting, Tirthankar Ghosal, Tuan Dung Nguyen, Alberto Accomazzi, Emily Herron, Vanessa Lama, Rui Pan, Azton Wells, Nesar Ramachandra

General-purpose large language models, despite their broad capabilities, often struggle with specialized domain knowledge, a limitation particularly pronounced in more accessible, lower-parameter versions.

Astronomy

Empirically evaluating commonsense intelligence in large language models with large-scale human judgments

no code implementations • 15 May 2025 • Tuan Dung Nguyen, Duncan J. Watts, Mark E. Whiting

Commonsense intelligence in machines is often assessed by static benchmarks that compare a model's output against human-prescribed correct labels.

Common Sense Reasoning

AstroMLab 1: Who Wins Astronomy Jeopardy!?

no code implementations • 15 Jul 2024 • Yuan-Sen Ting, Tuan Dung Nguyen, Tirthankar Ghosal, Rui Pan, Hardik Arora, Zechang Sun, Tijmen de Haan, Nesar Ramachandra, Azton Wells, Sandeep Madireddy, Alberto Accomazzi

This dataset comprises 4,425 multiple-choice questions curated from the Annual Review of Astronomy and Astrophysics, covering a broad range of astrophysical topics.

Astronomy · Benchmarking · +1

Federated PCA on Grassmann Manifold for IoT Anomaly Detection

1 code implementation • 10 Jul 2024 • Tung-Anh Nguyen, Long Tan Le, Tuan Dung Nguyen, Wei Bao, Suranga Seneviratne, Choong Seon Hong, Nguyen H. Tran

Experimental results on the UNSW-NB15 and TON-IoT datasets show that our proposed methods achieve anomaly-detection performance comparable to nonlinear baselines while delivering significant improvements in communication and memory efficiency, underscoring their potential for securing IoT networks.
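
The detection principle can be pictured with a minimal, non-federated sketch: learn a low-dimensional subspace from benign traffic and flag samples whose reconstruction error is large. This is only an illustration of the general PCA-based idea under assumed names (X_train, X_test, k, and the 99th-percentile threshold are placeholders), not the paper's federated Grassmann-manifold method.

    import numpy as np

    def fit_pca_subspace(X, k):
        # Center the data and keep the top-k right singular vectors as the subspace basis.
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k].T  # basis U of shape (d, k)

    def anomaly_scores(X, mean, U):
        # Score = distance from each sample to its projection onto span(U).
        Xc = X - mean
        residual = Xc - Xc @ U @ U.T
        return np.linalg.norm(residual, axis=1)

    # Hypothetical usage: X_train holds benign flows, X_test holds new traffic.
    # mean, U = fit_pca_subspace(X_train, k=10)
    # threshold = np.quantile(anomaly_scores(X_train, mean, U), 0.99)
    # flagged = anomaly_scores(X_test, mean, U) > threshold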

Intrusion Detection · Unsupervised Anomaly Detection

On Partial Optimal Transport: Revising the Infeasibility of Sinkhorn and Efficient Gradient Methods

1 code implementation • 21 Dec 2023 • Anh Duc Nguyen, Tuan Dung Nguyen, Quang Minh Nguyen, Hoang H. Nguyen, Lam M. Nguyen, Kim-Chuan Toh

This paper studies the Partial Optimal Transport (POT) problem between two unbalanced measures with at most $n$ supports and its applications in various AI tasks such as color transfer or domain adaptation.
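
For readers new to the setup: partial optimal transport relaxes the usual marginal constraints so that only a prescribed amount of mass s is moved between the two unbalanced measures. In notation assumed for this summary (not taken from the paper), the discrete problem reads

    \min_{T \ge 0} \; \langle C, T \rangle
    \quad \text{s.t.} \quad T\mathbf{1} \le \mathbf{a}, \qquad T^\top \mathbf{1} \le \mathbf{b}, \qquad \mathbf{1}^\top T \mathbf{1} = s,

where C is the pairwise cost matrix, a and b are the (unnormalized) mass vectors of the two measures, and 0 \le s \le \min(\|a\|_1, \|b\|_1) is the total mass to be transported.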

Domain Adaptation · Point Cloud Registration

Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization

no code implementations • 27 Sep 2023 • Long Tan Le, Tuan Dung Nguyen, Tung-Anh Nguyen, Choong Seon Hong, Suranga Seneviratne, Wei Bao, Nguyen H. Tran

Federated Learning (FL) has emerged as a groundbreaking distributed learning paradigm enabling clients to train a global model collaboratively without exchanging data.
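
As a concrete picture of this client-server pattern (the generic FedAvg-style round only, not the deep equilibrium method proposed in the paper), one communication round might look like the sketch below; the client interface (local_train, num_samples) is hypothetical.

    def federated_round(global_weights, clients):
        # Each client trains locally on its own data; raw data never leaves the client.
        updates, sizes = [], []
        for client in clients:
            updates.append(client.local_train(global_weights))  # hypothetical client API
            sizes.append(client.num_samples)

        # The server aggregates by a sample-weighted average of the returned models.
        total = sum(sizes)
        return {
            name: sum(n * u[name] for n, u in zip(sizes, updates)) / total
            for name in global_weights
        }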

Federated Learning · Information Retrieval

On the Generalization of Wasserstein Robust Federated Learning

no code implementations • 3 Jun 2022 • Tung-Anh Nguyen, Tuan Dung Nguyen, Long Tan Le, Canh T. Dinh, Nguyen H. Tran

We show that WAFL's robustness is more general than that of related approaches, and that its generalization bound holds for all adversarial distributions inside the Wasserstein ball (ambiguity set).
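
For context, the Wasserstein ball (ambiguity set) referred to here is the standard distributionally robust construction: all distributions within a chosen Wasserstein radius of a nominal distribution. With notation assumed for this summary,

    \mathcal{B}_\rho(P_0) = \{\, Q : W(Q, P_0) \le \rho \,\},

so a generalization bound that holds for every Q in \mathcal{B}_\rho(P_0) remains valid under any shift of the data distribution up to Wasserstein distance \rho.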

Domain Adaptation · Federated Learning

DONE: Distributed Approximate Newton-type Method for Federated Edge Learning

2 code implementations • 10 Dec 2020 • Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen, Wei Bao, Amir Rezaei Balef, Bing B. Zhou, Albert Y. Zomaya

In this work, we propose DONE, a distributed approximate Newton-type algorithm with a fast convergence rate for communication-efficient federated edge learning.
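
As background for "approximate Newton-type": each iteration targets the Newton direction, i.e. the solution d_t of the linear system \nabla^2 F(w_t)\, d_t = -\nabla F(w_t), but computes it only approximately to keep communication cheap. A generic update of this family (the step size \alpha is an assumed symbol, and this is not DONE's specific scheme) is

    d_t \approx -\big[\nabla^2 F(w_t)\big]^{-1} \nabla F(w_t),
    \qquad
    w_{t+1} = w_t + \alpha\, d_t.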

Edge-computing · Vocal Bursts Type Prediction

Personalized Federated Learning with Moreau Envelopes

4 code implementations • NeurIPS 2020 • Canh T. Dinh, Nguyen H. Tran, Tuan Dung Nguyen

Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data.
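
The Moreau envelope in the title smooths each client's loss through an L2-regularized inner minimization; a standard way to write the resulting personalized objective (notation assumed here, not quoted from the paper) is

    F_i(w) = \min_{\theta_i} \Big\{ f_i(\theta_i) + \tfrac{\lambda}{2}\,\lVert \theta_i - w \rVert^2 \Big\},
    \qquad
    \min_{w} \; \frac{1}{N} \sum_{i=1}^{N} F_i(w),

where f_i is client i's local loss, \theta_i its personalized model, w the shared global model, and \lambda controls how strongly personalized models are pulled toward w.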

Diversity · Meta-Learning · +4
