Search Results for author: Khiem Le

Found 4 papers, 1 paper with code

MolX: Enhancing Large Language Models for Molecular Learning with A Multi-Modal Extension

no code implementations • 10 Jun 2024 • Khiem Le, Zhichun Guo, Kaiwen Dong, Xiaobao Huang, Bozhao Nan, Roshni Iyer, Xiangliang Zhang, Olaf Wiest, Wei Wang, Nitesh V. Chawla

Large Language Models (LLMs), with their strong task-handling capabilities, have shown remarkable advancements across a spectrum of fields, moving beyond natural language understanding.

Natural Language Understanding • Retrosynthesis

Exploring the Practicality of Federated Learning: A Survey Towards the Communication Perspective

no code implementations • 30 May 2024 • Khiem Le, Nhan Luong-Ha, Manh Nguyen-Duc, Danh Le-Phuoc, Cuong Do, Kok-Seng Wong

Federated Learning (FL) is a promising paradigm that offers significant advancements in privacy-preserving, decentralized machine learning by enabling collaborative training of models across distributed devices without centralizing data.

Federated Learning • Privacy Preserving
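The survey covers the communication side of FL rather than one algorithm. For readers unfamiliar with the paradigm the snippet describes, here is a minimal FedAvg-style sketch of a single communication round, in plain NumPy, with hypothetical names (`local_update`, `fedavg_round`) and a toy least-squares objective standing in for real client training:

```python
import numpy as np

def local_update(weights, client_data, lr=0.1):
    # Hypothetical local training: one gradient step on the client's
    # own data for a linear least-squares model. Raw data never leaves
    # the client; only the updated weight vector is returned.
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, clients):
    """One communication round: every client trains locally, then the
    server averages the returned weights (weighted by dataset size)."""
    updates = [local_update(global_weights.copy(), data) for data in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Toy usage: three clients sharing one linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(w)  # approaches [2.0, -1.0] without ever pooling client data
```

Only the weight vectors cross the network each round, which is why communication cost, rather than data movement, is the bottleneck this survey examines.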

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

1 code implementation • 12 Dec 2023 • Giang Do, Khiem Le, Quang Pham, TrungTin Nguyen, Thanh-Nam Doan, Binh T. Nguyen, Chenghao Liu, Savitha Ramasamy, Xiaoli Li, Steven Hoi

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models.
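The snippet describes generic top-k token routing; HyperRouter's own contribution, generating the router's parameters through a fixed hypernetwork, is not reproduced here. A minimal sketch of the baseline routing step the sentence refers to, assuming NumPy and illustrative names (`top_k_routing`, `router_w`):

```python
import numpy as np

def top_k_routing(tokens, router_w, experts, k=2):
    """Send each token to its top-k experts and mix their outputs with
    softmax-normalized router scores, so only k experts run per token."""
    logits = tokens @ router_w                   # (n_tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of k best experts
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        chosen = logits[i, topk[i]]
        gates = np.exp(chosen - chosen.max())
        gates /= gates.sum()                     # softmax over the k chosen
        for gate, e in zip(gates, topk[i]):
            out[i] += gate * experts[e](tok)
    return out

# Toy usage: 4 experts as random linear maps over 16-dim tokens.
rng = np.random.default_rng(0)
d, n_experts = 16, 4
experts = [lambda x, W=rng.normal(size=(d, d)) / d: x @ W
           for _ in range(n_experts)]
router_w = rng.normal(size=(d, n_experts))
tokens = rng.normal(size=(8, d))
print(top_k_routing(tokens, router_w, experts).shape)  # (8, 16)
```

Because each token activates only k of the experts, compute grows with k rather than with the total expert count, which is the efficiency property the abstract highlights.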
