Search Results for author: Giang Do

Found 2 papers, 1 paper with code

CompeteSMoE - Effective Training of Sparse Mixture of Experts via Competition

no code implementations • 4 Feb 2024 • Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, XiaoLi Li, Steven Hoi, Nhat Ho

Sparse mixture of experts (SMoE) offers an appealing solution to scale up model complexity beyond the means of increasing the network's depth or width.

HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

1 code implementation • 12 Dec 2023 • Giang Do, Khiem Le, Quang Pham, TrungTin Nguyen, Thanh-Nam Doan, Binh T. Nguyen, Chenghao Liu, Savitha Ramasamy, XiaoLi Li, Steven Hoi

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models.
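To illustrate the routing idea described above, the sketch below shows a generic top-k token router for a sparse Mixture-of-Experts layer in PyTorch. It is a minimal, illustrative example only, not the implementation from either paper; the expert count, k value, dimensions, and class names (TopKRouter, SparseMoE) are assumptions chosen for clarity.

```python
# Minimal sketch of top-k token routing in a Sparse Mixture-of-Experts layer.
# Illustrative only; hyperparameters and names are assumptions, not from the papers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, d_model: int, num_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)  # router producing expert logits
        self.k = k

    def forward(self, tokens: torch.Tensor):
        # tokens: (num_tokens, d_model)
        logits = self.gate(tokens)                        # (num_tokens, num_experts)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_vals, dim=-1)            # normalize over the selected experts only
        return weights, topk_idx                          # each token is sent to k experts

class SparseMoE(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = TopKRouter(d_model, num_experts, k)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, tokens: torch.Tensor):
        weights, idx = self.router(tokens)                # both (num_tokens, k)
        out = torch.zeros_like(tokens)
        # Simple loop for clarity; real implementations dispatch tokens in batches per expert.
        for slot in range(idx.size(-1)):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(tokens[mask])
        return out

# usage: route 16 tokens of width 64 through 8 experts, 2 experts per token
x = torch.randn(16, 64)
y = SparseMoE(d_model=64)(x)
```

Because each token activates only k of the experts, the layer's parameter count grows with the number of experts while the per-token compute stays roughly constant, which is the efficiency argument both papers build on.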
