no code implementations • 7 Mar 2024 • Yu Zhu, Chuxiong Sun, Wenfei Yang, Wenqiang Wei, Bo Tang, Tianzhu Zhang, Zhiyu Li, Shifeng Zhang, Feiyu Xiong, Jie Hu, MingChuan Yang
Reinforcement Learning from Human Feedback (RLHF) is the prevailing approach to ensure Large Language Models (LLMs) align with human values.
1 code implementation • 17 Mar 2022 • Yuan Cao, Zhiqiao Gao, Jie Hu, MingChuan Yang, Jinpeng Chen
As a result, informative samples in the margin area cannot be discovered, and AL performance is degraded.
1 code implementation • 30 Dec 2020 • Chuxiong Sun, Jie Hu, Hongming Gu, Jinpeng Chen, MingChuan Yang
As of the submission date (Aug 26, 2022), AGDNs achieve top-1 performance on the ogbn-arxiv, ogbn-proteins, and ogbl-ddi datasets and top-3 performance on the ogbl-citation2 dataset.
Ranked #1 on Link Property Prediction on ogbl-citation2