1 code implementation • 27 Feb 2024 • Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang
Through carefully designed intervention experiments on five math tasks, we confirm that transformers perform case-based reasoning regardless of whether a scratchpad is used, which aligns with previous observations that transformers rely on subgraph matching and shortcut learning to reason.
1 code implementation • 9 Nov 2023 • Fanxu Meng, Haotong Yang, Yiding Wang, Muhan Zhang
The human brain is naturally equipped to comprehend and interpret visual information rapidly.
no code implementations • 9 Oct 2023 • Haotong Yang, Fanxu Meng, Zhouchen Lin, Muhan Zhang
Furthermore, by generalizing this structure to the hierarchical case, we demonstrate that models can achieve task composition, reducing the space needed for learning from linear to logarithmic and thereby enabling effective learning on complex multi-step reasoning.
no code implementations • 29 May 2023 • Yi Hu, Haotong Yang, Zhouchen Lin, Muhan Zhang
We also consider the ensemble of code prompting and CoT prompting to combine the strengths of both.
1 code implementation • 2 Feb 2023 • Xiyuan Wang, Haotong Yang, Muhan Zhang
Despite its outstanding performance on various graph tasks, the vanilla Message Passing Neural Network (MPNN) usually fails at link prediction, as it uses only the representations of the two individual target nodes and ignores the pairwise relation between them.
Ranked #1 on Link Property Prediction on ogbl-ddi
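The failure mode described above can be illustrated with a minimal sketch (not the paper's method): on a symmetric graph, a structure-only MPNN assigns every node the same embedding, so any score built from the two target-node representations alone cannot distinguish a linked pair from an unlinked one. The network below is a toy sum-aggregate message-passing model with random weights, assumed purely for illustration.

```python
import numpy as np

def mpnn_embeddings(adj, layers=3, dim=4, seed=0):
    # Structure-only message passing: every node starts from the same
    # feature vector, then repeatedly aggregates neighbour messages.
    rng = np.random.default_rng(seed)
    weights = [rng.standard_normal((dim, dim)) for _ in range(layers)]
    h = np.ones((adj.shape[0], dim))
    for w in weights:
        h = np.tanh(adj @ h @ w)  # sum-aggregate neighbours, then transform
    return h

# 6-node cycle: every node plays an identical structural role.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

h = mpnn_embeddings(adj)

# Scoring a link from the two node embeddings alone cannot separate
# the true edge (0, 1) from the non-edge (0, 3): all nodes got the
# same embedding, so both pairs receive the same score.
def score(u, v):
    return float(h[u] @ h[v])

print(score(0, 1), score(0, 3))  # identical scores
```

Pairwise features such as common-neighbour counts or labeling tricks break this symmetry, which is the gap the paper's approach targets.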
1 code implementation • 19 Sep 2022 • Haotong Yang, Zhouchen Lin, Muhan Zhang
However, evaluation of knowledge graph completion (KGC) models often ignores this incompleteness: facts in the test set are ranked against all unknown triplets, which may contain a large number of true facts not yet included in the KG.
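A toy sketch of the issue (illustrative only; the entities, relation, and scores below are assumed, not from the paper): standard ranking evaluation treats every candidate absent from the KG as a negative, so a true-but-missing fact that the model correctly scores highly still pushes the test fact's rank down and penalizes the model.

```python
# Evaluate the test triple (alice, knows, bob) by ranking "bob"
# against all candidate tails under assumed model scores.
entities = ["alice", "bob", "carol", "dave"]
scores = {
    "bob": 0.9,    # the held-out test fact
    "carol": 0.95, # a TRUE fact that is simply missing from the KG
    "dave": 0.1,
    "alice": 0.0,
}

def rank_of(target, scores):
    # 1-based rank of the target among all candidates, best score first.
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(target) + 1

# "carol" is counted as a negative even though the fact is true,
# so the model is penalized for being right about it.
print(rank_of("bob", scores))  # rank 2 instead of 1
```

Filtered evaluation removes known true triples from the candidate set, but by definition it cannot filter facts the KG has not recorded, which is the incompleteness the paper highlights.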