1 code implementation • 13 Oct 2023 • Qiming Bao, Gael Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu
Despite their high performance on the original publicly available datasets, we find that all models perform poorly on these newly constructed datasets.
1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu
When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.
1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie
We perform extensive evaluations of state-of-the-art LLMs, showing that they currently achieve very limited performance on these tasks compared with other natural language tasks, even when applying techniques known to improve performance elsewhere in NLP.
1 code implementation • 21 May 2023 • Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Gael Gendron, Timothy Pistotti, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Paul Denny, Michael Witbrock, Jiamou Liu
Combining large language models with logical reasoning enhances their capacity to address problems robustly and reliably.
no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock
Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints.
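The cost the abstract refers to stems from standard scaled dot-product self-attention, whose score matrix is quadratic in the sequence length. A minimal generic sketch (my own illustration, not the paper's code) makes this visible:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention sketch: the (n, n) score matrix is what makes
    long inputs expensive -- compute and memory both grow as O(n^2)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n, n): quadratic in sequence length n
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n, d_v)

# Doubling n quadruples the score matrix: 1024 tokens -> ~1M entries per head,
# 8192 tokens -> ~67M entries, which is where hardware limits start to bind.
n, d = 1024, 64
Q = K = V = np.random.randn(n, d)
out = scaled_dot_product_attention(Q, K, V)
```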
1 code implementation • 28 Jul 2022 • Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu
In our model, reasoning is performed by an iterative, RNN-based memory neural network with a gated attention mechanism.
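A rough sketch of what such an iterative, gated-attention memory update might look like (an illustrative reconstruction under my own assumptions, not the released code; the module name and shapes are hypothetical):

```python
import torch
import torch.nn as nn

class GatedAttentionMemory(nn.Module):
    """Hypothetical sketch: at each reasoning step the memory attends over
    the encoded rules, and a learned gate decides how much of the attended
    evidence is written back into the RNN memory state."""
    def __init__(self, hidden: int):
        super().__init__()
        self.rnn_cell = nn.GRUCell(hidden, hidden)   # RNN-based memory update
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, rules: torch.Tensor, memory: torch.Tensor, steps: int = 3):
        # rules: (num_rules, hidden); memory: (hidden,)
        for _ in range(steps):
            attn = torch.softmax(rules @ memory, dim=0)               # attention over rules
            read = attn @ rules                                       # attended evidence, (hidden,)
            g = torch.sigmoid(self.gate(torch.cat([memory, read])))   # write gate
            memory = self.rnn_cell((g * read).unsqueeze(0),
                                   memory.unsqueeze(0)).squeeze(0)
        return memory

mem = GatedAttentionMemory(64)
out = mem(torch.randn(10, 64), torch.zeros(64))
```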
1 code implementation • Findings (ACL) 2022 • Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock
Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language. Abductive reasoning - inference to the best explanation of an unexpected observation - remains underexplored, however, despite its significant applications to scientific discovery, common-sense reasoning, and model interpretability.
no code implementations • 9 Dec 2021 • Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock
If we assume that artificial networks have no form of visual experience, then the deficits caused by blindsight offer insights into the processes underlying visual experience, insights that we can incorporate into artificial neural networks.
no code implementations • 19 Nov 2021 • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu
We propose DeepQR, a novel neural-network model for automated question-quality rating (AQQR), trained on multiple-choice-question (MCQ) datasets collected from PeerWise, a widely used learnersourcing platform.
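As a purely hypothetical sketch of the general task setup (not the published DeepQR architecture): encode the MCQ's stem and options, then regress a quality score.

```python
import torch
import torch.nn as nn

class QualityRater(nn.Module):
    """Generic question-quality rater sketch (hypothetical architecture,
    not DeepQR): mean-pool token embeddings of the question stem and its
    options, then regress a scalar quality score."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)   # mean-pools each bag of token ids
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))

    def forward(self, stem_ids, option_ids):
        stem = self.embed(stem_ids)       # (batch, dim)
        opts = self.embed(option_ids)     # (batch, dim)
        return self.score(torch.cat([stem, opts], dim=-1)).squeeze(-1)

model = QualityRater(vocab_size=30000)
ratings = model(torch.randint(0, 30000, (4, 20)),    # 4 stems of 20 tokens
                torch.randint(0, 30000, (4, 60)))    # 4 option sets of 60 tokens
```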
2 code implementations • Proceedings of the Australasian Computer Science Week Multi-conference (ACSW 2020) 2020 • Qiming Bao, Lin Ni, Jiamou Liu
This paper proposes a chatbot framework that adopts a hybrid model combining a knowledge graph with a text similarity model.
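A minimal sketch of the general hybrid pattern, assuming a toy knowledge graph and a TF-IDF similarity fallback (both hypothetical, not the paper's components): answer from the structured source when an exact match exists, otherwise retrieve the most similar known question.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical toy data illustrating the hybrid pattern, not the paper's KB.
KNOWLEDGE_GRAPH = {("python", "creator"): "Guido van Rossum"}
FAQ = ["How do I reset my password?", "What are the opening hours?"]
FAQ_ANSWERS = ["Use the 'Forgot password' link.", "We open 9am-5pm on weekdays."]

vectorizer = TfidfVectorizer().fit(FAQ)
faq_matrix = vectorizer.transform(FAQ)

def answer(entity: str, relation: str, raw_question: str) -> str:
    # 1. Structured path: exact lookup in the knowledge graph.
    if (entity, relation) in KNOWLEDGE_GRAPH:
        return KNOWLEDGE_GRAPH[(entity, relation)]
    # 2. Fallback: retrieve the most similar FAQ entry by TF-IDF cosine similarity.
    sims = cosine_similarity(vectorizer.transform([raw_question]), faq_matrix)[0]
    return FAQ_ANSWERS[sims.argmax()]

print(answer("python", "creator", "who made python"))        # knowledge-graph hit
print(answer("", "", "how can I change my password"))         # similarity fallback
```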