Search Results for author: Alex Yuxuan Peng

Found 6 papers, 4 with code

Can Large Language Models Learn Independent Causal Mechanisms?

no code implementations • 4 Feb 2024 • Gaël Gendron, Bao Trung Nguyen, Alex Yuxuan Peng, Michael Witbrock, Gillian Dobbie

We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks.

Language Modelling

Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning

1 code implementation • 13 Oct 2023 • Qiming Bao, Gael Gendron, Alex Yuxuan Peng, Wanjun Zhong, Neset Tan, Yang Chen, Michael Witbrock, Jiamou Liu

Despite their high performance on the original publicly available datasets, we find that all models perform poorly on these newly constructed datasets.

Data Augmentation • GPT-3.5 • +3

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu

When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.

Explanation Generation • GPT-4 • +3

Input-length-shortening and text generation via attention values

no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock

Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints.

Conditional Text Generation • text-classification • +1
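The abstract above refers to the attention mechanism's high computational cost as the source of the input-length limitation. A minimal NumPy sketch (not the paper's implementation) makes the quadratic term explicit: the score matrix of self-attention has shape (n, n), so memory and compute grow with the square of the sequence length.

```python
import numpy as np

def self_attention(q, k, v):
    """Scaled dot-product self-attention over a length-n sequence.

    The score matrix q @ k.T has shape (n, n), so time and memory grow
    quadratically with sequence length -- the hardware constraint behind
    the input-length limitation the abstract mentions.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                # (n, n) -- the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                           # (n, d)

n, d = 512, 64
x = np.random.randn(n, d)
out = self_attention(x, x, x)
print(out.shape)            # (512, 64)
print((2 * n) ** 2 / n**2)  # 4.0 -- doubling the input quadruples the score matrix
```

This is why shortening the input (as the paper proposes, using attention values to select salient spans) directly reduces the dominant cost term.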

Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation

1 code implementation28 Jul 2022 Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu

In our model, reasoning is performed by an RNN-based iterative memory neural network with a gated attention mechanism.
