Search Results for author: Tim Hartill

Found 6 papers, 2 papers with code

Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?

no code implementations • 21 Nov 2023 • Tim Hartill, Joshua Bensemann, Michael Witbrock, Patricia J. Riddle

We train two Language Models in a multitask fashion; the second model differs from the first only in that its training regime adds two datasets designed to impart simple numerical reasoning strategies, of a sort known to improve performance on some of our evaluation datasets but not on others (a schematic sketch of this setup follows this entry).

Tasks: Question Answering, Semantic Similarity (+1 more)
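The two-model setup above can be pictured with a short sketch. This is not the authors' code: the dataset names and the mixing scheme are illustrative assumptions, showing only how the second model's multitask training mixture extends the first by two extra numerical-reasoning datasets.

```python
import random

# Shared multitask training data for both models (dummy examples;
# the paper trains on question-answering datasets here).
base_mixture = {
    "qa_dataset_a": ["question a1", "question a2"],
    "qa_dataset_b": ["question b1", "question b2"],
}

# Hypothetical extras added for Model 2 only: datasets meant to
# impart simple numerical reasoning strategies.
numeric_extras = {
    "synthetic_arithmetic": ["What is 3 + 4?", "What is 12 - 5?"],
    "synthetic_counting": ["How many times does 'a' appear in 'a b a'?"],
}

def build_mixture(datasets, seed=0):
    """Flatten a dict of datasets into one shuffled multitask stream."""
    rng = random.Random(seed)
    stream = [ex for examples in datasets.values() for ex in examples]
    rng.shuffle(stream)
    return stream

model_1_stream = build_mixture(base_mixture)                        # baseline
model_2_stream = build_mixture({**base_mixture, **numeric_extras})  # +2 datasets
```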

Answering Unseen Questions With Smaller Language Models Using Rationale Generation and Dense Retrieval

no code implementations • 9 Aug 2023 • Tim Hartill, Diana Benavides-Prado, Michael Witbrock, Patricia J. Riddle

When provided with sufficient explanatory context, smaller Language Models have been shown to exhibit strong reasoning ability on challenging short-answer question-answering tasks in which the questions are unseen during training.

Tasks: Language Modelling, Question Answering (+2 more)

Teaching Smaller Language Models To Generalise To Unseen Compositional Questions

1 code implementation • 2 Aug 2023 • Tim Hartill, Neset Tan, Michael Witbrock, Patricia J. Riddle

We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training.

Tasks: Information Retrieval, Language Modelling (+3 more)

Input-length-shortening and text generation via attention values

no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock

Because the attention mechanism's computational cost grows quadratically with input length, transformer models usually have an input-length limit imposed by hardware constraints (see the sketch below).

Tasks: Conditional Text Generation, Text Classification (+1 more)
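As a rough illustration of the quadratic cost mentioned above (our arithmetic, not the paper's), the attention score matrices alone grow with the square of the sequence length; the head count and float width below are assumed values:

```python
# Memory for one layer's attention score matrices, per example.
# num_heads=12 and 4-byte floats are assumptions for illustration.
def attention_scores_bytes(seq_len, num_heads=12, bytes_per_float=4):
    return num_heads * seq_len * seq_len * bytes_per_float

for n in (512, 2048, 8192):
    mib = attention_scores_bytes(n) / 2**20
    print(f"seq_len={n:5d}: ~{mib:9.1f} MiB per layer per example")
```

Quadrupling the input length multiplies this term by sixteen, which is why fixed input-length caps appear in practice.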

Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation

1 code implementation • 28 Jul 2022 • Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu

In our model, reasoning is performed by an iterative memory neural network built on an RNN with a gated attention mechanism.
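A minimal PyTorch sketch of the kind of iterative, gated-attention memory step the abstract describes is given below; the dimensions, gating placement, and choice of a GRU cell are our assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class GatedAttentionStep(nn.Module):
    """One reasoning hop: attend over memory, gate the readout, update state."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Linear(dim * 2, 1)    # scores each memory slot vs. state
        self.gate = nn.Linear(dim * 2, dim)  # gates the attended readout
        self.rnn = nn.GRUCell(dim, dim)      # RNN update of the reasoning state

    def forward(self, state, memory):
        # state: (batch, dim); memory: (batch, slots, dim)
        expanded = state.unsqueeze(1).expand_as(memory)
        scores = self.attn(torch.cat([memory, expanded], dim=-1))
        weights = torch.softmax(scores, dim=1)
        read = (weights * memory).sum(dim=1)           # attended readout
        g = torch.sigmoid(self.gate(torch.cat([read, state], dim=-1)))
        return self.rnn(g * read, state)               # gated state update

step = GatedAttentionStep(dim=64)
state = torch.zeros(2, 64)
memory = torch.randn(2, 10, 64)
for _ in range(3):  # iterate the step for multi-step deduction
    state = step(state, memory)
```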

Relating Blindsight and AI: A Review

no code implementations9 Dec 2021 Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock

If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks.
