Search Results for author: Michael Witbrock

Found 46 papers, 18 papers with code

Convolutional and Recurrent Neural Networks for Spoken Emotion Recognition

no code implementations • ALTA 2020 • Aaron Keesing, Ian Watson, Michael Witbrock

We test four models proposed in the speech emotion recognition (SER) literature on 15 public and academic licensed datasets in speaker-independent cross-validation.

Speech Emotion Recognition

Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering

1 code implementation • NAACL (DLG4NLP) 2022 • Zhenyun Deng, Yonghua Zhu, Qianqian Qi, Michael Witbrock, Patricia Riddle

Current graph-neural-network-based (GNN-based) approaches to multi-hop questions integrate clues from scattered paragraphs in an entity graph, achieving implicit reasoning by synchronous update of graph node representations using information from neighbours; this is poorly suited for explaining how clues are passed through the graph in hops.

Multi-hop Question Answering • Question Answering +1

Can Large Language Models Learn Independent Causal Mechanisms?

no code implementations • 4 Feb 2024 • Gaël Gendron, Bao Trung Nguyen, Alex Yuxuan Peng, Michael Witbrock, Gillian Dobbie

We show that such causal constraints can improve out-of-distribution performance on abstract and causal reasoning tasks.

Language Modelling

Do Smaller Language Models Answer Contextualised Questions Through Memorisation Or Generalisation?

no code implementations • 21 Nov 2023 • Tim Hartill, Joshua Bensemann, Michael Witbrock, Patricia J. Riddle

We train two Language Models in a multitask fashion; the second model differs from the first only in that its training regime includes two additional datasets, designed to impart simple numerical reasoning strategies of a sort known to improve performance on some of our evaluation datasets but not on others.

Question Answering • Semantic Similarity +1

Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models

1 code implementation • 19 Sep 2023 • Qiming Bao, Juho Leinonen, Alex Yuxuan Peng, Wanjun Zhong, Gaël Gendron, Timothy Pistotti, Alice Huang, Paul Denny, Michael Witbrock, Jiamou Liu

When learnersourcing multiple-choice questions, creating explanations for the solution of a question is a crucial step; it helps other students understand the solution and promotes a deeper understanding of related concepts.

Explanation Generation • Language Modelling +2

Answering Unseen Questions With Smaller Language Models Using Rationale Generation and Dense Retrieval

no code implementations • 9 Aug 2023 • Tim Hartill, Diana Benavides-Prado, Michael Witbrock, Patricia J. Riddle

When provided with sufficient explanatory context, smaller Language Models have been shown to exhibit strong reasoning ability on challenging short-answer question-answering tasks where the questions are unseen in training.

Language Modelling • Question Answering +2

Teaching Smaller Language Models To Generalise To Unseen Compositional Questions

1 code implementation • 2 Aug 2023 • Tim Hartill, Neset Tan, Michael Witbrock, Patricia J. Riddle

We equip a smaller Language Model to generalise to answering challenging compositional questions that have not been seen in training.

Information Retrieval • Language Modelling +3

Meerkat Behaviour Recognition Dataset

1 code implementation • 20 Jun 2023 • Mitchell Rogers, Gaël Gendron, David Arturo Soriano Valdez, Mihailo Azhar, Yang Chen, Shahrokh Heidari, Caleb Perelini, Padriac O'Leary, Kobe Knowles, Izak Tait, Simon Eyre, Michael Witbrock, Patrice Delmas

Recording animal behaviour is an important step in evaluating the well-being of animals and further understanding the natural world.

Large Language Models Are Not Strong Abstract Reasoners

1 code implementation • 31 May 2023 • Gaël Gendron, Qiming Bao, Michael Witbrock, Gillian Dobbie

We perform extensive evaluations of state-of-the-art LLMs, showing that they currently achieve very limited performance in contrast with other natural language tasks, even when applying techniques that have been shown to improve performance on other NLP tasks.

Common Sense Reasoning • Memorization +1

Neuromodulation Gated Transformer

1 code implementation • 5 May 2023 • Kobe Knowles, Joshua Bensemann, Diana Benavides-Prado, Vithya Yogarajan, Michael Witbrock, Gillian Dobbie, Yang Chen

We introduce a novel architecture, the Neuromodulation Gated Transformer (NGT), which is a simple implementation of neuromodulation in transformers via a multiplicative effect.

Input-length-shortening and text generation via attention values

no code implementations • 14 Mar 2023 • Neşet Özkan Tan, Alex Yuxuan Peng, Joshua Bensemann, Qiming Bao, Tim Hartill, Mark Gahegan, Michael Witbrock

Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints.

Conditional Text Generation • text-classification +1

Learning Density-Based Correlated Equilibria for Markov Games

no code implementations • 16 Feb 2023 • Libo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock, Jiamou Liu

In real-world multi-agent systems, in addition to being in an equilibrium, agents' policies are often expected to meet requirements with respect to safety and fairness.


Disentanglement of Latent Representations via Causal Interventions

1 code implementation • 2 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie

Following this assumption, we introduce a new method for disentanglement inspired by causal dynamics that combines causality theory with vector-quantized variational autoencoders.

Disentanglement • Retrieval

A Survey of Methods, Challenges and Perspectives in Causality

no code implementations • 1 Feb 2023 • Gaël Gendron, Michael Witbrock, Gillian Dobbie

Deep Learning models have shown success in a large variety of tasks by extracting correlation patterns from high-dimensional data but still struggle when generalizing out of their initial distribution.

Rapid Connectionist Speaker Adaptation

no code implementations • 15 Nov 2022 • Michael Witbrock, Patrick Haffner

We present SVCnet, a system for modelling speaker variability.

Prompt-based Conservation Learning for Multi-hop Question Answering

no code implementations • COLING 2022 • Zhenyun Deng, Yonghua Zhu, Yang Chen, Qianqian Qi, Michael Witbrock, Patricia Riddle

In this paper, we propose the Prompt-based Conservation Learning (PCL) framework for multi-hop QA, which acquires new knowledge from multi-hop QA tasks while conserving old knowledge learned on single-hop QA tasks, mitigating forgetting.

Multi-hop Question Answering • Question Answering

Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation

1 code implementation • 28 Jul 2022 • Qiming Bao, Alex Yuxuan Peng, Tim Hartill, Neset Tan, Zhenyun Deng, Michael Witbrock, Jiamou Liu

In our model, reasoning is performed using an iterative memory neural network based on RNN with a gated attention mechanism.

Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering

no code implementations • 16 Jun 2022 • Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle

We then achieve the decomposition of a multi-hop question via segmentation of the corresponding AMR graph based on the required reasoning type.

AMR-to-Text Generation • Multi-hop Question Answering +2

AbductionRules: Training Transformers to Explain Unexpected Inputs

1 code implementation • Findings (ACL) 2022 • Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock

Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability.

Common Sense Reasoning • Logical Reasoning

Semantic Construction Grammar: Bridging the NL / Logic Divide

no code implementations • 10 Dec 2021 • Dave Schneider, Michael Witbrock

In this paper, we discuss Semantic Construction Grammar (SCG), a system developed over the past several years to facilitate translation between natural language and logical representations.


Relating Blindsight and AI: A Review

no code implementations • 9 Dec 2021 • Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock

If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks.

DeepQR: Neural-based Quality Ratings for Learnersourced Multiple-Choice Questions

no code implementations • 19 Nov 2021 • Lin Ni, Qiming Bao, Xiaoxuan Li, Qianqian Qi, Paul Denny, Jim Warren, Michael Witbrock, Jiamou Liu

We propose DeepQR, a novel neural-network model for AQQR that is trained using multiple-choice-question (MCQ) datasets collected from PeerWise, a widely-used learnersourcing platform.

Contrastive Learning • Multiple-choice

Learning to Guide a Saturation-Based Theorem Prover

no code implementations • 7 Jun 2021 • Ibrahim Abdelaziz, Maxwell Crouse, Bassem Makni, Vernon Austil, Cristina Cornelio, Shajith Ikbal, Pavan Kapanipathi, Ndivhuwo Makondo, Kavitha Srinivas, Michael Witbrock, Achille Fokoue

In addition, to the best of our knowledge, TRAIL is the first reinforcement learning-based approach to exceed the performance of a state-of-the-art traditional theorem prover on a standard theorem proving benchmark (solving up to 17% more problems).

Automated Theorem Proving • reinforcement-learning +1

Adversarial Inverse Reinforcement Learning for Mean Field Games

no code implementations • 29 Apr 2021 • Yang Chen, Libo Zhang, Jiamou Liu, Michael Witbrock

However, existing IRL methods for MFGs are powerless to reason about uncertainties in demonstrated behaviours of individual agents.

reinforcement-learning • Reinforcement Learning (RL)

Graph Enhanced Cross-Domain Text-to-SQL Generation

no code implementations • WS 2019 • Siyu Huo, Tengfei Ma, Jie Chen, Maria Chang, Lingfei Wu, Michael Witbrock

Semantic parsing is a fundamental problem in natural language understanding, as it involves the mapping of natural language to structured forms such as executable queries or logic-like knowledge representations.

Natural Language Understanding • Semantic Parsing +3

A Sequential Set Generation Method for Predicting Set-Valued Outputs

no code implementations • 12 Mar 2019 • Tian Gao, Jie Chen, Vijil Chenthamarakshan, Michael Witbrock

Though SSG is sequential in nature, it does not penalize the ordering of the appearance of the set elements and can be applied to a variety of set output problems, such as a set of classification labels or sequences.

General Classification • Multi-Label Classification

From Node Embedding to Graph Embedding: Scalable Global Graph Kernel via Random Features

no code implementations • NIPS 2018 • Lingfei Wu, Ian En-Hsu Yen, Kun Xu, Liang Zhao, Yinglong Xia, Michael Witbrock

Graph kernels are one of the most important methods for graph data analysis and have been successfully applied in diverse applications.

Graph Embedding

Answering Science Exam Questions Using Query Rewriting with Background Knowledge

no code implementations • 15 Sep 2018 • Ryan Musa, Xiaoyan Wang, Achille Fokoue, Nicholas Mattei, Maria Chang, Pavan Kapanipathi, Bassem Makni, Kartik Talamadupula, Michael Witbrock

Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques.

Information Retrieval • Multiple-choice +3

Random Warping Series: A Random Features Method for Time-Series Embedding

1 code implementation • 14 Sep 2018 • Lingfei Wu, Ian En-Hsu Yen, Jin-Feng Yi, Fangli Xu, Qi Lei, Michael Witbrock

The proposed kernel does not suffer from the issue of diagonal dominance while naturally enjoys a \emph{Random Features} (RF) approximation, which reduces the computational complexity of existing DTW-based techniques from quadratic to linear in terms of both the number and the length of time-series.

Clustering • Dynamic Time Warping +2

Image Super-Resolution via Dual-State Recurrent Networks

1 code implementation • CVPR 2018 • Wei Han, Shiyu Chang, Ding Liu, Mo Yu, Michael Witbrock, Thomas S. Huang

Advances in image super-resolution (SR) have recently benefited significantly from rapid developments in deep neural networks.

Image Super-Resolution

Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

4 code implementations • ICLR 2019 • Kun Xu, Lingfei Wu, Zhiguo Wang, Yansong Feng, Michael Witbrock, Vadim Sheinin

Our method first generates the node and graph embeddings using an improved graph-based neural network with a novel aggregation strategy to incorporate edge direction information in the node embeddings.

Graph-to-Sequence • SQL-to-Text +1

D2KE: From Distance to Kernel and Embedding

no code implementations • 14 Feb 2018 • Lingfei Wu, Ian En-Hsu Yen, Fangli Xu, Pradeep Ravikumar, Michael Witbrock

For many machine learning problem settings, particularly with structured inputs such as sequences or sets of objects, a distance measure between inputs can be specified more naturally than a feature representation.

Time Series Analysis

An Implementation of Back-Propagation Learning on GF11, a Large SIMD Parallel Computer

no code implementations • 4 Jan 2018 • Michael Witbrock, Marco Zagha

We describe a neural network simulator for the IBM GF11, an experimental SIMD machine with 566 processors and a peak arithmetic performance of 11 Gigaflops.

Neural Network simulation

Dilated Recurrent Neural Networks

2 code implementations • NeurIPS 2017 • Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, Thomas S. Huang

To provide a theory-based quantification of the architecture's advantages, we introduce a memory capacity measure, the mean recurrent length, which is more suitable for RNNs with long skip connections than existing measures.

Sequential Image Classification
