Search Results for author: Ian Pratt-Hartmann

Found 7 papers, 3 papers with code

Improving Semantic Control in Discrete Latent Spaces with Transformer Quantized Variational Autoencoders

1 code implementation • 1 Feb 2024 • Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, André Freitas

Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP, as the underlying generative mechanisms could be better localised, explained, and improved upon.

LlaMaVAE: Guiding Large Language Model Generation via Continuous Latent Sentence Spaces

no code implementations • 20 Dec 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces.

Tasks: Definition Modelling, Language Modelling, +4

Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders

1 code implementation • 14 Nov 2023 • Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

The injection of syntactic information into Variational AutoEncoders (VAEs) has been shown to result in an overall improvement in performance and generalisation.

Tasks: Language Modelling, Multi-Task Learning

Can Transformers Reason in Fragments of Natural Language?

1 code implementation • 10 Nov 2022 • Viktor Schlegel, Kamen V. Pavlov, Ian Pratt-Hartmann

State-of-the-art deep-learning-based approaches to Natural Language Processing (NLP) are credited with various capabilities that involve reasoning with natural language texts.


Quasi-symbolic explanatory NLI via disentanglement: A geometrical examination

no code implementations • 12 Oct 2022 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

Disentangling the encodings of neural models is fundamental to improving interpretability, semantic control, and our understanding of downstream task performance in Natural Language Processing.

Tasks: Disentanglement, Explanation Generation

Do Natural Language Explanations Represent Valid Logical Arguments? Verifying Entailment in Explainable NLI Gold Standards

no code implementations • IWCS (ACL) 2021 • Marco Valentino, Ian Pratt-Hartmann, André Freitas

An emerging line of research in Explainable NLP is the creation of datasets enriched with human-annotated explanations and rationales, used to build and evaluate models with step-wise inference and explanation generation capabilities.

Tasks: Explanation Generation
