Search Results for author: Yingji Zhang

Found 7 papers, 2 papers with code

Improving Semantic Control in Discrete Latent Spaces with Transformer Quantized Variational Autoencoders

1 code implementation • 1 Feb 2024 • Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, Andre Freitas

Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP as the underlying generative mechanisms could be better localised, explained and improved upon.
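For readers unfamiliar with the latent spaces this abstract refers to: in a VAE, the encoder maps an input to the parameters of a Gaussian posterior, and a latent vector is drawn via the reparameterization trick. The sketch below is a generic, minimal illustration of that mechanism only, not the paper's method; the linear "encoder" and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Toy linear "encoder": maps input features to the mean and
    # log-variance of a Gaussian posterior over the latent space.
    # (Hypothetical stand-in for a real transformer encoder.)
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    # Sampling stays differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical sizes: 4 inputs, 8 features each, a 2-dim latent space.
x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 2))
W_logvar = rng.standard_normal((8, 2)) * 0.01
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)  # one latent vector per input
```

"Semantic control" in the abstract refers to structuring this latent space so that moving `z` along known directions changes specific semantic properties of the decoded output.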

LlaMaVAE: Guiding Large Language Model Generation via Continuous Latent Sentence Spaces

no code implementations • 20 Dec 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces.

Definition Modelling · Language Modelling · +4

Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders

1 code implementation • 14 Nov 2023 • Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

The injection of syntactic information into Variational AutoEncoders (VAEs) has been shown to result in an overall improvement in performance and generalisation.

Language Modelling · Multi-Task Learning

Learning Disentangled Semantic Spaces of Explanations via Invertible Neural Networks

no code implementations • 2 May 2023 • Yingji Zhang, Danilo S. Carvalho, André Freitas

Disentangled latent spaces usually exhibit better semantic separability and geometrical properties, leading to better interpretability and more controllable data generation.

Disentanglement · Sentence · +1

Quasi-symbolic explanatory NLI via disentanglement: A geometrical examination

no code implementations • 12 Oct 2022 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas

Disentangling the encodings of neural models is a fundamental aspect for improving interpretability, semantic control, and understanding downstream task performance in Natural Language Processing.

Disentanglement · Explanation Generation

Learning Disentangled Representations for Natural Language Definitions

no code implementations • 22 Sep 2022 • Danilo S. Carvalho, Giangiacomo Mercatali, Yingji Zhang, Andre Freitas

Disentangling the encodings of neural models is a fundamental aspect for improving interpretability, semantic control and downstream task performance in Natural Language Processing.

Disentanglement · Sentence
