1 code implementation • 1 Feb 2024 • Yingji Zhang, Danilo S. Carvalho, Marco Valentino, Ian Pratt-Hartmann, André Freitas
Achieving precise semantic control over the latent spaces of Variational AutoEncoders (VAEs) holds significant value for downstream tasks in NLP, as the underlying generative mechanisms could be better localised, explained, and improved upon.
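A minimal sketch of what such control can look like in practice: latent-space arithmetic on a toy VAE, where a sentence's code is shifted along a direction assumed to encode a single semantic attribute. All names, dimensions, and the attribute direction below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class TinySentenceVAE(nn.Module):
    """Toy VAE over bag-of-words vectors; for illustration only."""
    def __init__(self, vocab_size=1000, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(vocab_size, 2 * latent_dim)  # -> (mu, logvar)
        self.dec = nn.Linear(latent_dim, vocab_size)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def reparameterise(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        return self.dec(z)

# Semantic control as latent arithmetic: shift a sentence's code along
# a direction that (by assumption) encodes one semantic attribute.
vae = TinySentenceVAE()
x = torch.rand(1, 1000)                    # stand-in sentence encoding
mu, logvar = vae.encode(x)
z = vae.reparameterise(mu, logvar)
attribute_direction = torch.randn(1, 32)   # hypothetical learned direction
z_edited = z + 2.0 * attribute_direction   # move only along that attribute
logits = vae.decode(z_edited)              # decode the edited sentence code
```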
no code implementations • 20 Dec 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
Deep generative neural networks, such as Variational AutoEncoders (VAEs), offer an opportunity to better understand and control language models from the perspective of sentence-level latent spaces.
1 code implementation • 14 Nov 2023 • Yingji Zhang, Marco Valentino, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
The injection of syntactic information in Variational AutoEncoders (VAEs) has been shown to result in an overall improvement in performance and generalisation.
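One common way to inject syntax, sketched below under stated assumptions: embed a parallel sequence of syntactic tags (e.g. POS or linearised constituency labels) and concatenate it with the token embeddings before inferring the latent Gaussian. This is a hypothetical encoder, not necessarily the mechanism used in the paper.

```python
import torch
import torch.nn as nn

class SyntaxInjectedEncoder(nn.Module):
    """Hypothetical VAE encoder that conditions on syntax tags."""
    def __init__(self, vocab=1000, n_tags=50, d=64, latent_dim=32):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)
        self.syn_emb = nn.Embedding(n_tags, d)
        self.to_gauss = nn.Linear(2 * d, 2 * latent_dim)

    def forward(self, tokens, tags):
        # Concatenate token and syntax embeddings position-wise.
        h = torch.cat([self.tok_emb(tokens), self.syn_emb(tags)], dim=-1)
        h = h.mean(dim=1)  # simple mean pooling over the sequence
        mu, logvar = self.to_gauss(h).chunk(2, dim=-1)
        return mu, logvar

enc = SyntaxInjectedEncoder()
tokens = torch.randint(0, 1000, (2, 12))  # batch of token ids
tags = torch.randint(0, 50, (2, 12))      # aligned syntax-tag ids
mu, logvar = enc(tokens, tags)
```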
no code implementations • 7 Aug 2023 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
Existing approaches employ the T5 model to directly generate the tree, which can explain how the answer is inferred.
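As a rough illustration of that setup, the following sketch uses the Hugging Face transformers API to have T5 emit a tree as a linearised (bracketed) string; the prompt format and the checkpoint are placeholders, not the paper's.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Linearised-tree generation with a seq2seq model: the tree is emitted
# as a bracketed string and parsed back afterwards. The prompt scheme
# below is a made-up placeholder.
tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

prompt = "explain: question: ... answer: ... facts: ..."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
linearised_tree = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(linearised_tree)
```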
no code implementations • 2 May 2023 • Yingji Zhang, Danilo S. Carvalho, André Freitas
Disentangled latent spaces usually have better semantic separability and geometrical properties, which lead to better interpretability and more controllable data generation.
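A standard route to such disentanglement, shown here as an illustrative loss rather than the paper's objective, is the beta-VAE: upweighting the KL term of the ELBO pressures the posterior towards the factorised prior (Higgins et al., 2017).

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_logits, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta-weighted KL.

    With beta > 1, the KL term pushes the approximate posterior towards
    the factorised standard-normal prior, one standard pressure towards
    disentangled latents. Illustrative, not the paper's loss.
    """
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```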
no code implementations • 12 Oct 2022 • Yingji Zhang, Danilo S. Carvalho, Ian Pratt-Hartmann, André Freitas
Disentangling the encodings of neural models is fundamental to improving interpretability, semantic control, and the understanding of downstream task performance in Natural Language Processing.
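One simple, assumption-laden way to quantify how separable a semantic attribute is in an encoding space: fit a linear probe from latent codes to attribute labels. The data below is synthetic; in practice the codes would come from the trained encoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# High linear-probe accuracy for an attribute, especially when it is
# tied to few latent dimensions, is evidence of semantic separability.
z = np.random.randn(500, 32)          # latent codes (synthetic stand-in)
y = (z[:, 0] > 0).astype(int)         # attribute tied to dimension 0
probe = LogisticRegression(max_iter=1000).fit(z, y)
print("linear probe accuracy:", probe.score(z, y))
```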
no code implementations • 22 Sep 2022 • Danilo S. Carvalho, Giangiacomo Mercatali, Yingji Zhang, André Freitas
Disentangling the encodings of neural models is fundamental to improving interpretability, semantic control, and downstream task performance in Natural Language Processing.