Search Results for author: Marek Grzes

Found 10 papers, 5 papers with code

Bits of Grass: Does GPT already know how to write like Whitman?

no code implementations · 10 May 2023 · Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, Aisha Khatun

This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors using zero-shot and many-shot prompts (which use the maximum context length of 8192 tokens).
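
A minimal sketch of how such a many-shot, author-style prompt might be assembled with the OpenAI Python client; the model name, example excerpts, helper names and prompt wording are illustrative assumptions, not the paper's exact setup:

    # Minimal sketch: many-shot prompting for author-style poetry.
    # Model name, example poems, and prompt wording are illustrative only,
    # not the exact configuration used in the paper.
    from openai import OpenAI  # assumes the openai>=1.0 Python client

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    example_poems = [
        "I celebrate myself, and sing myself, ...",      # placeholder excerpts;
        "O Captain! my Captain! our fearful trip ...",   # real shots would be full poems
    ]

    # Pack as many example poems into the prompt as the context window allows.
    shots = "\n\n".join(f"Example poem:\n{p}" for p in example_poems)
    prompt = (
        "Here are poems by Walt Whitman.\n\n"
        f"{shots}\n\n"
        "Write a new poem in the same style."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(response.choices[0].message.content)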

How good are variational autoencoders at transfer learning?

1 code implementation · 21 Apr 2023 · Lisa Bonheme, Marek Grzes

Variational autoencoders (VAEs) are used for transfer learning across various research domains such as music generation or medical image analysis.

Music Generation · Transfer Learning

Crowd Score: A Method for the Evaluation of Jokes using Large Language Model AI Voters as Judges

1 code implementation · 21 Dec 2022 · Fabricio Goes, Zisen Zhou, Piotr Sawicki, Marek Grzes, Daniel G. Brown

We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating.

Language Modelling · Large Language Model
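
A hedged sketch of the idea described above, a crowd of LLM "voters" with different humour-type personas judging each joke and their verdicts being averaged; the prompts, model name and simple fraction-of-yes aggregation are assumptions for illustration, not the paper's exact Crowd Score protocol:

    # Sketch: LLM voters with different humour-type personas judge a joke.
    # Prompts, model name, and the fraction-of-yes aggregation are illustrative
    # assumptions, not the exact Crowd Score method.
    from openai import OpenAI

    client = OpenAI()
    HUMOUR_TYPES = ["affiliative", "self-enhancing", "aggressive", "self-defeating"]

    def vote(joke: str, humour_type: str) -> bool:
        """Ask one AI voter whether the joke is funny."""
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": f"You are a judge whose humour style is {humour_type}."},
                {"role": "user",
                 "content": f"Is this joke funny? Answer YES or NO.\n\n{joke}"},
            ],
        ).choices[0].message.content
        return "YES" in reply.upper()

    def crowd_score(joke: str) -> float:
        """Fraction of the voter crowd that found the joke funny."""
        votes = [vote(joke, h) for h in HUMOUR_TYPES]
        return sum(votes) / len(votes)

    print(crowd_score("I told my computer a joke about UDP, but it didn't get it."))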

FONDUE: an algorithm to find the optimal dimensionality of the latent representations of variational autoencoders

1 code implementation · 26 Sep 2022 · Lisa Bonheme, Marek Grzes

We show that the discrepancies between the IDE of the mean and sampled representations of a VAE after only a few steps of training reveal the presence of passive variables in the latent space, which, in well-behaved VAEs, indicates a superfluous number of dimensions.
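
A minimal sketch of how passive latent dimensions might be flagged from a trained encoder's outputs; the variance-based heuristic and threshold below are assumptions used for illustration, whereas FONDUE itself works from intrinsic dimension estimates (IDE) of the mean and sampled representations:

    # Sketch: flag "passive" latent dimensions of a VAE encoder. A passive
    # variable carries almost no data-dependent signal: its mean barely varies
    # across the dataset while its posterior variance stays close to the prior's.
    # The 0.1 threshold is an illustrative assumption, not FONDUE's criterion.
    import numpy as np

    def passive_dimensions(mu: np.ndarray, logvar: np.ndarray, thresh: float = 0.1):
        """mu, logvar: (n_samples, n_latents) encoder outputs over a dataset."""
        var_of_means = mu.var(axis=0)                      # spread of the means
        mean_posterior_var = np.exp(logvar).mean(axis=0)   # average sampling noise
        return np.where((var_of_means < thresh) & (mean_posterior_var > 1 - thresh))[0]

    # Synthetic encoder outputs: the last two of five dimensions are passive.
    rng = np.random.default_rng(0)
    mu = np.hstack([rng.normal(size=(1000, 3)), rng.normal(scale=0.01, size=(1000, 2))])
    logvar = np.hstack([np.full((1000, 3), -2.0), np.full((1000, 2), 0.0)])
    print(passive_dimensions(mu, logvar))  # -> [3 4]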

How do Variational Autoencoders Learn? Insights from Representational Similarity

1 code implementation · 17 May 2022 · Lisa Bonheme, Marek Grzes

The ability of Variational Autoencoders (VAEs) to learn disentangled representations has made them popular for practical applications.

Be More Active! Understanding the Differences between Mean and Sampled Representations of Variational Autoencoders

1 code implementation · 26 Sep 2021 · Lisa Bonheme, Marek Grzes

However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterparts, on which disentanglement is usually measured.

Disentanglement
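
A small sketch of the kind of comparison the excerpt describes: measuring how correlated the dimensions of a VAE's mean representation are versus those of its sampled representation. The synthetic encoder outputs stand in for a real trained encoder and are assumptions for illustration:

    # Sketch: compare correlation between dimensions of the mean representation
    # and the sampled representation. Synthetic outputs replace a real encoder.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5000, 4

    # Pretend the encoder produces correlated means and near-prior variances.
    cov = np.full((d, d), 0.8) + 0.2 * np.eye(d)
    mu = rng.multivariate_normal(np.zeros(d), cov, size=n)
    sigma = np.ones((n, d))                              # posterior std near the prior

    z_sampled = mu + sigma * rng.normal(size=(n, d))     # reparameterised sample

    def mean_abs_offdiag_corr(x: np.ndarray) -> float:
        c = np.corrcoef(x, rowvar=False)
        return np.abs(c[~np.eye(d, dtype=bool)]).mean()

    print("mean rep. correlation:   ", mean_abs_offdiag_corr(mu))         # high
    print("sampled rep. correlation:", mean_abs_offdiag_corr(z_sampled))  # diluted by noise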

Relating RNN Layers with the Spectral WFA Ranks in Sequence Modelling

no code implementations · WS 2019 · Farhana Ferdousi Liza, Marek Grzes

We analyse Recurrent Neural Networks (RNNs) to understand the significance of multiple LSTM layers.

Reinforcement Learning using Augmented Neural Networks

no code implementations · 20 Jun 2018 · Jack Shannon, Marek Grzes

Neural networks allow Q-learning reinforcement learning agents such as deep Q-networks (DQN) to approximate complex mappings from state spaces to value functions.

Q-Learning · reinforcement-learning +1
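
A minimal sketch of the mapping the excerpt describes, a small neural network approximating Q(s, a) for a Q-learning agent; written against PyTorch, with the architecture, hyperparameters and dummy transition as placeholders rather than the paper's augmented-network setup:

    # Sketch: a neural network approximating Q(s, a) for Q-learning, in PyTorch.
    # Network size, hyperparameters, and the dummy transition are placeholders.
    import torch
    import torch.nn as nn

    state_dim, n_actions, gamma = 4, 2, 0.99

    q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    def q_learning_step(s, a, r, s_next, done):
        """One temporal-difference update on a single transition."""
        q_sa = q_net(s)[a]                               # Q(s, a) from the network
        with torch.no_grad():
            target = r + gamma * q_net(s_next).max() * (1.0 - done)
        loss = (q_sa - target) ** 2                      # squared TD error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy transition just to show the call.
    s, s_next = torch.randn(state_dim), torch.randn(state_dim)
    print(q_learning_step(s, a=0, r=1.0, s_next=s_next, done=0.0))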

Improving Language Modelling with Noise-contrastive estimation

no code implementations · 22 Sep 2017 · Farhana Ferdousi Liza, Marek Grzes

In this paper, we showed that NCE can be a successful approach in neural language modelling when the hyperparameters of a neural network are tuned appropriately.

Language Modelling · Machine Translation +1
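
A compact sketch of the noise-contrastive estimation objective for a neural language model: the softmax over the full vocabulary is replaced by a binary classification of the observed next word against k words drawn from a noise distribution. The toy scorer and the uniform noise distribution below are assumptions, not the paper's model or tuning:

    # Sketch of the NCE objective for neural language modelling. The toy scorer
    # and the uniform noise distribution are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    vocab_size, hidden_dim, k = 10_000, 128, 25

    # Toy scorer: s(w, h) = output_embedding[w] . h + bias[w]
    out_emb = torch.nn.Embedding(vocab_size, hidden_dim)
    out_bias = torch.nn.Parameter(torch.zeros(vocab_size))
    noise_probs = torch.full((vocab_size,), 1.0 / vocab_size)   # uniform noise here

    def nce_loss(hidden, target):
        """hidden: (batch, hidden_dim) context vectors; target: (batch,) word ids."""
        batch = target.shape[0]
        noise = torch.multinomial(noise_probs, batch * k, replacement=True).view(batch, k)

        def log_score(words):                  # unnormalised log-model score
            return (out_emb(words) * hidden.unsqueeze(1)).sum(-1) + out_bias[words]

        log_pn_target = torch.log(k * noise_probs[target])      # (batch,)
        log_pn_noise = torch.log(k * noise_probs[noise])         # (batch, k)

        pos = F.logsigmoid(log_score(target.unsqueeze(1)).squeeze(1) - log_pn_target)
        neg = F.logsigmoid(-(log_score(noise) - log_pn_noise))
        return -(pos + neg.sum(-1)).mean()

    hidden = torch.randn(32, hidden_dim)
    target = torch.randint(0, vocab_size, (32,))
    print(nce_loss(hidden, target).item())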
