no code implementations • 10 May 2023 • Piotr Sawicki, Marek Grzes, Fabricio Goes, Dan Brown, Max Peeperkorn, Aisha Khatun
This study examines the ability of GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors using zero-shot and many-shot prompts (which use the maximum context length of 8192 tokens).
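A many-shot prompt of the kind described here can be sketched as packing example poems into a fixed token budget. This is only an illustrative sketch, not the authors' code; `build_prompt` and the whitespace token count are assumptions (real tokenisers count tokens differently).

```python
def build_prompt(instruction, examples, max_tokens=8192):
    """Hypothetical many-shot prompt builder: append example poems
    until the token budget is exhausted (whitespace split is a rough
    proxy for a real tokeniser)."""
    parts = [instruction]
    used = len(instruction.split())
    for poem in examples:
        n = len(poem.split())
        if used + n > max_tokens:
            break  # next example would exceed the context window
        parts.append(poem)
        used += n
    return "\n\n".join(parts)
```

A zero-shot prompt is then just `build_prompt(instruction, [])`, i.e. the instruction with no examples.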
1 code implementation • 21 Apr 2023 • Lisa Bonheme, Marek Grzes
Variational autoencoders (VAEs) are used for transfer learning across various research domains such as music generation or medical image analysis.
1 code implementation • 21 Dec 2022 • Fabricio Goes, Zisen Zhou, Piotr Sawicki, Marek Grzes, Daniel G. Brown
We tested our methodology on 52 jokes in a crowd of four AI voters with different humour types: affiliative, self-enhancing, aggressive and self-defeating.
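Aggregating the verdicts of a small crowd of AI voters can be sketched as a simple majority vote. This is a minimal illustration, assuming binary funny/not-funny verdicts; the voter names and labels are hypothetical, not the paper's actual protocol.

```python
from collections import Counter

def crowd_verdict(votes):
    """Majority vote over per-voter verdicts, e.g. one vote per
    humour type (affiliative, self-enhancing, aggressive,
    self-defeating)."""
    counts = Counter(votes)
    return counts.most_common(1)[0][0]
```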
1 code implementation • 26 Sep 2022 • Lisa Bonheme, Marek Grzes
We show that the discrepancies between the IDE of the mean and sampled representations of a VAE after only a few steps of training reveal the presence of passive variables in the latent space, which, in well-behaved VAEs, indicates a superfluous number of dimensions.
1 code implementation • 17 May 2022 • Lisa Bonheme, Marek Grzes
The ability of Variational Autoencoders (VAEs) to learn disentangled representations has made them popular for practical applications.
1 code implementation • 26 Sep 2021 • Lisa Bonheme, Marek Grzes
However, their mean representations, which are generally used for downstream tasks, have recently been shown to be more correlated than their sampled counterpart, on which disentanglement is usually measured.
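The distinction between mean and sampled representations can be made concrete with a toy linear VAE encoder and the reparameterisation trick. This is a minimal NumPy sketch under assumed weights and dimensions, not the paper's model: the mean representation is the encoder output `mu`, while the sampled representation `z` adds noise scaled by the learned variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Hypothetical linear VAE encoder: returns mean and log-variance
    of the approximate posterior q(z|x)."""
    return x @ W_mu, x @ W_logvar

def sample(mu, logvar, rng):
    """Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy data and weights (illustrative only): 16-d inputs, 4-d latents.
x = rng.standard_normal((128, 16))
W_mu = rng.standard_normal((16, 4))
W_logvar = rng.standard_normal((16, 4)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)   # mean representation
z = sample(mu, logvar, rng)              # sampled representation
```

Downstream tasks typically use `mu`; disentanglement metrics are usually computed on `z`, which is the gap the paper examines.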
no code implementations • SEMEVAL 2020 • Lisa Bonheme, Marek Grzes
This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition.
no code implementations • WS 2019 • Farhana Ferdousi Liza, Marek Grzes
We analyse Recurrent Neural Networks (RNNs) to understand the significance of multiple LSTM layers.
no code implementations • 20 Jun 2018 • Jack Shannon, Marek Grzes
Neural networks allow Q-learning reinforcement learning agents such as deep Q-networks (DQN) to approximate complex mappings from state spaces to value functions.
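The state-to-value mapping a DQN approximates can be sketched with a tiny two-layer network and epsilon-greedy action selection. This is a hedged toy sketch with random, untrained weights and assumed dimensions (4-d state, 2 actions), not a full DQN with replay or target networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-layer Q-network: maps a state to one Q-value per action.
W1 = rng.standard_normal((4, 32)) * 0.1   # 4-dim state -> 32 hidden units
W2 = rng.standard_normal((32, 2)) * 0.1   # 32 hidden units -> 2 actions

def q_values(state):
    h = np.maximum(0.0, state @ W1)       # ReLU hidden layer
    return h @ W2                          # Q(s, a) for each action a

def act(state, epsilon=0.1):
    """Epsilon-greedy policy over the network's Q-value estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(2))       # explore: random action
    return int(np.argmax(q_values(state)))  # exploit: greedy action

state = rng.standard_normal(4)
action = act(state, epsilon=0.0)
```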
no code implementations • 22 Sep 2017 • Farhana Ferdousi Liza, Marek Grzes
In this paper, we showed that NCE can be a successful approach in neural language modelling when the hyperparameters of a neural network are tuned appropriately.
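The core of NCE for language modelling is a binary classification between the observed word and k words drawn from a noise distribution. The sketch below implements the standard NCE objective on unnormalised model scores; the toy numbers and function signature are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(score_target, scores_noise, log_q_target, log_q_noise, k):
    """NCE loss: treat the target word as a positive example and k
    noise-sampled words as negatives. `score_*` are unnormalised model
    log-scores; `log_q_*` are log-probabilities under the noise
    distribution q."""
    pos = np.log(sigmoid(score_target - np.log(k) - log_q_target))
    neg = np.log(1.0 - sigmoid(scores_noise - np.log(k) - log_q_noise))
    return -(pos + neg.sum())

# Toy numbers (illustrative only): k = 5 noise samples, uniform-ish q.
k = 5
loss = nce_loss(2.0, rng.standard_normal(k), np.log(0.01),
                np.log(np.full(k, 0.01)), k)
```

This avoids computing the full softmax partition function over the vocabulary, which is the usual motivation for NCE in neural language models; the cited result is that its success is sensitive to hyperparameter tuning.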