Search Results for author: Piotr Czapla

Found 4 papers, 4 papers with code

Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction

1 code implementation • 6 Jul 2019 • Bobak Farzin, Piotr Czapla, Jeremy Howard

Our entry into the HAHA 2019 Challenge placed 3rd in the classification task and 2nd in the regression task.

Language Modelling • Regression

Universal Language Model Fine-Tuning with Subword Tokenization for Polish

2 code implementations • 24 Oct 2018 • Piotr Czapla, Jeremy Howard, Marcin Kardas

Universal Language Model Fine-tuning (ULMFiT) [arXiv:1801.06146] is one of the first NLP methods for efficient inductive transfer learning.

Language Modelling • Transfer Learning
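The entry above pairs ULMFiT with subword tokenization, which splits rare words into frequent fragments so a morphologically rich language like Polish gets a compact vocabulary. As a rough illustration of the subword idea, here is a textbook byte-pair-encoding-style merge learner; this is my own sketch (the toy corpus and `learn_bpe_merges` are invented for illustration), not the paper's actual tokenization pipeline.

```python
from collections import Counter

def learn_bpe_merges(words, num_merges):
    """Learn byte-pair-encoding merges from a word-frequency dict.

    `words` maps space-separated symbol sequences to counts,
    e.g. {"l o w": 5}. Each round merges the most frequent
    adjacent symbol pair across the whole corpus.
    """
    merges = []
    vocab = dict(words)
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Rewrite the vocabulary with the pair fused into one symbol.
        vocab = {w.replace(" ".join(best), "".join(best)): f
                 for w, f in vocab.items()}
    return merges, vocab

# Toy corpus (character-level symbols with end counts).
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
merges, vocab = learn_bpe_merges(corpus, 3)
print(merges)  # → [('e', 's'), ('es', 't'), ('l', 'o')]
```

After three merges the learner has built the subword units "es", "est", and "lo"; a real system (the paper uses a SentencePiece-style vocabulary) runs tens of thousands of merges and then tokenizes new text by replaying them.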
