Modelling Latent Skills for Multitask Language Generation

We present a generative model for multitask conditional language generation. Our guiding hypothesis is that a shared set of latent skills underlies many disparate language generation tasks, and that explicitly modelling these skills in a task embedding space can help with both positive transfer across tasks and efficient adaptation to new tasks...
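
The abstract only describes the approach at a high level. As a rough illustration of the core idea, a shared inventory of latent skill embeddings that a task-specific mixture selects from to condition generation, here is a minimal PyTorch sketch. All class names, layer sizes, and the mixture-over-skills parameterisation are illustrative assumptions, not the paper's actual model.

import torch
import torch.nn as nn

class LatentSkillConditioner(nn.Module):
    # Illustrative only: maps a task ID to a soft mixture over a shared
    # inventory of latent "skill" embeddings. Sizes and the mixture
    # parameterisation are assumptions, not the paper's architecture.
    def __init__(self, num_tasks, num_skills=16, dim=512):
        super().__init__()
        self.skill_embeddings = nn.Parameter(torch.randn(num_skills, dim))
        # Per-task logits over skills; sharing skills across tasks is what
        # would allow positive transfer in this sketch.
        self.task_skill_logits = nn.Parameter(torch.zeros(num_tasks, num_skills))

    def forward(self, task_id):
        weights = torch.softmax(self.task_skill_logits[task_id], dim=-1)
        return weights @ self.skill_embeddings  # (batch, dim) task embedding

class SkillConditionedGenerator(nn.Module):
    # Toy generator: the task embedding is prepended to the token sequence
    # so generation is conditioned on the selected skill mixture.
    def __init__(self, vocab_size, num_tasks, dim=512):
        super().__init__()
        self.conditioner = LatentSkillConditioner(num_tasks, dim=dim)
        self.token_embeddings = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, task_id, tokens):
        task_emb = self.conditioner(task_id).unsqueeze(1)            # (B, 1, D)
        x = torch.cat([task_emb, self.token_embeddings(tokens)], dim=1)
        return self.lm_head(self.backbone(x))                        # (B, T+1, V)

model = SkillConditionedGenerator(vocab_size=1000, num_tasks=8)
logits = model(task_id=torch.tensor([3]), tokens=torch.randint(0, 1000, (1, 12)))
print(logits.shape)  # torch.Size([1, 13, 1000])

Under this sketch, adapting to a new task would amount to learning a new row of skill logits while the shared skill embeddings and backbone stay fixed, which is one way to read the abstract's claim about efficient adaptation; the paper's actual mechanism may differ.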
