We study three general multi-task learning (MTL) approaches on 11 sequence
tagging tasks. Our extensive empirical results show that in about 50% of the
cases, jointly learning all 11 tasks improves upon either independent or
pairwise learning of the tasks...
We also show that pairwise MTL can reveal which tasks benefit others and which
tasks benefit from being learned jointly. In particular, we identify tasks that
consistently benefit others as well as tasks that are consistently harmed by
others. Interestingly, one
of our MTL approaches yields embeddings of the tasks that reveal the natural
clustering of semantic and syntactic tasks. Our findings open the door to
further applications of MTL in NLP.