The rise of neural networks, and particularly recurrent neural networks, has
produced significant advances in part-of-speech tagging accuracy. One
characteristic common among these models is the presence of rich initial word
encodings. These encodings are typically composed of a recurrent
character-based representation combined with learned and pre-trained word embeddings.
However, these encodings do not consider context wider than a single word, and
it is only through subsequent recurrent layers that word or sub-word
information interacts. In this paper, we investigate models that use recurrent
neural networks with sentence-level context for initial character- and
word-based representations. In particular, we show that optimal results are
obtained by integrating these context-sensitive representations through
synchronized training with a meta-model that learns to combine their states. We
present results on part-of-speech and morphological tagging, achieving
state-of-the-art performance on a number of languages.
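
To make the architecture sketched above concrete, the following is a minimal illustrative sketch, not the authors' implementation: a character-level BiLSTM and a word-level BiLSTM each produce per-token states with their own tagging head, and a meta-BiLSTM combines their concatenated states. The use of PyTorch, all module names, the toy dimensions, and the detaching of component states (one possible reading of "synchronized training" with separate losses rather than fully end-to-end backpropagation) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MetaTagger(nn.Module):
    """Illustrative sketch: two sentence-level encoders (character-based
    and word-based) plus a meta-BiLSTM that learns to combine their
    states. Each component keeps its own tagging head so the three can
    be trained with separate, jointly optimized losses."""
    def __init__(self, char_dim, word_dim, hidden, n_tags):
        super().__init__()
        self.char_lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        # meta model consumes the concatenated bidirectional states: 4 * hidden
        self.meta_lstm = nn.LSTM(4 * hidden, hidden, bidirectional=True, batch_first=True)
        self.char_head = nn.Linear(2 * hidden, n_tags)
        self.word_head = nn.Linear(2 * hidden, n_tags)
        self.meta_head = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_feats, word_feats):
        # char_feats, word_feats: (batch, tokens, dim) per-token features
        c, _ = self.char_lstm(char_feats)
        w, _ = self.word_lstm(word_feats)
        # Detach so meta-model gradients do not flow into the encoders
        # (an assumption standing in for "synchronized" training).
        m, _ = self.meta_lstm(torch.cat([c.detach(), w.detach()], dim=-1))
        return self.char_head(c), self.word_head(w), self.meta_head(m)

# Toy usage: 2 sentences, 8 tokens each; all dimensions hypothetical.
model = MetaTagger(char_dim=50, word_dim=100, hidden=64, n_tags=17)
char_f = torch.randn(2, 8, 50)
word_f = torch.randn(2, 8, 100)
c_logits, w_logits, m_logits = model(char_f, word_f)
# A cross-entropy loss would be computed per head and the three losses
# summed for a single joint update step.
```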