Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
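The core idea behind data programming can be illustrated with a minimal sketch (hypothetical labeling functions and names, not the paper's actual API): instead of hand-annotating examples, non-experts write noisy heuristics that vote on unlabeled data to produce training labels.

```python
# Minimal sketch of the data-programming idea (hypothetical labeling
# functions, not any library's real API): noisy heuristics vote on
# unlabeled text to produce weak training labels.

# Each labeling function returns 1 (positive), 0 (negative), or None (abstain).
def lf_contains_spam_phrase(text):
    return 1 if "free money" in text.lower() else None

def lf_has_greeting(text):
    return 0 if text.lower().startswith(("hi", "hello")) else None

def lf_many_exclamations(text):
    return 1 if text.count("!") >= 3 else None

LABELING_FUNCTIONS = [lf_contains_spam_phrase, lf_has_greeting, lf_many_exclamations]

def weak_label(text):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) is not None]
    if not votes:
        return None  # every function abstained
    return 1 if sum(votes) > len(votes) / 2 else 0

print(weak_label("FREE MONEY!!! Click now!!!"))  # two spam heuristics fire -> 1
print(weak_label("Hello, any plans for lunch?"))  # greeting heuristic fires -> 0
```

In the full approach the votes are combined by a learned generative model rather than a simple majority vote, but the workflow for the user is the same: write small heuristics, not labels.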
We introduce Baseline: a library for reproducible deep learning research and fast model development for NLP.
The combination of better supervised data and a more appropriate high-capacity model enables much better relation extraction performance.
Attention-based recurrent neural network models for joint intent detection and slot filling have achieved state-of-the-art performance, but they learn the attention weights for the two tasks independently.
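The alternative implied here is to share one attention pass across both tasks. A hedged NumPy sketch (toy dimensions and random weights, not the paper's exact architecture) of a single set of attention weights feeding both the slot tagger and the intent classifier:

```python
import numpy as np

# Toy sketch (not the paper's exact model): one shared attention pass over
# the encoder states, reused by both the slot tagger and the intent
# classifier, instead of independently learned attention weights per task.
rng = np.random.default_rng(0)
T, H = 5, 8                          # sequence length, hidden size
enc = rng.standard_normal((T, H))    # encoder hidden states, one per token

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Shared attention over time steps (toy scoring: dot product with a query).
query = rng.standard_normal(H)
alpha = softmax(enc @ query)         # shape (T,), sums to 1
context = alpha @ enc                # shared context vector, shape (H,)

# Per-token slot logits condition on the shared context...
W_slot = rng.standard_normal((2 * H, 4))   # 4 hypothetical slot labels
slot_logits = np.concatenate([enc, np.tile(context, (T, 1))], axis=1) @ W_slot

# ...and the intent classifier reads the very same context vector.
W_intent = rng.standard_normal((H, 3))     # 3 hypothetical intents
intent_logits = context @ W_intent

print(slot_logits.shape, intent_logits.shape)  # (5, 4) (3,)
```

Because `alpha` is computed once, any supervision signal from either task shapes the attention used by both.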
Attention-based encoder-decoder neural network models have recently shown promising results in machine translation and speech recognition.
We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting.
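One simple way to realize this kind of conditioning, sketched here with a hypothetical input format (separator tokens and function name are illustrative, not the paper's implementation), is to flatten the retrieved facts and the dialogue history into a single sequence for a standard Seq2Seq encoder:

```python
# Hedged sketch of conditioning on both facts and history (hypothetical
# format, not the paper's implementation): external "facts" are prepended
# to the conversation turns so one encoder input carries both signals.

def build_encoder_input(facts, history, fact_sep="<fact>", turn_sep="<turn>"):
    """Flatten facts and dialogue turns into one whitespace-tokenized sequence."""
    tokens = []
    for fact in facts:
        tokens.append(fact_sep)       # mark where a fact begins
        tokens.extend(fact.split())
    for turn in history:
        tokens.append(turn_sep)       # mark where a dialogue turn begins
        tokens.extend(turn.split())
    return tokens

facts = ["great pizza downtown"]
history = ["any dinner ideas ?", "what food do you like ?"]
print(build_encoder_input(facts, history))
```

Keeping the conditioning in the input sequence, rather than in task-specific architecture, is what lets the same model stay versatile across open-domain settings.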
A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations. Strikingly, these works also report the emergence of grounded, human-interpretable language in the protocols the agents develop, learned without any human supervision.