no code implementations • BigScience (ACL) 2022 • Jason Fries, Natasha Seelam, Gabriel Altay, Leon Weber, Myungsun Kang, Debajyoti Datta, Ruisi Su, Samuele Garda, Bo Wang, Simon Ott, Matthias Samwald, Wojciech Kusa
Large-scale language modeling and natural language prompting have demonstrated exciting capabilities for few- and zero-shot learning in NLP.
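As context for the zero-shot setting mentioned above, prompting-style zero-shot inference can be tried with off-the-shelf tooling. Below is a minimal sketch using Hugging Face's zero-shot-classification pipeline; the model choice (facebook/bart-large-mnli), the input sentence, and the candidate labels are illustrative assumptions, not taken from the paper.

```python
from transformers import pipeline

# Zero-shot classification via natural language inference: the model scores
# how well each candidate label fits the input, with no task-specific training.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The patient was prescribed metformin for type 2 diabetes.",  # illustrative input
    candidate_labels=["medication", "diagnosis", "procedure"],    # illustrative labels
)
print(result["labels"][0], round(result["scores"][0], 3))
```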
no code implementations • Findings (ACL) 2022 • En-Shiun Annie Lee, Sarubi Thillainathan, Shravan Nayak, Surangika Ranathunga, David Ifeoluwa Adelani, Ruisi Su, Arya D. McCarthy
What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages?
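For readers who want to probe this question directly, mBART checkpoints are available through Hugging Face transformers. The sketch below queries the publicly released many-to-many mBART-50 checkpoint; the model ID, the Sinhala example sentence, and the language codes are illustrative assumptions and do not reproduce the paper's fine-tuning setup.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "facebook/mbart-large-50-many-to-many-mmt"  # public many-to-many checkpoint
model = MBartForConditionalGeneration.from_pretrained(model_id)
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)

tokenizer.src_lang = "si_LK"  # Sinhala as an illustrative low-resource source language
inputs = tokenizer("මට පොත කියවන්න ඕනේ.", return_tensors="pt")  # illustrative sentence
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],  # decode into English
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```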
1 code implementation • CoNLL (EMNLP) 2021 • Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig
Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text.
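For clarity, an attachment score is simply the fraction of tokens assigned the correct head. A minimal sketch of a directed attachment computation follows; the function name and the head-index convention (0 marking the root) are assumptions mirroring the standard unlabeled attachment definition, not the paper's exact evaluation code.

```python
def directed_attachment_score(pred_heads, gold_heads):
    """Fraction of tokens whose predicted head index matches the gold head.

    Both arguments are parallel lists of head indices, one per token,
    with 0 conventionally marking attachment to the root.
    """
    assert len(pred_heads) == len(gold_heads) and gold_heads
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(gold_heads)

# Toy example: a 4-token sentence where 3 of 4 heads are recovered.
print(directed_attachment_score([2, 0, 2, 3], [2, 0, 2, 2]))  # 0.75
```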
1 code implementation • IJCNLP 2019 • Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, Lu Wang
The increasing prevalence of political bias in news media calls for greater public awareness of it, as well as robust methods for its detection.