no code implementations • 7 Jan 2024 • Tingting Zhao, Shubo Tian, Jordan Daly, Melissa Geiger, Minna Jia, Jinfeng Zhang
The framework may be applied to other types of disasters for rapid and targeted retrieval, classification, redistribution, and archiving of real-time government orders and notifications.
no code implementations • 20 Apr 2022 • Qingyu Chen, Alexis Allot, Robert Leaman, Rezarta Islamaj Doğan, Jingcheng Du, Li Fang, Kai Wang, Shuo Xu, Yuefu Zhang, Parsa Bagherzadeh, Sabine Bergler, Aakash Bhatnagar, Nidhir Bhavsar, Yung-Chun Chang, Sheng-Jie Lin, Wentai Tang, Hongtong Zhang, Ilija Tavchioski, Senja Pollak, Shubo Tian, Jinfeng Zhang, Yulia Otmakhova, Antonio Jimeno Yepes, Hang Dong, Honghan Wu, Richard Dufour, Yanis Labrak, Niladri Chatterjee, Kushagri Tandon, Fréjus Laleye, Loïc Rakotoson, Emmanuele Chersoni, Jinghang Gu, Annemarie Friedrich, Subhash Chandra Pujari, Mariia Chizhikova, Naveen Sivadasan, Zhiyong Lu
To close the gap, we organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature.
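The LitCovid track is, at its core, a multi-label topic annotation task: each COVID-19 article may be assigned several topics at once. The following is only a generic sketch of that task setup using scikit-learn; it is not any participating team's system, and the example abstracts and topic names are hypothetical placeholders.

```python
# Generic multi-label topic annotation sketch (illustrative only; not a track system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy training data: each abstract can carry several topic labels at once.
abstracts = [
    "We report transmission dynamics of SARS-CoV-2 in a hospital setting.",
    "A randomized trial of an antiviral treatment for severe COVID-19.",
    "Forecasting case counts with an epidemiological compartment model.",
]
labels = [["Transmission", "Prevention"], ["Treatment"], ["Epidemic Forecasting"]]

binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(labels)      # multi-hot label matrix, one column per topic

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(abstracts)  # sparse TF-IDF features

# One binary classifier per topic, so an article can receive multiple topics.
model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X, Y)

new = vectorizer.transform(["Mask mandates and their effect on transmission."])
print(binarizer.inverse_transform(model.predict(new)))
```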
no code implementations • 3 Nov 2021 • Qing Han, Shubo Tian, Jinfeng Zhang
As a major social media platform, Twitter publishes a large volume of user-generated text (tweets) on a daily basis.
1 code implementation • bioRxiv 2021 • Yuan Zhang, Arunima Mandal, Kevin Cui, Xiuwen Liu, Jinfeng Zhang
Prediction is very fast compared with other protein sequence prediction servers: a query protein takes only a few minutes on average.
no code implementations • IJCNLP 2019 • Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu
Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information.
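For reference, the baseline mentioned here is the standard Transformer-style multi-head self-attention layer. Below is a minimal NumPy sketch of that standard layer only; it is not the authors' phrase-aware model, and the dimensions are arbitrary illustrative choices.

```python
# Minimal multi-head self-attention sketch (standard baseline, not the paper's model).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, wo, num_heads):
    """x: (seq_len, d_model); wq/wk/wv/wo: (d_model, d_model) projection matrices."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project inputs to queries, keys, and values, then split into heads.
    q = (x @ wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product attention per head: every token attends to every token.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    context = softmax(scores) @ v                          # (heads, seq, d_head)
    # Concatenate heads and apply the output projection.
    out = context.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ wo

rng = np.random.default_rng(0)
d_model, seq_len, heads = 64, 10, 8
x = rng.standard_normal((seq_len, d_model))
wq, wk, wv, wo = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
print(multi_head_self_attention(x, wq, wk, wv, wo, heads).shape)  # (10, 64)
```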
no code implementations • IJCNLP 2019 • Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu
Recent studies have shown that a hybrid of self-attention networks (SANs) and recurrent neural networks (RNNs) outperforms either architecture alone, but little is known about why the hybrid models work.
no code implementations • NAACL 2019 • Jie Hao, Xing Wang, Baosong Yang, Long-Yue Wang, Jinfeng Zhang, Zhaopeng Tu
In addition to the standard recurrent neural network, we introduce a novel attentive recurrent network to leverage the strengths of both attention and recurrent networks.
no code implementations • 11 Jun 2018 • Albert Steppi, Jinchan Qu, Minjing Tao, Tingting Zhao, Xiaodong Pang, Jinfeng Zhang
Moreover, we design a new balanced review assignment procedure, which can significantly improve the performance of both the MBC and CIGR methods.