Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
1 code implementation • 15 Jun 2021 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei Li, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Named Entity Recognition on CMeEE
Recent neural aspect-based sentiment analysis approaches, though achieving promising improvements on benchmark datasets, have been reported to suffer from poor robustness when encountering confounders such as non-target aspects.
Although the self-supervised pre-training of transformer models has revolutionized natural language processing (NLP) applications and achieved state-of-the-art results on various benchmarks, these models remain vulnerable to small, imperceptible perturbations of legitimate inputs.
In this paper, we propose legal provision prediction (LPP), a novel legal application that aims to predict the legal provisions related to a given affair.
We introduce a prototype model and provide an open-source and extensible toolkit called OpenUE for various extraction tasks.
Fine-tuning pre-trained models has achieved impressive performance on standard natural language processing benchmarks.
We show that the out-of-time-order correlation (OTOC) $ \langle W(t)^\dagger V(0)^\dagger W(t)V(0)\rangle$ in many-body localized (MBL) and marginal MBL systems can be efficiently calculated by the spectrum bifurcation renormalization group (SBRG).
Strongly Correlated Electrons • Disordered Systems and Neural Networks
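The OTOC above can be computed directly for a small system, which helps make the quantity concrete. The following is a minimal sketch, not the paper's SBRG algorithm: it brute-forces the infinite-temperature OTOC $\langle W(t)^\dagger V(0)^\dagger W(t) V(0)\rangle = \mathrm{Tr}[W(t)^\dagger V^\dagger W(t) V]/\mathrm{Tr}[\mathbb{1}]$ by exact diagonalization of a hypothetical random-field Ising chain (a common stand-in for an MBL model); the operator choices and couplings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4  # number of spins; Hilbert-space dimension 2**L

# Single-site Pauli matrices
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator op at site i of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else I2)
    return out

# Illustrative random-field Ising Hamiltonian with a weak transverse field
H = sum(site_op(sz, i, L) @ site_op(sz, i + 1, L) for i in range(L - 1))
H = H + sum(rng.uniform(-5, 5) * site_op(sz, i, L) for i in range(L))
H = H + sum(0.1 * site_op(sx, i, L) for i in range(L))

# Local probe operators at opposite ends of the chain
V = site_op(sz, 0, L)
W0 = site_op(sz, L - 1, L)

# Diagonalize once; reuse for time evolution U(t) = exp(-i H t)
evals, evecs = np.linalg.eigh(H)

def otoc(t):
    """Infinite-temperature OTOC Tr[W(t)^† V^† W(t) V] / Tr[1]."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    Wt = U.conj().T @ W0 @ U  # Heisenberg-picture W(t)
    return np.trace(Wt.conj().T @ V.conj().T @ Wt @ V) / 2 ** L

print(otoc(0.0).real)  # -> 1.0: at t=0 the distant Pauli operators commute
```

Because $W$ and $V$ are Pauli strings, the OTOC starts at 1 and its magnitude stays bounded by 1; in an MBL phase its decay is characteristically slow (logarithmic light cone), which is what SBRG makes accessible for much larger systems than exact diagonalization can reach.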