1 code implementation • PAKDD 2022: Advances in Knowledge Discovery and Data Mining • Ashkan Farhangi, Ning Sui, Nan Hua, Haiyan Bai, Arthur Huang, Zhishan Guo
This paper proposes Protoformer, a novel self-learning framework for Transformers that can leverage problematic samples for text classification.
no code implementations • ACL 2022 • Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, Tomas Pfister
Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth
Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero.
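A hedged sketch of that traditional unstructured-pruning baseline, not the method proposed in the paper: an L1 penalty nudges individual weights toward zero during training, after which small-magnitude weights are masked out. The model handle, penalty strength, and threshold below are illustrative assumptions.

```python
import tensorflow as tf

L1_STRENGTH = 1e-4   # illustrative penalty weight
THRESHOLD = 1e-3     # illustrative pruning cutoff

def loss_with_l1(model, task_loss):
    # Penalize the magnitude of every trainable weight so training drives them toward zero.
    l1_penalty = tf.add_n([tf.reduce_sum(tf.abs(w)) for w in model.trainable_weights])
    return task_loss + L1_STRENGTH * l1_penalty

def prune_small_weights(model):
    # After training, zero out weights whose magnitude fell below the threshold.
    for w in model.trainable_weights:
        mask = tf.cast(tf.abs(w) > THRESHOLD, w.dtype)
        w.assign(w * mask)
```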
no code implementations • EMNLP 2018 • Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, Ray Kurzweil
We present easy-to-use TensorFlow Hub sentence embedding models with good task transfer performance.
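As a minimal usage sketch, the published Universal Sentence Encoder modules can be loaded directly from TensorFlow Hub with the TF2-style API; the module URL below is the public USE v4 handle, which may differ from the exact variant released with this paper.

```python
import tensorflow_hub as hub

# Load a Universal Sentence Encoder module from TensorFlow Hub.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Sentence embeddings capture meaning beyond keyword overlap.",
]
embeddings = embed(sentences)  # one 512-dimensional vector per sentence
print(embeddings.shape)        # (2, 512)
```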
23 code implementations • 29 Mar 2018 • Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil
For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.
Ranked #1 on Text Classification on TREC-6