Search Results for author: Nan Hua

Found 5 papers, 3 papers with code

Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior

1 code implementation · Findings of the Association for Computational Linguistics 2020 · Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth

Traditional (unstructured) pruning methods for Transformer models focus on regularizing individual weights by penalizing them toward zero.
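
The line above describes the standard unstructured-pruning baseline that the paper argues against. As a minimal sketch of that baseline only (not the paper's spectral-normalized identity prior), the snippet below adds an L2 penalty that pushes individual weights toward zero and then zeroes out weights below a magnitude threshold; all function names and hyperparameter values are illustrative.

```python
import torch
import torch.nn as nn

def l2_weight_penalty(model: nn.Module, strength: float = 1e-4) -> torch.Tensor:
    """Sum of squared weights; added to the task loss to penalize weights toward zero."""
    return strength * sum((p ** 2).sum() for p in model.parameters())

def magnitude_prune(model: nn.Module, threshold: float = 1e-3) -> None:
    """Unstructured pruning: zero out individual weights below `threshold` in magnitude."""
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() >= threshold).float())

# Toy Transformer layer and dummy objective, for illustration only.
model = nn.TransformerEncoderLayer(d_model=64, nhead=4)
x = torch.randn(10, 2, 64)  # (sequence, batch, features)
loss = model(x).pow(2).mean() + l2_weight_penalty(model)
loss.backward()
magnitude_prune(model)
```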

Universal Sentence Encoder

23 code implementations · 29 Mar 2018 · Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil

For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance.

Association · Conversational Response Selection +7
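
Among the 23 listed code implementations, a widely used public release of the Universal Sentence Encoder is distributed through TensorFlow Hub. A minimal usage sketch, assuming the tensorflow_hub package and the public tfhub.dev model handle:

```python
import tensorflow_hub as hub

# Load the public Universal Sentence Encoder release from TensorFlow Hub.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "A fast auburn fox leaps above a sleepy hound.",
]

# The model maps each sentence to a fixed-length 512-dimensional embedding.
embeddings = embed(sentences)
print(embeddings.shape)  # (2, 512)
```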
