no code implementations • 17 Jul 2023 • Ruichen Li, Haotian Ye, Du Jiang, Xuelan Wen, Chuwei Wang, Zhe Li, Xiang Li, Di He, Ji Chen, Weiluo Ren, LiWei Wang
Neural network-based variational Monte Carlo (NN-VMC) has emerged as a promising technique for ab initio quantum chemistry.
1 code implementation • 24 May 2023 • Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, LiWei Wang
We start by giving an impossibility result showing that bounded-depth Transformers are unable to directly produce correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length.
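To make the task concrete, here is a toy illustration (ours, not the paper's construction) of the kind of arithmetic-expression task studied, with a chain-of-thought style derivation in which each intermediate step is a small local rewrite; the paper's point is that emitting such steps lets a bounded-depth model circumvent the impossibility result for direct answers.

```python
import re

def cot_eval(expr: str) -> list[str]:
    """Evaluate an arithmetic expression step by step, recording each
    intermediate expression. A toy analogue of a chain-of-thought
    derivation; illustrative only, not the paper's construction."""
    steps = [expr]
    paren = re.compile(r"\(([^()]+)\)")
    # Repeatedly reduce the innermost parenthesized subexpression.
    while (m := paren.search(expr)) is not None:
        expr = expr[:m.start()] + str(eval(m.group(1))) + expr[m.end():]
        steps.append(expr)
    if not expr.lstrip("-").isdigit():  # one final reduction if needed
        steps.append(str(eval(expr)))
    return steps

print(cot_eval("(2+3)*(4+1)"))  # ['(2+3)*(4+1)', '5*(4+1)', '5*5', '25']
```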
no code implementations • 22 May 2023 • Haotian Ye, Yihong Liu, Hinrich Schütze
An interesting line of research in natural language processing (NLP) aims to incorporate linguistic typology to bridge linguistic diversity and support research on low-resource languages.
1 code implementation • 22 May 2023 • Yihong Liu, Haotian Ye, Leonie Weissweiler, Hinrich Schütze
This demonstrates the benefits of colexification for multilingual NLP.
2 code implementations • 15 May 2023 • Yihong Liu, Haotian Ye, Leonie Weissweiler, Philipp Wicke, Renhao Pei, Robert Zangenfeind, Hinrich Schütze
The resulting measure of the conceptual similarity of two languages is complementary to standard genealogical, typological, and surface similarity measures.
no code implementations • 15 May 2023 • Chunlan Ma, Ayyoob ImaniGooghari, Haotian Ye, Ehsaneddin Asgari, Hinrich Schütze
While natural language processing tools have been developed extensively for some of the world's languages, a significant portion of the world's more than 7,000 languages remains neglected.
1 code implementation • 7 Dec 2022 • Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt
Existing techniques for training language models can be misaligned with the truth: if we train models with imitation learning, they may reproduce errors that humans make; if we train them to generate text that humans rate highly, they may output errors that human evaluators can't detect.
1 code implementation • 20 Oct 2022 • Haotian Ye, James Zou, Linjun Zhang
This suggests a promising strategy: first train a feature learner rather than a classifier, and then perform linear probing (last-layer retraining) in the test environment.
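As a rough illustration of that strategy, the sketch below retrains only a linear head on frozen features using scikit-learn; the `featurizer` callable and the toy data are assumptions standing in for a pretrained feature extractor and real test-environment data.

```python
# Minimal sketch of the "feature learner + linear probe" recipe, assuming
# a frozen feature extractor `featurizer` (here a toy identity map).
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(featurizer, X_test_env, y_test_env):
    """Retrain only the last (linear) layer on test-environment data,
    keeping the learned features fixed."""
    Z = featurizer(X_test_env)  # frozen features, shape (n, d)
    return LogisticRegression(max_iter=1000).fit(Z, y_test_env)

# Usage with random placeholder data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)
probe = linear_probe(lambda x: x, X, y)
print(probe.score(X, y))
```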
no code implementations • 19 Oct 2022 • Haotian Ye, Xiaoyu Chen, LiWei Wang, Simon S. Du
Generalization in Reinforcement Learning (RL) aims to learn, during training, an agent that generalizes to the target environment.
no code implementations • NeurIPS 2021 • Haotian Ye, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, LiWei Wang
We also introduce a new concept, the expansion function, which characterizes the extent to which variance is amplified in the test domains relative to the training domains, and therefore gives a quantitative meaning to invariant features.
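A rough formal sketch of the idea (notation ours, not verbatim from the paper): an expansion function $s$ upper-bounds the test-domain variance of a feature $\phi$ in terms of its training-domain variance.

```latex
% Sketch, notation ours: s is monotonically increasing, dominates the
% identity, and vanishes at zero, so features with low variance
% (near-invariant) in training remain low-variance in test.
\[
\operatorname{Var}_{\text{test}}(\phi) \;\le\; s\!\left(\operatorname{Var}_{\text{train}}(\phi)\right),
\qquad s(x) \ge x, \qquad \lim_{x \to 0^{+}} s(x) = 0 .
\]
```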
no code implementations • 21 Jan 2021 • Haotian Ye, Chuanlong Xie, Yue Liu, Zhenguo Li
One common definition of OOD accuracy is worst-domain accuracy.
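Concretely (notation ours), worst-domain accuracy takes the minimum of a model's per-domain accuracies:

```latex
% Worst-domain accuracy of a model f over a set of domains E.
\[
\operatorname{Acc}_{\text{OOD}}(f) \;=\; \min_{e \in \mathcal{E}} \; \operatorname{Acc}_{e}(f).
\]
```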
no code implementations • 13 Jun 2020 • Chuanlong Xie, Haotian Ye, Fei Chen, Yue Liu, Rui Sun, Zhenguo Li
The key to out-of-distribution (OOD) generalization is to generalize invariance from training domains to target domains.