no code implementations • 19 Nov 2022 • Lifu Wang, Tianyu Wang, Shengwei Yi, Bo Shen, Bo Hu, Xing Cao
We study the learning ability of linear recurrent neural networks with Gradient Descent.
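The setting can be illustrated with a toy sketch (our own minimal setup, not the paper's construction): a linear RNN $h_t = W h_{t-1} + U x_t$ with readout $\hat{y} = v^\top h_T$, trained by full-batch gradient descent on squared loss. Because the dynamics are linear, backpropagation through time is exact; the target below (sum of the first input coordinate) is a hypothetical choice for illustration.

```python
import numpy as np

# Minimal linear-RNN sketch: h_t = W h_{t-1} + U x_t, y_hat = v . h_T,
# trained with full-batch gradient descent (BPTT is exact for linear dynamics).
rng = np.random.default_rng(0)
d_in, d_h, T, n = 3, 8, 5, 32

X = rng.normal(size=(n, T, d_in))
y = X[:, :, 0].sum(axis=1)          # toy target: sum of the first coordinate

W = 0.1 * rng.normal(size=(d_h, d_h))
U = 0.1 * rng.normal(size=(d_h, d_in))
v = 0.1 * rng.normal(size=d_h)

def forward(X, W, U, v):
    hs = [np.zeros((X.shape[0], d_h))]
    for t in range(T):
        hs.append(hs[-1] @ W.T + X[:, t] @ U.T)
    return hs, hs[-1] @ v

losses = []
lr = 0.05
for step in range(300):
    hs, y_hat = forward(X, W, U, v)
    err = y_hat - y
    losses.append(0.5 * np.mean(err ** 2))
    gv = hs[-1].T @ err / n
    gW, gU = np.zeros_like(W), np.zeros_like(U)
    delta = err[:, None] * v        # per-sample gradient of loss w.r.t. h_T
    for t in range(T, 0, -1):
        gW += delta.T @ hs[t - 1] / n
        gU += delta.T @ X[:, t - 1] / n
        delta = delta @ W           # push the gradient one step back in time
    W -= lr * gW; U -= lr * gU; v -= lr * gv

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The loss decreases under gradient descent on this toy instance; the paper's question is when and how fast such training provably succeeds.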
no code implementations • NeurIPS 2021 • Lifu Wang, Bo Shen, Bo Hu, Xing Cao
In this paper, using a detailed analysis of the neural tangent kernel matrix, we prove a generalization error bound for learning such functions without normalization conditions, and show that some notable concept classes are learnable with the number of iterations and samples scaling almost-polynomially in the input length $L$.
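The central object here, the neural tangent kernel matrix, can be sketched empirically (a generic illustration, not the paper's analysis): for a model $f_\theta$, the Gram matrix is $K_{ij} = \langle \partial f(x_i)/\partial\theta,\ \partial f(x_j)/\partial\theta \rangle$. The two-layer ReLU network and sizes below are our own choices, with the Jacobian taken by central finite differences.

```python
import numpy as np

# Empirical NTK Gram matrix K = J J^T, where J is the Jacobian of the
# network outputs w.r.t. all parameters (finite-difference approximation).
rng = np.random.default_rng(1)
d, width, n = 4, 16, 6
W1 = rng.normal(size=(width, d)) / np.sqrt(d)
w2 = rng.normal(size=width) / np.sqrt(width)
theta = np.concatenate([W1.ravel(), w2])

def f(theta, X):
    # hypothetical two-layer ReLU network, chosen for illustration
    W1 = theta[:width * d].reshape(width, d)
    w2 = theta[width * d:]
    return np.maximum(X @ W1.T, 0.0) @ w2

X = rng.normal(size=(n, d))
eps = 1e-5
J = np.empty((n, theta.size))
for p in range(theta.size):
    tp, tm = theta.copy(), theta.copy()
    tp[p] += eps
    tm[p] -= eps
    J[:, p] = (f(tp, X) - f(tm, X)) / (2 * eps)

K = J @ J.T                         # n x n empirical NTK Gram matrix
eigs = np.linalg.eigvalsh(K)
```

By construction $K$ is symmetric positive semidefinite; NTK-style generalization bounds typically hinge on properties of this matrix, such as its smallest eigenvalue.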
no code implementations • 15 Jan 2021 • Xing Cao, Yun Liu
Recent advances in question answering and reading comprehension have produced models that surpass human performance when the answer is contained in a single, continuous passage of text, requiring only single-hop reasoning.