no code implementations • 5 Apr 2024 • Zhuochun Li, Bo Xie, Robin Hilsabeck, Alyssa Aguirre, Ning Zou, Zhimeng Luo, Daqing He
Evidence suggests that different prompts lead large language models (LLMs) to generate responses with varying quality.
1 code implementation • journal 2023 • Bo Xie, Xiaohui Jia, Xiawen Song, Hua Zhang, Bi Chen, Bo Jiang, Ye Wang, Yun Pan
It usually includes slot filling and intent detection (SFID) tasks, which aim at the semantic parsing of utterances.
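For illustration only, here is what an SFID parse produces for a single utterance; the intent and slot names below are hypothetical placeholders, not taken from the paper:

```python
# Illustrative SFID example (hypothetical intent/slot labels):
# the utterance is mapped to one intent plus filled slots.
utterance = "book a flight to Boston tomorrow"
parsed = {
    "intent": "BookFlight",
    "slots": {
        "destination": "Boston",   # filled from the token "Boston"
        "date": "tomorrow",        # filled from the token "tomorrow"
    },
}
print(parsed["intent"], parsed["slots"])
```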
no code implementations • 22 Dec 2020 • ShengNan Zhang, Bo Xie, QuanSheng Wu, Jianpeng Liu, Oleg V. Yazyev
We formulate the chiral decomposition rules that govern the electronic structure of a broad family of twisted $N+M$ multilayer graphene configurations that combine arbitrary stacking order and a mutual twist.
Mesoscale and Nanoscale Physics • Materials Science • Strongly Correlated Electrons
no code implementations • 17 Apr 2019 • Qiang Li, Bo Xie, Jane You, Wei Bian, DaCheng Tao
In this paper, we present the correlated logistic (CorrLog) model for multilabel image classification.
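A minimal sketch of the general idea behind pairwise-correlated logistic models for multilabel prediction follows; the scoring form and the brute-force decoder are illustrative assumptions, not the paper's exact CorrLog formulation or inference procedure:

```python
import itertools
import numpy as np

def corrlog_predict(x, W, b, P):
    """Predict a multilabel vector y in {0,1}^L by maximizing a score
    with logistic-style unary terms plus pairwise label correlations.
    W: (L, d) per-label weights, b: (L,) biases, P: (L, L) symmetric
    correlation matrix with zero diagonal. Exhaustive search is only
    viable for small L; it stands in for proper inference here."""
    L = W.shape[0]
    unary = W @ x + b                         # per-label evidence
    best_y, best_score = None, -np.inf
    for y in itertools.product([0, 1], repeat=L):
        y = np.array(y)
        score = unary @ y + 0.5 * y @ P @ y   # unary + pairwise terms
        if score > best_score:
            best_y, best_score = y, score
    return best_y

# Toy usage with random parameters (illustrative only).
rng = np.random.default_rng(0)
d, L = 5, 4
x = rng.normal(size=d)
W, b = rng.normal(size=(L, d)), rng.normal(size=L)
P = rng.normal(size=(L, L)); P = 0.5 * (P + P.T); np.fill_diagonal(P, 0.0)
print(corrlog_predict(x, W, b, P))
```

Enumerating all 2^L label vectors is only feasible for a handful of labels; the appeal of CorrLog-style models is handling label correlations with more efficient learning and inference than this sketch suggests.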
no code implementations • NeurIPS 2017 • Le Song, Santosh Vempala, John Wilmes, Bo Xie
Moreover, this hard family of functions is realizable with a small (sublinear in dimension) number of activation units in the single hidden layer.
1 code implementation • 28 Feb 2017 • Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song
For deep models, with no unrealistic assumptions, we prove universal approximation ability, a lower bound on approximation error, a partial optimization guarantee, and a generalization bound.
no code implementations • 9 Nov 2016 • Bo Xie, Yingyu Liang, Le Song
In this paper, we answer these questions by analyzing one-hidden-layer neural networks with ReLU activation, and show that despite the non-convexity, neural networks with diverse units have no spurious local minima.
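For concreteness, the architecture under analysis is the one-hidden-layer ReLU network f(x) = Σ_k v_k · relu(w_k · x); a minimal sketch, with shapes and names chosen purely for illustration:

```python
import numpy as np

def one_hidden_relu(x, W, v):
    """One-hidden-layer ReLU network: f(x) = sum_k v_k * relu(w_k . x).
    Shapes: W is (k, d) hidden weights, v is (k,) output weights."""
    return v @ np.maximum(W @ x, 0.0)

rng = np.random.default_rng(0)
W, v, x = rng.normal(size=(8, 3)), rng.normal(size=8), rng.normal(size=3)
print(one_hidden_relu(x, W, v))
```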
no code implementations • NeurIPS 2015 • Bo Xie, Yingyu Liang, Le Song
We propose a simple, computationally efficient, and memory-friendly algorithm based on "doubly stochastic gradients" to scale up a range of kernel nonlinear component analysis methods, such as kernel PCA, CCA, and SVD.
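A simplified sketch of the idea, assuming an RBF kernel approximated with a fixed batch of random Fourier features plus stochastic Oja updates; the paper's doubly stochastic algorithm also resamples random features at every step, which this fixed-feature sketch omits:

```python
import numpy as np

def stochastic_kernel_pca_top1(X, n_features=512, lr=0.1, epochs=5, seed=0):
    """Simplified sketch: approximate an RBF kernel with a fixed set of
    random Fourier features, then find the top kernel principal
    component with stochastic Oja updates over data points. The paper's
    doubly stochastic variant also resamples features each step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(size=(d, n_features))        # RFF frequencies (bandwidth 1)
    b = rng.uniform(0, 2 * np.pi, n_features)   # RFF phases

    def phi(x):
        return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

    mean = phi(X).mean(axis=0)                  # center in feature space
    v = rng.normal(size=n_features)
    v /= np.linalg.norm(v)
    for _ in range(epochs):
        for i in rng.permutation(n):            # stochastic over data points
            z = phi(X[i]) - mean
            v += lr * (z @ v) * z               # Oja's rule update
            v /= np.linalg.norm(v)
    return v  # component scores: (phi(x) - mean) @ v

X = np.random.default_rng(1).normal(size=(200, 3))
v = stochastic_kernel_pca_top1(X)
print(v.shape)
```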
no code implementations • 23 Mar 2015 • Maria-Florina Balcan, Yingyu Liang, Le Song, David Woodruff, Bo Xie
Can we perform kernel PCA on the entire dataset in a distributed and communication efficient fashion while maintaining provable and strong guarantees in solution quality?
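One way such a protocol can be communication-efficient is sketched below, under the assumption of a random-feature map shared by all nodes; this is illustrative, not necessarily the paper's algorithm. Each node compresses its local data into a small covariance matrix whose size depends only on the number of features, not on the number of points:

```python
import numpy as np

def distributed_kernel_pca(partitions, n_features=128, n_components=2, seed=0):
    """Illustrative sketch (not necessarily the paper's algorithm):
    every node maps its local data through the SAME random Fourier
    features and ships only an (m x m) covariance matrix to the
    coordinator, so communication is independent of the data size.
    Centering in feature space is omitted for brevity."""
    rng = np.random.default_rng(seed)
    d = partitions[0].shape[1]
    W = rng.normal(size=(d, n_features))         # shared RFF frequencies
    b = rng.uniform(0, 2 * np.pi, n_features)    # shared RFF phases

    def phi(X):
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # Each node sends only phi(X)^T phi(X): m*m numbers, not its raw data.
    C = sum(phi(X).T @ phi(X) for X in partitions)
    C /= sum(len(X) for X in partitions)
    _, eigvecs = np.linalg.eigh(C)
    return eigvecs[:, -n_components:]            # top components

parts = [np.random.default_rng(i).normal(size=(50, 3)) for i in range(4)]
V = distributed_kernel_pca(parts)
print(V.shape)  # (128, 2)
```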
1 code implementation • NeurIPS 2014 • Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, Le Song
The general perception is that kernel methods are not scalable, and neural nets are the methods of choice for nonlinear learning problems.
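The doubly stochastic gradient idea that challenges this perception can be sketched for kernel ridge regression: each step samples both a random data point and a fresh random Fourier feature. Everything below is a simplified reading for illustration, not the paper's exact recipe:

```python
import numpy as np

def doubly_stochastic_krr(X, y, steps=300, theta=1.0, reg=1e-4, seed=0):
    """Minimal sketch of doubly stochastic gradients for kernel ridge
    regression with an RBF kernel: each step samples BOTH a random data
    point and a fresh random feature, growing the function
    f(x) = sum_t alpha_t * sqrt(2) * cos(w_t . x + b_t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Ws, bs, alphas = [], [], []

    def f(x):
        if not Ws:
            return 0.0
        z = np.sqrt(2.0) * np.cos(np.array(Ws) @ x + np.array(bs))
        return float(np.array(alphas) @ z)

    for t in range(steps):
        eta = theta / (t + 1.0)                 # decaying step size
        i = rng.integers(n)                     # stochastic over data
        w = rng.normal(size=d)                  # stochastic over features
        b = rng.uniform(0, 2 * np.pi)
        err = f(X[i]) - y[i]                    # squared-loss residual
        alphas = [(1 - eta * reg) * a for a in alphas]  # shrink old terms
        Ws.append(w); bs.append(b)
        alphas.append(-eta * err * np.sqrt(2.0) * np.cos(w @ X[i] + b))
    return f

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = np.sin(3 * X[:, 0])
model = doubly_stochastic_krr(X, y)
print(round(model(X[0]), 3))
```

Because each step adds one random feature with one coefficient, the model grows like a stochastic-gradient method while still approximating a kernel machine, which is the sense in which kernel methods can be made scalable.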
no code implementations • 13 Nov 2013 • Le Song, Animashree Anandkumar, Bo Dai, Bo Xie
We establish that the sample complexity for the proposed method is quadratic in the number of latent components and is a low order polynomial in the other relevant parameters.