no code implementations • 3 Oct 2023 • Manoj Vishwanath, Steven Cao, Nikil Dutt, Amir M. Rahmani, Miranda M. Lim, Hung Cao
We tested the robustness of this transfer learning technique on various rule-based classical machine learning models as well as an EEGNet-based deep learning model, evaluating on several datasets, including human and mouse data, in a binary classification task of detecting individuals with versus without traumatic brain injury (TBI).
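A minimal sketch of the cross-dataset evaluation this entry describes, assuming pre-extracted EEG feature matrices and binary TBI labels for a source and a target dataset (the classifier choice and feature representation are illustrative stand-ins, not the paper's rule-based or EEGNet pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cross_dataset_eval(X_src, y_src, X_tgt, y_tgt):
    """Train a binary TBI-vs-control classifier on one dataset (e.g., mouse EEG
    features) and evaluate it on another (e.g., human EEG features) to gauge
    how well the learned decision rule transfers across datasets."""
    clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
    scores = clf.predict_proba(X_tgt)[:, 1]
    return roc_auc_score(y_tgt, scores)
```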
no code implementations • 6 Jun 2023 • Steven Cao, Percy Liang, Gregory Valiant
We propose a natural algorithm that involves imputing the missing values of the matrix $X^TX$ and show that even with only two observations per row in $X$, we can provably recover $X^TX$ as long as we have at least $\Omega(r^2 d \log d)$ rows, where $r$ is the rank and $d$ is the number of columns.
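A minimal numpy sketch of the plug-in step such an imputation algorithm might start from, assuming the unobserved entries of $X$ are stored as zeros alongside an observation mask; the rescaling and the handling of never-co-observed column pairs are illustrative, and the paper's guarantee additionally exploits the rank-$r$ structure to fill in those remaining pairs:

```python
import numpy as np

def estimate_gram(X_obs, mask):
    """Plug-in estimate of the Gram matrix X^T X from sparsely observed rows.

    X_obs : (n, d) array with unobserved entries set to 0
    mask  : (n, d) boolean array, True where an entry was observed
    Entry (j, k) averages x_ij * x_ik over the rows observing both columns,
    rescaled by n; column pairs that are never co-observed come back as NaN
    and would be filled in by a low-rank completion step.
    """
    n, _ = X_obs.shape
    counts = mask.astype(float).T @ mask.astype(float)  # co-observation counts per pair
    sums = X_obs.T @ X_obs                              # products over co-observed rows only
    with np.errstate(invalid="ignore", divide="ignore"):
        return n * sums / counts
```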
3 code implementations • NAACL 2021 • Steven Cao, Victor Sanh, Alexander M. Rush
The dominant approach to probing neural networks for linguistic properties is to train a new shallow multi-layer perceptron (MLP) on top of the model's internal representations.
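For context, a minimal PyTorch sketch of that conventional probing setup, i.e. a small MLP trained on frozen representations; the hidden size, label count, and layer widths are assumptions for illustration:

```python
import torch
import torch.nn as nn

class MLPProbe(nn.Module):
    """Shallow MLP trained on top of a frozen model's internal representations."""
    def __init__(self, hidden_size=768, num_labels=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, reps):  # reps: (batch, hidden_size), detached from the base model
        return self.net(reps)

# Only the probe's parameters are optimized; the base model stays frozen.
probe = MLPProbe()
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
```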
no code implementations • EMNLP 2020 • Steven Cao, Nikita Kitaev, Dan Klein
We propose a method for unsupervised parsing based on the linguistic notion of a constituency test.
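A rough sketch of how a constituency test can be turned into a span score, assuming an `acceptability` function (e.g., a grammaticality model) that maps a token sequence to a score in [0, 1]; the specific transformations below are illustrative rather than the paper's exact set of tests:

```python
def constituency_score(tokens, span, acceptability):
    """Score a candidate constituent by applying simple constituency tests
    (pronoun substitution, fronting, omission) and averaging the judged
    acceptability of each transformed sentence."""
    i, j = span
    inside, outside = tokens[i:j], tokens[:i] + tokens[j:]
    variants = [
        tokens[:i] + ["it"] + tokens[j:],  # substitution by a pronoun
        inside + [","] + outside,          # fronting the span
        outside,                           # omitting the span
    ]
    return sum(acceptability(v) for v in variants) / len(variants)
```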
1 code implementation • ICLR 2020 • Steven Cao, Nikita Kitaev, Dan Klein
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT.
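As a rough illustration of rotation-style embedding alignment, here is an orthogonal Procrustes fit between contextual embeddings of aligned word occurrences; this is a sketch of one common alignment procedure, not necessarily the exact method evaluated or proposed in the paper, which also considers fine-tuning the encoder itself:

```python
import numpy as np

def procrustes_align(src, tgt):
    """Learn an orthogonal matrix W minimizing ||src @ W - tgt||_F, where src and
    tgt are (n_pairs, d) contextual embeddings of word-aligned occurrences in two
    languages. Applying W to src embeddings rotates them toward the tgt space."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt
```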
no code implementations • 15 May 2019 • Hongjiang Wei, Steven Cao, Yuyao Zhang, Xiaojun Guan, Fuhua Yan, Kristen W. Yeom, Chunlei Liu
To address these challenges, we propose autoQSM, a learning-based QSM reconstruction method that directly estimates the magnetic susceptibility from total phase images without the need for brain extraction or background phase removal.
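A toy PyTorch sketch of the end-to-end mapping autoQSM learns, a convolutional network that takes a total-phase volume and outputs a susceptibility map; the architecture below is a placeholder assumption, not the paper's actual network:

```python
import torch
import torch.nn as nn

class PhaseToQSM(nn.Module):
    """Toy 3D CNN mapping a total-phase volume directly to a susceptibility map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, phase):  # phase: (batch, 1, D, H, W) total phase volume
        return self.net(phase)
```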
4 code implementations • ACL 2019 • Nikita Kitaev, Steven Cao, Dan Klein
We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.
Ranked #6 on Constituency Parsing on CTB5