1 code implementation • 2 May 2023 • Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Tianyuan Yuan, Yue Wang, Yang Yuan, Hang Zhao
We abstract the features (i.e., learned representations) of multi-modal data into (1) uni-modal features, which can be learned from uni-modal training, and (2) paired features, which can only be learned from cross-modal interactions.
no code implementations • 19 Mar 2023 • Peiyuan Zhang, Jiaye Teng, Jingzhao Zhang
Our paper examines this observation by providing excess risk lower bounds for GD and SGD in two realizable settings: (1) $\eta T = O(n)$, and (2) $\eta T = \Omega(n)$, where $n$ is the size of the dataset.
1 code implementation • 1 Oct 2022 • Jiaye Teng, Chuan Wen, Dinghuai Zhang, Yoshua Bengio, Yang Gao, Yang Yuan
Conformal prediction is a distribution-free technique for establishing valid prediction intervals.
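As background for readers unfamiliar with the technique, split conformal prediction can be sketched in a few lines. This is a minimal illustrative example of the standard split-conformal recipe, not the method proposed in the paper; the toy data, the least-squares predictor, and the 90% coverage level are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise.
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Split into a proper training set and a calibration set.
x_train, y_train = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit any point predictor on the training split (least squares here).
slope, intercept = np.polyfit(x_train, y_train, 1)

def predict(t):
    return slope * t + intercept

# Nonconformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - predict(x_cal))

# Calibration quantile with the finite-sample correction for 90% coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Distribution-free prediction interval for a new point.
x_new = 0.5
interval = (predict(x_new) - q, predict(x_new) + q)
```

The validity guarantee is marginal: over the randomness of the calibration split, the interval covers the true label with probability at least $1 - \alpha$, regardless of the underlying data distribution.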
no code implementations • 22 Jun 2022 • Chuan Wen, Jianing Qian, Jierui Lin, Jiaye Teng, Dinesh Jayaraman, Yang Gao
Across applications spanning supervised classification and sequential control, deep learning has been reported to find "shortcut" solutions that fail catastrophically under minor changes in the data distribution.
no code implementations • 6 Jun 2022 • Haowei He, Jiaye Teng, Yang Yuan
Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-of-distribution samples.
no code implementations • 1 Jun 2022 • Kaiyue Wen, Jiaye Teng, Jingzhao Zhang
Studies on benign overfitting provide insights for the success of overparameterized deep learning models.
no code implementations • 29 Sep 2021 • Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Yue Wang, Yang Yuan, Hang Zhao
We name this problem of multi-modal training, \emph{Modality Laziness}.
no code implementations • ICLR 2022 • Jiaye Teng, Jianhao Ma, Yang Yuan
Generalization is one of the fundamental issues in machine learning.
1 code implementation • 8 Mar 2021 • Jiaye Teng, Zeren Tan, Yang Yuan
It is challenging to deal with censored data, where we only have access to incomplete information about the survival time rather than its exact value.
no code implementations • 5 Mar 2021 • Jiaye Teng, Weiran Huang, Haowei He
Pretext-based self-supervised learning learns a semantic representation via a handcrafted pretext task over unlabeled data and then uses the learned representation for downstream tasks. Under the Conditional Independence (CI) condition, this effectively reduces the sample complexity of the downstream tasks.
no code implementations • 4 Jun 2020 • Jiaye Teng, Yang Yuan
First, we apply a machine learning method to fit the ground truth function on the training set and calculate its linear approximation.
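The first step described above can be sketched concretely: fit a model to the training data, then take its first-order Taylor expansion at a point of interest. This is a hedged illustration only; the polynomial fit, the toy sine-plus-noise data, and the expansion point `x0` are placeholders standing in for whatever learned model and inputs the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: noisy samples of an unknown ground-truth function.
x_train = np.linspace(-1, 1, 50)
y_train = np.sin(2 * x_train) + rng.normal(0, 0.05, 50)

# Step 1: fit a model to the training set (cubic least squares here).
f_hat = np.poly1d(np.polyfit(x_train, y_train, 3))
f_hat_grad = f_hat.deriv()  # exact derivative of the fitted polynomial

def linearize(x0):
    """First-order Taylor expansion of the fitted model around x0:
    g(x) = f_hat(x0) + f_hat'(x0) * (x - x0)."""
    a, b = f_hat(x0), f_hat_grad(x0)
    return lambda x: a + b * (x - x0)

# Near x0 the linear approximation tracks the fitted model closely.
g = linearize(0.3)
err = abs(g(0.31) - f_hat(0.31))
```

For a neural network, the derivative would come from automatic differentiation rather than a closed form, but the linearization step is the same.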
no code implementations • 25 Sep 2019 • Jiaye Teng, Guang-He Lee, Yang Yuan
Robustness is an important property to guarantee the security of machine learning models.