no code implementations • 4 Mar 2024 • Zhenru Lin, Yiqun Yao, Yang Yuan

Large language models (LLMs) such as ChatGPT are increasingly proficient in understanding and generating a mixture of code and text.

2 code implementations • 12 Feb 2024 • Yifan Zhang, Yifan Luo, Yang Yuan, Andrew Chi-Chih Yao

Our method achieves a twofold increase in pretraining token efficiency compared to state-of-the-art baselines, underscoring the potential of our approach to enhance models' mathematical reasoning capabilities.

1 code implementation • 20 Nov 2023 • Yifan Zhang, Yang Yuan, Andrew Chi-Chih Yao

In this work, we present a comprehensive study of Meta Prompting (MP), an innovative technique reshaping the utilization of language models (LMs) and AI systems in problem-solving and data interaction.

no code implementations • 11 Oct 2023 • Ziyi Chen, Fankai Xie, Meng Wan, Yang Yuan, Miao Liu, Zongguo Wang, Sheng Meng, Yangang Wang

The prediction of chemical synthesis pathways plays a pivotal role in materials science research.

2 code implementations • 29 Sep 2023 • Zhiquan Tan, Jingqin Yang, Weiran Huang, Yang Yuan, Yifan Zhang

In this paper, we conduct a comprehensive analysis of two dual-branch (Siamese architecture) self-supervised learning approaches, namely Barlow Twins and spectral contrastive learning, through the lens of matrix mutual information.

1 code implementation • 8 Aug 2023 • Yifan Zhang, Jingqin Yang, Yang Yuan, Andrew Chi-Chih Yao

We demonstrate CR's superiority through several complex reasoning tasks: it outperforms existing methods in logical inference tasks with up to a 9.3% improvement, achieving 98.04% accuracy on the curated FOLIO wiki dataset.

Ranked #4 on Math Word Problem Solving on MATH

3 code implementations • 27 May 2023 • Yifan Zhang, Zhiquan Tan, Jingqin Yang, Weiran Huang, Yang Yuan

Inspired by this framework, we introduce Matrix-SSL, a novel approach that leverages matrix information theory to interpret the maximum entropy encoding loss as matrix uniformity loss.

Ranked #1 on Contrastive Learning on ImageNet-1K

1 code implementation • 17 May 2023 • Yifan Zhang, Jingqin Yang, Zhiquan Tan, Yang Yuan

Semi-supervised learning has achieved notable success by leveraging very few labeled data and exploiting the wealth of information derived from unlabeled data.

1 code implementation • 2 May 2023 • Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Tianyuan Yuan, Yue Wang, Yang Yuan, Hang Zhao

We abstract the features (i.e., learned representations) of multi-modal data into 1) uni-modal features, which can be learned from uni-modal training, and 2) paired features, which can only be learned from cross-modal interactions.

1 code implementation • 27 Mar 2023 • Zhiquan Tan, Yifan Zhang, Jingqin Yang, Yang Yuan

Contrastive learning is a powerful self-supervised learning method, but our theoretical understanding of how and why it works remains limited.

no code implementations • 8 Mar 2023 • Yang Yuan

Can machines think?

no code implementations • 1 Mar 2023 • Yang Yuan

Foundation models like ChatGPT have demonstrated remarkable performance on various tasks.

no code implementations • 29 Nov 2022 • Yang Yuan

The second says that fine-tuning does not have this limit: a foundation model with the minimum required power (up to symmetry) can, with fine-tuning and enough resources, theoretically solve downstream tasks for the category defined by the pretext task.

1 code implementation • 1 Oct 2022 • Jiaye Teng, Chuan Wen, Dinghuai Zhang, Yoshua Bengio, Yang Gao, Yang Yuan

Conformal prediction is a distribution-free technique for establishing valid prediction intervals.
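As a reference point for the technique named above, here is a minimal split-conformal sketch; the function name, the calibration-residual interface, and the `alpha` default are our own illustrative choices, not this paper's method:

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Split conformal prediction: turn held-out calibration residuals
    into a prediction interval with marginal coverage >= 1 - alpha,
    with no distributional assumptions beyond exchangeability."""
    n = len(residuals_cal)
    # Conformal quantile: the ceil((n+1)(1-alpha))-th smallest |residual|.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = np.sort(np.abs(residuals_cal))[min(k, n) - 1]
    return y_pred_test - q, y_pred_test + q
```

The finite-sample correction `(n+1)(1-alpha)` (rather than `n(1-alpha)`) is what makes the coverage guarantee exact rather than asymptotic.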

no code implementations • 6 Jun 2022 • Haowei He, Jiaye Teng, Yang Yuan

Deep neural networks are known to be vulnerable to unseen data: they may wrongly assign high confidence scores to out-of-distribution samples.

no code implementations • 29 Sep 2021 • Chenzhuang Du, Jiaye Teng, Tingle Li, Yichen Liu, Yue Wang, Yang Yuan, Hang Zhao

We name this problem of multi-modal training "Modality Laziness".

no code implementations • ICLR 2022 • Jiaye Teng, Jianhao Ma, Yang Yuan

Generalization is one of the fundamental issues in machine learning.

1 code implementation • 8 Mar 2021 • Jiaye Teng, Zeren Tan, Yang Yuan

It is challenging to deal with censored data, where we only have access to the incomplete information of survival time instead of its exact value.

no code implementations • 23 Nov 2020 • Hao Zhu, Yang Yuan, Guosheng Hu, Xiang Wu, Neil Robertson

IR-Softmax can generalise to any softmax and its variants (which are discriminative for the open-set problem) by directly setting the weights to their class centers, naturally solving the data imbalance problem.

1 code implementation • 24 Sep 2020 • Chenwei Wu, Chenzhuang Du, Yang Yuan

In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data.

no code implementations • 4 Jun 2020 • Jiaye Teng, Yang Yuan

First, we apply a machine learning method to fit the ground truth function on the training set and calculate its linear approximation.
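The step described above (fit a model, then take its linear approximation) can be sketched generically; the finite-difference Jacobian and all names below are our own illustration, not the paper's estimator:

```python
import numpy as np

def linear_approximation(f, x0, eps=1e-5):
    """First-order Taylor expansion of a fitted model f around x0:
    f(x) ~ f(x0) + grad(x0) . (x - x0), with the gradient estimated
    by central finite differences."""
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    grad = np.zeros_like(x0)
    for i in range(x0.size):
        e = np.zeros_like(x0)
        e[i] = eps
        grad[i] = (f(x0 + e) - f(x0 - e)) / (2 * eps)
    return lambda x: f0 + grad @ (np.asarray(x, dtype=float) - x0)
```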

no code implementations • 10 Feb 2020 • Yingdong Hu, Liang Zhang, Wei Shan, Xiaoxiao Qin, Jing Qi, Zhenzhou Wu, Yang Yuan

In the big data era, many organizations face the dilemma of data sharing.

no code implementations • NeurIPS 2019 • Piotr Indyk, Ali Vakilian, Yang Yuan

Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude.

no code implementations • 25 Sep 2019 • Xiyuan Zhang, Yang Yuan, Piotr Indyk

The edit distance between two sequences is an important metric with many applications.
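The metric mentioned above has a classic dynamic-programming formulation; for reference, a standard Levenshtein-distance sketch (the rolling-array layout is an implementation choice of ours, not taken from the paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a)*len(b)) dynamic program for edit distance,
    using a single rolling row of the DP table."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance between "" and b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the diagonal (old dp[j-1])
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,                      # deletion from a
                dp[j - 1] + 1,                  # insertion into a
                prev + (a[i - 1] != b[j - 1]),  # substitution or match
            )
    return dp[n]
```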

no code implementations • 25 Sep 2019 • Jiaye Teng, Guang-He Lee, Yang Yuan

Robustness is an important property to guarantee the security of machine learning models.

1 code implementation • NeurIPS 2019 • Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi S. Jaakkola

Specifically, an $\ell_2$ bounded adversary cannot alter the ensemble prediction generated by an additive isotropic Gaussian noise, where the radius for the adversary depends on both the variance of the distribution as well as the ensemble margin at the point of interest.
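The ensemble described above is the randomized-smoothing construction; a minimal Monte-Carlo sketch of the smoothed prediction follows (the `classifier` callback, sample count, and noise level are illustrative assumptions of ours; certifying the actual radius is omitted):

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Monte-Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c]: predict the class
    the base classifier votes for most often under Gaussian noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + np.shape(x))
    preds = np.array([classifier(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    return classes[np.argmax(counts)]
```

The certified $\ell_2$ radius then grows with both `sigma` and the margin between the top vote count and the runner-up, matching the dependence stated in the abstract.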

1 code implementation • NeurIPS 2019 • Haowei He, Gao Huang, Yang Yuan

Specifically, at a local minimum there exist many asymmetric directions such that the loss increases abruptly along one side, and slowly along the opposite side--we formally define such minima as asymmetric valleys.

no code implementations • NeurIPS 2018 • Yexiang Xue, Yang Yuan, Zhitian Xu, Ashish Sabharwal

Neural models operating over structured spaces such as knowledge graphs require a continuous embedding of the discrete elements of this space (such as entities) as well as the relationships between them.

no code implementations • ECCV 2018 • Guosheng Hu, Li Liu, Yang Yuan, Zehao Yu, Yang Hua, Zhihong Zhang, Fumin Shen, Ling Shao, Timothy Hospedales, Neil Robertson, Yongxin Yang

To advance subtle expression recognition, we contribute a Large-scale Subtle Emotions and Mental States in the Wild database (LSEMSW).

4 code implementations • ICLR 2018 • Qiantong Xu, Gao Huang, Yang Yuan, Chuan Guo, Yu Sun, Felix Wu, Kilian Weinberger

Evaluating generative adversarial networks (GANs) is inherently challenging.

no code implementations • ICML 2018 • Robert Kleinberg, Yuanzhi Li, Yang Yuan

Stochastic gradient descent (SGD) is widely used in machine learning.
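As a reminder of the update rule the sentence above refers to, a minimal SGD sketch (the `grad` callback, learning rate, and quadratic test objective are our own illustration, not the paper's setting):

```python
import numpy as np

def sgd(grad, x0, lr=0.1, n_steps=100, seed=0):
    """Plain SGD: x_{t+1} = x_t - lr * g_t, where g_t is a (possibly
    stochastic) estimate of the gradient at x_t."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x, rng)
    return x
```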

no code implementations • ICCV 2017 • Guosheng Hu, Yang Hua, Yang Yuan, Zhihong Zhang, Zheng Lu, Sankha S. Mukherjee, Timothy M. Hospedales, Neil M. Robertson, Yongxin Yang

To solve this problem, we establish a theoretical equivalence between tensor optimisation and a two-stream gated neural network.

1 code implementation • ICLR 2018 • Elad Hazan, Adam Klivans, Yang Yuan

In particular, we obtain the first quasi-polynomial time algorithm for learning noisy decision trees with polynomial sample complexity.

no code implementations • NeurIPS 2017 • Yuanzhi Li, Yang Yuan

We also show that the identity mapping is necessary for convergence, as it moves the initial point to a better place for optimization.

no code implementations • NeurIPS 2016 • Zeyuan Allen-Zhu, Yang Yuan, Karthik Sridharan

The amount of data available in the world is growing faster than our ability to deal with it.

no code implementations • 30 Dec 2015 • Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, Yang Yuan

Accelerated coordinate descent is widely used in optimization due to its cheap per-iteration cost and scalability to large-scale problems.
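For context on the cheap per-iteration cost mentioned above, here is a plain (non-accelerated) cyclic coordinate-descent sketch on a quadratic; the objective and the exact per-coordinate minimization are illustrative assumptions, not this paper's accelerated method:

```python
import numpy as np

def coordinate_descent(A, b, n_epochs=100):
    """Cyclic coordinate descent for min_x 0.5 x^T A x - b^T x with A
    symmetric positive definite: each step exactly minimizes over one
    coordinate, so updating x[i] touches only row A[i] (O(n) work)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_epochs):
        for i in range(n):
            # Setting the i-th partial derivative to zero:
            # A[i,i]*x[i] = b[i] - sum_{j != i} A[i,j]*x[j]
            x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```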

3 code implementations • 5 Jun 2015 • Zeyuan Allen-Zhu, Yang Yuan

Many classical algorithms are found, only years later, to outlive the confines in which they were conceived, and continue to be relevant in unforeseen settings.

1 code implementation • 6 Mar 2015 • Rong Ge, Furong Huang, Chi Jin, Yang Yuan

To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points.

no code implementations • 31 Jul 2014 • Wei Chen, Yajun Wang, Yang Yuan, Qinshi Wang

The objective of an online learning algorithm for CMAB is to minimize the $(\alpha,\beta)$-approximation regret, which is the difference between the $\alpha\beta$ fraction of the expected reward when always playing the optimal super arm, and the expected reward of playing super arms according to the algorithm.
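Written out, the regret defined above is (using $S^{*}$ for the optimal super arm and $S_t$ for the super arm played at round $t$; these symbol names are our own, and $r(\cdot)$ denotes expected reward):

```latex
\mathrm{Reg}_{\alpha,\beta}(T)
  \;=\;
  T \cdot \alpha\beta \cdot r(S^{*})
  \;-\;
  \mathbb{E}\!\left[\, \sum_{t=1}^{T} r(S_t) \,\right]
```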
