Search Results for author: Tianyi Luo

Found 10 papers, 1 paper with code

To Aggregate or Not? Learning with Separate Noisy Labels

no code implementations · 14 Jun 2022 · Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu

Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).

Learning with noisy labels
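The aggregate-or-not question can be made concrete with a toy sketch (a hypothetical illustration, not the paper's method): either collapse each example's annotator labels by majority vote, or keep every (example, label) pair separate for the learner to handle.

```python
from collections import Counter

def aggregate_majority(label_sets):
    """Collapse each example's annotator labels into one label by majority vote."""
    return [Counter(labels).most_common(1)[0][0] for labels in label_sets]

def keep_separate(features, label_sets):
    """No aggregation: one training pair per (example, annotator label)."""
    return [(x, y) for x, labels in zip(features, label_sets) for y in labels]
```

With three annotators voting [1, 1, 0], majority voting yields the single label 1, while the separate view keeps all three noisy pairs.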

Compressed Predictive Information Coding

no code implementations · 3 Mar 2022 · Rui Meng, Tianyi Luo, Kristofer Bouchard

The key insight of our framework is to learn representations by minimizing the compression complexity and maximizing the predictive information in latent space.

Mutual Information Estimation
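Read as an information-bottleneck-style trade-off, the objective might be sketched as follows (the notation here is an assumption, not the paper's):

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z_{\mathrm{past}}; Z_{\mathrm{future}})
```

The first term penalizes the compression complexity of the encoding $Z$, the second rewards predictive information between past and future latent states, and $\beta$ trades the two off.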

The Rich Get Richer: Disparate Impact of Semi-Supervised Learning

1 code implementation · ICLR 2022 · Zhaowei Zhu, Tianyi Luo, Yang Liu

Semi-supervised learning (SSL) has demonstrated its potential to improve model accuracy for a variety of learning tasks when high-quality supervised data is severely limited.

Fairness · Pseudo Label +2
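Confidence-thresholded pseudo-labeling, the SSL ingredient whose disparate impact this paper studies, can be sketched as follows (a minimal toy version; the threshold value and names are assumptions):

```python
def pseudo_label(model_probs, threshold=0.9):
    """Keep unlabeled examples whose top predicted probability clears the threshold.

    model_probs: per-example lists of class probabilities.
    Returns (example index, predicted class) pairs that receive pseudo-labels.
    """
    kept = []
    for i, probs in enumerate(model_probs):
        p = max(probs)
        if p >= threshold:
            kept.append((i, probs.index(p)))
    return kept
```

Subpopulations the model is already confident on clear the threshold more often, which is one mechanism behind a "rich get richer" effect.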

Research Replication Prediction Using Weakly Supervised Learning

no code implementations · Findings of the Association for Computational Linguistics 2020 · Tianyi Luo, Xingyu Li, Hainan Wang, Yang Liu

In this paper, we propose two weakly supervised learning approaches that use automatically extracted text from research papers to improve research replication prediction accuracy, using both labeled and unlabeled datasets.

BIG-bench Machine Learning · Weakly-supervised Learning

Machine Truth Serum

no code implementations · 28 Sep 2019 · Tianyi Luo, Yang Liu

In this paper, we extend the idea proposed in Bayesian Truth Serum that "a surprisingly more popular answer is more likely the true answer" to classification problems.

BIG-bench Machine Learning · General Classification
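The "surprisingly popular" rule from Bayesian Truth Serum can be illustrated with a small sketch (a hypothetical helper, not the paper's algorithm): pick the answer whose actual vote share most exceeds the share respondents predicted it would receive.

```python
from collections import Counter

def surprisingly_popular(votes, predicted_shares):
    """votes: each respondent's answer.
    predicted_shares: each respondent's dict mapping answer -> predicted vote share.
    Returns the answer with the largest (actual - predicted) share gap.
    """
    n = len(votes)
    actual = {a: c / n for a, c in Counter(votes).items()}
    avg_pred = {
        a: sum(p.get(a, 0.0) for p in predicted_shares) / len(predicted_shares)
        for a in actual
    }
    return max(actual, key=lambda a: actual[a] - avg_pred[a])
```

If 60% vote "yes" but everyone predicted an 80% "yes" share, then "no" is the surprisingly popular answer and is selected.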

Can Machine Generate Traditional Chinese Poetry? A Feigenbaum Test

no code implementations · 19 Jun 2016 · Qixin Wang, Tianyi Luo, Dong Wang

Recent progress in neural learning has demonstrated that machines can do well in regularized tasks, e.g., the game of Go.

Game of Go

Stochastic Top-k ListNet

no code implementations · EMNLP 2015 · Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan

ListNet is a well-known listwise learning-to-rank model that has gained much attention in recent years.

Learning-To-Rank
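ListNet's top-1 variant, which the Top-k version generalizes, can be sketched as a cross-entropy between the softmax distributions induced by true relevance labels and predicted scores (a minimal illustration under that assumption, not the paper's implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def listnet_top1_loss(scores, relevance):
    """Cross-entropy between the top-1 probability distributions induced by
    true relevance labels and the model's predicted scores."""
    p_true = softmax(relevance)
    p_pred = softmax(scores)
    return -sum(t * math.log(p) for t, p in zip(p_true, p_pred))
```

The loss is smallest when predicted scores induce the same distribution as the relevance labels; Top-k ListNet replaces the top-1 distribution with a top-k one, which the paper approximates stochastically.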

Learning from LDA using Deep Neural Networks

no code implementations · 5 Aug 2015 · Dongxu Zhang, Tianyi Luo, Dong Wang, Rong Liu

Latent Dirichlet Allocation (LDA) is a three-level hierarchical Bayesian model for topic inference.

Document Classification · General Classification +1
