no code implementations • 14 Jun 2022 • Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, Yang Liu
Raw training data often comes with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing).
no code implementations • Findings (ACL) 2022 • Tianyi Luo, Rui Meng, Xin Eric Wang, Yang Liu
Research Replication Prediction (RRP) is the task of predicting whether a published research result can be replicated.
no code implementations • 3 Mar 2022 • Rui Meng, Tianyi Luo, Kristofer Bouchard
The key insight of our framework is to learn representations by minimizing the compression complexity and maximizing the predictive information in latent space.
1 code implementation • ICLR 2022 • Zhaowei Zhu, Tianyi Luo, Yang Liu
Semi-supervised learning (SSL) has demonstrated its potential to improve model accuracy on a variety of learning tasks when high-quality labeled data is severely limited.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Tianyi Luo, Xingyu Li, Hainan Wang, Yang Liu
In this paper, we propose two weakly supervised learning approaches that use automatically extracted text information of research papers to improve the prediction accuracy of research replication using both labeled and unlabeled datasets.
no code implementations • 28 Sep 2019 • Tianyi Luo, Yang Liu
In this paper, we extend the idea proposed in Bayesian Truth Serum that "a surprisingly more popular answer is more likely the true answer" to classification problems.
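The underlying "surprisingly popular" rule from Bayesian Truth Serum can be illustrated with a small sketch (the vote fractions below are hypothetical; this shows the elicitation rule itself, not the paper's extension to classification): the winning answer is the one whose actual popularity most exceeds the popularity respondents predicted for it.

```python
import numpy as np

# Hypothetical two-choice question.
# actual:    fraction of respondents choosing each answer
# predicted: mean popularity respondents predicted for each answer
actual = np.array([0.6, 0.4])
predicted = np.array([0.8, 0.25])

# The surprisingly popular answer maximizes (actual - predicted):
# answer 1 is chosen by a minority yet beats its predicted popularity,
# so it is selected over the majority answer 0.
surprisingly_popular = int(np.argmax(actual - predicted))
```

Note that the rule can pick a minority answer, which is exactly what makes it more robust than majority voting when most respondents share a common misconception.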
no code implementations • 19 Jun 2016 • Qixin Wang, Tianyi Luo, Dong Wang
Recent progress in neural learning has demonstrated that machines can do well in regularized tasks, e.g., the game of Go.
no code implementations • 21 Apr 2016 • Qixin Wang, Tianyi Luo, Dong Wang, Chao Xing
Learning and generating Chinese poems is a charming yet challenging task.
no code implementations • EMNLP 2015 • Tianyi Luo, Dong Wang, Rong Liu, Yiqiao Pan
ListNet is a well-known listwise learning to rank model and has gained much attention in recent years.
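For reference, ListNet's standard top-one listwise loss is the cross-entropy between the probability distributions that the true relevance labels and the model scores each induce over the list via a softmax (a minimal NumPy sketch; the toy relevance values are illustrative):

```python
import numpy as np

def listnet_top1_loss(scores, relevance):
    """ListNet top-one loss: cross-entropy between the softmax
    distribution of true relevance labels and that of model scores."""
    p_true = np.exp(relevance) / np.exp(relevance).sum()
    p_pred = np.exp(scores) / np.exp(scores).sum()
    return -(p_true * np.log(p_pred)).sum()

relevance = np.array([3.0, 1.0, 0.0])   # toy graded relevance for one query
good = listnet_top1_loss(np.array([3.0, 1.0, 0.0]), relevance)
bad = listnet_top1_loss(np.array([0.0, 1.0, 3.0]), relevance)  # reversed ranking
```

Scores that reproduce the true ordering give a strictly lower loss than a reversed ranking, which is what makes the loss suitable for gradient-based listwise training.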
no code implementations • 5 Aug 2015 • Dongxu Zhang, Tianyi Luo, Dong Wang, Rong Liu
Latent Dirichlet Allocation (LDA) is a three-level hierarchical Bayesian model for topic inference.
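A standard LDA topic inference can be sketched in a few lines with scikit-learn (the toy corpus and topic count are illustrative, not from the paper):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny toy corpus: two loose themes (pets vs. finance).
docs = [
    "cats and dogs are pets",
    "dogs chase cats",
    "stocks and bonds are assets",
    "bonds yield steady returns",
]

X = CountVectorizer().fit_transform(docs)        # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                     # per-document topic proportions
# Each row of theta is a distribution over the 2 topics (rows sum to 1).
```

This corresponds to LDA's three-level structure: corpus-level Dirichlet priors, per-document topic proportions (`theta`), and per-word topic assignments marginalized out during inference.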