no code implementations • 13 Mar 2024 • Yongkang Guo, Yuqing Kong
The decision maker does not know the specific information structure, which is a joint distribution of signals, states, and strategies of adversarial experts.
no code implementations • 31 Jan 2024 • Yongkang Guo, Jason D. Hartline, Zhihuan Huang, Yuqing Kong, Anant Shah, Fang-Yi Yu
Given a family of information structures, robust forecast aggregation aims to find the aggregator with minimal worst-case regret compared to the omniscient aggregator.
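To make the worst-case-regret objective above concrete, here is a minimal sketch under simplifying assumptions (binary state, two experts with binary signals, squared loss, a small finite family of discrete structures); it only illustrates the quantity being minimized, not the paper's construction.

```python
# Toy information structure: a joint distribution over (state, signal_1, signal_2).
# Binary state/signals, squared loss, and the example family are illustrative assumptions.
def expert_report(structure, expert, signal):
    """Posterior P(state = 1 | expert's own signal) -- the expert's forecast."""
    num = sum(p for (w, s1, s2), p in structure.items()
              if w == 1 and (s1, s2)[expert] == signal)
    den = sum(p for (w, s1, s2), p in structure.items()
              if (s1, s2)[expert] == signal)
    return num / den

def omniscient_forecast(structure, s1, s2):
    """Posterior P(state = 1 | both signals) -- the omniscient aggregator."""
    num = structure.get((1, s1, s2), 0.0)
    den = num + structure.get((0, s1, s2), 0.0)
    return num / den

def regret(aggregator, structure):
    """Expected squared-loss gap between the aggregator and the omniscient benchmark."""
    total = 0.0
    for (w, s1, s2), p in structure.items():
        forecast = aggregator(expert_report(structure, 0, s1),
                              expert_report(structure, 1, s2))
        benchmark = omniscient_forecast(structure, s1, s2)
        total += p * ((forecast - w) ** 2 - (benchmark - w) ** 2)
    return total

def worst_case_regret(aggregator, family):
    return max(regret(aggregator, structure) for structure in family)

# Example: evaluate simple averaging against a small family of structures.
family = [
    {(1, 1, 1): 0.3, (1, 0, 0): 0.2, (0, 1, 0): 0.2, (0, 0, 1): 0.3},
    {(1, 1, 1): 0.4, (0, 0, 0): 0.4, (1, 0, 1): 0.1, (0, 1, 0): 0.1},
]
average = lambda a, b: (a + b) / 2
print(worst_case_regret(average, family))
```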
no code implementations • 23 Nov 2023 • Yuqi Pan, Zhaohua Chen, Yuqing Kong
When the aggregator is deterministic, we present a robust aggregator that leverages second-order information and can significantly outperform counterparts without it.
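The excerpt does not spell out how the second-order reports are used. As a generic illustration of why second-order information helps aggregation, the sketch below implements the classic "surprisingly popular" rule, in which each respondent reports an answer plus a prediction of how others will answer; this is a stand-in example, not the paper's aggregator.

```python
from collections import defaultdict

def surprisingly_popular(votes, predicted_shares):
    """Second-order aggregation rule: pick the answer whose actual popularity
    exceeds its predicted popularity by the largest margin.

    votes            -- list of chosen answers, one per respondent
    predicted_shares -- list of dicts, each respondent's predicted vote shares
    """
    n = len(votes)
    actual = defaultdict(float)
    for v in votes:
        actual[v] += 1 / n
    predicted = defaultdict(float)
    for shares in predicted_shares:
        for answer, share in shares.items():
            predicted[answer] += share / n
    # The answer that is more popular than the crowd expected.
    return max(actual, key=lambda a: actual[a] - predicted[a])

# Example: a majority votes "yes", but "no" is surprisingly popular.
votes = ["yes", "yes", "yes", "no", "no"]
predicted = [{"yes": 0.9, "no": 0.1}] * 3 + [{"yes": 0.7, "no": 0.3}] * 2
print(surprisingly_popular(votes, predicted))  # -> "no"
```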
no code implementations • 10 Feb 2023 • Yongkang Guo, Yuan Yuan, Jinshan Zhang, Yuqing Kong, Zhihua Zhu, Zheng Cai
A/B testing, or controlled experimentation, is the gold-standard approach for causally comparing the performance of algorithms on online platforms.
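For reference, the textbook A/B primitive is a two-sample comparison of a per-user metric; the sketch below runs a difference-in-means with a Welch t-test on made-up data and is only this baseline, not the paper's method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-user metric (e.g. conversion) under two algorithm variants.
control   = rng.binomial(1, 0.10, size=5000)
treatment = rng.binomial(1, 0.11, size=5000)

lift = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)  # Welch t-test

print(f"estimated lift: {lift:.4f}, p-value: {p_value:.3f}")
```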
no code implementations • 3 Oct 2021 • Yuqing Kong
In the setting where a large number of people answer a small number of multiple-choice questions (multi-task, large group), we propose an information aggregation method that is robust to people's strategies.
1 code implementation • 2 Jun 2021 • Paul Resnick, Yuqing Kong, Grant Schoenebeck, Tim Weninger
We refer to such tasks as survey settings because the ground truth is defined through a survey of one or more human raters.
no code implementations • 3 Mar 2021 • Yongkang Guo, Zhihuan Huang, Yuqing Kong, Qian Wang
At a high level, we show that the significance of community structure is equivalent to the amount of information contained in the network.
Community Detection • Social and Information Networks
no code implementations • 24 Feb 2021 • Jiale Chen, Yuqing Kong, Yuxuan Lu
With this assumption, we propose a new definition of uninformative feedback and correspondingly design a family of evaluation metrics for group-level feedback, called f-variety, which can 1) distinguish informative from uninformative feedback (separation) even when the statistics of both are uniform, and 2) decrease as the fraction of uninformative respondents increases (monotonicity).
Computer Science and Game Theory
no code implementations • ECCV 2020 • Xinwei Sun, Yilun Xu, Peng Cao, Yuqing Kong, Lingjing Hu, Shanghang Zhang, Yizhou Wang
In this paper, we propose a novel information-theoretic approach, namely Total Correlation Gain Maximization (TCGM), for semi-supervised multi-modal learning, which is endowed with promising properties: (i) it can effectively utilize the information across different modalities of unlabeled data points to facilitate training classifiers for each modality, and (ii) it has a theoretical guarantee of identifying Bayesian classifiers, i.e., the ground-truth posteriors of all modalities.
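For reference, the total-correlation quantity that the name TCGM alludes to measures how much a set of variables share: the sum of marginal entropies minus the joint entropy. The sketch below computes it for a toy discrete joint table; it is not the TCGM training objective itself.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def total_correlation(joint):
    """TC(X_1,...,X_n) = sum_i H(X_i) - H(X_1,...,X_n) for a discrete joint table.
    For two variables this reduces to their mutual information."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    marginal_entropies = sum(
        entropy(joint.sum(axis=tuple(j for j in range(joint.ndim) if j != i)))
        for i in range(joint.ndim)
    )
    return marginal_entropies - entropy(joint.ravel())

# Toy joint distribution over two binary "modalities" that agree most of the time.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(total_correlation(joint))  # positive: the modalities share information
```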
no code implementations • NeurIPS 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang
To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied straightforwardly to any existing classification neural network without any auxiliary information.
Ranked #36 on Image Classification on Clothing1M (using extra training data)
2 code implementations • 8 Sep 2019 • Yilun Xu, Peng Cao, Yuqing Kong, Yizhou Wang
To the best of our knowledge, L_DMI is the first loss function that is provably robust to instance-independent label noise, regardless of noise pattern, and it can be applied straightforwardly to any existing classification neural network without any auxiliary information.
Ranked #36 on Image Classification on Clothing1M
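Based on the published definition of determinant-based mutual information, a minimal sketch of this loss is the negative log absolute determinant of the empirical joint matrix between the classifier's softmax outputs and the observed (possibly noisy) labels in a batch; treat the exact normalization, epsilon, and batching details below as assumptions.

```python
import torch
import torch.nn.functional as F

def dmi_loss(logits, noisy_labels, num_classes, eps=1e-6):
    """DMI-style loss: negative log of the absolute determinant of the empirical
    joint matrix between the model's softmax outputs and the observed labels."""
    probs = F.softmax(logits, dim=1)                       # (N, C) predicted posteriors
    onehot = F.one_hot(noisy_labels, num_classes).float()  # (N, C) observed labels
    joint = probs.t() @ onehot / logits.shape[0]           # (C, C) empirical joint matrix
    return -torch.log(torch.abs(torch.det(joint)) + eps)

# Usage inside a standard training loop (model and batch are placeholders):
#   loss = dmi_loss(model(images), labels, num_classes=10)
#   loss.backward()
```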
1 code implementation • ICLR 2019 • Peng Cao, Yilun Xu, Yuqing Kong, Yizhou Wang
Furthermore, we devise an accurate data-crowds forecaster that employs both the data and the crowdsourced labels to forecast the ground truth.
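The excerpt does not give the forecaster's form. As a generic stand-in for combining the two sources, the sketch below merges a data-based classifier posterior with a crowd-label posterior under a naive conditional-independence assumption; this is an illustrative baseline, not the paper's forecaster.

```python
import numpy as np

def combine_forecasts(data_posterior, crowd_posterior, prior):
    """Naive-Bayes style combination of two posteriors over the same classes,
    assuming the data and the crowd labels are independent given the truth."""
    joint = data_posterior * crowd_posterior / prior
    return joint / joint.sum()

prior = np.array([0.5, 0.5])
data_posterior  = np.array([0.7, 0.3])   # from a classifier trained on the raw data
crowd_posterior = np.array([0.6, 0.4])   # from aggregating the crowdsourced labels
print(combine_forecasts(data_posterior, crowd_posterior, prior))
```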
no code implementations • 24 Feb 2018 • Yuqing Kong, Grant Schoenebeck
In co-training/multiview learning, the goal is to aggregate two views of data into a prediction for a latent label.