Search Results for author: Amirmasoud Ghiassi

Found 7 papers, 0 papers with code

Robust Learning via Golden Symmetric Loss of (un)Trusted Labels

no code implementations • 1 Jan 2021 • Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

In this paper, we propose to construct a golden symmetric loss (GSL) based on the estimated confusion matrix, so as to avoid overfitting to noisy labels and to learn effectively from hard classes.
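No code is released for this paper, but the general idea of correcting a loss with an estimated confusion matrix can be illustrated. The sketch below is a generic forward loss correction, not the authors' GSL; the matrix `T_hat` and the function name are assumptions for illustration.

```python
import numpy as np

# Hypothetical estimated noise transition matrix:
# T_hat[i, j] = P(observed noisy label = j | true label = i).
# A generic forward correction, NOT the paper's GSL implementation.

def corrected_cross_entropy(probs, noisy_label, T_hat):
    """Forward-corrected cross-entropy: push the model's clean-class
    predictions through the estimated noise channel, then score them
    against the observed (possibly noisy) label."""
    noisy_probs = probs @ T_hat  # predicted distribution over noisy labels
    return -np.log(noisy_probs[noisy_label] + 1e-12)

# Example with 3 classes and mild symmetric noise assumed in T_hat.
T_hat = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
probs = np.array([0.7, 0.2, 0.1])  # model's softmax output over clean classes
loss = corrected_cross_entropy(probs, noisy_label=0, T_hat=T_hat)
```

With an accurate transition matrix, minimizing this corrected loss is consistent with fitting the clean label distribution rather than the noise.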

End-to-End Learning from Noisy Crowd to Supervised Machine Learning Models

no code implementations • 13 Nov 2020 • Taraneh Younesian, Chi Hong, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

Furthermore, relabeling only 10% of the data via the expert yields over 90% classification accuracy with an SVM.

BIG-bench Machine Learning

TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise

no code implementations • 13 Jul 2020 • Amirmasoud Ghiassi, Taraneh Younesian, Robert Birke, Lydia Y. Chen

Based on these insights, we design TrustNet, which first adversarially learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data.
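No code is released for TrustNet, but the core ingredient the excerpt describes, learning the noise pattern from a small trusted set, can be sketched in its simplest empirical form. The function below is a hypothetical helper, not the authors' adversarial procedure: it just counts how trusted labels were corrupted into noisy ones.

```python
import numpy as np

# Hypothetical sketch: estimate a label-noise confusion matrix by comparing
# trusted (clean) labels with the noisy labels observed for the same
# examples. This is NOT TrustNet's adversarial learning procedure.

def estimate_noise_matrix(clean_labels, noisy_labels, num_classes):
    """Row i is the empirical distribution of noisy labels observed
    for examples whose trusted label is i."""
    counts = np.zeros((num_classes, num_classes))
    for c, n in zip(clean_labels, noisy_labels):
        counts[c, n] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(row_sums, 1)  # guard empty rows

# Asymmetric noise example: class 0 is flipped to class 1 half the time.
clean = [0, 0, 0, 0, 1, 1, 2, 2]
noisy = [0, 0, 1, 1, 1, 1, 2, 2]
T = estimate_noise_matrix(clean, noisy, num_classes=3)
```

Off-diagonal mass in a single column indicates asymmetric noise, while evenly spread off-diagonal mass indicates symmetric noise, which is the distinction the excerpt refers to.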

ExpertNet: Adversarial Learning and Recovery Against Noisy Labels

no code implementations • 10 Jul 2020 • Amirmasoud Ghiassi, Robert Birke, Rui Han, Lydia Y. Chen

Today's available datasets in the wild, e.g., from social media and open platforms, present tremendous opportunities and challenges for deep learning, as there is a significant portion of tagged images, but often with noisy, i.e., erroneous, labels.

Robust classification

QActor: On-line Active Learning for Noisy Labeled Stream Data

no code implementations • 28 Jan 2020 • Taraneh Younesian, Zilong Zhao, Amirmasoud Ghiassi, Robert Birke, Lydia Y. Chen

A central feature of QActor is to dynamically adjust the query limit according to the learning loss for each data batch.
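QActor's code is not released, but the stated idea of scaling the per-batch oracle query budget with the learning loss can be sketched. The scaling rule, parameter names, and bounds below are assumptions for illustration, not QActor's actual policy.

```python
# Hypothetical sketch: grow the per-batch label-query budget when the
# batch loss is high (model struggling) and shrink it when loss is low.
# NOT QActor's actual adjustment rule.

def query_limit(batch_loss, base_limit=10, max_limit=50, loss_scale=1.0):
    """Return the number of oracle queries allowed for this batch,
    scaled linearly with the observed batch loss and capped."""
    limit = int(base_limit * (1.0 + batch_loss / loss_scale))
    return min(limit, max_limit)
```

Under this rule a converged model (near-zero loss) stays close to the base budget, while a poorly fitting model is granted more expert queries, which matches the behavior the excerpt describes.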

Active Learning

Online Label Aggregation: A Variational Bayesian Approach

no code implementations • 19 Jul 2018 • Chi Hong, Amirmasoud Ghiassi, Yichi Zhou, Robert Birke, Lydia Y. Chen

Our evaluation results on various online scenarios show that BiLA can effectively infer the true labels, with an error rate reduction of at least 10 and 1.5 percentage points for synthetic and real-world datasets, respectively.

Bayesian Inference
Stochastic Optimization
