Search Results for author: Kourosh Meshgi

Found 14 papers, 0 papers with code

Information-Maximizing Sampling to Promote Tracking-by-Detection

no code implementations • 7 Jun 2018 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba

We introduce the idea of most-informative sampling, in which the sampler attempts to select the samples that most trouble the classifier of a discriminative tracker.

General Classification
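
A minimal sketch of this most-informative-sampling idea, assuming a candidate sampler and a scoring classifier already exist; the selection criterion below is plain decision-boundary uncertainty, not necessarily the paper's exact measure.

```python
import numpy as np

def most_informative_samples(candidates, classifier_score, k=16):
    """Select the k candidates the classifier is least certain about.

    candidates       : candidate target boxes, shape (N, 4)
    classifier_score : callable mapping a box to a score in [0, 1]
    """
    scores = np.array([classifier_score(c) for c in candidates])
    # Uncertainty peaks where the score is closest to 0.5, i.e. near
    # the classifier's decision boundary -- the samples that "trouble" it.
    uncertainty = -np.abs(scores - 0.5)
    top = np.argsort(uncertainty)[-k:]
    return candidates[top], scores[top]
```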

Efficient Diverse Ensemble for Discriminative Co-Tracking

no code implementations • CVPR 2018 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii

To remove this redundancy and achieve effective ensemble learning, it is critical for the committee to include consistent hypotheses that differ from one another, covering the version space with minimal overlap.

Ensemble Learning
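
A simple way to quantify such overlap is pairwise disagreement between committee members; the sketch below is a generic diversity check under that assumption, not the paper's formulation.

```python
import numpy as np

def pairwise_disagreement(votes):
    """Fraction of samples on which each pair of members disagrees.

    votes : binary label matrix, shape (n_members, n_samples)
    """
    n = votes.shape[0]
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.mean(votes[i] != votes[j])
    return d

# High average off-diagonal disagreement means the committee covers the
# version space with little overlap; near-zero rows flag redundant members.
```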

Active Collaborative Ensemble Tracking

no code implementations • 28 Apr 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii

However, by updating all ensemble members with a shared set of samples and their final labels, this diversity is lost or reduced to the diversity provided by the underlying features or the internal dynamics of the classifiers.

General Classification
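
One common remedy, sketched below assuming online learners with a scikit-learn-style `partial_fit`, is to update each member on its own bootstrap of the new samples rather than one shared set.

```python
import numpy as np

def diverse_update(members, samples, labels, rng=None):
    """Refresh each ensemble member on a per-member bootstrap so the
    members stay decorrelated instead of collapsing onto shared labels."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(labels)
    for m in members:
        idx = rng.choice(n, size=n, replace=True)  # per-member resample
        m.partial_fit(samples[idx], labels[idx])   # assumes online learners
```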

Efficient Version-Space Reduction for Visual Tracking

no code implementations • 2 Apr 2017 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii

To cope with variations in the target's shape and appearance, the classifier is updated online with different samples of the target and the background.

Visual Tracking
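
A minimal sketch of such an online update with scikit-learn's `SGDClassifier`; the 128-dimensional features and per-frame loop are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="log_loss")
clf.partial_fit(np.zeros((1, 128)), [0], classes=[0, 1])  # initialize

def update_on_frame(target_feats, background_feats):
    """After each frame, refresh the classifier with target (label 1)
    and background (label 0) patches to follow appearance changes."""
    X = np.vstack([target_feats, background_feats])
    y = np.r_[np.ones(len(target_feats)), np.zeros(len(background_feats))]
    clf.partial_fit(X, y)
```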

Efficient Asymmetric Co-Tracking using Uncertainty Sampling

no code implementations • 31 Mar 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii

We also introduce a budgeting mechanism that prevents unbounded growth in the number of examples stored by the first detector, maintaining its rapid response.
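
Such a budget can be as simple as a fixed-capacity store with an eviction policy; the oldest-first eviction below is one plausible policy, not necessarily the paper's.

```python
from collections import deque

class ExampleBudget:
    """Cap the detector's example set so evaluation stays fast; when the
    budget is full, the oldest example is evicted (other policies, e.g.
    least-margin, would slot into the same place)."""

    def __init__(self, capacity=500):
        self.examples = deque(maxlen=capacity)  # auto-drops the oldest

    def add(self, feature, label):
        self.examples.append((feature, label))
```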

Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties

no code implementations • WS 2016 • Maryam Sadat Mirzaei, Kourosh Meshgi, Tatsuya Kawahara

To improve the choice of words in this system, and to explore a better method for detecting speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those made by language learners when transcribing the videos.

Automatic Speech Recognition (ASR) +2
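
A rough sketch of locating ASR errors by aligning the hypothesis against the reference transcript; the word-level `difflib` alignment stands in for a proper WER-style alignment.

```python
from difflib import SequenceMatcher

def asr_error_words(reference, hypothesis):
    """Return reference words the ASR mis-recognized, as a crude proxy
    for segments that may also challenge L2 listeners."""
    ref, hyp = reference.split(), hypothesis.split()
    errors = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, ref, hyp).get_opcodes():
        if op != "equal":            # substitution, deletion, or insertion
            errors.extend(ref[i1:i2])
    return errors

print(asr_error_words("the quick brown fox", "the quake brown box"))
# -> ['quick', 'fox']
```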

Long and Short Memory Balancing in Visual Co-Tracking using Q-Learning

no code implementations • 14 Feb 2019 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba

Employing one or more additional classifiers to break the self-learning loop in tracking-by-detection has gained considerable attention.

Q-Learning Self-Learning
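
A minimal tabular Q-learning sketch for this kind of memory balancing; the discretized state and the overlap-based reward are hypothetical stand-ins for the paper's setup.

```python
import numpy as np

n_states, n_actions = 10, 2   # action 0: short memory, action 1: long memory
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def choose(state):
    """Epsilon-greedy choice between the two memory horizons."""
    return rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())

def update(state, action, reward, next_state):
    """Standard Q-learning update; the reward could be, e.g., the overlap
    of the chosen classifier's prediction with the verified target box."""
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
```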

AnimGAN: A Spatiotemporally-Conditioned Generative Adversarial Network for Character Animation

no code implementations • 23 May 2020 • Maryam Sadat Mirzaei, Kourosh Meshgi, Etienne Frigo, Toyoaki Nishida

We propose a spatiotemporally-conditioned GAN that generates sequences similar to a given sequence in terms of semantics and spatiotemporal dynamics.

Generative Adversarial Network
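
A toy PyTorch skeleton of a generator conditioned on a reference sequence; the GRU encoder/decoder and all dimensions are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CondSeqGenerator(nn.Module):
    """Encode the reference pose sequence, then decode a new sequence
    from noise conditioned on that summary."""

    def __init__(self, pose_dim=63, z_dim=32, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(z_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, ref_seq, z):                  # z: (B, T, z_dim)
        _, h = self.encoder(ref_seq)                # summarize the reference
        cond = h[-1].unsqueeze(1).expand(-1, z.shape[1], -1)
        out, _ = self.decoder(torch.cat([z, cond], dim=-1))
        return self.head(out)                       # generated pose sequence

gen = CondSeqGenerator()
fake = gen(torch.randn(2, 30, 63), torch.randn(2, 30, 32))  # -> (2, 30, 63)
```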

Adversarial Semi-Supervised Multi-Domain Tracking

no code implementations • 30 Sep 2020 • Kourosh Meshgi, Maryam Sadat Mirzaei

Neural networks for multi-domain learning enable an effective combination of information from different domains by sharing and co-learning parameters.

Self-Supervised Learning Visual Tracking
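
The shared/co-learned parameter pattern can be sketched as a shared trunk with per-domain heads; this is a generic illustration, not this paper's network.

```python
import torch.nn as nn

class MultiDomainNet(nn.Module):
    """Parameters in `shared` are co-learned across all domains, while
    each head in `heads` specializes to one domain."""

    def __init__(self, domains, in_dim=512, feat_dim=256):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict(
            {d: nn.Linear(feat_dim, 2) for d in domains}  # fg/bg scores
        )

    def forward(self, x, domain):
        return self.heads[domain](self.shared(x))
```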

Leveraging Tacit Information Embedded in CNN Layers for Visual Tracking

no code implementations • 2 Oct 2020 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba

Different layers in CNNs provide not only different levels of abstraction for describing the objects in the input but also encode various kinds of implicit information about them.

Visual Tracking
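
A short PyTorch sketch of harvesting activations from several depths of a CNN with forward hooks; the ResNet-18 backbone and the chosen layers are illustrative.

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats = {}

def save(name):
    def hook(module, inputs, output):
        feats[name] = output.detach()   # stash this layer's activation
    return hook

# Shallow layers keep spatial detail; deep layers carry semantics.
for name in ("layer1", "layer2", "layer3", "layer4"):
    getattr(model, name).register_forward_hook(save(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

print({k: tuple(v.shape) for k, v in feats.items()})
# layer1 -> (1, 64, 56, 56), ..., layer4 -> (1, 512, 7, 7)
```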

Uncertainty Regularized Multi-Task Learning

no code implementations • WASSA (ACL) 2022 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine

By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most promising approaches for parallel learning from different tasks and domains.

Multi-Task Learning text-classification +1
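
One standard way to regularize multi-task training with uncertainty is homoscedastic task weighting in the style of Kendall et al. (2018); treating that as the paper's exact regularizer is an assumption.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine per-task losses as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learned per-task log-variance."""

    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        s = self.log_vars
        return (torch.exp(-s) * torch.stack(task_losses) + s).sum()

crit = UncertaintyWeightedLoss(n_tasks=2)
total = crit([torch.tensor(0.8), torch.tensor(1.3)])  # scalar training loss
```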
