no code implementations • 7 Jun 2018 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba
We introduced the idea of most informative sampling, in which the sampler attempts to select samples that trouble the classifier of a discriminative tracker.
no code implementations • CVPR 2018 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii
To remove this redundancy and achieve effective ensemble learning, it is critical for the committee to include consistent hypotheses that differ from one another, covering the version space with minimum overlap.
no code implementations • 28 Apr 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii
However, by updating the entire ensemble with a shared set of samples and their final labels, such diversity is lost or reduced to the diversity provided by the underlying features or the internal classifiers' dynamics.
no code implementations • 2 Apr 2017 • Kourosh Meshgi, Shigeyuki Oba, Shin Ishii
To cope with variations of the target shape and appearance, the classifier is updated online with different samples of the target and the background.
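The online update described above can be sketched minimally as a linear classifier refreshed with target (positive) and background (negative) samples. This is an illustrative perceptron-style stand-in, not the classifier from the paper; the class and parameter names are hypothetical.

```python
class OnlineTargetClassifier:
    """Hypothetical sketch of online updating in tracking-by-detection:
    a linear classifier is refreshed with new target (+1) and background
    (-1) samples so it can follow changes in target shape and appearance.
    A perceptron-style rule stands in for whatever update the tracker uses."""

    def __init__(self, dim, lr=0.1):
        self.w = [0.0] * dim  # weight vector over sample features
        self.b = 0.0          # bias term
        self.lr = lr          # learning rate for each online step

    def score(self, x):
        # Linear decision value: positive means "target", negative "background".
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, label):
        # Online step: only adjust when the new sample is misclassified.
        if label * self.score(x) <= 0:
            self.w = [wi + self.lr * label * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * label

# Usage: feed target and background samples as the video progresses.
clf = OnlineTargetClassifier(dim=2)
for _ in range(10):
    clf.update([1.0, 0.0], +1)   # sample of the target
    clf.update([-1.0, 0.0], -1)  # sample of the background
```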
no code implementations • 31 Mar 2017 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba, Shin Ishii
We also introduce a budgeting mechanism which prevents the unbounded growth in the number of examples in the first detector to maintain its rapid response.
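A budgeting mechanism of this kind can be sketched as a fixed-capacity example buffer: once the budget is reached, adding a new example evicts an old one, so lookup and update costs stay bounded. The oldest-first eviction policy below is one simple choice for illustration; the paper's actual removal criterion may differ, and the class name is hypothetical.

```python
from collections import deque

class BudgetedExampleBuffer:
    """Hypothetical sketch of a budgeting mechanism: caps the number of
    stored examples so the detector keeps a rapid response. deque(maxlen=...)
    silently drops the oldest entry when the budget is exceeded."""

    def __init__(self, budget):
        self.budget = budget
        self.examples = deque(maxlen=budget)

    def add(self, example):
        # If the buffer is full, the oldest example is evicted automatically.
        self.examples.append(example)

    def __len__(self):
        return len(self.examples)

# Usage: the buffer never grows beyond its budget.
buf = BudgetedExampleBuffer(budget=100)
for i in range(250):
    buf.add(i)
print(len(buf))  # → 100
```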
no code implementations • WS 2016 • Maryam Sadat Mirzaei, Kourosh Meshgi, Tatsuya Kawahara
To improve the choice of words in this system, and to explore a better method for detecting speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those language learners make when transcribing the videos.
no code implementations • 14 Feb 2019 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba
Employing one or more additional classifiers to break the self-learning loop in tracking-by-detection has gained considerable attention.
no code implementations • 23 May 2020 • Maryam Sadat Mirzaei, Kourosh Meshgi, Etienne Frigo, Toyoaki Nishida
We proposed a spatiotemporally-conditioned GAN that generates a sequence similar to a given one in terms of semantics and spatiotemporal dynamics.
no code implementations • 30 Sep 2020 • Kourosh Meshgi, Maryam Sadat Mirzaei
Neural networks for multi-domain learning enable an effective combination of information from different domains by sharing and co-learning their parameters.
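The shared-parameter idea can be sketched as a network with one trunk co-learned across all domains and a small private head per domain. This is a minimal illustrative structure in plain Python, not the architecture from the paper; all names (`MultiDomainNet`, the domain labels) are hypothetical.

```python
import random

def linear(x, w, b):
    """Single linear unit: dot(w, x) + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

class MultiDomainNet:
    """Hypothetical sketch of multi-domain parameter sharing: one shared
    linear layer is co-learned by every domain, while each domain keeps a
    tiny private head on top of the shared feature."""

    def __init__(self, in_dim, domains, seed=0):
        rng = random.Random(seed)
        # Shared parameters: updated by samples from all domains.
        self.shared_w = [rng.uniform(-1, 1) for _ in range(in_dim)]
        self.shared_b = 0.0
        # Domain-specific parameters: one (weight, bias) head per domain.
        self.heads = {d: (rng.uniform(-1, 1), 0.0) for d in domains}

    def forward(self, x, domain):
        # Shared ReLU feature, identical regardless of which domain asks.
        h = max(0.0, linear(x, self.shared_w, self.shared_b))
        w, b = self.heads[domain]
        return w * h + b  # domain-specific prediction

# Usage: the same input passes through the same shared trunk for every domain.
net = MultiDomainNet(in_dim=3, domains=["news", "reviews"])
y_news = net.forward([0.5, -0.2, 0.1], "news")
y_reviews = net.forward([0.5, -0.2, 0.1], "reviews")
```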
no code implementations • 2 Oct 2020 • Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba
Different layers in CNNs provide not only different levels of abstraction for describing the objects in the input but also encode various implicit information about them.
no code implementations • 29 Sep 2021 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
Simultaneous training of a multi-task learning network on different domains or tasks is not always straightforward.
no code implementations • RepL4NLP (ACL) 2022 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
Simultaneous training of a multi-task learning network on different domains or tasks is not always straightforward.
no code implementations • WASSA (ACL) 2022 • Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine
By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most effective approaches for learning from different tasks and domains in parallel.