no code implementations • 22 Jun 2023 • Farshad Saberi-Movahed, Mohammad K. Ebrahimpour, Farid Saberi-Movahed, Monireh Moshavash, Dorsa Rahmatian, Mahvash Mohazzebi, Mahdi Shariatzadeh, Mahdi Eftekhari
Deep Metric Learning (DML) models rely on strong representations together with similarity-based measures defined through specific loss functions.
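As a hedged illustration of such a similarity-based objective (a generic sketch, not necessarily the loss used in this paper), the classic triplet loss pulls an anchor embedding toward a same-class positive and pushes it away from a different-class negative by at least a margin:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss on embeddings: encourage d(anchor, positive) + margin
    to be smaller than d(anchor, negative), using squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 1.0])
p = np.array([0.1, 0.9])   # same class: already close to the anchor
n = np.array([1.0, 0.0])   # different class: far from the anchor
print(triplet_loss(a, p, n))  # → 0.0 (margin already satisfied)
```

Swapping the positive and negative makes the constraint violated, so the loss becomes positive and drives the embedding update.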
1 code implementation • 31 Jan 2022 • Adolfo G. Ramirez-Aristizabal, Mohammad K. Ebrahimpour, Christopher T. Kello
Classifying EEG responses to naturalistic acoustic stimuli is of theoretical and practical importance, but standard approaches are limited by processing individual channels separately on very short sound segments (a few seconds or less).
no code implementations • 28 Dec 2021 • Mohammad K. Ebrahimpour, Gang Qian, Allison Beach
Proxy-based loss functions, on the other hand, often converge significantly faster during training, but they leave the rich relations among individual data points largely unexplored.
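To make the trade-off concrete, a minimal ProxyNCA-style sketch (an illustrative assumption, not the paper's exact formulation) assigns one learnable proxy vector per class; each sample is compared only against the proxies, giving O(#classes) comparisons instead of O(#samples) pairwise ones, which is the source of the speedup and of the lost data-to-data relations:

```python
import numpy as np

def proxy_nca_loss(embedding, proxies, label):
    """ProxyNCA-style loss: pull an embedding toward its class proxy and
    push it from all other proxies via softmax over negative squared
    distances. Pairwise relations between samples are never computed."""
    d = -np.sum((proxies - embedding) ** 2, axis=1)     # similarity to each proxy
    log_probs = d - np.log(np.sum(np.exp(d)))           # log-softmax over proxies
    return -log_probs[label]

proxies = np.array([[1.0, 0.0],   # proxy for class 0
                    [0.0, 1.0]])  # proxy for class 1
x = np.array([0.9, 0.1])          # embedding near the class-0 proxy
print(proxy_nca_loss(x, proxies, label=0))  # small loss for the correct class
```

Because the gradient flows only through the proxies, a single update moves the sample relative to every class at once, whereas pair- and triplet-based losses must sample informative pairs to achieve the same effect.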
no code implementations • 25 May 2020 • Mohammad K. Ebrahimpour, Timothy Shea, Andreea Danielescu, David C. Noelle, Christopher T. Kello
Machine learning approaches to auditory object recognition are traditionally based on engineered features such as those derived from the spectrum or cepstrum.
no code implementations • 15 May 2020 • Mohammad K. Ebrahimpour, Jiayun Li, Yen-Yun Yu, Jackson L. Reese, Azadeh Moghtaderi, Ming-Hsuan Yang, David C. Noelle
The coarse functional distinction between these streams is between object recognition -- the "what" of the signal -- and extracting location related information -- the "where" of the signal.
no code implementations • 15 May 2020 • Mohammad K. Ebrahimpour, J. Ben Falandays, Samuel Spevack, Ming-Hsuan Yang, David C. Noelle
Inspired by this structure, we have proposed an object detection framework involving the integration of a "What Network" and a "Where Network".
no code implementations • ICLR 2019 • Mohammad K. Ebrahimpour, David C. Noelle
We demonstrate that a simple linear mapping can be learned from sensitivity maps to bounding box coordinates, localizing the recognized object.
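The claim that a simple linear mapping suffices can be sketched with a least-squares fit on toy data (shapes and the synthetic data are assumptions for illustration; the paper fits real sensitivity maps to ground-truth boxes):

```python
import numpy as np

# Hypothetical setup: each sensitivity map is flattened to D features,
# and a bounding box is 4 numbers (x, y, w, h). A single least-squares
# solve learns the linear map W from maps to box coordinates.
rng = np.random.default_rng(0)
D, N = 64, 200
W_true = rng.normal(size=(D, 4))
maps = rng.normal(size=(N, D))    # stand-in flattened sensitivity maps
boxes = maps @ W_true             # synthetic target boxes (noiseless)

W, *_ = np.linalg.lstsq(maps, boxes, rcond=None)  # learned linear map
pred = maps @ W
residual = np.max(np.abs(pred - boxes))           # near zero on this toy data
```

With N > D and noiseless linear targets the fit recovers the mapping essentially exactly; on real sensitivity maps the same one-line solve yields the localization reported in the paper.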
no code implementations • 6 Mar 2019 • Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
Ideally, attention maps predicted by captioning models should be consistent with intrinsic attentions from visual models for any given visual concept.
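One plausible way to encourage that consistency (a sketch under our own assumptions, not necessarily the paper's objective) is to penalize the divergence between the two attention distributions over the same spatial grid:

```python
import numpy as np

def attention_consistency(caption_attn, visual_attn, eps=1e-8):
    """KL divergence between a captioning model's predicted attention map
    and a visual model's intrinsic attention map; zero when they agree."""
    p = caption_attn / caption_attn.sum()   # normalize to distributions
    q = visual_attn / visual_attn.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

a = np.array([[0.1, 0.4], [0.4, 0.1]])
b = np.array([[0.4, 0.1], [0.1, 0.4]])
print(attention_consistency(a, a))  # identical maps -> ~0 penalty
print(attention_consistency(a, b))  # disagreeing maps -> positive penalty
```

Adding such a term to the captioning loss pushes the caption-side attention toward the regions the visual model itself deems salient for the concept.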