no code implementations • LREC 2022 • Christopher Song, David Harwath, Tuka Alhanai, James Glass
We present Speak, a toolkit that allows researchers to crowdsource speech audio recordings using Amazon Mechanical Turk (MTurk).
no code implementations • 21 Sep 2023 • Reza Khanmohammadi, Tuka Alhanai, Mohammad M. Ghassemi
Three different experiments are conducted in this study to test the applicability of imitating Tsallis entropy for performance enhancement: Bitcoin price prediction, speech emotion recognition, and chronic neck pain detection.
1 code implementation • 26 Jun 2023 • Shangyang Min, Hassan B. Ebadian, Tuka Alhanai, Mohammad Mahdi Ghassemi
Feature-Imitating-Networks (FINs) are neural networks that are first trained to approximate closed-form statistical features (e.g., entropy), and then embedded into other networks to enhance their performance.
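The first stage of the idea can be sketched in a few lines: a small network is fit to reproduce a closed-form feature (here Shannon entropy, the example feature named above) before being embedded in a larger model. This NumPy sketch is illustrative only; the architecture, training data, and hyperparameters are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Closed-form feature the network will imitate: Shannon entropy of a
# discrete probability vector.
def shannon_entropy(p):
    return -np.sum(p * np.log(p + 1e-12), axis=-1, keepdims=True)

# Training data: random probability vectors and their exact entropies.
X = rng.dirichlet(np.ones(8), size=4096)
y = shannon_entropy(X)

# A tiny one-hidden-layer MLP trained by full-batch gradient descent
# (stand-in for the feature-imitating sub-network).
W1 = rng.normal(0.0, 0.5, (8, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

losses = []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # gradient of 0.5 * MSE w.r.t. pred
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate through the two layers.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # imitation error before vs. after training
```

Once the imitation error is low, the trained weights would serve as an informed initialization inside a larger task network, which is the embedding step the abstract describes.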
no code implementations • 7 May 2023 • Hazem Ibrahim, Fengyuan Liu, Rohail Asim, Balaraju Battu, Sidahmed Benabderrahmane, Bashar Alhafni, Wifag Adnan, Tuka Alhanai, Bedoor AlShebli, Riyadh Baghdadi, Jocelyn J. Bélanger, Elena Beretta, Kemal Celik, Moumena Chaqfeh, Mohammed F. Daqaq, Zaynab El Bernoussi, Daryl Fougnie, Borja Garcia de Soto, Alberto Gandolfi, Andras Gyorgy, Nizar Habash, J. Andrew Harris, Aaron Kaufman, Lefteris Kirousis, Korhan Kocak, Kangsan Lee, Seungah S. Lee, Samreen Malik, Michail Maniatakos, David Melcher, Azzam Mourad, Minsu Park, Mahmoud Rasras, Alicja Reuben, Dania Zantout, Nancy W. Gleason, Kinga Makovi, Talal Rahwan, Yasir Zaki
Moreover, current AI-text classifiers cannot reliably detect ChatGPT's use in school work, due to their propensity to classify human-written answers as AI-generated, as well as the ease with which AI-generated text can be edited to evade detection.
no code implementations • 31 Oct 2022 • Reza Khanmohammadi, Sari Saba-Sadiya, Sina Esfandiarpour, Tuka Alhanai, Mohammad M. Ghassemi
In this paper, we present MambaNet, a hybrid neural network for predicting the outcomes of basketball games.
no code implementations • 10 May 2022 • Allen R. Williams, Yoolim Jin, Anthony Duer, Tuka Alhanai, Mohammad Ghassemi
This data is continuously collected and processed nightly into metadata consisting of mileage and time summaries of each discrete trip taken, along with a set of behavioral scores describing attributes of the trip (e.g., driver fatigue or driver distraction). We examine whether this metadata can be used to identify periods of increased risk by classifying trips that occur immediately before a trip in which an incident led to a claim for that driver.
no code implementations • 10 Oct 2021 • Reza Khanmohammadi, Mitra Sadat Mirshafiee, Mohammad Mahdi Ghassemi, Tuka Alhanai
In this work, we apply common PCG signal processing techniques on the gender-tagged Shiraz University Fetal Heart Sounds Database and study the applicability of previously proposed features in classifying fetal gender using both Machine Learning and Deep Learning models.
no code implementations • 10 Oct 2021 • Sari Saba-Sadiya, Tuka Alhanai, Mohammad M Ghassemi
In this paper, we introduce a novel approach to neural learning: the Feature-Imitating-Network (FIN).
1 code implementation • Findings (EMNLP) 2021 • Hooman Sedghamiz, Shivam Raval, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi
This paper introduces SupCL-Seq, which extends the supervised contrastive learning from computer vision to the optimization of sequence representations in NLP.
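The objective at the core of this approach can be sketched as a supervised contrastive loss, where an anchor's positives are all other samples sharing its label. This NumPy implementation is an illustrative sketch of that general objective, not the authors' released code; the temperature value and toy embeddings are assumptions.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.
    z: (N, d) representations; labels: (N,) integer class labels."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # temperature-scaled similarities
    N = len(z)
    off_diag = ~np.eye(N, dtype=bool)
    sim = np.where(off_diag, sim, -np.inf)             # exclude self-pairs
    sim -= sim.max(axis=1, keepdims=True)              # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & off_diag
    # Mean log-probability of each anchor's positives, negated and averaged.
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return float(-per_anchor.mean())

# Toy check: embeddings clustered by class should score a lower loss
# than the same embeddings paired with mismatched labels.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
good = supcon_loss(z, np.array([0, 0, 1, 1]))
bad = supcon_loss(z, np.array([0, 1, 0, 1]))
```

In the sequence setting, `z` would be sentence-level encoder outputs (e.g., pooled Transformer states) rather than these toy vectors.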
1 code implementation • Findings (EMNLP) 2021 • Shivam Raval, Hooman Sedghamiz, Enrico Santus, Tuka Alhanai, Mohammad Ghassemi, Emmanuele Chersoni
Adverse Events (AEs) are harmful events resulting from the use of medical products.
1 code implementation • 21 Sep 2020 • Sari Saba-Sadiya, Tuka Alhanai, Taosheng Liu, Mohammad M. Ghassemi
Our approach achieved at least a ~15% improvement over contemporary approaches when tested on subjects and tasks not used during model training.
1 code implementation • 20 Oct 2017 • Tuka Alhanai, Rhoda Au, James Glass
In this study, we developed an automated system that evaluates speech and language features from audio recordings of neuropsychological examinations of 92 subjects in the Framingham Heart Study.