Search Results for author: Pawan Sinha

Found 5 papers, 1 paper with code

Information Transfer Rate in BCIs: Towards Tightly Integrated Symbiosis

no code implementations • 1 Jan 2023 • Suayb S. Arslan, Pawan Sinha

To calculate ITR, it is customary to assume a uniform input distribution and an oversimplified channel model that is memoryless, stationary, and symmetric, over a discrete alphabet.

Binary Classification • SSVEP
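The customary calculation the abstract refers to is the standard Wolpaw ITR formula, which follows directly from those assumptions (equiprobable targets, a memoryless symmetric channel). A minimal sketch in Python; the 4-target SSVEP speller numbers are illustrative values, not results from the paper:

import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Bits per minute under the stated assumptions: uniform input
    distribution and a memoryless, stationary, symmetric discrete channel."""
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance level, report zero rate by convention
    bits = math.log2(n)  # log2(N) for N equiprobable targets
    if p < 1.0:  # guard the 0 * log(0) terms at perfect accuracy
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# Illustrative values: a 4-target SSVEP speller, 90% accuracy, 3 s per selection
print(round(wolpaw_itr(4, 0.90, 3.0), 2))  # -> 27.45 bits/min

The guard at chance level reflects the usual convention of clipping the rate to zero, since the raw formula turns positive again for below-chance accuracies.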

Neural Correlates of Face Familiarity Perception

no code implementations • 31 Jul 2022 • Evan Ehrenberg, Kleovoulos Leo Tsourides, Hossein Nejati, Ngai-Man Cheung, Pawan Sinha

In the domain of face recognition, there exists a puzzling timing discrepancy between results from macaque neurophysiology on the one hand and human electrophysiology on the other.

EEG • Face Recognition

Three approaches to facilitate DNN generalization to objects in out-of-distribution orientations and illuminations

1 code implementation • 30 Oct 2021 • Akira Sakai, Taro Sunagawa, Spandan Madan, Kanata Suzuki, Takashi Katoh, Hiromichi Kobashi, Hanspeter Pfister, Pawan Sinha, Xavier Boix, Tomotake Sasaki

While humans have a remarkable capability to recognize objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in such cases, even when large amounts of training examples are available.

Emergent Neural Network Mechanisms for Generalization to Objects in Novel Orientations

no code implementations • 28 Sep 2021 • Avi Cooper, Xavier Boix, Daniel Harari, Spandan Madan, Hanspeter Pfister, Tomotake Sasaki, Pawan Sinha

The capability of Deep Neural Networks (DNNs) to recognize objects in orientations outside the distribution of the training data is not well understood.

Robustness to Transformations Across Categories: Is Robustness To Transformations Driven by Invariant Neural Representations?

no code implementations • 30 Jun 2020 • Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad, Sharon Gilad-Gutnick, Shlomit Ben-Ami, Pawan Sinha

Our results with state-of-the-art DCNNs indicate that invariant neural representations do not always drive robustness to transformations, as networks show robustness for categories seen transformed during training even in the absence of invariant neural representations.
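As a rough illustration of what "invariant neural representations" means operationally, below is a minimal sketch of one common invariance measure: cosine similarity between a network's features for original and transformed inputs. This is an assumed measure for illustration, not necessarily the one used in the paper, and feature_extractor, image_batch, and rotate90 are hypothetical placeholders:

import torch
import torch.nn.functional as F

def invariance_score(feature_extractor: torch.nn.Module,
                     batch: torch.Tensor,
                     transform) -> float:
    """Mean cosine similarity between features of original and transformed
    images: 1.0 indicates a fully invariant representation; values near 0
    indicate the transformation moves the representation substantially."""
    feature_extractor.eval()
    with torch.no_grad():
        f_orig = feature_extractor(batch)              # (batch, feature_dim)
        f_trans = feature_extractor(transform(batch))
    return F.cosine_similarity(f_orig, f_trans, dim=1).mean().item()

# Hypothetical usage with a 90-degree rotation as the transformation:
# rotate90 = lambda x: torch.rot90(x, k=1, dims=(2, 3))
# score = invariance_score(feature_extractor, image_batch, rotate90)

Under this reading, the paper's finding is that a network can classify transformed inputs correctly (be robust) even when a score like this stays low (representations are not invariant).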
