no code implementations • 22 May 2024 • Hongyi Pan, Emadeldeen Hamdan, Xin Zhu, Koushik Biswas, Ahmet Enis Cetin, Ulas Bagci
However, training the attention weights of queries, keys, and values is non-trivial from a state of random initialization.
no code implementations • 8 Mar 2024 • Xin Zhu, Hongyi Pan, Yury Velichko, Adam B. Murphy, Ashley Ross, Baris Turkbey, Ahmet Enis Cetin, Ulas Bagci
Random samples drawn from latent space are then incorporated with a prototypical corrected image to generate multiple plausible images.
no code implementations • 8 Mar 2024 • Xin Zhu, Ahmet Enis Cetin
Furthermore, with the assistance of the block MHT layer, the proposed blind normalized SVGD algorithm achieves a higher preamble detection accuracy and throughput than other state-of-the-art detection methods.
no code implementations • 4 Oct 2023 • Xin Zhu, Daoguang Yang, Hongyi Pan, Hamid Reza Karimi, Didem Ozevin, Ahmet Enis Cetin
In comparison to the linear layer, the DCST layer reduces the number of trainable parameters and improves the accuracy of data reconstruction.
1 code implementation • 18 Sep 2023 • Hongyi Pan, Bin Wang, Zheyuan Zhang, Xin Zhu, Debesh Jha, Ahmet Enis Cetin, Concetto Spampinato, Ulas Bagci
However, it neglects background interference in the amplitude spectrum.
no code implementations • 15 Sep 2023 • Xin Zhu, Hongyi Pan, Shuaiang Rong, Ahmet Enis Cetin
The latent space data is transmitted to the receiver.
no code implementations • 15 Sep 2023 • Xin Zhu, Hongyi Pan, Salih Atici, Ahmet Enis Cetin
Traditional preamble detection algorithms have low accuracy in the grant-based random access scheme in massive machine-type communication (mMTC).
1 code implementation • 27 May 2023 • Hongyi Pan, Xin Zhu, Salih Atici, Ahmet Enis Cetin
In this paper, we propose a novel Hadamard Transform (HT)-based neural network layer for hybrid quantum-classical computing.
1 code implementation • 13 Mar 2023 • Hongyi Pan, Emadeldeen Hamdan, Xin Zhu, Salih Atici, Ahmet Enis Cetin
Trainable soft-thresholding layers, which remove noise in the transform domain, introduce nonlinearity into the transform-domain layers.
1 code implementation • 20 Dec 2022 • Salih Atici, Hongyi Pan, Ahmet Enis Cetin
We evaluate the efficiency of our training algorithm on benchmark datasets using ResNet-18, WResNet-20, ResNet-50, and a toy neural network.
no code implementations • 15 Nov 2022 • Salih Atici, Hongyi Pan, Mohammed H. Elnagar, Veerasathpurush Allareddy, Omar Suhaym, Rashid Ansari, Ahmet Enis Cetin
They also have a built-in set of novel directional filters that highlight the cervical vertebrae edges in X-ray images.
no code implementations • 15 Nov 2022 • Hongyi Pan, Xin Zhu, Salih Atici, Ahmet Enis Cetin
In this paper, we propose a novel Discrete Cosine Transform (DCT)-based neural network layer, which we call the DCT-perceptron, to replace the $3\times3$ Conv2D layers in the Residual Neural Network (ResNet).
1 code implementation • 15 Nov 2022 • Hongyi Pan, Xin Zhu, Zhilu Ye, Pai-Yen Chen, Ahmet Enis Cetin
To improve the estimation precision, we propose a neural network that uses a novel Discrete Cosine Transform (DCT) layer to denoise and decorrelate the data.
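The core mechanism behind a DCT layer is to transform the input, suppress small (noise-dominated) coefficients, and transform back. The sketch below is a minimal, non-trainable surrogate for the layer described in the abstract: the DCT-II/DCT-III pair and a fixed hard threshold `t` are assumptions for illustration, not the paper's trained layer.

```python
import math

def dct2(x):
    """Unnormalized DCT-II: X_k = sum_n x_n * cos(pi*(n+0.5)*k/N)."""
    n_pts = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def idct2(X):
    """Inverse of dct2 above (DCT-III with the matching 2/N scaling)."""
    n_pts = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * (n + 0.5) * k / n_pts)
                            for k in range(1, n_pts))) * 2 / n_pts
            for n in range(n_pts)]

def dct_denoise(x, t=0.5):
    """Zero out small DCT coefficients, then invert the transform --
    a plain denoising stand-in for the trainable DCT layer."""
    return idct2([c if abs(c) > t else 0.0 for c in dct2(x)])
```

Because the DCT concentrates the signal's energy in a few coefficients, thresholding in the transform domain removes noise while largely preserving structure; in the paper this thresholding is learned rather than fixed.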
no code implementations • 3 Oct 2022 • Hongyi Pan, Salih Atici, Ahmet Enis Cetin
In this paper, we introduce MultiPodNet, a convolutional network consisting of two or more convolutional networks that process the input image in parallel to achieve the same goal.
no code implementations • 7 Jan 2022 • Hongyi Pan, Diaa Badawi, Ahmet Enis Cetin
In both 1-D and 2-D layers, we compute the binary WHT of the input feature map and denoise the WHT domain coefficients using a nonlinearity which is obtained by combining soft-thresholding with the tanh function.
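The pipeline described above (binary WHT, then a soft-thresholding/tanh nonlinearity on the transform coefficients) can be sketched in a few lines. This is a hypothetical reading of the abstract: the exact way the paper combines soft-thresholding with tanh is an assumption here (threshold first, then squash), and the fixed threshold replaces the paper's trainable one.

```python
import math

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2.
    The WHT is its own inverse up to a factor of 1/N."""
    x = list(x)
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def smooth_threshold(c, t):
    """Assumed soft-threshold/tanh combination: shrink the coefficient
    toward zero by t, then squash the result with tanh."""
    soft = math.copysign(max(abs(c) - t, 0.0), c)
    return math.tanh(soft)

# Denoise a small feature vector in the WHT domain, then invert.
feat = [4.0, 2.0, 0.0, -2.0]
coeffs = fwht(feat)
denoised = [smooth_threshold(c, 1.0) for c in coeffs]
recon = [c / len(feat) for c in fwht(denoised)]
```

Since the WHT uses only additions and subtractions, the layer avoids multiplications entirely except inside the nonlinearity, which is the motivation for using it in place of convolutions.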
no code implementations • 7 Jan 2022 • Hongyi Pan, Diaa Badawi, Ishaan Bassi, Sule Ozev, Ahmet Enis Cetin
We propose a kernel-PCA based method to detect anomalies in chemical sensors.
no code implementations • 22 Oct 2021 • Hongyi Pan, Diaa Badawi, Runxuan Miao, Erdem Koyuncu, Ahmet Enis Cetin
In this paper, we introduce multiplication-avoiding power iteration (MAPI), which replaces the standard $\ell_2$-inner products of regular power iteration (RPI) with multiplication-free vector products, which are Mercer-type kernel operations related to the $\ell_1$ norm.
no code implementations • 14 Apr 2021 • Hongyi Pan, Diaa Badawi, Ahmet Enis Cetin
In this paper, we propose a novel layer based on fast Walsh-Hadamard transform (WHT) and smooth-thresholding to replace $1\times 1$ convolution layers in deep neural networks.
no code implementations • 30 Oct 2019 • Usama Muneeb, Erdem Koyuncu, Yasaman Keshtkarjahromi, Hulya Seferoglu, Mehmet Fatih Erden, Ahmet Enis Cetin
We propose a technique to increase robustness and reduce computational complexity in a Convolutional Neural Network (CNN) based anomaly detector that utilizes the optical flow information of video data.
no code implementations • 22 May 2019 • Lubna Shibly Mokatren, Rashid Ansari, Ahmet Enis Cetin, Alex D. Leow, Heide Klumpp, Olusola Ajilore, Fatos Yarman Vural
The performance of two classification models is investigated: model 1, which ignores the sensor layout, and model 2, which factors it in; model 2 is found to achieve consistently higher detection accuracy.
no code implementations • 7 Dec 2018 • Lubna Shibly Mokatren, Rashid Ansari, Ahmet Enis Cetin, Alex D. Leow, Olusola Ajilore, Heide Klumpp, Fatos T. Yarman Vural
The problem of detecting the presence of Social Anxiety Disorder (SAD) from Electroencephalography (EEG) signals has seen limited study; we address it with a new approach that seeks to exploit knowledge of the spatial configuration of the EEG sensors.