no code implementations • 22 Nov 2023 • Daniel Nickelsen, Bubacarr Bah
With two flavors of our approach and the adoption of $p(\boldsymbol y|\mathcal M)$ for bi-directional stepwise regression, we present a total of three new avenues for equation learning.
no code implementations • 7 Apr 2023 • Hamid El Bahja, Jan Christian Hauffen, Peter Jung, Bubacarr Bah, Issa Karambal
Deep learning has been highly successful in some applications.
1 code implementation • 21 Oct 2021 • Jannis Kurtz, Bubacarr Bah
Compared to classical deep neural networks, their binarized versions can be useful for applications on resource-limited devices due to their reduced memory consumption and computational demands.
no code implementations • 21 Dec 2020 • Samuel Ofosu Mensah, Bubacarr Bah, Willie Brink
Convolutional Neural Networks (CNNs) have successfully been used to classify diabetic retinopathy (DR) fundus images in recent years.
no code implementations • 7 Jul 2020 • Bubacarr Bah, Jannis Kurtz
We study deep neural networks with binary activation functions (BDNN), i.e. the activation function has only two states.
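As a minimal illustration of the idea above (not the paper's training method), a binary activation maps every pre-activation to one of two states; the sketch below, with hypothetical weights, uses the states {-1, +1}:

```python
import numpy as np

def binary_activation(z):
    """Binary (two-state) activation: +1 where the pre-activation
    is non-negative, -1 otherwise."""
    return np.where(z >= 0, 1.0, -1.0)

# Illustrative single hidden layer with binary activations
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # hypothetical weight matrix
x = rng.standard_normal(3)       # hypothetical input
h = binary_activation(W @ x)     # hidden units take values in {-1, +1}
```

The non-differentiability of such activations is what motivates treating the training problem differently from standard gradient-based learning.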
no code implementations • 11 Apr 2020 • Mhlasakululeka Mvubu, Emmanuel Kabuga, Christian Plitz, Bubacarr Bah, Ronnie Becker, Hans Georg Zimmermann
Recurrent neural networks (RNNs) are well suited to learning non-linear dependencies in dynamical systems from observed time-series data.
no code implementations • 12 Oct 2019 • Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg
We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data.
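A discretized version of such a gradient flow can be sketched as follows; this is only an illustrative two-layer example with a hypothetical diagonal target, not the paper's analysis, minimizing the squared Frobenius error of the end-to-end product:

```python
import numpy as np

# Two-layer linear network W2 @ W1 fitted to a target matrix M by
# (discretized) gradient flow on the loss 0.5 * ||W2 @ W1 - M||_F^2.
rng = np.random.default_rng(1)
M = np.diag([1.0, 2.0, 3.0])           # hypothetical target
W1 = 0.1 * rng.standard_normal((3, 3))  # small random initialization
W2 = 0.1 * rng.standard_normal((3, 3))
lr = 0.05                               # step size of the discretization
for _ in range(2000):
    R = W2 @ W1 - M       # residual of the end-to-end linear map
    gW1 = W2.T @ R        # gradient of the loss w.r.t. W1
    gW2 = R @ W1.T        # gradient of the loss w.r.t. W2
    W1 -= lr * gW1
    W2 -= lr * gW2
loss = np.linalg.norm(W2 @ W1 - M) ** 2
```

With a small, generic initialization the product W2 @ W1 converges to the target here, which is the kind of convergence behaviour the paper studies rigorously for the continuous-time flow.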
no code implementations • 21 Mar 2016 • Anastasios Kyrillidis, Bubacarr Bah, Rouzbeh Hasheminezhad, Quoc Tran-Dinh, Luca Baldassarre, Volkan Cevher
Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting compared to using dense sensing matrices, while showing competitive recovery performance.
no code implementations • 12 Jul 2013 • Bubacarr Bah, Ali Sadeghian, Volkan Cevher
We propose a dimensionality reducing matrix design based on training data with constraints on its Frobenius norm and number of rows.
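One of the constraints mentioned above, a Frobenius-norm budget, can be enforced by projection onto the Frobenius ball; the sketch below uses a random matrix as a stand-in, not a matrix learned from training data as proposed in the paper:

```python
import numpy as np

def project_frobenius(A, budget):
    """Project A onto the set {X : ||X||_F <= budget} by rescaling."""
    norm = np.linalg.norm(A, 'fro')
    return A if norm <= budget else A * (budget / norm)

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 20))         # 5 rows: reduces dimension 20 -> 5
A = project_frobenius(A, budget=3.0)     # enforce the Frobenius constraint
```

The number of rows of A directly controls the reduced dimension, which is the second constraint the design works with.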