Search Results for author: Bubacarr Bah

Found 9 papers, 1 paper with code

Improved identification accuracy in equation learning via comprehensive $\boldsymbol{R^2}$-elimination and Bayesian model selection

no code implementations • 22 Nov 2023 • Daniel Nickelsen, Bubacarr Bah

With two flavors of our approach and the adoption of $p(\boldsymbol y|\mathcal M)$ for bi-directional stepwise regression, we present a total of three new avenues for equation learning.

Model Selection • regression
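
The evidence computation from the paper is not reproduced in this snippet, but the bi-directional stepwise idea is easy to illustrate. Below is a minimal sketch, assuming a toy term library and using BIC as a crude stand-in for the log-evidence $\log p(\boldsymbol y|\mathcal M)$; the data, library, and scoring are all illustrative, not the paper's method.

```python
import numpy as np

def log_evidence_proxy(X, y):
    """BIC-style score as a crude stand-in for log p(y|M); higher is better."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * (n * np.log(rss / n) + k * np.log(n))

def bidirectional_stepwise(library, y):
    """Greedily toggle candidate terms in and out, keeping the best model."""
    active, best, improved = set(), -np.inf, True
    while improved:
        improved = False
        for j in range(library.shape[1]):
            trial = active ^ {j}  # add term j if absent, drop it if present
            if not trial:
                continue
            score = log_evidence_proxy(library[:, sorted(trial)], y)
            if score > best:
                best, active, improved = score, trial, True
    return sorted(active), best

# toy problem: y = 2x + 0.5x^3 + noise, with a library of four candidate terms
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = 2 * x + 0.5 * x**3 + 0.1 * rng.normal(size=200)
library = np.column_stack([x, x**2, x**3, np.sin(x)])
print(bidirectional_stepwise(library, y))  # expected to select x and x^3
```

The actual approach ranks models by the Bayesian evidence itself and pairs the search with a comprehensive $R^2$-based elimination of candidate terms; the toggle loop above only mimics the bi-directional search pattern.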

Efficient and Robust Mixed-Integer Optimization Methods for Training Binarized Deep Neural Networks

1 code implementation • 21 Oct 2021 • Jannis Kurtz, Bubacarr Bah

Compared to classical deep neural networks, their binarized versions can be useful for applications on resource-limited devices due to their reduced memory consumption and computational demands.
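
The mixed-integer training formulation is not shown in this snippet, but the memory claim is easy to make concrete. A minimal sketch, assuming weights binarized to {-1, +1} and packed one bit each with NumPy (the layer size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)  # dense float32 layer
W_bin = np.sign(W)                                      # binarized to {-1, +1}
packed = np.packbits(W_bin > 0)                         # one bit per weight

print(f"float32 weights: {W.nbytes} bytes")       # 512*512*4 = 1,048,576
print(f"packed 1-bit:    {packed.nbytes} bytes")  # 512*512/8 = 32,768 (32x smaller)
```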

Towards the Localisation of Lesions in Diabetic Retinopathy

no code implementations • 21 Dec 2020 • Samuel Ofosu Mensah, Bubacarr Bah, Willie Brink

Convolutional neural networks (CNNs) have recently been used with success to classify diabetic retinopathy (DR) fundus images.

An Integer Programming Approach to Deep Neural Networks with Binary Activation Functions

no code implementations • 7 Jul 2020 • Bubacarr Bah, Jannis Kurtz

We study deep neural networks with binary activation functions (BDNN), i.e., the activation function has only two states.
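
To make the two-state activation concrete, here is a minimal forward-pass sketch with a Heaviside-type activation; the architecture and weights are arbitrary, and this illustrates only the activation, not the paper's integer-programming training procedure.

```python
import numpy as np

def binary_activation(z):
    """Two-state activation: each pre-activation maps to 0 or 1."""
    return (z >= 0).astype(float)

def bdnn_forward(x, weights, biases):
    """Forward pass through a feedforward net with binary activations."""
    a = x
    for W, b in zip(weights, biases):
        a = binary_activation(a @ W + b)
    return a

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 1))]
biases = [rng.standard_normal(8), rng.standard_normal(1)]
print(bdnn_forward(rng.standard_normal((3, 4)), weights, biases))
```

Because the activation is piecewise constant, its gradient is zero almost everywhere, which is what makes gradient-based training awkward and an integer-programming formulation attractive.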

On Error Correction Neural Networks for Economic Forecasting

no code implementations • 11 Apr 2020 • Mhlasakululeka Mvubu, Emmanuel Kabuga, Christian Plitz, Bubacarr Bah, Ronnie Becker, Hans Georg Zimmermann

Recurrent neural networks (RNNs) are well suited to learning non-linear dependencies in dynamical systems from observed time series data.

Time Series • Time Series Analysis
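
The snippet does not show the error-correction mechanism itself. In Zimmermann-style error correction networks, the previous forecast error is fed back into the state update; the sketch below follows that general idea with untrained random weights and made-up dimensions, so it should be read as a structural illustration rather than the paper's model.

```python
import numpy as np

def ecnn_step(s, u, y_obs, A, B, C, D):
    """One error-correction step: the previous forecast error (C s - y_obs)
    is fed back into the state transition alongside the new input u."""
    err = np.tanh(C @ s - y_obs)           # mismatch between forecast and data
    s_next = np.tanh(A @ s + B @ u + D @ err)
    return s_next, C @ s_next              # new state and new forecast

rng = np.random.default_rng(0)
n_state, n_in, n_out = 6, 2, 1
A = 0.3 * rng.standard_normal((n_state, n_state))
B = rng.standard_normal((n_state, n_in))
C = rng.standard_normal((n_out, n_state))
D = rng.standard_normal((n_state, n_out))

s = np.zeros(n_state)
for t in range(5):                         # roll forward over a toy series
    u, y_obs = rng.standard_normal(n_in), rng.standard_normal(n_out)
    s, y_hat = ecnn_step(s, u, y_obs, A, B, C, D)
    print(t, y_hat)
```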

Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers

no code implementations • 12 Oct 2019 • Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg

We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data.
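
As a loose illustration of the object being analyzed, the sketch below runs a forward-Euler discretization of the (Euclidean) gradient flow on the factors of a two-layer linear network; the step size, dimensions, and data are made up, and the paper's Riemannian variant is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))
W_true = rng.standard_normal((3, 5))
Y = W_true @ X                         # noiseless targets, so the global min is exact

# two-layer deep linear net: W2 @ W1 should approximate W_true
W1 = 0.1 * rng.standard_normal((4, 5))
W2 = 0.1 * rng.standard_normal((3, 4))

eta = 1e-3  # Euler step size for the flow dW_j/dt = -dL/dW_j
for step in range(5000):
    R = W2 @ W1 @ X - Y                # residual of L = 0.5 ||W2 W1 X - Y||_F^2
    g1 = W2.T @ R @ X.T                # dL/dW1
    g2 = R @ X.T @ W1.T                # dL/dW2
    W1, W2 = W1 - eta * g1, W2 - eta * g2

print(np.linalg.norm(W2 @ W1 - W_true))  # small if the flow reached a global minimizer
```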

Convex block-sparse linear regression with expanders -- provably

no code implementations • 21 Mar 2016 • Anastasios Kyrillidis, Bubacarr Bah, Rouzbeh Hasheminezhad, Quoc Tran-Dinh, Luca Baldassarre, Volkan Cevher

Our experimental findings on synthetic and real applications support our claims of faster recovery in the convex setting, as opposed to using dense sensing matrices, while showing competitive recovery performance.

regression
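
Here "expanders" refers to sparse binary sensing matrices, i.e., adjacency matrices of expander graphs. Below is a minimal sketch of the measurement model, assuming the usual random construction with a fixed number of ones per column; the convex recovery program itself is not included.

```python
import numpy as np

def sparse_binary_matrix(m, n, d, rng):
    """m x n binary matrix with exactly d ones per column -- the standard
    random construction for expander-style sparse sensing matrices."""
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0
    return A

rng = np.random.default_rng(0)
m, n, d = 40, 120, 8
A = sparse_binary_matrix(m, n, d, rng)

# block-sparse signal: only 2 of 12 blocks (each of length 10) are nonzero
x = np.zeros(n)
for b in rng.choice(12, size=2, replace=False):
    x[10 * b : 10 * (b + 1)] = rng.standard_normal(10)

y = A @ x                  # compressed measurements
print(A.sum(axis=0)[:5])   # each column has exactly d ones
```

The speed advantage comes from A having only d ones per column, so applying A (and its transpose, inside any iterative solver) costs far less than with a dense Gaussian matrix.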

Energy-aware adaptive bi-Lipschitz embeddings

no code implementations • 12 Jul 2013 • Bubacarr Bah, Ali Sadeghian, Volkan Cevher

We propose a dimensionality reducing matrix design based on training data with constraints on its Frobenius norm and number of rows.

Compressive Sensing
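
The training-based design itself is not in this snippet, but the bi-Lipschitz property it targets is easy to state in code: bounds L <= ||Phi(x_i - x_j)|| / ||x_i - x_j|| <= U over all training pairs. A sketch with a random Phi standing in for the learned design, whose row count and Frobenius norm are the budgeted quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 128))                 # training points in R^128
m = 24                                             # budgeted number of rows
Phi = rng.standard_normal((m, 128)) / np.sqrt(m)   # stand-in for the learned design

print("Frobenius norm:", np.linalg.norm(Phi, "fro"))

# empirical bi-Lipschitz constants of Phi on the training data
ratios = []
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        diff = X[i] - X[j]
        ratios.append(np.linalg.norm(Phi @ diff) / np.linalg.norm(diff))
print("L =", min(ratios), " U =", max(ratios))
```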
