Search Results for author: Sajad Darabi

Found 15 papers, 9 papers with code

Heterogenous Ensemble of Models for Molecular Property Prediction

1 code implementation • 20 Nov 2022 • Sajad Darabi, Shayan Fazeli, Jiwei Liu, Alexandre Milesi, Pawel Morkisz, Jean-François Puget, Gilberto Titericz

Previous works have demonstrated the importance of considering different modalities of molecules, each of which provides a varied granularity of information for downstream property prediction tasks.

Molecular Property Prediction • Property Prediction +1

A Framework for Large Scale Synthetic Graph Dataset Generation

1 code implementation • 4 Oct 2022 • Sajad Darabi, Piotr Bigaj, Dawid Majchrowski, Artur Kasymov, Pawel Morkisz, Alex Fit-Florea

Recently there has been increasing interest in developing and deploying deep graph learning algorithms for many tasks, such as fraud detection and recommender systems.

Benchmarking • Drug Discovery +6

Contrastive Mixup: Self- and Semi-Supervised learning for Tabular Domain

1 code implementation • 27 Aug 2021 • Sajad Darabi, Shayan Fazeli, Ali Pazoki, Sriram Sankararaman, Majid Sarrafzadeh

Recent literature in self-supervised learning has demonstrated significant progress in closing the gap between supervised and unsupervised methods in the image and text domains.

Synthesising Multi-Modal Minority Samples for Tabular Data

no code implementations • 17 May 2021 • Sajad Darabi, Yotam Elor

Furthermore, the superior synthetic data yields better prediction quality in downstream binary classification tasks, as was demonstrated in extensive experiments with 27 publicly available real-world datasets.

Binary Classification

AE-SMOTE: A Multi-Modal Minority Oversampling Framework

no code implementations • 1 Jan 2021 • Sajad Darabi, Yotam Elor

Real-world binary classification tasks are in many cases unbalanced, i.e., the minority class is much smaller than the majority class.

Binary Classification
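
One plausible reading of the AE-SMOTE title above is SMOTE-style interpolation carried out in an autoencoder's latent space; the actual framework may differ. The toy encoder/decoder and the interpolation scheme below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: SMOTE-style minority oversampling in an autoencoder latent space
# (an assumption based on the title above, not the paper's exact procedure).
import torch
import torch.nn as nn

encoder = nn.Linear(10, 4)   # toy encoder/decoder standing in for a trained autoencoder
decoder = nn.Linear(4, 10)

minority = torch.randn(20, 10)            # minority-class samples
z = encoder(minority)                     # embed into the latent space

# SMOTE step: interpolate each latent point toward another minority sample
# (SMOTE proper picks a nearest neighbor; a random partner is used here for brevity).
idx = torch.randint(0, z.size(0), (z.size(0),))
alpha = torch.rand(z.size(0), 1)
z_new = z + alpha * (z[idx] - z)

synthetic = decoder(z_new)                # decode back to synthetic minority samples
```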

Group-Connected Multilayer Perceptron Networks

no code implementations • 20 Dec 2019 • Mohammad Kachuee, Sajad Darabi, Shayan Fazeli, Majid Sarrafzadeh

GMLP is based on the idea of learning expressive feature combinations (groups) and exploiting them to reduce the network complexity by defining local group-wise operations.

Representation Learning
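
The snippet above describes the core idea of GMLP: operate on groups of features locally instead of densely connecting all features. A minimal sketch of such a group-wise layer follows; the grouping, layer sizes, and activation are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a group-wise MLP layer: features are split into groups and
# each group gets its own small local transformation, which uses fewer
# parameters than one dense layer over all features.
import torch
import torch.nn as nn

class GroupWiseLayer(nn.Module):
    def __init__(self, num_groups: int, group_in: int, group_out: int):
        super().__init__()
        # One small linear map per feature group.
        self.blocks = nn.ModuleList(
            [nn.Linear(group_in, group_out) for _ in range(num_groups)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_groups * group_in); process each group independently.
        chunks = x.chunk(len(self.blocks), dim=1)
        outs = [torch.relu(block(c)) for block, c in zip(self.blocks, chunks)]
        return torch.cat(outs, dim=1)

# Usage: 4 groups of 8 features each, projected to 4 units per group.
layer = GroupWiseLayer(num_groups=4, group_in=8, group_out=4)
y = layer(torch.randn(16, 32))  # -> shape (16, 16)
```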

Unsupervised Representation for EHR Signals and Codes as Patient Status Vector

no code implementations • 4 Oct 2019 • Sajad Darabi, Mohammad Kachuee, Majid Sarrafzadeh

In this work, we present a two-step unsupervised representation learning scheme to summarize the multi-modal clinical time series consisting of signals and medical codes into a patient status vector.

Representation Learning • Time Series +1
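
The snippet above only outlines the goal: fuse clinical signals and medical codes into one patient status vector. A minimal sketch of that kind of fusion follows; the encoders, pooling, and dimensions are illustrative assumptions and do not reproduce the paper's two-step scheme.

```python
# Minimal sketch: summarize multi-modal EHR inputs (signals + medical codes)
# into a single patient status vector. All components here are illustrative.
import torch
import torch.nn as nn

code_embed = nn.Embedding(num_embeddings=1000, embedding_dim=32)      # medical codes
signal_enc = nn.GRU(input_size=8, hidden_size=32, batch_first=True)   # physiological signals

codes = torch.randint(0, 1000, (1, 12))       # one visit: 12 recorded codes
signals = torch.randn(1, 50, 8)               # one visit: 50 timesteps, 8 channels

code_vec = code_embed(codes).mean(dim=1)      # pool code embeddings
_, h = signal_enc(signals)                    # final GRU state summarizes the signals
patient_vector = torch.cat([code_vec, h[-1]], dim=1)   # fused patient status vector
```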

TAPER: Time-Aware Patient EHR Representation

2 code implementations • 11 Aug 2019 • Sajad Darabi, Mohammad Kachuee, Shayan Fazeli, Majid Sarrafzadeh

The data contained in these records are irregular and contain multiple modalities, such as notes and medical codes.

Language Modelling • Representation Learning

Generative Imputation and Stochastic Prediction

2 code implementations • 22 May 2019 • Mohammad Kachuee, Kimmo Karkkainen, Orpaz Goldstein, Sajad Darabi, Majid Sarrafzadeh

In order to make imputations, we train a simple and effective generator network to generate imputations that a discriminator network is tasked to distinguish.

Classification • General Classification +2
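
The snippet above describes the adversarial setup: a generator fills in missing entries and a discriminator tries to tell imputed entries from observed ones. A minimal sketch of one training step in that spirit follows; the network shapes, masking convention, and loss are illustrative assumptions, not the paper's exact training procedure.

```python
# Minimal sketch of generator/discriminator imputation on masked tabular data.
import torch
import torch.nn as nn

d = 10  # number of features (illustrative)

generator = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))
discriminator = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

x = torch.randn(32, d)                       # data with (pretend) missing values
mask = (torch.rand(32, d) > 0.3).float()     # 1 = observed, 0 = missing
x_obs = x * mask

# Generator sees observed values plus the mask and proposes a full vector;
# generated entries replace only the missing positions.
g_out = generator(torch.cat([x_obs, mask], dim=1))
x_hat = mask * x_obs + (1.0 - mask) * g_out

# Discriminator predicts, per entry, whether it was observed or imputed.
d_logits = discriminator(x_hat)
d_loss = nn.functional.binary_cross_entropy_with_logits(d_logits, mask)
```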

BNN+: Improved Binary Network Training

no code implementations • ICLR 2019 • Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia

Binary neural networks (BNNs) help to alleviate the prohibitive resource requirements of DNNs, where both activations and weights are limited to 1 bit.
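
For context on the setting the snippet describes, a minimal sketch of 1-bit binarization with a straight-through estimator is shown below. This is a generic BNN building block, not the specific BNN+ training method.

```python
# Minimal sketch: binarize weights in the forward pass and pass gradients
# through with a straight-through estimator (clipped where |x| > 1).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # forward pass uses 1-bit values (+1 / -1)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: propagate gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)
w_bin = BinarizeSTE.apply(w)           # binary weights used in the forward pass
loss = (w_bin.sum() - 1.0) ** 2        # stand-in for a task loss
loss.backward()                        # gradients flow back to the real-valued w
```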

Foothill: A Quasiconvex Regularization for Edge Computing of Deep Neural Networks

no code implementations • 18 Jan 2019 • Mouloud Belbahri, Eyyüb Sari, Sajad Darabi, Vahid Partovi Nia

Using a quasiconvex base function to construct a binary quantizer helps in training binary neural networks (BNNs), and adding noise to the input data or using a concrete regularization function helps to improve the generalization error.

Edge-computing • General Classification +4

Regularized Binary Network Training

1 code implementation • ICLR 2019 • Sajad Darabi, Mouloud Belbahri, Matthieu Courbariaux, Vahid Partovi Nia

We propose to improve the binary training method, by introducing a new regularization function that encourages training weights around binary values.
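
The snippet above describes a regularizer that pulls real-valued weights toward binary values. A minimal sketch of that idea follows; the symmetric penalty below (zero at w = ±1) is an illustrative assumption and may not match the paper's exact functional form or scaling.

```python
# Minimal sketch: add a penalty that is zero when |w| == 1 and grows as
# weights drift away from the binary values +1 / -1.
import torch

def binary_regularizer(w: torch.Tensor) -> torch.Tensor:
    return (1.0 - w.abs()).abs().sum()

w = torch.randn(8, 8, requires_grad=True)
task_loss = (w.sum() - 3.0) ** 2                  # stand-in for the task loss
loss = task_loss + 1e-3 * binary_regularizer(w)   # weighted regularization term
loss.backward()
```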

Dynamic Feature Acquisition Using Denoising Autoencoders

1 code implementation • 3 Nov 2018 • Mohammad Kachuee, Sajad Darabi, Babak Moatamed, Majid Sarrafzadeh

In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost-performance trade-off.

Denoising • Density Estimation +1
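
As a rough illustration of cost-aware acquisition at test time, the sketch below treats unacquired features as corrupted inputs to a denoising autoencoder and picks the next feature by a crude value-per-cost score. The scoring rule, network, and costs are assumptions for illustration only, not the paper's procedure.

```python
# Minimal sketch: greedy, cost-aware test-time feature acquisition using a
# denoising autoencoder to impute the features not yet acquired.
import torch
import torch.nn as nn

d = 6
costs = torch.tensor([1., 1., 2., 2., 5., 5.])                       # per-feature cost
dae = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, d))   # denoising autoencoder

def choose_next_feature(x_acquired: torch.Tensor, acquired_mask: torch.Tensor) -> int:
    # x_acquired holds acquired feature values, with missing entries set to 0.
    with torch.no_grad():
        recon = dae(x_acquired)               # DAE imputes the full feature vector
    # Illustrative score: distance of the imputed value from the neutral fill
    # value (0), divided by cost, as a crude proxy for value per unit cost.
    score = recon.abs() / costs
    score[acquired_mask.bool()] = -float("inf")   # never re-acquire a feature
    return int(score.argmax())

acquired = torch.zeros(d)
x_partial = torch.zeros(d)                    # nothing acquired yet
nxt = choose_next_feature(x_partial, acquired)
acquired[nxt] = 1.0                           # "buy" that feature next
```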
