Search Results for author: Ali Abdalla

Found 7 papers, 3 papers with code

MolDesigner: Interactive Design of Efficacious Drugs with Deep Learning

1 code implementation • 5 Oct 2020 • Kexin Huang, Tianfan Fu, Dawood Khan, Ali Abid, Ali Abdalla, Abubakar Abid, Lucas M. Glass, Marinka Zitnik, Cao Xiao, Jimeng Sun

The efficacy of a drug depends on its binding affinity to the therapeutic target and pharmacokinetics.

Combining graph and sequence information to learn protein representations

no code implementations • 25 Sep 2019 • Hassan Kané, Mohamed Coulibali, Pelkins Ajanoh, Ali Abdalla

Using these representations, we train machine learning models that outperform existing methods on the task of tissue-specific protein function prediction on 10 out of 13 tissues.

Protein Function Prediction • Representation Learning

JAUNE: Justified And Unified Neural language Evaluation

no code implementations • 25 Sep 2019 • Hassan Kané, Yusuf Kocyigit, Ali Abdalla, Pelkins Ajanoh, Mohamed Coulibali

We review the limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries -- introduce JAUNE, a set of criteria for how a good metric should behave, and propose concrete ways to use recent Transformer-based language models to assess reference summaries against hypothesis summaries.

Towards Neural Language Evaluators

no code implementations • 20 Sep 2019 • Hassan Kané, Yusuf Kocyigit, Pelkins Ajanoh, Ali Abdalla, Mohamed Coulibali

We review three limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries -- propose criteria for how a good metric should behave, and suggest concrete ways to use recent Transformer-based language models to assess reference summaries against hypothesis summaries.

Towards Neural Similarity Evaluator

no code implementations • NeurIPS Workshop on Document Intelligence 2019 • Hassan Kané, Yusuf Kocyigit, Pelkins Ajanoh, Ali Abdalla, Mohamed Coulibali

We review three limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries -- propose criteria for how a good metric should behave, describe concrete ways to assess a metric's performance in detail, and show the potential of Transformer-based language models to assess reference summaries against hypothesis summaries.

Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

1 code implementation • 6 Jun 2019 • Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, James Zou

Their feedback indicated that Gradio should support a variety of interfaces and frameworks, allow easy sharing of the interface, allow input manipulation and interactive inference by the domain expert, and allow embedding the interface in iPython notebooks.
