Search Results for author: Shabnam Daghaghi

Found 5 papers, 1 paper with code

Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies

no code implementations22 Nov 2023 Shabnam Daghaghi, Benjamin Coleman, Benito Geordie, Anshumali Shrivastava

To address this problem, we propose a novel sampling distribution based on nonparametric kernel regression that learns an effective importance score as the neural network trains.

Regression
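The snippet above describes estimating per-sample importance with nonparametric kernel regression as the network trains. A minimal sketch of that general idea, using a Nadaraya-Watson estimator over a memory of recently observed (feature, loss) pairs; all function names, the Gaussian kernel, and loss-as-importance are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def kernel_importance_scores(x_query, x_memory, loss_memory, bandwidth=1.0):
    """Nadaraya-Watson kernel regression: estimate each candidate's importance
    (here, its expected training loss) from recently seen (feature, loss) pairs."""
    # Pairwise squared distances between candidates and memory points
    d2 = ((x_query[:, None, :] - x_memory[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))           # Gaussian kernel weights
    return (w @ loss_memory) / (w.sum(1) + 1e-12)    # weighted average of losses

def sample_batch(scores, batch_size, rng):
    """Draw a minibatch with probability proportional to estimated importance."""
    p = scores / scores.sum()
    return rng.choice(len(scores), size=batch_size, replace=False, p=p)

rng = np.random.default_rng(0)
x_mem = rng.normal(size=(100, 8))       # features of recently seen samples
loss_mem = rng.uniform(0.0, 2.0, 100)   # their observed training losses
x_pool = rng.normal(size=(1000, 8))     # candidate pool for the next batch

scores = kernel_importance_scores(x_pool, x_mem, loss_mem)
batch = sample_batch(scores, 32, rng)
```

Because the kernel estimate is refreshed from recent losses, the sampling distribution adapts as training progresses, which is the property the abstract emphasizes.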

Accelerating SLIDE Deep Learning on Modern CPUs: Vectorization, Quantizations, Memory Optimizations, and More

2 code implementations6 Mar 2021 Shabnam Daghaghi, Nicholas Meisburger, Mengnan Zhao, Yong Wu, Sameh Gobriel, Charlie Tai, Anshumali Shrivastava

Our work highlights several novel perspectives and opportunities for implementing randomized algorithms for deep learning on modern CPUs.

A Truly Constant-time Distribution-aware Negative Sampling

no code implementations1 Jan 2021 Shabnam Daghaghi, Tharun Medini, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava

Softmax classifiers with a very large number of classes naturally occur in many applications such as natural language processing and information retrieval.

Information Retrieval
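The abstract snippet above concerns softmax classifiers with very many classes, where negative sampling replaces the full normalization. A minimal sampled-softmax sketch with uniformly drawn negatives, shown only to illustrate the setup; the paper's contribution is a constant-time distribution-aware sampler, which this uniform baseline is not:

```python
import numpy as np

def sampled_softmax_loss(hidden, W, target, num_neg, rng):
    """Score the true class plus a few sampled negatives instead of all
    classes. Uniform negatives are an illustrative baseline only."""
    n_classes = W.shape[0]
    # Sample negative class ids uniformly, excluding the target class
    neg = rng.choice(np.delete(np.arange(n_classes), target),
                     size=num_neg, replace=False)
    classes = np.concatenate(([target], neg))
    logits = W[classes] @ hidden          # scores for the sampled subset only
    logits -= logits.max()                # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                  # true class sits at index 0

rng = np.random.default_rng(0)
W = rng.normal(size=(50000, 64))          # 50k-class output layer
h = rng.normal(size=64)
loss = sampled_softmax_loss(h, W, target=123, num_neg=20, rng=rng)
```

Each step touches 21 rows of `W` instead of 50,000, which is why sampling schemes like this matter at extreme class counts.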

A Tale of Two Efficient and Informative Negative Sampling Distributions

no code implementations31 Dec 2020 Shabnam Daghaghi, Tharun Medini, Nicholas Meisburger, Beidi Chen, Mengnan Zhao, Anshumali Shrivastava

Unfortunately, due to the dynamically updated parameters and data samples, there is no sampling scheme that is provably adaptive and samples the negative classes efficiently.

Information Retrieval

SDM-Net: A Simple and Effective Model for Generalized Zero-Shot Learning

no code implementations10 Sep 2019 Shabnam Daghaghi, Tharun Medini, Anshumali Shrivastava

Zero-Shot Learning (ZSL) is a classification task where we do not have even a single training labeled example from a set of unseen classes.

Descriptive General Classification
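The snippet above defines generalized zero-shot learning: classifying into classes with no labeled training examples. A generic sketch of the standard recipe, mapping an image feature into a class-attribute space and picking the most similar class vector; the linear projection and cosine similarity here are illustrative choices, not the specifics of SDM-Net:

```python
import numpy as np

def zsl_predict(img_feat, proj, class_attrs):
    """Project an image feature into the attribute space and return the
    index of the most similar (unseen) class attribute vector."""
    z = proj @ img_feat                      # map image into attribute space
    z = z / np.linalg.norm(z)
    a = class_attrs / np.linalg.norm(class_attrs, axis=1, keepdims=True)
    return int(np.argmax(a @ z))             # cosine-nearest class

rng = np.random.default_rng(0)
proj = rng.normal(size=(16, 128))    # learned image->attribute map (assumed)
attrs = rng.normal(size=(10, 16))    # attribute vectors for 10 unseen classes
x = rng.normal(size=128)
pred = zsl_predict(x, proj, attrs)
```

Since unseen classes are described only by their attribute vectors, prediction reduces to similarity search in that shared space.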
