Search Results for author: Harsha Vardhan Simhadri

Found 12 papers, 8 papers with code

OOD-DiskANN: Efficient and Scalable Graph ANNS for Out-of-Distribution Queries

no code implementations 22 Oct 2022 Shikhar Jaiswal, Ravishankar Krishnaswamy, Ankit Garg, Harsha Vardhan Simhadri, Sheshansh Agrawal

State-of-the-art algorithms for Approximate Nearest Neighbor Search (ANNS) such as DiskANN, FAISS-IVF, and HNSW build data-dependent indices that offer substantially better accuracy and search efficiency than data-agnostic indices by overfitting to the index data distribution.
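
The data dependence described above is easy to see in any IVF-style index: the index is trained (clustered) on the base vectors, so queries drawn from a different distribution land in poorly matched clusters. A minimal FAISS-IVF sketch; the sizes, cluster count, and the shifted out-of-distribution queries are illustrative assumptions, not the paper's setup:

```python
import numpy as np
import faiss  # pip install faiss-cpu

d, nb, nq = 128, 100_000, 1_000  # illustrative sizes
rng = np.random.default_rng(0)
xb = rng.standard_normal((nb, d)).astype("float32")        # base (index) vectors
xq = rng.standard_normal((nq, d)).astype("float32") + 3.0  # shifted, i.e. out-of-distribution

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, 1024)  # 1024 clusters
index.train(xb)      # the index adapts to the base-data distribution here
index.add(xb)
index.nprobe = 16    # clusters probed per query
D, I = index.search(xq, 10)  # recall on OOD queries degrades vs. in-distribution ones
```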

FreshDiskANN: A Fast and Accurate Graph-Based ANN Index for Streaming Similarity Search

1 code implementation 20 May 2021 Aditi Singh, Suhas Jayaram Subramanya, Ravishankar Krishnaswamy, Harsha Vardhan Simhadri

Approximate nearest neighbor search (ANNS) is a fundamental building block in information retrieval; graph-based indices are the current state of the art and are widely used in industry.

Information Retrieval, Retrieval
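
FreshDiskANN itself targets SSD-resident DiskANN graphs, but the streaming insert/delete pattern it supports can be sketched with hnswlib as a stand-in graph index (not the paper's code); all parameters below are illustrative:

```python
import numpy as np
import hnswlib  # pip install hnswlib

d = 64
index = hnswlib.Index(space="l2", dim=d)
index.init_index(max_elements=100_000, ef_construction=200, M=16)

rng = np.random.default_rng(0)
batch = rng.standard_normal((1_000, d)).astype("float32")
index.add_items(batch, ids=np.arange(1_000))  # streaming inserts

index.mark_deleted(42)                        # lazy delete of point id 42
labels, dists = index.knn_query(batch[:5], k=10)  # search skips deleted points
```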

DROCC: Deep Robust One-Class Classification

1 code implementation ICML 2020 Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain

Classical approaches to one-class problems, such as one-class SVM and isolation forest, require careful feature engineering when applied to structured domains like images.

Classification, Feature Engineering, +3
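
The two classical baselines the abstract contrasts DROCC with are available in scikit-learn; a minimal sketch, with data and hyperparameters chosen only for illustration:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 16))  # training data: the "normal" class only
X_test = np.vstack([rng.standard_normal((50, 16)),
                    rng.standard_normal((50, 16)) + 4.0])  # normals + outliers

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)
iforest = IsolationForest(n_estimators=100, random_state=0).fit(X_train)

print(ocsvm.predict(X_test))    # +1 = inlier, -1 = outlier
print(iforest.predict(X_test))
```

On raw inputs like pixels, these baselines typically need hand-crafted features to work well, which is the gap the abstract points to.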

Rand-NSG: Fast Accurate Billion-point Nearest Neighbor Search on a Single Node

no code implementations NeurIPS 2019 Suhas Jayaram Subramanya, Fnu Devvrit, Harsha Vardhan Simhadri, Ravishankar Krishnaswamy, Rohan Kadekodi

We present a new graph-based indexing and search system called DiskANN that can index, store, and search a billion-point database on a single workstation with just 64 GB of RAM and an inexpensive solid-state drive (SSD).
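
The 64 GB RAM figure implies the full-precision vectors cannot live in memory; DiskANN keeps compressed (product-quantized) vectors in RAM and the full vectors plus the graph on SSD. Back-of-the-envelope arithmetic, where the dimension and the PQ code size are illustrative assumptions rather than the paper's exact numbers:

```python
n = 1_000_000_000          # one billion points
d = 128                    # assumed dimension
full = n * d * 4           # float32 vectors: 512 GB, far over a 64 GB RAM budget
pq_code = 32               # assumed bytes per product-quantized point
compressed = n * pq_code   # 32 GB of PQ codes fits comfortably in 64 GB RAM
print(full / 1e9, compressed / 1e9)  # sizes in GB
```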

Word2Sense: Sparse Interpretable Word Embeddings

no code implementations ACL 2019 Abhishek Panigrahi, Harsha Vardhan Simhadri, Chiranjib Bhattacharyya

We present an unsupervised method to generate Word2Sense word embeddings that are interpretable: each dimension of the embedding space corresponds to a fine-grained sense, and the non-negative value of the embedding along the j-th dimension represents the relevance of the j-th sense to the word.

Word Embeddings, Word Similarity
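
The interpretability claim is mechanical: an embedding is a non-negative vector over senses, so a word's top senses are simply its largest coordinates. A tiny sketch with a hypothetical sense inventory and made-up values:

```python
import numpy as np

senses = ["finance", "river", "seating", "music"]  # hypothetical sense labels
bank = np.array([0.62, 0.31, 0.05, 0.0])           # non-negative relevance per sense

top = np.argsort(bank)[::-1][:2]                   # two largest coordinates
for j in top:
    print(senses[j], bank[j])  # j-th value = relevance of the j-th sense
```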

Multiple Instance Learning for Efficient Sequential Data Classification on Resource-constrained Devices

1 code implementation NeurIPS 2018 Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain

We propose a method, EMI-RNN, that exploits these observations (in such sequential data, the class signature typically occupies only a small fraction of each sequence) by combining a multiple instance learning formulation with an early prediction technique, learning a model that is more accurate than baseline models while reducing computation by a large fraction.

General Classification, Multiple Instance Learning, +2
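
The early-prediction idea can be sketched independently of the multiple-instance training: run the RNN step by step and stop as soon as the classifier is confident. A minimal PyTorch sketch in which the model sizes, threshold, and data are illustrative, not the paper's EMI-RNN:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.GRU(input_size=8, hidden_size=32, batch_first=True)
clf = nn.Linear(32, 2)            # binary classifier on the hidden state

x = torch.randn(1, 50, 8)         # one sequence of 50 timesteps
h = None
for t in range(x.size(1)):
    out, h = rnn(x[:, t:t + 1, :], h)        # consume one timestep at a time
    probs = clf(out[:, -1]).softmax(dim=-1)
    if probs.max() > 0.9:                    # early exit once confident
        print(f"predicted class {probs.argmax().item()} at step {t}")
        break                                # skip the remaining timesteps
else:
    print(f"predicted class {probs.argmax().item()} at the final step")
```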
