Search Results for author: S. V. N. Vishwanathan

Found 27 papers, 6 papers with code

PEFA: Parameter-Free Adapters for Large-scale Embedding-based Retrieval Models

1 code implementation • 5 Dec 2023 Wei-Cheng Chang, Jyun-Yu Jiang, Jiong Zhang, Mutasem Al-Darabsah, Choon Hui Teo, Cho-Jui Hsieh, Hsiang-Fu Yu, S. V. N. Vishwanathan

For product search, PEFA improves the Recall@100 of the fine-tuned ERMs by an average of 5.3% and 14.5% for PEFA-XS and PEFA-XL, respectively.

Retrieval Text Retrieval

Toward Understanding Privileged Features Distillation in Learning-to-Rank

no code implementations • 19 Sep 2022 Shuo Yang, Sujay Sanghavi, Holakou Rahmanian, Jan Bakus, S. V. N. Vishwanathan

Such features naturally arise in merchandised recommendation systems; for instance, "user clicked this item" as a feature is predictive of "user purchased this item" in the offline data, but is clearly not available during online serving.

Learning-To-Rank Recommendation Systems

DS-FACTO: Doubly Separable Factorization Machines

no code implementations • 29 Apr 2020 Parameswaran Raman, S. V. N. Vishwanathan

Traditional algorithms for FM, which run on a single machine, are not equipped to handle this scale; therefore, using a distributed algorithm to parallelize the computation across a cluster is inevitable.

Recommendation Systems Stochastic Optimization

A Zero Attention Model for Personalized Product Search

no code implementations • 29 Aug 2019 Qingyao Ai, Daniel N. Hill, S. V. N. Vishwanathan, W. Bruce Croft

In this paper, we formulate the problem of personalized product search and conduct large-scale experiments with search logs sampled from a commercial e-commerce search engine.

Retrieval

An Efficient Bandit Algorithm for Realtime Multivariate Optimization

no code implementations • 22 Oct 2018 Daniel N. Hill, Houssam Nassif, Yi Liu, Anand Iyer, S. V. N. Vishwanathan

We further apply our algorithm to optimize a message that promotes adoption of an Amazon service.

Batch-Expansion Training: An Efficient Optimization Framework

no code implementations • 22 Apr 2017 Michał Dereziński, Dhruv Mahajan, S. Sathiya Keerthi, S. V. N. Vishwanathan, Markus Weimer

We propose Batch-Expansion Training (BET), a framework for running a batch optimizer on a gradually expanding dataset.
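The idea of running a batch optimizer on a gradually expanding dataset can be sketched as follows. This is a minimal illustration on a toy least-squares problem; the doubling schedule, learning rate, and inner iteration count are illustrative assumptions, not the paper's actual framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy least-squares problem: recover w_true from noisy linear measurements
X = rng.normal(size=(1024, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1024)

w = np.zeros(5)
n = 64                           # start with a small batch
lr = 0.1
while n <= len(X):
    Xb, yb = X[:n], y[:n]        # current (expanded) batch
    for _ in range(50):          # run a plain batch optimizer on it
        grad = Xb.T @ (Xb @ w - yb) / n
        w -= lr * grad
    n *= 2                       # expand the dataset and continue
```

Early stages are cheap because the batch is small, yet they already move the iterate close to the solution, so later full-batch stages need few passes.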

Online Learning of Combinatorial Objects via Extended Formulation

no code implementations • 17 Sep 2016 Holakou Rahmanian, David P. Helmbold, S. V. N. Vishwanathan

We present applications of our framework to online learning of Huffman trees and permutations.

Extreme Stochastic Variational Inference: Distributed and Asynchronous

no code implementations • 31 May 2016 Jiong Zhang, Parameswaran Raman, Shihao Ji, Hsiang-Fu Yu, S. V. N. Vishwanathan, Inderjit S. Dhillon

Moreover, it requires the parameters to fit in the memory of a single processor; this is problematic when the number of parameters is in billions.

Variational Inference

A Structural Smoothing Framework For Robust Graph Comparison

no code implementations NeurIPS 2015 Pinar Yanardag, S. V. N. Vishwanathan

In this paper, we propose a general smoothing framework for graph kernels by taking "structural similarity" into account, and apply it to derive smoothed variants of popular graph kernels.

BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies

1 code implementation • 21 Nov 2015 Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, Pradeep Dubey

One way to understand BlackOut is to view it as an extension of the DropOut strategy to the output layer, wherein we use a discriminative training loss and a weighted sampling scheme.

Language Modelling
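The "discriminative training loss and weighted sampling scheme" can be sketched as follows. This is a rough paraphrase under assumed notation, not the paper's implementation: `q` is a hypothetical proposal distribution over the vocabulary, and the loss contrasts the target word against a small importance-weighted sample of negatives instead of the full softmax:

```python
import numpy as np

rng = np.random.default_rng(0)

def blackout_style_loss(logits, target, q, num_samples):
    """Discriminative loss over the target word and a weighted sample of
    negative words, avoiding normalization over the whole vocabulary."""
    V = logits.shape[0]
    # sample negatives from the proposal q (e.g. a power-raised unigram)
    neg = rng.choice(V, size=num_samples, replace=False, p=q)
    neg = neg[neg != target]
    # importance weights correct for sampling from q
    s_t = np.exp(logits[target]) / q[target]
    s_n = np.exp(logits[neg]) / q[neg]
    z = s_t + s_n.sum()
    # push the target's probability up, each sampled negative's down
    return -np.log(s_t / z) - np.sum(np.log(1.0 - s_n / z))

V = 50
logits = rng.normal(size=V)
q = np.full(V, 1.0 / V)          # uniform proposal for this toy example
loss = blackout_style_loss(logits, target=3, q=q, num_samples=10)
```

Because only `num_samples + 1` output weights are touched per example, the per-step cost is independent of the vocabulary size.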

Deep Graph Kernels

no code implementations KDD '15 Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2015 Pinar Yanardag, S. V. N. Vishwanathan

In this paper, we present Deep Graph Kernels (DGK), a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning.

Graph Classification Language Modelling

WordRank: Learning Word Embeddings via Robust Ranking

2 code implementations EMNLP 2016 Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Matsushima, S. V. N. Vishwanathan

Then, based on this insight, we propose a novel framework WordRank that efficiently estimates word representations via robust ranking, in which the attention mechanism and robustness to noise are readily achieved via the DCG-like ranking losses.

Learning Word Embeddings Word Similarity

Totally Corrective Boosting with Cardinality Penalization

no code implementations • 7 Apr 2015 Vasil S. Denchev, Nan Ding, Shin Matsushima, S. V. N. Vishwanathan, Hartmut Neven

If actual quantum optimization were to be used with this algorithm in the future, we would expect equivalent or superior results at much smaller time and energy costs during training.

Benchmarking Combinatorial Optimization

A Scalable Asynchronous Distributed Algorithm for Topic Modeling

1 code implementation • 16 Dec 2014 Hsiang-Fu Yu, Cho-Jui Hsieh, Hyokun Yun, S. V. N. Vishwanathan, Inderjit S. Dhillon

Learning meaningful topic models with massive document collections, which contain millions of documents and billions of tokens, is challenging for two reasons: first, one needs to deal with a large number of topics (typically on the order of thousands).

Topic Models

Distributed Stochastic Optimization of the Regularized Risk

no code implementations • 17 Jun 2014 Shin Matsushima, Hyokun Yun, Xinhua Zhang, S. V. N. Vishwanathan

Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task.

Stochastic Optimization

DFacTo: Distributed Factorization of Tensors

no code implementations NeurIPS 2014 Joon Hee Choi, S. V. N. Vishwanathan

We present a technique for significantly speeding up Alternating Least Squares (ALS) and Gradient Descent (GD), two widely used algorithms for tensor factorization.

The Structurally Smoothed Graphlet Kernel

no code implementations • 3 Mar 2014 Pinar Yanardag, S. V. N. Vishwanathan

This vector representation can be used in a variety of applications, such as computing similarity between graphs.

Ranking via Robust Binary Classification and Parallel Parameter Estimation in Large-Scale Data

no code implementations • 11 Feb 2014 Hyokun Yun, Parameswaran Raman, S. V. N. Vishwanathan

We propose RoBiRank, a ranking algorithm that is motivated by observing a close connection between evaluation metrics for learning to rank and loss functions for robust classification.

Binary Classification General Classification +2

Modeling Attractiveness and Multiple Clicks in Sponsored Search Results

no code implementations • 1 Jan 2014 Dinesh Govindaraj, Tao Wang, S. V. N. Vishwanathan

Our model seamlessly incorporates the effect of externalities (the quality of other search results displayed in response to a user query), user fatigue, and the pre- and post-click relevance of a sponsored search result.

NOMAD: Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion

1 code implementation • 1 Dec 2013 Hyokun Yun, Hsiang-Fu Yu, Cho-Jui Hsieh, S. V. N. Vishwanathan, Inderjit Dhillon

One of the key features of NOMAD is that the ownership of a variable is asynchronously transferred between processors in a decentralized fashion.

Distributed, Parallel, and Cluster Computing
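A single-process toy simulation of that decentralized ownership-passing scheme for matrix completion follows. The row partition, learning rate, and round-robin hand-off are illustrative assumptions, not the NOMAD implementation: each worker updates its own rows against whichever item column it currently "owns," then passes the column on:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# toy partially observed ratings matrix
m, n, k = 6, 8, 3
R = rng.random((m, n))
observed = rng.random((m, n)) < 0.6

W = 0.1 * rng.normal(size=(m, k))   # row (user) factors, fixed per worker
H = 0.1 * rng.normal(size=(n, k))   # column (item) factors, passed around

n_workers = 2
rows_of = [range(0, 3), range(3, 6)]                       # static row partition
queue = [deque(range(w, n, n_workers)) for w in range(n_workers)]

def rmse():
    err = (R - W @ H.T)[observed]
    return np.sqrt(np.mean(err ** 2))

before = rmse()
lr = 0.05
for _ in range(300):
    for w in range(n_workers):
        if not queue[w]:
            continue
        j = queue[w].popleft()                 # worker w owns column j right now
        for i in rows_of[w]:
            if observed[i, j]:
                e = R[i, j] - W[i] @ H[j]
                W[i] += lr * e * H[j]
                H[j] += lr * e * W[i]
        queue[(w + 1) % n_workers].append(j)   # hand ownership to the next worker
after = rmse()
```

Since each column factor is owned by exactly one worker at a time, no locks are needed, and workers never wait for a global synchronization barrier.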

t-divergence Based Approximate Inference

no code implementations NeurIPS 2011 Nan Ding, Yuan Qi, S. V. N. Vishwanathan

Approximate inference is an important technique for dealing with large, intractable graphical models based on the exponential family of distributions.

Multitask Learning without Label Correspondences

no code implementations NeurIPS 2010 Novi Quadrianto, James Petterson, Tibério S. Caetano, Alex J. Smola, S. V. N. Vishwanathan

We propose an algorithm to perform multitask learning where each task has potentially distinct label sets and label correspondences are not readily available.

Data Integration General Classification

Multiple Kernel Learning and the SMO Algorithm

no code implementations NeurIPS 2010 Zhaonan Sun, Nawanol Ampornpunt, Manik Varma, S. V. N. Vishwanathan

Our objective is to train $p$-norm Multiple Kernel Learning (MKL) and, more generally, linear MKL regularised by the Bregman divergence, using the Sequential Minimal Optimization (SMO) algorithm.

t-logistic regression

no code implementations NeurIPS 2010 Nan Ding, S. V. N. Vishwanathan

We extend logistic regression by using t-exponential families which were introduced recently in statistical physics.

regression
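For context, the t-exponential referred to above is the standard Tsallis deformation of the exponential (the loss construction here is our paraphrase, not copied from the paper):

```latex
\exp_t(x) = \left[\, 1 + (1-t)\,x \,\right]_+^{\,1/(1-t)}, \qquad t \neq 1,
```

which recovers the ordinary $\exp(x)$ in the limit $t \to 1$. For $t > 1$, $\exp_t$ has heavier (polynomial) tails than $\exp$, which is what makes the resulting t-logistic loss grow more slowly on grossly misclassified points and hence more robust to label noise.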

Lower Bounds on Rate of Convergence of Cutting Plane Methods

no code implementations NeurIPS 2010 Xinhua Zhang, Ankan Saha, S. V. N. Vishwanathan

By exploiting the structure of the objective function we can devise an algorithm that converges in $O(1/\sqrt{\epsilon})$ iterations.
