Search Results for author: Debabrata Mahapatra

Found 4 papers, 1 paper with code

Multi-Task Learning with User Preferences: Gradient Descent with Controlled Ascent in Pareto Optimization

1 code implementation · ICML 2020 · Debabrata Mahapatra, Vaibhav Rajan

However, they cannot be used to find exact Pareto optimal solutions satisfying user-specified preferences with respect to task-specific losses, which is not only a common requirement in applications but also a useful way to explore the infinite set of Pareto optimal solutions.

Multi-Task Learning
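
The abstract above describes steering gradient-based multi-task training toward Pareto optimal solutions that match user-specified preferences over the task losses. Below is a minimal numpy sketch of that preference-ray idea (keeping the losses proportional to the inverse preference weights); it is not the authors' gradient-descent-with-controlled-ascent procedure, and the functions non_uniformity and combined_direction, the KL-style measure, and the toy numbers are illustrative assumptions.

```python
import numpy as np

def non_uniformity(losses, prefs):
    """KL-style measure of how far the loss vector is from the user's
    preference ray r_1*l_1 = ... = r_m*l_m (0 when exactly on the ray)."""
    weighted = prefs * losses                 # hypothetical r_j * l_j profile
    p = weighted / weighted.sum()             # normalize to a distribution
    u = np.full_like(p, 1.0 / len(p))         # uniform reference
    return np.sum(p * np.log(p / u))

def combined_direction(grads, losses, prefs):
    """Toy update direction: up-weight tasks whose preference-scaled loss
    is large, so the step pulls the loss vector toward the preference ray."""
    weighted = prefs * losses
    alpha = weighted / weighted.sum()         # larger weight => needs more descent
    return alpha @ grads                      # convex combination of task gradients

# Tiny illustration: two tasks, a 3-parameter model.
losses = np.array([0.8, 0.2])
prefs  = np.array([1.0, 1.0])                 # user asks for equal task losses
grads  = np.array([[0.5, -0.1, 0.2],
                   [0.1,  0.3, -0.4]])
print(non_uniformity(losses, prefs))
print(combined_direction(grads, losses, prefs))
```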

Multi-Label Learning to Rank through Multi-Objective Optimization

no code implementations · 7 Jul 2022 · Debabrata Mahapatra, Chaosheng Dong, Yetian Chen, Deqiang Meng, Michinari Momma

Moreover, it formulates multiple goals that may be conflicting yet important to optimize for simultaneously, e.g., in product search, a ranking model can be trained based on product quality and purchase likelihood to increase revenue.

Information Retrieval · Learning-To-Rank · +2
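
The snippet above motivates training a single ranking model against several relevance criteria at once. As a rough illustration only (not the paper's multi-objective formulation), the sketch below combines two pairwise ranking losses, one per label type, with a simple weighted-sum scalarization; pairwise_hinge, multi_label_ranking_loss, and the weights w are hypothetical names and choices.

```python
import numpy as np

def pairwise_hinge(scores, labels, margin=1.0):
    """Pairwise hinge loss: penalize item pairs ranked against their labels."""
    loss, pairs = 0.0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

def multi_label_ranking_loss(scores, quality, purchase, w=(0.5, 0.5)):
    """Scalarized objective over two relevance criteria (weights w are assumed)."""
    return w[0] * pairwise_hinge(scores, quality) + w[1] * pairwise_hinge(scores, purchase)

# Toy query with 3 products: model scores vs. two label types.
scores   = np.array([2.0, 1.0, 0.5])
quality  = np.array([1, 2, 0])     # product-quality grades
purchase = np.array([2, 0, 1])     # purchase-likelihood grades
print(multi_label_ranking_loss(scores, quality, purchase))
```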

Exact Pareto Optimal Search for Multi-Task Learning and Multi-Criteria Decision-Making

no code implementations · 2 Aug 2021 · Debabrata Mahapatra, Vaibhav Rajan

These shortcomings lead to modeling limitations and computational inefficiency in multi-task learning (MTL) and multi-criteria decision-making (MCDM) methods that utilize CS for their underlying non-convex multi-objective optimization (MOO).

Computational Efficiency · Decision Making · +1
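
The abstract excerpt refers to an unexpanded scalarization, "CS", used inside non-convex multi-objective optimization. Assuming CS stands for Chebyshev scalarization (a common reading, but not confirmed by the snippet), the sketch below shows what it computes on a toy non-convex front and how it differs from a plain weighted sum; the function names and the synthetic front are illustrative.

```python
import numpy as np

def chebyshev_scalarization(f, w, z):
    """Chebyshev scalarization: min_x max_j w_j * (f_j(x) - z_j),
    with reference point z and trade-off weights w (assumed meaning of "CS")."""
    return np.max(w * (f - z))

# Toy non-convex Pareto front for minimization: f2 = 1 - f1^2.
f1 = np.linspace(0.0, 1.0, 201)
front = np.stack([f1, 1.0 - f1 ** 2], axis=1)

w, z = np.array([0.5, 0.5]), np.zeros(2)
cs_pick  = front[np.argmin([chebyshev_scalarization(p, w, z) for p in front])]
lin_pick = front[np.argmin(front @ w)]   # weighted-sum scalarization for comparison

# The weighted sum only reaches an endpoint of this non-convex front,
# while Chebyshev scalarization selects an interior trade-off.
print("Chebyshev pick:", cs_pick, " weighted-sum pick:", lin_pick)
```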

Deep Sparse Coding Using Optimized Linear Expansion of Thresholds

no code implementations · 20 May 2017 · Debabrata Mahapatra, Subhadip Mukherjee, Chandra Sekhar Seelamantula

We address the problem of reconstructing sparse signals from noisy and compressive measurements using a feed-forward deep neural network (DNN) with an architecture motivated by the iterative shrinkage-thresholding algorithm (ISTA).

Image Denoising
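
The abstract says the network architecture is motivated by the iterative shrinkage-thresholding algorithm (ISTA). The sketch below implements plain ISTA in numpy on a toy compressive-sensing problem, to show the iteration that gets unrolled into feed-forward layers; it is not the paper's optimized linear expansion of thresholds, and the problem sizes and helper names are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: the nonlinearity ISTA applies each step."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    Unrolling a fixed number of these iterations (with learned weights and
    thresholds) is the idea behind ISTA-motivated feed-forward networks."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Toy instance: recover a 3-sparse vector from noisy compressive measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[[5, 70, 150]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(A, y)
print("largest-magnitude entries at indices:", np.argsort(-np.abs(x_hat))[:3])
```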
