Search Results for author: Rishi Saket

Found 6 papers, 1 paper with code

FRACTAL: Fine-Grained Scoring from Aggregate Text Labels

no code implementations · 7 Apr 2024 · Yukti Makhija, Priyanka Agrawal, Rishi Saket, Aravindan Raghuveer

Large language models (LLMs) are increasingly being tuned to power complex generation tasks such as writing, fact-seeking, querying, and reasoning.

Tasks: Math · Multiple Instance Learning · +3

Hardness of Learning Boolean Functions from Label Proportions

no code implementations · 28 Mar 2024 · Venkatesan Guruswami, Rishi Saket

This is in contrast with the work of Saket (NeurIPS'21), which gave a $(2/5)$-approximation for learning ORs using a halfspace.

Tasks: PAC learning

LLP-Bench: A Large Scale Tabular Benchmark for Learning from Label Proportions

no code implementations · 16 Oct 2023 · Anand Brahmbhatt, Mohith Pokala, Rishi Saket, Aravindan Raghuveer

One of the unique properties of tabular LLP is the ability to create feature bags where all the instances in a bag have the same value for a given feature.
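To make the feature-bag construction concrete, here is a minimal sketch in pandas; the DataFrame, the grouping column `device`, and the bag sizes are all illustrative assumptions, not details from the paper:

```python
import pandas as pd

# Toy tabular dataset; 'device' is a hypothetical grouping feature.
df = pd.DataFrame({
    "device": ["mobile", "desktop", "mobile", "desktop", "mobile"],
    "age":    [23, 41, 35, 29, 52],
    "label":  [1, 0, 1, 0, 1],
})

# Feature bags: every instance in a bag shares the same 'device' value.
# In LLP, the learner sees each bag's features but only the bag-level
# label proportion, not the individual labels.
bags = {
    key: (group.drop(columns="label"), group["label"].mean())
    for key, group in df.groupby("device")
}

for key, (features, proportion) in bags.items():
    print(f"bag={key!r}: size={len(features)}, label proportion={proportion:.2f}")
```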

Tasks: Click-Through Rate Prediction

Label Differential Privacy via Aggregation

no code implementations · 16 Oct 2023 · Anand Brahmbhatt, Rishi Saket, Shreyas Havaldar, Anshul Nasery, Aravindan Raghuveer

Further, the $\ell_2^2$-regressor that minimizes the loss on the aggregated dataset has a loss within a $(1 + o(1))$-factor of the optimum on the original dataset w.p. …
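As a rough illustration of the aggregation mechanism, the sketch below randomly partitions instances into bags, replaces each label by its bag average, and fits a least-squares ($\ell_2^2$) regressor on the aggregated labels. The data, the bag size, and the choice of uniformly random bags are assumptions made for the sketch; the paper's actual aggregation scheme and privacy analysis are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 5, 10                  # n points, d features, bags of size k
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Randomly partition the instances into bags of size k and replace each
# label by its bag's average label (features are left untouched).
perm = rng.permutation(n)
y_agg = np.empty(n)
for bag in perm.reshape(n // k, k):
    y_agg[bag] = y[bag].mean()

# l2^2 regressors on the aggregated and the original labels.
w_agg, *_ = np.linalg.lstsq(X, y_agg, rcond=None)
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)
print("parameter gap:", np.linalg.norm(w_agg - w_opt))  # typically small
```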

Tasks: Regression

Multi-Variate Time Series Forecasting on Variable Subsets

1 code implementation · 25 Jun 2022 · Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran

Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique recovers close to 95% of the models' performance even when only 15% of the original variables are present.

Tasks: Multivariate Time Series Forecasting · Time Series

Learnability of Linear Thresholds from Label Proportions

no code implementations · NeurIPS 2021 · Rishi Saket

This bound is tight for the non-monochromatic bags case. The above is in contrast to the usual supervised learning setup (i.e., unit-sized bags) in which LTFs are efficiently learnable to arbitrary accuracy using linear programming, and even a trivial algorithm (any LTF or its complement) achieves an accuracy of $1/2$.
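For the unit-sized-bag case mentioned above, a consistent LTF can indeed be found with a single linear program. The sketch below uses scipy on linearly separable toy data, turning the strict inequalities $y_i(w \cdot x_i + b) > 0$ into $y_i(w \cdot x_i + b) \ge 1$ by rescaling; the data and the margin-1 trick are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = np.sign(X @ np.array([2.0, -1.0]) + 0.5)   # labels from a hidden LTF

# Feasibility LP over z = (w_1, w_2, b): require y_i * (w.x_i + b) >= 1,
# written as -y_i * (x_i, 1) . z <= -1 to match linprog's A_ub z <= b_ub form.
A = np.hstack([X, np.ones((len(X), 1))])
res = linprog(c=np.zeros(3),
              A_ub=-y[:, None] * A,
              b_ub=-np.ones(len(X)),
              bounds=[(None, None)] * 3,
              method="highs")
w, b = res.x[:2], res.x[2]
print("consistent with all labels:", bool(np.all(np.sign(X @ w + b) == y)))
```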
