Search Results for author: Kiarash Banihashem

Found 8 papers, 1 paper with code

A Dynamic Algorithm for Weighted Submodular Cover Problem

no code implementations • 13 Jul 2024 • Kiarash Banihashem, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh

We consider this problem in a dynamic setting where there are updates to our set $V$, in the form of insertions and deletions of elements from a ground set $\mathcal{V}$, and the goal is to maintain an approximately optimal solution with low query complexity per update.
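To make the dynamic setting concrete, here is a minimal Python sketch, not the paper's algorithm: the naive baseline of rerunning the classical greedy (best marginal coverage per unit weight) after every insertion or deletion. The oracle `f`, the `weight` map, and all names are illustrative assumptions.

```python
# Naive baseline for dynamic weighted submodular cover (illustrative only,
# not the paper's algorithm). `f` is assumed to be a monotone submodular
# function given as a set-valued oracle; `weight` maps elements to costs.

def greedy_weighted_cover(V, f, weight):
    """Classical greedy: repeatedly add the element with the best
    marginal-gain-to-weight ratio until f(S) reaches f(V)."""
    S = set()
    covered, target = f(S), f(V)
    while covered < target:
        best, best_ratio = None, 0.0
        for v in V - S:
            gain = f(S | {v}) - covered
            if gain > 0 and gain / weight[v] > best_ratio:
                best, best_ratio = v, gain / weight[v]
        S.add(best)
        covered = f(S)
    return S

class NaiveDynamicCover:
    """Recompute from scratch on each update: the per-update oracle cost
    a dynamic algorithm is designed to avoid."""
    def __init__(self, f, weight):
        self.V, self.f, self.weight = set(), f, weight

    def insert(self, v):
        self.V.add(v)
        return greedy_weighted_cover(self.V, self.f, self.weight)

    def delete(self, v):
        self.V.discard(v)
        return greedy_weighted_cover(self.V, self.f, self.weight)
```

A dynamic algorithm aims to maintain a comparably good cover while avoiding this full recomputation on every update.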

Optimal Sparse Recovery with Decision Stumps

no code implementations • 8 Mar 2023 • Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Max Springer

Though these methods are often used in practice for feature selection, their theoretical guarantees are not well understood.

feature selection
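A hedged sketch of stump-based feature selection of the kind the paper analyzes: score each feature by the best single-threshold split it admits (a depth-1 tree, i.e. a decision stump) and keep the top k. The variance-reduction scoring and all names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def stump_score(x, y):
    """Best variance reduction achievable by a single threshold split
    on feature x, i.e. the gain of the best decision stump."""
    order = np.argsort(x)
    ys = y[order]
    n = len(ys)
    total_var = np.var(ys) * n
    best = 0.0
    for i in range(1, n):  # threshold between positions i-1 and i
        left, right = ys[:i], ys[i:]
        split_var = np.var(left) * i + np.var(right) * (n - i)
        best = max(best, total_var - split_var)
    return best

def select_by_stumps(X, y, k):
    """Rank features by their best stump split; return the top-k indices."""
    scores = [stump_score(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]
```

The paper's question, roughly, is how many samples such a top-k rule needs before it provably recovers the relevant features.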

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks

2 code implementations • 5 Feb 2023 • Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi

Based on this approach, we propose the DPA+ROE and FA+ROE defense methods, built on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work.

Data Poisoning
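A minimal sketch of the run-off idea as described in the snippet, assuming each base classifier is trained on its own data partition (as in DPA) and exposes per-class scores; the names and the tie-breaking are illustrative, not the paper's exact rule.

```python
import numpy as np
from collections import Counter

def run_off_election(logits):
    """Two-round aggregation over base classifiers.

    logits: array of shape (n_models, n_classes), one score row per base
    model (e.g. models trained on disjoint partitions, as in DPA).
    Round 1: each model votes for its top class; keep the two front-runners.
    Round 2: each model votes head-to-head between those two classes.
    """
    logits = np.asarray(logits)
    first_choices = logits.argmax(axis=1)          # round-1 plurality vote
    top = Counter(first_choices).most_common(2)
    if len(top) == 1:                              # unanimous round 1
        return top[0][0]
    (a, _), (b, _) = top
    prefers_a = (logits[:, a] > logits[:, b]).sum()  # round-2 run-off
    return a if prefers_a >= len(logits) - prefers_a else b  # tie -> a
```

Certified robustness then comes from bounding how many base models a given number of poisoned samples can influence, as in the DPA and FA analyses.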

Explicit Tradeoffs between Adversarial and Natural Distributional Robustness

no code implementations • 15 Sep 2022 • Mazda Moayeri, Kiarash Banihashem, Soheil Feizi

In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model's reliance on spurious features; (ii) for $\ell_\infty$ adversarial training, spurious reliance arises only when the scale of the spurious features is larger than that of the core features; and (iii) adversarial training can have the unintended consequence of reducing distributional robustness, specifically when spurious correlations change in the new test domain.

Adversarial Robustness
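To pin down what "adversarial training with different $\ell_p$ norms" refers to, here is a hedged numpy sketch of one inner attack step under each norm budget; real adversarial training iterates such steps inside a training loop, and the loss gradient is assumed to be given.

```python
import numpy as np

def attack_step(x, grad, eps, norm):
    """One steepest-ascent step on the loss under an l_p budget eps,
    the inner maximization of adversarial training. `grad` is dLoss/dx.
    Illustrative single-step sketch; practice uses multi-step PGD."""
    if norm == "linf":
        delta = eps * np.sign(grad)                  # full budget everywhere
    elif norm == "l2":
        delta = eps * grad / (np.linalg.norm(grad) + 1e-12)  # scaled gradient
    elif norm == "l1":
        delta = np.zeros_like(grad)                  # budget on one coordinate
        i = np.argmax(np.abs(grad))
        delta.flat[i] = eps * np.sign(grad.flat[i])
    else:
        raise ValueError(f"unknown norm: {norm}")
    return x + delta
```

The sketch makes the norm dependence in findings (i) and (ii) concrete: the $\ell_\infty$ step perturbs every coordinate by the full budget, while the $\ell_1$ step concentrates it on a single coordinate.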

Admissible Policy Teaching through Reward Design

no code implementations • 6 Jan 2022 • Kiarash Banihashem, Adish Singla, Jiarui Gan, Goran Radanovic

This problem can be viewed as a dual to the problem of optimal reward poisoning attacks: instead of forcing an agent to adopt a specific policy, the reward designer incentivizes an agent to avoid taking actions that are inadmissible in certain states.
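One way to formalize the reward-design problem the snippet describes, as a hedged paraphrase with assumed notation rather than the paper's own: given the original reward $\bar{R}$, a cost $c$ on modifications, and inadmissible action sets $A_{\mathrm{inad}}(s)$, find the cheapest designed reward whose optimal policies all avoid inadmissible actions.

```latex
% Hedged formalization; \Pi^{*}(\widehat{R}) denotes the set of policies
% optimal in the MDP with designed reward \widehat{R}.
\begin{aligned}
\min_{\widehat{R}} \quad & c\bigl(\widehat{R}, \bar{R}\bigr) \\
\text{s.t.} \quad & \pi(s) \notin A_{\mathrm{inad}}(s)
  \quad \forall s,\ \forall \pi \in \Pi^{*}\bigl(\widehat{R}\bigr).
\end{aligned}
```

The contrast with reward poisoning is visible in the constraint: the attacker forces one specific target policy to be optimal, whereas the designer only excludes a set of actions.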

Defense Against Reward Poisoning Attacks in Reinforcement Learning

no code implementations • 10 Feb 2021 • Kiarash Banihashem, Adish Singla, Goran Radanovic

As a threat model, we consider attacks that minimally alter rewards to make the attacker's target policy uniquely optimal under the poisoned rewards, with the optimality gap specified by an attack parameter.

Reinforcement Learning +1
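The threat model in the snippet admits a standard optimization formulation (assumed notation, not quoted from the paper): the attacker perturbs the true reward $\bar{R}$ as little as possible so that its target policy $\pi^{\dagger}$ is uniquely optimal with margin $\bar{\epsilon}$, the attack parameter.

```latex
% Hedged formalization; Q_{\widehat{R}} denotes optimal Q-values in the MDP
% whose reward has been replaced by the poisoned reward \widehat{R}.
\begin{aligned}
\min_{\widehat{R}} \quad & \bigl\|\widehat{R} - \bar{R}\bigr\| \\
\text{s.t.} \quad & Q_{\widehat{R}}\bigl(s, \pi^{\dagger}(s)\bigr)
  \ge Q_{\widehat{R}}(s, a) + \bar{\epsilon}
  \quad \forall s,\ \forall a \neq \pi^{\dagger}(s).
\end{aligned}
```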
