no code implementations • 13 Jul 2024 • Kiarash Banihashem, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
We consider this problem in a dynamic setting in which our set $V$ is updated by insertions and deletions of elements from a ground set $\mathcal{V}$, and the goal is to maintain an approximately optimal solution with low query complexity per update.
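To make the dynamic setting concrete, here is a naive baseline sketch, not the paper's algorithm: it simply reruns the classic greedy algorithm after every update, assuming a monotone submodular objective `f` with a cardinality budget `k` (the objective, budget, and class name are illustrative assumptions). Its per-update query cost grows with $|V|$, which is exactly the dependence a dynamic algorithm with low query complexity per update tries to avoid.

```python
# Naive baseline for the dynamic setting described above, NOT the paper's
# low-query-complexity algorithm: rerun the classic greedy algorithm for
# monotone submodular maximization under a cardinality constraint after
# every insertion or deletion. The objective `f`, budget `k`, and class
# name are illustrative assumptions.

class NaiveDynamicGreedy:
    def __init__(self, f, k):
        self.f = f      # set function f(set) -> float, assumed monotone submodular
        self.k = k      # cardinality budget
        self.V = set()  # current element set, maintained under insertions/deletions

    def insert(self, e):
        self.V.add(e)
        return self._greedy()

    def delete(self, e):
        self.V.discard(e)
        return self._greedy()

    def _greedy(self):
        # Classic greedy: O(k * |V|) oracle queries per update, which is the
        # per-update cost that a dynamic algorithm aims to beat.
        S = set()
        for _ in range(min(self.k, len(self.V))):
            gains = {e: self.f(S | {e}) - self.f(S) for e in self.V - S}
            best = max(gains, key=gains.get, default=None)
            if best is None or gains[best] <= 0:
                break
            S.add(best)
        return S


# Example with a simple coverage objective (monotone and submodular).
if __name__ == "__main__":
    groups = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    cover = lambda S: float(len(set().union(*(groups[e] for e in S))))
    dyn = NaiveDynamicGreedy(cover, k=2)
    dyn.insert("a"); dyn.insert("b")
    print(dyn.insert("c"))  # greedy solution after three insertions
    print(dyn.delete("a"))  # greedy solution after a deletion
```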
no code implementations • 1 Jun 2023 • Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh
This is the first dynamic algorithm for the problem that has a query complexity independent of the size of the ground set $V$.
no code implementations • 8 Mar 2023 • Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Max Springer
Though these methods are often used in practice for feature selection, their theoretical guarantees are not well understood.
no code implementations • 15 Feb 2023 • Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins
We study social learning dynamics motivated by reviews on online platforms.
2 code implementations • 5 Feb 2023 • Keivan Rezaei, Kiarash Banihashem, Atoosa Chegini, Soheil Feizi
Based on this approach, we propose two defense methods, DPA+ROE and FA+ROE, which build on the Deep Partition Aggregation (DPA) and Finite Aggregation (FA) approaches from prior work.
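As context for the methods above, here is a minimal sketch of the partition-and-vote idea behind Deep Partition Aggregation (DPA), on which DPA+ROE builds; the run-off election (ROE) aggregation layer is not shown, and the hashing scheme, base model, and function names are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the partition-and-vote idea behind Deep Partition
# Aggregation (DPA), on which DPA+ROE builds; the run-off election (ROE)
# aggregation step is omitted. The hashing scheme, base model, and names
# are illustrative assumptions, not the authors' implementation.
from collections import Counter

from sklearn.linear_model import LogisticRegression


def train_dpa_ensemble(X, y, n_partitions=10):
    """Deterministically split the training data and fit one base model per partition."""
    models = []
    for p in range(n_partitions):
        # DPA assigns each sample to a partition by hashing the sample itself;
        # hashing the index is used here only to keep the sketch short.
        idx = [i for i in range(len(X)) if i % n_partitions == p]
        if idx:
            clf = LogisticRegression(max_iter=1000)
            clf.fit([X[i] for i in idx], [y[i] for i in idx])
            models.append(clf)
    return models


def dpa_predict(models, x):
    """Plurality vote over base models; a poisoned training sample can affect at most
    one partition, so predictions with a large vote margin are certifiably robust."""
    votes = Counter(int(m.predict([x])[0]) for m in models)
    return votes.most_common(1)[0][0]
```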
no code implementations • 15 Sep 2022 • Mazda Moayeri, Kiarash Banihashem, Soheil Feizi
In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model's reliance on spurious features; (ii) for $\ell_\infty$ adversarial training, spurious reliance only occurs when the scale of the spurious features is larger than that of the core features; and (iii) adversarial training can have the unintended consequence of reducing distributional robustness, specifically when spurious correlations change in the new test domain.
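For reference, point (ii) concerns $\ell_\infty$ adversarial training; a minimal sketch of the standard PGD-based training loop is below. The model, data loader, and hyperparameters are placeholders and do not reflect the paper's experimental setup.

```python
# Minimal sketch of standard l_inf PGD adversarial training, the procedure
# referenced in point (ii) above. The model, data loader, and hyperparameters
# are placeholders and do not reflect the paper's experimental setup.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient ascent on the loss within an l_inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back onto the l_inf ball around x and onto the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of min-max training: perturb each batch, then descend on the adversarial loss."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)          # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization
        loss.backward()
        optimizer.step()
```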
no code implementations • 6 Jan 2022 • Kiarash Banihashem, Adish Singla, Jiarui Gan, Goran Radanovic
This problem can be viewed as a dual to the problem of optimal reward poisoning attacks: instead of forcing an agent to adopt a specific policy, the reward designer incentivizes an agent to avoid taking actions that are inadmissible in certain states.
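One way to read the reward-design problem described above as an optimization (notation assumed, not necessarily the paper's: $\bar{R}$ is the original reward, $\widehat{R}$ the designed reward, $\mathcal{I}(s)$ the inadmissible actions in state $s$, and $\Pi^*(\widehat{R})$ the set of optimal policies under $\widehat{R}$) is

$$\min_{\widehat{R}} \ \lVert \widehat{R} - \bar{R} \rVert \quad \text{s.t.} \quad \pi(s) \notin \mathcal{I}(s) \quad \forall s,\ \forall \pi \in \Pi^*(\widehat{R}).$$

That is, the designer perturbs the rewards as little as possible while ensuring that no optimal policy takes an inadmissible action.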
no code implementations • 10 Feb 2021 • Kiarash Banihashem, Adish Singla, Goran Radanovic
As a threat model, we consider attacks that minimally alter rewards to make the attacker's target policy uniquely optimal under the poisoned rewards, with the optimality gap specified by an attack parameter.
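The threat model above can be written, roughly, as the following optimization (notation assumed: $\bar{R}$ is the original reward, $\widehat{R}$ the poisoned reward, $\pi^\dagger$ the attacker's target policy, and $\epsilon > 0$ the attack parameter):

$$\min_{\widehat{R}} \ \lVert \widehat{R} - \bar{R} \rVert \quad \text{s.t.} \quad Q^{\pi^\dagger}_{\widehat{R}}\big(s, \pi^\dagger(s)\big) \ \ge\ Q^{\pi^\dagger}_{\widehat{R}}(s, a) + \epsilon \quad \forall s,\ \forall a \neq \pi^\dagger(s).$$

The constraints force $\pi^\dagger$ to be uniquely optimal by a margin of $\epsilon$ under the poisoned rewards, mirroring the reward-design problem above in the opposite direction.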