Search Results for author: Amin Karbasi

Found 100 papers, 12 papers with code

Fast Neural Kernel Embeddings for General Activations

2 code implementations 9 Sep 2022 Insu Han, Amir Zandieh, Jaehoon Lee, Roman Novak, Lechao Xiao, Amin Karbasi

Moreover, most prior works on neural kernels have focused on the ReLU activation, mainly due to its popularity but also due to the difficulty of computing such kernels for general activations.

Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

1 code implementation 4 Dec 2023 Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi

In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that only requires black-box access to the target LLM.
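
For intuition, here is a minimal Python sketch of a tree-of-attacks-with-pruning style search. The `attacker`, `evaluator`, `judge`, and `target` callables are placeholders (assumptions, not the paper's interfaces), and the branching factor, depth, and success threshold are illustrative defaults rather than the settings used in the paper.

```python
def tree_of_attacks(goal, attacker, evaluator, judge, target,
                    branching=4, depth=5, width=10, success=10):
    """Iteratively refine candidate jailbreak prompts, pruning weak branches."""
    frontier = [goal]                      # root of the tree: the raw adversarial goal
    for _ in range(depth):
        children = []
        for prompt in frontier:
            # Attacker model proposes `branching` refinements of this prompt.
            children.extend(attacker(goal, prompt) for _ in range(branching))
        # First pruning phase: drop refinements the evaluator deems off-topic.
        children = [p for p in children if evaluator(goal, p)]
        scored = []
        for prompt in children:
            response = target(prompt)       # one black-box query to the target LLM
            score = judge(goal, prompt, response)
            if score >= success:            # judged to be a successful jailbreak
                return prompt
            scored.append((score, prompt))
        # Second pruning phase: keep only the highest-scoring leaves.
        scored.sort(key=lambda sp: sp[0], reverse=True)
        frontier = [p for _, p in scored[:width]]
        if not frontier:
            return None
    return None
```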


HyperAttention: Long-context Attention in Near-Linear Time

1 code implementation 9 Oct 2023 Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David P. Woodruff, Amir Zandieh

Recent work suggests that in the worst-case scenario, quadratic time is necessary unless the entries of the attention matrix are bounded or the matrix has low stable rank.

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

1 code implementation NeurIPS 2017 Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi

In many machine learning applications, it is important to explain the predictions of a black-box classifier.

Adaptive Sequence Submodularity

1 code implementation NeurIPS 2019 Marko Mitrovic, Ehsan Kazemi, Moran Feldman, Andreas Krause, Amin Karbasi

In many machine learning applications, one needs to interactively select a sequence of items (e.g., recommending movies based on a user's feedback) or make sequential decisions in a certain order (e.g., guiding an agent through a series of states).

Decision Making Link Prediction +1

Submodular Maximization Beyond Non-negativity: Guarantees, Fast Algorithms, and Applications

1 code implementation 19 Apr 2019 Christopher Harshaw, Moran Feldman, Justin Ward, Amin Karbasi

It is generally believed that submodular functions -- and the more general class of $\gamma$-weakly submodular functions -- may only be optimized under the non-negativity assumption $f(S) \geq 0$.

Experimental Design

How Do You Want Your Greedy: Simultaneous or Repeated?

1 code implementation 29 Sep 2020 Moran Feldman, Christopher Harshaw, Amin Karbasi

We also present SubmodularGreedy.jl, a Julia package which implements these algorithms and may be downloaded at https://github.com/crharshaw/SubmodularGreedy.jl.

Self-Consistency of the Fokker-Planck Equation

1 code implementation 2 Jun 2022 Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani

In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.

KDEformer: Accelerating Transformers via Kernel Density Estimation

1 code implementation 5 Feb 2023 Amir Zandieh, Insu Han, Majid Daliri, Amin Karbasi

The dot-product attention mechanism plays a crucial role in modern deep architectures (e.g., Transformer) for sequence modeling; however, naïve exact computation of this model incurs quadratic time and memory complexities in sequence length, hindering the training of long-sequence models.

Density Estimation Image Generation
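
The quadratic bottleneck motivating this line of work is easy to see in a naive NumPy implementation of exact softmax attention: the $n \times n$ score matrix below is what approximation schemes such as KDEformer aim to avoid materializing. Shapes and sizes here are illustrative assumptions.

```python
import numpy as np

def exact_attention(Q, K, V):
    """Naive softmax attention: time and memory grow as O(n^2) in sequence length n."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # n x n matrix -- the quadratic bottleneck
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

n, d = 1024, 64                        # illustrative sizes only
Q, K, V = (np.random.randn(n, d) for _ in range(3))
out = exact_attention(Q, K, V)         # materializes a 1024 x 1024 attention matrix
```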

Streaming Submodular Maximization under a $k$-Set System Constraint

1 code implementation 9 Feb 2020 Ran Haba, Ehsan Kazemi, Moran Feldman, Amin Karbasi

In this paper, we propose a novel framework that converts streaming algorithms for monotone submodular maximization into streaming algorithms for non-monotone submodular maximization.

Data Summarization Movie Recommendation

Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity

no code implementations ICML 2018 Lin Chen, Christopher Harshaw, Hamed Hassani, Amin Karbasi

We also propose One-Shot Frank-Wolfe, a simpler algorithm which requires only a single stochastic gradient estimate in each round and achieves an $O(T^{2/3})$ stochastic regret bound for convex and continuous submodular optimization.
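
As a rough illustration (not the paper's exact update or analysis), one round of a projection-free online method that uses a single stochastic gradient and one call to a linear optimization oracle can be sketched as follows; `stoch_grad` and `lin_oracle` are assumed oracles, all vectors are NumPy arrays, and the sketch is written for convex minimization.

```python
def one_shot_fw_round(x, stoch_grad, lin_oracle, d_prev, rho, step):
    """One projection-free online round (illustrative sketch).

    x          -- current iterate inside the feasible set K (NumPy array)
    stoch_grad -- the single stochastic gradient estimate used this round
    lin_oracle -- returns argmin_{v in K} <g, v> (linear optimization over K)
    d_prev     -- running, momentum-averaged gradient estimate
    """
    d = (1.0 - rho) * d_prev + rho * stoch_grad    # averaged (variance-reduced) direction
    v = lin_oracle(d)                              # single linear-oracle call, no projection
    x_next = x + step * (v - x)                    # convex combination stays feasible
    return x_next, d
```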

Data Summarization at Scale: A Two-Stage Submodular Approach

no code implementations ICML 2018 Marko Mitrovic, Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

The sheer scale of modern datasets has resulted in a dire need for summarization techniques that identify representative elements in a dataset.

Data Summarization Vocal Bursts Valence Prediction

Projection-Free Bandit Convex Optimization

no code implementations 18 May 2018 Lin Chen, Mingrui Zhang, Amin Karbasi

In this paper, we propose the first computationally efficient projection-free algorithm for bandit convex optimization (BCO).

Matrix Completion

Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization

no code implementations 24 Apr 2018 Aryan Mokhtari, Hamed Hassani, Amin Karbasi

Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\text{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations.

Stochastic Optimization
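
A hedged sketch of the kind of stochastic conditional gradient (continuous greedy) ascent behind such $(1-1/e)$-type guarantees is given below. The gradient estimator `grad_est`, the linear maximization oracle `lin_oracle`, and the averaging schedule are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def stochastic_continuous_greedy(grad_est, lin_oracle, dim, T, rho0=0.5):
    """Sketch of stochastic conditional-gradient ascent for monotone DR-submodular
    maximization over a convex body K.

    grad_est(x)   -- unbiased stochastic gradient of the objective at x (assumed oracle)
    lin_oracle(d) -- argmax_{v in K} <d, v> (assumed linear maximization oracle)
    """
    x = np.zeros(dim)              # assumed to be a feasible starting point
    d = np.zeros(dim)              # momentum-averaged gradient estimate
    for t in range(1, T + 1):
        rho = rho0 / t ** (2.0 / 3.0)            # decaying averaging weight (illustrative)
        d = (1.0 - rho) * d + rho * grad_est(x)  # reduce the variance of the gradient
        v = lin_oracle(d)                        # direction found by the linear oracle
        x = x + v / T                            # small continuous-greedy step
    return x
```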

Do Less, Get More: Streaming Submodular Maximization with Subsampling

no code implementations NeurIPS 2018 Moran Feldman, Amin Karbasi, Ehsan Kazemi

In this paper, we develop the first one-pass streaming algorithm for submodular maximization that does not evaluate the entire stream even once.

Video Summarization
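
As a simplified illustration of the idea, not the paper's exact algorithm or thresholds, a one-pass rule that subsamples each arriving element before ever querying the objective looks roughly like this; `f` is an assumed set-function oracle and `tau` a fixed marginal-gain threshold.

```python
import random

def subsampled_streaming_max(stream, f, k, p, tau, rng=random):
    """One-pass streaming sketch: each arriving element is subsampled with
    probability p, so most of the stream is never evaluated at all; a kept
    element is added only if its marginal gain clears the threshold tau."""
    S = []
    for e in stream:
        if len(S) >= k:                 # cardinality budget reached
            break
        if rng.random() > p:            # subsampling: skip without any oracle call
            continue
        gain = f(S + [e]) - f(S)        # single marginal-gain query
        if gain >= tau:
            S.append(e)
    return S
```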

Comparison Based Learning from Weak Oracles

no code implementations 20 Feb 2018 Ehsan Kazemi, Lin Chen, Sanjoy Dasgupta, Amin Karbasi

More specifically, we aim at devising efficient algorithms to locate a target object in a database equipped with a dissimilarity metric via invocation of the weak comparison oracle.

Online Continuous Submodular Maximization

no code implementations 16 Feb 2018 Lin Chen, Hamed Hassani, Amin Karbasi

For such settings, we then propose an online stochastic gradient ascent algorithm that also achieves an $O(\sqrt{T})$ regret bound, albeit against a weaker $1/2$-approximation to the best feasible solution in hindsight.

Deletion-Robust Submodular Maximization at Scale

no code implementations 20 Nov 2017 Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Can we efficiently extract useful information from a large user-generated dataset while protecting the privacy of the users and/or ensuring fairness in representation?

Fairness feature selection

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

no code implementations 5 Nov 2017 Aryan Mokhtari, Hamed Hassani, Amin Karbasi

More precisely, for a monotone and continuous DR-submodular function and subject to a \textit{general} convex body constraint, we prove that our proposed method achieves a $[(1-1/e)\text{OPT} -\epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations.

Gradient Methods for Submodular Maximization

no code implementations NeurIPS 2017 Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi

Despite the apparent lack of convexity in such functions, we prove that stochastic projected gradient methods can provide strong approximation guarantees for maximizing continuous submodular functions with convex constraints.

Active Learning

Weakly Submodular Maximization Beyond Cardinality Constraints: Does Randomization Help Greedy?

no code implementations ICML 2018 Lin Chen, Moran Feldman, Amin Karbasi

In this paper, we prove that a randomized version of the greedy algorithm (previously used by Buchbinder et al. (2014) for a different problem) achieves an approximation ratio of $(1 + 1/\gamma)^{-2}$ for the maximization of a weakly submodular function subject to a general matroid constraint, where $\gamma$ is a parameter measuring the distance of the function from submodularity.

Submodular Variational Inference for Network Reconstruction

no code implementations 29 Mar 2016 Lin Chen, Forrest W. Crawford, Amin Karbasi

In real-world and online social networks, individuals receive and transmit information in real time.

Variational Inference

Near-Optimal Active Learning of Halfspaces via Query Synthesis in the Noisy Setting

no code implementations 11 Mar 2016 Lin Chen, Hamed Hassani, Amin Karbasi

This problem has recently gained a lot of interest in automated science and adversarial reverse engineering for which only heuristic algorithms are known.

Active Learning

Distributed Submodular Maximization

no code implementations 3 Nov 2014 Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, Andreas Krause

Such problems can often be reduced to maximizing a submodular set function subject to various constraints.

Clustering

Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning

no code implementations 2 May 2016 Mario Lucic, Mesrob I. Ohannessian, Amin Karbasi, Andreas Krause

Using k-means clustering as a prototypical unsupervised learning problem, we show how we can strategically summarize the data (control space) in order to trade off risk and time when data is generated by a probabilistic model.

Clustering

Seeing the Unseen Network: Inferring Hidden Social Ties from Respondent-Driven Sampling

no code implementations 13 Nov 2015 Lin Chen, Forrest W. Crawford, Amin Karbasi

Learning about the social structure of hidden and hard-to-reach populations --- such as drug users and sex workers --- is a major goal of epidemiological and public health research on risk behaviors and disease prevention.

Stochastic Optimization Time Series +1

Fast Mixing for Discrete Point Processes

no code implementations 6 Jun 2015 Patrick Rebeschini, Amin Karbasi

We show that if the set function (not necessarily submodular) displays a natural notion of decay of correlation, then, for $\beta$ small enough, it is possible to design fast mixing Markov chain Monte Carlo methods that yield error bounds on marginal approximations that do not depend on the size of the set $V$.

Point Processes
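
A minimal sketch of such a chain, assuming a set-function oracle `f` over a ground set `V`, is a Metropolis sampler over subsets with stationary distribution proportional to $\exp(\beta f(S))$; the single-element flip proposal and parameters below are illustrative choices.

```python
import math
import random

def metropolis_subset_sampler(V, f, beta, n_steps, rng=random):
    """Metropolis chain over subsets S of V with stationary distribution
    proportional to exp(beta * f(S)); proposal flips one random element."""
    S = set()
    for _ in range(n_steps):
        e = rng.choice(V)                 # V is a list of ground-set elements
        S_new = S ^ {e}                   # flip e in or out of the current subset
        delta = f(S_new) - f(S)
        if delta >= 0 or rng.random() < math.exp(beta * delta):
            S = S_new                     # accept the move
    return S
```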

Lazier Than Lazy Greedy

no code implementations 28 Sep 2014 Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan Vondrak, Andreas Krause

Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice?

Clustering Data Summarization
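
The affirmative answer in this line of work is the stochastic greedy idea: in each of the $k$ rounds, evaluate only a random sample of roughly $(n/k)\log(1/\epsilon)$ remaining elements rather than all of them. The Python below is a hedged sketch with `f` an assumed monotone submodular set-function oracle; tie-breaking and edge cases are handled in an illustrative way.

```python
import math
import random

def stochastic_greedy(V, f, k, eps=0.1, rng=random):
    """Sketch of stochastic greedy under a cardinality constraint k: per round,
    only a small random sample of the remaining elements is evaluated."""
    V = list(V)
    n = len(V)
    sample_size = max(1, math.ceil(n / k * math.log(1.0 / eps)))
    S, remaining = [], set(V)
    for _ in range(k):
        if not remaining:
            break
        sample = rng.sample(list(remaining), min(sample_size, len(remaining)))
        best = max(sample, key=lambda e: f(S + [e]) - f(S))   # best sampled marginal gain
        S.append(best)
        remaining.discard(best)
    return S
```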

Convolutional Neural Associative Memories: Massive Capacity with Noise Tolerance

no code implementations 24 Jul 2014 Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi

The resulting network has a retrieval capacity that is exponential in the size of the network.

Retrieval

Noise Facilitation in Associative Memories of Exponential Capacity

no code implementations 13 Mar 2014 Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

More surprisingly, we show that internal noise actually improves the performance of the recall phase while the pattern retrieval capacity remains intact, i.e., the number of stored patterns does not reduce with noise (up to a threshold).

Hippocampus Retrieval

Near-Optimally Teaching the Crowd to Classify

no code implementations 10 Feb 2014 Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, Andreas Krause

How should we present training examples to learners to teach them classification rules?

From Small-World Networks to Comparison-Based Search

no code implementations 15 Jul 2011 Amin Karbasi, Stratis Ioannidis, Laurent Massoulie

In short, a user searching for a target object navigates through a database in the following manner: the user is asked to select the object most similar to her target from a small list of objects.

Coupled Neural Associative Memories

no code implementations 8 Jan 2013 Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi

We propose a novel architecture to design a neural associative memory that is capable of learning a large number of patterns and recalling them later in presence of noise.

Retrieval

Neural Networks Built from Unreliable Components

no code implementations 26 Jan 2013 Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav Varshney

Recent advances in associative memory design through structured pattern sets and graph-based inference algorithms have allowed the reliable learning and retrieval of an exponential number of patterns.

Retrieval

Unconstrained Submodular Maximization with Constant Adaptive Complexity

no code implementations 15 Nov 2018 Lin Chen, Moran Feldman, Amin Karbasi

In this paper, we consider the unconstrained submodular maximization problem.

Interactive Submodular Bandit

no code implementations NeurIPS 2017 Lin Chen, Andreas Krause, Amin Karbasi

We then receive a noisy feedback about the utility of the action (e.g., ratings) which we model as a submodular function over the context-action space.

Data Summarization Movie Recommendation +1

Fast Distributed Submodular Cover: Public-Private Data Summarization

no code implementations NeurIPS 2016 Baharan Mirzasoleiman, Morteza Zadimoghaddam, Amin Karbasi

The goal is to provide a succinct summary of a massive dataset, ideally as small as possible, from which customized summaries can be built for each user, i.e., it can contain elements from the public data (for diversity) and users' private data (for personalization).

Data Summarization Movie Recommendation +1

Noise-Enhanced Associative Memories

no code implementations NeurIPS 2013 Amin Karbasi, Amir Hesam Salavati, Amin Shokrollahi, Lav R. Varshney

More surprisingly, we show that internal noise actually improves the performance of the recall phase.

Hippocampus

Probabilistic Submodular Maximization in Sub-Linear Time

no code implementations ICML 2017 Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi

As a remedy, we introduce the problem of sublinear time probabilistic submodular maximization: Given training examples of functions (e.g., via user feature vectors), we seek to reduce the ground set so that optimizing new functions drawn from the same distribution will provide almost as much value when restricted to the reduced ground set as when using the full set.

Recommendation Systems

Scalable Deletion-Robust Submodular Maximization: Data Summarization with Privacy and Fairness Constraints

no code implementations ICML 2018 Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Can we efficiently extract useful information from a large user-generated dataset while protecting the privacy of the users and/or ensuring fairness in representation?

Data Summarization Fairness +1

Black Box Submodular Maximization: Discrete and Continuous Settings

no code implementations 28 Jan 2019 Lin Chen, Mingrui Zhang, Hamed Hassani, Amin Karbasi

In this paper, we consider the problem of black box continuous submodular maximization where we only have access to the function values and no information about the derivatives is provided.

Stochastic Conditional Gradient++

no code implementations 19 Feb 2019 Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen

It is known that this rate is optimal in terms of stochastic gradient evaluations.

Stochastic Optimization

Submodular Streaming in All its Glory: Tight Approximation, Minimum Memory and Low Adaptive Complexity

no code implementations 2 May 2019 Ehsan Kazemi, Marko Mitrovic, Morteza Zadimoghaddam, Silvio Lattanzi, Amin Karbasi

We show how one can achieve the tight $(1/2)$-approximation guarantee with $O(k)$ shared memory while minimizing not only the required rounds of computations but also the total number of communicated bits.

Data Summarization

One Sample Stochastic Frank-Wolfe

no code implementations 10 Oct 2019 Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi

One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.

Regret Bounds for Batched Bandits

no code implementations 11 Oct 2019 Hossein Esfandiari, Amin Karbasi, Abbas Mehrabian, Vahab Mirrokni

We present simple and efficient algorithms for the batched stochastic multi-armed bandit and batched stochastic linear bandit problems.

Multi-Armed Bandits
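
For concreteness, a generic batched policy can be sketched as successive elimination in which all pulls of a batch are fixed in advance and statistics are updated only at batch boundaries. This is an illustrative sketch under simplifying assumptions ([0, 1] rewards, a `pull` oracle, a fixed batch grid), not the specific algorithms or grids analyzed in the paper.

```python
import math

def batched_elimination(arms, pull, batch_sizes, delta=0.05):
    """Batched successive-elimination sketch: within a batch every surviving arm
    is pulled equally; arms are eliminated only when the batch ends."""
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}

    def mean(a):
        return sums[a] / counts[a]

    def radius(a):
        return math.sqrt(math.log(2.0 / delta) / (2 * counts[a]))

    surviving = list(arms)
    for size in batch_sizes:
        per_arm = max(1, size // len(surviving))
        for a in surviving:                       # all pulls of the batch fixed in advance
            for _ in range(per_arm):
                sums[a] += pull(a)
                counts[a] += 1
        best_lcb = max(mean(a) - radius(a) for a in surviving)
        surviving = [a for a in surviving         # prune only at the batch boundary
                     if mean(a) + radius(a) >= best_lcb]
    return max(surviving, key=mean)
```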

Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition

no code implementations NeurIPS 2020 Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi

To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $ O(\frac{T}{\sqrt{K}}) $ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$.
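
The mini-batching construction behind this upper bound is simple to sketch: split the $T$ rounds into $K$ blocks, play a single point throughout each block (so at most $K-1$ switches), and update an inner learner with the block-averaged gradient. The sketch below uses plain projected online gradient descent as the inner learner, with `grad` and `project` as assumed oracles; it is an illustration, not the paper's exact construction.

```python
import numpy as np

def minibatched_ogd(T, K, grad, project, dim, eta):
    """Mini-batching sketch for switching-constrained OCO: identical plays within
    each of K blocks, one projected gradient step per block boundary.

    grad(x, t) -- gradient of the round-t loss at x (assumed oracle)
    project(x) -- Euclidean projection onto the feasible set (assumed oracle)
    """
    x = project(np.zeros(dim))
    plays = []
    for block in np.array_split(np.arange(T), K):   # K contiguous blocks covering all rounds
        g_sum = np.zeros(dim)
        for t in block:
            plays.append(x)                          # same point all block: no switch inside
            g_sum += grad(x, int(t))
        x = project(x - eta * g_sum / max(len(block), 1))   # one update per block
    return plays
```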


Adaptivity in Adaptive Submodularity

no code implementations 9 Nov 2019 Hossein Esfandiari, Amin Karbasi, Vahab Mirrokni

We propose an efficient semi-adaptive policy that, with $O(\log n \times \log k)$ adaptive rounds of observations, can achieve an almost tight $1-1/e-\epsilon$ approximation guarantee with respect to an optimal policy that carries out $k$ actions in a fully sequential manner.

Active Learning Decision Making +1

Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

no code implementations NeurIPS 2019 Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen

Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} -\epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.

Submodular Maximization Through Barrier Functions

no code implementations NeurIPS 2020 Ashwinkumar Badanidiyuru, Amin Karbasi, Ehsan Kazemi, Jan Vondrak

In this paper, we introduce a novel technique for constrained submodular maximization, inspired by barrier functions in continuous optimization.

Movie Recommendation

Regularized Submodular Maximization at Scale

no code implementations 10 Feb 2020 Ehsan Kazemi, Shervin Minaee, Moran Feldman, Amin Karbasi

In this paper, we propose scalable methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$.

Data Summarization Point Processes +1

More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models

no code implementations ICML 2020 Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi

Despite remarkable success in practice, modern machine learning models have been found to be susceptible to adversarial attacks that make human-imperceptible perturbations to the data, but result in serious and potentially dangerous prediction errors.

The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization

no code implementations 25 Feb 2020 Yifei Min, Lin Chen, Amin Karbasi

In the medium adversary regime, with more training data, the generalization loss exhibits a double descent curve, which implies the existence of an intermediate stage where more training data hurts the generalization.

Classification General Classification

Submodular Maximization in Clean Linear Time

no code implementations 16 Jun 2020 Wenxin Li, Moran Feldman, Ehsan Kazemi, Amin Karbasi

In this paper, we provide the first deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for submodular maximization under a cardinality (size) constraint while making a number of queries that scales only linearly with the size of the ground set $n$.

Movie Recommendation Text Summarization +1

Meta Learning in the Continuous Time Limit

no code implementations 19 Jun 2020 Ruitu Xu, Lin Chen, Amin Karbasi

In this paper, we establish the ordinary differential equation (ODE) that underlies the training dynamics of Model-Agnostic Meta-Learning (MAML).

Meta-Learning

Continuous Submodular Maximization: Beyond DR-Submodularity

no code implementations NeurIPS 2020 Moran Feldman, Amin Karbasi

We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\varepsilon)$-approximation guarantee while performing $O(n/\varepsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\varepsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem).

Safe Learning under Uncertain Objectives and Constraints

no code implementations 23 Jun 2020 Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani

More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$ gradient oracle complexity (resp.

Multiple Descent: Design Your Own Generalization Curve

no code implementations NeurIPS 2021 Lin Chen, Yifei Min, Mikhail Belkin, Amin Karbasi

This paper explores the generalization loss of linear regression in variably parameterized families of models, both under-parameterized and over-parameterized.

regression

Streaming Submodular Maximization under a k-Set System Constraint

no code implementations ICML 2020 Ran Haba, Ehsan Kazemi, Moran Feldman, Amin Karbasi

Moreover, we propose the first streaming algorithms for monotone submodular maximization subject to $k$-extendible and $k$-system constraints.

Data Summarization Movie Recommendation

Online MAP Inference of Determinantal Point Processes

no code implementations NeurIPS 2020 Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam

In this paper, we provide an efficient approximation algorithm for finding the most likely configuration (MAP) of size $k$ for Determinantal Point Processes (DPP) in the online setting where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory.

Point Processes
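
As background only (this is not the paper's online algorithm), the standard offline greedy heuristic for size-$k$ DPP MAP repeatedly adds the item giving the largest log-determinant of the selected kernel submatrix; the NumPy sketch below illustrates the objective being approximated.

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Offline greedy sketch for size-k DPP MAP: L is the (PSD) DPP kernel;
    at each step add the item maximizing log det of the selected submatrix."""
    n = L.shape[0]
    selected = []
    for _ in range(min(k, n)):
        best_item, best_val = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            val = logdet if sign > 0 else -np.inf
            if val > best_val:
                best_item, best_val = i, val
        if best_item is None:
            break
        selected.append(best_item)
    return selected
```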

Submodularity in Action: From Machine Learning to Signal Processing Applications

no code implementations 17 Jun 2020 Ehsan Tohidi, Rouhollah Amiri, Mario Coutino, David Gesbert, Geert Leus, Amin Karbasi

We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity, which enables efficient optimization.

BIG-bench Machine Learning

Batched Neural Bandits

no code implementations 25 Feb 2021 Quanquan Gu, Amin Karbasi, Khashayar Khosravi, Vahab Mirrokni, Dongruo Zhou

In many sequential decision-making problems, the individuals are split into several batches and the decision-maker is only allowed to change her policy at the end of batches.

Decision Making

Federated Functional Gradient Boosting

no code implementations 11 Mar 2021 Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi

First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.

Federated Learning

The Power of Subsampling in Submodular Maximization

no code implementations 6 Apr 2021 Christopher Harshaw, Ehsan Kazemi, Moran Feldman, Amin Karbasi

We propose subsampling as a unified algorithmic technique for submodular maximization in centralized and online settings.

Movie Recommendation Video Summarization

Learning and Certification under Instance-targeted Poisoning

no code implementations 18 May 2021 Ji Gao, Amin Karbasi, Mohammad Mahmoody

In this paper, we study PAC learnability and certification of predictions under instance-targeted poisoning attacks, where the adversary who knows the test instance may change a fraction of the training set with the goal of fooling the learner at the test instance.

PAC learning

Parallelizing Thompson Sampling

no code implementations NeurIPS 2021 Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan

How can we make use of information parallelism in online decision making problems while efficiently balancing the exploration-exploitation trade-off?

Decision Making Thompson Sampling
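
One illustrative way to use information parallelism is to draw an entire batch of actions from the same posterior and update the posterior only after the batch's rewards arrive. The Bernoulli/Beta sketch below is an assumption-laden illustration (conjugate prior, fixed batch size, `pull` reward oracle), not the paper's specific procedure or analysis.

```python
import random

def batched_thompson_sampling(n_arms, pull, n_batches, batch_size, rng=random):
    """Batched Thompson sampling sketch for Bernoulli rewards: all actions in a
    batch are sampled from the same Beta posterior, which is updated only once
    the whole batch of rewards has been observed.  pull(a) returns 0 or 1."""
    alpha = [1.0] * n_arms                    # Beta(1, 1) priors
    beta = [1.0] * n_arms
    for _ in range(n_batches):
        actions = []
        for _ in range(batch_size):           # choose the batch in parallel
            samples = [rng.betavariate(alpha[a], beta[a]) for a in range(n_arms)]
            actions.append(max(range(n_arms), key=lambda a: samples[a]))
        rewards = [pull(a) for a in actions]  # feedback for the whole batch arrives
        for a, r in zip(actions, rewards):    # single posterior update per batch
            alpha[a] += r
            beta[a] += 1 - r
    return max(range(n_arms), key=lambda a: alpha[a] / (alpha[a] + beta[a]))
```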

Submodular + Concave

no code implementations NeurIPS 2021 Siddharth Mitra, Moran Feldman, Amin Karbasi

It has been well established that first order optimization methods can converge to the maximal objective value of concave functions and provide constant factor approximation guarantees for (non-convex/non-concave) continuous submodular functions.

An Exponential Improvement on the Memorization Capacity of Deep Threshold Networks

no code implementations NeurIPS 2021 Shashank Rajput, Kartik Sreenivasan, Dimitris Papailiopoulos, Amin Karbasi

Recently, Vershynin (2020) settled a long standing question by Baum (1988), proving that \emph{deep threshold} networks can memorize $n$ points in $d$ dimensions using $\widetilde{\mathcal{O}}(e^{1/\delta^2}+\sqrt{n})$ neurons and $\widetilde{\mathcal{O}}(e^{1/\delta^2}(d+\sqrt{n})+n)$ weights, where $\delta$ is the minimum distance between the points.

Memorization

Data-driven mapping between functional connectomes using optimal transport

no code implementations 2 Jul 2021 Javid Dadashkarimi, Amin Karbasi, Dustin Scheinost

Being able to map connectomes and derived results between different atlases without additional pre-processing is a crucial step in improving interpretation and generalization between studies that use different atlases.

Time Series Time Series Analysis

Black-Box Generalization: Stability of Zeroth-Order Learning

no code implementations 14 Feb 2022 Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi

These bounds coincide with those for SGD, and rather surprisingly are independent of $d$, $K$ and the batch size $m$, under appropriate choices of a slightly decreased learning rate.

Generalization Bounds

What Functions Can Graph Neural Networks Generate?

no code implementations 17 Feb 2022 Mohammad Fereydounian, Hamed Hassani, Amin Karbasi

We prove that: (i) a GNN, as a graph function, is necessarily permutation compatible; (ii) conversely, any permutation compatible function, when restricted on input graphs with distinct node features, can be generated by a GNN; (iii) for arbitrary node features (not necessarily distinct), a simple feature augmentation scheme suffices to generate a permutation compatible function by a GNN; (iv) permutation compatibility can be verified by checking only quadratically many functional constraints, rather than an exhaustive search over all the permutations; (v) GNNs can generate \textit{any} graph function once we augment the node features with node identities, thus going beyond graph isomorphism and permutation compatibility.

The Best of Both Worlds: Reinforcement Learning with Logarithmic Regret and Policy Switches

no code implementations 3 Mar 2022 Grigoris Velegkas, Zhuoran Yang, Amin Karbasi

In this paper, we study the problem of regret minimization for episodic Reinforcement Learning (RL) both in the model-free and the model-based setting.

reinforcement-learning Reinforcement Learning (RL)

Learning Distributionally Robust Models at Scale via Composite Optimization

no code implementations ICLR 2022 Farzin Haddadpour, Mohammad Mahdi Kamani, Mehrdad Mahdavi, Amin Karbasi

Distributionally robust optimization (DRO) has proven very effective for training machine learning models that are robust to distribution shifts in the data.

Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD

no code implementations 26 Apr 2022 Konstantinos E. Nikolakakis, Farzin Haddadpour, Amin Karbasi, Dionysios S. Kalogerias

For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions.

Scalable MCMC Sampling for Nonsymmetric Determinantal Point Processes

1 code implementation 1 Jul 2022 Insu Han, Mike Gartrell, Elvis Dohmatob, Amin Karbasi

In this work, we develop a scalable MCMC sampling algorithm for $k$-NDPPs with low-rank kernels, thus enabling runtime that is sublinear in $n$.

Point Processes

Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes

no code implementations 5 Oct 2022 Alkis Kalavasis, Grigoris Velegkas, Amin Karbasi

Second, we consider the problem of multiclass classification with structured data (such as data lying on a low dimensional manifold or satisfying margin conditions), a setting which is captured by partial concept classes (Alon, Hanneke, Holzman and Moran, FOCS '21).

Replicable Bandits

no code implementations 4 Oct 2022 Hossein Esfandiari, Alkis Kalavasis, Amin Karbasi, Andreas Krause, Vahab Mirrokni, Grigoris Velegkas

Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter.

Multi-Armed Bandits

On Optimal Learning Under Targeted Data Poisoning

no code implementations 6 Oct 2022 Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran

In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary in both realizable and agnostic settings.

Data Poisoning

Exact Gradient Computation for Spiking Neural Networks Through Forward Propagation

no code implementations 18 Oct 2022 Jane H. Lee, Saeid Haghighatshoar, Amin Karbasi

their weights, and (2) we propose a novel training algorithm, called \emph{forward propagation} (FP), that computes exact gradients for SNNs.

The Impossibility of Parallelizing Boosting

no code implementations 23 Jan 2023 Amin Karbasi, Kasper Green Larsen

The aim of boosting is to convert a sequence of weak learners into a strong learner.

Select without Fear: Almost All Mini-Batch Schedules Generalize Optimally

no code implementations 3 May 2023 Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias

We establish matching upper and lower generalization error bounds for mini-batch Gradient Descent (GD) training with either deterministic or stochastic, data-independent, but otherwise arbitrary batch selection rules.

Learning from Aggregated Data: Curated Bags versus Random Bags

no code implementations 16 May 2023 Lin Chen, Gang Fu, Amin Karbasi, Vahab Mirrokni

Our method is based on the observation that the sum of the gradients of the loss function on individual data examples in a curated bag can be computed from the aggregate label without the need for individual labels.

Statistical Indistinguishability of Learning Algorithms

no code implementations 23 May 2023 Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas

When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar?

Submodular Minimax Optimization: Finding Effective Sets

no code implementations 26 May 2023 Loay Mualem, Ethan R. Elenberg, Moran Feldman, Amin Karbasi

Despite the rich existing literature about minimax optimization in continuous settings, only very partial results of this kind have been obtained for combinatorial settings.

dialog state tracking Prompt Engineering +1

Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning

no code implementations 28 May 2023 Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas

Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and data distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks.

Data Compression

Langevin Thompson Sampling with Logarithmic Communication: Bandits and Reinforcement Learning

no code implementations 15 Jun 2023 Amin Karbasi, Nikki Lijing Kuang, Yi-An Ma, Siddharth Mitra

Thompson sampling (TS) is widely used in sequential decision making due to its ease of use and appealing empirical performance.

Decision Making Multi-Armed Bandits +3

Optimal Guarantees for Algorithmic Reproducibility and Gradient Complexity in Convex Optimization

no code implementations NeurIPS 2023 Liang Zhang, Junchi Yang, Amin Karbasi, Niao He

Particularly, given the inexact initialization oracle, our regularization-based algorithms achieve the best of both worlds, namely optimal reproducibility and near-optimal gradient complexity, for minimization and minimax optimization.

SubGen: Token Generation in Sublinear Time and Memory

no code implementations 8 Feb 2024 Amir Zandieh, Insu Han, Vahab Mirrokni, Amin Karbasi

In this work, our focus is on developing an efficient compression technique for the KV cache.

Clustering Online Clustering +1

Replicable Learning of Large-Margin Halfspaces

no code implementations 21 Feb 2024 Alkis Kalavasis, Amin Karbasi, Kasper Green Larsen, Grigoris Velegkas, Felix Zhou

Departing from the requirement of polynomial time algorithms, using the DP-to-Replicability reduction of Bun, Gaboardi, Hopkins, Impagliazzo, Lei, Pitassi, Sorrell, and Sivakumar [STOC, 2023], we show how to obtain a replicable algorithm for large-margin halfspaces with improved sample complexity with respect to the margin parameter $\tau$, but running time doubly exponential in $1/\tau^2$ and worse sample complexity dependence on $\epsilon$ than one of our previous algorithms.
