no code implementations • 22 Feb 2024 • Vaggos Chatziafratis, Ishani Karmarkar, Ellen Vitercik
We approach this problem by introducing a notion of size generalization for the accuracy of clustering algorithms.
1 code implementation • 11 Oct 2022 • Vaggos Chatziafratis, Ioannis Panageas, Clayton Sanford, Stelios Andrew Stavroulakis
Recurrent Neural Networks (RNNs) frequently exhibit complicated dynamics, and their sensitivity to the initialization process often renders them notoriously hard to train.
no code implementations • 3 Aug 2022 • Fivos Kalogiannis, Ioannis Anagnostides, Ioannis Panageas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Vaggos Chatziafratis, Stelios Stavroulakis
In this work, we depart from those prior results by investigating infinite-horizon \emph{adversarial team Markov games}, a natural and well-motivated class of games in which a team of identically-interested players -- in the absence of any explicit coordination or communication -- is competing against an adversarial player.
no code implementations • 19 Oct 2021 • Clayton Sanford, Vaggos Chatziafratis
Given a target function $f$, how large must a neural network be in order to approximate $f$?
2 code implementations • NeurIPS 2020 • Ines Chami, Albert Gu, Vaggos Chatziafratis, Christopher Ré
Recently, Dasgupta reframed HC as a discrete optimization problem by introducing a global cost function measuring the quality of a given tree.
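Dasgupta's cost sums, over every weighted edge {i, j}, the edge weight times the number of leaves under the lowest common ancestor of i and j, so a good tree separates heavy edges as low as possible. A minimal sketch, assuming the tree is encoded as nested tuples of leaf labels and the weights as a dict keyed by frozensets (both representations are illustrative choices, not the paper's):

```python
def leaves(tree):
    # a leaf is any non-tuple node; internal nodes are (left, right) pairs
    if not isinstance(tree, tuple):
        return {tree}
    left, right = tree
    return leaves(left) | leaves(right)

def dasgupta_cost(tree, weights):
    """Sum over edges {i, j} of w_ij * |leaves(T[i v j])|, where
    T[i v j] is the subtree rooted at the lowest common ancestor
    of i and j."""
    if not isinstance(tree, tuple):
        return 0
    left, right = tree
    n = len(leaves(tree))
    cost = 0
    # every edge split at this node pays n, the size of this subtree
    for i in leaves(left):
        for j in leaves(right):
            cost += weights.get(frozenset((i, j)), 0) * n
    return cost + dasgupta_cost(left, weights) + dasgupta_cost(right, weights)
```

For example, on the tree ((0, 1), 2) the heavy edge {0, 1} pays only 2 leaves while the lighter edges to leaf 2 pay all 3, matching the intuition that similar points should be merged low in the tree.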
no code implementations • ICML 2020 • Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas
The expressivity of neural networks as a function of their depth, width and type of activation units has been an important question in deep learning theory.
no code implementations • ICLR 2020 • Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang
Motivated by our observation that the triangle waves used in Telgarsky's work contain points of period 3, a period that is special because, by the celebrated Li-Yorke result, it implies chaotic behavior, we proceed to give general lower bounds on the width needed to represent periodic functions as a function of the depth.
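To make the period-3 notion concrete, here is a small exact-arithmetic check using the standard tent map as an illustrative triangle wave (a stand-in, not necessarily the exact construction from Telgarsky's work): the orbit of 2/7 returns to itself after exactly three steps.

```python
from fractions import Fraction

def tent(x):
    # piecewise-linear "triangle wave" map on [0, 1]
    return 2 * x if x <= Fraction(1, 2) else 2 * (1 - x)

x = Fraction(2, 7)
orbit = [x]
for _ in range(2):
    orbit.append(tent(orbit[-1]))
# orbit is 2/7 -> 4/7 -> 6/7, and tent(6/7) = 2/7 again,
# so 2/7 is a point of period 3
```

By the Li-Yorke theorem, the existence of such a period-3 point for a continuous map on an interval already implies the map has points of every period.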
no code implementations • 29 Nov 2019 • Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan
In particular, our adversarially robust PCA primitive leads to computationally efficient and robust algorithms for both unsupervised and supervised learning problems such as clustering and learning adversarially robust classifiers.
no code implementations • 7 Aug 2018 • Moses Charikar, Vaggos Chatziafratis, Rad Niazadeh
Hierarchical Clustering (HC) is a widely studied problem in exploratory data analysis, usually tackled by simple agglomerative procedures like average-linkage, single-linkage or complete-linkage.
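For concreteness, a naive sketch of one such agglomerative procedure, single-linkage, written in plain Python with a hypothetical `dist` callback (library implementations such as SciPy's `linkage` are far more efficient; this only shows the merge rule):

```python
def single_linkage(points, dist):
    """Naive agglomerative clustering: repeatedly merge the two
    clusters whose closest pair of points is nearest (single
    linkage).  Returns the merge tree as nested tuples of point
    indices."""
    clusters = {i: i for i in range(len(points))}   # cluster id -> tree
    members = {i: [i] for i in range(len(points))}  # cluster id -> point indices
    next_id = len(points)
    while len(clusters) > 1:
        # find the pair of clusters at minimum single-linkage distance
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    d = min(dist(points[i], points[j])
                            for i in members[a] for j in members[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
        _, a, b = best
        clusters[next_id] = (clusters.pop(a), clusters.pop(b))
        members[next_id] = members.pop(a) + members.pop(b)
        next_id += 1
    return clusters.popitem()[1]
```

Average-linkage and complete-linkage differ only in the inner score: the mean (respectively maximum) pairwise distance between the two clusters instead of the minimum.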
no code implementations • 3 Jul 2018 • Vaggos Chatziafratis, Tim Roughgarden, Joshua R. Wang
We prove that the evolution of weight vectors in online gradient descent can encode arbitrary polynomial-space computations, even in very simple learning settings.
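As context for what "evolution of weight vectors" means here, the sketch below shows the standard online gradient descent update on squared loss for a one-dimensional linear predictor; this is only the generic update rule whose trajectories the result studies, not the paper's encoding construction:

```python
def ogd(examples, eta=0.1, w0=0.0):
    """Online gradient descent on squared loss for a 1-D linear
    predictor: after each example (x, y) arrives, update
    w <- w - eta * d/dw (w*x - y)^2.  Returns the trajectory of w."""
    w = w0
    trajectory = [w]
    for x, y in examples:
        grad = 2 * (w * x - y) * x  # gradient of the per-example loss
        w = w - eta * grad
        trajectory.append(w)
    return trajectory
```

The result concerns precisely such trajectories: the sequence of weight vectors produced by these updates can itself simulate arbitrary polynomial-space computations.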
no code implementations • ICML 2018 • Vaggos Chatziafratis, Rad Niazadeh, Moses Charikar
For many real-world applications, we would like to exploit prior information about the data that imposes constraints on the clustering hierarchy, and is not captured by the set of features available to the algorithm.