1 code implementation • NeurIPS 2021 • Stefani Karp, Ezra Winston, Yuanzhi Li, Aarti Singh
We therefore propose the "local signal adaptivity" (LSA) phenomenon as one explanation for the superiority of neural networks over kernel methods.
1 code implementation • 5 Apr 2022 • Yusha Liu, Yichong Xu, Nihar B. Shah, Aarti Singh
Our approach addresses the two aforementioned challenges by: (i) ensuring that rankings are incorporated into the updated scores in the same manner for all papers, thereby mitigating arbitrariness, and (ii) allowing seamless use of existing interfaces and workflows designed for scores.
no code implementations • 16 Jun 2018 • Ivan Stelmakh, Nihar B. Shah, Aarti Singh
Our fairness objective is to maximize the review quality of the most disadvantaged paper, in contrast to the commonly used objective of maximizing the total quality over all papers.
no code implementations • ICML 2018 • Simon S. Du, Jason D. Lee, Yuandong Tian, Barnabas Poczos, Aarti Singh
We consider the problem of learning a one-hidden-layer neural network with non-overlapping convolutional layer and ReLU activation, i.e., $f(\mathbf{Z}, \mathbf{w}, \mathbf{a}) = \sum_j a_j\sigma(\mathbf{w}^T\mathbf{Z}_j)$, in which both the convolutional weights $\mathbf{w}$ and the output weights $\mathbf{a}$ are parameters to be learned.
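The architecture in this formula is simple enough to write out directly. Below is a minimal NumPy sketch of the forward pass (the function and variable names are ours, not from the paper): the rows $\mathbf{Z}_j$ share one filter $\mathbf{w}$, and the patch outputs are combined by the weights $\mathbf{a}$.

```python
import numpy as np

def conv_one_layer(Z, w, a):
    """f(Z, w, a) = sum_j a_j * relu(w^T Z_j), where row Z_j of Z is the
    j-th non-overlapping patch of the input, w is the shared convolutional
    filter, and a holds the output weights."""
    return a @ np.maximum(Z @ w, 0.0)

# Two patches of dimension three.
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0]])
w = np.array([2.0, 2.0, 0.0])
a = np.array([1.0, 1.0])
# Patch 1: relu(2) = 2; patch 2: relu(-2) = 0; output is 1*2 + 1*0 = 2.
print(conv_one_layer(Z, w, a))  # 2.0
```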
no code implementations • ICML 2018 • Yichong Xu, Sivaraman Balakrishnan, Aarti Singh, Artur Dubrawski
In supervised learning, we typically leverage a fully labeled dataset to design methods for function estimation or prediction.
no code implementations • 26 May 2018 • Simon S. Du, Yining Wang, Sivaraman Balakrishnan, Pradeep Ravikumar, Aarti Singh
We first show that a simple local binning median step can effectively remove the adversarial noise, and this median estimator is minimax optimal up to absolute constants over the Hölder function class with smoothness parameters smaller than or equal to 1.
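As a rough illustration of the binned-median idea (not the paper's exact estimator; all names and constants below are our own choices), the sketch bins the design points, takes the median of the responses in each bin, and stays accurate even when a tenth of the responses are grossly corrupted.

```python
import numpy as np

def binned_median_estimate(x, y, n_bins):
    """Estimate a regression function on [0, 1] by the median of the
    responses within each bin; medians tolerate a fraction of grossly
    corrupted responses, unlike bin means."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    return np.array([np.median(y[idx == b]) for b in range(n_bins)])

rng = np.random.default_rng(0)
n = 3000
x = rng.uniform(0.0, 1.0, n)
y = x + 0.01 * rng.standard_normal(n)  # true regression function f(x) = x
y[: n // 10] += 100.0                  # grossly corrupt 10% of responses
est = binned_median_estimate(x, y, 10)
centers = np.linspace(0.05, 0.95, 10)  # bin centers, where f(center) = center
print(np.max(np.abs(est - centers)))   # small despite the corruption
```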
no code implementations • NeurIPS 2018 • Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh
It is widely believed that the practical success of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) owes to the fact that CNNs and RNNs use a more compact parametric representation than their Fully-Connected Neural Network (FNN) counterparts, and consequently require fewer training examples to accurately estimate their parameters.
no code implementations • 22 Apr 2018 • Yo Joong Choe, Sivaraman Balakrishnan, Aarti Singh, Jean M. Vettel, Timothy Verstynen
If communication efficiency is fundamentally constrained by the integrity along the entire length of a white matter bundle, then variability in the functional dynamics of brain networks should be associated with variability in the local connectome.
no code implementations • NeurIPS 2018 • Yining Wang, Sivaraman Balakrishnan, Aarti Singh
In this setup, an algorithm is allowed to adaptively query the underlying function at different locations and receives noisy evaluations of function values at the queried points (i.e., the algorithm has access to zeroth-order information).
no code implementations • 29 Oct 2017 • Yining Wang, Simon Du, Sivaraman Balakrishnan, Aarti Singh
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries.
no code implementations • 13 Feb 2018 • Yifan Wu, Barnabas Poczos, Aarti Singh
A major challenge in understanding the generalization of deep learning is to explain why (stochastic) gradient descent can exploit the network architecture to find solutions that have good generalization performance when using high capacity models.
no code implementations • 17 May 2015 • Yining Wang, Aarti Singh
We consider the problem of matrix column subset selection, which selects a subset of columns from an input matrix such that the input can be well approximated by the span of the selected columns.
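A quick way to see the objective: the quality of a column subset is the Frobenius error of projecting the input onto the span of the chosen columns. The toy code below is our own illustration, not the paper's selection algorithm; it computes that error and confirms it vanishes when the chosen columns span a low-rank matrix.

```python
import numpy as np

def css_error(A, cols):
    """Frobenius error of approximating A by the projection onto the
    span of its selected columns."""
    C = A[:, cols]
    W = np.linalg.lstsq(C, A, rcond=None)[0]  # best coefficients
    return np.linalg.norm(A - C @ W)

# A rank-2 matrix: any two linearly independent columns span its range,
# so the approximation error is (numerically) zero.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
print(css_error(A, [0, 1]))  # ~0
```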
no code implementations • 9 Jan 2016 • Yining Wang, Adams Wei Yu, Aarti Singh
We derive computationally tractable methods to select a small subset of experiment settings from a large pool of given design points.
no code implementations • 14 Nov 2017 • Zeyuan Allen-Zhu, Yuanzhi Li, Aarti Singh, Yining Wang
The experimental design problem concerns the selection of k points from a potentially large design pool of p-dimensional vectors, so as to maximize the statistical efficiency regressed on the selected k design points.
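A standard heuristic for this selection problem, shown here only as an illustrative baseline rather than the paper's method, is greedy D-optimal design: repeatedly add the design point that most increases the log-determinant of the information matrix.

```python
import numpy as np

def greedy_d_optimal(X, k, ridge=1e-6):
    """Greedy D-optimal design: pick k rows of X, each time adding the
    point that most increases log det(sum_i x_i x_i^T). A small ridge
    keeps the information matrix invertible before p points are chosen."""
    n, p = X.shape
    chosen = []
    M = ridge * np.eye(p)
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # Matrix determinant lemma: det(M + x x^T) = det(M) (1 + x^T M^{-1} x),
        # so the best point maximizes x^T M^{-1} x.
        gains = [(-np.inf if i in chosen else X[i] @ Minv @ X[i]) for i in range(n)]
        best = int(np.argmax(gains))
        chosen.append(best)
        M = M + np.outer(X[best], X[best])
    return chosen

# Long, mutually orthogonal points are the most informative.
X = np.array([[3.0, 0.0], [0.0, 3.0], [0.1, 0.1], [0.2, 0.0]])
print(greedy_d_optimal(X, 2))  # [0, 1]
```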
no code implementations • NeurIPS 2017 • Simon S. Du, Chi Jin, Jason D. Lee, Michael. I. Jordan, Barnabas Poczos, Aarti Singh
Although gradient descent (GD) almost always escapes saddle points asymptotically [Lee et al., 2016], this paper shows that even with fairly natural random initialization schemes and non-pathological functions, GD can be significantly slowed down by saddle points, taking exponential time to escape.
no code implementations • NeurIPS 2017 • Simon Shaolei Du, Jayanth Koushik, Aarti Singh, Barnabas Poczos
We consider the Hypothesis Transfer Learning (HTL) problem where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain.
no code implementations • NeurIPS 2017 • Simon S. Du, Yining Wang, Aarti Singh
This observation leads to many interesting results on general high-rank matrix estimation problems, which we briefly summarize below ($A$ is an $n\times n$ high-rank PSD matrix and $A_k$ is the best rank-$k$ approximation of $A$): (1) High-rank matrix completion: By observing $\Omega(\frac{n\max\{\epsilon^{-4}, k^2\}\mu_0^2\|A\|_F^2\log n}{\sigma_{k+1}(A)^2})$ elements of $A$ where $\sigma_{k+1}\left(A\right)$ is the $\left(k+1\right)$-th singular value of $A$ and $\mu_0$ is the incoherence, the truncated SVD on a zero-filled matrix satisfies $\|\widehat{A}_k-A\|_F \leq (1+O(\epsilon))\|A-A_k\|_F$ with high probability.
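The completion procedure in result (1) has a very short implementation: zero-fill the missing entries, rescale by the inverse sampling rate, and truncate the SVD. The sketch below is our own simplified demo (dimensions and sampling rate are arbitrary choices) showing the rank-$k$ estimate is far more accurate than the trivial zero estimator.

```python
import numpy as np

def svd_complete(A_obs, mask, k):
    """Rank-k estimate from partial observations: zero-fill unobserved
    entries, rescale by the inverse sampling rate, truncate the SVD."""
    p = mask.mean()
    U, s, Vt = np.linalg.svd(np.where(mask, A_obs, 0.0) / p)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(2)
n, k = 100, 3
B = rng.standard_normal((n, k))
A = B @ B.T                      # rank-3 PSD matrix
mask = rng.random((n, n)) < 0.8  # observe ~80% of the entries
err = np.linalg.norm(svd_complete(A, mask, k) - A) / np.linalg.norm(A)
print(err)  # well below 1, the relative error of estimating A by zero
```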
no code implementations • 9 Feb 2017 • Yining Wang, Jialei Wang, Sivaraman Balakrishnan, Aarti Singh
We consider the problems of estimation and of constructing component-wise confidence intervals in a sparse high-dimensional linear regression model when some covariates of the design matrix are missing completely at random.
no code implementations • 19 Apr 2017 • Yichong Xu, Hongyang Zhang, Aarti Singh, Kyle Miller, Artur Dubrawski
We study the problem of interactively learning a binary classifier using noisy labeling and pairwise comparison oracles, where the comparison oracle answers which one in the given two instances is more likely to be positive.
no code implementations • 24 Feb 2017 • Simon S. Du, Sivaraman Balakrishnan, Aarti Singh
Many conventional statistical procedures are extremely sensitive to seemingly minor deviations from modeling assumptions.
no code implementations • 3 Apr 2014 • Akshay Krishnamurthy, Martin Azizyan, Aarti Singh
Our theoretical results show that even a constant number of measurements per column suffices to approximate the principal subspace to arbitrary precision, provided that the number of vectors is large.
no code implementations • 24 Oct 2016 • Yining Wang, Yu-Xiang Wang, Aarti Singh
Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace.
no code implementations • NeurIPS 2016 • Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik
Recommendation and collaborative filtering systems are important in modern information and e-commerce applications.
no code implementations • 4 Apr 2015 • Yining Wang, Yu-Xiang Wang, Aarti Singh
A line of recent work (4, 19, 24, 20) provided strong theoretical guarantees for sparse subspace clustering (4), the state-of-the-art algorithm for subspace clustering, on both noiseless and noisy data sets.
no code implementations • 1 Feb 2016 • Gautam Dasarathy, Aarti Singh, Maria-Florina Balcan, Jong Hyuk Park
The problem of learning the structure of a high dimensional graphical model from data has received considerable attention in recent years.
no code implementations • 21 Jul 2015 • Siheng Chen, Rohan Varma, Aarti Singh, Jelena Kovačević
In this paper, we consider a statistical problem of learning a linear model from noisy samples.
no code implementations • 6 Feb 2016 • Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman
We prove two results that hold for all classifiers in any dimension: if its true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d, n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.
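The permutation-based test in (a) is easy to sketch. Below, a deliberately simple nearest-centroid classifier stands in for the generic classifier (the result holds for any classifier; this choice is ours), and the permutation null calibrates its in-sample accuracy.

```python
import numpy as np

def classifier_accuracy(X, Y):
    """In-sample accuracy of a nearest-centroid classifier separating
    sample X (label 0) from sample Y (label 1)."""
    mu0, mu1 = X.mean(axis=0), Y.mean(axis=0)
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    pred = np.linalg.norm(Z - mu1, axis=1) < np.linalg.norm(Z - mu0, axis=1)
    return float(np.mean(pred == labels))

def permutation_pvalue(X, Y, n_perm=200, seed=0):
    """P-value of the observed accuracy under the permutation null, which
    is valid (exchangeability) even with in-sample accuracy."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    obs, n = classifier_accuracy(X, Y), len(X)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        null.append(classifier_accuracy(Z[idx[:n]], Z[idx[n:]]))
    return (1 + sum(a >= obs for a in null)) / (1 + n_perm)

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
Y = rng.standard_normal((100, 5)) + 1.0  # mean-shift alternative
print(permutation_pvalue(X, Y))          # small: the shift is detected
```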
no code implementations • 23 Jan 2016 • Aaditya Ramdas, David Isenberg, Aarti Singh, Larry Wasserman
Linear independence testing is a fundamental information-theoretic and statistical problem that can be posed as follows: given $n$ points $\{(X_i, Y_i)\}^n_{i=1}$ from a $p+q$ dimensional multivariate distribution where $X_i \in \mathbb{R}^p$ and $Y_i \in\mathbb{R}^q$, determine whether $a^T X$ and $b^T Y$ are uncorrelated for every $a \in \mathbb{R}^p, b\in \mathbb{R}^q$ or not.
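Uncorrelatedness of $a^T X$ and $b^T Y$ for every $a, b$ is equivalent to the population cross-covariance being zero, which the largest canonical correlation measures. A plug-in sketch of this quantity (our own illustration, not the paper's test statistic):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest sample canonical correlation between X (n x p) and Y (n x q).
    At the population level this is 0 iff a^T X and b^T Y are uncorrelated
    for every a and b, i.e. iff the cross-covariance Sigma_XY vanishes."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxx, Syy = Xc.T @ Xc / len(X), Yc.T @ Yc / len(Y)
    Sxy = Xc.T @ Yc / len(X)
    # Squared canonical correlations are the eigenvalues of
    # Sxx^{-1} Sxy Syy^{-1} Syx.
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    return float(np.sqrt(max(np.linalg.eigvals(M).real.max(), 0.0)))

rng = np.random.default_rng(4)
X = rng.standard_normal((2000, 3))
Y_indep = rng.standard_normal((2000, 2))              # independent of X
Y_dep = np.column_stack([X[:, 0] + 0.1 * rng.standard_normal(2000),
                         rng.standard_normal(2000)])  # shares X's first coordinate
print(max_canonical_corr(X, Y_indep))  # near 0
print(max_canonical_corr(X, Y_dep))    # near 1
```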
no code implementations • 16 Dec 2015 • Siheng Chen, Rohan Varma, Aarti Singh, Jelena Kovačević
For each class, we provide an explicit definition of the graph signals and construct a corresponding graph dictionary with desirable properties.
no code implementations • 20 Jun 2014 • Yining Wang, Aarti Singh
We present a simple noise-robust margin-based active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise.
no code implementations • 2 Jun 2015 • Martin Azizyan, Akshay Krishnamurthy, Aarti Singh
This paper studies the problem of estimating the covariance of a collection of vectors using only highly compressed measurements of each vector.
no code implementations • 4 Aug 2015 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
We formally characterize the power of popular tests for GDA like the Maximum Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent variants of the Energy Distance with the Euclidean norm (eED) in the high-dimensional MDA regime.
no code implementations • 21 Apr 2015 • Siheng Chen, Rohan Varma, Aarti Singh, Jelena Kovačević
We study signal recovery on graphs based on two sampling strategies: random sampling and experimentally designed sampling.
no code implementations • 15 May 2015 • Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman
For larger $\sigma$, the \textit{unflattening} of the regression function on convolution with uniform noise, along with its local antisymmetry around the threshold, together yield a behaviour where noise \textit{appears} to be beneficial.
no code implementations • 15 May 2015 • Aaditya Ramdas, Aarti Singh
Combining these two parts yields an algorithm that solves stochastic convex optimization of uniformly convex and smooth functions using only noisy gradient signs by repeatedly performing active learning; it achieves optimal rates and is adaptive to all unknown convexity and smoothness parameters.
no code implementations • 3 May 2015 • Martin Azizyan, Yen-Chi Chen, Aarti Singh, Larry Wasserman
We study the risk of mode-based clustering.
no code implementations • 9 Jun 2014 • Sashank J. Reddi, Aaditya Ramdas, Barnabás Póczos, Aarti Singh, Larry Wasserman
This paper is about two related decision theoretic problems, nonparametric two-sample testing and independence testing.
no code implementations • 23 Nov 2014 • Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry Wasserman
The current literature is split into two kinds of tests - those which are consistent without any assumptions about how the distributions may differ (\textit{general} alternatives), and those which are designed to specifically test easier alternatives, like a difference in means (\textit{mean-shift} alternatives).
no code implementations • 28 Mar 2013 • Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Larry Wasserman, Sivaraman Balakrishnan, Aarti Singh
Persistent homology is a method for probing topological properties of point clouds and functions.
no code implementations • 14 Jul 2014 • Akshay Krishnamurthy, Aarti Singh
We show that adaptive sampling allows one to eliminate standard incoherence assumptions on the matrix row space that are necessary for passive sampling procedures.
no code implementations • 9 Jun 2014 • Larry Wasserman, Martin Azizyan, Aarti Singh
We provide explicit bounds on the error rate of the resulting clustering.
no code implementations • 9 Jun 2014 • Martin Azizyan, Aarti Singh, Larry Wasserman
We consider the problem of clustering data points in high dimensions, i.e., when the number of data points may be much smaller than the number of dimensions.
no code implementations • 10 Nov 2013 • Junier B. Oliva, Barnabas Poczos, Timothy Verstynen, Aarti Singh, Jeff Schneider, Fang-Cheng Yeh, Wen-Yih Tseng
We present the FuSSO, a functional analogue to the LASSO, that efficiently finds a sparse set of functional input covariates to regress a real-valued response against.
no code implementations • 1 May 2013 • Akshay Krishnamurthy, James Sharpnack, Aarti Singh
We study the localization of a cluster of activated vertices in a graph, from adaptively designed compressive measurements.
no code implementations • NeurIPS 2013 • James Sharpnack, Akshay Krishnamurthy, Aarti Singh
The detection of anomalous activity in graphs is a statistical problem that arises in many applications, such as network surveillance, disease outbreak detection, and activity monitoring in social networks.
no code implementations • NeurIPS 2013 • Akshay Krishnamurthy, Aarti Singh
In the absence of noise, we show that one can exactly recover a $n \times n$ matrix of rank $r$ from merely $\Omega(n r^{3/2}\log(r))$ matrix entries.
no code implementations • 29 Jul 2013 • Sivaraman Balakrishnan, Alessandro Rinaldo, Aarti Singh, Larry Wasserman
In this note we use a different construction based on the direct analysis of the likelihood ratio test to show that the upper bound of Niyogi, Smale and Weinberger is in fact tight, thus establishing rate optimal asymptotic minimax bounds for the problem.
no code implementations • NeurIPS 2013 • Sivaraman Balakrishnan, Srivatsan Narayanan, Alessandro Rinaldo, Aarti Singh, Larry Wasserman
In this paper we investigate the problem of estimating the cluster tree for a density $f$ supported on or near a smooth $d$-dimensional manifold $M$ isometrically embedded in $\mathbb{R}^D$.
no code implementations • 15 Sep 2012 • Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh
We consider the problems of detection and localization of a contiguous block of weak activation in a large matrix, from a small number of noisy, possibly adaptive, compressive (linear) measurements.
no code implementations • NeurIPS 2013 • Martin Azizyan, Aarti Singh, Larry Wasserman
While several papers have investigated computationally and statistically efficient methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not well-understood.
no code implementations • 7 Apr 2012 • Martin Azizyan, Aarti Singh, Larry Wasserman
Semisupervised methods are techniques for using labeled data $(X_1, Y_1),\ldots,(X_n, Y_n)$ together with unlabeled data $X_{n+1},\ldots, X_N$ to make predictions.
1 code implementation • 2 Nov 2013 • Frédéric Chazal, Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Aarti Singh, Larry Wasserman
Persistent homology probes topological properties from point clouds and functions.
no code implementations • ICLR 2019 • Simon S. Du, Xiyu Zhai, Barnabas Poczos, Aarti Singh
One of the mysteries in the success of neural networks is that randomly initialized first order methods like gradient descent can achieve zero training loss even though the objective function is non-convex and non-smooth.
no code implementations • 25 Oct 2018 • Yining Wang, Erva Ulu, Aarti Singh, Levent Burak Kara
Our approach uses a computationally tractable experimental design method to select the number of sample force locations based on geometry only, without inspecting the stress response that requires computationally expensive finite-element analysis.
no code implementations • NeurIPS 2018 • Simon S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan R. Salakhutdinov, Aarti Singh
We show that for an $m$-dimensional convolutional filter with linear activation acting on a $d$-dimensional input, the sample complexity of achieving population prediction error of $\epsilon$ is $\widetilde{O}(m/\epsilon^2)$, whereas the sample complexity for its FNN counterpart is lower bounded by $\Omega(d/\epsilon^2)$ samples.
no code implementations • NeurIPS 2017 • Yichong Xu, Hongyang Zhang, Kyle Miller, Aarti Singh, Artur Dubrawski
We study the problem of interactively learning a binary classifier using noisy labeling and pairwise comparison oracles, where the comparison oracle answers which one in the given two instances is more likely to be positive.
no code implementations • NeurIPS 2015 • Yining Wang, Yu-Xiang Wang, Aarti Singh
Subspace clustering is an unsupervised learning problem that aims at grouping data points into multiple "clusters" so that data points in a single cluster lie approximately on a low-dimensional linear subspace.
no code implementations • NeurIPS 2011 • Mladen Kolar, Sivaraman Balakrishnan, Alessandro Rinaldo, Aarti Singh
We consider the problem of identifying a sparse set of relevant columns and rows in a large data matrix with highly corrupted entries.
no code implementations • NeurIPS 2011 • Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, Aarti Singh
Although spectral clustering has enjoyed considerable empirical success in machine learning, its theoretical properties are not yet fully developed.
no code implementations • NeurIPS 2010 • James Sharpnack, Aarti Singh
We consider the problem of identifying an activation pattern in a complex, large-scale network that is embedded in very noisy measurements.
no code implementations • NeurIPS 2008 • Aarti Singh, Robert Nowak, Jerry Zhu
We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.
no code implementations • ICML 2017 • Pengtao Xie, Aarti Singh, Eric P. Xing
Latent space models (LSMs) provide a principled and effective way to extract hidden patterns from observed data.
no code implementations • ICML 2017 • Zeyuan Allen-Zhu, Yuanzhi Li, Aarti Singh, Yining Wang
We consider computationally tractable methods for the experimental design problem, where k out of n design points of dimension p are selected so that certain optimality criteria are approximately satisfied.
no code implementations • ICML 2018 • Yichong Xu, Hariank Muthakana, Sivaraman Balakrishnan, Aarti Singh, Artur Dubrawski
Finally, we present experiments that show the efficacy of RR and investigate its robustness to various sources of noise and model-misspecification.
no code implementations • 14 Oct 2019 • Yichong Xu, Xi Chen, Aarti Singh, Artur Dubrawski
The Thresholding Bandit Problem (TBP) aims to find the set of arms with mean rewards greater than a given threshold.
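As a point of reference (not the algorithm studied in the paper), the naive TBP baseline pulls every arm equally and thresholds the empirical means; adaptive algorithms instead concentrate pulls on arms whose means are close to the threshold.

```python
import numpy as np

def uniform_tbp(means, threshold, pulls_per_arm, seed=0):
    """Pull every arm equally (unit-variance Gaussian rewards), then
    return the arms whose empirical mean exceeds the threshold."""
    rng = np.random.default_rng(seed)
    est = [rng.normal(m, 1.0, pulls_per_arm).mean() for m in means]
    return {i for i, e in enumerate(est) if e > threshold}

means = [0.1, 0.4, 0.9, 1.3]
# With 400 pulls the standard error is 0.05, so the answer is
# {2, 3} except with negligible probability.
print(uniform_tbp(means, threshold=0.7, pulls_per_arm=400))
```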
no code implementations • 16 Oct 2019 • Yuexin Wu, Yichong Xu, Aarti Singh, Yiming Yang, Artur Dubrawski
Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data.
no code implementations • 3 Nov 2019 • Yichong Xu, Aparna Joshi, Aarti Singh, Artur Dubrawski
We consider a novel setting of zeroth order non-convex optimization, where in addition to querying the function value at a given point, we can also duel two points and get the point with the larger function value.
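The duelling oracle alone already supports a simple comparison-based search, sketched below with our own (arbitrary) step-size schedule: perturb the current point, duel it against the perturbation, and keep the winner.

```python
import numpy as np

def dueling_search(duel, x0, step=0.5, iters=200, seed=0):
    """Comparison-based maximization: duel the current point against a
    random perturbation and keep whichever the oracle says is better."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(1, iters + 1):
        y = x + (step / np.sqrt(t)) * rng.standard_normal(x.shape)
        x = duel(x, y)  # oracle returns the point with larger function value
    return x

# Maximize f(x) = -||x||^2 (maximizer at the origin) from comparisons only.
f = lambda x: -np.sum(x ** 2)
duel = lambda a, b: a if f(a) >= f(b) else b
x = dueling_search(duel, x0=[2.0, -2.0])
print(np.linalg.norm(x))  # far closer to the origin than the start
```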
no code implementations • NeurIPS 2020 • Yichong Xu, Ruosong Wang, Lin F. Yang, Aarti Singh, Artur Dubrawski
If preferences are stochastic, and the preference probability relates to the hidden reward values, we present algorithms for PbRL, both with and without a simulator, that are able to identify the best policy up to accuracy $\varepsilon$ with high probability.
no code implementations • 21 Jun 2020 • Charvi Rastogi, Sivaraman Balakrishnan, Nihar B. Shah, Aarti Singh
We also provide testing algorithms and associated sample complexity bounds for the problem of two-sample testing with partial (or total) ranking data. Furthermore, we empirically evaluate our results via extensive simulations as well as two real-world datasets consisting of pairwise comparisons.
1 code implementation • 17 Aug 2020 • Nadine Chang, Jayanth Koushik, Aarti Singh, Martial Hebert, Yu-Xiong Wang, Michael J. Tarr
Methods in long-tail learning focus on improving performance for data-poor (rare) classes; however, performance for such classes remains much lower than performance for more data-rich (frequent) classes.
no code implementations • 8 Oct 2020 • Ivan Stelmakh, Nihar B. Shah, Aarti Singh
We consider the issue of strategic behaviour in various peer-assessment tasks, including peer grading of exams or homeworks and peer review in hiring or promotions.
no code implementations • 30 Nov 2020 • Ivan Stelmakh, Charvi Rastogi, Nihar B. Shah, Aarti Singh, Hal Daumé III
Peer review is the backbone of academia and humans constitute a cornerstone of this process, being responsible for reviewing papers and making the final acceptance/rejection decisions.
no code implementations • 30 Nov 2020 • Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III
Conference peer review constitutes a human-computation process whose importance cannot be overstated: not only does it identify the best submissions for acceptance, but, ultimately, it impacts the future of the whole research area by promoting some ideas and restraining others.
no code implementations • 30 Nov 2020 • Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III
Modern machine learning and computer science conferences are experiencing a surge in the number of submissions that challenges the quality of peer review as the number of competent reviewers is growing at a much slower rate.
no code implementations • 11 Dec 2020 • Yusha Liu, Yining Wang, Aarti Singh
We also study adaptation to unknown function smoothness over a continuous scale of Hölder spaces indexed by $\alpha$, with a bandit model selection approach applied with our proposed two-layer algorithms.
no code implementations • 25 Sep 2019 • Yuexin Wu, Yichong Xu, Aarti Singh, Artur Dubrawski, Yiming Yang
Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data.
no code implementations • 8 Dec 2021 • Ojash Neopane, Aaditya Ramdas, Aarti Singh
We consider a variant of the best arm identification (BAI) problem in multi-armed bandits (MAB) in which there are two sets of arms (source and target), and the objective is to determine the best target arm while only pulling source arms.
no code implementations • 24 Apr 2022 • Dhruv Malik, Yuanzhi Li, Aarti Singh
Policy regret is a well established notion of measuring the performance of an online learning algorithm against an adaptive adversary.
1 code implementation • 1 Mar 2023 • Anirudh Vemula, Yuda Song, Aarti Singh, J. Andrew Bagnell, Sanjiban Choudhury
We propose a novel approach to addressing two fundamental challenges in Model-based Reinforcement Learning (MBRL): the computational expense of repeatedly finding a good policy in the learned model, and the objective mismatch between model fitting and policy computation.
no code implementations • 23 Mar 2023 • Vaibhav Jindal, Albert Liang, Aarti Singh, Shirley Ho, Drew Jamieson
Finding the initial conditions that led to the current state of the universe is challenging because it involves searching over an intractable input space of initial conditions, along with modeling their evolution via tools such as N-body simulations which are computationally expensive.
no code implementations • 26 Apr 2023 • Yusha Liu, Aarti Singh
In continuum-armed bandit problems where the underlying function resides in a reproducing kernel Hilbert space (RKHS), namely kernelised bandit problems, an important open question remains: how well can learning algorithms adapt when the regularity of the associated kernel function is unknown?
no code implementations • 4 May 2023 • Dhruv Malik, Conor Igoe, Yuanzhi Li, Aarti Singh
Motivated by this, a significant line of work has formalized settings where an action's loss is a function of the number of times that action was recently played in the prior $m$ timesteps, where $m$ corresponds to a bound on human memory capacity.
no code implementations • 4 Jun 2023 • Shivani Chiranjeevi, Mojdeh Sadaati, Zi K. Deng, Jayanth Koushik, Talukder Z. Jubery, Daren Mueller, Matthew E. O'Neal, Nirav Merchant, Aarti Singh, Asheesh K. Singh, Soumik Sarkar, Arti Singh, Baskar Ganapathysubramanian
InsectNet can guide citizen science data collection, especially for invasive species where early detection is crucial.
no code implementations • 9 Jun 2023 • Eric Luxenberg, Dhruv Malik, Yuanzhi Li, Aarti Singh, Stephen Boyd
We consider robust empirical risk minimization (ERM), where model parameters are chosen to minimize the worst-case empirical loss when each data point varies over a given convex uncertainty set.
no code implementations • 28 Aug 2023 • Jennifer Hsia, Danish Pruthi, Aarti Singh, Zachary C. Lipton
First, we show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs.
no code implementations • 23 Mar 2024 • Aakash Lahoti, Stefani Karp, Ezra Winston, Aarti Singh, Yuanzhi Li
Vision tasks are characterized by the properties of locality and translation invariance.