
no code implementations • 8 Jul 2022 • Alejandro Parada-Mayorga, Zhiyang Wang, Fernando Gama, Alejandro Ribeiro

We also conclude that in Agg-GNNs the selectivity of the mapping operators is tied to the properties of the filters only in the first layer of the CNN stage.

1 code implementation • 5 Jul 2022 • Navid Naderializadeh, Mark Eisen, Alejandro Ribeiro

We consider resource management problems in multi-user wireless networks, which can be cast as optimizing a network-wide utility function, subject to constraints on the long-term average performance of users across the network.

1 code implementation • 2 Jun 2022 • Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani

In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.

1 code implementation • 31 May 2022 • Saurabh Sihag, Gonzalo Mateos, Corey McMillan, Alejandro Ribeiro

Moreover, our experiments on multi-resolution datasets also demonstrate that VNNs are amenable to transferability of performance over covariance matrices of different dimensions; a feature that is infeasible for PCA-based approaches.

no code implementations • 19 May 2022 • Max Wasserman, Saurabh Sihag, Gonzalo Mateos, Alejandro Ribeiro

Machine learning frameworks such as graph neural networks typically rely on a given, fixed graph to exploit relational inductive biases and thus effectively learn from network data.

no code implementations • 19 May 2022 • Charilaos I. Kanatsoulis, Alejandro Ribeiro

Graph Neural Networks (GNNs) are powerful convolutional architectures that have shown remarkable performance in various node-level and graph-level tasks.

1 code implementation • 7 Mar 2022 • Navid Naderializadeh, Mark Eisen, Alejandro Ribeiro

We consider the problems of downlink user selection and power control in wireless networks, comprising multiple transmitters and receivers communicating with each other over a shared wireless medium.

1 code implementation • 8 Feb 2022 • Juan Elenter, Navid Naderializadeh, Alejandro Ribeiro

Considering a primal-dual approach, we optimize the primal variables, corresponding to the model parameters, as well as the dual variables, corresponding to the constraints.
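A minimal sketch of the primal-dual iteration on a toy problem (the objective and constraint below are invented for the illustration, not the paper's learning problem): gradient descent on the Lagrangian in the primal variable, projected gradient ascent in the dual.

```python
import numpy as np

# Toy constrained problem: min (theta - 2)^2  subject to  theta - 1 <= 0.
f_grad = lambda th: 2.0 * (th - 2.0)     # objective gradient
g = lambda th: th - 1.0                  # constraint function, feasible when g <= 0
g_grad = 1.0                             # constraint gradient

theta, lam = 0.0, 0.0                    # primal (model) and dual variables
eta = 0.05
for _ in range(2000):
    theta -= eta * (f_grad(theta) + lam * g_grad)   # primal descent on the Lagrangian
    lam = max(0.0, lam + eta * g(theta))            # dual ascent, projected to lam >= 0
```

The iterates settle at the saddle point theta = 1, lam = 2: the dual variable grows exactly enough to hold the primal variable on the constraint boundary.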

no code implementations • 24 Jan 2022 • Vinicius Lima, Mark Eisen, Konstantinos Gatsis, Alejandro Ribeiro

As the number of learnable parameters in a neural network grows with the size of the input signal, deep reinforcement learning may fail to scale, limiting the immediate generalization of such scheduling and resource allocation policies to large-scale systems.

no code implementations • 14 Dec 2021 • Daniel Mox, Vijay Kumar, Alejandro Ribeiro

In this letter we propose a data-driven approach to optimizing the algebraic connectivity of a team of robots.

no code implementations • 9 Dec 2021 • Luana Ruiz, Luiz F. O. Chamon, Alejandro Ribeiro

In this paper, we study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.

no code implementations • NeurIPS 2021 • Alexander Robey, Luiz F. O. Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro

In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions, which we characterize completely.

no code implementations • 10 Oct 2021 • Zhiyang Wang, Luana Ruiz, Mark Eisen, Alejandro Ribeiro

We consider the problem of resource allocation in large scale wireless networks.

no code implementations • 10 Oct 2021 • Zhiyang Wang, Luana Ruiz, Alejandro Ribeiro

Hence, in this paper, we analyze the stability properties of convolutional neural networks on manifolds to understand the stability of GNNs on large graphs.

no code implementations • 7 Oct 2021 • Juan Cervino, Luana Ruiz, Alejandro Ribeiro

Graph Neural Networks (GNN) rely on graph convolutions to learn features from network data.

no code implementations • ICLR 2022 • Samar Hadou, Charilaos I. Kanatsoulis, Alejandro Ribeiro

We introduce a generic definition of convolution operators that mimic the diffusion process of signals over their underlying support.

no code implementations • ICLR 2022 • Zebang Shen, Juan Cervino, Hamed Hassani, Alejandro Ribeiro

Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets.

no code implementations • 23 Aug 2021 • Alejandro Parada-Mayorga, Landon Butler, Alejandro Ribeiro

In this paper we provide stability results for algebraic neural networks (AlgNNs) based on non-commutative algebras.

no code implementations • 19 Jul 2021 • Zhan Gao, Fernando Gama, Alejandro Ribeiro

At training time, the joint wide and deep architecture learns nonlinear representations from data.

no code implementations • 3 Jul 2021 • Zhiyang Wang, Mark Eisen, Alejandro Ribeiro

We consider the broad class of decentralized optimal resource allocation problems in wireless networks, which can be formulated as constrained statistical learning problems with a localized information structure.

1 code implementation • 24 Jun 2021 • Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Wenqing Zheng, Zhangyang Wang, Alejandro Ribeiro, Brian M. Sadler

Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning, as well as swarm-level communication, local information aggregation and agent action inference, respectively.

no code implementations • 19 Jun 2021 • Zhan Gao, Elvin Isufi, Alejandro Ribeiro

In particular, it proves the expected output difference between the GCNN over random perturbed graphs and the GCNN over the nominal graph is upper bounded by a factor that is linear in the link loss probability.

no code implementations • 7 Jun 2021 • Juan Cervino, Luana Ruiz, Alejandro Ribeiro

Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful feature representations from network data.
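The graph convolution these works rely on admits a compact sketch: a polynomial graph filter y = Σ_k h_k S^k x, where S is a graph shift operator (e.g. the adjacency matrix or Laplacian) and the taps h_k are shared by every node. The snippet below is a minimal illustration on made-up data, not code from the paper.

```python
import numpy as np

def graph_filter(S, x, h):
    """Polynomial graph filter y = sum_k h[k] * S^k @ x.

    S : (n, n) graph shift operator (e.g. adjacency or Laplacian).
    x : (n,)   graph signal, one value per node.
    h : (K+1,) filter taps, shared across all nodes.
    """
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)          # z holds S^k x, starting at k = 0
    for hk in h:
        y += hk * z              # accumulate h_k S^k x
        z = S @ z                # shift the signal once more along the graph
    return y

# Tiny example: a 3-node path graph and an impulse at node 0.
S = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0])
y = graph_filter(S, x, h=np.array([0.5, 0.25]))
```

Because each application of S only mixes neighboring nodes, a filter with K + 1 taps is a local operation over K-hop neighborhoods, which is what permits distributed implementations.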

no code implementations • 7 Jun 2021 • Zhiyang Wang, Luana Ruiz, Alejandro Ribeiro

We then define two frequency dependent manifold filters that split the infinite dimensional spectrum of the LB operator in finite partitions, and prove that these filters are stable to absolute and relative perturbations of the LB operator respectively.

no code implementations • 5 Jun 2021 • Zhan Gao, Subhrajit Bhattacharya, Leiming Zhang, Rick S. Blum, Alejandro Ribeiro, Brian M. Sadler

Graph neural networks (GNNs) are processing architectures that exploit graph structural information to model representations from network data.

1 code implementation • 18 May 2021 • Lifeng Zhou, Vishnu D. Sharma, QingBiao Li, Amanda Prorok, Alejandro Ribeiro, Vijay Kumar

We demonstrate the performance of our GNN-based learning approach in a scenario of active target coverage with large networks of robots.

no code implementations • 26 Mar 2021 • Ekaterina Tolstaya, Ethan Stump, Alec Koppel, Alejandro Ribeiro

We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space.

1 code implementation • 8 Mar 2021 • Ekaterina Tolstaya, Landon Butler, Daniel Mox, James Paulos, Vijay Kumar, Alejandro Ribeiro

To overcome this challenge, we propose a task-agnostic, decentralized, low-latency method for data distribution in ad-hoc networks using Graph Neural Networks (GNN).

no code implementations • 8 Mar 2021 • Luiz F. O. Chamon, Santiago Paternain, Miguel Calvo-Fullana, Alejandro Ribeiro

In this paper, we overcome this issue by learning in the empirical dual domain, where constrained statistical learning problems become unconstrained and deterministic.

no code implementations • 3 Mar 2021 • Zhiyang Wang, Luana Ruiz, Alejandro Ribeiro

We further construct a manifold neural network architecture with these filters.

no code implementations • 23 Feb 2021 • Miguel Calvo-Fullana, Santiago Paternain, Luiz F. O. Chamon, Alejandro Ribeiro

Constrained reinforcement learning involves multiple rewards that must individually accumulate to given thresholds.

no code implementations • 11 Feb 2021 • Arbaaz Khan, Vijay Kumar, Alejandro Ribeiro

We are able to demonstrate the scalability of our methods for a large number of robots by employing a graph neural network (GNN) to parameterize policies for the robots.

no code implementations • 11 Feb 2021 • Clark Zhang, Santiago Paternain, Alejandro Ribeiro

This paper introduces the constrained Sufficiently Accurate model learning approach, provides examples of such problems, and presents a theorem on how close some approximate solutions can be.

no code implementations • 29 Dec 2020 • Fernando Gama, QingBiao Li, Ekaterina Tolstaya, Amanda Prorok, Alejandro Ribeiro

Dynamical systems consisting of a set of autonomous agents face the challenge of having to accomplish a global task, relying only on local information.

no code implementations • 24 Nov 2020 • Luiz F. O. Chamon, Santiago Paternain, Alejandro Ribeiro

Prediction credibility measures, in the form of confidence intervals or probability distributions, are fundamental in statistics and machine learning to characterize model robustness, detect out-of-distribution samples (outliers), and protect against adversarial attacks.

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
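For context, the Sinkhorn divergence at the heart of SiNG is built on entropically regularized optimal transport, whose standard solver is the Sinkhorn-Knopp fixed-point iteration. The sketch below is the generic textbook routine (function name, data, and the regularization eps are illustrative, not the paper's implementation):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic optimal transport via Sinkhorn-Knopp iterations.

    a, b : source / target probability vectors.
    C    : cost matrix, C[i, j] = cost of moving mass from i to j.
    Returns the transport plan P with row sums a and column sums b.
    """
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)           # rescale to match column marginals
        u = a / (K @ v)             # rescale to match row marginals
    return u[:, None] * K * v[None, :]

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = sinkhorn(a, b, C)
```

Each iteration is just two matrix-vector products, which is why Sinkhorn-type quantities are practical inside larger optimization loops.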

no code implementations • 5 Nov 2020 • Zhiyang Wang, Mark Eisen, Alejandro Ribeiro

We capture the asynchrony by modeling the activation pattern as a characteristic of each node and train a policy-based resource allocation method.

no code implementations • 27 Oct 2020 • Luana Ruiz, Fernando Gama, Alejandro Ribeiro, Elvin Isufi

In this work, we approach GCNNs from a state-space perspective revealing that the graph convolutional module is a minimalistic linear state-space model, in which the state update matrix is the graph shift operator.

no code implementations • 24 Oct 2020 • Juan Cervino, Juan Andres Bazerque, Miguel Calvo-Fullana, Alejandro Ribeiro

In this paper we consider a problem known as multi-task learning, consisting of fitting a set of classifier or regression functions intended for solving different tasks.

no code implementations • 23 Oct 2020 • Luana Ruiz, Zhiyang Wang, Alejandro Ribeiro

We then extend this analysis by interpreting the graphon neural network as a generating model for GNNs on deterministic and stochastic graphs instantiated from the original and perturbed graphons.

no code implementations • 22 Oct 2020 • Alejandro Parada-Mayorga, Hans Riess, Alejandro Ribeiro, Robert Ghrist

In this paper we state the basics for a signal processing framework on quiver representations.

no code implementations • 22 Oct 2020 • Alejandro Parada-Mayorga, Alejandro Ribeiro

Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated to an algebraic signal model, and information is mapped between layers by means of a nonlinearity function.

no code implementations • 17 Oct 2020 • Samuel Pfrommer, Fernando Gama, Alejandro Ribeiro

We define a notion of discriminability tied to the stability of the architecture, show that GNNs are at least as discriminative as linear graph filter banks, and characterize the signals that cannot be discriminated by either.

no code implementations • 16 Oct 2020 • Santiago Paternain, Juan Andres Bazerque, Alejandro Ribeiro

To that end we compute unbiased stochastic gradients of the value function which we use as ascent directions to update the policy.

no code implementations • 12 Oct 2020 • Zhan Gao, Fernando Gama, Alejandro Ribeiro

Spherical convolutional neural networks (Spherical CNNs) learn nonlinear representations from 3D data by exploiting the data structure and have shown promising performance in shape analysis, object classification, and planning among others.

1 code implementation • 14 Sep 2020 • Bianca Iancu, Luana Ruiz, Alejandro Ribeiro, Elvin Isufi

Activation functions are crucial in graph neural networks (GNNs) as they allow defining a nonlinear family of functions to capture the relationship between the input graph data and their representations.

no code implementations • 8 Sep 2020 • Maria Peifer, Alejandro Ribeiro

Instead, each agent must form a local model and decide what information is fundamental to the learning problem, which will be sent to a central unit.

Signal Processing

no code implementations • 3 Sep 2020 • Alejandro Parada-Mayorga, Alejandro Ribeiro

An AlgNN is a stacked layered information processing structure where each layer is formed by an algebra, a vector space, and a homomorphism between the algebra and the space of endomorphisms of the vector space.

no code implementations • 3 Sep 2020 • Vinicius Lima, Mark Eisen, Konstantinos Gatsis, Alejandro Ribeiro

Wireless control systems replace traditional wired communication with wireless networks to exchange information between actuators, plants and sensors in a control system.

no code implementations • 4 Aug 2020 • Luana Ruiz, Fernando Gama, Alejandro Ribeiro

They are presented here as generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters instead of banks of classical convolutional filters.

no code implementations • 27 Jul 2020 • Zhan Gao, Mark Eisen, Alejandro Ribeiro

This paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.

no code implementations • 2 Jul 2020 • Zhan Gao, Alec Koppel, Alejandro Ribeiro

Stochastic gradient descent is a canonical tool for addressing stochastic optimization problems, and forms the bedrock of modern machine learning and statistics.
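As a refresher on this baseline, plain SGD samples one data point per step and follows its gradient with a diminishing step size. The sketch below runs it on a synthetic least-squares problem (data and step schedule invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic least-squares problem: y = X w* + noise.
w_star = np.array([1.0, -2.0])
X = rng.normal(size=(500, 2))
y = X @ w_star + 0.01 * rng.normal(size=500)

w = np.zeros(2)
eta = 0.05
for t in range(5000):
    i = rng.integers(len(y))                 # sample one data point
    grad = (X[i] @ w - y[i]) * X[i]          # stochastic gradient of the squared loss
    w -= eta / (1 + t / 1000) * grad         # diminishing step size
```

With the decaying step size the iterates average out the sampling noise and approach the least-squares solution.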

no code implementations • 26 Jun 2020 • Zhan Gao, Mark Eisen, Alejandro Ribeiro

This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.

no code implementations • 12 Jun 2020 • Harshat Kumar, Dionysios S. Kalogerias, George J. Pappas, Alejandro Ribeiro

Deterministic Policy Gradient (DPG) removes a level of randomness from standard randomized-action Policy Gradient (PG), and demonstrates substantial empirical success for tackling complex dynamic problems involving Markov decision processes.

no code implementations • 11 Jun 2020 • Arbaaz Khan, Alejandro Ribeiro, Vijay Kumar, Anthony G. Francis

This paper investigates the feasibility of using Graph Neural Networks (GNNs) for classical motion planning problems.

no code implementations • 11 Jun 2020 • Zhan Gao, Fernando Gama, Alejandro Ribeiro

At testing time, the deep part (nonlinear) is left unchanged, while the wide part is retrained online, leading to a convex problem.

no code implementations • NeurIPS 2020 • Luiz. F. O. Chamon, Alejandro Ribeiro

To overcome this issue, we prove that under mild conditions the empirical dual problem of constrained learning is also a PAC constrained learner that now leads to a practical constrained learning algorithm based solely on solving unconstrained problems.

no code implementations • L4DC 2020 • Luiz F.O. Chamon, Santiago Paternain, Alejandro Ribeiro

In recent years, considerable work has been done to tackle the issue of designing control laws based on observations to allow unknown dynamical systems to perform pre-specified tasks.

no code implementations • NeurIPS 2020 • Luana Ruiz, Luiz. F. O. Chamon, Alejandro Ribeiro

These graph convolutions combine information from adjacent nodes using coefficients that are shared across all nodes.

no code implementations • 4 Jun 2020 • Zhan Gao, Elvin Isufi, Alejandro Ribeiro

Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning among others.

no code implementations • 23 Mar 2020 • Fernando Gama, Ekaterina Tolstaya, Alejandro Ribeiro

Dynamical systems comprised of autonomous agents arise in many relevant problems such as multi-agent robotics, smart grids, or smart cities.

no code implementations • 10 Mar 2020 • Luana Ruiz, Luiz F. O. Chamon, Alejandro Ribeiro

Graphons are infinite-dimensional objects that represent the limit of convergent sequences of graphs as their number of nodes goes to infinity.
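A graphon W: [0,1]² → [0,1] generates finite graphs of any size: draw latent node positions uniformly at random and connect each pair independently with probability W(u_i, u_j). A minimal sketch with a made-up product graphon:

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample an n-node undirected graph from graphon W: [0,1]^2 -> [0,1].

    Each node i gets a latent position u_i ~ Uniform[0,1]; edge (i, j)
    appears independently with probability W(u_i, u_j).
    """
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])            # edge-probability matrix
    A = (rng.uniform(size=(n, n)) < P).astype(int)
    A = np.triu(A, 1)                        # keep upper triangle, drop self-loops
    return A + A.T                           # symmetrize

rng = np.random.default_rng(0)
W = lambda x, y: x * y                       # a simple product graphon
A = sample_graph(W, 200, rng)
```

Graphs sampled this way from the same graphon share limiting spectral properties, which is what underlies transferability arguments across graph sizes.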

1 code implementation • 8 Mar 2020 • Fernando Gama, Elvin Isufi, Geert Leus, Alejandro Ribeiro

We also introduce GNN extensions using edge-varying and autoregressive moving average graph filters and discuss their properties.

no code implementations • 3 Mar 2020 • Alejandro Parada-Mayorga, Luana Ruiz, Alejandro Ribeiro

In this work, we propose a new strategy for pooling and sampling on GNNs using graphons which preserves the spectral properties of the graph.

no code implementations • 17 Feb 2020 • Navid Naderializadeh, Mark Eisen, Alejandro Ribeiro

We consider the problem of downlink power control in wireless networks, consisting of multiple transmitter-receiver pairs communicating with each other over a single shared wireless medium.

no code implementations • 12 Feb 2020 • Luiz. F. O. Chamon, Santiago Paternain, Miguel Calvo-Fullana, Alejandro Ribeiro

This paper is concerned with the study of constrained statistical learning problems, the unconstrained versions of which are at the core of virtually all of modern information processing.

no code implementations • 6 Feb 2020 • Ting-Kuei Hu, Fernando Gama, Tianlong Chen, Zhangyang Wang, Alejandro Ribeiro, Brian M. Sadler

More specifically, we consider that each robot has access to a visual perception of the immediate surroundings, and communication capabilities to transmit and receive messages from other neighboring robots.

1 code implementation • 3 Feb 2020 • Luana Ruiz, Fernando Gama, Alejandro Ribeiro

Graph processes exhibit a temporal structure determined by the sequence index and a spatial structure determined by the graph support.

1 code implementation • 21 Jan 2020 • Elvin Isufi, Fernando Gama, Alejandro Ribeiro

This is a general linear and local operation that a node can perform and encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).

1 code implementation • 12 Dec 2019 • Qing-Biao Li, Fernando Gama, Alejandro Ribeiro, Amanda Prorok

We train the model to imitate an expert algorithm, and use the resulting model online in decentralized planning involving only local communication and local observations.

no code implementations • 6 Dec 2019 • Dionysios S. Kalogerias, Luiz. F. O. Chamon, George J. Pappas, Alejandro Ribeiro

Despite the simplicity and intuitive interpretation of Minimum Mean Squared Error (MMSE) estimators, their effectiveness in certain scenarios is questionable.

no code implementations • 20 Nov 2019 • Santiago Paternain, Miguel Calvo-Fullana, Luiz. F. O. Chamon, Alejandro Ribeiro

The advantages of the proposed relaxation are threefold.

no code implementations • 10 Nov 2019 • Dionysios S. Kalogerias, Mark Eisen, George J. Pappas, Alejandro Ribeiro

Upon further assuming the use of near-universal policy parameterizations, we also develop explicit bounds on the gap between optimal values of initial, infinite dimensional resource allocation problems, and dual values of their parameterized smoothed surrogates.

no code implementations • NeurIPS 2019 • Santiago Paternain, Luiz. F. O. Chamon, Miguel Calvo-Fullana, Alejandro Ribeiro

The latter is generally addressed by formulating the conflicting requirements as a constrained RL problem, which is then solved using primal-dual methods.

no code implementations • 21 Oct 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro

In this paper, we are set to study the effect that a change in the underlying graph topology that supports the signal has on the output of a GNN.

no code implementations • 18 Oct 2019 • Harshat Kumar, Alec Koppel, Alejandro Ribeiro

Actor-critic algorithms combine the merits of both approaches by alternating between steps to estimate the value function and policy gradient updates.
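The alternation can be illustrated on a toy two-armed bandit (rewards, step sizes, and setup are all hypothetical): the critic tracks a scalar value baseline, and the actor takes softmax policy gradient steps weighted by the resulting advantage estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = {0: 0.2, 1: 0.8}       # expected reward of each action (made up)

theta = np.zeros(2)              # actor: softmax policy parameters
V = 0.0                          # critic: value estimate, used as a baseline
alpha_actor, alpha_critic = 0.1, 0.1
for _ in range(3000):
    p = np.exp(theta) / np.exp(theta).sum()       # softmax policy
    a = rng.choice(2, p=p)
    r = rewards[a] + 0.1 * rng.normal()           # noisy reward
    delta = r - V                                  # advantage estimate
    V += alpha_critic * delta                      # critic step: update the value
    grad_logp = -p
    grad_logp[a] += 1.0                            # gradient of log pi(a)
    theta += alpha_actor * delta * grad_logp       # actor step: policy gradient
```

After training, the policy concentrates on the higher-reward action; the baseline reduces the variance of the actor updates without biasing them.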

no code implementations • 4 Sep 2019 • Mark Eisen, Alejandro Ribeiro

We consider the problem of optimally allocating resources across a set of transmitters and receivers in a wireless network.

no code implementations • 21 Jun 2019 • Zhan Gao, Mark Eisen, Alejandro Ribeiro

Radio on Free Space Optics (RoFSO), as a universal platform for heterogeneous wireless services, is able to transmit multiple radio frequency signals at high rates in free space optical networks.

1 code implementation • NeurIPS 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro

In this work, we extend scattering transforms to network data by using multiresolution graph wavelets, whose computation can be obtained by means of graph convolutions.

no code implementations • 11 May 2019 • Fernando Gama, Joan Bruna, Alejandro Ribeiro

Graph neural networks (GNNs) have emerged as a powerful tool for nonlinear processing of graph signals, exhibiting success in recommender systems, power outage prediction, and motion planning, among others.

no code implementations • 7 May 2019 • Maria Peifer, Luiz. F. O. Chamon, Santiago Paternain, Alejandro Ribeiro

To address the complexity issues, we then write the function estimation problem as a sparse functional program that explicitly minimizes the support of the representation leading to low complexity solutions.

no code implementations • 29 Mar 2019 • Luana Ruiz, Fernando Gama, Antonio G. Marques, Alejandro Ribeiro

Graph neural networks (GNNs) are information processing architectures tailored to these graph signals and made of stacked layers that compose graph convolutional filters with nonlinear activation functions.

1 code implementation • 25 Mar 2019 • Ekaterina Tolstaya, Fernando Gama, James Paulos, George Pappas, Vijay Kumar, Alejandro Ribeiro

We consider the problem of finding distributed controllers for large networks of mobile robots with interacting dynamics and sparsely available communications.

Robotics

1 code implementation • 5 Mar 2019 • Luana Ruiz, Fernando Gama, Alejandro Ribeiro

Graph processes model a number of important problems such as identifying the epicenter of an earthquake or predicting weather.

Ranked #11 on Node Classification on CiteSeer (0.5%)

no code implementations • ICLR 2020 • Zebang Shen, Pan Zhou, Cong Fang, Alejandro Ribeiro

We target the problem of finding a local minimum in non-convex finite-sum minimization.

no code implementations • 4 Mar 2019 • Elvin Isufi, Fernando Gama, Alejandro Ribeiro

This paper reviews graph convolutional neural networks (GCNNs) through the lens of edge-variant graph filters.

no code implementations • 19 Feb 2019 • Clark Zhang, Arbaaz Khan, Santiago Paternain, Alejandro Ribeiro

In this paper, we investigate a method to regularize model learning techniques to provide better error characteristics for traditional control and planning algorithms.

no code implementations • 1 Nov 2018 • Luiz. F. O. Chamon, Yonina C. Eldar, Alejandro Ribeiro

Even if they are, recovering sparse solutions using convex relaxations requires assumptions that may be hard to meet in practice.

no code implementations • 29 Oct 2018 • Luana Ruiz, Fernando Gama, Antonio G. Marques, Alejandro Ribeiro

Graph neural networks (GNNs) have been shown to replicate convolutional neural networks' (CNNs) superior performance in many problems involving graphs.

no code implementations • 26 Oct 2018 • Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč

In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which sample size is gradually increasing to quickly obtain a solution whose empirical loss is under satisfactory statistical accuracy.

no code implementations • 27 Sep 2018 • Arbaaz Khan, Clark Zhang, Vijay Kumar, Alejandro Ribeiro

A deep reinforcement learning solution is developed for a collaborative multiagent system.

no code implementations • 21 Jul 2018 • Mark Eisen, Clark Zhang, Luiz. F. O. Chamon, Daniel D. Lee, Alejandro Ribeiro

This paper considers the design of optimal resource allocation policies in wireless communication systems which are generically modeled as a functional optimization problem with stochastic constraints.

no code implementations • ICLR 2019 • Fernando Gama, Alejandro Ribeiro, Joan Bruna

Stability is a key aspect of data analysis.

no code implementations • 22 May 2018 • Arbaaz Khan, Clark Zhang, Daniel D. Lee, Vijay Kumar, Alejandro Ribeiro

When the number of agents increases, the dimensionality of the input and control spaces increase as well, and these methods do not scale well.

Distributed Optimization • Multi-agent Reinforcement Learning

no code implementations • 1 May 2018 • Fernando Gama, Antonio G. Marques, Geert Leus, Alejandro Ribeiro

Multinode aggregation GNNs are consistently the best performing GNN architecture.

1 code implementation • 19 Apr 2018 • Alec Koppel, Ekaterina Tolstaya, Ethan Stump, Alejandro Ribeiro

We consider Markov Decision Problems defined over continuous state and action spaces, where an autonomous agent seeks to learn a map from its states to actions so as to maximize its long-term discounted accumulation of rewards.

no code implementations • 6 Mar 2018 • Fernando Gama, Antonio G. Marques, Alejandro Ribeiro, Geert Leus

Superior performance and ease of implementation have fostered the adoption of Convolutional Neural Networks (CNNs) for a wide array of inference and reconstruction tasks.

no code implementations • NeurIPS 2017 • Luiz. F. O. Chamon, Alejandro Ribeiro

This work provides performance guarantees for the greedy solution of experimental design problems.

no code implementations • 27 Oct 2017 • Fernando Gama, Geert Leus, Antonio G. Marques, Alejandro Ribeiro

Convolutional neural networks (CNNs) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks.

no code implementations • 11 Oct 2017 • Alec Koppel, Santiago Paternain, Cedric Richard, Alejandro Ribeiro

That is, we establish that with constant step-size selections agents' functions converge to a neighborhood of the globally optimal one while satisfying the consensus constraints as the penalty parameter is increased.

no code implementations • NeurIPS 2017 • Aryan Mokhtari, Alejandro Ribeiro

Theoretical analyses show that the use of adaptive sample size methods reduces the overall computational cost of achieving the statistical accuracy of the whole dataset for a broad range of deterministic and stochastic first-order methods.
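The adaptive sample size idea admits a short sketch: solve the ERM problem on a small subsample only roughly, then double the subsample and warm-start from the previous solution, so most iterations run on cheap small problems. The least-squares data and schedule below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic regression data: y = X w* + noise.
w_star = np.array([2.0, -1.0])
X = rng.normal(size=(4096, 2))
y = X @ w_star + 0.1 * rng.normal(size=4096)

w, m = np.zeros(2), 64
while m <= len(y):
    Xm, ym = X[:m], y[:m]                  # current subsample
    for _ in range(20):                    # a few gradient steps suffice because
        grad = Xm.T @ (Xm @ w - ym) / m    # the previous solution warm-starts
        w -= 0.1 * grad                    # the doubled problem
    m *= 2                                 # grow the sample size geometrically
```

The final iterate matches the full-sample ERM solution to within its statistical accuracy, while most of the gradient evaluations were taken on small subsamples.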

no code implementations • 22 May 2017 • Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro

In this paper, we propose a novel adaptive sample size second-order method, which reduces the cost of computing the Hessian by solving a sequence of ERM problems corresponding to a subset of samples and lowers the cost of computing the Hessian inverse using a truncated eigenvalue decomposition.

no code implementations • 5 Apr 2017 • Luiz. F. O. Chamon, Alejandro Ribeiro

In contrast to traditional signal processing, the irregularity of the signal domain makes selecting a sampling set non-trivial and hard to analyze.

no code implementations • 2 Feb 2017 • Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro

This makes their computational cost per iteration independent of the number of objective functions $n$.

no code implementations • 13 Dec 2016 • Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro

Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require.

no code implementations • 1 Nov 2016 • Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro

We prove that not only the proposed DIAG method converges linearly to the optimal solution, but also its linear convergence factor justifies the advantage of incremental methods on GD.

no code implementations • 18 Oct 2016 • Mark Eisen, Santiago Segarra, Gabriel Egan, Alejandro Ribeiro

We first study the similarity of writing styles between Early English playwrights by comparing the profile WANs.

no code implementations • 7 Oct 2016 • Tianyi Chen, Aryan Mokhtari, Xin Wang, Alejandro Ribeiro, Georgios B. Giannakis

Existing approaches to resource allocation in today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements.

no code implementations • 21 Jul 2016 • Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra

This paper characterizes hierarchical clustering methods that abide by two previously introduced axioms -- thus, denominated admissible methods -- and proposes tractable algorithms for their implementation.

no code implementations • 21 Jul 2016 • Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra

This paper considers networks where relationships between nodes are represented by directed dissimilarities.

no code implementations • 21 Jul 2016 • Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra

We introduce two practical properties of hierarchical clustering methods for (possibly asymmetric) network data: excisiveness and linear scale preservation.

no code implementations • 17 Jun 2016 • Alec Koppel, Brian M. Sadler, Alejandro Ribeiro

To do so, we depart from the canonical decentralized optimization framework where agreement constraints are enforced, and instead formulate a problem where each agent minimizes a global objective while enforcing network proximity constraints.

Multiagent Systems • Systems and Control • Computation

no code implementations • 15 Jun 2016 • Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro

Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set.

no code implementations • NeurIPS 2016 • Aryan Mokhtari, Alejandro Ribeiro

We consider empirical risk minimization for large-scale datasets.

no code implementations • 3 May 2016 • Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro

We consider discriminative dictionary learning in a distributed online setting, where a network of agents aims to learn a common set of dictionary elements of a feature space and model parameters while sequentially receiving observations.

no code implementations • 23 Mar 2016 • Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro

The resulting dual D-BFGS method is a fully decentralized algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition.

no code implementations • 22 Mar 2016 • Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro

Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set.

no code implementations • 16 Mar 2016 • Aryan Mokhtari, Shahin Shahrampour, Ali Jadbabaie, Alejandro Ribeiro

In this paper, we address tracking of a time-varying parameter with unknown dynamics.

no code implementations • 13 Jun 2015 • Aryan Mokhtari, Alejandro Ribeiro

The decentralized double stochastic averaging gradient (DSA) algorithm is proposed as a solution alternative that relies on: (i) The use of local stochastic averaging gradients.

Optimization and Control

no code implementations • 6 Sep 2014 • Aryan Mokhtari, Alejandro Ribeiro

Global convergence of an online (stochastic) limited memory version of the Broyden-Fletcher- Goldfarb-Shanno (BFGS) quasi-Newton method for solving optimization problems with stochastic objectives that arise in large scale machine learning is established.
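The limited-memory machinery such methods build on is the classical L-BFGS two-loop recursion, which assembles a quasi-Newton direction from a short history of (step, gradient-change) pairs without forming any matrix. The sketch below is the deterministic textbook recursion on a toy quadratic, not the paper's online stochastic variant; an exact line search on the quadratic stands in for the stochastic step-size rules.

```python
import numpy as np
from collections import deque

def lbfgs_direction(grad, history):
    """L-BFGS two-loop recursion: build a quasi-Newton direction from the
    last few (s, y) = (step, gradient-change) pairs."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(history):           # first loop: newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho))
    if history:                              # scale by s.y / y.y (initial H0)
        s, y = history[-1]
        q *= (s @ y) / (y @ y)
    for (a, rho), (s, y) in zip(reversed(alphas), history):  # oldest first
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Toy deterministic quadratic f(w) = 0.5 w^T A w; an online variant
# would replace the exact gradient with a stochastic one.
A = np.diag([1.0, 2.0])
w = np.array([5.0, 5.0])
hist = deque(maxlen=5)
w_prev = g_prev = None
for _ in range(50):
    g = A @ w
    if np.linalg.norm(g) < 1e-10:
        break
    if g_prev is not None:
        hist.append((w - w_prev, g - g_prev))   # store the (s, y) pair
    w_prev, g_prev = w, g
    d = lbfgs_direction(g, list(hist))
    t = -(d @ g) / (d @ (A @ d))                # exact line search on the quadratic
    w = w + t * d
```

Only a handful of vector products per iteration are needed, which is what makes the limited-memory scheme attractive in large-scale settings.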

no code implementations • 17 Jun 2014 • Santiago Segarra, Mark Eisen, Alejandro Ribeiro

Attribution accuracy is observed to exceed that achieved by methods that rely on word frequencies alone.

no code implementations • 17 Apr 2014 • Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra

This paper introduces hierarchical quasi-clustering methods, a generalization of hierarchical clustering for asymmetric networks where the output structure preserves the asymmetry of the input data.

no code implementations • 20 Feb 2014 • Aryan Mokhtari, Alejandro Ribeiro

This paper adapts a recently developed regularized stochastic version of the Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the solution of support vector machine classification problems.

no code implementations • 29 Jan 2014 • Aryan Mokhtari, Alejandro Ribeiro

Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS.

no code implementations • 31 Jan 2013 • Gunnar Carlsson, Facundo Mémoli, Alejandro Ribeiro, Santiago Segarra

Our construction of hierarchical clustering methods is based on defining admissible methods to be those that abide by the axioms of value (nodes in a network with two nodes are clustered together at the maximum of the two dissimilarities between them) and transformation (when dissimilarities are reduced, the network may become more clustered but not less).
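Single linkage is a canonical method satisfying these axioms, and its output ultrametric has a closed form: the minimax path cost, i.e. the smallest resolution at which some path joins i and j with every hop at most that resolution. A short sketch on a made-up dissimilarity matrix:

```python
import numpy as np

def single_linkage_ultrametric(D):
    """Minimax path distances: U[i, j] is the smallest resolution at which
    i and j fall in the same single-linkage cluster (min over paths of the
    max dissimilarity along the path)."""
    U = D.astype(float).copy()
    n = len(U)
    for k in range(n):               # Floyd-Warshall in the (min, max) algebra
        U = np.minimum(U, np.maximum(U[:, k:k+1], U[k:k+1, :]))
    return U

D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])
U = single_linkage_ultrametric(D)
```

Here nodes 0 and 2 merge at resolution 2 rather than their direct dissimilarity 4, because the path through node 1 has hops of cost at most 2.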

Papers With Code is a free resource with all data licensed under CC-BY-SA.