no code implementations • 16 Oct 2024 • Makram Chahine, Alex Quach, Alaa Maalouf, Tsun-Hsuan Wang, Daniela Rus
End-to-end learning directly maps sensory inputs to actions, creating highly integrated and efficient policies for complex robotics tasks.
1 code implementation • 9 May 2024 • Shiva Sreeram, Tsun-Hsuan Wang, Alaa Maalouf, Guy Rosman, Sertac Karaman, Daniela Rus
We provide a sober look at the application of Multimodal Large Language Models (MLLMs) in autonomous driving, challenging common assumptions about their ability to interpret dynamic driving scenarios.
no code implementations • 26 Oct 2023 • Tsun-Hsuan Wang, Alaa Maalouf, Wei Xiao, Yutong Ban, Alexander Amini, Guy Rosman, Sertac Karaman, Daniela Rus
As autonomous driving technology matures, end-to-end methodologies have emerged as a leading strategy, promising seamless integration from perception to control via deep learning.
1 code implementation • 10 Aug 2023 • Alaa Maalouf, Ninad Jadhav, Krishna Murthy Jatavallabhula, Makram Chahine, Daniel M. Vogt, Robert J. Wood, Antonio Torralba, Daniela Rus
We demonstrate FAn on a real-world robotic system (a micro aerial vehicle) and report its ability to seamlessly follow the objects of interest in a real-time control loop.
no code implementations • 16 Jul 2023 • Murad Tukan, Alaa Maalouf, Margarita Osadchy
Deep learning has grown tremendously over recent years, yielding state-of-the-art results in various fields.
no code implementations • 23 May 2023 • Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus
Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations/guarantees of dataset distillation: specifically, what excess risk does distillation incur compared to the original dataset, and how large must distilled datasets be?
1 code implementation • 19 May 2023 • Alaa Maalouf, Murad Tukan, Vladimir Braverman, Daniela Rus
A coreset is a tiny weighted subset of an input set that closely approximates the loss of the full set with respect to a certain set of queries.
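For intuition, a minimal numpy sketch of the property a coreset is asked to satisfy, using squared-distance losses to random query centers; the uniform subsample and weights below are purely illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(10_000, 3))            # input set of n points in R^3

# Hypothetical coreset: a uniform subsample with equal weights (a real coreset
# would be built more carefully, e.g., via sensitivity sampling).
idx = rng.choice(len(P), size=200, replace=False)
C, w = P[idx], np.full(200, len(P) / 200)

def loss(points, weights, q):
    """Weighted sum of squared distances from the points to a query center q."""
    return np.sum(weights * np.sum((points - q) ** 2, axis=1))

# The coreset property: for every query q, the weighted coreset loss should be
# within a small multiplicative factor of the full-set loss.
for q in rng.normal(size=(5, 3)):
    full = loss(P, np.ones(len(P)), q)
    core = loss(C, w, q)
    print(f"loss ratio for this query: {core / full:.3f}")
```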
1 code implementation • 9 Mar 2023 • Murad Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman
In this paper, we introduce the first algorithm to construct coresets for \emph{RBFNNs}, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network and thus approximate any function defined by an \emph{RBFNN} on the larger input data.
1 code implementation • 14 Feb 2023 • Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, Joshua B. Tenenbaum, Celso Miguel de Melo, Madhava Krishna, Liam Paull, Florian Shkurti, Antonio Torralba
ConceptFusion leverages the open-set capabilities of today's foundation models pre-trained on internet-scale data to reason about concepts across modalities such as natural language, images, and audio.
1 code implementation • 21 Sep 2022 • Alaa Maalouf, Yotam Gurfinkel, Barak Diker, Oren Gal, Daniela Rus, Dan Feldman
We present the first system that runs real-time semantic segmentation via deep learning on a weak microcomputer such as the Raspberry Pi Zero v2 (priced at \$15) attached to a toy drone.
no code implementations • 18 Sep 2022 • Murad Tukan, Loay Mualem, Alaa Maalouf
Recently, coresets (provable data summarizations) have been leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error.
no code implementations • 8 Mar 2022 • Murad Tukan, Alaa Maalouf, Dan Feldman, Roi Poranne
While this approach is very simple, it can become costly when the obstacles are unknown, since samples hitting these obstacles are wasted.
no code implementations • 6 Mar 2022 • Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman
The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a sine of a single frequency, e.g., $\min_{c\in C}\mathrm{cost}(P, c)+\lambda(c)$, where $\mathrm{cost}(P, c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_ic)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.
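To make the notation concrete, here is a brute-force numpy evaluation of this objective over a finite candidate set $C$; the zero regularizer below is only a placeholder for $\lambda$.

```python
import numpy as np

N = 1_000
rng = np.random.default_rng(0)
P = rng.integers(0, N, size=500)             # the n points received so far

def cost(P, c):
    # cost(P, c) = sum_i sin^2((2*pi/N) * p_i * c)
    return np.sum(np.sin(2 * np.pi / N * P * c) ** 2)

lam = lambda c: 0.0                          # placeholder regularization lambda(c)
C = np.arange(1, N)                          # feasible set of candidate solutions

best = min(C, key=lambda c: cost(P, c) + lam(c))
print(best, cost(P, best))
```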
no code implementations • 4 Nov 2021 • Alaa Maalouf, Gilad Eini, Ben Mussay, Dan Feldman, Margarita Osadchy
Our approach offers a new definition of coreset, which is a natural relaxation of the standard definition and aims at approximating the \emph{average} loss of the original data over the queries.
no code implementations • 4 Nov 2021 • Alaa Maalouf, Ibrahim Jubran, Dan Feldman
The survey may help guide new researchers unfamiliar with the field, and introduce them to the very basic foundations of coresets, through a simple, yet fundamental, problem.
2 code implementations • NeurIPS 2021 • Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.
no code implementations • 10 Jan 2021 • Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman
A harder version is the \emph{registration problem}, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from $P$ to $Q$.
no code implementations • ICCV 2021 • Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman
A harder version is the registration problem, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from $P$ to $Q$. Algorithms such as the Iterative Closest Point (ICP) and its variants were suggested for these problems, but none yield a provable non-trivial approximation for the global optimum.
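For reference, a bare-bones ICP iteration (nearest-neighbour correspondence alternated with a Kabsch alignment step); as noted above, such heuristics come with no global guarantee, and this sketch is only a reminder of the baseline, not the paper's method.

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimizing ||P @ R.T + t - Q||_F
    for already-matched point sets P, Q of shape (n, d)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (P.shape[1] - 1) + [sign])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(P, Q, iters=20):
    """Iterative Closest Point: alternate correspondence and alignment."""
    R, t = np.eye(P.shape[1]), np.zeros(P.shape[1])
    for _ in range(iters):
        moved = P @ R.T + t
        # nearest neighbour in Q for every transformed point of P (brute force)
        nn = np.argmin(((moved[:, None, :] - Q[None, :, :]) ** 2).sum(-1), axis=1)
        R, t = kabsch(P, Q[nn])
    return R, t

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = icp(P, Q)
print(np.linalg.norm(P @ R.T + t - Q))       # residual after alignment
```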
no code implementations • ICLR 2021 • Alaa Maalouf, Harry Lang, Daniela Rus, Dan Feldman
Based on this approach, we provide a novel architecture that replaces the original embedding layer with a set of $k$ small layers that operate in parallel and are then recombined with a single fully-connected layer.
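A rough numpy sketch of the described layer structure; the sizes (vocab, d_model, k, d_small) are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d_model, k, d_small = 10_000, 512, 8, 32    # illustrative sizes

# k small embedding tables operating in parallel...
small_tables = [rng.normal(scale=0.02, size=(vocab, d_small)) for _ in range(k)]
# ...recombined by a single fully-connected layer back to the model dimension.
W_fc = rng.normal(scale=0.02, size=(k * d_small, d_model))

def embed(token_ids):
    parts = np.concatenate([T[token_ids] for T in small_tables], axis=-1)  # (..., k*d_small)
    return parts @ W_fc                                                     # (..., d_model)

tokens = np.array([1, 42, 777])
print(embed(tokens).shape)    # (3, 512)
```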
no code implementations • 11 Sep 2020 • Murad Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman
Here, $d$ is the number of neurons in the layer, $n$ is the number of neurons in the next one, and $A_{k, 2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$.
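A small numpy illustration of storing a rank-$k$ factorization of a layer in $O((n+d)k)$ memory; the truncated SVD below is a stand-in, and the paper constructs its factorization $A_{k,2}$ differently.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 784, 256, 20                  # layer sizes and target rank (illustrative)
A = rng.normal(size=(n, d))             # dense weight matrix: O(n*d) memory

# Rank-k factorization via truncated SVD: keep two factors of shapes (n, k)
# and (k, d), i.e. O((n + d) * k) numbers instead of O(n * d).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
L, R = U[:, :k] * s[:k], Vt[:k, :]

x = rng.normal(size=d)
print(np.linalg.norm(A @ x - L @ (R @ x)))   # approximation error on one input
```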
no code implementations • 9 Jun 2020 • Alaa Maalouf, Ibrahim Jubran, Murad Tukan, Dan Feldman
PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0, 1)$.
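A toy numpy check of the additive guarantee for a naive uniform subsample, with threshold "classifiers" as the query set; both choices are illustrative assumptions, since real $\varepsilon$-sample constructions depend on the query set.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.uniform(size=2_000)                  # n items on [0, 1]
eps = 0.05

# Naive candidate epsilon-sample: a uniform subsample; queries are threshold
# classifiers with 0/1 loss.
S = rng.choice(items, size=400, replace=False)

for thr in rng.uniform(size=5):
    full = np.mean(items <= thr)                 # average loss over all items
    approx = np.mean(S <= thr)                   # average loss over the sample
    print(f"additive error {abs(full - approx):.4f}  (target <= {eps})")
```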
no code implementations • NeurIPS 2020 • Murad Tukan, Alaa Maalouf, Dan Feldman
A coreset is usually a small weighted subset of $n$ input points in $\mathbb{R}^d$ that provably approximates their loss function for a given set of queries (models, classifiers, etc.).
no code implementations • ICML 2020 • Ibrahim Jubran, Murad Tukan, Alaa Maalouf, Dan Feldman
The input to the \emph{sets-$k$-means} problem is an integer $k\geq 1$ and a set $\mathcal{P}=\{P_1,\cdots, P_n\}$ of sets in $\mathbb{R}^d$.
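A small numpy sketch of a candidate objective for this input, where each set contributes the squared distance of its best (point, center) pair; this reading is an assumption of the sketch, see the paper for the formal definition.

```python
import numpy as np

def sets_kmeans_cost(sets, centers):
    """Cost of a candidate solution: each set P_i contributes the smallest
    squared distance between any of its points and any center (an assumed
    reading of the objective)."""
    total = 0.0
    for P_i in sets:
        d2 = ((P_i[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # |P_i| x k
        total += d2.min()
    return total

rng = np.random.default_rng(0)
sets = [rng.normal(size=(rng.integers(2, 6), 2)) for _ in range(10)]  # P_1..P_n in R^2
centers = rng.normal(size=(3, 2))                                      # k = 3
print(sets_kmeans_cost(sets, centers))
```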
no code implementations • 19 Oct 2019 • Ibrahim Jubran, Alaa Maalouf, Dan Feldman
A coreset (or core-set) of an input set is its small summarization, such that solving a problem on the coreset as input provably yields the same result as solving the same problem on the original (full) set, for a given family of problems (models, classifiers, loss functions).
no code implementations • 2 Jul 2019 • Alaa Maalouf, Adiel Statman, Dan Feldman
With high probability, non-uniform sampling based on upper bounds on what is known as importance or sensitivity of each row in $A$ yields a coreset.
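A standard sensitivity-sampling sketch in numpy, using leverage scores as the upper bounds for a least-squares cost; the paper's bounds and analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5_000, 10))

# Upper bounds on the sensitivity of each row; for least squares these can be
# taken as the leverage scores (squared row norms of an orthogonal basis of A).
Q, _ = np.linalg.qr(A)
s = np.sum(Q ** 2, axis=1)
p = s / s.sum()                                   # sampling distribution

m = 300                                           # coreset size (illustrative)
idx = rng.choice(len(A), size=m, replace=True, p=p)
C, w = A[idx], 1.0 / (m * p[idx])                 # sampled rows and reweighting

x = rng.normal(size=10)
print(np.sum((A @ x) ** 2), np.sum(w * (C @ x) ** 2))   # full vs coreset cost
```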
1 code implementation • NeurIPS 2019 • Alaa Maalouf, Ibrahim Jubran, Dan Feldman
Least-mean squares (LMS) solvers such as Linear / Ridge / Lasso-Regression, SVD and Elastic-Net not only solve fundamental machine learning problems, but are also the building blocks in a variety of other methods, such as decision trees and matrix factorizations.