Search Results for author: Alaa Maalouf

Found 26 papers, 8 papers with code

Flex: End-to-End Text-Instructed Visual Navigation with Foundation Models

no code implementations 16 Oct 2024 Makram Chahine, Alex Quach, Alaa Maalouf, Tsun-Hsuan Wang, Daniela Rus

End-to-end learning directly maps sensory inputs to actions, creating highly integrated and efficient policies for complex robotics tasks.

Visual Navigation

Probing Multimodal LLMs as World Models for Driving

1 code implementation 9 May 2024 Shiva Sreeram, Tsun-Hsuan Wang, Alaa Maalouf, Guy Rosman, Sertac Karaman, Daniela Rus

We provide a sober look at the application of Multimodal Large Language Models (MLLMs) in autonomous driving, challenging common assumptions about their ability to interpret dynamic driving scenarios.

Autonomous Driving Trajectory Planning

Drive Anywhere: Generalizable End-to-end Autonomous Driving with Multi-modal Foundation Models

no code implementations 26 Oct 2023 Tsun-Hsuan Wang, Alaa Maalouf, Wei Xiao, Yutong Ban, Alexander Amini, Guy Rosman, Sertac Karaman, Daniela Rus

As autonomous driving technology matures, end-to-end methodologies have emerged as a leading strategy, promising seamless integration from perception to control via deep learning.

Autonomous Driving Data Augmentation

Follow Anything: Open-set detection, tracking, and following in real-time

1 code implementation 10 Aug 2023 Alaa Maalouf, Ninad Jadhav, Krishna Murthy Jatavallabhula, Makram Chahine, Daniel M. Vogt, Robert J. Wood, Antonio Torralba, Daniela Rus

We demonstrate FAn on a real-world robotic system (a micro aerial vehicle) and report its ability to seamlessly follow the objects of interest in a real-time control loop.

Dataset Distillation Meets Provable Subset Selection

no code implementations 16 Jul 2023 Murad Tukan, Alaa Maalouf, Margarita Osadchy

Deep learning has grown tremendously over recent years, yielding state-of-the-art results in various fields.

Dataset Distillation

On the Size and Approximation Error of Distilled Sets

no code implementations 23 May 2023 Alaa Maalouf, Murad Tukan, Noel Loo, Ramin Hasani, Mathias Lechner, Daniela Rus

Despite significant empirical progress in recent years, there is little understanding of the theoretical limitations/guarantees of dataset distillation, specifically: what excess risk is achieved by distillation compared to the original dataset, and how large can distilled datasets be?

Dataset Distillation regression

AutoCoreset: An Automatic Practical Coreset Construction Framework

1 code implementation 19 May 2023 Alaa Maalouf, Murad Tukan, Vladimir Braverman, Daniela Rus

A coreset is a tiny weighted subset of an input set that closely approximates its loss function with respect to a certain set of queries.
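The definition above can be made concrete with a minimal sketch. This is not the paper's AutoCoreset construction (which builds coresets automatically for general loss functions); it is just a uniformly sampled, reweighted subset that approximates the 1-mean cost (sum of squared distances) for arbitrary query centers, and all names are illustrative:

```python
import numpy as np

def one_mean_cost(points, weights, query):
    """Weighted sum of squared distances from the points to a query center."""
    return float(np.sum(weights * np.sum((points - query) ** 2, axis=1)))

rng = np.random.default_rng(0)
P = rng.normal(size=(10_000, 2))          # input set
w_full = np.ones(len(P))

# Uniform-sampling "coreset": m points, each reweighted by n/m so that the
# weighted sample cost is an unbiased estimate of the full cost for any query.
m = 500
idx = rng.choice(len(P), size=m, replace=False)
S, w_S = P[idx], np.full(m, len(P) / m)

for q in (np.zeros(2), np.array([3.0, -1.0])):
    full, approx = one_mean_cost(P, w_full, q), one_mean_cost(S, w_S, q)
    print(f"relative error at query {q}: {abs(full - approx) / full:.3f}")
```

The `n/m` reweighting makes the sample cost an unbiased estimator of the full cost, which is the sense in which a coreset "resembles the loss function" over the query set.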

Provable Data Subset Selection For Efficient Neural Network Training

1 code implementation 9 Mar 2023 Murad Tukan, Samson Zhou, Alaa Maalouf, Daniela Rus, Vladimir Braverman, Dan Feldman

In this paper, we introduce the first algorithm to construct coresets for \emph{RBFNNs}, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network and thus approximate any function defined by an \emph{RBFNN} on the larger input data.

Efficient Neural Network

ConceptFusion: Open-set Multimodal 3D Mapping

1 code implementation 14 Feb 2023 Krishna Murthy Jatavallabhula, Alihusein Kuwajerwala, Qiao Gu, Mohd Omama, Tao Chen, Alaa Maalouf, Shuang Li, Ganesh Iyer, Soroush Saryazdi, Nikhil Keetha, Ayush Tewari, Joshua B. Tenenbaum, Celso Miguel de Melo, Madhava Krishna, Liam Paull, Florian Shkurti, Antonio Torralba

ConceptFusion leverages the open-set capabilities of today's foundation models pre-trained on internet-scale data to reason about concepts across modalities such as natural language, images, and audio.

3D geometry Autonomous Driving +1

Deep Learning on Home Drone: Searching for the Optimal Architecture

1 code implementation 21 Sep 2022 Alaa Maalouf, Yotam Gurfinkel, Barak Diker, Oren Gal, Daniela Rus, Dan Feldman

We present the first system that runs real-time semantic segmentation via deep learning on a weak micro-computer such as the Raspberry Pi Zero v2 (priced at $15) attached to a toy drone.

Deep Learning Real-Time Semantic Segmentation

Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions

no code implementations 18 Sep 2022 Murad Tukan, Loay Mualem, Alaa Maalouf

Recently, coresets (provable data summarizations) have been leveraged for pruning DNNs, adding the advantage of theoretical guarantees on the trade-off between the compression rate and the approximation error.

Obstacle Aware Sampling for Path Planning

no code implementations 8 Mar 2022 Murad Tukan, Alaa Maalouf, Dan Feldman, Roi Poranne

While this approach is very simple, it can become costly when the obstacles are unknown, since samples hitting these obstacles are wasted.

Coresets for Data Discretization and Sine Wave Fitting

no code implementations 6 Mar 2022 Alaa Maalouf, Murad Tukan, Eric Price, Daniel Kane, Dan Feldman

The goal (e.g., for anomaly detection) is to approximate the $n$ points received so far in $P$ by a single sine wave of frequency $c$, e.g. $\min_{c\in C}cost(P, c)+\lambda(c)$, where $cost(P, c)=\sum_{i=1}^n \sin^2(\frac{2\pi}{N} p_i c)$, $C\subseteq [N]$ is a feasible set of solutions, and $\lambda$ is a given regularization function.

Anomaly Detection
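The cost function in the snippet above can be evaluated on a toy instance. The values of $N$, $P$, and $C$ below are illustrative (the regularizer $\lambda$ is dropped), and the minimizer is found by brute force over the feasible set:

```python
import numpy as np

def sine_cost(P, c, N):
    """The objective from the abstract: cost(P, c) = sum_i sin^2((2*pi/N) * p_i * c)."""
    return float(np.sum(np.sin(2 * np.pi * P * c / N) ** 2))

N = 100
rng = np.random.default_rng(1)
P = 20 * rng.integers(1, 5, size=50)     # points at multiples of 20 in [N]

C = np.arange(1, N)                      # feasible set of integer frequencies
costs = np.array([sine_cost(P, c, N) for c in C])
best = int(C[np.argmin(costs)])
# Mathematically, c = 5 drives every term to sin^2(2*pi*k) = 0 for this P.
print(best, sine_cost(P, best, N))
```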

A Unified Approach to Coreset Learning

no code implementations 4 Nov 2021 Alaa Maalouf, Gilad Eini, Ben Mussay, Dan Feldman, Margarita Osadchy

Our approach offers a new definition of coreset, which is a natural relaxation of the standard definition and aims at approximating the \emph{average} loss of the original data over the queries.

Network Pruning

Introduction to Coresets: Approximated Mean

no code implementations 4 Nov 2021 Alaa Maalouf, Ibrahim Jubran, Dan Feldman

The survey may help guide new researchers unfamiliar with the field and introduce them to the very basic foundations of coresets through a simple, yet fundamental, problem.

Survey

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

2 code implementations NeurIPS 2021 Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus

We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression.

Low-rank compression

Provably Approximated ICP

no code implementations 10 Jan 2021 Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman

A harder version is the \emph{registration problem}, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from $P$ to $Q$.

Provably Approximated Point Cloud Registration

no code implementations ICCV 2021 Ibrahim Jubran, Alaa Maalouf, Ron Kimmel, Dan Feldman

A harder version is the registration problem, where the correspondence is unknown, and the minimum is also over all possible correspondence functions from P to Q. Algorithms such as the Iterative Closest Point (ICP) and its variants were suggested for these problems, but none yield a provable non-trivial approximation for the global optimum.

Point Cloud Registration

Deep Learning Meets Projective Clustering

no code implementations ICLR 2021 Alaa Maalouf, Harry Lang, Daniela Rus, Dan Feldman

Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of $k$ small layers that operate in parallel and are then recombined with a single fully-connected layer.

Clustering Deep Learning

Compressed Deep Networks: Goodbye SVD, Hello Robust Low-Rank Approximation

no code implementations 11 Sep 2020 Murad Tukan, Alaa Maalouf, Matan Weksler, Dan Feldman

Here, $d$ is the number of neurons in the layer, $n$ is the number in the next one, and $A_{k,2}$ can be stored in $O((n+d)k)$ memory instead of $O(nd)$.
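The storage arithmetic above can be checked with a short sketch. The paper's point is to replace SVD with a robust low-rank factorization; the classical truncated-SVD baseline below (with illustrative dimensions) only demonstrates the identical $O((n+d)k)$ vs. $O(nd)$ storage trade-off:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, k = 256, 128, 8                    # layer widths and target rank

# A weight matrix that is approximately rank k (low-rank signal plus noise).
A = rng.normal(size=(d, k)) @ rng.normal(size=(k, n)) + 0.1 * rng.normal(size=(d, n))

# Classical truncated SVD; the paper swaps this for a robust factorization,
# but the resulting factor shapes and storage costs are the same.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
L, R = U[:, :k] * s[:k], Vt[:k, :]       # d x k and k x n factors

dense_params = d * n                     # O(nd) storage for the dense layer
factored_params = (d + n) * k            # O((n+d)k) storage for the factors
rel_err = np.linalg.norm(A - L @ R) / np.linalg.norm(A)
print(dense_params, factored_params, f"{rel_err:.3f}")
```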

Faster PAC Learning and Smaller Coresets via Smoothed Analysis

no code implementations 9 Jun 2020 Alaa Maalouf, Ibrahim Jubran, Murad Tukan, Dan Feldman

PAC-learning usually aims to compute a small subset ($\varepsilon$-sample/net) from $n$ items, that provably approximates a given loss function for every query (model, classifier, hypothesis) from a given set of queries, up to an additive error $\varepsilon\in(0, 1)$.

PAC learning
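The $\varepsilon$-sample idea in the snippet above can be demonstrated on a finite query set (not the paper's smoothed-analysis construction; the hypothesis class, data, and sample size here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(size=100_000)            # items
y = (x > 0.6).astype(float)              # labels from a hidden threshold

def loss(xs, ys, t):
    """Average 0/1 loss of the threshold classifier 'predict 1 iff x > t'."""
    return float(np.mean((xs > t).astype(float) != ys))

queries = np.linspace(0.0, 1.0, 21)      # a finite set of hypotheses

# Uniform eps-sample: by Hoeffding plus a union bound over the finite query
# set, m = O(log(|queries|)/eps^2) samples approximate every query's loss to
# within an additive eps, with high probability.
m = 2_000
idx = rng.choice(len(x), size=m, replace=False)
worst = max(abs(loss(x, y, t) - loss(x[idx], y[idx], t)) for t in queries)
print(f"worst additive error over all queries: {worst:.3f}")
```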

Coresets for Near-Convex Functions

no code implementations NeurIPS 2020 Murad Tukan, Alaa Maalouf, Dan Feldman

A coreset is usually a small weighted subset of the $n$ input points in $\mathbb{R}^d$ that provably approximates their loss function for a given set of queries (models, classifiers, etc.).

regression

Sets Clustering

no code implementations ICML 2020 Ibrahim Jubran, Murad Tukan, Alaa Maalouf, Dan Feldman

The input to the \emph{sets-$k$-means} problem is an integer $k\geq 1$ and a set $\mathcal{P}=\{P_1,\cdots, P_n\}$ of sets in $\mathbb{R}^d$.

Clustering Document Classification
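The sets-$k$-means input above comes with a cost that differs from ordinary $k$-means: each set is served by its closest point-to-center pair. The sketch below evaluates that cost on a hypothetical toy instance (this is one common formalization, not the paper's algorithm):

```python
import numpy as np

def sets_kmeans_cost(sets_, centers):
    """Sum over sets of the minimum squared distance between any point of
    the set and any center (each set is served by its best point)."""
    total = 0.0
    for P_i in sets_:
        d2 = ((P_i[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        total += d2.min()
    return float(total)

# Toy instance in R^2 with k = 2 centers: each set only needs one of its
# points to lie near one of the centers.
sets_ = [
    np.array([[0.0, 0.0], [9.0, 9.0]]),
    np.array([[0.1, 0.0]]),
    np.array([[5.0, 5.0], [1.0, 0.0]]),
]
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(sets_kmeans_cost(sets_, centers))  # only the singleton set pays: 0.1^2
```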

Introduction to Coresets: Accurate Coresets

no code implementations 19 Oct 2019 Ibrahim Jubran, Alaa Maalouf, Dan Feldman

A coreset (or core-set) of an input set is a small summarization of it, such that solving a problem on the coreset provably yields the same result as solving the same problem on the original (full) set, for a given family of problems (models, classifiers, loss functions).

Math
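A classic example in the spirit of accurate coresets (the survey's running theme, sketched here with illustrative data): for the sum of squared distances to a query $x$, the identity $\sum_i \|p_i - x\|^2 = n\|\mu - x\|^2 + \sum_i \|p_i - \mu\|^2$ means the triple $(n, \mu, \text{variance term})$ recovers the cost exactly for every query:

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.normal(size=(1_000, 3))
n, mu = len(P), P.mean(axis=0)
var_term = float(np.sum((P - mu) ** 2))  # precomputed once from the input

def full_cost(x):
    return float(np.sum((P - x) ** 2))

def coreset_cost(x):
    # The constant-size summary (n, mu, var_term) is an accurate coreset:
    # sum_i ||p_i - x||^2 = n * ||mu - x||^2 + sum_i ||p_i - mu||^2.
    return n * float(np.sum((mu - x) ** 2)) + var_term

for x in (np.zeros(3), np.array([2.0, -1.0, 0.5])):
    assert abs(full_cost(x) - coreset_cost(x)) <= 1e-8 * full_cost(x)
print("exact on every tested query")
```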

Tight Sensitivity Bounds For Smaller Coresets

no code implementations 2 Jul 2019 Alaa Maalouf, Adiel Statman, Dan Feldman

With high probability, non-uniform sampling based on upper bounds on what is known as importance or sensitivity of each row in $A$ yields a coreset.

Fast and Accurate Least-Mean-Squares Solvers

1 code implementation NeurIPS 2019 Alaa Maalouf, Ibrahim Jubran, Dan Feldman

Least-mean-squares (LMS) solvers such as Linear/Ridge/Lasso regression, SVD, and Elastic-Net not only solve fundamental machine learning problems, but are also the building blocks in a variety of other methods, such as decision trees and matrix factorizations.

Data Summarization
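A key fact these solvers exploit is that ridge-style problems depend on the data only through $X^\top X$ and $X^\top y$. The sketch below (illustrative data; not the paper's Caratheodory-based construction, which reproduces these statistics exactly from a small weighted subset) builds them in a streaming pass and recovers the full-data solution:

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, lam = 50_000, 10, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Ridge regression depends on the data only through the d x d matrix X^T X
# and the d-vector X^T y; any summary reproducing them yields the same
# solution. Here we accumulate them over small chunks of the data.
G, b = np.zeros((d, d)), np.zeros(d)
for Xc, yc in zip(np.array_split(X, 100), np.array_split(y, 100)):
    G += Xc.T @ Xc
    b += Xc.T @ yc
w_stream = np.linalg.solve(G + lam * np.eye(d), b)

w_full = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
print(np.allclose(w_stream, w_full))     # identical up to floating point
```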
