Search Results for author: Nadav Cohen

Found 32 papers, 15 papers with code

Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States

1 code implementation • 12 Feb 2024 • Noam Razin, Yotam Alexander, Edo Cohen-Karlik, Raja Giryes, Amir Globerson, Nadav Cohen

This paper theoretically studies the implicit bias of policy gradient in terms of extrapolation to unseen initial states.

Data-Driven Strategies for Coping with Incomplete DVL Measurements

no code implementations • 28 Jan 2024 • Nadav Cohen, Itzik Klein

Autonomous underwater vehicles are specialized platforms engineered for deep underwater operations.

Autonomous Navigation

A-KIT: Adaptive Kalman-Informed Transformer

1 code implementation • 18 Jan 2024 • Nadav Cohen, Itzik Klein

In this paper, we derive and introduce A-KIT, an adaptive Kalman-informed transformer to learn the varying process noise covariance online.

Sensor Fusion
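
A minimal sketch of the idea behind the A-KIT entry above, not the authors' implementation: a standard Kalman filter in which the process noise covariance Q is supplied at each step by a learned model. The function noise_net below is a hypothetical stand-in for the transformer described in the paper; here it simply scales Q with the energy of recent innovations.

    import numpy as np

    def noise_net(innovation_window):
        # Hypothetical stand-in for A-KIT's transformer: scale the process noise
        # covariance with the energy of recent innovations.
        return np.eye(2) * (1e-3 + np.mean(innovation_window ** 2))

    def kalman_step(x, P, z, F, H, R, Q):
        # Predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update
        y = z - H @ x_pred                      # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new, y

    # Toy constant-velocity model with an online-adapted Q
    F = np.array([[1.0, 1.0], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                  # observe position only
    R = np.array([[0.1]])                       # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    innovations = np.zeros(10)
    for z in np.linspace(0.0, 5.0, 50):
        Q = noise_net(innovations)              # adaptive process noise covariance
        x, P, y = kalman_step(x, P, np.array([z]), F, H, R, Q)
        innovations = np.roll(innovations, 1)
        innovations[0] = y.item()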

What Makes Data Suitable for a Locally Connected Neural Network? A Necessary and Sufficient Condition Based on Quantum Entanglement

1 code implementation • 20 Mar 2023 • Yotam Alexander, Nimrod De La Vega, Noam Razin, Nadav Cohen

Focusing on locally connected neural networks (a prevalent family of architectures that includes convolutional and recurrent neural networks as well as local self-attention models), we address this problem by adopting theoretical tools from quantum physics.

Set-Transformer BeamsNet for AUV Velocity Forecasting in Complete DVL Outage Scenarios

no code implementations • 22 Dec 2022 • Nadav Cohen, Zeev Yampolsky, Itzik Klein

Our ST-BeamsNet estimated the AUV velocity vector with an 8.547% speed error, which is 26% better than the MA approach.

Autonomous Navigation • Blocking

On the Ability of Graph Neural Networks to Model Interactions Between Vertices

1 code implementation • NeurIPS 2023 • Noam Razin, Tom Verbin, Nadav Cohen

Formalizing strength of interactions through an established measure known as separation rank, we quantify the ability of certain GNNs to model interaction between a given subset of vertices and its complement, i.e. between the sides of a given partition of input vertices.
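
For reference, the separation rank used in the entry above is a standard measure from the literature; in sketch form, for a function f over input variables split by a partition I / I^c, it is the minimal number of separable terms needed to express f:

    % Separation rank of f with respect to the partition (I, I^c):
    \[
    \mathrm{sep}(f;\mathcal{I}) \;=\; \min\Big\{ R \in \mathbb{N}\cup\{0\} \;:\;
    f(\mathbf{x}) \,=\, \sum_{r=1}^{R} g_r\big(\mathbf{x}_{\mathcal{I}}\big)\, h_r\big(\mathbf{x}_{\mathcal{I}^c}\big) \Big\}
    \]
    % sep(f; I) = 1 means f models no interaction across the partition (it is separable);
    % larger values indicate stronger interaction between the two sides.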

Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets

no code implementations • 25 Oct 2022 • Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen, Amir Globerson

Overparameterization in deep learning typically refers to settings where a trained neural network (NN) has representational capacity to fit the training data in many ways, some of which generalize well, while others do not.

Deep Linear Networks for Matrix Completion -- An Infinite Depth Limit

no code implementations • 22 Oct 2022 • Nadav Cohen, Govind Menon, Zsolt Veraszto

The deep linear network (DLN) is a model for implicit regularization in gradient-based optimization of overparameterized learning architectures.

Matrix Completion

LiBeamsNet: AUV Velocity Vector Estimation in Situations of Limited DVL Beam Measurements

no code implementations • 20 Oct 2022 • Nadav Cohen, Itzik Klein

In such conditions, the vehicle's velocity vector cannot be estimated, leading to drift in the navigation solution; in some situations the AUV is required to abort the mission and return to the surface.

Autonomous Navigation

BeamsNet: A data-driven Approach Enhancing Doppler Velocity Log Measurements for Autonomous Underwater Vehicle Navigation

no code implementations • 27 Jun 2022 • Nadav Cohen, Itzik Klein

Both simulations and sea experiments were conducted to validate the proposed learning approach relative to the model-based approach.

Semantic Segmentation in Art Paintings

1 code implementation • 7 Mar 2022 • Nadav Cohen, Yael Newman, Ariel Shamir

In this paper, we tackle the problem of semantic segmentation of artistic paintings, an even more challenging task because of a much larger diversity in colors, textures, and shapes and because there are no ground truth annotations available for segmentation.

Domain Adaptation • Segmentation +2

On the Implicit Bias of Gradient Descent for Temporal Extrapolation

no code implementations • 9 Feb 2022 • Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen, Amir Globerson

When using recurrent neural networks (RNNs) it is common practice to apply trained models to sequences longer than those seen in training.

Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks

1 code implementation • 27 Jan 2022 • Noam Razin, Asaf Maman, Nadav Cohen

In the pursuit of explaining implicit regularization in deep learning, prominent focus was given to matrix and tensor factorizations, which correspond to simplified neural networks.

Implicit Regularization in Tensor Factorization

1 code implementation • 19 Feb 2021 • Noam Razin, Asaf Maman, Nadav Cohen

Recent efforts to unravel the mystery of implicit regularization in deep learning have led to a theoretical focus on matrix factorization -- matrix completion via a linear neural network.

Matrix Completion

Implicit Regularization in Deep Learning May Not Be Explainable by Norms

1 code implementation • NeurIPS 2020 • Noam Razin, Nadav Cohen

Mathematically characterizing the implicit regularization induced by gradient-based optimization is a longstanding pursuit in the theory of deep learning.

Matrix Completion • Open-Ended Question Answering

Implicit Regularization in Deep Matrix Factorization

1 code implementation • NeurIPS 2019 • Sanjeev Arora, Nadav Cohen, Wei Hu, Yuping Luo

Efforts to understand the generalization mystery in deep learning have led to the belief that gradient-based optimization induces a form of implicit regularization, a bias towards models of low "complexity."

Matrix Completion
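
The setup in the deep matrix factorization entry above can be illustrated with a short sketch (assumed standard setup, not the authors' code): parameterize the recovered matrix as a product of several factors and run gradient descent on the squared loss over the observed entries only; the paper's analysis concerns the bias this induces on the recovered matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    d, rank, p_obs = 10, 2, 0.5
    M = rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))  # low-rank ground truth
    mask = rng.random((d, d)) < p_obs                                    # observed entries

    # Depth-3 linear network W3 @ W2 @ W1, initialized at small scale
    Ws = [0.2 * rng.standard_normal((d, d)) for _ in range(3)]
    lr = 0.01
    for step in range(10000):
        W_e2e = Ws[2] @ Ws[1] @ Ws[0]
        residual = mask * (W_e2e - M)            # error on observed entries only
        # Gradients of 0.5 * ||mask * (W3 W2 W1 - M)||_F^2 w.r.t. each factor
        grads = [
            (Ws[2] @ Ws[1]).T @ residual,        # dL/dW1
            Ws[2].T @ residual @ Ws[0].T,        # dL/dW2
            residual @ (Ws[1] @ Ws[0]).T,        # dL/dW3
        ]
        for W, g in zip(Ws, grads):
            W -= lr * g

    W_e2e = Ws[2] @ Ws[1] @ Ws[0]
    print("loss on observed entries:", float(np.sum((mask * (W_e2e - M)) ** 2)))
    print("singular values of recovered matrix:", np.linalg.svd(W_e2e, compute_uv=False).round(2))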

A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks

no code implementations • ICLR 2019 • Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu

We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network (parameterized as $x \mapsto W_N W_{N-1} \cdots W_1 x$) by minimizing the $\ell_2$ loss over whitened data.
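
As a sketch of the objective in the entry above, stated here for reference under the standard setting: the $\ell_2$ loss over a dataset, and its reduction, for whitened inputs, to a loss on the end-to-end matrix alone.

    % l2 loss of the deep linear network over m samples (x_i, y_i):
    \[
    L(W_1,\dots,W_N) \;=\; \frac{1}{2m}\sum_{i=1}^{m} \big\| W_N W_{N-1}\cdots W_1\, x_i - y_i \big\|_2^2
    \]
    % With whitened inputs (empirical input covariance equal to the identity), this equals,
    % up to an additive constant independent of the weights,
    \[
    L(W_1,\dots,W_N) \;=\; \tfrac{1}{2}\big\| W_N W_{N-1}\cdots W_1 - \Phi \big\|_F^2 + c,
    \]
    % where \Phi is the empirical cross-covariance between targets and inputs.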

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization

1 code implementation • ICML 2018 • Sanjeev Arora, Nadav Cohen, Elad Hazan

The effect of depth on optimization is decoupled from expressiveness by focusing on settings where additional layers amount to overparameterization - linear neural networks, a well-studied model.

regression
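
A toy sketch of the setting in the entry above (illustration only, not the paper's experiments or analysis): the same scalar regression trained directly on a weight w and, alternatively, overparameterized as w = w2 * w1, an extra linear "layer" that adds no expressive power but changes the gradient-descent dynamics of the end-to-end parameter.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.standard_normal(200)
    y = 3.0 * x + 0.1 * rng.standard_normal(200)   # target slope 3
    lr, steps = 0.01, 200

    def grad(w_e2e):
        # Gradient of the mean squared error w.r.t. the end-to-end parameter
        return np.mean(2 * (w_e2e * x - y) * x)

    w = 0.1            # depth-1: optimize w directly
    w1, w2 = 0.1, 1.0  # depth-2: w is parameterized as w2 * w1
    for _ in range(steps):
        w -= lr * grad(w)
        g = grad(w2 * w1)
        w1, w2 = w1 - lr * g * w2, w2 - lr * g * w1   # chain rule for each factor
    print("depth-1 slope:", round(w, 3), "depth-2 end-to-end slope:", round(w2 * w1, 3))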

"Zero-Shot" Super-Resolution using Deep Internal Learning

7 code implementations • 17 Dec 2017 • Assaf Shocher, Nadav Cohen, Michal Irani

On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods.

Image Compression • Image Super-Resolution

Analysis and Design of Convolutional Networks via Hierarchical Tensor Decompositions

no code implementations • 5 May 2017 • Nadav Cohen, Or Sharir, Yoav Levine, Ronen Tamari, David Yakira, Amnon Shashua

Expressive efficiency refers to the ability of a network architecture to realize functions that require an alternative architecture to be much larger.

Inductive Bias

Deep Learning and Quantum Entanglement: Fundamental Connections with Implications to Network Design

no code implementations • ICLR 2018 • Yoav Levine, David Yakira, Nadav Cohen, Amnon Shashua

This description enables us to carry out a graph-theoretic analysis of a convolutional network, with which we demonstrate direct control over the inductive bias of the deep network via its channel numbers, which are related to the min-cut in the underlying graph.

Inductive Bias

Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions

no code implementations • ICLR 2018 • Nadav Cohen, Ronen Tamari, Amnon Shashua

By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency.

Tensorial Mixture Models

2 code implementations • 13 Oct 2016 • Or Sharir, Ronen Tamari, Nadav Cohen, Amnon Shashua

Other methods, based on arithmetic circuits and sum-product networks, do allow tractable marginalization, but their performance is challenged by the need to learn the structure of a circuit.

Inductive Bias of Deep Convolutional Networks through Pooling Geometry

1 code implementation • 22 May 2016 • Nadav Cohen, Amnon Shashua

In addition to analyzing deep networks, we show that shallow ones support only linear separation ranks, and thereby gain insight into the benefit of the functions brought forth by depth: they are able to efficiently model strong correlation under favored partitions of the input.

Inductive Bias

Convolutional Rectifier Networks as Generalized Tensor Decompositions

no code implementations • 1 Mar 2016 • Nadav Cohen, Amnon Shashua

Second, and more importantly, we show that depth efficiency is weaker with convolutional rectifier networks than it is with convolutional arithmetic circuits.

On the Expressive Power of Deep Learning: A Tensor Analysis

no code implementations • 16 Sep 2015 • Nadav Cohen, Or Sharir, Amnon Shashua

In this work we derive a deep network architecture based on arithmetic circuits that inherently employs locality, sharing and pooling.

Deep SimNets

no code implementations • CVPR 2016 • Nadav Cohen, Or Sharir, Amnon Shashua

We present a deep layered architecture that generalizes convolutional neural networks (ConvNets).

SimNets: A Generalization of Convolutional Networks

1 code implementation • 3 Oct 2014 • Nadav Cohen, Amnon Shashua

We present a deep layered architecture that generalizes classical convolutional neural networks (ConvNets).
