Search Results for author: Guy Hacohen

Found 9 papers, 4 papers with code

In Your Pace: Learning the Right Example at the Right Time

no code implementations • ICLR 2019 • Guy Hacohen, Daphna Weinshall

Initially, we define the difficulty of a training image using transfer learning from a competitive "teacher" network trained on the ImageNet database, showing improvements in learning speed and final performance for both small and competitive networks on the CIFAR-10 and CIFAR-100 datasets.

Transfer Learning
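
The snippet describes scoring difficulty by transfer from a pretrained teacher. A minimal numpy sketch of that idea (my illustration, not the authors' code): score each training image by the teacher's softmax confidence in its true class, then order examples easy-to-hard for the curriculum. The teacher_probs input is assumed to come from a pretrained ImageNet classifier applied to the training set.

    import numpy as np

    def difficulty_scores(teacher_probs, labels):
        # teacher_probs: (N, C) softmax outputs of a pretrained "teacher";
        # labels: (N,) integer class labels.
        # Higher teacher confidence in the true class = easier example.
        confidence = teacher_probs[np.arange(len(labels)), labels]
        return 1.0 - confidence

    def curriculum_order(teacher_probs, labels):
        # Indices sorted easy-to-hard, for scheduling mini-batches.
        return np.argsort(difficulty_scores(teacher_probs, labels))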

Semi-Supervised Learning in the Few-Shot Zero-Shot Scenario

no code implementations • 27 Aug 2023 • Noam Fluss, Guy Hacohen, Daphna Weinshall

Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance.

Image Classification
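
The snippet defines SSL only in general terms. As one common illustration of how unlabeled data can help (not necessarily this paper's method), here is a minimal self-training loop that pseudo-labels confident unlabeled points; the threshold and round count are arbitrary choices.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
        # Fit on labeled data, pseudo-label confident unlabeled points,
        # then refit on the enlarged labeled set.
        X, y = X_lab.copy(), y_lab.copy()
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        for _ in range(rounds):
            probs = clf.predict_proba(X_unlab)
            confident = probs.max(axis=1) >= threshold
            if not confident.any():
                break
            X = np.vstack([X, X_unlab[confident]])
            y = np.concatenate([y, probs[confident].argmax(axis=1)])
            X_unlab = X_unlab[~confident]
            clf = LogisticRegression(max_iter=1000).fit(X, y)
        return clf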

Pruning the Unlabeled Data to Improve Semi-Supervised Learning

no code implementations • 27 Aug 2023 • Guy Hacohen, Daphna Weinshall

In the domain of semi-supervised learning (SSL), the conventional approach involves training a learner with a limited amount of labeled data alongside a substantial volume of unlabeled data, both drawn from the same underlying distribution.

Image Classification

Active Learning Through a Covering Lens

1 code implementation • 23 May 2022 • Ofer Yehuda, Avihu Dekel, Guy Hacohen, Daphna Weinshall

Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry.

Active Learning • Representation Learning • +1
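
The one-sentence snippet gives only the motivation; the paper casts selection as a max-coverage problem (its method is called ProbCover). A rough sketch of greedy max-coverage selection under that reading, with the radius and embeddings as assumed inputs (e.g. from a self-supervised encoder):

    import numpy as np

    def coverage_select(embeddings, budget, radius):
        # Each point "covers" all points within `radius` of it; greedily
        # pick the point covering the most still-uncovered points.
        # O(N^2) distance matrix, so for illustration on small N only.
        dists = np.linalg.norm(
            embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
        covers = dists <= radius
        uncovered = np.ones(len(embeddings), dtype=bool)
        selected = []
        for _ in range(budget):
            gain = (covers & uncovered).sum(axis=1)
            best = int(gain.argmax())
            if gain[best] == 0:      # everything already covered
                break
            selected.append(best)
            uncovered &= ~covers[best]
        return selected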

Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets

1 code implementation • 6 Feb 2022 • Guy Hacohen, Avihu Dekel, Daphna Weinshall

Investigating active learning, we focus on the relation between the number of labeled examples (the budget size) and suitable querying strategies.

Active Learning
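
As I read the paper's thesis, low budgets favor querying typical, high-density examples while high budgets favor uncertainty sampling. A sketch contrasting the two regimes; the typicality score (inverse mean distance to the k nearest neighbors) follows the paper's TypiClust idea, but the code itself is my illustration:

    import numpy as np

    def uncertainty_query(probs, budget):
        # High-budget regime: label the least confident predictions.
        return np.argsort(probs.max(axis=1))[:budget]

    def typicality_query(embeddings, budget, k=20):
        # Low-budget regime: label the densest ("most typical") points.
        d = np.linalg.norm(
            embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
        knn = np.sort(d, axis=1)[:, 1:k + 1]   # drop self-distance
        typicality = 1.0 / (knn.mean(axis=1) + 1e-12)
        return np.argsort(-typicality)[:budget]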

The Grammar-Learning Trajectories of Neural Language Models

1 code implementation • ACL 2022 • Leshem Choshen, Guy Hacohen, Daphna Weinshall, Omri Abend

These findings suggest that there is some mutual inductive bias that underlies these models' learning of linguistic phenomena.

Inductive Bias

Principal Components Bias in Over-parameterized Linear Models, and its Manifestation in Deep Neural Networks

no code implementations • NeurIPS 2021 • Guy Hacohen, Daphna Weinshall

Empirically, we show how the PC-bias streamlines the order of learning of both linear and non-linear networks, more prominently at earlier stages of learning.
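
The PC-bias is easy to reproduce in a toy linear model: under gradient descent, the error along the i-th principal direction shrinks by a factor of (1 - eta * lambda_i) per step, so high-variance directions are learned first. A small numpy demonstration (my construction, not the paper's code):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    # Two principal directions with very different variance.
    X = rng.normal(size=(n, 2)) * np.array([10.0, 0.1])
    w_star = np.array([1.0, 1.0])
    y = X @ w_star

    w, lr = np.zeros(2), 1e-4
    for t in range(2001):
        w -= lr * X.T @ (X @ w - y) / n
        if t % 500 == 0:
            # Error per direction: the high-variance coordinate
            # converges long before the low-variance one moves.
            print(t, np.abs(w - w_star))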

On The Power of Curriculum Learning in Training Deep Networks

3 code implementations • 7 Apr 2019 • Guy Hacohen, Daphna Weinshall

We address challenge (i) using two methods: transfer learning from some competitive "teacher" network, and bootstrapping.

Transfer Learning
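
Of the two scoring methods the snippet names, bootstrapping needs no external teacher: train the network once without a curriculum, then rank examples by that model's own confidence in the true class. A hedged sketch, where train_and_score is a hypothetical routine returning per-example softmax outputs:

    import numpy as np

    def bootstrap_order(train_and_score, X, y):
        # Self-taught ("bootstrapping") curriculum: a model trained
        # once without a curriculum scores its own training data.
        probs = train_and_score(X, y)            # (N, C) softmax outputs
        confidence = probs[np.arange(len(y)), y]
        return np.argsort(-confidence)           # easy-to-hard order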
