no code implementations • ICLR 2019 • Guy Hacohen, Daphna Weinshall
Initially, we define the difficulty of a training image using transfer learning from some competitive "teacher" network trained on the ImageNet database, showing improvement in learning speed and final performance for both small and competitive networks, using the CIFAR-10 and CIFAR-100 datasets.
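The entry above describes scoring the difficulty of each training image with a pretrained "teacher" network and presenting examples from easy to hard. A minimal sketch of that scoring idea, assuming difficulty is one minus the teacher's confidence in the true class (function names are illustrative, not from the paper's code):

```python
# Hedged sketch: rank training images by the confidence a pretrained "teacher"
# assigns to their ground-truth class, then present them easy-to-hard.
import numpy as np

def difficulty_scores(teacher_probs, labels):
    """Lower teacher confidence on the true class -> harder example."""
    true_class_conf = teacher_probs[np.arange(len(labels)), labels]
    return 1.0 - true_class_conf  # difficulty in [0, 1]

def curriculum_order(teacher_probs, labels):
    """Indices of training examples sorted from easiest to hardest."""
    return np.argsort(difficulty_scores(teacher_probs, labels))

# toy usage: 5 examples, 3 classes
probs = np.array([[0.70, 0.20, 0.10],
                  [0.10, 0.80, 0.10],
                  [0.30, 0.30, 0.40],
                  [0.05, 0.90, 0.05],
                  [0.50, 0.25, 0.25]])
labels = np.array([0, 1, 2, 1, 0])
print(curriculum_order(probs, labels))  # easiest examples first
```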
no code implementations • 17 Dec 2024 • Uri Stern, Tomer Yaacoby, Daphna Weinshall
We posit that this score quantifies local overfitting: a decline in performance confined to certain regions of the data space.
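As a rough illustration of what a "local" performance decline could look like in practice, the sketch below compares per-region validation accuracy at the best epoch versus the final epoch; using k-means clusters as the notion of "region" is an assumption made here for illustration, not the paper's definition of the score:

```python
# Hedged sketch of measuring a localized accuracy drop between the best epoch
# and the end of training, per feature-space region (k-means cluster).
import numpy as np
from sklearn.cluster import KMeans

def local_accuracy_drops(feats, correct_best, correct_final, n_regions=10):
    """correct_best / correct_final: boolean arrays of per-example correctness
    at the best-validation epoch and at the final epoch."""
    regions = KMeans(n_clusters=n_regions, n_init=10).fit_predict(feats)
    drops = {}
    for r in range(n_regions):
        mask = regions == r
        drops[r] = correct_best[mask].mean() - correct_final[mask].mean()
    return drops  # positive value = accuracy declined in that region
```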
1 code implementation • 1 Jul 2024 • Inbal Mishal, Daphna Weinshall
Deep Active Learning (AL) techniques can be effective in reducing annotation costs for training deep models.
no code implementations • 30 Jun 2024 • Shahar Shaul-Ariel, Daphna Weinshall
However, as the memory allocated for replay decreases, the effectiveness of these approaches diminishes.
no code implementations • 17 Oct 2023 • Uri Stern, Daphna Weinshall
An extensive empirical evaluation with modern deep models shows our method's utility on multiple datasets, neural network architectures and training schemes, both when training from scratch and when using pre-trained networks in transfer learning.
1 code implementation • 17 Oct 2023 • Uri Stern, Daniel Shwartz, Daphna Weinshall
Our method allows for the incorporation of useful knowledge obtained by the models during the overfitting phase without deterioration of the general performance, which is usually missed when early stopping is used.
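One plausible way to reuse knowledge from checkpoints saved during the overfitting phase is to average their predictions rather than keep only the early-stopped model; the uniform averaging below is an assumption for illustration, not necessarily the paper's combination rule:

```python
# Hedged sketch: combine the softmax outputs of several checkpoints saved along
# training (including the overfitting phase) instead of a single early-stopped model.
import numpy as np

def ensemble_predict(checkpoint_probs):
    """checkpoint_probs: list of (n_samples, n_classes) softmax output arrays."""
    return np.mean(np.stack(checkpoint_probs, axis=0), axis=0)

# toy usage with two "checkpoints" over 3 samples and 2 classes
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p2 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])
print(ensemble_predict([p1, p2]).argmax(axis=1))
```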
no code implementations • 27 Aug 2023 • Noam Fluss, Guy Hacohen, Daphna Weinshall
Semi-Supervised Learning (SSL) is a framework that utilizes both labeled and unlabeled data to enhance model performance.
no code implementations • 27 Aug 2023 • Guy Hacohen, Daphna Weinshall
In the domain of semi-supervised learning (SSL), the conventional approach involves training a learner with a limited amount of labeled data alongside a substantial volume of unlabeled data, both drawn from the same underlying distribution.
no code implementations • 2 Oct 2022 • Daniel Shwartz, Uri Stern, Daphna Weinshall
This introduces a problem when training in the presence of noisy labels, as the noisy examples cannot be distinguished from clean examples by the end of training.
1 code implementation • 23 May 2022 • Ofer Yehuda, Avihu Dekel, Guy Hacohen, Daphna Weinshall
Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry.
1 code implementation • 6 Feb 2022 • Guy Hacohen, Avihu Dekel, Daphna Weinshall
Investigating active learning, we focus on the relation between the number of labeled examples (budget size), and suitable querying strategies.
Ranked #1 on Active Learning on CIFAR10 (10,000)
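A hedged sketch of the budget-dependent querying idea investigated in this entry: with a small budget, query typical (high-density) examples; with a large budget, query uncertain ones. The k-NN typicality score and the budget threshold below are simplified stand-ins, not the paper's implementation:

```python
# Hedged sketch of budget-aware query selection in active learning.
import numpy as np

def typicality(features, k=5):
    """Higher score = denser neighborhood (more typical example)."""
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]   # skip the zero self-distance
    return 1.0 / (knn.mean(axis=1) + 1e-8)

def uncertainty(probs):
    """Higher score = less confident prediction (smaller max probability)."""
    return 1.0 - probs.max(axis=1)

def select_queries(features, probs, budget, low_budget_threshold=100):
    # assumption: a fixed threshold separates the "low" and "high" budget regimes
    scores = typicality(features) if budget < low_budget_threshold else uncertainty(probs)
    return np.argsort(-scores)[:budget]
```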
1 code implementation • ACL 2022 • Leshem Choshen, Guy Hacohen, Daphna Weinshall, Omri Abend
These findings suggest that there is some mutual inductive bias that underlies these models' learning of linguistic phenomena.
no code implementations • NeurIPS 2021 • Guy Hacohen, Daphna Weinshall
Empirically, we show how the PC-bias streamlines the order of learning of both linear and non-linear networks, more prominently at earlier stages of learning.
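The PC-bias can be illustrated on a toy linear-regression problem: under gradient descent, the error along leading principal components of the input shrinks faster than along trailing ones. The synthetic setup below only demonstrates the phenomenon and is not the paper's experimental protocol:

```python
# Hedged demonstration: gradient descent on linear regression converges faster
# along principal components of the data with larger variance.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
scales = np.linspace(3.0, 0.3, d)                  # decaying per-coordinate std
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))       # random rotation
X = rng.normal(size=(n, d)) * scales @ Q.T
w_true = rng.normal(size=d)
y = X @ w_true

# principal directions of the input covariance, sorted by decreasing eigenvalue
eigvals, eigvecs = np.linalg.eigh(X.T @ X / n)
eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]

w = np.zeros(d)
lr = 0.01
for step in range(1, 201):
    w -= lr * X.T @ (X @ w - y) / n
    if step in (10, 50, 200):
        err_along_pcs = np.abs(eigvecs.T @ (w - w_true))
        print(step, err_along_pcs.round(3))        # leading PCs converge first
```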
no code implementations • 9 Feb 2021 • Roee Cates, Daphna Weinshall
Our model can be employed during training time only and then pruned for prediction, resulting in an architecture equivalent to the base model.
1 code implementation • 1 Dec 2020 • Boaz Lerner, Guy Shiran, Daphna Weinshall
We also notably improve the results in the extreme cases of 1, 2 and 3 labels per class, and show that features learned by our model are more meaningful for separating the data.
no code implementations • 25 Nov 2020 • Itamar Winter, Daphna Weinshall
In the small-data regime, where only a small sample of labeled images is available for training with no access to additional unlabeled data, our results surpass state-of-the-art GAN models trained on the same amount of data.
1 code implementation • 31 Mar 2020 • Idan Azuri, Daphna Weinshall
GLICO learns a mapping from the training examples to a latent space and a generator that generates images from vectors in the latent space.
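A minimal sketch of this latent-optimization setup (in the spirit of GLO): each training image gets its own learnable latent code, optimized jointly with a generator that reconstructs it. The tiny MLP generator and hyperparameters are placeholders, not GLICO's actual architecture:

```python
# Hedged sketch: per-example latent codes optimized jointly with a generator.
import torch
import torch.nn as nn

n_images, latent_dim, img_dim = 100, 32, 3 * 32 * 32
images = torch.rand(n_images, img_dim)             # stand-in for real training images

codes = nn.Parameter(torch.randn(n_images, latent_dim) * 0.01)
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
opt = torch.optim.Adam([codes, *generator.parameters()], lr=1e-3)

for step in range(200):
    recon = generator(codes)                       # reconstruct each image from its code
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()
    opt.step()
```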
1 code implementation • 5 Dec 2019 • Guy Shiran, Daphna Weinshall
Simultaneously, the same deep network is trained to solve an additional self-supervised task of predicting image rotations.
Ranked #8 on Image Clustering on Tiny-ImageNet
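A hedged sketch of that multi-task setup: a shared backbone feeds both a clustering head and a rotation-prediction head, and the two losses are summed. The tiny backbone and the entropy-based clustering term below are simplifications for illustration, not the paper's objective:

```python
# Hedged sketch: joint training of a clustering head and a rotation-prediction head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
cluster_head = nn.Linear(128, 10)       # soft cluster assignments
rotation_head = nn.Linear(128, 4)       # 0 / 90 / 180 / 270 degree classes

def rotate_batch(x):
    """Return all four rotations of a batch together with rotation labels."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rots), labels

x = torch.rand(8, 3, 32, 32)            # stand-in batch of images
x_rot, rot_labels = rotate_batch(x)

rot_loss = nn.functional.cross_entropy(rotation_head(backbone(x_rot)), rot_labels)
# placeholder clustering objective: sharpen soft assignments via entropy minimization
assign = cluster_head(backbone(x)).softmax(dim=1)
cluster_loss = -(assign * assign.clamp_min(1e-8).log()).sum(dim=1).mean()
loss = cluster_loss + rot_loss
```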
no code implementations • ICML 2020 • Guy Hacohen, Leshem Choshen, Daphna Weinshall
We further show that this pattern of results reflects the way neural networks learn benchmark datasets.
3 code implementations • 7 Apr 2019 • Guy Hacohen, Daphna Weinshall
We address challenge (i) using two methods: transfer learning from some competitive "teacher" network, and bootstrapping.
no code implementations • 30 Jan 2019 • Gal Katzhendler, Daphna Weinshall
Blurred Images Lead to Bad Local Minima
no code implementations • 9 Dec 2018 • Daphna Weinshall, Dan Amir
We also prove that when the ideal difficulty score is fixed, the convergence rate is monotonically increasing with respect to the loss of the current hypothesis at each point.
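For readers unfamiliar with the terminology, a hedged sketch of the quantities involved (notation assumed here, not taken verbatim from the paper): the ideal difficulty score of a training point is its loss under the optimal hypothesis, and the result concerns how the expected convergence rate of SGD depends on the loss of the current hypothesis on that point.

```latex
% Assumed notation: \ell is the (convex) per-example loss, h^{*} the optimal hypothesis.
\[
  d(\mathbf{x}_i, y_i) \;=\; \ell\bigl(h^{*}(\mathbf{x}_i),\, y_i\bigr),
  \qquad
  h^{*} \;=\; \arg\min_{h} \; \mathbb{E}_{(\mathbf{x}, y)}\bigl[\ell(h(\mathbf{x}), y)\bigr]
\]
```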
no code implementations • EMNLP 2018 • Haim Dubossarsky, Eitan Grossman, Daphna Weinshall
This and additional results point to the conclusion that performance gains reported in previous work may be an artifact of random sense assignment, which is equivalent to sub-sampling and multiple estimation of word vector representations.
no code implementations • 30 Aug 2018 • Matan Ben-Yosef, Daphna Weinshall
In order to provide a better fit to the target data distribution when the dataset includes many different classes, we propose a variant of the basic GAN model, called Gaussian Mixture GAN (GM-GAN), where the probability distribution over the latent space is a mixture of Gaussians.
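A minimal sketch of such a latent prior: instead of drawing z from a single standard normal, sample it from a mixture of Gaussians, one mode per presumed class or cluster. The number of modes and their spread below are illustrative choices, not the paper's settings:

```python
# Hedged sketch: latent vectors for the generator drawn from a mixture of Gaussians.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_modes = 64, 10
mode_means = rng.normal(size=(n_modes, latent_dim))   # fixed (or learned) mode centers

def sample_gm_latent(n, mode_std=0.5):
    """Draw n latent vectors from a mixture of n_modes Gaussians."""
    modes = rng.integers(0, n_modes, size=n)           # pick a mode per sample
    return mode_means[modes] + mode_std * rng.normal(size=(n, latent_dim))

z = sample_gm_latent(128)   # fed to the generator in place of z ~ N(0, I)
print(z.shape)              # (128, 64)
```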
no code implementations • ICML 2018 • Daphna Weinshall, Gad Cohen, Dan Amir
We provide theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the convex linear regression loss.
no code implementations • 28 Sep 2017 • Amit Mandelbaum, Daphna Weinshall
The reliable measurement of confidence in classifiers' predictions is crucial for many applications, and is therefore an important part of classifier design.
no code implementations • EMNLP 2017 • Haim Dubossarsky, Daphna Weinshall, Eitan Grossman
This article evaluates three proposed laws of semantic change.
no code implementations • CVPR 2017 • Gad Cohen, Daphna Weinshall
Studies in visual perceptual learning investigate the way human performance improves with practice, in the context of relatively simple (and therefore more manageable) visual tasks.
1 code implementation • 20 Apr 2017 • Yehezkel S. Resheff, Amit Mandelbaum, Daphna Weinshall
Deep learning has become the method of choice in many application domains of machine learning in recent years, especially for multi-class classification tasks.
no code implementations • 21 Apr 2016 • Nomi Vinokurov, Daphna Weinshall
Our method is based on the initial assignment of confidence values, which measure the affinity between a new test point and each known class.
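A hedged sketch of assigning such confidence values, assuming affinity is measured by distance to each class mean in feature space with a Gaussian kernel (the kernel, bandwidth, and threshold are illustrative assumptions, not the paper's exact construction):

```python
# Hedged sketch: affinity of a test point to each known class, used to flag novelty.
import numpy as np

def class_affinities(test_feats, train_feats, train_labels, bandwidth=1.0):
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_feats[:, None, :] - means[None, :, :], axis=-1)
    return classes, np.exp(-dists ** 2 / (2 * bandwidth ** 2))

def detect_novel(test_feats, train_feats, train_labels, threshold=0.1):
    _, aff = class_affinities(test_feats, train_feats, train_labels)
    return aff.max(axis=1) < threshold   # True = no known class is a good fit
```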
no code implementations • 17 Nov 2015 • Yehezkel S. Resheff, Daphna Weinshall
Since most data analysis and statistical methods do not handle missing values gracefully, the first step in the analysis requires the imputation of missing values.
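As a generic illustration of that imputation step, the baseline below fills missing entries with column means; the paper itself studies schemes tailored to accelerometer features, so this is only a stand-in:

```python
# Hedged baseline sketch: column-mean imputation of missing (NaN) values.
import numpy as np

def mean_impute(X):
    """Replace NaNs in each column with that column's mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan],
              [7.0, 8.0, 9.0]])
print(mean_impute(X))
```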
no code implementations • 16 Nov 2015 • Yehezkel S. Resheff, Shay Rotics, Ran Nathan, Daphna Weinshall
A common use of accelerometer data is for supervised learning of behavioral modes.
no code implementations • 27 Jan 2015 • Chaim Ginzburg, Amit Raphael, Daphna Weinshall
The reliable detection of speed of moving vehicles is considered key to traffic law enforcement in most countries, and is seen by many as an important tool to reduce the number of traffic accidents and fatalities.
no code implementations • NeurIPS 2010 • Uri Shalit, Daphna Weinshall, Gal Chechik
When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model.
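The memory argument can be made concrete with a generic rank-r factorization W = A @ B, which stores 2dr numbers instead of d^2 and applies W in O(dr) time; this is only the underlying parameterization, not the specific online algorithm of the paper:

```python
# Hedged illustration of the memory/runtime benefit of a low-rank parameterization.
import numpy as np

d, r = 1000, 20
A = np.random.randn(d, r)
B = np.random.randn(r, d)

full_params = d * d                  # 1,000,000 entries for the full matrix
factored_params = 2 * d * r          # 40,000 entries for the rank-r factors
print(full_params, factored_params)

x = np.random.randn(d)
y = A @ (B @ x)                      # apply W = A @ B in O(d*r) instead of O(d^2)
```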
no code implementations • NeurIPS 2008 • Daphna Weinshall, Hynek Hermansky, Alon Zweig, Jie Luo, Holly Jimison, Frank Ohl, Misha Pavel
We define a formal framework for the representation and processing of incongruent events: starting from the notion of label hierarchy, we show how partial order on labels can be deduced from such hierarchies.
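A small sketch of deducing a partial order on labels from a label hierarchy: one label precedes another if it is an ancestor (a more general label) in the hierarchy. The toy hierarchy below is illustrative, not taken from the paper:

```python
# Hedged sketch: partial order on labels induced by ancestry in a label hierarchy.
parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
          "sparrow": "bird", "bird": "animal", "animal": None}

def precedes(a, b):
    """True if a is an ancestor of b (i.e. a is a more general label than b)."""
    while b is not None:
        b = parent.get(b)
        if b == a:
            return True
    return False

print(precedes("animal", "dog"))   # True:  animal > mammal > dog
print(precedes("dog", "cat"))      # False: incomparable labels
```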