no code implementations • 10 Mar 2023 • Koby Bibas, Oren Sar Shalom, Dietmar Jannach
In this work, we propose a novel approach that can leverage both item side-information and labeled complementary item pairs to generate effective complementary recommendations for cold items, i.e., for items for which no co-purchase statistics yet exist.
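The general idea can be illustrated with a minimal two-tower sketch (an illustration only, not necessarily the paper's architecture): both towers consume item side-information, the model is trained on labeled complementary pairs, and at inference cold items are scored from their side-information alone. All module and feature names here are hypothetical.

```python
import torch
import torch.nn as nn

class ComplementarityScorer(nn.Module):
    """Two-tower sketch: embeds item side-information (e.g., title/category
    features) for a query item and a candidate item, and scores how
    complementary the candidate is. Trained on labeled complementary pairs;
    at inference, cold items are scored purely from their side-information."""

    def __init__(self, feat_dim: int, emb_dim: int = 64):
        super().__init__()
        self.query_tower = nn.Sequential(
            nn.Linear(feat_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
        self.cand_tower = nn.Sequential(
            nn.Linear(feat_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, query_feats, cand_feats):
        q = self.query_tower(query_feats)
        c = self.cand_tower(cand_feats)
        return (q * c).sum(dim=-1)  # higher score = more complementary

# Training step on labeled pairs (binary cross-entropy over positive/negative pairs).
model = ComplementarityScorer(feat_dim=128)
loss_fn = nn.BCEWithLogitsLoss()
q, c, y = torch.randn(32, 128), torch.randn(32, 128), torch.randint(0, 2, (32,)).float()
loss = loss_fn(model(q, c), y)
loss.backward()
```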
no code implementations • 21 Oct 2022 • Koby Bibas, Oren Sar Shalom, Dietmar Jannach
A series of experiments on datasets from e-commerce and social media demonstrates that considering collaborative signals helps to significantly improve the performance of the main task of image classification by up to 9.1%.
no code implementations • 17 Jun 2022 • Koby Bibas, Meir Feder
In the context of online prediction where the min-max solution is the Normalized Maximum Likelihood (NML), it has been suggested to use NML with "luckiness": A prior-like function is applied to the hypothesis class, which reduces its effective size.
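In the standard luckiness-NML formulation (notation here is generic and may differ from the paper's exact setup), the luckiness function w(θ) weights each hypothesis before the min-max normalization:

```latex
% Luckiness NML (standard form; notation may differ from the paper's):
% a luckiness function w(\theta) weights the hypotheses before normalization.
p_{\mathrm{LNML}}(y \mid x) =
  \frac{\max_{\theta} \, w(\theta)\, p_{\theta}(y \mid x)}
       {\sum_{y'} \max_{\theta} \, w(\theta)\, p_{\theta}(y' \mid x)}
```

The log of the normalizer plays the role of the (luckiness-weighted) regret; a sharply peaked w(θ) shrinks it, i.e., reduces the effective size of the hypothesis class.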
1 code implementation • NeurIPS 2021 • Koby Bibas, Meir Feder, Tal Hassner
Furthermore, we describe how to efficiently apply the derived pNML regret to any pretrained deep NN, by employing the explicit pNML for the last layer, followed by the softmax function.
Out-of-Distribution (OOD) Detection
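A rough sketch of the last-layer recipe described above: extract penultimate-layer features for the training set and the test sample, and combine a leverage-like quantity with the softmax confidence into an OOD score. The specific score below is a schematic stand-in, not the paper's derived pNML regret.

```python
import numpy as np

def last_layer_ood_score(train_feats, test_feat, logits, lam=1e-2):
    """Schematic pNML-style OOD score on last-layer features.

    train_feats : (N, d) penultimate-layer embeddings of the training set
    test_feat   : (d,)   penultimate-layer embedding of the test sample
    logits      : (C,)   last-layer logits for the test sample

    The leverage-like term x^T (X^T X + lam I)^{-1} x measures how far the
    test feature lies from the span of the training features; it is combined
    with the softmax confidence. This is a stand-in for the paper's derived
    pNML regret, not the exact formula.
    """
    d = train_feats.shape[1]
    cov_inv = np.linalg.inv(train_feats.T @ train_feats + lam * np.eye(d))
    leverage = float(test_feat @ cov_inv @ test_feat)

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    confidence = probs.max()

    # High leverage (unfamiliar feature direction) and low confidence -> high score.
    return leverage * (1.0 - confidence)
```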
no code implementations • 4 Sep 2021 • Uriya Pesso, Koby Bibas, Meir Feder
Specifically, our defense performs targeted adversarial attacks according to different hypotheses, where each hypothesis assumes a specific label for the test sample.
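A minimal sketch of that per-hypothesis procedure, assuming a PyTorch classifier and a single-step targeted (FGSM-style) refinement; the step size and the way the per-hypothesis probabilities are combined are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def hypothesis_scores(model, x, num_classes, eps=0.01):
    """For each hypothesis 'the label is y', nudge the input x (shape (1, ...))
    toward y with a targeted FGSM-style step and record the resulting
    probability of y; the per-hypothesis probabilities are then normalized."""
    scores = []
    for y in range(num_classes):
        x_adv = x.clone().detach().requires_grad_(True)
        target = torch.tensor([y])
        loss = F.cross_entropy(model(x_adv), target)
        loss.backward()
        # Targeted step: move against the gradient of the loss toward label y.
        x_ref = (x_adv - eps * x_adv.grad.sign()).detach()
        with torch.no_grad():
            p = F.softmax(model(x_ref), dim=-1)[0, y].item()
        scores.append(p)
    scores = torch.tensor(scores)
    return scores / scores.sum()  # normalized, pNML-like probability assignment
```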
no code implementations • 14 Feb 2021 • Koby Bibas, Meir Feder
Modern machine learning models do not obey the classical trade-off between model complexity and generalization: they produce accurate predictions even with a perfect fit to the training set.
no code implementations • 10 Jan 2021 • Koby Bibas, Gili Weiss-Dicker, Dana Cohen, Noa Cahan, Hayit Greenspan
A fundamental step in recovering the 3D single-particle structure is aligning its 2D projections; this requires constructing a canonical representation with a fixed rotation angle.
no code implementations • 25 Sep 2019 • Dotan Kaufman, Koby Bibas, Eran Borenstein, Michael Chertok, Tal Hassner
To this end, we propose a novel loss that balances the compression and acceleration of a deep learning model against the loss of its generalization capabilities.
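One way such a trade-off can be expressed, purely as an illustration (the paper's actual loss may differ), is a weighted sum of a task term, a compression/acceleration proxy, and a term that penalizes drifting away from the original model's behavior:

```python
import torch
import torch.nn.functional as F

def tradeoff_loss(student_logits, teacher_logits, labels, gate_logits,
                  lam=0.1, mu=1.0):
    """Illustrative trade-off loss (not the paper's exact formulation):
    - task term: cross-entropy on the target task
    - compression term: expected fraction of channels kept (a FLOPs proxy),
      driven by per-channel gate logits
    - generalization term: distillation toward the original (uncompressed)
      model's outputs, discouraging loss of what the model already knew."""
    task = F.cross_entropy(student_logits, labels)
    keep_prob = torch.sigmoid(gate_logits).mean()          # fraction of channels kept
    distill = F.kl_div(F.log_softmax(student_logits, dim=-1),
                       F.softmax(teacher_logits, dim=-1),
                       reduction="batchmean")
    return task + lam * keep_prob + mu * distill
```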
no code implementations • 25 Sep 2019 • Uriya Pesso, Koby Bibas, Meir Feder
In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to using a learner from the given hypothesis class.
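A generic sketch of the pNML procedure just described: for each candidate label, hand the labeled test sample to the "genie" (refit the learner as if that label were true), record the probability it assigns to that label, and normalize across labels. The fit_fn / predict_proba_fn hooks below are hypothetical placeholders for whatever learner and hypothesis class are used.

```python
import numpy as np

def pnml_predict(fit_fn, predict_proba_fn, X_train, y_train, x_test, labels):
    """Generic pNML sketch.

    For each candidate label y: append (x_test, y) to the training set,
    refit the learner (the genie that knows the label), and record the
    probability it assigns to y. Normalizing these probabilities gives the
    pNML prediction; the log of the normalizer is the pNML regret."""
    genie_probs = []
    for y in labels:
        X_aug = np.vstack([X_train, x_test[None, :]])
        y_aug = np.append(y_train, y)
        model = fit_fn(X_aug, y_aug)                     # best hypothesis given label y
        genie_probs.append(predict_proba_fn(model, x_test)[y])
    genie_probs = np.asarray(genie_probs, dtype=float)
    normalizer = genie_probs.sum()
    regret = np.log(normalizer)
    return genie_probs / normalizer, regret
```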
3 code implementations • 12 May 2019 • Koby Bibas, Yaniv Fogel, Meir Feder
Linear regression is a classical paradigm in statistics.
1 code implementation • 28 Apr 2019 • Koby Bibas, Yaniv Fogel, Meir Feder
Finally, we extend the pNML to a "twice universal" solution that provides universality for model class selection and generates a learner competing with the best one from all model classes.