no code implementations • 6 Jun 2021 • Dvir Ben Or, Michael Kolomenkin, Gil Shabat
Dynamic difficulty adjustment (DDA) is the process of automatically changing a game's difficulty to optimize the user experience.
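As a hedged illustration only (not the paper's method), a minimal DDA rule might nudge a scalar difficulty parameter toward a target player success rate; the function name and parameters below are hypothetical:

```python
# Minimal sketch of a DDA rule: raise difficulty when the player succeeds
# too often, lower it when they struggle. All names here are illustrative.

def adjust_difficulty(difficulty, success_rate, target=0.5, step=0.1):
    """Return a new difficulty value nudged toward the target success rate."""
    if success_rate > target:
        return difficulty + step          # player winning a lot -> harder
    if success_rate < target:
        return max(0.0, difficulty - step)  # player struggling -> easier
    return difficulty

d = 1.0
d = adjust_difficulty(d, 0.8)  # high win rate: difficulty goes up
d = adjust_difficulty(d, 0.2)  # low win rate: difficulty comes back down
```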
no code implementations • 28 Dec 2020 • Dvir Ben Or, Michael Kolomenkin, Gil Shabat
This note presents a simple way to add a count (or quantile) constraint to a regression neural network, such that, given $n$ samples in the training set, it guarantees that the predictions for $m<n$ of the samples will be larger than the actual values (the labels).
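The abstract's mechanism is not spelled out here, but a closely related, well-known construction is the pinball (quantile) loss: over a constant predictor, its minimizer at level $\tau = m/n$ is the empirical $\tau$-quantile, so the prediction over-estimates roughly $m$ of the $n$ labels. A hedged sketch, not the paper's actual method:

```python
import numpy as np

def pinball_loss(pred, y, tau):
    """Average pinball loss at quantile level tau."""
    diff = y - pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)           # n = 1000 labels
tau = 0.9                           # target: over-predict ~900 of them
pred = np.quantile(y, tau)          # constant minimizer of the pinball loss
count_over = int(np.sum(pred > y))  # labels the prediction exceeds (~tau * n)
```

A neural network trained with this loss (instead of a constant predictor) would approximate the same quantile behavior pointwise rather than guarantee an exact count.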
no code implementations • 8 Feb 2020 • Hanan Shteingart, Eran Marom, Igor Itkin, Gil Shabat, Michael Kolomenkin, Moshe Salhov, Liran Katzir
There is a striking relationship between a three-hundred-year-old political science result, "Condorcet's jury theorem" (1785), and a modern machine learning concept, the "Strength of Weak Learnability" (1990). The former states that majorities are more likely to choose correctly when individual votes are often correct and independent; the latter describes a method for converting a weak learning algorithm into one that achieves arbitrarily high accuracy, and it underlies ensemble learning.
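The jury theorem is easy to verify empirically. The following Monte Carlo check (an illustration, not taken from the paper) gives each juror an independent probability $p > 0.5$ of voting correctly and shows that the majority vote becomes more accurate as the jury grows:

```python
import numpy as np

def majority_accuracy(n_jurors, p=0.6, trials=20000, seed=0):
    """Fraction of trials in which a strict majority of jurors votes correctly."""
    rng = np.random.default_rng(seed)
    votes = rng.random((trials, n_jurors)) < p       # True = correct vote
    return float(np.mean(votes.sum(axis=1) * 2 > n_jurors))

acc_1 = majority_accuracy(1)      # a single juror: accuracy ~ p
acc_11 = majority_accuracy(11)    # small jury: noticeably better
acc_101 = majority_accuracy(101)  # large jury: near-certain correctness
```

The same mechanism drives ensemble methods: many weak, mostly independent learners combined by (weighted) voting yield an arbitrarily strong classifier.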
no code implementations • CVPR 2013 • Michael Kolomenkin, Ilan Shimshoni, Ayellet Tal
In this paper, we propose a general framework for automatically detecting the optimal scale for each point on the surface.