1 code implementation • 25 Sep 2024 • Harsha Vardhan Simhadri, Martin Aumüller, Amir Ingber, Matthijs Douze, George Williams, Magdalen Dobson Manohar, Dmitry Baranchuk, Edo Liberty, Frank Liu, Ben Landrum, Mazin Karjikar, Laxman Dhulipala, Meng Chen, Yue Chen, Rui Ma, Kai Zhang, Yuzheng Cai, Jiayang Shi, Yizhuo Chen, Weiguo Zheng, Zihao Wan, Jie Yin, Ben Huang
The 2023 Big ANN Challenge, held at NeurIPS 2023, focused on advancing the state-of-the-art in indexing data structures and search algorithms for practical variants of Approximate Nearest Neighbor (ANN) search that reflect the growing complexity and diversity of workloads.
no code implementations • 16 Sep 2023 • Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty
Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval.
no code implementations • 25 Jan 2023 • Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty
To achieve optimal memory footprint and query latency, they rely on the near stationarity of documents and on laws governing natural languages.
no code implementations • 3 Feb 2022 • Edo Liberty
This paper provides a one-line proof of Frequent Directions (FD) for sketching streams of matrices.
1 code implementation • 29 Jun 2019 • Nikita Ivkin, Edo Liberty, Kevin Lang, Zohar Karnin, Vladimir Braverman
Approximating quantiles and distributions over streaming data has been studied for roughly two decades now.
no code implementations • 22 Jun 2019 • Nick Ryder, Zohar Karnin, Edo Liberty
In many applications the data set to be projected is given to us in advance, yet current random projection (RP) techniques do not make use of information about the data.
no code implementations • 11 Jun 2019 • Zohar Karnin, Edo Liberty
We provide general techniques for bounding the class discrepancy of machine learning problems.
1 code implementation • ICLR 2019 • Yu Bai, Yu-Xiang Wang, Edo Liberty
To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights.
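For context, weight quantization can be sketched with a generic uniform symmetric quantizer; this is an illustration of the general idea only, not the training procedure proposed in the paper:

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniform symmetric quantization: map float weights onto a grid of
    2^(num_bits-1)-1 levels per sign and back to floats.

    A generic illustration of low-precision weights, not the paper's method.
    """
    levels = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    if scale == 0:
        return w.copy()  # all-zero weights quantize to themselves
    return np.round(w / scale) * scale
```

Rounding to the nearest grid point bounds the per-weight error by half the grid step, `scale / 2`.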
2 code implementations • 17 Mar 2016 • Zohar Karnin, Kevin Lang, Edo Liberty
One of our contributions is a novel representation and modification of the widely used merge-and-reduce construction.
Data Structures and Algorithms
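The merge-and-reduce idea mentioned above can be illustrated with a single compaction step; this is a simplified sketch of the standard compactor primitive, not the paper's full construction:

```python
import random

def compact(buffer):
    """One compaction step: sort the buffer and keep every other element,
    picking the even/odd offset uniformly at random. Survivors implicitly
    double their weight, so space halves while rank error stays bounded
    in expectation.
    """
    buffer.sort()
    offset = random.randint(0, 1)
    return buffer[offset::2]
```

A full quantile sketch stacks such compactors at increasing weight levels and merges sketches by merging the buffers level by level.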
no code implementations • 8 Jan 2015 • Mina Ghashami, Edo Liberty, Jeff M. Phillips, David P. Woodruff
It performs $O(d \times \ell)$ operations per row and maintains a sketch matrix $B \in R^{\ell \times d}$ such that for any $k < \ell$, $\|A^TA - B^TB\|_2 \leq \|A - A_k\|_F^2 / (\ell-k)$ and $\|A - \pi_{B_k}(A)\|_F^2 \leq \big(1 + \frac{k}{\ell-k}\big) \|A-A_k\|_F^2$.
Data Structures and Algorithms 68W40 (Primary)
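The guarantee quoted above is the Frequent Directions bound; a minimal one-pass sketch (the shrink-after-every-row variant, chosen for brevity over the faster buffered version) might look like:

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A (n x d) into a sketch B (ell x d).

    Invariant: at the start of each iteration the last row of B is zero,
    so the new row can be placed there; shrinking by the smallest squared
    singular value zeroes it out again.
    """
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        B[-1] = row
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        # Shrink all squared singular values by the smallest one.
        s_shrunk = np.sqrt(np.maximum(s**2 - s[-1] ** 2, 0.0))
        B = s_shrunk[:, None] * Vt
    return B
```

Taking $k = 0$ in the bound above gives the simpler guarantee $\|A^TA - B^TB\|_2 \leq \|A\|_F^2 / \ell$, which is easy to check empirically.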
no code implementations • 18 Dec 2014 • Edo Liberty, Ram Sriharsha, Maxim Sviridenko
We also show that, experimentally, it is not much worse than k-means++ while operating in a strictly more constrained computational model.
no code implementations • NeurIPS 2013 • Dimitris Achlioptas, Zohar Karnin, Edo Liberty
We consider the problem of selecting non-zero entries of a matrix $A$ in order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$.
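One simple baseline for this problem samples entries with probability proportional to their squared magnitude and rescales the survivors so the sketch is unbiased; this is a generic $\ell_2$-sampling sketch for illustration, not necessarily the scheme developed in the paper:

```python
import numpy as np

def sparse_sketch(A, s, seed=None):
    """Sample s entries of A with probability proportional to A_ij^2 and
    rescale each kept entry by 1 / (s * p_ij) per draw, so E[B] = A.

    Returns a matrix B with at most s non-zero entries.
    """
    rng = np.random.default_rng(seed)
    p = (A ** 2).ravel()
    p = p / p.sum()
    idx = rng.choice(A.size, size=s, p=p)
    counts = np.bincount(idx, minlength=A.size).reshape(A.shape)
    B = np.zeros_like(A, dtype=float)
    mask = counts > 0  # sampled entries always have p_ij > 0
    P = p.reshape(A.shape)
    B[mask] = A[mask] * counts[mask] / (s * P[mask])
    return B
```

With a large sample budget $s$ the sketch concentrates around $A$ entrywise at rate roughly $1/\sqrt{s\,p_{ij}}$.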