Interestingly, in our experiments our approach even reaches a zero gap on 49 of the 50 JSP instances with more than 150 jobs on 20 machines.
We introduce a new regression framework designed to handle large-scale, complex data that lie near a low-dimensional manifold.
The set of local modes and the ridge lines estimated from a dataset are important summary characteristics of the data-generating distribution.
This paper studies the linear convergence of the subspace constrained mean shift (SCMS) algorithm, a well-known algorithm for identifying a density ridge defined by a kernel density estimator.
We introduce a density-based clustering method called skeleton clustering that can detect clusters in multivariate and even high-dimensional data with irregular shapes.
Under the (generalized) EM framework, we provide a new proof for the ascending property of density estimates and demonstrate the global convergence of directional mean shift sequences.
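A minimal sketch of a directional mean shift sequence with a von Mises-type kernel, recording the (unnormalized) density values so the ascending property can be observed; the function name and bandwidth are illustrative only:

```python
import numpy as np

def directional_mean_shift(x, data, h, iters=50):
    """Mean shift on the unit sphere with a von Mises-type kernel (sketch)."""
    dens = []
    for _ in range(iters):
        w = np.exp(data @ x / h**2)        # kernel weights exp(<x_i, x> / h^2)
        dens.append(w.sum())               # proportional to the directional KDE at x
        x = data.T @ w                     # kernel-weighted Euclidean mean...
        x = x / np.linalg.norm(x)          # ...projected back onto the sphere
    return x, np.array(dens)
```

On points concentrated around a common direction, the recorded density values are non-decreasing and the sequence converges toward the directional KDE mode.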
Directional data consist of observations distributed on a (hyper)sphere, and appear in many applied fields, such as astronomy, ecology, and environmental science.
no code implementations • 27 Jan 2020 • Tristan Cazenave, Yen-Chi Chen, Guan-Wei Chen, Shi-Yu Chen, Xian-Dong Chiu, Julien Dehos, Maria Elsa, Qucheng Gong, Hengyuan Hu, Vasil Khalidov, Cheng-Ling Li, Hsin-I Lin, Yu-Jin Lin, Xavier Martinet, Vegard Mella, Jeremy Rapin, Baptiste Roziere, Gabriel Synnaeve, Fabien Teytaud, Olivier Teytaud, Shi-Cheng Ye, Yi-Jun Ye, Shi-Jim Yen, Sergey Zagoruyko
Since DeepMind's AlphaZero, Zero learning has quickly become the state-of-the-art method for many board games.
no code implementations • 5 Nov 2019 • Brian Nord, Andrew J. Connolly, Jamie Kinney, Jeremy Kubica, Gautaum Narayan, Joshua E. G. Peek, Chad Schafer, Erik J. Tollerud, Camille Avestruz, G. Jogesh Babu, Simon Birrer, Douglas Burke, João Caldeira, Douglas A. Caldwell, Joleen K. Carlberg, Yen-Chi Chen, Chuanfei Dong, Eric D. Feigelson, V. Zach Golkhou, Vinay Kashyap, T. S. Li, Thomas Loredo, Luisa Lucie-Smith, Kaisey S. Mandel, J. R. Martínez-Galarza, Adam A. Miller, Priyamvada Natarajan, Michelle Ntampaka, Andy Ptak, David Rapetti, Lior Shamir, Aneta Siemiginowska, Brigitta M. Sipőcz, Arfon M. Smith, Nhan Tran, Ricardo Vilalta, Lucianne M. Walkowicz, John ZuHone
The field of astronomy has arrived at a turning point in terms of size and complexity of both datasets and scientific collaboration.
Pattern-mixture models provide a transparent approach for handling missing data, where the full-data distribution is factorized in a way that explicitly shows the parts that can be estimated from observed data alone, and the parts that require identifying restrictions.
Methodology • Statistics Theory
First, we review the various functional summaries in the literature and propose a unified framework for them.
Variational inference is a general approach for approximating complex density functions, such as those arising in latent variable models, popular in machine learning.
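As a toy illustration (not from the paper), one can fit a Gaussian variational approximation $q = N(\mu, \sigma^2)$ to an unnormalized one-dimensional target by gradient ascent on a Monte Carlo ELBO with the reparameterization trick; the target, step size, and sample size below are invented for the example:

```python
import numpy as np

def fit_gaussian_vi(log_p_grad, mu=0.0, log_sigma=0.0, lr=0.5, iters=500, n=1000, seed=0):
    """Fit q = N(mu, sigma^2) by gradient ascent on a Monte Carlo ELBO (sketch)."""
    eps = np.random.default_rng(seed).normal(size=n)        # fixed reparameterization noise
    for _ in range(iters):
        sigma = np.exp(log_sigma)
        x = mu + sigma * eps                                # reparameterized samples from q
        g = log_p_grad(x)                                   # d log p / dx at the samples
        mu += lr * g.mean()                                 # Monte Carlo d ELBO / d mu
        log_sigma += lr * ((g * sigma * eps).mean() + 1.0)  # + 1 comes from the entropy term
    return mu, np.exp(log_sigma)

# Unnormalized target: N(3, 2^2), so d log p / dx = -(x - 3) / 4.
mu, sigma = fit_gaussian_vi(lambda x: -(x - 3.0) / 4.0)
```

For this conjugate toy target the optimum is exact, so the fitted mean and scale land close to 3 and 2.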
A cluster tree provides a highly interpretable summary of a density function by representing the hierarchy of its high-density clusters.
Persistence diagrams are two-dimensional plots that summarize the topological features of functions and are an important part of topological data analysis.
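For intuition, a zero-dimensional persistence diagram of the superlevel sets of a function sampled on a line can be computed with a small union-find and the elder rule; this is a generic sketch (function name and conventions are mine), not the paper's method:

```python
import numpy as np

def superlevel_diagram_0d(f):
    """0-dim persistence pairs for superlevel sets of values on a line (sketch)."""
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    for i in np.argsort(-np.asarray(f)):       # add vertices from high to low
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):               # neighbors on the line graph
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # Elder rule: the component with the lower peak dies here.
                young, old = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] > f[i]:        # skip zero-persistence pairs
                    pairs.append((birth[young], f[i]))
                parent[young] = old
    pairs.append((max(f), min(f)))             # essential pair: the global maximum
    return pairs
```

For example, the values `[0, 2, 1, 3, 0]` have two local maxima: the smaller peak at height 2 dies at the saddle value 1, and the global peak at 3 gives the essential pair.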
The Morse-Smale complex of a function $f$ decomposes the sample space into cells where $f$ is increasing or decreasing.
Modal regression estimates the local modes of the conditional distribution of $Y$ given $X=x$, instead of the conditional mean as in usual regression, and can hence reveal important structure missed by standard regression methods.
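A minimal sketch of the idea via a partial (conditional) mean shift, holding $x$ fixed and shifting $y$ with a product Gaussian kernel; the function name and bandwidths are illustrative assumptions:

```python
import numpy as np

def conditional_mode(x0, y0, X, Y, hx=0.2, hy=0.3, iters=100):
    """Partial mean shift: move y toward a local mode of p(y | x = x0) (sketch)."""
    wx = np.exp(-0.5 * ((X - x0) / hx) ** 2)   # weights in x are held fixed
    y = y0
    for _ in range(iters):
        w = wx * np.exp(-0.5 * ((Y - y) / hy) ** 2)
        y = (w * Y).sum() / w.sum()            # kernel-weighted mean of the Y values
    return y
```

On bimodal data with branches at $y = \pm 1$, different starting points recover the two conditional modes, whereas the conditional mean would sit uninformatively near 0.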
Mode clustering is a nonparametric method for clustering that defines clusters using the basins of attraction of a density estimator's modes.
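The basin-of-attraction idea can be sketched with plain Gaussian-kernel mean shift: run the ascent from every point and group points whose paths reach the same mode. This is a generic illustration (names, bandwidth, and the merge tolerance are mine), not a specific implementation:

```python
import numpy as np

def mean_shift(x, data, h, iters=100):
    """Ascend the Gaussian KDE from x toward a local mode."""
    for _ in range(iters):
        w = np.exp(-0.5 * np.sum((data - x) ** 2, axis=1) / h**2)
        x = (data * w[:, None]).sum(axis=0) / w.sum()
    return x

def mode_clustering(data, h):
    """Label points by which KDE mode their mean shift path reaches (sketch)."""
    modes, labels = [], []
    for p in data:
        m = mean_shift(p, data, h)
        for j, existing in enumerate(modes):
            if np.linalg.norm(m - existing) < h / 2:   # same basin of attraction
                labels.append(j)
                break
        else:
            modes.append(m)
            labels.append(len(modes) - 1)
    return np.array(labels), np.array(modes)
```

On two well-separated blobs this recovers exactly two modes, one label per blob, without specifying the number of clusters in advance.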