no code implementations • 6 Nov 2023 • Andreas Galanis, Alkis Kalavasis, Anthimos Vardis Kandiros
For general $H$-colorings, we show that standard conditions that guarantee sampling, such as Dobrushin's condition, are insufficient for one-sample learning; on the positive side, we provide a general condition that is sufficient to guarantee linear-time learning and obtain applications for proper colorings and permissive models.
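To make the sampling setting concrete, here is a minimal sketch (not the paper's algorithm) of Glauber dynamics for proper $q$-colorings, the standard single-site chain whose mixing is governed by conditions like Dobrushin's; the graph, colour count, and step budget are illustrative assumptions.

```python
import random

def glauber_proper_coloring(adj, q, steps, seed=0):
    """Single-site Glauber dynamics over proper q-colorings of a graph.

    adj: dict vertex -> list of neighbours; q: number of colours.
    Starts from a greedy proper colouring (valid whenever q > max degree)
    and resamples one uniformly random vertex per step.
    """
    rng = random.Random(seed)
    color = {}
    for v in adj:  # greedy initial proper colouring
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(q) if c not in used)
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        blocked = {color[u] for u in adj[v]}
        color[v] = rng.choice([c for c in range(q) if c not in blocked])
    return color

# Illustrative example: a 4-cycle with q = 4 colours.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
col = glauber_proper_coloring(adj, q=4, steps=1000)
```

Every state visited by the chain is a proper colouring, since a vertex is only ever assigned a colour absent from its neighbourhood.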
no code implementations • 23 Nov 2022 • Davin Choo, Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros
We provide time- and sample-efficient algorithms for learning and testing latent-tree Ising models, i.e., Ising models that may only be observed at their leaf nodes.
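The observation model can be sketched as follows (a hypothetical toy instance, not the paper's construction): on a tree, one can sample an Ising configuration exactly by drawing the root spin uniformly and letting each child agree with its parent with probability $e^{\theta}/(e^{\theta}+e^{-\theta})$; the learner then sees only the leaf spins. The star tree and coupling values below are illustrative assumptions.

```python
import math
import random

def sample_tree_ising(children, root, theta, rng):
    """Exact sampling from a tree-structured Ising model.

    children: dict node -> list of children; theta: dict (parent, child) -> coupling.
    Root spin is uniform on {-1, +1}; each child agrees with its parent
    with probability e^t / (e^t + e^-t) for edge coupling t.
    """
    spin = {root: rng.choice([-1, 1])}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            t = theta[(u, v)]
            p_agree = math.exp(t) / (math.exp(t) + math.exp(-t))
            spin[v] = spin[u] if rng.random() < p_agree else -spin[u]
            stack.append(v)
    return spin

# Toy latent-tree instance: hidden root 0, observed leaves 1..3.
children = {0: [1, 2, 3]}
theta = {(0, 1): 2.0, (0, 2): 2.0, (0, 3): 2.0}
rng = random.Random(1)
samples = [sample_tree_ising(children, 0, theta, rng) for _ in range(2000)]
# Strong ferromagnetic couplings make observed leaves strongly correlated,
# even though the root spin itself is hidden.
corr12 = sum(s[1] * s[2] for s in samples) / len(samples)
```

The leaf-leaf correlation is the signal such algorithms work from: the hidden root is never observed, but its presence is reflected in the joint law of the leaves.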
no code implementations • 21 Nov 2022 • Yuval Dagan, Constantinos Daskalakis, Anthimos Vardis Kandiros
Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum likelihood-based methods in this setting.
no code implementations • 20 Jul 2021 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Surbhi Goel, Anthimos Vardis Kandiros
We consider a general statistical estimation problem wherein binary labels across different observations are not independent conditioned on their feature vectors, but dependent, capturing settings where, e.g., the observations are collected on a spatial domain, a temporal domain, or a social network, any of which induces dependencies.
no code implementations • 20 Apr 2020 • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros
As corollaries of our main theorem, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold.
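The simplest instance of this setup is an interaction matrix $\beta A$ with $A$ known and a single unknown scalar $\beta$. The sketch below (an illustrative toy, not the paper's estimator; graph size, true $\beta$, and grid are assumptions) draws one Ising sample via Gibbs sampling and then estimates $\beta$ by maximizing the log pseudolikelihood $\sum_i \log P(y_i \mid y_{-i})$ over a grid.

```python
import math
import random

def gibbs_ising(A, beta, sweeps, rng):
    """Gibbs sampling from an Ising model with interaction matrix beta * A."""
    n = len(A)
    y = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            field = beta * sum(A[i][j] * y[j] for j in range(n))
            p = 1.0 / (1.0 + math.exp(-2 * field))
            y[i] = 1 if rng.random() < p else -1
    return y

def pseudo_loglik(y, A, beta):
    """Log pseudolikelihood sum_i log P(y_i | y_{-i}) under matrix beta * A."""
    n = len(A)
    ll = 0.0
    for i in range(n):
        field = beta * sum(A[i][j] * y[j] for j in range(n))
        ll += math.log(1.0 / (1.0 + math.exp(-2 * y[i] * field)))
    return ll

rng = random.Random(0)
n = 30
# Known matrix A: adjacency of the n-cycle.
A = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
y = gibbs_ising(A, beta=0.3, sweeps=300, rng=rng)
# Estimate beta from the single sample y by maximizing pseudolikelihood on a grid.
grid = [b / 100 for b in range(0, 101)]
beta_hat = max(grid, key=lambda b: pseudo_loglik(y, A, b))
```

Pseudolikelihood only needs the one-site conditionals, which is what makes single-sample estimation tractable when the parameter space (here, the span of known matrices) is low-dimensional.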