no code implementations • 7 Nov 2023 • Lida H. Aleksanyan, Armen E. Allahverdyan, Vardan G. Bardakhchyan
We explain this rejection behavior via the following principle: if the responder regrets less about losing the offer than the proposer regrets not offering the best option, the offer is rejected.
no code implementations • 26 Jul 2023 • Lida Aleksanyan, Armen E. Allahverdyan
We propose an unsupervised, corpus-independent method to extract keywords from a single text.
no code implementations • 6 Jan 2023 • Vardan G. Bardakhchyan, Armen E. Allahverdyan
We study a sufficiently general regret criterion for choosing between two probabilistic lotteries.
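The paper studies a general regret criterion; as a minimal, hypothetical illustration (not the paper's criterion), the canonical expected-regret functional for choosing one lottery over another, under the extra assumption that the two lotteries are independent, can be computed directly:

```python
def expected_regret(lottery_a, lottery_b):
    """E[max(B - A, 0)]: expected regret from choosing lottery A over B,
    assuming the two lotteries are statistically independent.
    Each lottery is a list of (payoff, probability) pairs."""
    return sum(pa * pb * max(b - a, 0)
               for a, pa in lottery_a
               for b, pb in lottery_b)

# A pays 10 for sure; B pays 0 or 20 with equal chance.
A = [(10, 1.0)]
B = [(0, 0.5), (20, 0.5)]
print(expected_regret(A, B), expected_regret(B, A))  # 5.0 5.0
```

Here both choices carry the same expected regret even though B is riskier, which is the kind of symmetry a more general regret criterion can break.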
no code implementations • 13 Jan 2022 • Armen E. Allahverdyan, Andranik Khachatryan
We also show that the codebook representation is important -- switching from a naive representation to a compact one significantly improves matters for alphabets with a large number of symbols, most notably words.
1 code implementation • 9 Apr 2020 • Weibing Deng, R. Xie, S. Deng, Armen E. Allahverdyan
The differences reveal a temporal asymmetry in meaningful texts, which is confirmed by showing that texts compress much better in their natural order (i.e. along the narrative) than in the word-inverted form.
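The compression comparison behind this claim can be sketched with Python's standard `zlib` compressor. This is a toy text, not the corpora used in the paper, so no particular ordering of the two sizes is asserted here; on book-length narrative texts the paper reports that the natural order compresses better:

```python
import zlib

def compressed_size(text: str) -> int:
    """Length in bytes of the zlib-compressed UTF-8 encoding of the text."""
    return len(zlib.compress(text.encode("utf-8")))

# A toy stand-in for a narrative text.
text = ("the cat sat on the mat and then the cat ran after the mouse "
        "and the mouse hid under the mat ") * 20

# Word-inverted form: the same words read back to front.
inverted = " ".join(reversed(text.split()))

forward = compressed_size(text)
backward = compressed_size(" ".join(text.split()))  # normalized natural order
print(forward, backward, compressed_size(inverted))
```

Both forms contain exactly the same multiset of words, so any difference in compressed size reflects word order alone.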
no code implementations • 22 Sep 2018 • Rongrong Xie, Shengfeng Deng, Weibing Deng, Armen E. Allahverdyan
Within this regime the strategy is outperformed by its local (adaptive) version, which supervises pixels that do not agree with their Bayesian estimate.
no code implementations • 22 Sep 2018 • Weibing Deng, Armen E. Allahverdyan
In all studied texts, Zipf's law applies from smaller ranks in the first half than in the second, i.e. the law holds better for the first half.
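A rank-frequency fit of this kind can be sketched as follows. This is a generic least-squares estimate of the Zipf exponent from log-frequency vs log-rank, a deliberate simplification of the paper's analysis (the rank cutoff `max_rank` is an assumption):

```python
from collections import Counter
import math

def zipf_exponent(words, max_rank=50):
    """Least-squares slope of log(frequency) vs log(rank), negated,
    over the max_rank most frequent words."""
    freqs = sorted(Counter(words).values(), reverse=True)[:max_rank]
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic word list with frequencies proportional to 1/rank.
words = [f"w{r}" for r in range(1, 51) for _ in range(round(1000 / r))]
print(zipf_exponent(words))  # close to 1 for ideal Zipfian data
```

Comparing the estimate (and the rank range over which the fit is good) between the two halves of a text is the kind of diagnostic the abstract describes.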
no code implementations • 5 Oct 2015 • Weibing Deng, Armen E. Allahverdyan
We study rank-frequency relations for phonemes, the minimal units that still relate to linguistic meaning.
no code implementations • 3 Nov 2014 • Armen E. Allahverdyan, Aram Galstyan
We consider the active maximum a posteriori (MAP) inference problem for Hidden Markov Models (HMM), where, given an initial MAP estimate of the hidden sequence, we selectively label certain states in the sequence to improve the estimation accuracy of the remaining states.
no code implementations • NeurIPS 2011 • Armen E. Allahverdyan, Aram Galstyan
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with a more conventional Maximum Likelihood (ML) approach to parameter estimation in Hidden Markov Models.
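Viterbi Training re-estimates HMM parameters from the single most probable hidden path, rather than from expectations over all paths as ML (Baum-Welch) does. The decoding step at the heart of VT is the Viterbi recursion, sketched here in log-space on a standard toy two-state HMM (the numbers are illustrative, not from the paper):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden-state path via the log-space Viterbi recursion."""
    # V[t][s]: log-probability of the best path ending in state s at time t.
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state and its accumulated log-probability.
            prev, lp = max(((p, V[-2][p] + math.log(trans_p[p][s]))
                            for p in states), key=lambda t: t[1])
            V[-1][s] = lp + math.log(emit_p[s][o])
            new_path[s] = path[prev] + [s]
        path = new_path
    return path[max(states, key=lambda s: V[-1][s])]

# Toy weather HMM: hidden states emit observed activities.
states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
best_path = viterbi(["walk", "shop", "clean"], states, start, trans, emit)
print(best_path)  # ['Sunny', 'Rainy', 'Rainy']
```

VT then treats `best_path` as if it were the observed hidden sequence when updating the transition and emission counts, which is what makes its asymptotics differ from the full ML approach.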
no code implementations • 2 Dec 2013 • Greg Ver Steeg, Cristopher Moore, Aram Galstyan, Armen E. Allahverdyan
It predicts a first-order detectability transition whenever $q > 2$, while the finite-temperature cavity method shows that this is the case only when $q > 4$.