no code implementations • 30 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that vector cosine, which is generally considered among the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by an unsupervised measure that calculates the extent of the intersection among the most mutually dependent contexts of the target words.
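The vector cosine baseline mentioned above can be sketched in a few lines. This is a minimal illustration over sparse count vectors, assuming a dict-based word-vector representation for brevity; it is not the authors' implementation.

```python
import math

def cosine(u, v):
    """Vector cosine between two sparse vectors given as {context: weight} dicts."""
    # Dot product only over the contexts the two vectors share.
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Identical vectors score 1.0; vectors with no shared contexts score 0.0.
print(cosine({"eat": 2, "drink": 1}, {"eat": 2, "drink": 1}))
print(cosine({"eat": 1}, {"bark": 1}))
```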
1 code implementation • LREC 2016 • Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang
When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%, hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%.
no code implementations • LREC 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we claim that Vector Cosine, which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models, can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting that intersection according to the rank of the shared contexts in the dependency-ranked lists.
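The rank-weighted intersection measure described above can be sketched as follows. This is a hedged illustration, not the authors' code: the function name `rank_overlap`, the parameter `top_n`, and the averaged-rank weighting are assumptions chosen to show the general idea of scoring shared top-ranked contexts more highly.

```python
def rank_overlap(contexts_a, contexts_b, top_n=3):
    """Score the overlap of the top-n associated contexts of two words.

    contexts_a / contexts_b are lists of context words ranked by association
    strength, strongest first. Each shared context contributes the inverse of
    its average rank, so contexts ranked high in both lists weigh most.
    """
    # Map each top-n context to its 1-based rank.
    top_a = {c: r + 1 for r, c in enumerate(contexts_a[:top_n])}
    top_b = {c: r + 1 for r, c in enumerate(contexts_b[:top_n])}
    shared = top_a.keys() & top_b.keys()
    return sum(1.0 / ((top_a[c] + top_b[c]) / 2) for c in shared)

# Two words sharing their strongest contexts score higher than words that
# share only low-ranked contexts or none at all.
print(rank_overlap(["eat", "drink", "buy"], ["eat", "cook", "buy"]))
```

With the example lists, "eat" (rank 1 in both) contributes 1.0 and "buy" (rank 3 in both) contributes 1/3, so the score is about 1.33.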
no code implementations • 29 Mar 2016 • Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang
In this paper, we describe ROOT13, a supervised system for the classification of hypernyms, co-hyponyms and random words.
no code implementations • LREC 2012 • Hongzhi Xu, Helen Kai-yun Chen, Chu-Ren Huang, Qin Lu, Dingxu Shi, Tin-Shing Chiu
We adopt the corpus-informed approach to example sentence selections for the construction of a reference grammar.