Here we consider some well-known facts in syntax from a physics perspective,
allowing us to establish equivalences between both fields with many
consequences. Mainly, we observe that the operation MERGE, put forward by N.
Chomsky in 1995, can be interpreted as a physical information coarse-graining.
Thus, MERGE in linguistics entails information renormalization in physics,
according to different time scales. We make this point mathematically formal in
terms of language models. In this setting, MERGE amounts to a probability
tensor implementing a coarse-graining, akin to a probabilistic context-free
grammar.
The probability vectors of meaningful sentences are given by stochastic
tensor networks (TNs) built from diagonal tensors. These networks are mostly
loop-free, such as Tree Tensor Networks and Matrix Product States, and are
therefore computationally very efficient to manipulate.
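For concreteness, one simple stochastic MPS of this kind (a hidden-Markov-style chain with randomly chosen tensors, used here only as a toy assumption rather than the paper's exact construction) assigns a normalized probability to every word sequence through a chain of matrix products:

```python
# Minimal sketch, assuming a hidden-Markov-style stochastic Matrix Product
# State: the joint probability of a word sequence is a product of matrices,
# so the distribution can be evaluated in time linear in sentence length.
import itertools
import numpy as np

rng = np.random.default_rng(0)
D, V, n = 3, 4, 5                      # bond (hidden) dim., vocabulary size, length

def row_stochastic(shape):
    a = rng.random(shape)
    return a / a.sum(axis=-1, keepdims=True)

T = row_stochastic((D, V * D)).reshape(D, V, D)   # T[h, w, h'] = P(w, h' | h)
pi = row_stochastic((D,))                          # initial hidden distribution

def prob(words):
    v = pi.copy()
    for w in words:                    # contract the MPS site by site
        v = v @ T[:, w, :]
    return v.sum()

# The induced distribution over all length-n word sequences is normalized.
total = sum(prob(s) for s in itertools.product(range(V), repeat=n))
assert np.isclose(total, 1.0)
print(prob((0, 1, 2, 3, 0)))           # probability of one toy sentence
```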
We show that this implies the polynomially decaying (long-range) correlations
experimentally observed in language, and also provides arguments in favour of
certain types of neural networks for language processing. Moreover, we show
how to obtain such language
models from quantum states that can be efficiently prepared on a quantum
computer, and use this to find bounds on the perplexity of the probability
distribution of words in a sentence.
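As a reference point for what such bounds constrain, per-word perplexity can be computed from the sentence probabilities assigned by any of the above models (the numbers below are purely illustrative):

```python
# Minimal sketch of the bounded quantity: per-word perplexity, defined as the
# exponential of the per-word cross-entropy over a set of test sentences.
import numpy as np

def perplexity(sentence_probs, n):
    # sentence_probs: model probabilities assigned to length-n test sentences.
    log_probs = np.log(np.asarray(sentence_probs))
    return float(np.exp(-log_probs.mean() / n))

print(perplexity([1e-4, 5e-5, 2e-4], n=5))   # illustrative values only
```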
Implications of our results are discussed across several domains.