We present preliminary results from a quantitative analysis of color usage in selected authors' works from LitBank.
While large pre-trained language models are powerful, their predictions often lack logical consistency across test inputs.
However, this evidence is consistent with GPT-3 reasoning only about specific lexical items rather than about the more abstract conceptual categories of Levin et al.'s theory.
Current spoken dialogue systems initiate their turns only after a long period of silence (700–1000 ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow.
Detecting and recognizing targets in complex, large-scene Synthetic Aperture Radar (SAR) images is a challenging problem.
Transformer-based language model approaches to automated story generation currently provide state-of-the-art results.
Neural Cellular Automata (NCAs) have proven effective in simulating morphogenetic processes: the continuous construction of complex structures from very few starting cells.
Our normative fine-tuning technique is able to reduce non-normative text by 27–61%, depending on the dataset.