no code implementations • 17 Jan 2024 • Geetanjali Bihani, Julia Taylor Rayz
The advent of large language models (LLMs) has enabled significant performance gains in the field of natural language processing.
1 code implementation • 30 Apr 2023 • Geetanjali Bihani, Julia Taylor Rayz
Neural network-based decisions tend to be overconfident: their raw output probabilities do not align with the true probabilities of their decisions being correct.
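This miscalibration is commonly quantified by comparing confidence against accuracy; the snippet below is a minimal NumPy sketch of expected calibration error (an illustration of the general idea, not the paper's evaluation code, and the toy data is hypothetical).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence| gap."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy overconfident classifier: high confidences, lower actual accuracy
conf = np.array([0.95, 0.90, 0.92, 0.88, 0.97])
hits = np.array([1, 0, 1, 0, 1])
print(expected_calibration_error(conf, hits))
```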
1 code implementation • 5 Oct 2022 • Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
A characteristic feature of human semantic cognition is its ability to not only store and retrieve the properties of concepts observed through experience, but to also facilitate the inheritance of properties (can breathe) from superordinate concepts (animal) to their subordinates (dog) -- i.e., demonstrate property inheritance.
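One way such inheritance can be probed in language models is by comparing the score a model assigns to a property statement that should be inherited against a control statement. The sketch below is an illustrative probe only (not the paper's benchmark code), using GPT-2 sentence log-probabilities via the Hugging Face transformers library.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean per-token negative log-likelihood; rescale to a total log-prob
    return -out.loss.item() * (ids.shape[1] - 1)

# A property inherited from "animal" should score higher than an implausible one
print(sentence_logprob("A dog can breathe."))
print(sentence_logprob("A dog can photosynthesize."))
```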
1 code implementation • 13 May 2022 • Kanishka Misra, Julia Taylor Rayz, Allyson Ettinger
To what extent can experience from language contribute to our conceptual knowledge?
no code implementations • 12 Mar 2022 • Geetanjali Bihani, Julia Taylor Rayz
With data privacy becoming more of a necessity than a luxury in today's digital world, research on more robust models of privacy preservation and information security is on the rise.
no code implementations • 22 Sep 2021 • Gilchan Park, Julia Taylor Rayz, Cleveland G. Shields
The results showed evidence that the codes can be distinguished, which is considered sufficient for training human annotators.
1 code implementation • 6 May 2021 • Kanishka Misra, Allyson Ettinger, Julia Taylor Rayz
Building on research arguing for the possibility of conceptual and categorical knowledge acquisition through statistics contained in language, we evaluate predictive language models (LMs) -- informed solely by textual input -- on a prevalent phenomenon in cognitive science: typicality.
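A simple way to elicit such typicality effects from a predictive LM is to compare the probability it assigns to a category word for a typical versus an atypical exemplar. The following is a hypothetical illustration using a masked LM cloze query, not the paper's experimental setup; the exemplar and category words are chosen only for demonstration.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def category_prob(exemplar, category):
    """Probability the masked LM assigns to `category` in 'A <exemplar> is a [MASK].'"""
    ids = tok(f"A {exemplar} is a [MASK].", return_tensors="pt").input_ids
    mask_pos = (ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(ids).logits[0, mask_pos]
    return logits.softmax(-1)[tok.convert_tokens_to_ids(category)].item()

print(category_prob("robin", "bird"))    # typical exemplar
print(category_prob("penguin", "bird"))  # less typical exemplar
```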
1 code implementation • 22 Apr 2021 • Kanishka Misra, Julia Taylor Rayz
Humans often communicate by using imprecise language, suggesting that fuzzy concepts with unclear boundaries are prevalent in language use.
no code implementations • NAACL (DeeLIO) 2021 • Geetanjali Bihani, Julia Taylor Rayz
Contextual word representation models have shown massive improvements on a multitude of NLP tasks, yet their word sense disambiguation capabilities remain poorly explained.
no code implementations • 22 Apr 2021 • Geetanjali Bihani, Julia Taylor Rayz
In this work, we propose a scheme to address the ambiguity in single-intent as well as multi-intent natural language utterances by creating degree memberships over fuzzified intent classes.
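As a rough illustration of the idea of degree memberships (not the paper's actual scheme), per-intent scores can be mapped to graded values in [0, 1] so that an ambiguous utterance may partially belong to several intent classes rather than being forced into exactly one; the intent names and scores below are hypothetical.

```python
import numpy as np

def fuzzy_memberships(logits):
    """Map independent per-intent scores to membership degrees via sigmoids."""
    return 1.0 / (1.0 + np.exp(-np.asarray(logits)))

intents = ["book_flight", "check_weather", "cancel_booking"]
logits = [2.1, 1.8, -3.0]  # hypothetical scores for one ambiguous utterance
for intent, mu in zip(intents, fuzzy_memberships(logits)):
    print(f"{intent}: membership {mu:.2f}")
```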
1 code implementation • 16 Apr 2021 • Xiaonan Jing, Yi Zhang, Qingyuan Hu, Julia Taylor Rayz
Twitter can be viewed as a data source for Natural Language Processing (NLP) tasks.
1 code implementation • 16 Apr 2021 • Xiaonan Jing, Qingyuan Hu, Yi Zhang, Julia Taylor Rayz
Twitter serves as a data source for many Natural Language Processing (NLP) tasks.
1 code implementation • 8 Jan 2021 • Xiaonan Jing, Julia Taylor Rayz
We propose a hybrid Graph-of-Tweets (GoT) model that combines word- and document-level structures for modeling tweets.
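The sketch below shows, in broad strokes, what a graph combining word- and document-level nodes can look like; it is a minimal networkx illustration with made-up tweets, and the paper's actual GoT construction is more involved.

```python
import networkx as nx

tweets = ["vaccine rollout starts", "vaccine side effects reported"]

G = nx.Graph()
for i, tweet in enumerate(tweets):
    doc = f"tweet_{i}"
    G.add_node(doc, kind="document")
    words = tweet.split()
    for w in words:
        G.add_node(w, kind="word")
        G.add_edge(doc, w)                       # document-word edge
    for w1, w2 in zip(words, words[1:]):
        G.add_edge(w1, w2, kind="cooccurrence")  # word-word edge

print(G.number_of_nodes(), G.number_of_edges())
```

Shared vocabulary ("vaccine") yields word nodes connected to multiple document nodes, which is what lets word- and document-level structure interact in a single graph.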
no code implementations • 8 Jan 2021 • Yifei Hu, Xiaonan Jing, Youlim Ko, Julia Taylor Rayz
While many programs provide spelling correction functionality, such systems often do not take context into account.
no code implementations • 14 Dec 2020 • Geetanjali Bihani, Julia Taylor Rayz
Static word embeddings encode word associations and are extensively utilized in downstream NLP tasks.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Kanishka Misra, Allyson Ettinger, Julia Taylor Rayz
Models trained to estimate word probabilities in context have become ubiquitous in natural language processing.
no code implementations • 2 Apr 2018 • Shih-Feng Yang, Julia Taylor Rayz
We adopted the feature extraction used in STREAMCUBE and applied a K-means clustering approach to it.
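For the clustering step, a scikit-learn K-means call over the extracted feature vectors is enough to convey the idea; the sketch below uses random vectors as stand-ins for the STREAMCUBE-style features, so it illustrates the clustering only, not the feature extraction itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.random((100, 8))   # 100 tweets, 8 hypothetical feature dimensions

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
print(kmeans.labels_[:10])        # cluster assignment per tweet
```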