Natural Language Understanding

665 papers with code • 6 benchmarks • 71 datasets

Natural Language Understanding (NLU) is a subfield of Natural Language Processing that encompasses tasks such as text classification, natural language inference, and story comprehension. Applications built on natural language understanding range from question answering to automated reasoning.
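To make the task families above concrete, here is a minimal sketch of how two common NLU tasks are typically framed as input/label pairs. The texts and labels are invented toy examples, not drawn from any benchmark:

```python
# Text classification: map a text to a single label.
classification_example = {
    "text": "The battery lasts all day and charging is fast.",
    "label": "positive",
}

# Natural language inference: decide the relation between a premise
# and a hypothesis, conventionally one of three labels.
NLI_LABELS = ("entailment", "neutral", "contradiction")

nli_example = {
    "premise": "A man is playing a guitar on stage.",
    "hypothesis": "Someone is performing music.",
    "label": "entailment",
}

assert nli_example["label"] in NLI_LABELS
```

Story comprehension tasks such as the Story Cloze Test follow the same pattern, with a multi-sentence context and a choice among candidate endings as the label.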

Source: Find a Reasonable Ending for Stories: Does Logic Relation Help the Story Cloze Test?

Libraries

Use these libraries to find Natural Language Understanding models and implementations
See all 10 libraries.

Latest papers with no code

Large Language Models for Networking: Workflow, Advances and Challenges

no code yet • 19 Apr 2024

The networking field is characterized by its high complexity and rapid iteration, requiring extensive expertise to accomplish network tasks ranging from design and diagnosis to configuration and security.

Towards Logically Consistent Language Models via Probabilistic Reasoning

no code yet • 19 Apr 2024

Large language models (LLMs) are a promising avenue for natural language understanding and generation tasks.

SKIP: Skill-Localized Prompt Tuning for Inference Speed Boost-Up

no code yet • 18 Apr 2024

Prompt-tuning methods have shown performance comparable to parameter-efficient fine-tuning (PEFT) methods across various natural language understanding tasks.

Automating REST API Postman Test Cases Using LLM

no code yet • 16 Apr 2024

Postman test cases offer streamlined automation, collaboration, and dynamic data handling, providing a user-friendly and efficient approach to API testing compared to traditional test cases.

Binder: Hierarchical Concept Representation through Order Embedding of Binary Vectors

no code yet • 16 Apr 2024

Hyperbolic embedding improves embedding quality by exploiting the ever-expanding property of hyperbolic space, but it suffers the same fate as box embedding, since gradient-descent-style optimization is not straightforward in hyperbolic space.
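The general idea behind order embeddings of binary vectors can be sketched in a few lines. This is an illustration of the ordering principle only, with invented vectors, not the paper's Binder construction: a concept subsumes another when every bit set for the parent is also set for the child.

```python
def subsumes(parent, child):
    """True if `parent` is an ancestor of `child` in the bit-wise order:
    every bit set in the parent must also be set in the child."""
    return all(p <= c for p, c in zip(parent, child))

# Toy hierarchy: each child keeps its ancestors' bits and adds its own.
animal = (1, 0, 0, 0)
mammal = (1, 1, 0, 0)
dog    = (1, 1, 1, 0)

assert subsumes(animal, mammal)      # animal is an ancestor of mammal
assert subsumes(mammal, dog)         # mammal is an ancestor of dog
assert subsumes(animal, dog)         # the order is transitive
assert not subsumes(dog, mammal)     # the order is asymmetric
```

Because the comparison is a simple element-wise check, no continuous optimization in hyperbolic space is needed to test a hierarchical relation, which is the contrast the excerpt draws.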

Medical mT5: An Open-Source Multilingual Text-to-Text LLM for The Medical Domain

no code yet • 11 Apr 2024

While these LLMs display competitive performance on automated medical text benchmarks, they have been pre-trained and evaluated with a focus on a single language (mostly English).

LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements

no code yet • 9 Apr 2024

In particular, while some models prove virtually unaffected by knowledge conflicts in affirmative and negative contexts, when faced with more semantically involved modal and conditional environments, they often fail to separate the text from their internal knowledge.

RecGPT: Generative Personalized Prompts for Sequential Recommendation via ChatGPT Training Paradigm

no code yet • 6 Apr 2024

For the model part, we adopt Generative Pre-training Transformer (GPT) as the sequential recommendation model and design a user module to capture personalized information.

Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers

no code yet • 4 Apr 2024

The integration of Large Language Models (LLMs) into information retrieval has prompted a critical reevaluation of fairness in text-ranking models.

PURPLE: Making a Large Language Model a Better SQL Writer

no code yet • 29 Mar 2024

LLMs can learn to organize operator compositions from the input demonstrations for the given task.
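The demonstration-based setup described in the excerpt can be sketched as a few-shot prompt for text-to-SQL generation. The schema, questions, and SQL below are invented for illustration, and the prompt format is one common convention, not PURPLE's actual pipeline:

```python
# Invented few-shot demonstrations pairing questions with SQL.
DEMOS = [
    ("How many users are there?",
     "SELECT COUNT(*) FROM users;"),
    ("List names of users older than 30.",
     "SELECT name FROM users WHERE age > 30;"),
]

def build_prompt(question, demos=DEMOS):
    """Assemble a few-shot text-to-SQL prompt: demonstrations first,
    then the new question, leaving the SQL for the model to complete."""
    parts = ["Translate each question into SQL.\n"]
    for q, sql in demos:
        parts.append(f"Q: {q}\nSQL: {sql}\n")
    parts.append(f"Q: {question}\nSQL:")
    return "\n".join(parts)

prompt = build_prompt("What is the average age of users?")
```

The demonstrations expose operator compositions (aggregation, filtering) that the model can recombine for the unseen question; the prompt deliberately ends at `SQL:` so the model's continuation is the predicted query.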