832 papers with code • 2 benchmarks • 2 datasets
These leaderboards are used to track progress in text classification.
To build an interpretable neural text classifier, most prior work has focused on designing inherently interpretable models or finding faithful explanations.
To this end, previous work either makes use of auxiliary data at the parameter server to verify the received gradients (e.g., by computing the validation error rate) or leverages statistics-based methods (e.g., median and Krum) to identify and remove malicious gradients from Byzantine clients.
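As a concrete illustration of the statistics-based approach, here is a minimal sketch of coordinate-wise median aggregation (the simplest of the methods mentioned above); the function name and toy gradient values are illustrative assumptions, not from any of the cited papers:

```python
import numpy as np

def median_aggregate(gradients):
    """Coordinate-wise median over client gradients.

    gradients: list of 1-D numpy arrays, one per client.
    A Byzantine client can report an arbitrary gradient, but the
    per-coordinate median stays close to honest values as long as
    honest clients remain a majority.
    """
    stacked = np.stack(gradients)       # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)   # shape: (n_params,)

# Illustrative toy data: three honest clients report gradients near
# [1, 2]; one Byzantine client reports an extreme value.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]
agg = median_aggregate(honest + byzantine)
# agg stays near the honest gradients despite the outlier.
```

A simple mean over the same four gradients would be pulled far off by the Byzantine client, which is why median-style aggregators are used for robustness.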
Furthermore, our unified framework enables the transfer of design elements across different approaches; as a result, we are able to instantiate new parameter-efficient fine-tuning methods that tune fewer parameters than previous methods while being more effective, achieving results comparable to fine-tuning all parameters on all four tasks.
Dataset generation has recently attracted growing interest due to the superior generative capacity of large pre-trained language models (PLMs).
This paper develops the Correlation Networks (CorNet) architecture for the extreme multi-label text classification (XMTC) task, where the objective is to tag an input text sequence with the most relevant subset of labels from an extremely large label set.
Recent releases of Large Language Models (LLMs), e.g., ChatGPT, are astonishingly good at generating human-like texts, but they may compromise the authenticity of texts.