Search Results for author: Christoph Tillmann

Found 4 papers, 0 papers with code

Efficient Models for the Detection of Hate, Abuse and Profanity

No code implementations · 8 Feb 2024 · Christoph Tillmann, Aashka Trivedi, Bishwaranjan Bhattacharjee

This is unacceptable in civil discourse. The detection of Hate, Abuse and Profanity in text is a vital component of creating civil and unbiased LLMs, which is needed not only for English, but for all languages.
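The abstract describes the general task of detecting Hate, Abuse and Profanity (HAP) in text. As a purely illustrative sketch of that task — not the authors' efficient-model approach, and using a placeholder lexicon and hypothetical function names — a trivial keyword-based detector might look like:

```python
# Illustrative sketch only: a trivial lexicon-based HAP flagger.
# The lexicon below is a placeholder; the paper's actual method
# (efficient neural models) is not reproduced here.

HAP_LEXICON = {"hate", "abuse", "profanity"}  # hypothetical word list

def flag_hap(text: str) -> bool:
    """Return True if any lexicon term appears in the text."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return not HAP_LEXICON.isdisjoint(tokens)

print(flag_hap("I hate this"))       # True
print(flag_hap("A civil sentence"))  # False
```

Real HAP detectors, as the paper's title suggests, rely on learned models rather than word lists, since lexicon matching misses context and obfuscated spellings.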

Tasks: Document Classification, Named-Entity Recognition, +3
