Search Results for author: Nils Rethmeier

Found 9 papers, 4 papers with code

VendorLink: An NLP approach for Identifying & Linking Vendor Migrants & Potential Aliases on Darknet Markets

1 code implementation • 4 May 2023 • Vageesh Saxena, Nils Rethmeier, Gijs Van Dijck, Gerasimos Spanakis

The anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets.

Neighborhood Contrastive Learning for Scientific Document Representations with Citation Embeddings

1 code implementation14 Feb 2022 Malte Ostendorff, Nils Rethmeier, Isabelle Augenstein, Bela Gipp, Georg Rehm

Learning scientific document representations can be substantially improved through contrastive learning objectives, where the challenge lies in creating positive and negative training samples that encode the desired similarity semantics.

Tasks: Citation Prediction, Contrastive Learning, +3 more
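
The core point of the excerpt above, that positives and negatives must encode the desired similarity, can be made concrete with a small sketch. Assuming citation-graph neighbors serve as positives and non-neighbors as negatives (the sampling and the triplet margin form are illustrative, not necessarily the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def citation_triplet_loss(query, positive, negative, margin=1.0):
    """Pull a paper toward a citation-graph neighbor (positive) and
    push it away from a non-neighbor (negative)."""
    d_pos = F.pairwise_distance(query, positive)
    d_neg = F.pairwise_distance(query, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# toy usage with random 768-d "document embeddings"
q, pos, neg = (torch.randn(4, 768) for _ in range(3))
print(citation_triplet_loss(q, pos, neg))
```

What makes such objectives informative in practice is negative selection, e.g. papers that are close in embedding space but are not citation neighbors.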

A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives

no code implementations • 25 Feb 2021 • Nils Rethmeier, Isabelle Augenstein

Contrastive self-supervised training objectives enabled recent successes in image representation pretraining by learning to contrast input-input pairs of augmented images as either similar or dissimilar.

Tasks: Contrastive Learning, Language Modelling, +4 more
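
For intuition, the input-input contrast described in the excerpt can be sketched as an NT-Xent-style loss, where two augmented views of the same input form the similar pair and all other batch items count as dissimilar. A minimal sketch; the batch size, temperature, and the encoder producing the embeddings are assumptions:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent: the two views of each input are treated as similar,
    every other item in the batch as dissimilar."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, d) stacked views
    sim = z @ z.t() / temperature             # cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-pairs
    n = z1.size(0)
    # the positive for row i is row i+n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# toy usage: embeddings of two augmentations of the same 8 inputs
view1, view2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(view1, view2))
```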

Data-Efficient Pretraining via Contrastive Self-Supervision

no code implementations • 2 Oct 2020 • Nils Rethmeier, Isabelle Augenstein

For natural language processing 'text-to-text' tasks, the prevailing approaches rely heavily on pretraining large self-supervised models on ever-larger 'task-external' data.

Tasks: Fairness, Few-Shot Learning, +3 more

Self-supervised Contrastive Zero to Few-shot Learning from Small, Long-tailed Text data

no code implementations • 28 Sep 2020 • Nils Rethmeier, Isabelle Augenstein

We thus approach pretraining from a miniaturisation perspective, so as not to require massive external data sources and models, or learned translations from continuous input embeddings to discrete labels.

Tasks: Few-Shot Learning, Multi-Label Text Classification, +2 more
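
One way to read "translations from continuous input embeddings to discrete labels" is that a fixed discrete-label output head can be replaced by matching input embeddings against dense label embeddings, so an unseen (zero-shot) label only needs an embedding. A minimal sketch under that assumption; the cosine scoring and sigmoid readout are illustrative, not the paper's exact model:

```python
import torch
import torch.nn.functional as F

def label_match_scores(text_embs, label_embs):
    """Cosine-score each text embedding against dense label embeddings,
    so a new (zero-shot) label only needs an embedding, not a new head."""
    text_embs = F.normalize(text_embs, dim=-1)
    label_embs = F.normalize(label_embs, dim=-1)
    return text_embs @ label_embs.t()          # (num_texts, num_labels)

# toy usage: 2 text embeddings scored against 5 label-name embeddings
texts, labels = torch.randn(2, 300), torch.randn(5, 300)
probs = torch.sigmoid(label_match_scores(texts, labels))  # multi-label probs
```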

TX-Ray: Quantifying and Explaining Model-Knowledge Transfer in (Un-)Supervised NLP

2 code implementations • 2 Dec 2019 • Nils Rethmeier, Vageesh Kumar Saxena, Isabelle Augenstein

While state-of-the-art NLP explainability (XAI) methods focus on explaining per-sample decisions in supervised end tasks or probing tasks, this is insufficient to explain and quantify model knowledge transfer during (un-)supervised training.

Tasks: Explainable Artificial Intelligence (XAI), Model Compression, +1 more
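
One way to make "quantify model knowledge transfer" concrete is to compare how a unit's activation distribution shifts between two training stages. A minimal sketch assuming a Hellinger distance over binned activations of a single neuron; the binning and the per-neuron framing are simplifications, not TX-Ray's full procedure:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, in [0, 1]."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def activation_shift(acts_before, acts_after, bins=30):
    """Bin one neuron's activations from two training stages over a
    shared range and measure how far apart the distributions moved."""
    lo = min(acts_before.min(), acts_after.min())
    hi = max(acts_before.max(), acts_after.max())
    p, _ = np.histogram(acts_before, bins=bins, range=(lo, hi))
    q, _ = np.histogram(acts_after, bins=bins, range=(lo, hi))
    return hellinger(p / p.sum(), q / q.sum())

# toy usage: one neuron's activations before vs. after fine-tuning
before = np.random.randn(10_000)
after = np.random.randn(10_000) + 0.5  # pretend training shifted the neuron
print(activation_shift(before, after))
```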
