Search Results for author: Karel D'Oosterlinck

Found 12 papers, 9 papers with code

HyperDAS: Towards Automating Mechanistic Interpretability with Hypernetworks

no code implementations • 13 Mar 2025 • Jiuding Sun, Jing Huang, Sidharth Baskaran, Karel D'Oosterlinck, Christopher Potts, Michael Sklar, Atticus Geiger

Mechanistic interpretability has made great strides in identifying neural network features (e.g., directions in hidden activation space) that mediate concepts (e.g., the birth year of a person) and enable predictable manipulation.

counterfactual
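
A minimal sketch of the kind of feature-direction manipulation the abstract alludes to, assuming a concept is mediated by a single direction in activation space; the function name, dimensions, and random tensors are illustrative, not from the paper:

```python
import torch

# Illustrative only: suppose an interpretability method has found a direction
# `d` whose projection encodes a concept (e.g., a birth year). "Predictable
# manipulation" can then mean overwriting the hidden state's component along
# `d` with a chosen target value.
def patch_along_direction(hidden: torch.Tensor, d: torch.Tensor, target: float) -> torch.Tensor:
    d = d / d.norm()                      # ensure unit norm
    coeff = hidden @ d                    # current projection onto the direction
    return hidden + (target - coeff) * d  # shift only the concept component

hidden = torch.randn(768)  # toy hidden activation
d = torch.randn(768)       # toy concept direction (random here, learned in practice)
patched = patch_along_direction(hidden, d, target=2.0)
```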

Querying Databases with Function Calling

1 code implementation • 23 Jan 2025 • Connor Shorten, Charles Pierse, Thomas Benjamin Smith, Karel D'Oosterlinck, Tuana Celik, Erika Cardenas, Leonie Monigatti, Mohd Shukri Hasan, Edward Schmuhl, Daniel Williams, Aravind Kesiraju, Bob van Luijt

While Function Calling is the most common method for interfacing external tools to LLMs, its application to database querying as a tool has been underexplored.
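
As a rough illustration of the setup the abstract describes, here is a hedged sketch of exposing a database query as a callable tool. The schema follows the common JSON-Schema function-calling format; the `query_database` tool, `products` table, and column names are invented for this example:

```python
import json
import sqlite3

# Hypothetical tool schema in the common JSON-Schema function-calling format.
QUERY_TOOL = {
    "name": "query_database",
    "description": "Run a read-only filter over the products table.",
    "parameters": {
        "type": "object",
        "properties": {
            "column": {"type": "string", "enum": ["name", "price", "category"]},
            "value": {"type": "string"},
        },
        "required": ["column", "value"],
    },
}

def execute_tool_call(conn: sqlite3.Connection, arguments: str) -> list:
    """Execute a model-emitted function call against the database."""
    args = json.loads(arguments)  # the LLM returns arguments as a JSON string
    if args["column"] not in ("name", "price", "category"):
        raise ValueError("unknown column")  # never trust model output blindly
    sql = f"SELECT * FROM products WHERE {args['column']} = ?"  # column is whitelisted
    return conn.execute(sql, (args["value"],)).fetchall()
```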

Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment

1 code implementation • 12 Aug 2024 • Karel D'Oosterlinck, Winnie Xu, Chris Develder, Thomas Demeester, Amanpreet Singh, Christopher Potts, Douwe Kiela, Shikib Mehri

We study this and find that (i) preference data gives a better learning signal when the underlying responses are contrastive, and (ii) alignment objectives lead to better performance when they specify more control over the model during training.

Contrastive Learning
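
For context on objectives that "specify more control over the model": below is a sketch of a standard DPO-style preference loss, not the paper's anchored (APO) variants, which constrain the chosen/rejected terms differently. It only illustrates the contrastive structure such objectives share:

```python
import torch.nn.functional as F

# Sketch of a standard DPO-style preference loss over a (chosen, rejected)
# response pair. Inputs are summed log-probabilities of each response under the
# trained policy and a frozen reference model; the paper's anchored objectives
# differ in how they constrain the two reward terms.
def preference_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    chosen_reward = beta * (policy_chosen - ref_chosen)
    rejected_reward = beta * (policy_rejected - ref_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```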

Updating CLIP to Prefer Descriptions Over Captions

1 code implementation • 12 Jun 2024 • Amir Zur, Elisa Kreiss, Karel D'Oosterlinck, Christopher Potts, Atticus Geiger

This model correlates with the judgements of blind and low-vision people while preserving transfer capabilities and has interpretable structure that sheds light on the caption–description distinction.

parameter-efficient fine-tuning

In-Context Learning for Extreme Multi-Label Classification

2 code implementations • 22 Jan 2024 • Karel D'Oosterlinck, Omar Khattab, François Remy, Thomas Demeester, Chris Develder, Christopher Potts

Multi-label classification problems with thousands of classes are hard to solve with in-context learning alone, as language models (LMs) might lack prior knowledge about the precise classes or how to assign them, and it is generally infeasible to demonstrate every class in a prompt.

Classification • Extreme Multi-Label Classification • +3
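
A hedged sketch of one way around this, in the spirit of the abstract rather than the paper's exact pipeline: shortlist candidate labels cheaply, then let the LM choose only among the shortlist. The token-overlap scorer is a placeholder for a real retriever, and `call_lm` is a hypothetical LM client:

```python
# Stand-in sketch: shortlist candidate labels so the prompt never has to
# enumerate the full label space.
def shortlist(text: str, label_space: list[str], k: int = 10) -> list[str]:
    tokens = set(text.lower().split())

    def overlap(label: str) -> int:
        return len(tokens & set(label.lower().split()))

    return sorted(label_space, key=overlap, reverse=True)[:k]

def classify(text: str, label_space: list[str], call_lm) -> str:
    candidates = shortlist(text, label_space)
    prompt = (
        "Pick the single best label for the text.\n"
        f"Text: {text}\n"
        f"Candidates: {', '.join(candidates)}\n"
        "Label:"
    )
    return call_lm(prompt)  # hypothetical LM client
```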

CAW-coref: Conjunction-Aware Word-level Coreference Resolution

1 code implementation • 9 Oct 2023 • Karel D'Oosterlinck, Semere Kiros Bitew, Brandon Papineau, Christopher Potts, Thomas Demeester, Chris Develder

State-of-the-art coreference resolution systems depend on multiple LLM calls per document and are thus prohibitively expensive for many use cases (e.g., information extraction with large corpora).

coreference-resolution

Rigorously Assessing Natural Language Explanations of Neurons

no code implementations • 19 Sep 2023 • Jing Huang, Atticus Geiger, Karel D'Oosterlinck, Zhengxuan Wu, Christopher Potts

Natural language is an appealing medium for explaining how large language models process and store information, but evaluating the faithfulness of such explanations is challenging.

BioDEX: Large-Scale Biomedical Adverse Drug Event Extraction for Real-World Pharmacovigilance

1 code implementation • 22 May 2023 • Karel D'Oosterlinck, François Remy, Johannes Deleu, Thomas Demeester, Chris Develder, Klim Zaporojets, Aneiss Ghodsi, Simon Ellershaw, Jack Collins, Christopher Potts

We introduce BioDEX, a large-scale resource for Biomedical adverse Drug Event Extraction, rooted in the historical output of drug safety reporting in the U.S. BioDEX consists of 65k abstracts and 19k full-text biomedical papers with 256k associated document-level safety reports created by medical experts.

Event Extraction • Pharmacovigilance

Frozen Pretrained Transformers for Neural Sign Language Translation

1 code implementation • International Workshop on Automatic Translation for Signed and Spoken Languages (AT4SSL) 2021 • Mathieu De Coster, Karel D'Oosterlinck, Marija Pizurica, Paloma Rabaey, Severine Verlinden, Mieke Van Herreweghe, Joni Dambre

Our results show that pretrained language models can be used to improve sign language translation performance, and that the self-attention patterns in BERT transfer zero-shot to the encoder and decoder of sign language translation models.

Decoder • Machine Translation • +3
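
A minimal sketch of the frozen-pretrained-transformer recipe the abstract refers to, assuming Hugging Face `transformers`; the input projection, the 1024-dim video-feature size, and the output head are illustrative placeholders, not the paper's exact architecture:

```python
import torch
from transformers import BertModel

# Reuse BERT's pretrained weights, freeze them, and train only small
# task-specific layers around the frozen trunk.
bert = BertModel.from_pretrained("bert-base-uncased")
for param in bert.parameters():
    param.requires_grad = False  # keep the pretrained self-attention intact

project = torch.nn.Linear(1024, bert.config.hidden_size)  # map sign video features
head = torch.nn.Linear(bert.config.hidden_size, 32000)    # toy target vocabulary

def encode(video_features: torch.Tensor) -> torch.Tensor:
    embeds = project(video_features)                       # (batch, seq, hidden)
    states = bert(inputs_embeds=embeds).last_hidden_state  # frozen BERT trunk
    return head(states)                                    # per-position logits
```

Only `project` and `head` carry gradients here, so an optimizer would be constructed over their parameters alone.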
