Search Results for author: Łukasz Borchmann

Found 15 papers, 7 papers with code

In Case You Missed It: ARC 'Challenge' Is Not That Challenging

no code implementations • 23 Dec 2024 • Łukasz Borchmann

ARC Challenge appears more difficult than ARC Easy for modern LLMs primarily because of an evaluation setup that prevents direct comparison of answer choices, not because of inherent complexity.

ARC · Multiple-choice
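To make the claim above concrete, here is a minimal sketch of the two evaluation setups at issue; the llm.loglikelihood and llm.generate calls are hypothetical stand-ins for whatever scoring API an evaluation harness exposes.

# Two common ways to score a multiple-choice question with an LLM. The argument
# above is that ARC 'Challenge' looks hard mainly under the first, isolated setup.

def score_separately(llm, question, options):
    # Each option is scored on its own; the model never sees the alternatives,
    # so it cannot directly compare answer choices.
    return max(options, key=lambda o: llm.loglikelihood(f"Q: {question}\nA: {o}"))

def score_jointly(llm, question, options):
    # All options appear in a single prompt; the model answers with a letter,
    # which allows direct comparison between the choices.
    letters = "ABCD"
    listing = "\n".join(f"{l}. {o}" for l, o in zip(letters, options))
    reply = llm.generate(f"Q: {question}\n{listing}\nAnswer with A, B, C, or D:")
    return options[letters.index(reply.strip()[0])]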

Tackling prediction tasks in relational databases with LLMs

no code implementations • 18 Nov 2024 • Marek Wydmuch, Łukasz Borchmann, Filip Graliński

Though large language models (LLMs) have demonstrated exceptional performance across numerous problems, their application to predictive tasks in relational databases remains largely unexplored.

Prediction
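One straightforward way to hand relational data to an LLM is sketched below, assuming related rows are joined and serialized into the prompt; the schema and field names are invented for illustration.

# A minimal sketch: serialize a target row plus its related rows into a textual
# prompt so an LLM can make a prediction. Tables and columns are hypothetical.

def serialize_example(customer, orders):
    lines = [f"customer: age={customer['age']}, country={customer['country']}"]
    for o in orders:  # rows joined from a related table
        lines.append(f"order: amount={o['amount']}, returned={o['returned']}")
    lines.append("Will this customer churn? Answer yes or no:")
    return "\n".join(lines)

prompt = serialize_example(
    {"age": 42, "country": "PL"},
    [{"amount": 99.0, "returned": False}, {"amount": 15.5, "returned": True}],
)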

Can Models Help Us Create Better Models? Evaluating LLMs as Data Scientists

1 code implementation • 30 Oct 2024 • Michał Pietruszka, Łukasz Borchmann, Aleksander Jędrosz, Paweł Morawiecki

We present a benchmark for large language models designed to tackle one of the most knowledge-intensive tasks in data science: writing feature engineering code, which requires domain knowledge in addition to a deep understanding of the underlying problem and data structure.

Feature Engineering
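For a sense of what the benchmark asks models to produce, here is an illustrative feature-engineering snippet in pandas; the dataset and column names are hypothetical, not taken from the benchmark itself.

import pandas as pd

# The kind of code the benchmark targets: new columns that encode domain
# knowledge about the prediction problem. Column names are invented.
def add_features(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["income_per_dependent"] = out["income"] / (out["dependents"] + 1)
    out["signup_weekend"] = pd.to_datetime(out["signup_date"]).dt.dayofweek >= 5
    return out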

Notes on Applicability of GPT-4 to Document Understanding

no code implementations • 28 May 2024 • Łukasz Borchmann

The evaluation is followed by analyses that suggest possible contamination of textual GPT-4 models and indicate a significant performance drop on lengthy documents.

Document Understanding · Optical Character Recognition (OCR)

STable: Table Generation Framework for Encoder-Decoder Models

no code implementations • 8 Jun 2022 • Michał Pietruszka, Michał Turski, Łukasz Borchmann, Tomasz Dwojak, Gabriela Pałka, Karolina Szyndler, Dawid Jurkiewicz, Łukasz Garncarek

The output structure of database-like tables, consisting of values structured in horizontal rows and vertical columns identifiable by name, can cover a wide range of NLP tasks.

Decoder · Joint Entity and Relation Extraction · +1
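As a rough illustration of the output structure described above, the sketch below linearizes a named-column table into a sequence an encoder-decoder could emit. This is a simplification only: STable itself is not committed to a single fixed left-to-right serialization like this one.

# Toy linearization of a database-like table (named columns, value rows) into a
# token sequence for a seq2seq model. Delimiter tokens are invented.

def linearize_table(columns, rows):
    header = " | ".join(columns)
    body = " ; ".join(" | ".join(str(v) for v in row) for row in rows)
    return f"<table> {header} ; {body} </table>"

print(linearize_table(["name", "role"], [["Ada", "engineer"], ["Alan", "logician"]]))
# <table> name | role ; Ada | engineer ; Alan | logician </table>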

Going Full-TILT Boogie on Document Understanding with Text-Image-Layout Transformer

1 code implementation • 18 Feb 2021 • Rafał Powalski, Łukasz Borchmann, Dawid Jurkiewicz, Tomasz Dwojak, Michał Pietruszka, Gabriela Pałka

We address the challenging problem of Natural Language Comprehension beyond plain-text documents by introducing the TILT neural network architecture which simultaneously learns layout information, visual features, and textual semantics.

Ranked #7 on Visual Question Answering (VQA) on InfographicVQA (using extra training data)

Decoder · Document Image Classification · +2
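Below is a schematic of how the three per-token signals named in the abstract can be fused; this is a generic sketch, not the actual TILT implementation, which builds the fusion into a full encoder-decoder backbone.

import torch.nn as nn

class MultimodalTokenEmbedding(nn.Module):
    """Fuse the three per-token signals TILT-style models combine: word
    identity, 2D layout (bounding box), and visual features of the token's
    image region. A generic sketch only."""
    def __init__(self, vocab_size: int, d_model: int, visual_dim: int):
        super().__init__()
        self.word = nn.Embedding(vocab_size, d_model)
        self.layout = nn.Linear(4, d_model)           # (x0, y0, x1, y1) in [0, 1]
        self.visual = nn.Linear(visual_dim, d_model)  # e.g. pooled CNN features

    def forward(self, token_ids, bboxes, visual_feats):
        return self.word(token_ids) + self.layout(bboxes) + self.visual(visual_feats)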

From Dataset Recycling to Multi-Property Extraction and Beyond

1 code implementation • CoNLL 2020 • Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Jakub Chłędowski, Filip Graliński

This paper investigates various Transformer architectures on the WikiReading Information Extraction and Machine Reading Comprehension dataset.

Machine Reading Comprehension

Successive Halving Top-k Operator

1 code implementation • 8 Oct 2020 • Michał Pietruszka, Łukasz Borchmann, Filip Graliński

We propose a differentiable successive halving method of relaxing the top-k operator, rendering gradient-based optimization possible.
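A toy rendering of the idea follows, not the paper's exact formulation: the bracket structure below is hard (built from a detached ranking) while the survival mass is soft, so gradients flow through the pairwise sigmoids; n and k are assumed to be powers of two.

import torch

def soft_topk_successive_halving(scores: torch.Tensor, k: int, temperature: float = 0.1):
    """Differentiable relaxation of the top-k indicator via successive halving.
    Toy sketch: each round pairs the current survivors top-vs-bottom and softly
    keeps the larger of each pair. As temperature -> 0 this recovers hard top-k."""
    n = scores.numel()                      # assumed: n and k are powers of two
    keep = torch.ones_like(scores)          # soft "still in the tournament" mass
    rank = torch.argsort(scores.detach(), descending=True)  # hard bracket only
    m = n
    while m > k:
        half = m // 2
        a, b = rank[:half], rank[half:m]    # pair survivors top-vs-bottom
        win = torch.sigmoid((scores[a] - scores[b]) / temperature)
        new_keep = keep.clone()
        new_keep[a] = keep[a] * win         # soft winners keep most of their mass
        new_keep[b] = keep[b] * (1 - win)   # soft losers are (mostly) eliminated
        keep = new_keep
        m = half
    return keep                             # entries sum to roughly k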

On the Multi-Property Extraction and Beyond

no code implementations • 15 Jun 2020 • Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Filip Graliński, Jakub Chłędowski

In this paper, we investigate the Dual-source Transformer architecture on the WikiReading information extraction and machine reading comprehension dataset.

Machine Reading Comprehension
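One common reading of a dual-source decoder layer is sketched below: the decoder cross-attends to two encoder memories in turn. Layer norms are omitted for brevity, and the exact wiring in the paper may differ.

import torch.nn as nn

class DualSourceDecoderLayer(nn.Module):
    """Decoder layer attending to two encoder memories (two input sources).
    A sketch of the general dual-source idea; layer norms omitted."""
    def __init__(self, d_model: int, nhead: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.cross_a = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.cross_b = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, tgt, memory_a, memory_b):
        x = tgt + self.self_attn(tgt, tgt, tgt)[0]
        x = x + self.cross_a(x, memory_a, memory_a)[0]  # attend to source A
        x = x + self.cross_b(x, memory_b, memory_b)[0]  # attend to source B
        return x + self.ff(x)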

ApplicaAI at SemEval-2020 Task 11: On RoBERTa-CRF, Span CLS and Whether Self-Training Helps Them

no code implementations • SemEval 2020 • Dawid Jurkiewicz, Łukasz Borchmann, Izabela Kosmala, Filip Graliński

This paper presents the winning system for the Propaganda Technique Classification (TC) task and the second-placed system for the Propaganda Span Identification (SI) task.

Propaganda · Span Identification
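Here is a minimal sketch of the RoBERTa-CRF pattern named in the title, assuming the HuggingFace transformers and pytorch-crf packages; the model name, hyperparameters, and tag scheme are illustrative.

import torch.nn as nn
from transformers import AutoModel   # assumes HuggingFace transformers
from torchcrf import CRF             # assumes the pytorch-crf package

class RobertaCRFTagger(nn.Module):
    """Token tagger in the RoBERTa-CRF style: contextual embeddings feed
    per-tag emission scores, and a CRF decodes globally consistent BIO spans."""
    def __init__(self, num_tags: int, model_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.emissions = nn.Linear(self.encoder.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.emissions(hidden)
        mask = attention_mask.bool()
        if tags is not None:                        # training: negative log-likelihood
            return -self.crf(scores, tags, mask=mask, reduction="mean")
        return self.crf.decode(scores, mask=mask)   # inference: best tag paths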
