Search Results for author: Przemysław Kazienko

Found 15 papers, 7 papers with code

Into the Unknown: Self-Learning Large Language Models

1 code implementation • 14 Feb 2024 • Teddy Ferdinan, Jan Kocoń, Przemysław Kazienko

It facilitates the creation of a self-learning loop that focuses exclusively on the knowledge gap in Points in The Unknown, resulting in a reduced hallucination score.

Hallucination • Self-Learning
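The snippet above describes a loop in which the model probes for its own knowledge gaps ("Points in The Unknown") and trains only on what it cannot yet answer. Below is a minimal, purely illustrative sketch of such a loop; the helpers (`model_can_answer`, `acquire_knowledge`, `fine_tune`) are hypothetical stand-ins, not the paper's implementation.

```python
import random

# Hypothetical sketch of a self-learning loop that trains only on knowledge gaps
# ("Points in The Unknown"); the helpers below are toy stand-ins, not the paper's code.

KNOWN_FACTS = {"capital of France", "boiling point of water"}
CANDIDATE_TOPICS = ["capital of France", "capital of Bhutan",
                    "boiling point of water", "melting point of tungsten"]


def model_can_answer(topic: str, known: set) -> bool:
    """Stand-in for self-questioning: does the model already know this topic?"""
    return topic in known


def acquire_knowledge(topic: str) -> str:
    """Stand-in for retrieving external material about the missing fact."""
    return f"reference text about {topic}"


def fine_tune(known: set, topic: str) -> None:
    """Stand-in for a fine-tuning step on the newly acquired material."""
    known.add(topic)


def self_learning_loop(rounds: int = 3) -> None:
    known = set(KNOWN_FACTS)
    for step in range(rounds):
        # Probe the model with candidate topics and keep only the unknown ones.
        gaps = [t for t in CANDIDATE_TOPICS if not model_can_answer(t, known)]
        if not gaps:
            print(f"round {step}: no knowledge gaps left")
            break
        topic = random.choice(gaps)
        material = acquire_knowledge(topic)
        fine_tune(known, topic)
        print(f"round {step}: learned '{topic}' from '{material}'")


if __name__ == "__main__":
    self_learning_loop()
```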

Personalized Large Language Models

no code implementations • 14 Feb 2024 • Stanisław Woźniak, Bartłomiej Koptyra, Arkadiusz Janz, Przemysław Kazienko, Jan Kocoń

Large language models (LLMs) have significantly advanced Natural Language Processing (NLP) tasks in recent years.

Emotion Recognition • Hate Speech Detection • +1

From Generalized Laughter to Personalized Chuckles: Unleashing the Power of Data Fusion in Subjective Humor Detection

no code implementations • 18 Dec 2023 • Julita Bielaniewicz, Przemysław Kazienko

Concatenating personalized datasets, even at the cost of normalizing the annotation ranges across all datasets, results in an enormous increase in humor detection performance when combined with personalized models.

Humor Detection
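The fusion described above requires bringing annotation scales from different humor datasets onto a common range before concatenation. A minimal sketch follows, assuming simple min-max rescaling to [0, 1]; the paper's actual normalization procedure may differ, and the datasets below are toy examples.

```python
# Hedged sketch of dataset fusion for subjective humor detection: annotations from
# datasets with different rating scales are rescaled to [0, 1] and then concatenated.

def normalize(score, lo, hi):
    """Rescale a single annotation from [lo, hi] to [0, 1]."""
    return (score - lo) / (hi - lo)


# Toy datasets: (text, annotator_id, humor score), each on its own native scale.
dataset_a = [("joke 1", "ann_1", 4), ("joke 2", "ann_2", 1)]    # scale 1-5
dataset_b = [("joke 3", "ann_3", 80), ("joke 4", "ann_1", 10)]  # scale 0-100

fused = [(t, a, normalize(s, 1, 5)) for t, a, s in dataset_a] \
      + [(t, a, normalize(s, 0, 100)) for t, a, s in dataset_b]

for text, annotator, score in fused:
    print(f"{text} / {annotator}: {score:.2f}")
```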

Towards Model-Based Data Acquisition for Subjective Multi-Task NLP Problems

1 code implementation • 13 Dec 2023 • Kamil Kanclerz, Julita Bielaniewicz, Marcin Gruza, Jan Kocon, Stanisław Woźniak, Przemysław Kazienko

Data annotated by humans is a source of knowledge: it describes the peculiarities of the problem and thereby fuels the decision process of the trained model.

Self-Supervised Learning
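One common way to let a model drive data acquisition is to ask annotators for labels on the examples the model is least certain about. The sketch below illustrates that general strategy with prediction entropy over class probabilities; it is an illustration only, not the specific procedure from the paper.

```python
import math

# Illustrative (not the paper's) model-driven acquisition: rank unlabeled texts by
# prediction entropy and send the most uncertain ones to human annotators.


def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)


# Toy pool of unlabeled texts with model-predicted class probabilities.
pool = {
    "text A": [0.95, 0.03, 0.02],   # confident prediction -> low priority
    "text B": [0.40, 0.35, 0.25],   # uncertain prediction -> high priority
    "text C": [0.60, 0.30, 0.10],
}

budget = 2
to_annotate = sorted(pool, key=lambda t: entropy(pool[t]), reverse=True)[:budget]
print("send to annotators:", to_annotate)
```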

Modeling Uncertainty in Personalized Emotion Prediction with Normalizing Flows

1 code implementation • 10 Dec 2023 • Piotr Miłkowski, Konrad Karanowski, Patryk Wielopolski, Jan Kocoń, Przemysław Kazienko, Maciej Zięba

It may be solved by Personalized Natural Language Processing (PNLP), where the model exploits additional information about the reader to make more accurate predictions.

Emotion Recognition
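PNLP, as described above, conditions the prediction on who the reader is. A minimal PyTorch sketch of one simple way to do that is shown below: a learned reader embedding is concatenated with the text representation before the output head. The normalizing-flow uncertainty model from the paper is not reproduced, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch of personalized prediction: a learned reader embedding is concatenated
# with the text representation before the output head. This is one simple way to
# condition on the reader; the uncertainty modeling from the paper is omitted here.


class PersonalizedEmotionHead(nn.Module):
    def __init__(self, text_dim=768, n_readers=100, reader_dim=32, n_emotions=6):
        super().__init__()
        self.reader_embedding = nn.Embedding(n_readers, reader_dim)
        self.head = nn.Linear(text_dim + reader_dim, n_emotions)

    def forward(self, text_repr, reader_id):
        reader = self.reader_embedding(reader_id)          # (batch, reader_dim)
        fused = torch.cat([text_repr, reader], dim=-1)     # (batch, text_dim + reader_dim)
        return self.head(fused)                            # per-reader emotion scores


# Example: a dummy sentence representation (e.g., from a frozen transformer encoder).
model = PersonalizedEmotionHead()
text_repr = torch.randn(2, 768)
reader_id = torch.tensor([3, 17])
print(model(text_repr, reader_id).shape)  # torch.Size([2, 6])
```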

Scaling Representation Learning from Ubiquitous ECG with State-Space Models

1 code implementation • 26 Sep 2023 • Kleanthis Avramidis, Dominika Kunc, Bartosz Perz, Kranti Adsul, Tiantian Feng, Przemysław Kazienko, Stanisław Saganowski, Shrikanth Narayanan

We train this model in a self-supervised manner with 275,000 10s ECG recordings collected in the wild and evaluate it on a range of downstream tasks.

Representation Learning

Extracting Aspects Hierarchies using Rhetorical Structure Theory

no code implementations • 4 Sep 2019 • Łukasz Augustyniak, Tomasz Kajdanowicz, Przemysław Kazienko

We propose a novel approach to generating aspect hierarchies that proved to be consistently correct when compared with human-generated hierarchies.

Sentiment Analysis

Aspect Detection using Word and Char Embeddings with (Bi)LSTM and CRF

1 code implementation • 3 Sep 2019 • Łukasz Augustyniak, Tomasz Kajdanowicz, Przemysław Kazienko

We propose a new, accurate aspect extraction method that makes use of both word- and character-based embeddings.

Aspect Extraction • Word Embeddings
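A minimal PyTorch sketch of the word-plus-character idea described above: each word is represented by the concatenation of its word embedding and a character-level BiLSTM summary, then tagged by a word-level BiLSTM. The CRF output layer used in the paper is replaced by a plain linear layer for brevity, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

# Hedged sketch: word embeddings are concatenated with a character-level representation
# of each word (here a char-BiLSTM final state) and fed to a word-level BiLSTM tagger.
# The CRF layer from the paper is replaced with a plain linear layer for brevity.


class WordCharTagger(nn.Module):
    def __init__(self, n_words=1000, n_chars=60, n_tags=3,
                 word_dim=100, char_dim=25, char_hidden=25, hidden=128):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden,
                                 bidirectional=True, batch_first=True)
        self.tagger = nn.Linear(2 * hidden, n_tags)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq), char_ids: (batch, seq, max_word_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids).view(b * s, c, -1)
        _, (h, _) = self.char_lstm(chars)                  # h: (2, b*s, char_hidden)
        char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        words = torch.cat([self.word_emb(word_ids), char_repr], dim=-1)
        out, _ = self.word_lstm(words)
        return self.tagger(out)                            # (batch, seq, n_tags)


model = WordCharTagger()
word_ids = torch.randint(0, 1000, (2, 7))
char_ids = torch.randint(0, 60, (2, 7, 12))
print(model(word_ids, char_ids).shape)  # torch.Size([2, 7, 3])
```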

Using Machine Learning to Predict the Evolution of Physics Research

no code implementations • 29 Oct 2018 • Wenyuan Liu, Stanisław Saganowski, Przemysław Kazienko, Siew Ann Cheong

The advancement of science as outlined by Popper and Kuhn is largely qualitative, but with bibliometric data it is possible and desirable to develop a quantitative picture of scientific progress.

BIG-bench Machine Learning • Descriptive

WordNet2Vec: Corpora Agnostic Word Vectorization Method

no code implementations • 10 Jun 2016 • Roman Bartusiak, Łukasz Augustyniak, Tomasz Kajdanowicz, Przemysław Kazienko, Maciej Piasecki

Since WordNet embeds natural language in the form of a complex network, a transformation mechanism, WordNet2Vec, is proposed in the paper.

Clustering • General Classification • +3
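One plausible reading of a corpus-agnostic, network-based vectorization is to describe each word by its position in the WordNet graph, for example through its shortest-path distances to other nodes. The networkx sketch below illustrates that idea on a toy graph; whether this matches the paper's exact construction is an assumption.

```python
import networkx as nx

# Hedged sketch, NOT the paper's exact algorithm: represent each node of a small
# WordNet-like graph by the vector of its shortest-path distances to every node,
# so that the vector reflects the word's position in the whole network.

G = nx.Graph()
G.add_edges_from([
    ("animal", "dog"), ("animal", "cat"),
    ("dog", "puppy"), ("cat", "kitten"),
])

nodes = sorted(G.nodes)
lengths = dict(nx.all_pairs_shortest_path_length(G))

vectors = {n: [lengths[n].get(m, float("inf")) for m in nodes] for n in nodes}
for node, vec in vectors.items():
    print(node, vec)
```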

Learning in Unlabeled Networks - An Active Learning and Inference Approach

no code implementations • 5 Oct 2015 • Tomasz Kajdanowicz, Radosław Michalski, Katarzyna Musiał, Przemysław Kazienko

The question that arises is: "Labels of which nodes should be collected and used for learning in order to provide the best classification accuracy for the whole network?"

Active Learning • Classification • +2
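A toy sketch of one simple answer to that question: request labels for the highest-degree nodes, then infer the remaining labels by majority vote over labeled neighbours. This illustrates the problem setting only, not the selection strategy proposed in the paper.

```python
import networkx as nx

# Illustrative (not the paper's) node-selection heuristic: label the highest-degree
# nodes first, then infer the remaining labels by majority vote over labeled neighbours.

G = nx.karate_club_graph()
true_labels = {n: G.nodes[n]["club"] for n in G.nodes}

budget = 5
selected = sorted(G.nodes, key=G.degree, reverse=True)[:budget]
labeled = {n: true_labels[n] for n in selected}   # "ask" for these labels only

predicted = {}
for n in G.nodes:
    if n in labeled:
        predicted[n] = labeled[n]
        continue
    votes = [labeled[m] for m in G.neighbors(n) if m in labeled]
    predicted[n] = max(set(votes), key=votes.count) if votes else "unknown"

accuracy = sum(predicted[n] == true_labels[n] for n in G.nodes) / G.number_of_nodes()
print(f"labeled {budget} nodes, accuracy on the whole network: {accuracy:.2f}")
```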
