Search Results for author: Thomas Lukasiewicz

Found 120 papers, 52 papers with code

BECEL: Benchmark for Consistency Evaluation of Language Models

1 code implementation COLING 2022 Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz

Behavioural consistency is a critical condition for a language model (LM) to become trustworthy like humans.

Language Modelling

Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction

no code implementations EMNLP 2020 Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz

The goal of open information extraction (OIE) is to extract facts from natural language text, and to represent them as structured triples of the form <subject, predicate, object>.

Open Information Extraction, Sentence
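For readers unfamiliar with the OIE output format, a minimal illustration follows; the tuple structure is standard, but the sentence and its extractions are invented examples, not taken from the paper:

```python
# Minimal illustration of the open-information-extraction (OIE) output
# format: facts extracted from text are represented as
# <subject, predicate, object> triples. The example data is invented.
from typing import List, NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    object: str

sentence = "Marie Curie won the Nobel Prize in 1903."
extractions: List[Triple] = [
    Triple("Marie Curie", "won", "the Nobel Prize"),
    Triple("Marie Curie", "won the Nobel Prize in", "1903"),
]

for t in extractions:
    print(f"<{t.subject}, {t.predicate}, {t.object}>")
```

Note that, as in the second triple, OIE systems may emit overlapping extractions at different granularities for the same sentence.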

Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation

no code implementations 14 Oct 2024 Zehua Cheng, Di Yuan, Thomas Lukasiewicz

The combination of semi-supervised learning (SemiSL) and contrastive learning (CL) has been successful in medical image segmentation with limited annotations.

Contrastive Learning, Image Segmentation, +4

Dimension-independent learning rates for high-dimensional classification problems

no code implementations 26 Sep 2024 Andres Felipe Lerma-Pineda, Philipp Petersen, Simon Frieder, Thomas Lukasiewicz

Thereafter, we prove the existence of a neural network with bounded weights approximating a classification function.

Towards Certification of Uncertainty Calibration under Adversarial Attacks

no code implementations 22 May 2024 Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz, Philip H. S. Torr, Adel Bibi

We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.

PiShield: A PyTorch Package for Learning with Requirements

no code implementations 28 Feb 2024 Mihaela Cătălina Stoian, Alex Tatomir, Thomas Lukasiewicz, Eleonora Giunchiglia

Given the widespread application of deep learning, there is a growing need for frameworks allowing for the integration of the requirements across various domains.

Autonomous Driving, Deep Learning, +1

Exploiting T-norms for Deep Learning in Autonomous Driving

no code implementations 17 Feb 2024 Mihaela Cătălina Stoian, Eleonora Giunchiglia, Thomas Lukasiewicz

Deep learning has been at the core of the autonomous driving field development, due to the neural networks' success in finding patterns in raw data and turning them into accurate predictions.

Autonomous Driving, Deep Learning, +1

Associative Memories in the Feature Space

no code implementations 16 Feb 2024 Tommaso Salvatori, Beren Millidge, Yuhang Song, Rafal Bogacz, Thomas Lukasiewicz

This problem can be easily solved by computing similarities in an embedding space instead of the pixel space.
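The idea of retrieving a stored memory by similarity in an embedding space rather than in pixel space can be sketched as follows; the random-projection "encoder" is a stand-in for a learned network, not the paper's actual architecture:

```python
# Sketch: retrieve the stored item most similar to a query by comparing
# cosine similarities in an embedding space rather than raw pixel space.
# The random-projection "encoder" is a placeholder for a trained network.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))          # stand-in for a trained encoder
memories = rng.normal(size=(10, 64))   # 10 stored "images" (flattened)

def encode(x: np.ndarray) -> np.ndarray:
    """Project into the 16-d feature space and normalize to unit length."""
    v = W @ x
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query: np.ndarray) -> int:
    """Return the index of the stored memory closest to the query."""
    q = encode(query)
    sims = np.array([encode(m) @ q for m in memories])  # cosine similarities
    return int(np.argmax(sims))

# A mildly corrupted copy of memory 3 will typically still retrieve memory 3,
# since small pixel-level noise barely moves the embedding.
noisy = memories[3] + 0.1 * rng.normal(size=64)
print(retrieve(noisy))
```

The same query compared in raw pixel space would also work here, but the point of the embedding is that a learned feature space can be robust to corruptions that are large in pixel distance yet semantically irrelevant.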

How Realistic Is Your Synthetic Data? Constraining Deep Generative Models for Tabular Data

1 code implementation 7 Feb 2024 Mihaela Cătălina Stoian, Salijona Dyrmishi, Maxime Cordy, Thomas Lukasiewicz, Eleonora Giunchiglia

Further, we show how our CL does not necessarily need to be integrated at training time, as it can be also used as a guardrail at inference time, still producing some improvements in the overall performance of the models.

Pre-training and Diagnosing Knowledge Base Completion Models

1 code implementation 27 Jan 2024 Vid Kocijan, Myeongjun Erik Jang, Thomas Lukasiewicz

The method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases, i.e., knowledge bases where more than one copy of a real-world entity or relation may exist.

General Knowledge, Knowledge Base Completion, +3

Large Language Models for Mathematicians

no code implementations 7 Dec 2023 Simon Frieder, Julius Berner, Philipp Petersen, Thomas Lukasiewicz

Large language models (LLMs) such as ChatGPT have received immense interest for their general-purpose language understanding and, in particular, their ability to generate high-quality text or computer code.

Text Attribute Control via Closed-Loop Disentanglement

no code implementations 1 Dec 2023 Lei Sha, Thomas Lukasiewicz

In this approach, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.

Attribute, Contrastive Learning, +2

Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary

no code implementations 24 Oct 2023 Myeongjun Erik Jang, Thomas Lukasiewicz

Next, we propose an efficient parameter integration technique that updates only a few additional parameters to combine the learned interrelationship with PLMs' pre-trained knowledge.

Data Augmentation

C^2M-DoT: Cross-modal consistent multi-view medical report generation with domain transfer network

no code implementations 9 Oct 2023 Ruizhi Wang, Xiangtao Wang, Jie zhou, Thomas Lukasiewicz, Zhenghua Xu

In addition, word-level optimization based on numbers ignores the semantics of reports and medical images, and the generated reports often cannot achieve good performance.

Contrastive Learning, Medical Report Generation

AMLP: Adaptive Masking Lesion Patches for Self-supervised Medical Image Segmentation

no code implementations 8 Sep 2023 Xiangtao Wang, Ruizhi Wang, Jie zhou, Thomas Lukasiewicz, Zhenghua Xu

The proposed strategies effectively address limitations in applying masked modeling to medical images, tailored to capturing fine lesion details vital for segmentation tasks.

Image Segmentation, Medical Image Segmentation, +3

NP-SemiSeg: When Neural Processes meet Semi-Supervised Semantic Segmentation

1 code implementation 5 Aug 2023 JianFeng Wang, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Thomas Lukasiewicz

This is useful in a wide range of real-world applications where collecting pixel-wise labels is not feasible in time or cost.

Segmentation, Self-Driving Cars, +3

Minimum Description Length Clustering to Measure Meaningful Image Complexity

no code implementations 26 Jun 2023 Louis Mahon, Thomas Lukasiewicz

We conduct experiments on seven different sets of images, which show that our method assigns the most accurate scores to all images considered.

Clustering

An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models

1 code implementation 6 Jun 2023 Zhongbin Xie, Thomas Lukasiewicz

The increasingly large size of modern pretrained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases.

counterfactual, Data Augmentation, +1

KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations

no code implementations 5 Jun 2023 Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu

While recent works have been considerably improving the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research in detecting and alleviating inconsistencies among generated NLEs.

Adversarial Attack

MvCo-DoT: Multi-View Contrastive Domain Transfer Network for Medical Report Generation

no code implementations 15 Apr 2023 Ruizhi Wang, Xiangtao Wang, Zhenghua Xu, Wenting Xu, Junyang Chen, Thomas Lukasiewicz

In clinical scenarios, multiple medical images with different views are usually generated at the same time, and they have high semantic consistency.

Contrastive Learning, Deep Reinforcement Learning, +1

Machine Learning with Requirements: a Manifesto

no code implementations 7 Apr 2023 Eleonora Giunchiglia, Fergus Imrie, Mihaela van der Schaar, Thomas Lukasiewicz

In recent years, machine learning has made great advancements that have been at the root of many breakthroughs in different application domains.

Correcting Flaws in Common Disentanglement Metrics

no code implementations 5 Apr 2023 Louis Mahon, Lei Sha, Thomas Lukasiewicz

Recent years have seen growing interest in learning disentangled representations, in which distinct features, such as size or shape, are represented by distinct neurons.

Decoder, Disentanglement

Hard Regularization to Prevent Deep Online Clustering Collapse without Data Augmentation

1 code implementation 29 Mar 2023 Louis Mahon, Thomas Lukasiewicz

We propose a method that does not require data augmentation, and that, differently from existing methods, regularizes the hard assignments.

Clustering, Data Augmentation, +3

MPS-AMS: Masked Patches Selection and Adaptive Masking Strategy Based Self-Supervised Medical Image Segmentation

no code implementations 27 Feb 2023 Xiangtao Wang, Ruizhi Wang, Biao Tian, Jiaojiao Zhang, Shuo Zhang, Junyang Chen, Thomas Lukasiewicz, Zhenghua Xu

We leverage the masked patches selection strategy to choose masked patches with lesions to obtain more lesion representation information, and the adaptive masking strategy is utilized to help learn more mutual information and improve performance further.

Contrastive Learning, Image Segmentation, +4

Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns

no code implementations 11 Feb 2023 Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu

Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating progress of bias mitigation methods.

coreference-resolution, counterfactual, +1

Mathematical Capabilities of ChatGPT

2 code implementations NeurIPS 2023 Simon Frieder, Luca Pinchetti, Alexis Chevalier, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Julius Berner

We investigate the mathematical capabilities of two iterations of ChatGPT (released 9-January-2023 and 30-January-2023) and of GPT-4 by testing them on publicly available datasets, as well as hand-crafted ones, using a novel methodology.

Elementary Mathematics, Math, +2

NP-Match: Towards a New Probabilistic Model for Semi-Supervised Learning

1 code implementation 31 Jan 2023 JianFeng Wang, Xiaolin Hu, Thomas Lukasiewicz

In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match.

Classification, Semi-Supervised Image Classification

Rationalizing Predictions by Adversarial Information Calibration

no code implementations 15 Jan 2023 Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that lead the model to give its prediction on that instance.

Language Modelling, Sentiment Analysis, +2

Robust Graph Representation Learning via Predictive Coding

no code implementations 9 Dec 2022 Billy Byiringiro, Tommaso Salvatori, Thomas Lukasiewicz

Predictive coding is a message-passing framework initially developed to model information processing in the brain, and now also a topic of research in machine learning due to some interesting properties.

Graph Neural Network, Graph Representation Learning, +1

Learning to Model Multimodal Semantic Alignment for Story Visualization

no code implementations 14 Nov 2022 Bowen Li, Thomas Lukasiewicz

Story visualization aims to generate a sequence of images to narrate each sentence in a multi-sentence story, where the images should be realistic and keep global consistency across dynamic scenes and characters.

Diversity, Sentence, +1

Predictive Coding beyond Gaussian Distributions

no code implementations 7 Nov 2022 Luca Pinchetti, Tommaso Salvatori, Yordan Yordanov, Beren Millidge, Yuhang Song, Thomas Lukasiewicz

A large amount of recent research has the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation (BP).

Hybrid Reinforced Medical Report Generation with M-Linear Attention and Repetition Penalty

no code implementations 14 Oct 2022 Wenting Xu, Zhenghua Xu, Junyang Chen, Chang Qi, Thomas Lukasiewicz

In this article, we propose a hybrid reinforced medical report generation method with m-linear attention and repetition penalty mechanism (HReMRG-MR) to overcome these problems.

Medical Report Generation

Bird-Eye Transformers for Text Generation Models

1 code implementation 8 Oct 2022 Lei Sha, Yuhang Song, Yordan Yordanov, Tommaso Salvatori, Thomas Lukasiewicz

Transformers have become an indispensable module for text generation models since their great success in machine translation.

Attribute, Inductive Bias, +3

Efficient Deep Clustering of Human Activities and How to Improve Evaluation

1 code implementation Asian Conference on Machine Learning 2023 Louis Mahon, Thomas Lukasiewicz

Progress is starting to be made in the unsupervised setting, in the form of deep HAR clustering models, which can assign labels to data without having been given any labels to train on. However, there are problems with evaluating deep HAR clustering models, which makes assessing the field and devising new methods difficult.

Clustering, Deep Clustering, +2

Lightweight Long-Range Generative Adversarial Networks

no code implementations 8 Sep 2022 Bowen Li, Thomas Lukasiewicz

In this paper, we introduce novel lightweight generative adversarial networks, which can effectively capture long-range dependencies in the image generation process, and produce high-quality results with a much simpler architecture.

Image Generation

Memory-Driven Text-to-Image Generation

no code implementations 15 Aug 2022 Bowen Li, Philip H. S. Torr, Thomas Lukasiewicz

We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques.

Generative Adversarial Network, Text-to-Image Generation

Word-Level Fine-Grained Story Visualization

1 code implementation 3 Aug 2022 Bowen Li, Thomas Lukasiewicz

Story visualization aims to generate a sequence of images to narrate each sentence in a multi-sentence story with a global consistency across dynamic scenes and characters.

Sentence, Story Visualization

PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training

1 code implementation 24 Jul 2022 Zihang Xu, Zhenghua Xu, Shuo Zhang, Thomas Lukasiewicz

Unlike most existing semi-supervised learning methods, adversarial-training-based methods distinguish samples from different sources by learning the data distribution of the segmentation map, leading the segmenter to generate more accurate predictions.

Brain Tumor Segmentation, Image Segmentation, +2

A Theoretical Framework for Inference and Learning in Predictive Coding Networks

1 code implementation 21 Jul 2022 Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz

In this paper, we provide a comprehensive theoretical analysis of the properties of PCNs trained with prospective configuration.

Continual Learning

Explaining Chest X-ray Pathologies in Natural Language

1 code implementation 9 Jul 2022 Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, Thomas Lukasiewicz

Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice.

Explainable Models

NP-Match: When Neural Processes meet Semi-Supervised Learning

1 code implementation 3 Jul 2022 JianFeng Wang, Thomas Lukasiewicz, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Alexandros Neophytou

Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data.

Semi-Supervised Image Classification

Rethinking Bayesian Deep Learning Methods for Semi-Supervised Volumetric Medical Image Segmentation

1 code implementation CVPR 2022 JianFeng Wang, Thomas Lukasiewicz

Secondly, in fact, they are only partially based on Bayesian deep learning, as their overall architectures are not designed under the Bayesian framework.

Deep Learning, Image Segmentation, +3

Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence

1 code implementation Findings (NAACL) 2022 Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz

To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis.

Language Modelling, Negation

Deep Learning with Logical Constraints

no code implementations 1 May 2022 Eleonora Giunchiglia, Mihaela Catalina Stoian, Thomas Lukasiewicz

In recent years, there has been an increasing interest in exploiting logically specified background knowledge in order to obtain neural models (i) with a better performance, (ii) able to learn from less data, and/or (iii) guaranteed to be compliant with the background knowledge itself, e.g., for safety-critical applications.

Deep Learning, Survey

Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?

no code implementations 18 Feb 2022 Beren Millidge, Tommaso Salvatori, Yuhang Song, Rafal Bogacz, Thomas Lukasiewicz

The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.

Deep Learning

Learning on Arbitrary Graph Topologies via Predictive Coding

no code implementations 31 Jan 2022 Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, TianYi Bao, Rafal Bogacz, Thomas Lukasiewicz

Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network.

The Defeat of the Winograd Schema Challenge

no code implementations 7 Jan 2022 Vid Kocijan, Ernest Davis, Thomas Lukasiewicz, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011.

Imposing Hard Logical Constraints on Multi-label Classification Neural Networks

no code implementations AAAI Workshop CLeaR 2022 Eleonora Giunchiglia, Thomas Lukasiewicz

In this paper, we thus propose to enhance deep learning models by incorporating background knowledge as hard logical constraints.

Multi-Label Classification

Rationale production to support clinical decision-making

no code implementations 15 Nov 2021 Niall Taylor, Lei Sha, Dan W Joyce, Thomas Lukasiewicz, Alejo Nevado-Holgado, Andrey Kormilitzin

In this work, we apply InfoCal, the current state-of-the-art model that produces extractive rationales for its predictions, to the task of predicting hospital readmission using hospital discharge notes.

Decision Making, Feature Importance

Unifying Categorical Models by Explicit Disentanglement of the Labels' Generative Factors

no code implementations 29 Sep 2021 Luca Pinchetti, Lei Sha, Thomas Lukasiewicz

By doing so, it is possible to merge multiple datasets based on different categorical models by projecting the data points into a unified latent space.

Disentanglement, Emotion Recognition

Associative Memories via Predictive Coding

no code implementations NeurIPS 2021 Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz

We conclude by discussing the possible impact of this work in the neuroscience community, by showing that our model provides a plausible framework to study learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.

Hippocampus, Retrieval

Knowledge Base Completion Meets Transfer Learning

1 code implementation EMNLP 2021 Vid Kocijan, Thomas Lukasiewicz

The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.

Knowledge Base Completion, Relation, +1

NoiER: An Approach for Training more Reliable Fine-Tuned Downstream Task Models

no code implementations 29 Aug 2021 Myeongjun Jang, Thomas Lukasiewicz

The recent development in pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in the field of NLP.

Out of Distribution (OOD) Detection

Are Training Resources Insufficient? Predict First Then Explain!

no code implementations 29 Aug 2021 Myeongjun Jang, Thomas Lukasiewicz

The most predominant form of these models is the explain-then-predict (EtP) structure, which first generates explanations and uses them for making decisions.

Decision Making, Explanation Generation

Accurate, yet inconsistent? Consistency Analysis on Language Understanding Models

no code implementations 15 Aug 2021 Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz

Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model.

Paraphrase Identification

Selective Pseudo-label Clustering

1 code implementation 22 Jul 2021 Louis Mahon, Thomas Lukasiewicz

The most accurate existing approaches combine the training of the DNN with the clustering objective, so that information from the clustering process can be used to update the DNN to produce better features for clustering.

Clustering, Image Clustering, +1

Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations

no code implementations 25 Jun 2021 Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley

Our framework improves over previous methods by: (i) reaching SOTA task performance while also providing explanations, (ii) providing two types of explanations, while existing models usually provide only one type, and (iii) beating by a large margin the previous SOTA in terms of quality of both types of explanations.

Decision Making

Hi-BEHRT: Hierarchical Transformer-based model for accurate prediction of clinical events using multimodal longitudinal electronic health records

no code implementations 21 Jun 2021 Yikuan Li, Mohammad Mamouei, Gholamreza Salimi-Khorshidi, Shishir Rao, Abdelaali Hassaine, Dexter Canoy, Thomas Lukasiewicz, Kazem Rahimi

Capturing the whole history of medical encounters is expected to lead to more accurate predictions, but the inclusion of records collected for decades and from multiple resources can inevitably exceed the receptive field of the existing deep learning architectures.

RSG: A Simple but Effective Module for Learning Imbalanced Datasets

1 code implementation CVPR 2021 JianFeng Wang, Thomas Lukasiewicz, Xiaolin Hu, Jianfei Cai, Zhenghua Xu

Imbalanced datasets widely exist in practice and are a great challenge for training deep neural models with a good generalization on infrequent classes.

Long-tail Learning

e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks

2 code implementations ICCV 2021 Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz

e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks.

Language Modelling, Text Generation

Multi-Label Classification Neural Networks with Hard Logical Constraints

1 code implementation 24 Mar 2021 Eleonora Giunchiglia, Thomas Lukasiewicz

Multi-label classification (MC) is a standard machine learning problem in which a data point can be associated with a set of classes.

Classification, General Classification, +1

Reverse Differentiation via Predictive Coding

no code implementations 8 Mar 2021 Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz, Zhenghua Xu

Recent works prove that these methods can approximate BP up to a certain margin on multilayer perceptrons (MLPs), and asymptotically on any other complex model, and that zero-divergence inference learning (Z-IL), a variant of PC, is able to exactly implement BP on MLPs.

Risk factor identification for incident heart failure using neural network distillation and variable selection

no code implementations 17 Feb 2021 Yikuan Li, Shishir Rao, Mohammad Mamouei, Gholamreza Salimi-Khorshidi, Dexter Canoy, Abdelaali Hassaine, Thomas Lukasiewicz, Kazem Rahimi

In this study, we propose two methods, namely, model distillation and variable selection, to untangle hidden patterns learned by an established deep learning model (BEHRT) for risk association identification.

Decision Making, Variable Selection

Multi-type Disentanglement without Adversarial Training

no code implementations 16 Dec 2020 Lei Sha, Thomas Lukasiewicz

After the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence.

Disentanglement, Interpretable Machine Learning, +3

Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration

no code implementations 16 Dec 2020 Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz

We use an adversarial-based technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.

Language Modelling, Sentiment Analysis

Can the Brain Do Backpropagation? --- Exact Implementation of Backpropagation in Predictive Coding Networks

no code implementations NeurIPS 2020 Yuhang Song, Thomas Lukasiewicz, Zhenghua Xu, Rafal Bogacz

However, there are several gaps between BP and learning in biologically plausible neuronal networks of the brain (learning in the brain, or simply BL, for short), in particular, (1) it has been unclear to date, if BP can be implemented exactly via BL, (2) there is a lack of local plasticity in BP, i.e., weight updates require information that is not locally available, while BL utilizes only locally available information, and (3) there is a lack of autonomy in BP, i.e., some external control over the neural network is required (e.g., switching between prediction and learning stages requires changes to dynamics and synaptic plasticity rules), while BL works fully autonomously.

Reinforced Medical Report Generation with X-Linear Attention and Repetition Penalty

no code implementations 16 Nov 2020 Wenting Xu, Chang Qi, Zhenghua Xu, Thomas Lukasiewicz

To reduce doctors' workload, deep-learning-based automatic medical report generation has recently attracted more and more research efforts, where attention mechanisms and reinforcement learning are integrated with the classic encoder-decoder architecture to enhance the performance of deep models.

Decoder, Medical Report Generation

Efficient Medical Image Segmentation with Intermediate Supervision Mechanism

no code implementations 15 Nov 2020 Di Yuan, Junyang Chen, Zhenghua Xu, Thomas Lukasiewicz, Zhigang Fu, Guizhi Xu

However, U-Net is designed mainly for segmentation, its extracted features target segmentation location information, and its input and output differ.

Decoder, Image Segmentation, +3

SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images

no code implementations 15 Nov 2020 Chang Qi, Junyang Chen, Guizhi Xu, Zhenghua Xu, Thomas Lukasiewicz, Yang Liu

We first generate MRI images on limited datasets, and then train three popular classification models to get the best model for tumor classification.

Data Augmentation, General Classification, +2

The Gap on GAP: Tackling the Problem of Differing Data Distributions in Bias-Measuring Datasets

1 code implementation 3 Nov 2020 Vid Kocijan, Oana-Maria Camburu, Thomas Lukasiewicz

For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies.

coreference-resolution

Deep Learning in Computer-Aided Diagnosis and Treatment of Tumors: A Survey

no code implementations 2 Nov 2020 Dan Zhao, Guizhi Xu, Zhenghua Xu, Thomas Lukasiewicz, Minmin Xue, Zhigang Fu

Computer-Aided Diagnosis and Treatment of Tumors is a hot topic of deep learning in recent years, which constitutes a series of medical tasks, such as detection of tumor markers, the outline of tumor lesions, subtypes and stages of tumors, prediction of therapeutic effect, and drug development.

Deep Learning

Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation

1 code implementation NeurIPS 2020 Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz

To achieve this, a new word-level discriminator is proposed, which provides the generator with fine-grained training feedback at word-level, to facilitate training a lightweight generator that has a small number of parameters, but can still correctly focus on specific visual attributes of an image, and then edit them without affecting other contents that are not described in the text.

Generative Adversarial Network, Image Manipulation, +1

Coherent Hierarchical Multi-Label Classification Networks

1 code implementation NeurIPS 2020 Eleonora Giunchiglia, Thomas Lukasiewicz

Hierarchical multi-label classification (HMC) is a challenging classification task extending standard multi-label classification problems by imposing a hierarchy constraint on the classes.
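The hierarchy constraint means that whenever a subclass is predicted, its superclasses must be predicted too. A minimal way to make arbitrary scores coherent with a hierarchy is sketched below; this is an illustrative post-processing in the spirit of the paper, not its exact constraint module, and the toy hierarchy is invented:

```python
# Sketch: enforce a class hierarchy on multi-label scores by replacing each
# class's score with the max over itself and all its descendants, so a child
# can never score higher than its ancestors. Illustrative post-processing,
# not the paper's exact constraint module; the toy hierarchy is invented.
from typing import Dict, List

children: Dict[str, List[str]] = {
    "animal": ["dog", "cat"],
    "dog": [],
    "cat": [],
}

def coherent_scores(scores: Dict[str, float]) -> Dict[str, float]:
    """Return scores that respect the hierarchy: parent >= every descendant."""
    def subtree_max(c: str) -> float:
        return max([scores[c]] + [subtree_max(k) for k in children[c]])
    return {c: subtree_max(c) for c in scores}

raw = {"animal": 0.2, "dog": 0.9, "cat": 0.1}
out = coherent_scores(raw)
print(out)  # "animal" is lifted to 0.9, matching its most confident descendant
```

With this post-processing, any threshold applied to the output produces a prediction set that is closed under superclasses, which is exactly the coherence property the hierarchy constraint demands.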

Classification, General Classification, +2

The Surprising Power of Graph Neural Networks with Random Node Initialization

1 code implementation 2 Oct 2020 Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, Thomas Lukasiewicz

In this work, we analyze the expressive power of GNNs with RNI, and prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.

Representation Learning

The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

1 code implementation 23 Sep 2020 Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions.

Decision Making, Fairness

Knowledge Graph Extraction from Videos

1 code implementation 20 Jul 2020 Louis Mahon, Eleonora Giunchiglia, Bowen Li, Thomas Lukasiewicz

Nearly all existing techniques for automated video annotation (or captioning) describe videos using natural language sentences.

BoxE: A Box Embedding Model for Knowledge Base Completion

1 code implementation NeurIPS 2020 Ralph Abboud, İsmail İlkan Ceylan, Thomas Lukasiewicz, Tommaso Salvatori

Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).

Knowledge Base Completion, Knowledge Graphs, +1

A Review of Winograd Schema Challenge Datasets and Approaches

no code implementations 23 Apr 2020 Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern

The Winograd Schema Challenge is both a commonsense reasoning and natural language understanding challenge, introduced as an alternative to the Turing test.

Natural Language Understanding

e-SNLI-VE: Corrected Visual-Textual Entailment with Natural Language Explanations

3 code implementations 7 Apr 2020 Virginie Do, Oana-Maria Camburu, Zeynep Akata, Thomas Lukasiewicz

The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning.

Multimodal Reasoning, Natural Language Inference

Deep Bayesian Gaussian Processes for Uncertainty Estimation in Electronic Health Records

no code implementations 23 Mar 2020 Yikuan Li, Shishir Rao, Abdelaali Hassaine, Rema Ramakrishnan, Yajie Zhu, Dexter Canoy, Gholamreza Salimi-Khorshidi, Thomas Lukasiewicz, Kazem Rahimi

In this paper, we merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.

Decision Making, Gaussian Processes

Image-to-Image Translation with Text Guidance

no code implementations12 Feb 2020 Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz

The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks, which allows text descriptions to determine the visual attributes of synthetic images.

Image-to-Image Translation Part-Of-Speech Tagging +1

ManiGAN: Text-Guided Image Manipulation

3 code implementations12 Dec 2019 Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr

The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text.

Generative Adversarial Network Image Manipulation +1

Distributed Low Precision Training Without Mixed Precision

no code implementations18 Nov 2019 Zehua Cheng, Weiyang Wang, Yan Pan, Thomas Lukasiewicz

However, most low-precision training solutions are based on a mixed-precision strategy.

Model Compression
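
To make "low precision" concrete, the sketch below shows symmetric per-tensor int8 fake quantization of a weight tensor (a generic textbook scheme for illustration, not the training strategy proposed in the paper):

```python
# Symmetric per-tensor int8 quantization: w is approximated as scale * q,
# where q is an int8 tensor. This is the basic building block that both
# mixed-precision and fully low-precision training schemes rely on.
import numpy as np

def quantize_int8(w):
    """Map float weights onto int8 codes with a shared scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.max(np.abs(w - w_hat)))  # reconstruction error, well below one step
```

A mixed-precision strategy keeps a float32 master copy of `w` and only quantizes for the forward/backward passes; training "without mixed precision", as in the title, would avoid that float32 copy.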

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

1 code implementation ACL 2020 Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom

To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.

Decision Making Natural Language Inference

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods

2 code implementations4 Oct 2019 Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom

We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.

feature selection

Controllable Text-to-Image Generation

2 code implementations NeurIPS 2019 Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr

In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions.

Generative Adversarial Network Text-to-Image Generation

A Surprisingly Robust Trick for the Winograd Schema Challenge

no code implementations ACL 2019 Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Language Modelling Natural Language Understanding +1

A Surprisingly Robust Trick for Winograd Schema Challenge

2 code implementations15 May 2019 Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.

Common Sense Reasoning Coreference Resolution +4

Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards

1 code implementation12 May 2019 Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong Zhang, Andrzej Wojcicki, Mai Xu

Intrinsic rewards were introduced to simulate how human intelligence works; they are usually evaluated by intrinsically-motivated play, i.e., playing games without extrinsic rewards but evaluated with extrinsic rewards.

Segmentation is All You Need

no code implementations30 Apr 2019 Zehua Cheng, Yuxiang Wu, Zhenghua Xu, Thomas Lukasiewicz, Weiyang Wang

Region proposal mechanisms are essential for existing deep learning approaches to object detection in images.

Face Detection Head Detection +5

Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

1 code implementation4 Apr 2019 Ralph Abboud, Ismail Ilkan Ceylan, Thomas Lukasiewicz

Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference.
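
For context on the counting task this paper approximates with neural networks, here is a minimal Monte Carlo estimator in the classical Karp-Luby style for counting satisfying assignments of a DNF formula (a standard baseline for illustration, not the paper's neural approach; the example formula is made up):

```python
# Karp-Luby style Monte Carlo estimator for #DNF.
# A DNF formula is a list of clauses; each clause maps variable -> required value.
# Example: (x1 AND NOT x2) OR (x2 AND x3) over variables {1, 2, 3}.
import random

clauses = [{1: True, 2: False}, {2: True, 3: True}]
n_vars = 3

def karp_luby(clauses, n_vars, samples=20000, seed=0):
    rng = random.Random(seed)
    # Each clause with k literals is satisfied by 2^(n - k) assignments.
    weights = [2 ** (n_vars - len(c)) for c in clauses]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # Pick a clause proportionally to its weight, then a uniform
        # assignment satisfying it.
        i = rng.choices(range(len(clauses)), weights=weights)[0]
        assign = dict(clauses[i])
        for v in range(1, n_vars + 1):
            if v not in assign:
                assign[v] = rng.random() < 0.5
        # Count the sample only if i is the FIRST clause it satisfies,
        # which corrects for assignments covered by several clauses.
        first = next(j for j, c in enumerate(clauses)
                     if all(assign[v] == val for v, val in c.items()))
        hits += (first == i)
    return total * hits / samples

print(round(karp_luby(clauses, n_vars)))  # prints 4
```

Weighted model counting generalizes this by attaching a weight to each literal and summing weights of satisfying assignments; the estimator's structure stays the same.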

e-SNLI: Natural Language Inference with Natural Language Explanations

2 code implementations NeurIPS 2018 Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom

In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time.

Natural Language Inference Sentence

Diversity-Driven Extensible Hierarchical Reinforcement Learning

1 code implementation10 Nov 2018 Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Mai Xu

However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract while their actions are very primitive.

Diversity Hierarchical Reinforcement Learning +3

Ontology Reasoning with Deep Neural Networks

2 code implementations24 Aug 2018 Patrick Hohenecker, Thomas Lukasiewicz

This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems.

Logical Reasoning

Complexity Results for Preference Aggregation over (m)CP-nets: Pareto and Majority Voting

no code implementations26 Jun 2018 Thomas Lukasiewicz, Enrico Malizia

On the other hand, global voting over non-$\mathcal{O}$-legal CP-nets has not been carefully analyzed, even though the literature has described a theoretical comparison between global and sequential voting as promising and has repeatedly called for a precise complexity analysis of global voting.

Deep Learning for Ontology Reasoning

no code implementations29 May 2017 Patrick Hohenecker, Thomas Lukasiewicz

In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning.

Deep Learning Relational Reasoning

Top-k Query Answering in Datalog+/- Ontologies under Subjective Reports (Technical Report)

no code implementations29 Nov 2013 Thomas Lukasiewicz, Maria Vanina Martinez, Cristian Molinaro, Livia Predoiu, Gerardo I. Simari

These pieces of information from every report are then combined, along with the querying user's preferences and his/her trust in each report, to rank the query results.
