1 code implementation • COLING 2022 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz
Behavioural consistency is a critical condition for a language model (LM) to become trustworthy like humans.
no code implementations • EMNLP 2020 • Patrick Hohenecker, Frank Mtumbuka, Vid Kocijan, Thomas Lukasiewicz
The goal of open information extraction (OIE) is to extract facts from natural language text, and to represent them as structured triples of the form ⟨subject, predicate, object⟩.
no code implementations • 16 Oct 2024 • Maxime Kayser, Bayar Menzat, Cornelius Emde, Bogdan Bercean, Alex Novak, Abdala Espinosa, Bartlomiej W. Papiez, Susanne Gaube, Thomas Lukasiewicz, Oana-Maria Camburu
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
no code implementations • 14 Oct 2024 • Zehua Cheng, Di Yuan, Thomas Lukasiewicz
The combination of semi-supervised learning (SemiSL) and contrastive learning (CL) has been successful in medical image segmentation with limited annotations.
no code implementations • 26 Sep 2024 • Andres Felipe Lerma-Pineda, Philipp Petersen, Simon Frieder, Thomas Lukasiewicz
Thereafter, we prove the existence of a neural network with bounded weights approximating a classification function.
1 code implementation • 1 Jul 2024 • Luca Pinchetti, Chang Qi, Oleh Lokshyn, Gaspard Olivers, Cornelius Emde, Mufeng Tang, Amine M'Charrak, Simon Frieder, Bayar Menzat, Rafal Bogacz, Thomas Lukasiewicz, Tommaso Salvatori
In this work, we tackle the problems of efficiency and scalability for predictive coding networks in machine learning.
no code implementations • 22 May 2024 • Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz, Philip H. S. Torr, Adel Bibi
We show that attacks can significantly harm calibration, and thus propose certified calibration as worst-case bounds on calibration under adversarial perturbations.
no code implementations • 28 Feb 2024 • Mihaela Cătălina Stoian, Alex Tatomir, Thomas Lukasiewicz, Eleonora Giunchiglia
Given the widespread application of deep learning, there is a growing need for frameworks allowing for the integration of the requirements across various domains.
no code implementations • 17 Feb 2024 • Mihaela Cătălina Stoian, Eleonora Giunchiglia, Thomas Lukasiewicz
Deep learning has been at the core of the autonomous driving field development, due to the neural networks' success in finding patterns in raw data and turning them into accurate predictions.
no code implementations • 16 Feb 2024 • Tommaso Salvatori, Beren Millidge, Yuhang Song, Rafal Bogacz, Thomas Lukasiewicz
This problem can be easily solved by computing similarities in an embedding space instead of the pixel space.
1 code implementation • 7 Feb 2024 • Mihaela Cătălina Stoian, Salijona Dyrmishi, Maxime Cordy, Thomas Lukasiewicz, Eleonora Giunchiglia
Further, we show how our CL does not necessarily need to be integrated at training time, as it can be also used as a guardrail at inference time, still producing some improvements in the overall performance of the models.
1 code implementation • 27 Jan 2024 • Vid Kocijan, Myeongjun Erik Jang, Thomas Lukasiewicz
The method works for both canonicalized knowledge bases and uncanonicalized or open knowledge bases, i.e., knowledge bases where more than one copy of a real-world entity or relation may exist.
no code implementations • 7 Dec 2023 • Simon Frieder, Julius Berner, Philipp Petersen, Thomas Lukasiewicz
Large language models (LLMs) such as ChatGPT have received immense interest for their general-purpose language understanding and, in particular, their ability to generate high-quality text or computer code.
no code implementations • 1 Dec 2023 • Lei Sha, Thomas Lukasiewicz
In this approach, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.
no code implementations • 24 Oct 2023 • Myeongjun Erik Jang, Thomas Lukasiewicz
Next, we propose an efficient parameter integration technique that updates only a few additional parameters to combine the learned interrelationship with PLMs' pre-trained knowledge.
no code implementations • 9 Oct 2023 • Ruizhi Wang, Xiangtao Wang, Jie Zhou, Thomas Lukasiewicz, Zhenghua Xu
In addition, word-level optimization based on numbers ignores the semantics of reports and medical images, and the generated reports often cannot achieve good performance.
no code implementations • 8 Sep 2023 • Xiangtao Wang, Ruizhi Wang, Jie Zhou, Thomas Lukasiewicz, Zhenghua Xu
The proposed strategies effectively address limitations in applying masked modeling to medical images, tailored to capturing fine lesion details vital for segmentation tasks.
no code implementations • 15 Aug 2023 • Tommaso Salvatori, Ankur Mali, Christopher L. Buckley, Thomas Lukasiewicz, Rajesh P. N. Rao, Karl Friston, Alexander Ororbia
Artificial intelligence (AI) is rapidly becoming one of the key technologies of this century.
1 code implementation • 5 Aug 2023 • JianFeng Wang, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Thomas Lukasiewicz
This is useful in a wide range of real-world applications where collecting pixel-wise labels is not feasible in time or cost.
no code implementations • 27 Jun 2023 • Tommaso Salvatori, Luca Pinchetti, Amine M'Charrak, Beren Millidge, Thomas Lukasiewicz
Recently, there has been extensive research on the capabilities of biologically plausible algorithms.
no code implementations • 26 Jun 2023 • Louis Mahon, Thomas Lukasiewicz
We conduct experiments on seven different sets of images, which show that our method assigns the most accurate scores to all images considered.
1 code implementation • 6 Jun 2023 • Zhongbin Xie, Thomas Lukasiewicz
The increasingly large size of modern pretrained language models not only makes them inherit more human-like biases from the training corpora, but also makes it computationally expensive to mitigate such biases.
no code implementations • 5 Jun 2023 • Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu
While recent works have been considerably improving the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research in detecting and alleviating inconsistencies among generated NLEs.
1 code implementation • 2 Jun 2023 • Katherine M. Collins, Albert Q. Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller, Mateja Jamnik
There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants.
1 code implementation • 29 May 2023 • Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein
Explanations of neural models aim to reveal a model's decision-making process for its predictions.
no code implementations • 15 Apr 2023 • Ruizhi Wang, Xiangtao Wang, Zhenghua Xu, Wenting Xu, Junyang Chen, Thomas Lukasiewicz
In clinical scenarios, multiple medical images with different views are usually generated at the same time, and they have high semantic consistency.
no code implementations • 7 Apr 2023 • Eleonora Giunchiglia, Fergus Imrie, Mihaela van der Schaar, Thomas Lukasiewicz
In the recent years, machine learning has made great advancements that have been at the root of many breakthroughs in different application domains.
no code implementations • 5 Apr 2023 • Louis Mahon, Lei Sha, Thomas Lukasiewicz
Recent years have seen growing interest in learning disentangled representations, in which distinct features, such as size or shape, are represented by distinct neurons.
1 code implementation • 29 Mar 2023 • Louis Mahon, Thomas Lukasiewicz
We propose a method that does not require data augmentation, and that, differently from existing methods, regularizes the hard assignments.
Ranked #1 on Online Clustering on CIFAR-10
no code implementations • 11 Mar 2023 • Myeongjun Erik Jang, Thomas Lukasiewicz
ChatGPT has gained huge popularity since its introduction.
no code implementations • 27 Feb 2023 • Xiangtao Wang, Ruizhi Wang, Biao Tian, Jiaojiao Zhang, Shuo Zhang, Junyang Chen, Thomas Lukasiewicz, Zhenghua Xu
We leverage the masked patches selection strategy to choose masked patches with lesions to obtain more lesion representation information, and the adaptive masking strategy is utilized to help learn more mutual information and improve performance further.
no code implementations • 22 Feb 2023 • Hexiang Zhang, Zhenghua Xu, Dan Yao, Shuo Zhang, Junyang Chen, Thomas Lukasiewicz
Analysis of X-ray images is one of the main tools to diagnose breast cancer.
no code implementations • 11 Feb 2023 • Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
Bias-measuring datasets play a critical role in detecting biased behavior of language models and in evaluating progress of bias mitigation methods.
2 code implementations • NeurIPS 2023 • Simon Frieder, Luca Pinchetti, Alexis Chevalier, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Julius Berner
We investigate the mathematical capabilities of two iterations of ChatGPT (released 9-January-2023 and 30-January-2023) and of GPT-4 by testing them on publicly available datasets, as well as hand-crafted ones, using a novel methodology.
1 code implementation • 31 Jan 2023 • JianFeng Wang, Xiaolin Hu, Thomas Lukasiewicz
In this work, we adjust neural processes (NPs) to the semi-supervised image classification task, resulting in a new method named NP-Match.
no code implementations • 15 Jan 2023 • Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that leads the model to give its prediction on that instance.
no code implementations • 9 Dec 2022 • Billy Byiringiro, Tommaso Salvatori, Thomas Lukasiewicz
Predictive coding is a message-passing framework initially developed to model information processing in the brain, and now also topic of research in machine learning due to some interesting properties.
no code implementations • 16 Nov 2022 • Tommaso Salvatori, Yuhang Song, Yordan Yordanov, Beren Millidge, Zhenghua Xu, Lei Sha, Cornelius Emde, Rafal Bogacz, Thomas Lukasiewicz
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
no code implementations • 14 Nov 2022 • Bowen Li, Thomas Lukasiewicz
Story visualization aims to generate a sequence of images to narrate each sentence in a multi-sentence story, where the images should be realistic and keep global consistency across dynamic scenes and characters.
no code implementations • 7 Nov 2022 • Luca Pinchetti, Tommaso Salvatori, Yordan Yordanov, Beren Millidge, Yuhang Song, Thomas Lukasiewicz
A large amount of recent research has the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation (BP).
no code implementations • 14 Oct 2022 • Wenting Xu, Zhenghua Xu, Junyang Chen, Chang Qi, Thomas Lukasiewicz
In this article, we propose a hybrid reinforced medical report generation method with m-linear attention and repetition penalty mechanism (HReMRG-MR) to overcome these problems.
1 code implementation • 8 Oct 2022 • Lei Sha, Yuhang Song, Yordan Yordanov, Tommaso Salvatori, Thomas Lukasiewicz
Transformers have become an indispensable module for text generation models since their great success in machine translation.
1 code implementation • 4 Oct 2022 • Eleonora Giunchiglia, Mihaela Cătălina Stoian, Salman Khan, Fabio Cuzzolin, Thomas Lukasiewicz
Neural networks have proven to be very powerful at computer vision tasks.
1 code implementation • Asian Conference on Machine Learning 2023 • Louis Mahon, Thomas Lukasiewicz
Progress is starting to be made in the unsupervised setting, in the form of deep HAR clustering models, which can assign labels to data without having been given any labels to train on, but there are problems with evaluating deep HAR clustering models, which makes assessing the field and devising new methods difficult.
Ranked #1 on Human Activity Recognition on PAMAP2
no code implementations • 8 Sep 2022 • Bowen Li, Thomas Lukasiewicz
In this paper, we introduce novel lightweight generative adversarial networks, which can effectively capture long-range dependencies in the image generation process, and produce high-quality results with a much simpler architecture.
no code implementations • 15 Aug 2022 • Bowen Li, Philip H. S. Torr, Thomas Lukasiewicz
We introduce a memory-driven semi-parametric approach to text-to-image generation, which is based on both parametric and non-parametric techniques.
1 code implementation • 3 Aug 2022 • Bowen Li, Thomas Lukasiewicz
Story visualization aims to generate a sequence of images to narrate each sentence in a multi-sentence story with a global consistency across dynamic scenes and characters.
1 code implementation • 24 Jul 2022 • Zihang Xu, Zhenghua Xu, Shuo Zhang, Thomas Lukasiewicz
Unlike most existing semi-supervised learning methods, adversarial training based methods distinguish samples from different sources by learning the data distribution of the segmentation map, leading the segmenter to generate more accurate predictions.
1 code implementation • 21 Jul 2022 • Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz
In this paper, we provide a comprehensive theoretical analysis of the properties of PCNs trained with prospective configuration.
1 code implementation • 9 Jul 2022 • Maxime Kayser, Cornelius Emde, Oana-Maria Camburu, Guy Parsons, Bartlomiej Papiez, Thomas Lukasiewicz
Most deep learning algorithms lack explanations for their predictions, which limits their deployment in clinical practice.
1 code implementation • 3 Jul 2022 • JianFeng Wang, Thomas Lukasiewicz, Daniela Massiceti, Xiaolin Hu, Vladimir Pavlovic, Alexandros Neophytou
Semi-supervised learning (SSL) has been widely explored in recent years, and it is an effective way of leveraging unlabeled data to reduce the reliance on labeled data.
1 code implementation • CVPR 2022 • JianFeng Wang, Thomas Lukasiewicz
Secondly, in fact, they are only partially based on Bayesian deep learning, as their overall architectures are not designed under the Bayesian framework.
1 code implementation • 31 May 2022 • Beren Millidge, Yuhang Song, Tommaso Salvatori, Thomas Lukasiewicz, Rafal Bogacz
How the brain performs credit assignment is a fundamental unsolved problem in neuroscience.
no code implementations • 15 May 2022 • Yikuan Li, Mohammad Mamouei, Shishir Rao, Abdelaali Hassaine, Dexter Canoy, Thomas Lukasiewicz, Kazem Rahimi, Gholamreza Salimi-Khorshidi
Most machine learning (ML) models are developed for prediction only; offering no option for causal interpretation of their predictions or parameters/properties.
1 code implementation • Findings (NAACL) 2022 • Myeongjun Jang, Frank Mtumbuka, Thomas Lukasiewicz
To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis.
no code implementations • 1 May 2022 • Eleonora Giunchiglia, Mihaela Catalina Stoian, Thomas Lukasiewicz
In recent years, there has been an increasing interest in exploiting logically specified background knowledge in order to obtain neural models (i) with a better performance, (ii) able to learn from less data, and/or (iii) guaranteed to be compliant with the background knowledge itself, e.g., for safety-critical applications.
no code implementations • 18 Feb 2022 • Beren Millidge, Tommaso Salvatori, Yuhang Song, Rafal Bogacz, Thomas Lukasiewicz
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
1 code implementation • 9 Feb 2022 • Beren Millidge, Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz
A large number of neural network models of associative memory have been proposed in the literature.
no code implementations • 31 Jan 2022 • Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, TianYi Bao, Rafal Bogacz, Thomas Lukasiewicz
Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network.
no code implementations • 7 Jan 2022 • Vid Kocijan, Ernest Davis, Thomas Lukasiewicz, Gary Marcus, Leora Morgenstern
The Winograd Schema Challenge - a set of twin sentences involving pronoun reference disambiguation that seem to require the use of commonsense knowledge - was proposed by Hector Levesque in 2011.
1 code implementation • 12 Dec 2021 • Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu
A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task.
no code implementations • AAAI Workshop CLeaR 2022 • Eleonora Giunchiglia, Thomas Lukasiewicz
In this paper, we thus propose to enhance deep learning models by incorporating background knowledge as hard logical constraints.
no code implementations • 15 Nov 2021 • Niall Taylor, Lei Sha, Dan W Joyce, Thomas Lukasiewicz, Alejo Nevado-Holgado, Andrey Kormilitzin
In this work, we apply InfoCal, the current state-of-the-art model that produces extractive rationales for its predictions, to the task of predicting hospital readmission using hospital discharge notes.
no code implementations • 29 Sep 2021 • Luca Pinchetti, Lei Sha, Thomas Lukasiewicz
By doing so, it is possible to merge multiple datasets based on different categorical models by projecting the data points into a unified latent space.
no code implementations • NeurIPS 2021 • Tommaso Salvatori, Yuhang Song, Yujian Hong, Simon Frieder, Lei Sha, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz
We conclude by discussing the possible impact of this work in the neuroscience community, by showing that our model provides a plausible framework to study learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.
1 code implementation • EMNLP 2021 • Vid Kocijan, Thomas Lukasiewicz
The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.
no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz
The recent development in pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in the field of NLP.
no code implementations • 29 Aug 2021 • Myeongjun Jang, Thomas Lukasiewicz
The most predominant form of these models is the explain-then-predict (EtP) structure, which first generates explanations and uses them for making decisions.
no code implementations • 15 Aug 2021 • Myeongjun Jang, Deuk Sin Kwon, Thomas Lukasiewicz
Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model.
1 code implementation • 22 Jul 2021 • Louis Mahon, Thomas Lukasiewicz
The most accurate existing approaches combine the training of the DNN with the clustering objective, so that information from the clustering process can be used to update the DNN to produce better features for clustering.
Ranked #1 on Image Clustering on USPS
no code implementations • 25 Jun 2021 • Bodhisattwa Prasad Majumder, Oana-Maria Camburu, Thomas Lukasiewicz, Julian McAuley
Our framework improves over previous methods by: (i) reaching SOTA task performance while also providing explanations, (ii) providing two types of explanations, while existing models usually provide only one type, and (iii) beating by a large margin the previous SOTA in terms of quality of both types of explanations.
no code implementations • 21 Jun 2021 • Yikuan Li, Mohammad Mamouei, Gholamreza Salimi-Khorshidi, Shishir Rao, Abdelaali Hassaine, Dexter Canoy, Thomas Lukasiewicz, Kazem Rahimi
Capturing the whole history of medical encounters is expected to lead to more accurate predictions, but the inclusion of records collected for decades and from multiple resources can inevitably exceed the receptive field of the existing deep learning architectures.
1 code implementation • CVPR 2021 • JianFeng Wang, Thomas Lukasiewicz, Xiaolin Hu, Jianfei Cai, Zhenghua Xu
Imbalanced datasets widely exist in practice and are a great challenge for training deep neural models with a good generalization on infrequent classes.
Ranked #19 on Long-tail Learning on Places-LT
1 code implementation • Findings (ACL) 2021 • Lei Sha, Patrick Hohenecker, Thomas Lukasiewicz
Experimental results on the test set show that our proposed method is a good fit for this novel NLP task.
2 code implementations • ICCV 2021 • Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, Thomas Lukasiewicz
e-ViL is a benchmark for explainable vision-language tasks that establishes a unified evaluation framework and provides the first comprehensive comparison of existing approaches that generate NLEs for VL tasks.
1 code implementation • 24 Mar 2021 • Eleonora Giunchiglia, Thomas Lukasiewicz
Multi-label classification (MC) is a standard machine learning problem in which a data point can be associated with a set of classes.
no code implementations • 8 Mar 2021 • Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz, Zhenghua Xu
Recent works prove that these methods can approximate BP up to a certain margin on multilayer perceptrons (MLPs), and asymptotically on any other complex model, and that zero-divergence inference learning (Z-IL), a variant of PC, is able to exactly implement BP on MLPs.
no code implementations • 5 Mar 2021 • Tommaso Salvatori, Yuhang Song, Thomas Lukasiewicz, Rafal Bogacz, Zhenghua Xu
Predictive coding networks (PCNs) are an influential model for information processing in the brain.
no code implementations • 17 Feb 2021 • Yikuan Li, Shishir Rao, Mohammad Mamouei, Gholamreza Salimi-Khorshidi, Dexter Canoy, Abdelaali Hassaine, Thomas Lukasiewicz, Kazem Rahimi
In this study, we propose two methods, namely, model distillation and variable selection, to untangle hidden patterns learned by an established deep learning model (BEHRT) for risk association identification.
no code implementations • 27 Jan 2021 • Shishir Rao, Yikuan Li, Rema Ramakrishnan, Abdelaali Hassaine, Dexter Canoy, John Cleland, Thomas Lukasiewicz, Gholamreza Salimi-Khorshidi, Kazem Rahimi
Predicting the incidence of complex chronic conditions such as heart failure is challenging.
no code implementations • 1 Jan 2021 • JianFeng Wang, Thomas Lukasiewicz, Zhongchao Shi
Learning discriminative node features is the key to further improve the performance of graph-based face clustering.
no code implementations • 16 Dec 2020 • Lei Sha, Thomas Lukasiewicz
After the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence.
no code implementations • 16 Dec 2020 • Lei Sha, Oana-Maria Camburu, Thomas Lukasiewicz
We use an adversarial-based technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
1 code implementation • JURIX 2020 • Alina Petrova, John Armour, Thomas Lukasiewicz
Predicting the outcome of a legal process has recently gained considerable research attention.
no code implementations • NeurIPS 2020 • Yuhang Song, Thomas Lukasiewicz, Zhenghua Xu, Rafal Bogacz
However, there are several gaps between BP and learning in biologically plausible neuronal networks of the brain (learning in the brain, or simply BL, for short), in particular, (1) it has been unclear to date whether BP can be implemented exactly via BL, (2) there is a lack of local plasticity in BP, i.e., weight updates require information that is not locally available, while BL utilizes only locally available information, and (3) there is a lack of autonomy in BP, i.e., some external control over the neural network is required (e.g., switching between prediction and learning stages requires changes to dynamics and synaptic plasticity rules), while BL works fully autonomously.
no code implementations • 16 Nov 2020 • Wenting Xu, Chang Qi, Zhenghua Xu, Thomas Lukasiewicz
To reduce doctors' workload, deep-learning-based automatic medical report generation has recently attracted more and more research efforts, where attention mechanisms and reinforcement learning are integrated with the classic encoder-decoder architecture to enhance the performance of deep models.
no code implementations • 15 Nov 2020 • Bo Wang, Lei Wang, Junyang Chen, Zhenghua Xu, Thomas Lukasiewicz, Zhigang Fu
Non-local attention and multi-scale feature learning are widely used in network modeling, which drives progress in medical image segmentation.
no code implementations • 15 Nov 2020 • Di Yuan, Junyang Chen, Zhenghua Xu, Thomas Lukasiewicz, Zhigang Fu, Guizhi Xu
However, U-Net is mainly designed for segmentation: its extracted features target segmentation location information, and its input and output are different.
no code implementations • 15 Nov 2020 • Chang Qi, Junyang Chen, Guizhi Xu, Zhenghua Xu, Thomas Lukasiewicz, Yang Liu
We first generate MRI images on limited datasets, then we trained three popular classification models to get the best model for tumor classification.
1 code implementation • 3 Nov 2020 • Vid Kocijan, Oana-Maria Camburu, Thomas Lukasiewicz
For example, if the feminine subset of a gender-bias-measuring coreference resolution dataset contains sentences with a longer average distance between the pronoun and the correct candidate, an RNN-based model may perform worse on this subset due to long-term dependencies.
no code implementations • 2 Nov 2020 • Dan Zhao, Guizhi Xu, Zhenghua Xu, Thomas Lukasiewicz, Minmin Xue, Zhigang Fu
Computer-Aided Diagnosis and Treatment of Tumors is a hot topic of deep learning in recent years, which constitutes a series of medical tasks, such as detection of tumor markers, the outline of tumor lesions, subtypes and stages of tumors, prediction of therapeutic effect, and drug development.
1 code implementation • NeurIPS 2020 • Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
To achieve this, a new word-level discriminator is proposed, which provides the generator with fine-grained training feedback at word-level, to facilitate training a lightweight generator that has a small number of parameters, but can still correctly focus on specific visual attributes of an image, and then edit them without affecting other contents that are not described in the text.
1 code implementation • NeurIPS 2020 • Eleonora Giunchiglia, Thomas Lukasiewicz
Hierarchical multi-label classification (HMC) is a challenging classification task extending standard multi-label classification problems by imposing a hierarchy constraint on the classes.
1 code implementation • EMNLP 2020 • Yordan Yordanov, Oana-Maria Camburu, Vid Kocijan, Thomas Lukasiewicz
Overall, four categories of training and evaluation objectives have been introduced.
1 code implementation • 2 Oct 2020 • Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, Thomas Lukasiewicz
In this work, we analyze the expressive power of GNNs with RNI, and prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
1 code implementation • 23 Sep 2020 • Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom
For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions.
1 code implementation • 20 Jul 2020 • Louis Mahon, Eleonora Giunchiglia, Bowen Li, Thomas Lukasiewicz
Nearly all existing techniques for automated video annotation (or captioning) describe videos using natural language sentences.
1 code implementation • NeurIPS 2020 • Ralph Abboud, İsmail İlkan Ceylan, Thomas Lukasiewicz, Tommaso Salvatori
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Ranked #1 on Link Prediction on FB-AUTO
no code implementations • 23 Apr 2020 • Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern
The Winograd Schema Challenge is both a commonsense reasoning and natural language understanding challenge, introduced as an alternative to the Turing test.
3 code implementations • 7 Apr 2020 • Virginie Do, Oana-Maria Camburu, Zeynep Akata, Thomas Lukasiewicz
The recently proposed SNLI-VE corpus for recognising visual-textual entailment is a large, real-world dataset for fine-grained multimodal reasoning.
no code implementations • 23 Mar 2020 • Yikuan Li, Shishir Rao, Abdelaali Hassaine, Rema Ramakrishnan, Yajie Zhu, Dexter Canoy, Gholamreza Salimi-Khorshidi, Thomas Lukasiewicz, Kazem Rahimi
In this paper, we merge features of the deep Bayesian learning framework with deep kernel learning to leverage the strengths of both methods for more comprehensive uncertainty estimation.
no code implementations • 12 Feb 2020 • Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks, which allows text descriptions to determine the visual attributes of synthetic images.
3 code implementations • 12 Dec 2019 • Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
The goal of our paper is to semantically edit parts of an image matching a given text that describes desired attributes (e.g., texture, colour, and background), while preserving other contents that are irrelevant to the text.
no code implementations • 18 Nov 2019 • Zehua Cheng, Weiyang Wang, Yan Pan, Thomas Lukasiewicz
However, most low-precision training solutions are based on a mixed-precision strategy.
1 code implementation • ACL 2020 • Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
To increase trust in artificial intelligence systems, a promising research direction consists of designing neural models capable of generating natural language explanations for their predictions.
2 code implementations • 4 Oct 2019 • Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, Phil Blunsom
We aim for this framework to provide a publicly available, off-the-shelf evaluation when the feature-selection perspective on explanations is needed.
2 code implementations • NeurIPS 2019 • Bowen Li, Xiaojuan Qi, Thomas Lukasiewicz, Philip H. S. Torr
In this paper, we propose a novel controllable text-to-image generative adversarial network (ControlGAN), which can effectively synthesise high-quality images and also control parts of the image generation according to natural language descriptions.
Ranked #7 on Text-to-Image Generation on Multi-Modal-CelebA-HQ
1 code implementation • IJCNLP 2019 • Vid Kocijan, Oana-Maria Camburu, Ana-Maria Cretu, Yordan Yordanov, Phil Blunsom, Thomas Lukasiewicz
We use a language-model-based approach for pronoun resolution in combination with our WikiCREM dataset.
no code implementations • ACL 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.
2 code implementations • 17 May 2019 • Yuhang Song, Andrzej Wojcicki, Thomas Lukasiewicz, Jianyi Wang, Abi Aryan, Zhenghua Xu, Mai Xu, Zihan Ding, Lianlong Wu
That is, there is not yet a general evaluation platform for research on multi-agent intelligence.
2 code implementations • 15 May 2019 • Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning.
Ranked #13 on Natural Language Inference on WNLI
1 code implementation • 12 May 2019 • Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong Zhang, Andrzej Wojcicki, Mai Xu
Intrinsic rewards were introduced to simulate how human intelligence works; they are usually evaluated by intrinsically-motivated play, i.e., playing games without extrinsic rewards but evaluated with extrinsic rewards.
no code implementations • 30 Apr 2019 • Zehua Cheng, Yuxiang Wu, Zhenghua Xu, Thomas Lukasiewicz, Weiyang Wang
Region proposal mechanisms are essential for existing deep learning approaches to object detection in images.
Ranked #1 on Head Detection on Rebar Head
1 code implementation • 4 Apr 2019 • Ralph Abboud, Ismail Ilkan Ceylan, Thomas Lukasiewicz
Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference.
2 code implementations • NeurIPS 2018 • Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from human-provided explanations at train time.
Ranked #1 on Natural Language Inference on e-SNLI
1 code implementation • 10 Nov 2018 • Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Mai Xu
However, HRL with multiple levels is usually needed in many real-world scenarios, whose ultimate goals are highly abstract, while their actions are very primitive.
2 code implementations • 24 Aug 2018 • Patrick Hohenecker, Thomas Lukasiewicz
This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems.
no code implementations • 26 Jun 2018 • Thomas Lukasiewicz, Enrico Malizia
On the other hand, global voting over non-$\mathcal{O}$-legal CP-nets has not been carefully analyzed, even though it was stated in the literature that a theoretical comparison between global and sequential voting was promising, and a precise complexity analysis for global voting has been asked for multiple times.
no code implementations • 29 May 2017 • Patrick Hohenecker, Thomas Lukasiewicz
In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning.
no code implementations • 29 Nov 2013 • Thomas Lukasiewicz, Maria Vanina Martinez, Cristian Molinaro, Livia Predoiu, Gerardo I. Simari
These pieces of information from every report are then combined, along with the querying user's preferences and his/her trust in each report, to rank the query results.