1 code implementation • 26 Aug 2024 • Danil Provodin, Bram van den Akker, Christina Katsimerou, Maurits Kaptein, Mykola Pechenizkiy
Research on learning using privileged information (LUPI) aims to transfer the knowledge captured in PI onto a model that can perform inference without PI.
1 code implementation • 18 Aug 2024 • Ben Halstead, Yun Sing Koh, Patricia Riddle, Mykola Pechenizkiy, Albert Bifet
Existing streaming approaches either do not consider that the relevance of experience changes over time (and thus cannot handle concept drift), consider only the recency of experience (and thus cannot handle recurring concepts), or evaluate relevance only sparsely (and thus fail when concept drift is missed).
no code implementations • 14 Aug 2024 • Ricky Maulana Fajri, Yulong Pei, Lu Yin, Mykola Pechenizkiy
Despite significant advancements in active learning and adversarial attacks, the intersection of these two fields remains underexplored, particularly in developing robust active learning frameworks against dynamic adversarial threats.
1 code implementation • 8 Aug 2024 • Zahra Atashgahi, Tennison Liu, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu, Mihaela van der Schaar
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
no code implementations • 24 Jul 2024 • Tianjin Huang, Fang Meng, Li Shen, Fan Liu, Yulong Pei, Mykola Pechenizkiy, Shiwei Liu, Tianlong Chen
In this paper, we investigate a charming possibility: leveraging visual prompts to capture the channel importance and derive high-quality structural sparsity.
1 code implementation • 24 Jul 2024 • Wieger Wesselink, Bram Grooten, Qiao Xiao, Cassio de Campos, Mykola Pechenizkiy
We introduce Nerva, a fast neural network library written in C++ that is currently under development.
no code implementations • 26 Jun 2024 • Qiao Xiao, Pingchuan Ma, Adriana Fernandez-Lopez, Boqian Wu, Lu Yin, Stavros Petridis, Mykola Pechenizkiy, Maja Pantic, Decebal Constantin Mocanu, Shiwei Liu
The recent success of Automatic Speech Recognition (ASR) is largely attributed to the ever-growing amount of training data.
no code implementations • 10 Jun 2024 • Calarina Muslimani, Bram Grooten, Deepak Ranganatha Sastry Mamillapalli, Mykola Pechenizkiy, Decebal Constantin Mocanu, Matthew E. Taylor
It becomes essential that agents learn to focus on the subset of task-relevant environment features.
no code implementations • 4 Jun 2024 • Tim d'Hondt, Mykola Pechenizkiy, Robert Peharz
Optimization-based techniques for federated learning (FL) often come with prohibitive communication cost, as high dimensional model parameters need to be communicated repeatedly between server and clients.
1 code implementation • 29 May 2024 • Danil Provodin, Maurits Kaptein, Mykola Pechenizkiy
We present a new algorithm based on posterior sampling for learning in Constrained Markov Decision Processes (CMDP) in the infinite-horizon undiscounted setting.
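For intuition, here is a minimal sketch of posterior (Thompson) sampling in the much simpler Bernoulli bandit setting; it illustrates only the sample-then-act-greedily principle, not the authors' CMDP algorithm, and the priors, arm count, and horizon are illustrative assumptions.

    import numpy as np

    def posterior_sampling_bandit(pull_arm, n_arms=3, horizon=1000):
        # Beta(1, 1) priors over each arm's unknown success probability
        successes = np.ones(n_arms)
        failures = np.ones(n_arms)
        for _ in range(horizon):
            # Sample one model from the posterior, then act greedily w.r.t. it
            theta = np.random.beta(successes, failures)
            arm = int(np.argmax(theta))
            reward = pull_arm(arm)  # observed 0/1 reward from the environment
            successes[arm] += reward
            failures[arm] += 1 - reward
        return successes / (successes + failures)  # posterior mean per arm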
no code implementations • 18 Apr 2024 • Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy
While individuals and organizations have an obligation to avoid discrimination, the use of fairness-aware machine learning interventions has also been described as amounting to 'algorithmic positive action' under European Union (EU) non-discrimination law.
1 code implementation • 9 Apr 2024 • Igor G. Smit, Zaharah Bukhsh, Mykola Pechenizkiy, Kostas Alogariastos, Kasper Hendriks, Yingqian Zhang
We develop a discrete-event simulation model, which we use to train and evaluate the proposed DRL approach.
no code implementations • 29 Feb 2024 • Pratik Gajane, Sean Newman, Mykola Pechenizkiy, John D. Piette
In this article, we study gender fairness in personalized pain care recommendations using a real-world application of reinforcement learning (Piette et al., 2022a).
1 code implementation • 17 Jan 2024 • Meng Fang, Shilong Deng, Yudi Zhang, Zijing Shi, Ling Chen, Mykola Pechenizkiy, Jun Wang
Many real-world applications are characterized by their symbolic nature, necessitating a strong capability for symbolic reasoning.
1 code implementation • 23 Dec 2023 • Bram Grooten, Tristan Tomilin, Gautham Vasan, Matthew E. Taylor, A. Rupam Mahmood, Meng Fang, Mykola Pechenizkiy, Decebal Constantin Mocanu
Our algorithm improves the agent's focus with useful masks, while its efficient Masker network only adds 0.2% more parameters to the original structure, in contrast to previous work.
no code implementations • 11 Dec 2023 • Jiaxu Zhao, Meng Fang, Shirui Pan, Wenpeng Yin, Mykola Pechenizkiy
In this work, we propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs (e.g., GPT-4) to assess bias in models.
1 code implementation • 7 Dec 2023 • Ricky Maulana Fajri, Yulong Pei, Lu Yin, Mykola Pechenizkiy
To address this problem, we propose the Structural-Clustering PageRank method for improved Active learning (SPA) specifically designed for graph-structured data.
1 code implementation • 7 Dec 2023 • Boqian Wu, Qiao Xiao, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Decebal Constantin Mocanu, Maurice van Keulen, Elena Mocanu
E2ENet achieves comparable accuracy on the large-scale challenge AMOS-CT, while saving over 68% of the parameter count and 29% of the FLOPs in the inference phase, compared with the previous best-performing method.
1 code implementation • 5 Dec 2023 • Jiaxu Zhao, Lu Yin, Shiwei Liu, Meng Fang, Mykola Pechenizkiy
These bias attributes are strongly spuriously correlated with the target variable, causing the models to be biased towards spurious correlations (i.e., bias-conflicting).
1 code implementation • 3 Dec 2023 • Can Jin, Tianjin Huang, Yihua Zhang, Mykola Pechenizkiy, Sijia Liu, Shiwei Liu, Tianlong Chen
The rapid development of large-scale deep learning models challenges the affordability of hardware platforms, which necessitates pruning to reduce their computational and memory footprints.
no code implementations • 30 Oct 2023 • Iftitahu Ni'mah, Samaneh Khoshrou, Vlado Menkovski, Mykola Pechenizkiy
Interestingly, although in general the absolute advantage of learning embeddings through label supervision is highly positive across evaluation datasets, KeyGen2Vec is shown to be competitive with a classifier that exploits topic label supervision in Yahoo!
no code implementations • 12 Oct 2023 • Zirui Liang, Yuntao Li, Tianjin Huang, Akrati Saxena, Yulong Pei, Mykola Pechenizkiy
This leads to suboptimal performance of standard GNNs on imbalanced graphs.
1 code implementation • 8 Oct 2023 • Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Jaiswal, Mykola Pechenizkiy, Yi Liang, Michael Bendersky, Zhangyang Wang, Shiwei Liu
Large Language Models (LLMs), renowned for their remarkable performance across diverse domains, present a challenge when it comes to practical deployment due to their colossal model size.
no code implementations • 27 Sep 2023 • Danil Provodin, Pratik Gajane, Mykola Pechenizkiy, Maurits Kaptein
We present a new algorithm based on posterior sampling for learning in constrained Markov decision processes (CMDP) in the infinite-horizon undiscounted setting.
1 code implementation • 25 Jun 2023 • Tianjin Huang, Shiwei Liu, Tianlong Chen, Meng Fang, Li Shen, Vlado Menkovski, Lu Yin, Yulong Pei, Mykola Pechenizkiy
Although adversarial training has become the de facto method for improving the robustness of deep neural networks, it is well known that vanilla adversarial training suffers from daunting robust overfitting, resulting in unsatisfactory robust generalization.
1 code implementation • 30 May 2023 • Tianjin Huang, Lu Yin, Zhenyu Zhang, Li Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu
We hereby carry out a first-of-its-kind study unveiling that modern large-kernel ConvNets, a compelling competitor to Vision Transformers, are remarkably more effective teachers for small-kernel ConvNets, due to more similar architectures.
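For background, teacher-student distillation in such setups typically blends a temperature-softened KL term with the hard-label loss; the sketch below shows this standard objective, with the temperature and mixing weight as illustrative assumptions rather than the paper's settings.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soft targets: KL between temperature-softened distributions
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy on the labels
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard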
1 code implementation • 28 May 2023 • Zahra Atashgahi, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
Efficient time series forecasting has become critical for real-world applications, particularly with deep neural networks (DNNs).
no code implementations • NeurIPS 2023 • Yudi Zhang, Yali Du, Biwei Huang, Ziyan Wang, Jun Wang, Meng Fang, Mykola Pechenizkiy
While the majority of current approaches construct the reward redistribution in an uninterpretable manner, we propose to explicitly model the contributions of state and action from a causal perspective, resulting in an interpretable reward redistribution and preserving policy invariance.
1 code implementation • 18 May 2023 • Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling Chen, Mykola Pechenizkiy
We evaluate two popular pretrained Chinese conversational models, CDial-GPT and EVA2.0, using CHBias.
7 code implementations • 15 May 2023 • Iftitahu Ni'mah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy
Our proposed framework provides a means (i) to verify whether automatic metrics are faithful to human preference, regardless of their level of correlation with humans, and (ii) to inspect the strengths and limitations of NLG systems via pairwise evaluation.
no code implementations • 5 May 2023 • Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy
In this paper, we aim to illustrate to what extent European Union (EU) non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature and where they differ.
no code implementations • 15 Mar 2023 • Hilde Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter
The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices.
1 code implementation • 10 Mar 2023 • Zahra Atashgahi, Xuhao Zhang, Neil Kichler, Shiwei Liu, Lu Yin, Mykola Pechenizkiy, Raymond Veldhuis, Decebal Constantin Mocanu
Feature selection, which selects an informative subset of variables from data, not only enhances model interpretability and performance but also reduces resource demands.
1 code implementation • 13 Feb 2023 • Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew E. Taylor, Mykola Pechenizkiy, Decebal Constantin Mocanu
Tomorrow's robots will need to distinguish useful information from noise when performing different tasks.
1 code implementation • 19 Dec 2022 • Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu
The receptive field (RF), which determines the region of the time series that can be "seen" and used, is critical to improving performance for time series classification (TSC).
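As a reminder of the standard computation (general background, not a result of this paper), a stack of L stride-1 dilated 1-D convolutions with kernel sizes k_i and dilation rates d_i has receptive field

\[ \mathrm{RF} \;=\; 1 + \sum_{i=1}^{L} (k_i - 1)\, d_i , \]

so, for example, three layers with kernel size 3 and dilations 1, 2, 4 see a window of 15 time steps.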
1 code implementation • 28 Nov 2022 • Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of the fully trained dense networks at initialization, without any optimization of the weights of the network (i.e., untrained networks).
1 code implementation • 26 Nov 2022 • Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu
Our proposed approach outperforms the state-of-the-art methods in terms of selecting informative features while reducing training iterations and computational costs substantially.
1 code implementation • 21 Sep 2022 • Ricky Fajri, Akrati Saxena, Yulong Pei, Mykola Pechenizkiy
Active Learning (AL) techniques have proven to be highly effective in reducing data labeling costs across a range of machine learning tasks.
1 code implementation • 8 Sep 2022 • Danil Provodin, Pratik Gajane, Mykola Pechenizkiy, Maurits Kaptein
We study a posterior sampling approach to efficient exploration in constrained reinforcement learning.
1 code implementation • 23 Aug 2022 • Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy
We call our method Lottery Pools.
1 code implementation • 8 Jul 2022 • Zahra Atashgahi, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
We show that ALACPD, on average, ranks first among state-of-the-art CPD algorithms in terms of quality of the time series segmentation, and it is on par with the best performer in terms of the accuracy of the estimated change-points.
1 code implementation • 7 Jul 2022 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Xuxi Chen, Qiao Xiao, Boqian Wu, Tommi Kärkkäinen, Mykola Pechenizkiy, Decebal Mocanu, Zhangyang Wang
Transformers have quickly shined in the computer vision world since the emergence of Vision Transformers (ViTs).
no code implementations • 30 May 2022 • Lu Yin, Vlado Menkovski, Meng Fang, Tianjin Huang, Yulong Pei, Mykola Pechenizkiy, Decebal Constantin Mocanu, Shiwei Liu
Recent works on sparse neural network training (sparse training) have shown that a compelling trade-off between performance and efficiency can be achieved by training intrinsically sparse neural networks from scratch.
1 code implementation • Findings (NAACL) 2022 • Yibin Lei, Yu Cao, Dianqi Li, Tianyi Zhou, Meng Fang, Mykola Pechenizkiy
Generating high-quality textual adversarial examples is critical for investigating the pitfalls of natural language processing (NLP) models and further promoting their robustness.
no code implementations • 20 May 2022 • Pratik Gajane, Akrati Saxena, Maryam Tavakol, George Fletcher, Mykola Pechenizkiy
In this article, we provide an extensive overview of fairness approaches that have been implemented via a reinforcement learning (RL) framework.
no code implementations • 17 Feb 2022 • Hilde Weerts, Lambèr Royakkers, Mykola Pechenizkiy
We use the framework to distil moral and empirical assumptions under which particular fairness metrics correspond to a fair distribution of outcomes.
1 code implementation • 14 Feb 2022 • Danil Provodin, Pratik Gajane, Mykola Pechenizkiy, Maurits Kaptein
Our main theoretical results show that the impact of batch learning amounts to a multiplicative factor of the batch size on the regret of the corresponding online algorithm.
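Read schematically (this is a paraphrase of the abstract, not the paper's exact theorem statement): if the online algorithm incurs regret R_online(T) over horizon T, then running it in batches of size b inflates the regret by at most a factor of the batch size,

\[ R_{\mathrm{batch}}(T) \;\lesssim\; b \cdot R_{\mathrm{online}}(T). \]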
1 code implementation • ICLR 2022 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Li Shen, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy
In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks.
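A minimal sketch of the idea, assuming uniform layer-wise sparsity for illustration (the paper studies various sparsity ratios and architectures):

    import torch
    import torch.nn as nn

    def random_prune_at_init(model: nn.Module, sparsity: float = 0.9):
        # Randomly zero out a fraction `sparsity` of weights in each
        # linear/conv layer before any training; the masks are kept and
        # reapplied after every optimizer step to keep the topology fixed.
        masks = {}
        for name, module in model.named_modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                mask = (torch.rand_like(module.weight) > sparsity).float()
                module.weight.data.mul_(mask)
                masks[name] = mask
        return masks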
no code implementations • 16 Dec 2021 • Lu Yin, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
In this work, we advance few-shot learning towards this more challenging scenario, semantic-based few-shot learning, and propose a method to address this paradigm by capturing the inner semantic relationships using interactive psychometric learning.
1 code implementation • 3 Nov 2021 • Danil Provodin, Pratik Gajane, Mykola Pechenizkiy, Maurits Kaptein
We consider a special case of bandit problems, namely batched bandits.
1 code implementation • 11 Oct 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
To address this challenge, we propose a new CL method, named AFAF, that aims to Avoid Forgetting and Allow Forward transfer in class-IL using fixed-capacity models.
1 code implementation • 1 Oct 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
In this paper, we present the Calibrated Adversarial Training, a method that reduces the adverse effects of semantic perturbations in adversarial training.
no code implementations • 21 Sep 2021 • Xin Du, Subramanian Ramamoorthy, Wouter Duivesteijn, Jin Tian, Mykola Pechenizkiy
Specifically, we propose to leverage causal knowledge by regarding the distributional shifts in subpopulations and deployment environments as the results of interventions on the underlying system.
1 code implementation • Findings (EMNLP) 2021 • Iftitahu Ni'mah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy
The ability to detect Out-of-Domain (OOD) inputs has been a critical requirement in many real-world NLP applications.
no code implementations • 7 Aug 2021 • Masoud Mansoury, Himan Abdollahpouri, Bamshad Mobasher, Mykola Pechenizkiy, Robin Burke, Milad Sabouri
This is especially problematic when bias is amplified over time as a few popular items are repeatedly over-represented in recommendation lists.
no code implementations • 7 Jul 2021 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research.
no code implementations • 7 Jul 2021 • Lu Yin, Vlado Menkovski, Shiwei Liu, Mykola Pechenizkiy
One of the major challenges in the supervised learning approaches is expressing and collecting the rich knowledge that experts have with respect to the meaning present in the image data.
1 code implementation • 6 Jul 2021 • Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy
Adversarial training is an approach for increasing a model's resilience against adversarial perturbations.
2 code implementations • ICLR 2022 • Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Our framework, FreeTickets, is defined as the ensemble of these relatively cheap sparse subnetworks.
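Conceptually, the final predictor just averages the outputs of the collected sparse subnetworks; a schematic sketch follows (how the subnetworks are stored and loaded is an assumption):

    import torch

    @torch.no_grad()
    def ensemble_predict(subnetworks, x):
        # Average the softmax outputs of the independently collected
        # sparse subnetworks (the "free tickets").
        probs = [torch.softmax(net(x), dim=-1) for net in subnetworks]
        return torch.stack(probs).mean(dim=0)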
2 code implementations • NeurIPS 2021 • Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have drawn considerable attention to post-training pruning (iterative magnitude pruning) and before-training pruning (pruning at initialization).
1 code implementation • 8 Jun 2021 • Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone
In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning to accelerate the training process.
1 code implementation • 19 Apr 2021 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Yuhao Wang, Mykola Pechenizkiy
Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptible changes to the inputs.
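For concreteness, the classic fast gradient sign method (FGSM) is one standard way of crafting such imperceptible perturbations; it is shown here as general background, not as this paper's method, and the epsilon budget is an illustrative assumption.

    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon=8 / 255):
        # Perturb x by epsilon in the direction of the loss gradient's sign.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        with torch.no_grad():
            x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
        return x_adv.detach()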
1 code implementation • 16 Apr 2021 • Tianjin Huang, Yulong Pei, Vlado Menkovski, Mykola Pechenizkiy
Although various approaches have been proposed to solve this problem, two major limitations exist: (1) unsupervised approaches usually work much less efficiently due to the lack of supervisory signal, and (2) existing anomaly detection methods only use local contextual information to detect anomalous nodes, e.g., one- or two-hop information, but ignore the global contextual information.
3 code implementations • 4 Feb 2021 • Shiwei Liu, Lu Yin, Decebal Constantin Mocanu, Mykola Pechenizkiy
By starting from a random sparse network and continuously exploring sparse connectivities during training, we can perform an Over-Parameterization in the space-time manifold, closing the gap in the expressibility between sparse training and dense training.
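A schematic prune-and-grow step in the spirit of dynamic sparse training; magnitude-based dropping and random regrowth are shown for illustration, and the paper's exact criteria and schedules may differ.

    import torch

    def prune_and_grow(weight: torch.Tensor, mask: torch.Tensor, frac: float = 0.1):
        # Drop the smallest-magnitude fraction of active weights, then
        # regrow the same number of connections at random positions.
        # (A fuller implementation would exclude just-pruned positions
        # from regrowth and handle ties at the threshold.)
        active = mask.bool()
        n_update = int(frac * active.sum().item())
        if n_update == 0:
            return mask
        threshold = torch.kthvalue(weight[active].abs(), n_update).values
        new_mask = mask.clone()
        new_mask[active & (weight.abs() <= threshold)] = 0
        inactive = torch.nonzero(new_mask == 0, as_tuple=False)
        pick = inactive[torch.randperm(len(inactive))[:n_update]]
        new_mask[tuple(pick.t())] = 1
        return new_mask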
1 code implementation • 28 Jan 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
In this paper, we propose a new method, named Self-Attention Meta-Learner (SAM), which learns prior knowledge for continual learning that permits learning a sequence of tasks while avoiding catastrophic forgetting.
1 code implementation • 22 Jan 2021 • Shiwei Liu, Decebal Constantin Mocanu, Yulong Pei, Mykola Pechenizkiy
Sparse neural networks have been widely applied to reduce the computational demands of training and deploying over-parameterized deep neural networks.
1 code implementation • 15 Jan 2021 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
Finally, we analyze the role of the shared invariant representation in mitigating the forgetting problem especially when the number of replayed samples for each previous task is small.
2 code implementations • 1 Dec 2020 • Zahra Atashgahi, Ghada Sokar, Tim Van der Lee, Elena Mocanu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
This method, named QuickSelection, introduces the strength of the neuron in sparse neural networks as a criterion to measure the feature importance.
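The criterion is simple to state: score each input feature by the total magnitude of its surviving connections in the sparse layer. A sketch under that reading (normalization and training details are assumptions):

    import numpy as np

    def neuron_strength(weight: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # weight, mask: (n_out, n_in); the importance of input feature j is
        # the summed |w| over its remaining (unpruned) outgoing connections.
        return np.abs(weight * mask).sum(axis=0)

    def select_features(weight, mask, k):
        # Indices of the k strongest input neurons, most important first.
        return np.argsort(neuron_strength(weight, mask))[::-1][:k]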
no code implementations • 30 Nov 2020 • Sahithya Ravi, Samaneh Khoshrou, Mykola Pechenizkiy
In light of the COVID-19 pandemic, deep learning methods have been widely investigated in detecting COVID-19 from chest X-rays.
1 code implementation • 7 Nov 2020 • Tianjin Huang, Vlado Menkovski, Yulong Pei, Mykola Pechenizkiy
In addition, it achieves comparable adversarial robustness on the MNIST dataset under white-box attacks, outperforms adv. PGD under white-box attacks, and effectively defends against transferable adversarial attacks on the CIFAR-10 dataset.
1 code implementation • 30 Sep 2020 • Yulong Pei, Tianjin Huang, Werner van Ipenburg, Mykola Pechenizkiy
Effectively detecting anomalous nodes in attributed networks is crucial for the success of many real-world applications such as fraud and intrusion detection.
no code implementations • 25 Jul 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Recommendation algorithms are known to suffer from popularity bias: a few popular items are recommended frequently while the majority of other items are ignored.
1 code implementation • 15 Jul 2020 • Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
Regularization-based methods maintain a fixed model capacity; however, previous studies showed the severe performance degradation of these methods when the task identity is not available during inference (e.g., the class-incremental learning scenario).
2 code implementations • 24 Jun 2020 • Shiwei Liu, Tim Van der Lee, Anil Yaman, Zahra Atashgahi, Davide Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu
However, comparing different sparse topologies and determining how sparse topologies evolve during training, especially when sparse structure optimization is involved, remain challenging open questions.
no code implementations • 3 May 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
That leads to low coverage of items in recommendation lists across users (i.e., low aggregate diversity) and an unfair distribution of recommended items.
no code implementations • 14 Apr 2020 • Lu Yin, Vlado Menkovski, Mykola Pechenizkiy
The main reason for such a reductionist approach is the difficulty in eliciting the domain knowledge from the experts.
no code implementations • 18 Feb 2020 • Masoud Mansoury, Himan Abdollahpouri, Jessie Smith, Arman Dehpanah, Mykola Pechenizkiy, Bamshad Mobasher
The proliferation of personalized recommendation technologies has raised concerns about discrepancies in their recommendation performance across different genders, age groups, and racial or ethnic populations.
no code implementations • 10 Feb 2020 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
A learning process with the plasticity property often requires reinforcement signals to guide it.
no code implementations • 15 Jan 2020 • Yuhao Wang, Vlado Menkovski, Hao Wang, Xin Du, Mykola Pechenizkiy
As systems are getting more autonomous with the development of artificial intelligence, it is important to discover the causal knowledge from observational sensory inputs.
no code implementations • 3 Nov 2019 • Masoud Mansoury, Himan Abdollahpouri, Joris Rombouts, Mykola Pechenizkiy
In this paper, we aim to explore the relationship between the consistency of users' ratings behavior and the degree of calibrated recommendations they receive.
no code implementations • 17 Sep 2019 • Iftitahu Ni'mah, Vlado Menkovski, Mykola Pechenizkiy
This study mainly investigates two decoding problems in neural keyphrase generation: sequence length bias and beam diversity.
1 code implementation • 2 Aug 2019 • Masoud Mansoury, Bamshad Mobasher, Robin Burke, Mykola Pechenizkiy
Research on fairness in machine learning has been recently extended to recommender systems.
no code implementations • 7 Jul 2019 • Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy
In many contexts, it can be useful for domain experts to understand to what extent predictions made by a machine learning model can be trusted.
no code implementations • 7 Jul 2019 • Hilde J. P. Weerts, Werner van Ipenburg, Mykola Pechenizkiy
In this paper we present the results of a human-grounded evaluation of SHAP, an explanation method that has been well-received in the XAI and related communities.
1 code implementation • 27 Jun 2019 • Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy
Large neural networks are very successful in various tasks.
2 code implementations • 30 Apr 2019 • Xin Du, Lei Sun, Wouter Duivesteijn, Alexander Nikolaev, Mykola Pechenizkiy
The challenges for this problem are two-fold: on the one hand, we have to derive a causal estimator to estimate the causal quantity from observational data, where there exists confounding bias; on the other hand, we have to deal with the identification of CATE when the distributions of covariates in the treatment and control groups are imbalanced.
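For reference, the causal quantity in question is the standard conditional average treatment effect,

\[ \tau(x) \;=\; \mathbb{E}\big[\, Y(1) - Y(0) \mid X = x \,\big], \]

where Y(1) and Y(0) are the potential outcomes under treatment and control; confounding means this cannot be read off observational data without adjustment, which is exactly the first challenge above.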
no code implementations • 2 Apr 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, Matt Coler, George Fletcher, Mykola Pechenizkiy
Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons.
no code implementations • 22 Mar 2019 • Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e., rules that update synapses based on the neuron activations and reinforcement signals.
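A minimal sketch of a reward-modulated Hebbian update of this kind; the exact coefficient layout of the evolved rules in the paper may differ.

    import numpy as np

    def hebbian_update(w, pre, post, reward, lr=0.01):
        # delta w[i, j] = lr * reward * post[i] * pre[j]
        # (a synapse strengthens when its pre- and post-synaptic neurons
        # are co-active and the reinforcement signal is positive)
        return w + lr * reward * np.outer(post, pre)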
2 code implementations • 17 Mar 2019 • Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
Concretely, by exploiting the cosine similarity metric to measure the importance of the connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without calculating dense gradients in the backward pass.
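A sketch of the cosine-similarity score for candidate connections, computed from a batch of layer activations (the bookkeeping CTRE uses for actually adding connections is more involved):

    import numpy as np

    def connection_importance(pre_acts: np.ndarray, post_acts: np.ndarray):
        # pre_acts: (batch, n_pre), post_acts: (batch, n_post).
        # Cosine similarity between each pre/post neuron pair's activation
        # patterns over the batch; a high score marks a promising connection.
        pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-8)
        post = post_acts / (np.linalg.norm(post_acts, axis=0, keepdims=True) + 1e-8)
        return post.T @ pre  # (n_post, n_pre) similarity matrix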
4 code implementations • 26 Jan 2019 • Shiwei Liu, Decebal Constantin Mocanu, Amarsagar Reddy Ramapuram Matavalam, Yulong Pei, Mykola Pechenizkiy
Despite the success of ANNs, it is challenging to train and deploy modern ANNs on commodity hardware due to the ever-increasing model size and the unprecedented growth in the data volumes.
no code implementations • 26 Jan 2019 • Shiwei Liu, Decebal Constantin Mocanu, Mykola Pechenizkiy
However, LSTMs are prone to being memory-bandwidth limited in realistic applications, and their training and inference times become prohibitively long as model sizes keep increasing.
no code implementations • 8 Nov 2018 • Wenting Xiong, Iftitahu Ni'mah, Juan M. G. Huesca, Werner van Ipenburg, Jan Veldsink, Mykola Pechenizkiy
Layer-wise Relevance Propagation (LRP) and saliency maps have been recently used to explain the predictions of Deep Learning models, specifically in the domain of text classification.
no code implementations • 22 Aug 2018 • Oren Zeev-Ben-Mordehai, Wouter Duivesteijn, Mykola Pechenizkiy
Finding regions for which there is higher controversy among different classifiers is insightful with regard to the domain and our models.
no code implementations • 25 May 2018 • Yulong Pei, Xin Du, Jianpeng Zhang, George Fletcher, Mykola Pechenizkiy
Almost all previous methods represent a node as a point in space and focus on local structural information, i.e., neighborhood information.
no code implementations • 19 Apr 2018 • Anil Yaman, Decebal Constantin Mocanu, Giovanni Iacca, George Fletcher, Mykola Pechenizkiy
Many real-world control and classification tasks involve a large number of features.
no code implementations • 9 Oct 2017 • Pratik Gajane, Mykola Pechenizkiy
Machine learning algorithms for prediction are increasingly being used in critical decisions affecting human lives.
no code implementations • 15 Dec 2014 • Erik Tromp, Mykola Pechenizkiy
We study sentiment analysis beyond the typical granularity of polarity and instead use Plutchik's wheel of emotions model.
no code implementations • 23 Dec 2013 • Indre Zliobaite, Mykola Pechenizkiy
We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling.
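In its simplest form, this reduces to scoring each candidate value of the actionable attribute with a fitted probabilistic classifier and keeping the best one; a sketch, where `model` is any estimator exposing `predict_proba` and the candidate grid is an assumption:

    import numpy as np

    def best_action_value(model, x, attr_index, candidate_values):
        # Try every candidate value for the actionable attribute and pick
        # the one maximizing the predicted probability of the desired
        # outcome (class 1).
        rows = np.tile(x, (len(candidate_values), 1))
        rows[:, attr_index] = candidate_values
        probs = model.predict_proba(rows)[:, 1]
        best = int(np.argmax(probs))
        return candidate_values[best], float(probs[best])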