no code implementations • 16 Oct 2024 • Anna Sokol, Nuno Moniz, Elizabeth Daly, Michael Hind, Nitesh Chawla
Large language models (LLMs) offer powerful capabilities but also introduce significant risks.
no code implementations • 29 May 2024 • Deng Pan, Nuno Moniz, Nitesh Chawla
The challenge of delivering efficient explanations is a critical barrier that prevents the adoption of model explanations in real-world applications.
no code implementations • 28 May 2024 • Yuying Duan, Yijun Tian, Nitesh Chawla, Michael Lemmon
Federated Learning (FL) is a distributed machine learning framework in which a set of local communities collaboratively learn a shared global model while retaining all training data locally within each community.
1 code implementation • 10 May 2024 • Xubin Ren, Jiabin Tang, Dawei Yin, Nitesh Chawla, Chao Huang
This survey aims to serve as a valuable resource for researchers and practitioners eager to leverage large language models in graph learning, and to inspire continued progress in this dynamic field.
no code implementations • 26 Feb 2024 • Anna Sokol, Nuno Moniz, Nitesh Chawla
However, this focus neglects the significant influence of model-specific biases on a model's performance.
1 code implementation • 1 Dec 2022 • Tânia Carvalho, Nuno Moniz, Luís Antunes, Nitesh Chawla
We also show that our method reduces time requirements by at least a factor of 9 and is a resource-efficient solution that ensures high performance without specialised hardware.
no code implementations • 17 Oct 2022 • Damien Dablain, Kristen N. Jacobson, Colin Bellinger, Mark Roberts, Nitesh Chawla
To demystify CNN decisions on imbalanced data, we focus on their latent features.
no code implementations • 12 Oct 2022 • Zhichun Guo, Chunhui Zhang, Yujie Fan, Yijun Tian, Chuxu Zhang, Nitesh Chawla
In this paper, we propose a novel adaptive KD framework, called BGNN, which sequentially transfers knowledge from multiple GNNs into a student GNN.
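The snippet does not detail how BGNN's sequential transfer works; below is a minimal sketch of the generic temperature-scaled distillation loss that teacher-to-student KD frameworks build on, written with NumPy. The function names and the temperature value are illustrative assumptions, not from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T produces softer distributions."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student predictions:
    the student is trained to match the teacher's soft targets."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.5]
aligned = kd_loss(teacher, [4.0, 1.0, 0.5])  # student agrees with teacher
shifted = kd_loss(teacher, [0.5, 1.0, 4.0])  # student disagrees
```

Distilling from multiple teachers, as BGNN's snippet suggests, would repeat such a loss per teacher; the sequencing strategy is the paper's contribution and is not reproduced here.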
no code implementations • 18 Jul 2022 • Md Nafee Al Islam, Yihong Ma, Pedro Alarcon Granadeno, Nitesh Chawla, Jane Cleland-Huang
While formal product documentation often provides example data plots with diagnostic suggestions, the sheer diversity of attributes, critical thresholds, and data interactions can be overwhelming to non-experts who subsequently seek help from discussion forums to interpret their data logs.
1 code implementation • 13 Jul 2022 • Damien Dablain, Bartosz Krawczyk, Nitesh Chawla
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data, which facilitates increases in both model accuracy and fairness.
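Reducing class imbalance in training data is classically done by synthetic oversampling; SMOTE-style interpolation (co-introduced by Chawla) is the standard building block. The sketch below shows only that generic interpolation step, not the paper's fairness-aware variant; the function name and neighbour count are illustrative assumptions.

```python
import numpy as np

def smote_like_sample(minority, k=2, rng=None):
    """Generate one synthetic minority point by interpolating between a
    random minority sample and one of its k nearest minority neighbours,
    the core idea behind SMOTE-style oversampling."""
    rng = rng or np.random.default_rng()
    x = minority[rng.integers(len(minority))]
    d = np.linalg.norm(minority - x, axis=1)
    neighbours = minority[np.argsort(d)[1:k + 1]]  # skip x itself
    nb = neighbours[rng.integers(len(neighbours))]
    gap = rng.random()  # random position along the segment x -> nb
    return x + gap * (nb - x)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_like_sample(minority, k=2, rng=np.random.default_rng(0))
```

A fairness-aware variant would additionally balance protected-group membership when choosing which samples to synthesise, as the snippet above describes.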
1 code implementation • 13 Jul 2022 • Damien Dablain, Colin Bellinger, Bartosz Krawczyk, Nitesh Chawla
We empirically study a convolutional neural network's internal representation of imbalanced image data and measure the generalization gap between a model's feature embeddings in the training and test sets, showing that the gap is wider for minority classes.
no code implementations • ICLR 2022 • Zhuoning Yuan, Zhishuai Guo, Nitesh Chawla, Tianbao Yang
The key idea of compositional training is to minimize a compositional objective function, where the outer function corresponds to an AUC loss and the inner function represents a gradient descent step for minimizing a traditional loss, e.g., the cross-entropy (CE) loss.
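Written compactly, the snippet's idea is to evaluate the AUC loss at the parameters obtained after one inner CE gradient step (the step size $\alpha$ is my notation, not from the snippet):

```latex
\min_{w} \; L_{\mathrm{AUC}}\bigl( w - \alpha \, \nabla_{w} L_{\mathrm{CE}}(w) \bigr)
```

The outer function is the AUC loss and the inner function is the gradient-descent update on the CE loss, matching the description above.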
no code implementations • Findings of the Association for Computational Linguistics 2020 • Chuxu Zhang, Lu Yu, Mandana Saebi, Meng Jiang, Nitesh Chawla
Multi-hop relation reasoning over a knowledge base aims to generate effective and interpretable relation predictions through reasoning paths.
no code implementations • 19 May 2020 • Lu Yu, Shichao Pei, Chuxu Zhang, Shangsong Liang, Xiao Bai, Nitesh Chawla, Xiangliang Zhang
Pairwise ranking models have been widely used to address recommendation problems.
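A minimal sketch of the standard pairwise ranking objective such models build on, Bayesian Personalized Ranking (BPR), in NumPy; this is the generic loss, not this paper's specific model, and the variable names are illustrative.

```python
import numpy as np

def bpr_loss(user_vec, pos_item_vec, neg_item_vec):
    """Bayesian Personalized Ranking loss for one (user, pos, neg) triple.

    Encourages the observed (positive) item to score higher for the user
    than a sampled unobserved (negative) item."""
    x_ui = user_vec @ pos_item_vec  # score of the observed item
    x_uj = user_vec @ neg_item_vec  # score of the sampled negative item
    # -log sigmoid(x_ui - x_uj): small when the positive outranks the negative
    return -np.log(1.0 / (1.0 + np.exp(-(x_ui - x_uj))))

rng = np.random.default_rng(0)
u = rng.normal(size=8)
loss_easy = bpr_loss(u, u, -u)   # positive item aligned with the user
loss_hard = bpr_loss(u, -u, u)   # positive and negative items swapped
```

Training minimizes this loss over sampled triples, pushing observed items above unobserved ones in each user's ranking.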
no code implementations • 12 Mar 2020 • Mandana Saebi, Steven Krieg, Chuxu Zhang, Meng Jiang, Nitesh Chawla
Path-based relational reasoning over knowledge graphs has become increasingly popular due to a variety of downstream applications such as question answering in dialogue systems, fact prediction, and recommender systems.
no code implementations • 10 Feb 2020 • Xian Wu, Chao Huang, Pablo Roblesgranda, Nitesh Chawla
The prevalence of wearable sensors (e.g., smart wristbands) is creating unprecedented opportunities not only to inform the health and wellness states of individuals, but also to assess and infer personal attributes, including demographic and personality attributes.
no code implementations • IJCNLP 2019 • Tianwen Jiang, Tong Zhao, Bing Qin, Ting Liu, Nitesh Chawla, Meng Jiang
In this work, we propose a new sequence labeling framework (as well as a new tag schema) to jointly extract the fact and condition tuples from statement sentences.
1 code implementation • 7 Dec 2018 • Piotr Szymański, Tomasz Kajdanowicz, Nitesh Chawla
Multi-label classification aims to classify instances with discrete non-exclusive labels.
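Two classic problem transformations for such non-exclusive labels are binary relevance (one binary task per label) and label powerset (each distinct label combination becomes one class). A minimal pure-Python sketch of both transformations, with a toy dataset of my own invention:

```python
# Toy multi-label dataset: each instance carries a set of labels.
samples = [
    {"news", "sports"},
    {"news"},
    {"sports"},
    {"news", "sports"},
]
labels = sorted({l for s in samples for l in s})

# Binary relevance: one 0/1 target column per label.
binary_relevance = {l: [int(l in s) for s in samples] for l in labels}

# Label powerset: each unique label combination maps to one class id.
combos = sorted({frozenset(s) for s in samples}, key=sorted)
label_powerset = [combos.index(frozenset(s)) for s in samples]
```

Binary relevance ignores label correlations; label powerset preserves them at the cost of a combinatorial class space — the trade-off multi-label methods like those in this line of work navigate.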
no code implementations • 13 Feb 2018 • Xian Wu, Baoxu Shi, Yuxiao Dong, Chao Huang, Nitesh Chawla
Neural collaborative filtering (NCF) and recurrent recommender systems (RRN) have been successful in modeling user-item relational data.
no code implementations • 21 Sep 2017 • Ashwin Bahulkar, Boleslaw K. Szymanski, Nitesh Chawla, Omar Lizardo, Kevin Chan
We find that personal preferences, in particular political views, and preferences for common activities help predict link formation and dissolution in both the behavioral and cognitive networks.
no code implementations • 28 Jul 2017 • Shuo Wang, Leandro L. Minku, Nitesh Chawla, Xin Yao
It provides a forum for international researchers and practitioners to share and discuss their original work on addressing new challenges and research issues in class imbalance learning, concept drift, and the combined issues of class imbalance and concept drift.
no code implementations • 14 Apr 2014 • Yuxiao Dong, Jie Tang, Nitesh Chawla, Tiancheng Lou, Yang Yang, Bai Wang
Our model can predict social status of individuals with 93% accuracy.