1 code implementation • 22 Feb 2024 • Kezhi Kong, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Chuan Lei, Christos Faloutsos, Huzefa Rangwala, George Karypis
Large Language Models (LLMs) trained on large volumes of data excel at various natural language tasks, but they struggle with tasks that require knowledge they have not been trained on.
1 code implementation • 30 Oct 2023 • Costas Mavromatis, Balasubramaniam Srinivasan, Zhengyuan Shen, Jiani Zhang, Huzefa Rangwala, Christos Faloutsos, George Karypis
Large Language Models (LLMs) can adapt to new tasks via in-context learning (ICL).
1 code implementation • 19 Oct 2023 • Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Shen Wang, Huzefa Rangwala, George Karypis
Recent advances in large language models have revolutionized many sectors, including the database industry.
1 code implementation • 14 Oct 2023 • Hengrui Zhang, Jiani Zhang, Balasubramaniam Srinivasan, Zhengyuan Shen, Xiao Qin, Christos Faloutsos, Huzefa Rangwala, George Karypis
Recent advances in tabular data generation have greatly enhanced synthetic data quality.
1 code implementation • 5 Oct 2023 • Zifeng Wang, Zichen Wang, Balasubramaniam Srinivasan, Vassilis N. Ioannidis, Huzefa Rangwala, Rishita Anubhai
Foundation models (FMs) are able to leverage large volumes of unlabeled data to demonstrate superior performance across a wide range of tasks.
1 code implementation • 8 Dec 2022 • Angeela Acharya, Siddhartha Sikdar, Sanmay Das, Huzefa Rangwala
Our method combines the estimation of a dependency graph and conditional probabilities from the target location with the use of a Gaussian copula to leverage the available information from the auxiliary locations.
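A minimal illustrative sketch of the copula step, not the paper's implementation: binary responses are drawn from a Gaussian copula whose correlation matrix stands in for the estimated dependency structure and whose marginals stand in for the conditional probabilities estimated at the target location.

```python
import numpy as np
from scipy.stats import norm

def sample_copula(corr, marginal_probs, n_samples, seed=0):
    """Draw binary responses whose dependence follows a Gaussian copula."""
    rng = np.random.default_rng(seed)
    d = len(marginal_probs)
    # Latent correlated Gaussian draws (the dependency structure).
    z = rng.multivariate_normal(mean=np.zeros(d), cov=corr, size=n_samples)
    # Map to uniforms via the standard normal CDF, then threshold against each
    # variable's marginal probability to obtain binary responses.
    u = norm.cdf(z)
    return (u < np.asarray(marginal_probs)).astype(int)

# Example: two variables with mild positive dependence.
corr = np.array([[1.0, 0.4], [0.4, 1.0]])
samples = sample_copula(corr, marginal_probs=[0.3, 0.6], n_samples=5000)
print(samples.mean(axis=0))  # approximately recovers the target marginals
```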
no code implementations • 9 Nov 2022 • Gil Sadeh, Zichen Wang, Jasleen Grewal, Huzefa Rangwala, Layne Price
In this paper, we propose a new peptide data augmentation scheme, where we train peptide language models on artificially constructed peptides that are small contiguous subsets of longer, wild-type proteins; we refer to the training peptides as "chopped proteins".
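A rough sketch of the augmentation idea, with window sizes and stride as illustrative choices rather than the paper's settings: training peptides are generated as small contiguous subsequences of a longer wild-type protein sequence.

```python
import random

def chop_protein(sequence, min_len=8, max_len=25, stride=4):
    """Yield contiguous peptide fragments ("chopped proteins") from a wild-type sequence."""
    peptides = []
    for start in range(0, len(sequence) - min_len + 1, stride):
        length = random.randint(min_len, max_len)
        fragment = sequence[start:start + length]
        if len(fragment) >= min_len:
            peptides.append(fragment)
    return peptides

# Toy wild-type protein sequence (illustrative only).
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
print(chop_protein(protein)[:5])
```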
no code implementations • 16 Aug 2022 • Jonathan Vasquez, Xavier Gitiaux, Huzefa Rangwala
Data owners face increasing liability for how the use of their data could harm under-privileged communities.
no code implementations • 26 Apr 2022 • Xavier Gitiaux, Huzefa Rangwala
To avoid discriminatory uses of their data, organizations can learn to map them into a representation that filters out information related to sensitive attributes.
no code implementations • 6 Apr 2022 • Sneha Mehta, Huzefa Rangwala, Naren Ramakrishnan
We show how such context simplification can improve the performance of MRC-based event extraction by more than 5% for actor extraction and more than 10% for target extraction.
1 code implementation • 10 Dec 2021 • Songgaojun Deng, Huzefa Rangwala, Yue Ning
(ii) Given spatiotemporal non-independent and identically distributed (non-IID) data, modeling hidden confounders for accurate causal effect estimation is not trivial.
no code implementations • 1 Sep 2021 • Yujing Chen, Zheng Chai, Yue Cheng, Huzefa Rangwala
We propose a novel approach, FedConD, to detect and deal with the concept drift on local devices and minimize the effect on the performance of models in asynchronous FL.
1 code implementation • 31 Aug 2021 • Jitin Krishnan, Antonios Anastasopoulos, Hemant Purohit, Huzefa Rangwala
Transliteration is very common on social media, but transliterated text is not adequately handled by modern neural models for various NLP tasks.
no code implementations • 28 May 2021 • Xavier Gitiaux, Huzefa Rangwala
Organizations that collect and sell data face increasing scrutiny for the discriminatory use of data.
1 code implementation • EMNLP (MRL) 2021 • Jitin Krishnan, Antonios Anastasopoulos, Hemant Purohit, Huzefa Rangwala
Predicting user intent and detecting the corresponding slots from text are two key problems in Natural Language Understanding (NLU).
no code implementations • 13 Nov 2020 • Qian Hu, Huzefa Rangwala
Group fairness requires that different groups should be treated similarly, which might be unfair to some individuals within a group.
no code implementations • 12 Oct 2020 • Zheng Chai, Yujing Chen, Ali Anwar, Liang Zhao, Yue Cheng, Huzefa Rangwala
By bridging the synchronous and asynchronous training through tiering, FedAT minimizes the straggler effect with improved convergence speed and test accuracy.
no code implementations • 15 Jun 2020 • Xavier Gitiaux, Huzefa Rangwala
Organizations that own data face increasing legal liability for its discriminatory use against protected demographic groups, extending to contractual transactions involving third parties' access to and use of the data.
1 code implementation • 26 Mar 2020 • Jitin Krishnan, Patrick Coronado, Hemant Purohit, Huzefa Rangwala
We build a common-knowledge concept recognition system for a Systems Engineer's Virtual Assistant (SEVA) which can be used for downstream tasks such as relation extraction, knowledge graph construction, and question-answering.
no code implementations • 9 Mar 2020 • Dom Huh, Sai Gurrapu, Frederick Olson, Huzefa Rangwala, Parth Pathak, Jana Kosecka
With advancements in deep model architectures, computer vision tasks can reach strong convergence, provided proper data preprocessing and model parameter initialization.
no code implementations • 4 Mar 2020 • Al Amin Hosain, Panneer Selvam Santhalingam, Parth Pathak, Huzefa Rangwala, Jana Kosecka
American Sign Language recognition is a difficult gesture recognition problem, characterized by fast, highly articulated gestures.
1 code implementation • 4 Mar 2020 • Jitin Krishnan, Hemant Purohit, Huzefa Rangwala
As deep networks struggle with sparse datasets, we show that this can be improved by sharing a base layer for multi-task learning and domain adversarial training.
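A hedged sketch of that setup, with layer sizes and task counts as illustrative assumptions: a shared base layer feeds task-specific heads, while a domain classifier is trained through a gradient-reversal layer so that the shared features become domain-invariant.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class SharedBaseMTL(nn.Module):
    def __init__(self, in_dim=300, hidden=128, n_task_classes=(2, 2), n_domains=2):
        super().__init__()
        # Base layer shared across tasks and domains.
        self.base = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_heads = nn.ModuleList(
            [nn.Linear(hidden, c) for c in n_task_classes])
        self.domain_head = nn.Linear(hidden, n_domains)

    def forward(self, x, lambd=1.0):
        h = self.base(x)
        task_logits = [head(h) for head in self.task_heads]
        # Gradient reversal pushes the base layer toward domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(h, lambd))
        return task_logits, domain_logits
```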
1 code implementation • 25 Feb 2020 • Jitin Krishnan, Hemant Purohit, Huzefa Rangwala
At present, the state-of-the-art unsupervised domain adaptation approaches for subjective text classification problems leverage unlabeled target data along with labeled source data.
no code implementations • 26 Dec 2019 • Qian Hu, Huzefa Rangwala
Traditional methods for students' performance prediction usually neglect the underlying relationships between multiple courses and how students acquire knowledge across them.
no code implementations • 21 Dec 2019 • Songgaojun Deng, Shusen Wang, Huzefa Rangwala, Lijing Wang, Yue Ning
Forecasting influenza-like illness (ILI) is of prime importance to epidemiologists and health-care providers.
1 code implementation • 26 Nov 2019 • Sneha Mehta, Huzefa Rangwala, Naren Ramakrishnan
Effective representation learning from text has been an active area of research in the fields of NLP and text mining.
no code implementations • 5 Nov 2019 • Yujing Chen, Yue Ning, Martin Slawski, Huzefa Rangwala
In this paper, we present an Asynchronous Online Federated Learning (ASO-Fed) framework, where the edge devices perform online learning with continuous streaming local data and a central server aggregates model parameters from clients.
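A minimal sketch of the asynchronous pattern described above, not the paper's exact update rule: the server blends each client's model into the global model as soon as it arrives, while clients take online SGD steps on newly streamed samples. The mixing weight alpha is an assumed, tunable quantity.

```python
import numpy as np

class AsyncFedServer:
    def __init__(self, dim, alpha=0.5):
        self.global_weights = np.zeros(dim)
        self.alpha = alpha  # assumed mixing weight, tunable

    def receive(self, client_weights):
        # Blend the incoming client model into the global model immediately,
        # without waiting for other clients (asynchronous aggregation).
        self.global_weights = (
            (1 - self.alpha) * self.global_weights + self.alpha * client_weights)
        return self.global_weights

def client_online_step(weights, x, y, lr=0.01):
    """One online SGD step on a newly streamed (x, y) pair (squared loss)."""
    grad = (weights @ x - y) * x
    return weights - lr * grad

# Example: a client streams one sample, updates locally, and sends its model.
server = AsyncFedServer(dim=3)
local = client_online_step(np.zeros(3), x=np.array([1.0, 2.0, 0.5]), y=1.0)
print(server.receive(local))
```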
no code implementations • 24 Sep 2019 • Al Amin Hosain, Panneer Selvam Santhalingam, Parth Pathak, Jana Kosecka, Huzefa Rangwala
Despite its similarity to the well-studied problem of human activity recognition, the use of 3D skeleton data in sign language recognition is rare.
no code implementations • 13 May 2019 • Yujing Chen, Yue Ning, Zheng Chai, Huzefa Rangwala
The attention mechanism of the proposed model seeks to extract feature representations from the input and learn a shared representation focused on time dimensions across multiple sensors.
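A small sketch of attention over the time dimension of multi-sensor features, with shapes and layer choices as illustrative assumptions rather than the paper's architecture: per-time-step scores are softmax-normalized and used to pool the sequence into a shared representation.

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Attend over time steps of concatenated multi-sensor features."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, x):
        # x: (batch, time, feat_dim); feat_dim concatenates all sensors' features.
        weights = torch.softmax(self.score(x), dim=1)   # (batch, time, 1)
        return (weights * x).sum(dim=1)                 # (batch, feat_dim)

# Example: 4 sequences, 20 time steps, 3 sensors x 16 features each.
pooled = TemporalAttentionPool(48)(torch.randn(4, 20, 48))
print(pooled.shape)  # torch.Size([4, 48])
```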
no code implementations • 18 Mar 2019 • Xavier Gitiaux, Huzefa Rangwala
Machine learning algorithms are increasingly involved in sensitive decision-making processes with adverse implications for individuals.
no code implementations • 26 Feb 2019 • Qian Hu, Huzefa Rangwala
Prior research on students' grade prediction includes shallow linear models; however, students' learning is a highly complex process that involves the accumulation of knowledge across a sequence of courses and cannot be sufficiently modeled by these linear models.
no code implementations • 17 Jan 2018 • Zhiyun Ren, Xia Ning, Huzefa Rangwala
Grade prediction methods seek to estimate a grade that a student may achieve in a course that she may take in the future (e.g., next term).
no code implementations • 15 Sep 2017 • Zhiyun Ren, Xia Ning, Huzefa Rangwala
The grade of a student in a course is modeled as the similarity of their latent representations in the "knowledge" space.
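A minimal sketch of that formulation, with the dot-product similarity and latent dimensionality as illustrative assumptions: the predicted grade is scored by the similarity between the student's and the course's latent vectors in a shared knowledge space.

```python
import numpy as np

def predict_grade(student_vec, course_vec, bias=0.0):
    """Score a student-course pair by latent-space similarity."""
    return float(student_vec @ course_vec) + bias

rng = np.random.default_rng(0)
student = rng.normal(size=16)   # latent "knowledge state" of a student
course = rng.normal(size=16)    # latent "knowledge requirements" of a course
print(predict_grade(student, course))
```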
no code implementations • 6 Jun 2017 • Azad Naik, Huzefa Rangwala
Our experimental evaluation on text and image datasets with varying distributions of features, classes, and instances shows up to a 3x speed-up on massive datasets and up to 45% lower memory requirements for storing the weight vectors of the learned model, without any significant loss in classification accuracy (and improvements for some datasets).
no code implementations • 6 Jun 2017 • Azad Naik, Anveshi Charuvaka, Huzefa Rangwala
Multi-task learning (MTL) is a supervised learning paradigm in which the prediction models for several related tasks are learned jointly to achieve better generalization performance.
no code implementations • 5 Jun 2017 • Azad Naik, Huzefa Rangwala
In this paper, we propose two different data-driven approaches (local and global) for hierarchical structure modification that identifies and flattens inconsistent nodes present within the hierarchy.
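A toy sketch of the flattening operation, with the parent/children dictionary representation as an assumption: an inconsistent internal node is removed and its children are re-attached to its parent.

```python
def flatten_node(children, parent_of, node):
    """Remove `node` from the hierarchy, promoting its children to its parent."""
    parent = parent_of[node]
    for child in children.get(node, []):
        parent_of[child] = parent
        children[parent].append(child)
    children[parent].remove(node)
    children.pop(node, None)
    parent_of.pop(node, None)

# Toy hierarchy: root -> {A, B}, A -> {A1, A2}; flatten the inconsistent node A.
children = {"root": ["A", "B"], "A": ["A1", "A2"]}
parent_of = {"A": "root", "B": "root", "A1": "A", "A2": "A"}
flatten_node(children, parent_of, "A")
print(children)  # {'root': ['B', 'A1', 'A2']}
```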
no code implementations • 8 May 2016 • Zhiyun Ren, Huzefa Rangwala, Aditya Johri
The past few years have seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs).
no code implementations • 2 Mar 2016 • Azad Naik, Huzefa Rangwala
Experimental comparisons of top-down HC with our modified hierarchy on a wide range of datasets show classification performance improvements over the baseline hierarchy (i.e., the one defined by experts), the clustered hierarchy, and flattening-based hierarchy modification approaches.