1 code implementation • 10 Oct 2024 • Hoin Jung, Taeuk Jang, Xiaoqian Wang
Recent advancements in Vision-Language Models (VLMs) have enabled complex multimodal tasks by processing text and image data simultaneously, significantly enhancing the field of artificial intelligence.
1 code implementation • 18 Jun 2024 • Xiaoze Liu, Ting Sun, Tianyang Xu, Feijie Wu, Cunxiang Wang, Xiaoqian Wang, Jing Gao
Large Language Models (LLMs) have transformed machine learning but raised significant legal concerns due to their potential to produce text that infringes on copyrights, resulting in several high-profile lawsuits.
1 code implementation • 1 Apr 2024 • Xiaoze Liu, Feijie Wu, Tianyang Xu, Zhuo Chen, Yichi Zhang, Xiaoqian Wang, Jing Gao
In this paper, we propose GraphEval to evaluate an LLM's performance using a substantially large test dataset.
1 code implementation • CVPR 2024 • Xidong Wu, Shangqian Gao, Zeyu Zhang, Zhenzhen Li, Runxue Bao, Yanfu Zhang, Xiaoqian Wang, Heng Huang
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step processes that require domain-specific expertise, making their widespread adoption challenging.
no code implementations • 10 Mar 2024 • Yipei Wang, Bing He, Shannon Risacher, Andrew Saykin, Jingwen Yan, Xiaoqian Wang
Specifically, we introduce a monotonicity constraint that encourages the model to predict disease risk in a consistent and ordered manner across follow-up visits.
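A monotonicity constraint of this kind can be sketched as a penalty that punishes any decrease in predicted risk between consecutive follow-up visits. This is an illustrative implementation, not the paper's exact formulation; the function name and squared-violation form are assumptions.

```python
import numpy as np

def monotonicity_penalty(risk_scores):
    """Penalize decreases in predicted disease risk across ordered follow-up visits.

    risk_scores: array of shape (n_patients, n_visits), ordered by visit time.
    Returns the mean squared magnitude of any decrease between consecutive visits.
    """
    diffs = np.diff(risk_scores, axis=1)      # score[t+1] - score[t]
    violations = np.clip(-diffs, 0.0, None)   # positive where risk decreased
    return float(np.mean(violations ** 2))
```

In training, such a term would typically be added to the task loss with a weighting coefficient, so predictions that decrease over time are discouraged but not strictly forbidden.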
no code implementations • CVPR 2024 • Taeuk Jang, Xiaoqian Wang
Learning fair representation in deep learning is essential to mitigate discriminatory outcomes and enhance trustworthiness.
no code implementations • 31 Mar 2023 • Junyi Chai, Xiaoqian Wang
Can a model achieve both fairness and accuracy?
no code implementations • 19 Feb 2023 • Tianci Liu, Haoyu Wang, Yaqing Wang, Xiaoqian Wang, Lu Su, Jing Gao
This new framework utilizes data that have similar labels when estimating fairness on a particular label group for better stability, and can unify DP and EOp.
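The two fairness notions the snippet unifies, demographic parity (DP) and equal opportunity (EOp), can be written down directly as gap metrics. This is a minimal sketch of the standard definitions for a binary sensitive attribute, not the paper's unified estimator.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """DP: absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """EOp: absolute difference in true-positive rates between two groups."""
    y_pred, y_true, group = map(np.asarray, (y_pred, y_true, group))
    pos = y_true == 1
    return abs(y_pred[(group == 0) & pos].mean()
               - y_pred[(group == 1) & pos].mean())
```

On small label groups these empirical rates are noisy, which is the instability that borrowing data from similar labels is meant to address.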
no code implementations • CVPR 2023 • Taeuk Jang, Xiaoqian Wang
We theoretically show that the triplet loss amplifies the bias in self-supervised representation learning.
1 code implementation • NeurIPS 2022 • Junyi Chai, Taeuk Jang, Xiaoqian Wang
Most existing work on fairness assumes that demographic information is available in the training set.
no code implementations • 9 May 2022 • Xiaoqian Wang, Rob J Hyndman, Feng Li, Yanfei Kang
Forecast combinations have flourished remarkably in the forecasting community and, in recent years, have become part of the mainstream of forecasting research and activities.
no code implementations • 27 Mar 2022 • Yipei Wang, Xiaoqian Wang
The growing need for trustworthy machine learning has led to a blossoming of interpretability research.
no code implementations • NeurIPS 2021 • Yipei Wang, Xiaoqian Wang
With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow, especially in high-stakes fields.
no code implementations • 9 Nov 2021 • Yipei Wang, Xiaoqian Wang
In this paper, we propose SITE, a self-interpretable model with transformation-equivariant interpretations.
no code implementations • NeurIPS 2021 • Taeuk Jang, Pengyi Shi, Xiaoqian Wang
Because we only need an estimated probability distribution of the model output rather than the structure of the classification model itself, our post-processing method can be applied to a wide range of classifiers, improving fairness in a model-agnostic manner while preserving privacy.
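One common model-agnostic post-processing scheme in this spirit (a hedged illustration, not necessarily this paper's method) chooses a separate decision threshold per group from the distribution of output scores, so all groups receive positive predictions at the same rate.

```python
import numpy as np

def fair_postprocess(scores, group, target_rate=0.5):
    """Threshold each group's scores at its own quantile so that every group
    is assigned positive predictions at the same target rate. Uses only the
    model's output scores, never its internals (model-agnostic)."""
    scores, group = np.asarray(scores), np.asarray(group)
    preds = np.zeros_like(scores, dtype=int)
    for g in np.unique(group):
        mask = group == g
        t = np.quantile(scores[mask], 1 - target_rate)
        preds[mask] = (scores[mask] >= t).astype(int)
    return preds
```

Equalizing per-group positive rates this way enforces demographic parity by construction, whatever classifier produced the scores.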
no code implementations • 29 Sep 2021 • Taeuk Jang, Xiaoqian Wang, Heng Huang
To achieve this goal, we reformulate the data input by eliminating the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
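The "marginal contribution of the sensitive feature" can be estimated in a simple, model-agnostic way: compare predictions before and after breaking the sensitive column's association with the rest of the data. This permutation-based sketch is an assumption for illustration, not the paper's estimator.

```python
import numpy as np

def marginal_contribution(model, X, sensitive_idx):
    """Estimate the sensitive feature's marginal contribution to predictions:
    the mean absolute change in model output when that column is replaced by
    a random permutation of itself (breaking its association with the rest)."""
    X = np.asarray(X, dtype=float)
    X_perm = X.copy()
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    X_perm[:, sensitive_idx] = rng.permutation(X_perm[:, sensitive_idx])
    return float(np.mean(np.abs(model(X) - model(X_perm))))
```

A model whose output ignores the sensitive column scores exactly zero, which is the property a fairness regularizer minimizing this quantity would drive toward.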
2 code implementations • ICLR 2021 • Rui Wang, Xiaoqian Wang, David I. Inouye
This intrinsic explanation approach enables layer-wise explanations, explanation regularization of the model during training, and fast explanation computation at test time.
no code implementations • 18 Aug 2020 • Yifu Zhou, Ziheng Duan, Haoyan Xu, Jie Feng, Anni Ren, Yueyang Wang, Xiaoqian Wang
In this paper, we propose an MTS forecasting framework that captures the long-term trends and short-term fluctuations of time series in parallel.
1 code implementation • 19 Jul 2020 • Xiaoqian Wang, Yanfei Kang, Rob J. Hyndman, Feng Li
Providing forecasts for ultra-long time series plays a vital role in various activities, such as investment decisions, industrial production arrangements, and farm management.
no code implementations • 6 Sep 2019 • Xiaoqian Wang, Heng Huang
To achieve this goal, we reformulate the data input by removing the sensitive information and strengthen model fairness by minimizing the marginal contribution of the sensitive feature.
2 code implementations • 8 Aug 2019 • Xiaoqian Wang, Yanfei Kang, Fotios Petropoulos, Feng Li
In the training phase, we use a collection of time series to train a model that learns how time series features affect the interval forecasting accuracy of different forecasting methods. This makes the proposed framework interpretable in terms of each feature's contribution to the models' uncertainty prediction.
no code implementations • 2 Jul 2019 • Feiping Nie, Zhanxuan Hu, Xiaoqian Wang, Rong Wang, Xuelong Li, Heng Huang
This work aims to solve problems involving intractable sparsity-inducing norms that often arise in machine learning tasks such as multi-task learning, subspace clustering, feature selection, and robust principal component analysis.
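A representative sparsity-inducing norm from this family is the row-wise l2,1 norm used in multi-task feature selection; the paper addresses a broader class, so this is only an illustrative instance.

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm of a matrix: the sum of the Euclidean norms of its rows.
    As a regularizer, it drives entire rows of W to zero, selecting features
    that are shared (or discarded) jointly across all tasks."""
    return float(np.sum(np.linalg.norm(W, axis=1)))
```

Unlike an entrywise l1 penalty, which zeros individual coefficients independently, the l2,1 penalty couples all tasks in a row, which is what makes its proximal subproblems nontrivial.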
no code implementations • NeurIPS 2017 • Hong Chen, Xiaoqian Wang, Cheng Deng, Heng Huang
Among them, learning models with grouped variables have shown competitive performance for prediction and variable selection.
no code implementations • NeurIPS 2017 • Xiaoqian Wang, Hong Chen, Weidong Cai, Dinggang Shen, Heng Huang
Linear regression models have been successfully used for function estimation and model selection in high-dimensional data analysis.
no code implementations • NeurIPS 2017 • Feiping Nie, Xiaoqian Wang, Cheng Deng, Heng Huang
In graph based co-clustering methods, a bipartite graph is constructed to depict the relation between features and samples.