no code implementations • 13 Mar 2025 • Zhiyu Mou, Miao Xu, Rongquan Bai, Zhuoran Yang, Chuan Yu, Jian Xu, Bo Zheng
However, the NCB problem presents significant challenges due to its constrained bi-level structure and the typically large number of advertisers involved.
1 code implementation • 8 Mar 2025 • Zidu Wang, Jiankuo Zhao, Miao Xu, Xiangyu Zhu, Zhen Lei
We collect a dataset of over 250 high-fidelity real hair scans paired with 3D face data to serve as a prior for the 3D morphable hair.
1 code implementation • 7 Mar 2025 • Wenhao Liang, Wei Zhang, Yue Lin, Miao Xu, Olaf Maennel, Weitong Chen
Medical image segmentation is fundamental for computer-aided diagnostics, providing accurate delineation of anatomical structures and pathological regions.
no code implementations • 3 Mar 2025 • Wenhao Liang, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, Weitong Chen
Kolmogorov-Arnold Networks (KANs) are neural architectures inspired by the Kolmogorov-Arnold representation theorem that leverage B-spline parameterizations for flexible, locally adaptive function approximation.
1 code implementation • 16 Jan 2025 • Liangwei Nathan Zheng, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, Weitong Chen
In this work, we analyze the behavior of KANs through the lens of spline knots and derive the lower and upper bound for the number of knots in B-spline-based KANs.
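As background on the B-spline parameterization that such knot analyses build on (a sketch of standard B-spline bookkeeping, not the paper's code): for an open-uniform B-spline basis of degree k on a grid of G intervals, the knot vector has G + 2k + 1 knots and spans G + k basis functions, so the knot count grows linearly with the grid resolution.

```python
def bspline_counts(grid_intervals: int, degree: int):
    """Knot and basis-function counts for an open-uniform B-spline basis.

    An open-uniform knot vector repeats each boundary knot (degree + 1)
    times, giving grid_intervals + 2*degree + 1 knots in total and
    grid_intervals + degree basis functions.
    """
    n_knots = grid_intervals + 2 * degree + 1
    n_basis = grid_intervals + degree
    return n_knots, n_basis

# a cubic (degree-3) basis on 5 grid intervals
knots, basis = bspline_counts(5, 3)
```

Doubling the grid roughly doubles both counts, which is why knot bounds translate directly into parameter-count bounds for B-spline-based KAN layers.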
1 code implementation • 18 Dec 2024 • Chenhao Zhang, Shaofei Shen, Weitong Chen, Miao Xu
We propose a novel method, Inhibited Synthetic PostFilter (ISPF), to tackle this challenge from two perspectives: First, the Inhibited Synthetic, by reducing the synthesized forgetting information; Second, the PostFilter, by fully utilizing the retaining-related information in synthesized samples.
1 code implementation • 16 Oct 2024 • Liangwei Nathan Zheng, Zhengyang Li, Chang George Dong, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, Weitong Chen
We observed that IRTS can be divided into two specialized types: Natural Irregular Time Series (NIRTS) and Accidental Irregular Time Series (AIRTS).
no code implementations • 16 Oct 2024 • Liangwei Nathan Zheng, Chang George Dong, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, Weitong Chen
We compare the performance of LLMs against simpler baseline models, such as single-layer linear models and randomly initialized LLMs.
1 code implementation • 4 Sep 2024 • Wenwu Guo, Jinlin Wu, Zhen Chen, Qingxiang Zhao, Miao Xu, Zhen Lei, Hongbin Liu
Compared with 2D instrument tracking methods, 3D instrument tracking has broader value in clinical practice, but is also more challenging due to weak texture, occlusion, and lack of Computer-Aided Design (CAD) models for 3D registration.
no code implementations • 12 Jun 2024 • Chenhao Zhang, Shaofei Shen, Yawen Zhao, Weitong Tony Chen, Miao Xu
However, imbalance in the original data can hinder both these proxies and the unlearning itself, particularly when the forgetting data consists predominantly of the majority class.
1 code implementation • 31 Mar 2024 • Shaofei Shen, Chenhao Zhang, Yawen Zhao, Alina Bialkowski, Weitong Tony Chen, Miao Xu
Leveraging this approximation, we adapt the original model to eliminate information from the forgotten data at the representation level.
no code implementations • 5 Mar 2024 • Zhen Gong, Lvyin Niu, Yang Zhao, Miao Xu, Zhenzhe Zheng, Haoqi Zhang, Zhilin Zhang, Fan Wu, Rongquan Bai, Chuan Yu, Jian Xu, Bo Zheng
Through extensive offline and online experiments, we demonstrate the effectiveness and efficiency of our method, and we obtain a 7.01% lift in Gross Merchandise Volume, a 7.42% lift in Return on Investment, and a 3.26% lift in ad buy count.
1 code implementation • 30 Jan 2024 • Shaofei Shen, Chenhao Zhang, Alina Bialkowski, Weitong Chen, Miao Xu
To address this shortcoming, the present study undertakes a causal analysis of unlearning and introduces a novel framework termed Causal Machine Unlearning (CaMU).
no code implementations • 25 Feb 2023 • Yi Gao, Miao Xu, Min-Ling Zhang
Moreover, theoretical findings reveal that calculating a transition matrix from label correlations in multi-labeled CLL (ML-CLL) requires multi-labeled data, which is unavailable in ML-CLL.
3 code implementations • 16 Oct 2022 • Jonathan Wilton, Abigail M. Y. Koay, Ryan K. L. Ko, Miao Xu, Nan Ye
Key to our approach is a new interpretation of decision tree algorithms for positive and negative data as recursive greedy risk minimization algorithms.
no code implementations • 13 Aug 2022 • Tong Wang, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, Ting Wang
Existing defenses are mainly built upon the observation that the backdoor trigger is usually of small size or affects the activation of only a few neurons.
no code implementations • 19 May 2022 • Yawen Zhao, Mingzhe Zhang, Chenhao Zhang, Weitong Chen, Nan Ye, Miao Xu
This is because AdaPU learns a weak classifier and its weight using a weighted positive-negative (PN) dataset with some negative data weights: the dataset is derived from the original PU data, the data weights are determined by the current weighted classifier combination, and some of these weights are negative.
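The weighted-PN construction rests on the standard rewriting of the classification risk in terms of positive and unlabeled data. As related background (this is the non-negative PU risk estimator of Kiryo et al., 2017, not AdaPU itself), the estimator can be sketched as:

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def sigmoid_loss(z, y):
    # surrogate loss 1 / (1 + exp(y * z)): small when sign(z) matches label y
    return 1.0 / (1.0 + math.exp(y * z))

def nnpu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk estimate in the style of Kiryo et al. (2017).

    scores_p: classifier scores on labeled positive samples
    scores_u: classifier scores on unlabeled samples
    prior:    assumed class prior pi = P(y = +1)
    """
    risk_p_pos = prior * _mean([sigmoid_loss(z, +1) for z in scores_p])
    risk_p_neg = prior * _mean([sigmoid_loss(z, -1) for z in scores_p])
    risk_u_neg = _mean([sigmoid_loss(z, -1) for z in scores_u])
    # the negative-class risk is estimated as R_u^- minus pi * R_p^-;
    # clipping it at zero keeps the overall estimate non-negative
    return risk_p_pos + max(0.0, risk_u_neg - risk_p_neg)
```

The subtracted term pi * R_p^- is exactly where negative data weights enter a PU-derived PN dataset, which is the difficulty the abstract alludes to.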
1 code implementation • 9 May 2022 • Yueying Kao, Bowen Pan, Miao Xu, Jiangjing Lyu, Xiangyu Zhu, Yuanzhang Chang, Xiaobo Li, Zhen Lei
In 3D face reconstruction, orthogonal projection has been widely employed to substitute perspective projection to simplify the fitting process.
Ranked #3 on Head Pose Estimation on BIWI
no code implementations • 17 Dec 2021 • Guanhua Ye, Hongzhi Yin, Tong Chen, Miao Xu, Quoc Viet Hung Nguyen, Jiangning Song
Driven by growing attention to personal healthcare and by the pandemic, E-health is rapidly gaining popularity.
no code implementations • 29 Sep 2021 • Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.
no code implementations • 11 Jun 2021 • Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) utilizes instances with partial labels (PLs), where a PL includes several candidate labels but only one is the true label (TL).
1 code implementation • 11 Jun 2021 • Chao Wen, Miao Xu, Zhilin Zhang, Zhenzhe Zheng, Yuhui Wang, Xiangyu Liu, Yu Rong, Dong Xie, Xiaoyang Tan, Chuan Yu, Jian Xu, Fan Wu, Guihai Chen, Xiaoqiang Zhu, Bo Zheng
Third, to deploy MAAB in the large-scale advertising system with millions of advertisers, we propose a mean-field approach.
no code implementations • 10 Mar 2021 • Changwei Zou, Zhenqi Hao, Xiangyu Luo, Shusen Ye, Qiang Gao, Xintong Li, Miao Xu, Peng Cai, Chengtian Lin, Xingjiang Zhou, Dung-Hai Lee, Yayu Wang
To elucidate the superconductor-to-metal transition at the end of the superconducting dome, the overdoped regime has recently stepped onto the center stage of cuprate research.
no code implementations • 5 Dec 2020 • Zhilin Zhang, Xiangyu Liu, Zhenzhe Zheng, Chenrui Zhang, Miao Xu, Junwei Pan, Chuan Yu, Fan Wu, Jian Xu, Kun Gai
In e-commerce advertising, the ad platform usually relies on auction mechanisms to optimize different performance metrics, such as user experience, advertiser utility, and platform revenue.
1 code implementation • NeurIPS 2020 • Long Chen, Yuan Yao, Feng Xu, Miao Xu, Hanghang Tong
Collaborative filtering has been widely used in recommender systems.
no code implementations • 5 Oct 2020 • Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.
no code implementations • 3 Sep 2020 • Zhaoqing Peng, Junqi Jin, Lan Luo, Yaodong Yang, Rui Luo, Jun Wang, Wei-Nan Zhang, Haiyang Xu, Miao Xu, Chuan Yu, Tiejian Luo, Han Li, Jian Xu, Kun Gai
To drive purchases in online advertising, it is in the advertiser's great interest to optimize the sequential advertising strategy, whose performance and interpretability are both important.
no code implementations • NeurIPS 2020 • Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
no code implementations • ICML 2020 • Xiaotian Hao, Zhaoqing Peng, Yi Ma, Guan Wang, Junqi Jin, Jianye Hao, Shan Chen, Rongquan Bai, Mingzhou Xie, Miao Xu, Zhenzhe Zheng, Chuan Yu, Han Li, Jian Xu, Kun Gai
In E-commerce, advertising is essential for merchants to reach their target users.
1 code implementation • ICML 2020 • Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
no code implementations • ICLR Workshop LLD 2019 • Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.
no code implementations • 29 Jan 2019 • Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama
May there be a new sample selection method that can outperform the latest importance reweighting method in the deep learning age?
1 code implementation • ICML 2020 • Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama
Given data with noisy labels, over-parameterized deep networks can gradually memorize the data, and fit everything in the end.
no code implementations • 27 Sep 2018 • Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama
To handle these issues, by exploiting the memorization effect of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations.
no code implementations • 13 Sep 2018 • Takeshi Teshima, Miao Xu, Issei Sato, Masashi Sugiyama
On the other hand, matrix completion (MC) methods can recover a low-rank matrix from various information deficits by using the principle of low-rank completion.
no code implementations • 23 May 2018 • Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama
We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.
5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama
Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
Ranked #9 on Learning with noisy labels on CIFAR-10N-Random3
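The core mechanism of this co-teaching approach is small-loss sample selection: each of two networks keeps the fraction of samples with the smallest losses (likely clean under the memorization effect) and passes them to its peer for the update. A minimal sketch of the selection step (toy loss values; the forget-rate schedule and the two-network training loop are omitted):

```python
def select_small_loss(losses, forget_rate):
    """Co-teaching-style selection: keep the (1 - forget_rate) fraction of
    samples with the smallest losses. In co-teaching, the samples selected
    by one network are used to update the other network."""
    keep = int(len(losses) * (1.0 - forget_rate))
    # indices sorted by ascending loss; small-loss samples are likely clean
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return order[:keep]

# with a 50% forget rate, the two smallest-loss samples survive
clean_idx = select_small_loss([0.9, 0.1, 0.5, 0.2], 0.5)
```

In practice the forget rate is ramped up over epochs, since networks memorize noisy labels only gradually.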
no code implementations • 15 Feb 2018 • Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen
Feature missing is a serious problem in many applications, which may lead to low quality of training data and further significantly degrade the learning performance.
no code implementations • 4 Nov 2014 • Miao Xu, Rong Jin, Zhi-Hua Zhou
In particular, the proposed algorithm computes the low rank approximation of the target matrix based on (i) the randomly sampled rows and columns, and (ii) a subset of observed entries that are randomly sampled from the matrix.
no code implementations • NeurIPS 2013 • Miao Xu, Rong Jin, Zhi-Hua Zhou
In standard matrix completion theory, it is required to have at least $O(n\ln^2 n)$ observed entries to perfectly recover a low-rank matrix $M$ of size $n\times n$, leading to a large number of observations when $n$ is large.
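To make the $O(n\ln^2 n)$ scaling concrete (a back-of-the-envelope sketch assuming a unit constant, which the bound does not actually specify): for $n = 1000$, the suggested number of observations is a small fraction of all $n^2$ entries.

```python
import math

def mc_sample_bound(n, c=1.0):
    """Observations suggested by the O(n ln^2 n) bound; the constant c
    is a placeholder assumption, not taken from the paper."""
    return c * n * math.log(n) ** 2

n = 1000
needed = mc_sample_bound(n)       # ~4.8e4 entries
fraction = needed / (n * n)       # share of all n^2 entries
```

So for n = 1000 the bound asks for under 5% of the matrix, and the advantage over observing all entries widens as n grows.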