1 code implementation • 6 Feb 2025 • Weizhi Li, Natalie Klein, Brendan Gifford, Elizabeth Sklute, Carey Legett, Samuel Clegg
This regularizer serves a dual purpose: it mitigates overfitting by constraining the distributional difference between predictions and noisy targets.
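One common way to measure such a distributional difference is the KL divergence between the target distribution and the model's predicted distribution. The sketch below is illustrative only — the names and the choice of KL are assumptions, not the paper's exact regularizer:

```python
import numpy as np

def kl_regularizer(pred_probs, target_probs, eps=1e-12):
    """KL(target || pred): one plausible penalty on the distributional
    difference between model predictions and (noisy) targets.

    Illustrative sketch only; the paper's regularizer may differ.
    """
    p = np.clip(target_probs, eps, 1.0)  # avoid log(0)
    q = np.clip(pred_probs, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))
```

Added to a standard training loss, a term like this discourages the predicted distribution from drifting far from the (noisy) target distribution.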
no code implementations • 13 Jan 2025 • Weizhi Li
Unlike image classification and annotation, where deep network models have achieved dominant performance over traditional computer vision algorithms, deep learning for automatic image segmentation still faces critical challenges.
no code implementations • 7 Jan 2025 • Weizhi Li, Visar Berisha, Gautam Dasarathy
This tutorial extends active learning concepts to two-sample testing within this label-costly setting while maintaining statistical validity and high testing power.
no code implementations • 12 Mar 2023 • Weizhi Li, Haotai Liang, Chen Dong, Xiaodong Xu, Ping Zhang, Kaijun Liu
Semantic communication serves as a novel paradigm and has attracted broad interest from researchers.
no code implementations • 30 Jan 2023 • Weizhi Li, Prad Kadambi, Pouria Saidi, Karthikeyan Natesan Ramamurthy, Gautam Dasarathy, Visar Berisha
The classification model is adaptively updated and used to predict where the (unlabelled) features have a high dependency on labels; labeling these "high-dependency" features increases the power of the proposed testing framework.
1 code implementation • 17 Nov 2021 • Weizhi Li, Gautam Dasarathy, Karthikeyan Natesan Ramamurthy, Visar Berisha
Two-sample tests evaluate whether two samples are realizations of the same distribution (the null hypothesis) or two different distributions (the alternative hypothesis).
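A minimal concrete instance of this setup is a permutation two-sample test: under the null hypothesis, the two samples are exchangeable, so shuffling the pooled data should not change the test statistic. The sketch below uses the difference of means as the statistic; the function name and statistic are illustrative choices, not the paper's method:

```python
import numpy as np

def permutation_two_sample_test(x, y, n_perm=1000, seed=0):
    """Permutation test on |mean(x) - mean(y)|.

    Under the null (same distribution), shuffling the pooled sample
    leaves the statistic's distribution unchanged, so the p-value is
    the fraction of permuted statistics at least as extreme as the
    observed one.
    """
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one smoothing keeps p > 0
```

A small p-value supports the alternative hypothesis that the two samples come from different distributions.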
1 code implementation • NeurIPS 2020 • Weizhi Li, Gautam Dasarathy, Karthikeyan Natesan Ramamurthy, Visar Berisha
We theoretically analyze the proposed framework and show that the query complexity of our active learning algorithm depends naturally on the intrinsic complexity of the underlying manifold.
no code implementations • 7 Jan 2020 • Weizhi Li, Gautam Dasarathy, Visar Berisha
Regularization is an effective way to promote the generalization performance of machine learning models.
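The textbook example of this effect is L2 (ridge) regularization, where a penalty on the weight norm trades a little bias for lower variance. A minimal sketch (the closed-form ridge solution, not anything specific to this paper):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: argmin ||Xw - y||^2 + lam * ||w||^2.

    Larger lam shrinks the weights toward zero, which typically
    improves generalization when the unregularized fit overfits.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

Increasing `lam` monotonically shrinks the norm of the fitted weights, which is the mechanism behind the generalization benefit.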
no code implementations • 12 Sep 2019 • Mei Wang, Weizhi Li, Yan Yan
Session-based Recurrent Neural Networks (RNNs) are gaining popularity for recommendation tasks, owing to the high autocorrelation of user behavior within the latest session and the effectiveness of RNNs at capturing sequence-order information.
no code implementations • 2 Mar 2017 • Yanyan Geng, Guohui Zhang, Weizhi Li, Yi Gu, Ru-Ze Liang, Gaoyuan Liang, Jingbin Wang, Yanbin Wu, Nitin Patil, Jing-Yan Wang
In this paper, we study the problem of image tag completion and propose a novel method for this problem based on a popular image representation method, the convolutional neural network (CNN).
no code implementations • 27 Sep 2016 • Yanyan Geng, Ru-Ze Liang, Weizhi Li, Jingbin Wang, Gaoyuan Liang, Chenhao Xu, Jing-Yan Wang
The CNN model is used to represent the multi-instance data point, and a classifier function is used to predict the label from its CNN representation.
no code implementations • 16 Aug 2016 • Ru-Ze Liang, Wei Xie, Weizhi Li, Hongqi Wang, Jim Jing-Yan Wang, Lisa Taylor
We map the data of the two domains into a single common space and learn a classifier in that space.
no code implementations • 7 Jun 2016 • Ru-Ze Liang, Wei Xie, Weizhi Li, Xin Du, Jim Jing-Yan Wang, Jingbin Wang
Existing semi-supervised structured output prediction methods learn a global predictor for all the data points in a data set, which ignores differences in the local distributions of the data set and their effects on structured output prediction.