no code implementations • 12 Oct 2022 • Zhanyu Wang, Guang Cheng, Jordan Awan
We analyze a DP bootstrap procedure that releases multiple private bootstrap estimates to infer the sampling distribution and construct confidence intervals.
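The general idea can be illustrated with a minimal sketch: release several noisy bootstrap means and form a percentile interval from them. This is not the paper's procedure — the Laplace mechanism, the naive even split of the privacy budget across releases, and the `dp_bootstrap_ci` helper are all illustrative assumptions.

```python
import numpy as np

def dp_bootstrap_ci(data, B=200, epsilon=1.0, alpha=0.05, seed=0):
    """Release B privatized bootstrap means of data in [0, 1] and
    build a percentile confidence interval from the noisy releases.
    A sketch only: uses the Laplace mechanism with crude even
    budget-splitting, not the paper's mechanism or accounting."""
    rng = np.random.default_rng(seed)
    n = len(data)
    # Sensitivity of a mean over n points bounded in [0, 1].
    sensitivity = 1.0 / n
    # Split the total budget epsilon evenly across the B releases.
    scale = sensitivity * B / epsilon
    estimates = []
    for _ in range(B):
        resample = rng.choice(data, size=n, replace=True)
        estimates.append(resample.mean() + rng.laplace(scale=scale))
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lo, hi, estimates
```

The noisy releases stand in for the bootstrap sampling distribution, so the interval widens to reflect both sampling variability and the privacy noise.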
1 code implementation • COLING 2022 • Zhanyu Wang, Xiao Zhang, Hyokun Yun, Choon Hui Teo, Trishul Chilimbi
In contrast to traditional exhaustive search, where every document is scored against a query, selective search first clusters documents into several groups so that the search can be limited to one group or only a few groups.
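The cluster-then-route idea can be sketched as follows, assuming dense document vectors and inner-product relevance; the plain k-means loop and the `selective_search` helper are illustrative, not the paper's clustering or retrieval pipeline.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means for grouping document vectors into k clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each document to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def selective_search(query, X, centroids, labels, top=1):
    """Score only the documents in the cluster(s) nearest the query,
    instead of scoring the whole collection."""
    d = ((centroids - query) ** 2).sum(-1)
    chosen = np.argsort(d)[:top]
    candidates = np.where(np.isin(labels, chosen))[0]
    scores = X[candidates] @ query  # inner-product relevance
    return candidates[np.argsort(scores)[::-1]]
```

The `top` parameter controls the trade-off the abstract describes: searching one group is cheapest, while searching a few groups recovers more of exhaustive search's quality.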
no code implementations • 22 Aug 2022 • Zhanyu Wang, Mingkang Tang, Lei Wang, Xiu Li, Luping Zhou
Automated radiographic report generation is a challenging cross-domain task that aims to automatically generate accurate and semantically coherent reports to describe medical images.
no code implementations • 13 Oct 2021 • Mingkang Tang, Zhanyu Wang, Zhenhua Liu, Fengyun Rao, Dian Li, Xiu Li
Note that our model is trained only on the MSR-VTT dataset.
no code implementations • 11 Oct 2021 • Mingkang Tang, Zhanyu Wang, Zhaoyang Zeng, Fengyun Rao, Dian Li
In the proposed CLIP4Caption++, we employ an advanced encoder-decoder model architecture, X-Transformer, as our main framework and make the following improvements: 1) we utilize three strong pre-trained CLIP models to extract the text-related appearance visual features.
no code implementations • CVPR 2021 • Zhanyu Wang, Luping Zhou, Lei Wang, Xiu Li
On the one hand, the image-text matching branch helps to learn highly text-correlated visual features for the report generation branch to output high-quality reports.
no code implementations • 26 Dec 2020 • Wenjie Li, Zhanyu Wang, Yichen Zhang, Guang Cheng
In this work, we investigate the idea of variance reduction by studying its properties under general adaptive mirror descent algorithms in nonsmooth nonconvex finite-sum optimization problems.
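Variance reduction for finite sums can be sketched with the classic SVRG control-variate estimator; the snippet below is a generic illustration of that idea (vanilla gradient steps on least squares), not the adaptive mirror descent setting the paper analyzes.

```python
import numpy as np

def svrg(grad_i, n, w0, lr=0.05, epochs=10, m=None, seed=0):
    """SVRG for minimizing f(w) = (1/n) * sum_i f_i(w).
    grad_i(w, i) returns the gradient of the i-th summand at w."""
    rng = np.random.default_rng(seed)
    m = m or n
    w = w0.copy()
    for _ in range(epochs):
        # Snapshot point: compute the full gradient once per epoch.
        w_ref = w.copy()
        full_grad = sum(grad_i(w_ref, i) for i in range(n)) / n
        for _ in range(m):
            i = rng.integers(n)
            # Control-variate estimate: unbiased, with vanishing
            # variance as w approaches w_ref near a minimizer.
            g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
            w -= lr * g
        w_ref = w.copy()
    return w
```

The correction term `grad_i(w_ref, i) - full_grad` is what shrinks the variance of the stochastic gradient relative to plain SGD.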
no code implementations • 5 Jul 2020 • Chi-Hua Wang, Zhanyu Wang, Will Wei Sun, Guang Cheng
In this paper, we propose a novel approach for designing a dynamic pricing policy based on regularized online statistical learning, with theoretical guarantees.
1 code implementation • NeurIPS 2020 • Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng
In light of the fact that stochastic gradient descent (SGD) often finds a flat minimum valley in the training loss, we propose a novel directional pruning method that searches for a sparse minimizer in or close to that flat region.
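As a rough point of reference for the pruning setting, the sketch below trains a linear model with SGD and then applies magnitude pruning — a simpler baseline standing in for the paper's directional pruning, which instead searches for a sparse point within the flat region; the function and data here are illustrative assumptions.

```python
import numpy as np

def sgd_then_prune(X, y, sparsity=0.5, lr=0.05, steps=500, seed=0):
    """Train a linear model with plain SGD on squared loss, then zero
    out the smallest-magnitude weights. Magnitude pruning is used here
    only as a simple stand-in, not the paper's directional method."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        i = rng.integers(len(X))
        w -= lr * (X[i] @ w - y[i]) * X[i]
    # Prune: keep only the largest-magnitude (1 - sparsity) fraction.
    k = int(sparsity * len(w))
    if k > 0:
        threshold = np.sort(np.abs(w))[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w
```

The contrast with the paper's approach is the search direction: magnitude pruning zeroes coordinates blindly, while directional pruning exploits the flatness of the SGD solution to choose a sparse minimizer with little loss increase.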
no code implementations • 22 Feb 2020 • Zhanyu Wang, Jean Honorio
A key difference between meta-learning and classical multi-task learning is that meta-learning focuses only on recovering the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires l to grow with T.