no code implementations • 16 Jan 2024 • Qixin Zhang, Zongqi Wan, Zengde Deng, Zaiyi Chen, Xiaoming Sun, Jialin Zhang, Yu Yang
The fundamental idea of our boosting technique is to exploit non-oblivious search to derive a novel auxiliary function $F$, whose stationary points are excellent approximations to the global maximum of the original DR-submodular objective $f$.
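The snippet above describes the non-oblivious boosting idea without giving the construction. As a minimal sketch, the following Python assumes the standard non-oblivious weighting for monotone DR-submodular maximization, $\nabla F(x) = \int_0^1 e^{z-1}\,\nabla f(zx)\,dz$, estimated by Monte Carlo; the exact auxiliary function used in the paper may differ, and all function names here are hypothetical illustrations.

```python
import numpy as np

def boosted_gradient_ascent(grad_f, project, x0, step=0.05, iters=200, mc_samples=20, rng=None):
    """Projected gradient ascent on a non-oblivious auxiliary function F.

    Assumes nabla F(x) = int_0^1 e^{z-1} nabla f(z x) dz (standard boosting
    weighting; an assumption, not necessarily the paper's exact choice),
    estimated here by Monte Carlo sampling of z.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Monte Carlo estimate of the boosted gradient nabla F(x).
        zs = rng.uniform(0.0, 1.0, size=mc_samples)
        g = np.mean([np.exp(z - 1.0) * grad_f(z * x) for z in zs], axis=0)
        # Ascent step, then project back onto the constraint set.
        x = project(x + step * g)
    return x

# Toy usage: f(x) = sum(log(1 + x)) is monotone DR-submodular on [0, 1]^d.
if __name__ == "__main__":
    d = 5
    grad_f = lambda x: 1.0 / (1.0 + x)
    project = lambda x: np.clip(x, 0.0, 1.0)  # box constraint as a stand-in convex set
    print(boosted_gradient_ascent(grad_f, project, np.zeros(d)))
```

Ascending to a stationary point of $F$ rather than $f$ is what yields the improved approximation guarantee claimed in the abstract.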
no code implementations • 18 Aug 2022 • Qixin Zhang, Zengde Deng, Xiangru Jian, Zaiyi Chen, Haoyuan Hu, Yu Yang
Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics.
no code implementations • 16 Aug 2022 • Qixin Zhang, Zengde Deng, Zaiyi Chen, Kuangqi Zhou, Haoyuan Hu, Yu Yang
In this paper, we revisit the online non-monotone continuous DR-submodular maximization problem over a down-closed convex set, which has wide real-world applications in machine learning, economics, and operations research.

no code implementations • 22 Apr 2022 • Hongbin Zhang, Yu Yang, Feng Wu, Qixin Zhang
Optimizing the assortment of products to display to customers is key to increasing revenue for both offline and online retailers.
no code implementations • 3 Jan 2022 • Qixin Zhang, Zengde Deng, Zaiyi Chen, Haoyuan Hu, Yu Yang
In the online setting, we consider, for the first time, adversarial delays in the stochastic gradient feedback, under which we propose a boosting online gradient algorithm built on the same non-oblivious function $F$.
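To make the delayed-feedback setting concrete, here is a minimal sketch of online gradient ascent that only applies a round's (boosted) stochastic gradient once its adversarially delayed feedback arrives. The update rule, the per-round delay model, and all names (`grad_estimates`, `delays`, `project`) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def delayed_boosted_ogd(grad_estimates, delays, project, x0, step=0.05):
    """Online gradient ascent with adversarially delayed stochastic feedback.

    grad_estimates[t](x) returns a stochastic estimate of nabla F_t(x) for the
    round-t objective; delays[t] is how many rounds the round-t feedback is
    held back. Gradients are applied only when their feedback arrives.
    """
    T = len(grad_estimates)
    x = np.asarray(x0, dtype=float)
    plays, pending = [], {}  # pending[arrival_round] -> list of (round, queried point)
    for t in range(T):
        plays.append(x.copy())
        # The feedback for the point played now will arrive delays[t] rounds later.
        pending.setdefault(t + delays[t], []).append((t, x.copy()))
        # Apply every gradient whose delayed feedback becomes available at round t.
        for s, x_s in pending.pop(t, []):
            g = grad_estimates[s](x_s)
            x = project(x + step * g)
    return plays
```

Feedback that would arrive after the horizon is simply never applied, which is how an adversary degrades (but, per the abstract's claim, does not break) the regret guarantee.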
no code implementations • 28 Dec 2021 • Qixin Zhang, Wenbing Ye, Zaiyi Chen, Haoyuan Hu, Enhong Chen, Yang Yu
As a result, only limited constraint violations or pessimistic competitive bounds can be guaranteed.
1 code implementation • 12 Jul 2021 • Kaixin Wang, Kuangqi Zhou, Qixin Zhang, Jie Shao, Bryan Hooi, Jiashi Feng
It enables learning high-quality Laplacian representations that faithfully approximate the ground truth.