no code implementations • 16 Jan 2024 • Qixin Zhang, Zongqi Wan, Zengde Deng, Zaiyi Chen, Xiaoming Sun, Jialin Zhang, Yu Yang
The fundamental idea of our boosting technique is to exploit non-oblivious search to derive a novel auxiliary function $F$, whose stationary points are excellent approximations to the global maximum of the original DR-submodular objective $f$.
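As a rough, hedged illustration of how such an auxiliary function can be used (the toy objective, the $e^{z-1}$ weighting, the box constraint, and the step size below are assumptions for the sketch, not the paper's exact construction), a boosted gradient ascent loop replaces $\nabla f$ with a weighted average of gradients of $f$ along the ray from the origin, $\nabla F(x) \approx \int_0^1 e^{z-1}\,\nabla f(zx)\,dz$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 10                       # decision dimension, number of covered items
P = rng.uniform(0.1, 0.9, (m, n))  # toy coverage probabilities (assumed data)

def f(x):
    # Monotone DR-submodular probabilistic coverage: sum_j [1 - prod_i (1 - P[j,i] * x_i)]
    return np.sum(1.0 - np.prod(1.0 - P * x, axis=1))

def grad_f(x):
    # d f / d x_i = sum_j P[j,i] * prod_{k != i} (1 - P[j,k] * x_k)
    full = np.prod(1.0 - P * x, axis=1, keepdims=True)
    return np.sum(P * full / np.clip(1.0 - P * x, 1e-12, None), axis=0)

def boosted_grad(x, samples=64):
    # Monte Carlo estimate of the auxiliary gradient  int_0^1 e^{z-1} grad f(z x) dz
    zs = rng.uniform(0.0, 1.0, samples)
    return np.mean([np.e ** (z - 1.0) * grad_f(z * x) for z in zs], axis=0)

# Projected (boosted) gradient ascent over the box constraint [0, 1]^n.
x = np.zeros(n)
for _ in range(200):
    x = np.clip(x + 0.05 * boosted_grad(x), 0.0, 1.0)

print("f(x) at the boosted stationary point:", round(f(x), 4))
```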
no code implementations • 9 Dec 2023 • Litian Liang, Liuyu Bian, Caiwei Xiao, Jialin Zhang, Linghao Chen, Isabella Liu, Fanbo Xiang, Zhiao Huang, Hao Su
Building robots that can automate labor-intensive tasks has long been a core motivation behind advances in the computer vision and robotics communities.
no code implementations • 6 Oct 2023 • Qing Zhu, Qirong Mao, Jialin Zhang, Xiaohua Huang, Wenming Zheng
Group-level emotion recognition (GER) is an integral part of human behavior analysis, aiming to recognize an overall emotion in a multi-person scene.
no code implementations • 21 May 2023 • Zongqi Wan, Jialin Zhang, Wei Chen, Xiaoming Sun, Zhijie Zhang
We then reduce submodular bandit with a partition matroid constraint and bandit sequential monotone maximization to online bandit learning of monotone multi-linear DR-submodular functions, attaining an $O(T^{2/3}\log T)$ $(1-1/e)$-regret bound in both problems, which improves on existing results.
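Here $(1-1/e)$-regret refers to the usual $\alpha$-regret benchmark for online (DR-)submodular maximization, stated below in its standard form (the paper's exact definition may differ in details such as how randomness is handled):

$$\mathcal{R}_{\alpha}(T) \;=\; \alpha \cdot \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} f_t(x_t)\right], \qquad \alpha = 1 - \frac{1}{e}.$$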
1 code implementation • 5 Mar 2023 • Alexander Detkov, Mohammad Salameh, Muhammad Fetrat Qharabagh, Jialin Zhang, Wei Lui, Shangling Jui, Di Niu
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
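For intuition, a common instance of this idea (not necessarily the construction used in this paper) is merging a parallel 3x3-plus-1x1 convolution block back into a single 3x3 convolution after training by zero-padding the 1x1 kernel; the sketch below, with arbitrary layer sizes, checks the equivalence numerically:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two parallel branches used during training: a 3x3 conv and a 1x1 conv.
conv3 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
conv1 = nn.Conv2d(8, 16, kernel_size=1, padding=0)

def branched(x):
    return conv3(x) + conv1(x)

# Merge for inference: zero-pad the 1x1 kernel to 3x3, then add weights and biases.
with torch.no_grad():
    merged = nn.Conv2d(8, 16, kernel_size=3, padding=1)
    merged.weight.copy_(conv3.weight + F.pad(conv1.weight, [1, 1, 1, 1]))
    merged.bias.copy_(conv3.bias + conv1.bias)

x = torch.randn(2, 8, 32, 32)
print(torch.allclose(branched(x), merged(x), atol=1e-5))  # True: outputs match
```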
no code implementations • 21 Feb 2023 • Fred X. Han, Keith G. Mills, Fabian Chudak, Parsa Riahi, Mohammad Salameh, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu
In this paper, we propose a general-purpose neural predictor for NAS that can transfer across search spaces, by representing any given candidate Convolutional Neural Network (CNN) with a Computation Graph (CG) that consists of primitive operators.
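As a loose sketch of such a representation (the operator vocabulary, attribute fields, and edge encoding here are assumptions, not the paper's schema), a computation graph of primitive operators can be stored as a small node-and-edge structure:

```python
from dataclasses import dataclass, field

@dataclass
class CGNode:
    op: str                      # primitive operator, e.g. "conv3x3", "relu", "add"
    attrs: dict = field(default_factory=dict)

@dataclass
class ComputationGraph:
    nodes: list                  # list[CGNode]
    edges: list                  # list[tuple[int, int]] dataflow from node i to node j

# A tiny residual block expressed with hypothetical primitive operators.
cg = ComputationGraph(
    nodes=[CGNode("input"),
           CGNode("conv3x3", {"channels": 64}),
           CGNode("relu"),
           CGNode("conv3x3", {"channels": 64}),
           CGNode("add")],
    edges=[(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)],  # (0, 4) is the skip connection
)
print(len(cg.nodes), "primitive ops,", len(cg.edges), "dataflow edges")
```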
1 code implementation • 30 Nov 2022 • Keith G. Mills, Di Niu, Mohammad Salameh, Weichen Qiu, Fred X. Han, Puyuan Liu, Jialin Zhang, Wei Lu, Shangling Jui
Evaluating neural network performance is critical to deep neural network design, but it is a costly procedure.
1 code implementation • 30 Nov 2022 • Keith G. Mills, Fred X. Han, Jialin Zhang, Fabian Chudak, Ali Safari Mamaghani, Mohammad Salameh, Wei Lu, Shangling Jui, Di Niu
In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks, and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and fuzzy clustering-based predictor ensemble.
no code implementations • 30 May 2022 • Zongqi Wan, Zhijie Zhang, Tongyang Li, Jialin Zhang, Xiaoming Sun
In this paper, we study MAB and SLB with quantum reward oracles and propose quantum algorithms for both models with $O(\mathrm{poly}(\log T))$ regret, exponentially improving the dependence on $T$.
no code implementations • 27 Apr 2022 • Zongqi Wan, Xiaoming Sun, Jialin Zhang
Our lower bound works even when the loss sequence is oblivious but the delay is non-oblivious.
no code implementations • 29 Sep 2021 • Fred X. Han, Fabian Chudak, Keith G Mills, Mohammad Salameh, Parsa Riahi, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu
Understanding and modelling the performance of neural architectures is key to Neural Architecture Search (NAS).
1 code implementation • 25 Sep 2021 • Keith G. Mills, Fred X. Han, Jialin Zhang, Seyed Saeed Changiz Rezaei, Fabian Chudak, Wei Lu, Shuo Lian, Shangling Jui, Di Niu
Neural architecture search automates neural network design and has achieved state-of-the-art results in many deep learning applications.
no code implementations • 13 Sep 2021 • Zhijie Zhang, Wei Chen, Xiaoming Sun, Jialin Zhang
We study the online influence maximization (OIM) problem in social networks, where the learner repeatedly chooses seed nodes to generate cascades, observes the cascade feedback, and gradually learns, over multiple rounds, the seed set that generates the largest cascade.
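For concreteness, a minimal sketch of the OIM interaction loop is shown below, with a made-up graph, an independent-cascade simulator, and a naive random-seed learner standing in for the paper's actual algorithm:

```python
import random

random.seed(0)
nodes = range(8)
# Hypothetical directed edges with unknown true activation probabilities.
true_prob = {(u, v): random.uniform(0.1, 0.6)
             for u in nodes for v in nodes if u != v and random.random() < 0.3}

def simulate_cascade(seeds):
    """Independent-cascade diffusion; returns activated nodes and observed edge outcomes."""
    active, frontier, observed = set(seeds), list(seeds), {}
    while frontier:
        u = frontier.pop()
        for (a, b), p in true_prob.items():
            if a == u and b not in active:
                success = random.random() < p
                observed[(a, b)] = observed.get((a, b), []) + [success]
                if success:
                    active.add(b)
                    frontier.append(b)
    return active, observed

# Online loop: pick seeds, observe edge-level cascade feedback, update estimates.
counts = {}  # edge -> [successes, trials]
for t in range(200):
    # Naive exploration: random seed set of size 2 (a real OIM learner would
    # combine the estimated probabilities with an influence-maximization oracle).
    seeds = random.sample(list(nodes), 2)
    active, observed = simulate_cascade(seeds)
    for e, outcomes in observed.items():
        s, n = counts.get(e, [0, 0])
        counts[e] = [s + sum(outcomes), n + len(outcomes)]

est = {e: s / n for e, (s, n) in counts.items() if n > 0}
print("edges with estimates:", len(est), "of", len(true_prob))
```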
no code implementations • 7 Jun 2021 • Zhijie Zhang, Wei Chen, Xiaoming Sun, Jialin Zhang
Our IMS algorithms enhance the learning-and-then-optimization approach by allowing a constant approximation ratio even when the diffusion parameters are hard to learn, and they require no assumptions on the network structure or diffusion parameters.
no code implementations • 29 Jul 2020 • Jianhao He, Feidiao Yang, Jialin Zhang, Lvzhou Li
(ii) We show that for strongly convex loss functions, the quantum algorithm can achieve $O(\log T)$ regret with $O(1)$ queries as well, which means that the quantum algorithm can achieve the same regret bound as the classical algorithms in the full information setting.
no code implementations • ICML 2020 • Wei Chen, Xiaoming Sun, Jialin Zhang, Zhijie Zhang
We revisit the optimization from samples (OPS) model, which studies the problem of optimizing objective functions directly from sample data.
no code implementations • 19 Feb 2019 • Jingyi Shi, Jialin Zhang, Yaorong Ge
The main advantages of the proposed method are: 1) it selects features more efficiently with the help of an improved entropy estimator, particularly when the sample size is small, and 2) it automatically learns the number of features to be selected based on the information from sample data.
no code implementations • 23 Nov 2016 • Jia Zhang, Zheng Wang, Qian Li, Jialin Zhang, Yanyan Lan, Qiang Li, Xiaoming Sun
In the guaranteed delivery scenario, ad exposures to users (also called impressions in some works) are guaranteed by contracts signed in advance between advertisers and publishers.