Search Results for author: Jialin Zhang

Found 18 papers, 4 papers with code

Boosting Gradient Ascent for Continuous DR-submodular Maximization

no code implementations • 16 Jan 2024 • Qixin Zhang, Zongqi Wan, Zengde Deng, Zaiyi Chen, Xiaoming Sun, Jialin Zhang, Yu Yang

The fundamental idea of our boosting technique is to exploit non-oblivious search to derive a novel auxiliary function $F$, whose stationary points are excellent approximations to the global maximum of the original DR-submodular objective $f$.
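A minimal sketch of the boosting idea, assuming the non-oblivious weighting $e^{z-1}$ used in earlier work on boosting continuous DR-submodular maximization; the exact auxiliary construction in this paper may differ. The surrogate gradient $\nabla F(x) = \int_0^1 e^{z-1}\,\nabla f(zx)\,\mathrm{d}z$ is estimated by Monte Carlo and fed to projected gradient ascent over $[0,1]^n$:

```python
import numpy as np

def boosted_gradient(grad_f, x, num_samples=32, rng=None):
    """Monte-Carlo estimate of grad F(x) = E_z[e^(z-1) * grad f(z * x)],
    z ~ Uniform(0, 1). The e^(z-1) weighting is an assumption carried over
    from earlier boosting work, not necessarily this paper's construction."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(num_samples):
        z = rng.uniform()
        g += np.exp(z - 1.0) * grad_f(z * x)
    return g / num_samples

def boosted_ascent(grad_f, n, steps=200, lr=0.05):
    x = np.zeros(n)
    for _ in range(steps):
        # ascend the surrogate F, then project back onto the box [0, 1]^n
        x = np.clip(x + lr * boosted_gradient(grad_f, x), 0.0, 1.0)
    return x

# Toy objective: f(x) = 1^T x - 0.5 x^T A x (concave, hence DR-submodular).
A = np.array([[1.0, 0.2], [0.2, 1.0]])
print(boosted_ascent(lambda x: np.ones(2) - A @ x, n=2))
```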

Robo360: A 3D Omnispective Multi-Material Robotic Manipulation Dataset

no code implementations • 9 Dec 2023 • Litian Liang, Liuyu Bian, Caiwei Xiao, Jialin Zhang, Linghao Chen, Isabella Liu, Fanbo Xiang, Zhiao Huang, Hao Su

Building robots that can automate labor-intensive tasks has long been a core motivation behind advances in the computer vision and robotics communities.

Representation Learning

Towards A Robust Group-level Emotion Recognition via Uncertainty-Aware Learning

no code implementations • 6 Oct 2023 • Qing Zhu, Qirong Mao, Jialin Zhang, Xiaohua Huang, Wenming Zheng

Group-level emotion recognition (GER) is an integral part of human behavior analysis, aiming to recognize the overall emotion in a multi-person scene.

Emotion Recognition, Image Enhancement

Bandit Multi-linear DR-Submodular Maximization and Its Applications on Adversarial Submodular Bandits

no code implementations • 21 May 2023 • Zongqi Wan, Jialin Zhang, Wei Chen, Xiaoming Sun, Zhijie Zhang

We then reduce the submodular bandit problem with a partition matroid constraint, as well as bandit sequential monotone maximization, to online bandit learning of monotone multi-linear DR-submodular functions, attaining $O(T^{2/3}\log T)$ $(1-1/e)$-regret in both problems and improving on existing results.

Reparameterization through Spatial Gradient Scaling

1 code implementation • 5 Mar 2023 • Alexander Detkov, Mohammad Salameh, Muhammad Fetrat Qharabagh, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu

Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
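The branch-merging identity underlying structural reparameterization can be checked in a few lines. This sketch folds a parallel 1x1 branch into a 3x3 kernel (the generic equivalence popularized by RepVGG); it illustrates the multi-branch equivalence the abstract refers to, not this paper's spatial gradient scaling itself:

```python
import numpy as np

def merge_3x3_and_1x1(k3, k1):
    """Fold a parallel 1x1 conv branch into a 3x3 kernel. Since convolution
    is linear in the kernel, conv(x, k3) + conv(x, k1) == conv(x, k3 + pad(k1))."""
    merged = k3.copy()
    merged[:, :, 1, 1] += k1[:, :, 0, 0]  # the 1x1 kernel sits at the center tap
    return merged

def conv2d_same(x, k):
    """Naive 'same' cross-correlation for 3x3 kernels with padding 1."""
    out_c, in_c, kh, kw = k.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    h, w = x.shape[1], x.shape[2]
    y = np.zeros((out_c, h, w))
    for o in range(out_c):
        for i in range(in_c):
            for r in range(h):
                for c in range(w):
                    y[o, r, c] += np.sum(xp[i, r:r+kh, c:c+kw] * k[o, i])
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 5, 5))      # (channels, H, W)
k3 = rng.standard_normal((4, 2, 3, 3))  # 3x3 branch
k1 = rng.standard_normal((4, 2, 1, 1))  # parallel 1x1 branch
k1_padded = np.pad(k1, ((0, 0), (0, 0), (1, 1), (1, 1)))
assert np.allclose(conv2d_same(x, k3) + conv2d_same(x, k1_padded),
                   conv2d_same(x, merge_3x3_and_1x1(k3, k1)))
```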

A General-Purpose Transferable Predictor for Neural Architecture Search

no code implementations • 21 Feb 2023 • Fred X. Han, Keith G. Mills, Fabian Chudak, Parsa Riahi, Mohammad Salameh, Jialin Zhang, Wei Lu, Shangling Jui, Di Niu

In this paper, we propose a general-purpose neural predictor for NAS that can transfer across search spaces, by representing any given candidate Convolutional Neural Network (CNN) with a Computation Graph (CG) that consists of primitive operators.

Contrastive Learning, Graph Representation Learning, +1
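A minimal sketch of the computation-graph representation described above: nodes are primitive operators, edges follow dataflow, and node features here are simple one-hot operator types (an illustrative scheme; the paper's exact CG format and attributes are not reproduced):

```python
from dataclasses import dataclass, field

PRIMITIVES = ["input", "conv3x3", "bn", "relu", "add", "output"]

@dataclass
class CGNode:
    op: str                                      # one of PRIMITIVES
    inputs: list = field(default_factory=list)   # predecessor node indices

def one_hot(op):
    vec = [0] * len(PRIMITIVES)
    vec[PRIMITIVES.index(op)] = 1
    return vec

# A residual block (conv -> bn -> relu, plus a skip connection) as a CG.
graph = [
    CGNode("input"),
    CGNode("conv3x3", inputs=[0]),
    CGNode("bn", inputs=[1]),
    CGNode("relu", inputs=[2]),
    CGNode("add", inputs=[0, 3]),    # skip connection joins here
    CGNode("output", inputs=[4]),
]
features = [one_hot(node.op) for node in graph]  # node feature matrix
edges = [(src, i) for i, node in enumerate(graph) for src in node.inputs]
print(edges)   # edge list a GNN-based predictor could consume
```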

GENNAPE: Towards Generalized Neural Architecture Performance Estimators

1 code implementation • 30 Nov 2022 • Keith G. Mills, Fred X. Han, Jialin Zhang, Fabian Chudak, Ali Safari Mamaghani, Mohammad Salameh, Wei Lu, Shangling Jui, Di Niu

In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and a fuzzy clustering-based predictor ensemble.

Contrastive Learning, Image Classification, +1
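A minimal sketch of a fuzzy clustering-based predictor ensemble like the one named above: each cluster owns a regressor, and predictions are blended by fuzzy c-means style membership weights. The membership formula and the toy per-cluster regressors are illustrative assumptions, not GENNAPE's actual predictor heads:

```python
import numpy as np

def memberships(x, centroids, m=2.0):
    """Fuzzy c-means style membership of embedding x in each cluster;
    m is the standard fuzzifier exponent."""
    d = np.linalg.norm(centroids - x, axis=1) + 1e-12
    w = d ** (-2.0 / (m - 1.0))
    return w / w.sum()

def ensemble_predict(x, centroids, predictors):
    u = memberships(x, centroids)
    return float(sum(ui * p(x) for ui, p in zip(u, predictors)))

# Toy usage with two hypothetical per-cluster accuracy regressors.
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
predictors = [lambda x: 0.70 + 0.01 * x.sum(),   # cluster-0 regressor
              lambda x: 0.90 - 0.02 * x.sum()]   # cluster-1 regressor
print(ensemble_predict(np.array([0.2, 0.1]), centroids, predictors))
```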

Quantum Multi-Armed Bandits and Stochastic Linear Bandits Enjoy Logarithmic Regrets

no code implementations • 30 May 2022 • Zongqi Wan, Zhijie Zhang, Tongyang Li, Jialin Zhang, Xiaoming Sun

In this paper, we study MAB and SLB with quantum reward oracles and propose quantum algorithms for both models with $O(\mathrm{poly}(\log T))$ regret, exponentially improving the dependence on $T$.

Multi-Armed Bandits, Reinforcement Learning, +1

Bounded Memory Adversarial Bandits with Composite Anonymous Delayed Feedback

no code implementations • 27 Apr 2022 • Zongqi Wan, Xiaoming Sun, Jialin Zhang

Our lower bound holds even when the loss sequence is oblivious but the delay is non-oblivious.

Profiling Neural Blocks and Design Spaces for Mobile Neural Architecture Search

1 code implementation • 25 Sep 2021 • Keith G. Mills, Fred X. Han, Jialin Zhang, Seyed Saeed Changiz Rezaei, Fabian Chudak, Wei Lu, Shuo Lian, Shangling Jui, Di Niu

Neural architecture search automates neural network design and has achieved state-of-the-art results in many deep learning applications.

Neural Architecture Search

Online Influence Maximization under the Independent Cascade Model with Node-Level Feedback

no code implementations • 13 Sep 2021 • Zhijie Zhang, Wei Chen, Xiaoming Sun, Jialin Zhang

We study the online influence maximization (OIM) problem in social networks, where the learner repeatedly chooses seed nodes to generate cascades, observes the cascade feedback, and gradually learns the best seeds that generate the largest cascade in multiple rounds.
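A minimal sketch of the OIM interaction loop under the independent cascade model. The exploration here is naive per-arm averaging, a placeholder for illustration; the paper's node-level-feedback estimators are not reproduced:

```python
import random

def ic_cascade(graph, probs, seeds, rng):
    """One independent-cascade simulation. graph: {u: [v, ...]} adjacency;
    probs: {(u, v): activation probability}; returns the set of active nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v not in active and rng.random() < probs[(u, v)]:
                active.add(v)
                frontier.append(v)
    return active

# Toy 4-node network; the learner estimates each seed's spread from repeated
# rounds of cascade feedback (naive per-arm exploration with k = 1 seed).
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
true_p = {(u, v): 0.4 for u in graph for v in graph[u]}
rng = random.Random(0)
avg_spread = {s: sum(len(ic_cascade(graph, true_p, [s], rng))
                     for _ in range(200)) / 200 for s in graph}
print(max(avg_spread, key=avg_spread.get), avg_spread)
```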

Network Inference and Influence Maximization from Samples

no code implementations • 7 Jun 2021 • Zhijie Zhang, Wei Chen, Xiaoming Sun, Jialin Zhang

Our IMS algorithms enhance the learning-and-then-optimization approach by achieving a constant approximation ratio even when the diffusion parameters are hard to learn, without requiring any assumptions about the network structure or diffusion parameters.

Quantum Algorithm for Online Convex Optimization

no code implementations • 29 Jul 2020 • Jianhao He, Feidiao Yang, Jialin Zhang, Lvzhou Li

We show that for strongly convex loss functions, the quantum algorithm achieves $O(\log T)$ regret with only $O(1)$ queries per round, matching the regret bound of classical algorithms in the full-information setting.
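For context, a minimal sketch of the classical full-information baseline being matched: online gradient descent with step size $1/(\lambda t)$, which attains $O(\log T)$ regret for $\lambda$-strongly-convex losses (Hazan et al.). The quantum algorithm itself is not reproduced here:

```python
import numpy as np

def ogd_strongly_convex(grads, x0, lam):
    """grads: per-round gradient callables; lam: strong-convexity parameter.
    Step size 1/(lam * t) gives O(log T) regret for strongly convex losses."""
    x, iterates = np.asarray(x0, dtype=float), []
    for t, grad in enumerate(grads, start=1):
        iterates.append(x.copy())
        x = x - (1.0 / (lam * t)) * grad(x)   # unconstrained toy: no projection
    return iterates

# Toy stream of strongly convex losses f_t(x) = 0.5 * ||x - c_t||^2 (lam = 1).
rng = np.random.default_rng(0)
centers = rng.standard_normal((100, 3))
grads = [(lambda x, c=c: x - c) for c in centers]
xs = ogd_strongly_convex(grads, x0=np.zeros(3), lam=1.0)
print(xs[-1])   # tracks the running mean of the centers
```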

Optimization from Structured Samples for Coverage Functions

no code implementations • ICML 2020 • Wei Chen, Xiaoming Sun, Jialin Zhang, Zhijie Zhang

We revisit the optimization from samples (OPS) model, which studies the problem of optimizing objective functions directly from the sample data.

Computational Efficiency
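For reference, the classical $(1-1/e)$ greedy for maximum coverage, the full-information optimization primitive that the OPS model restricts: in OPS the function is accessible only through samples, which this sketch ignores:

```python
def greedy_max_coverage(sets, k):
    """sets: {name: frozenset of covered elements}; greedily pick k sets,
    each time taking the set with the largest marginal coverage gain."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: -1 if s in chosen
                   else len(sets[s] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"A": frozenset({1, 2, 3}), "B": frozenset({3, 4}),
        "C": frozenset({4, 5, 6})}
print(greedy_max_coverage(sets, k=2))   # (['A', 'C'], {1, 2, 3, 4, 5, 6})
```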

An entropic feature selection method in perspective of Turing formula

no code implementations • 19 Feb 2019 • Jingyi Shi, Jialin Zhang, Yaorong Ge

The main advantages of the proposed method are: 1) it selects features more efficiently with the help of an improved entropy estimator, particularly when the sample size is small; and 2) it automatically learns the number of features to be selected from the sample data.

Feature Selection
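A minimal sketch of entropy-based feature scoring via mutual information $I(X;Y) = H(X) + H(Y) - H(X,Y)$, with a plain plug-in entropy estimator standing in for the paper's improved Turing-formula-based estimator:

```python
from collections import Counter
from math import log2

def plugin_entropy(values):
    """Plug-in (empirical) entropy estimate; biased for small samples, which
    is exactly the regime the paper's improved estimator targets."""
    n, counts = len(values), Counter(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    return plugin_entropy(xs) + plugin_entropy(ys) - plugin_entropy(list(zip(xs, ys)))

# Score two discrete features against a label; higher MI = more informative.
labels = [0, 0, 1, 1, 1, 0]
feat_a = [0, 0, 1, 1, 1, 0]   # perfectly aligned with the label
feat_b = [1, 0, 1, 0, 1, 0]   # nearly independent of the label
print(mutual_information(feat_a, labels), mutual_information(feat_b, labels))
```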

Efficient Delivery Policy to Minimize User Traffic Consumption in Guaranteed Advertising

no code implementations • 23 Nov 2016 • Jia Zhang, Zheng Wang, Qian Li, Jialin Zhang, Yanyan Lan, Qiang Li, Xiaoming Sun

In the guaranteed delivery scenario, ad exposures (also called impressions) to users are guaranteed by contracts signed in advance between advertisers and publishers.
