no code implementations • 25 Nov 2024 • Congliang Chen, Li Shen, Zhiqiang Xu, Wei Liu, Zhi-Quan Luo, Peilin Zhao
We conduct a convergence analysis for a carefully chosen step size to maintain stability.
1 code implementation • 9 Nov 2024 • Ruiyu Li, Peilin Zhao, Guangxia Li, Zhiqiang Xu, XueWei Li
In a classical distributed computing architecture with a central server, the proposed OMTL algorithm with the ADMM optimizer outperforms SGD-based approaches in terms of accuracy and efficiency.
1 code implementation • 23 Oct 2024 • Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, Hao Peng, Lingpeng Kong
Diffusion Language Models (DLMs) have emerged as a promising new paradigm for text generative modeling, potentially addressing limitations of autoregressive (AR) models.
no code implementations • 12 Oct 2024 • Qingyang Zhang, Yatao Bian, Xinke Kong, Peilin Zhao, Changqing Zhang
Machine learning models must continuously adapt to novel data distributions in the open world.
no code implementations • 20 Aug 2024 • Haoyu Wang, Bingzhe Wu, Yatao Bian, Yongzhe Chang, Xueqian Wang, Peilin Zhao
Trained on external or self-generated harmful datasets, the cost value model could successfully influence the original safe LLM to output toxic content during the decoding process.
1 code implementation • 4 Aug 2024 • Fushuo Huo, Wenchao Xu, Zhong Zhang, Haozhao Wang, Zhicheng Chen, Peilin Zhao
While Large Vision-Language Models (LVLMs) have rapidly advanced in recent years, the prevalent issue known as the 'hallucination' problem has emerged as a significant bottleneck, hindering their real-world deployment.
no code implementations • 30 May 2024 • Zhicheng Chen, Xi Xiao, Ke Xu, Zhong Zhang, Yu Rong, Qing Li, Guojun Gan, Zhiqiang Xu, Peilin Zhao
Multivariate time series prediction is widely used in daily life, yet it poses significant challenges due to the complex correlations that exist at multiple granularities.
1 code implementation • 23 May 2024 • Dezhong Yao, Sanmu Li, Yutong Dai, Zhiqiang Xu, Shengshan Hu, Peilin Zhao, Lichao Sun
Federated continual learning (FCL) has received increasing attention due to its potential in handling real-world streaming data, characterized by evolving data distributions and varying client classes over time.
1 code implementation • 2 Apr 2024 • Shuaicheng Niu, Chunyan Miao, Guohao Chen, Pengcheng Wu, Peilin Zhao
However, in real-world scenarios, models are usually deployed on resource-limited devices, e.g., FPGAs, and are often quantized and hard-coded with non-modifiable parameters for acceleration.
no code implementations • 18 Mar 2024 • Mingkui Tan, Guohao Chen, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Peilin Zhao, Shuaicheng Niu
To tackle this, we further propose EATA with Calibration (EATA-C) to separately exploit the reducible model uncertainty and the inherent data uncertainty for calibrated TTA.
1 code implementation • 1 Mar 2024 • Huan Ma, Yan Zhu, Changqing Zhang, Peilin Zhao, Baoyuan Wu, Long-Kai Huang, Qinghua Hu, Bingzhe Wu
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data.
no code implementations • 12 Feb 2024 • Haoyu Wang, Guozheng Ma, Ziqiao Meng, Zeyu Qin, Li Shen, Zhong Zhang, Bingzhe Wu, Liu Liu, Yatao Bian, Tingyang Xu, Xueqian Wang, Peilin Zhao
To further exploit the capabilities of bootstrapping, we investigate and adjust the training order of data, which yields improved performance of the model.
no code implementations • 5 Feb 2024 • Binghui Xie, Yatao Bian, Kaiwen Zhou, Yongqiang Chen, Peilin Zhao, Bo Han, Wei Meng, James Cheng
Learning neural subset selection tasks, such as compound selection in AI-aided drug discovery, has become increasingly pivotal across diverse applications.
no code implementations • 16 Nov 2023 • Liang Chen, Yatao Bian, Yang Deng, Deng Cai, Shuaiyi Li, Peilin Zhao, Kam-Fai Wong
Text watermarking has emerged as a pivotal technique for identifying machine-generated text.
1 code implementation • 23 Oct 2023 • Yihuai Lan, Zhiqiang Hu, Lei Wang, Yang Wang, Deheng Ye, Peilin Zhao, Ee-Peng Lim, Hui Xiong, Hao Wang
This paper explores the open research problem of understanding the social behaviors of LLM-based agents.
no code implementations • 12 Oct 2023 • Yiqiang Yi, Xu Wan, Yatao Bian, Le Ou-Yang, Peilin Zhao
Predicting the docking between proteins and ligands is a crucial and challenging task for drug discovery.
no code implementations • 5 Oct 2023 • Huan Ma, Changqing Zhang, Huazhu Fu, Peilin Zhao, Bingzhe Wu
Specifically, we discuss the differences between discriminative and generative models using content moderation as an example.
1 code implementation • 20 Sep 2023 • Haoyu Wang, Guozheng Ma, Cong Yu, Ning Gui, Linrui Zhang, Zhiqi Huang, Suwei Ma, Yongzhe Chang, Sen Zhang, Li Shen, Xueqian Wang, Peilin Zhao, Dacheng Tao
Notably, we are surprised to discover that robustness tends to decrease as fine-tuning (SFT and RLHF) is conducted.
no code implementations • 25 Aug 2023 • Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee Tsung, Jia Li, Yu Rong
Furthermore, we offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states, which is crucial for model generalization.
1 code implementation • 21 Aug 2023 • Zixuan Liu, Liu Liu, Xueqian Wang, Peilin Zhao
Differentiable optimization has received significant attention due to its foundational role in neural-network-based machine learning.
no code implementations • 30 Jul 2023 • Peng Tang, Zhiqiang Xu, Pengfei Wei, Xiaobin Hu, Peilin Zhao, Xin Cao, Chunlai Zhou, Tobias Lasser
To further alleviate the contingent effect of recursive stacking, i.e., ringing artifacts, we add identity shortcuts between atrous convolutions to simulate residual deconvolutions.
no code implementations • 28 Jun 2023 • Ziqiao Meng, Peilin Zhao, Yang Yu, Irwin King
Reaction and retrosynthesis prediction are fundamental tasks in computational chemistry that have recently garnered attention from both the machine learning and drug discovery communities.
1 code implementation • NeurIPS 2023 • Jianheng Tang, Fengrui Hua, Ziqi Gao, Peilin Zhao, Jia Li
With a long history of traditional Graph Anomaly Detection (GAD) algorithms and recently popular Graph Neural Networks (GNNs), it is still not clear (1) how they perform under a standard comprehensive setting, (2) whether GNNs can outperform traditional algorithms such as tree ensembles, and (3) how efficient they are on large-scale graphs.
no code implementations • 11 Jun 2023 • Wensong Bai, Chao Zhang, Yichao Fu, Peilin Zhao, Hui Qian, Bin Dai
As a result, PACER fully utilizes the modeling capability of the push-forward operator and is able to explore a broader class of the policy space, compared with the limited policy classes used in existing distributional actor critic algorithms (i.e., Gaussians).
no code implementations • 5 Jun 2023 • Ziqiao Meng, Peilin Zhao, Yang Yu, Irwin King
However, the current non-autoregressive decoder does not satisfy two essential rules of electron redistribution modeling simultaneously: the electron-counting rule and the symmetry rule.
no code implementations • 26 May 2023 • Qichao Wang, Huan Ma, WenTao Wei, Hangyu Li, Liang Chen, Peilin Zhao, Binwen Zhao, Bo Hu, Shu Zhang, Zibin Zheng, Bingzhe Wu
The rapid development of the digital economy has led to the emergence of various black and shadow internet industries, which pose potential risks. Such risks can be identified and managed through digital risk management (DRM), which uses techniques such as machine learning and deep learning.
no code implementations • 23 May 2023 • Yuanfeng Ji, Yatao Bian, Guoji Fu, Peilin Zhao, Ping Luo
Firstly, SyNDock formulates multimeric protein docking as a problem of learning global transformations to holistically depict the placement of chain units of a complex, enabling a learning-centric solution.
no code implementations • 9 Apr 2023 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Qinghua Hu, Bingzhe Wu, Changqing Zhang, Jianhua Yao
Subpopulation shift, in which the training and test distributions contain the same subpopulation groups but in different proportions, exists widely in real-world applications.
no code implementations • 13 Mar 2023 • Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, Peilin Zhao
To address this issue, we propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase.
1 code implementation • 24 Feb 2023 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Zhiquan Wen, Yaofo Chen, Peilin Zhao, Mingkui Tan
In this paper, we investigate the unstable reasons and find that the batch norm layer is a crucial factor hindering TTA stability.
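The batch-norm observation above can be illustrated with a minimal NumPy sketch (an illustration of the underlying issue, not the paper's actual remedy): normalizing a shifted test batch with stale training-time statistics yields badly centered features, while recomputing the statistics on the test batch restores them.

```python
import numpy as np

def batchnorm(x, mean, var, eps=1e-5):
    """Normalize features x of shape (n, d) with the given per-feature stats."""
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
# Training-time statistics estimated on one distribution...
train = rng.normal(0.0, 1.0, size=(1000, 4))
mu_train, var_train = train.mean(0), train.var(0)

# ...but test data arrives shifted (a distribution shift).
test = rng.normal(3.0, 2.0, size=(256, 4))

stale = batchnorm(test, mu_train, var_train)        # stale training stats
fresh = batchnorm(test, test.mean(0), test.var(0))  # current test-batch stats

# Stale statistics leave the output far from zero mean;
# recomputing on the test batch recenters it.
print(abs(stale.mean()), abs(fresh.mean()))
```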
1 code implementation • CVPR 2023 • Zhipeng Zhou, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Wei Gong
It is widely acknowledged that deep learning models with flatter minima in their loss landscapes tend to generalize better.
1 code implementation • CVPR 2023 • Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang
It has been recently found that models trained with mixup also perform well on uncertainty calibration.
no code implementations • 30 Nov 2022 • Ziqi Gao, Yifan Niu, Jiashun Cheng, Jianheng Tang, Tingyang Xu, Peilin Zhao, Lanqing Li, Fugee Tsung, Jia Li
In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy.
no code implementations • 27 Oct 2022 • Yiqiang Yi, Xu Wan, Kangfei Zhao, Le Ou-Yang, Peilin Zhao
The proposed ELGN first adds a super node to the 3D complex, and then builds a line graph based on the 3D complex.
no code implementations • 20 Oct 2022 • Zeyu Cao, Zhipeng Liang, Shu Zhang, Hangyu Li, Ouyang Wen, Yu Rong, Peilin Zhao, Bingzhe Wu
In this paper, we investigate a novel problem of building contextual bandits in the vertical federated setting, i.e., contextual information is vertically distributed over different departments.
1 code implementation • 19 Oct 2022 • Chengqian Gao, Ke Xu, Liu Liu, Deheng Ye, Peilin Zhao, Zhiqiang Xu
A promising paradigm for offline reinforcement learning (RL) is to constrain the learned policy to stay close to the dataset behaviors, known as policy constraint offline RL.
1 code implementation • 14 Oct 2022 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
More critically, these independent search processes cannot share their learned knowledge (i.e., the distribution of good architectures) with each other and thus often result in limited search results.
1 code implementation • 30 Sep 2022 • Songtao Liu, Zhengkai Tu, Minkai Xu, Zuobai Zhang, Lu Lin, Rex Ying, Jian Tang, Peilin Zhao, Dinghao Wu
Current strategies use a decoupled approach of single-step retrosynthesis models and search algorithms, taking only the product as the input to predict the reactants for each planning step and ignoring valuable context information along the synthetic route.
no code implementations • 27 Sep 2022 • Jiahan Liu, Chaochao Yan, Yang Yu, Chan Lu, Junzhou Huang, Le Ou-Yang, Peilin Zhao
In this paper, we propose a novel end-to-end graph generation model for retrosynthesis prediction, which sequentially identifies the reaction center, generates the synthons, and adds motifs to the synthons to generate reactants.
Ranked #2 on Single-step retrosynthesis on USPTO-50k
1 code implementation • 19 Sep 2022 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao
Importance reweighting is a common way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset.
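The reweighting idea above can be sketched in a few lines of NumPy (a toy illustration with hypothetical group proportions, not the paper's algorithm): each sample's loss is scaled by the ratio of its subpopulation's test and training proportions, so the weighted mean estimates the test-time risk.

```python
import numpy as np

def reweighted_loss(losses, groups, train_frac, test_frac):
    """Weight each sample's loss by (test proportion / train proportion)
    of its subpopulation, so the weighted mean estimates the test risk."""
    w = np.array([test_frac[g] / train_frac[g] for g in groups])
    return float(np.mean(w * losses))

# Two subpopulations: 90%/10% in training but 50%/50% at test time.
losses = np.array([0.2] * 9 + [1.0])       # group 1 is harder (higher loss)
groups = [0] * 9 + [1]
train_frac = {0: 0.9, 1: 0.1}
test_frac = {0: 0.5, 1: 0.5}

plain = float(losses.mean())                                  # 0.28
shifted = reweighted_loss(losses, groups, train_frac, test_frac)
print(plain, shifted)  # the reweighted estimate matches the true
                       # test risk 0.5*0.2 + 0.5*1.0 = 0.6
```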
1 code implementation • 16 Sep 2022 • Lanqing Li, Liang Zeng, Ziqi Gao, Shen Yuan, Yatao Bian, Bingzhe Wu, Hengtong Zhang, Yang Yu, Chan Lu, Zhipeng Zhou, Hongteng Xu, Jia Li, Peilin Zhao, Pheng-Ann Heng
The last decade has witnessed a prosperous development of computational methods and dataset curation for AI-aided drug discovery (AIDD).
no code implementations • 11 Aug 2022 • Ke Xu, Jianqiao Wangni, Yifan Zhang, Deheng Ye, Jiaxiang Wu, Peilin Zhao
Therefore, a threshold quantization strategy with a relatively small error is adopted in QCMD adagrad and QRDA adagrad to improve the signal-to-noise ratio and preserve the sparsity of the model.
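A threshold quantization step of this flavor can be sketched as follows (an illustrative construction with a hypothetical threshold and bit-width, not the exact QCMD/QRDA adagrad quantizer): entries below the threshold are zeroed to preserve sparsity, and the surviving entries are uniformly quantized.

```python
import numpy as np

def threshold_quantize(x, threshold, n_bits=8):
    """Zero out entries below `threshold`, then uniformly quantize the rest.
    Small-magnitude entries (mostly noise) are dropped, which keeps the
    model sparse and limits quantization error on the surviving weights."""
    kept = np.where(np.abs(x) >= threshold, x, 0.0)
    scale = np.max(np.abs(kept)) / (2 ** (n_bits - 1) - 1)
    if scale == 0:
        return kept
    return np.round(kept / scale) * scale

w = np.array([0.001, -0.002, 0.5, -0.8, 0.0003, 0.25])
q = threshold_quantize(w, threshold=0.01)
print(q)                    # small entries become exactly zero
print(np.count_nonzero(q))  # 3 nonzero weights survive
```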
3 code implementations • 15 Jun 2022 • Yongqiang Chen, Kaiwen Zhou, Yatao Bian, Binghui Xie, Bingzhe Wu, Yonggang Zhang, Kaili Ma, Han Yang, Peilin Zhao, Bo Han, James Cheng
Recently, there has been a growing surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data.
no code implementations • 23 May 2022 • Liang Zeng, Lanqing Li, Ziqi Gao, Peilin Zhao, Jian Li
Motivated by this observation, we propose a principled GCL framework on Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels.
no code implementations • 20 May 2022 • Bingzhe Wu, Jintang Li, Junchi Yu, Yatao Bian, Hengtong Zhang, Chaochao Chen, Chengbin Hou, Guoji Fu, Liang Chen, Tingyang Xu, Yu Rong, Xiaolin Zheng, Junzhou Huang, Ran He, Baoyuan Wu, Guangyu Sun, Peng Cui, Zibin Zheng, Zhe Liu, Peilin Zhao
Deep graph learning has achieved remarkable progress in both business and scientific areas, ranging from finance and e-commerce to drug and advanced material discovery.
no code implementations • 16 May 2022 • Shibo Feng, Chunyan Miao, Ke Xu, Jiaxiang Wu, Pengcheng Wu, Yang Zhang, Peilin Zhao
Probabilistic prediction of multivariate time series is a notoriously challenging but practical task.
no code implementations • 12 May 2022 • Qianggang Ding, Deheng Ye, Tingyang Xu, Peilin Zhao
To the best of our knowledge, our method is the first GNN-based bilevel optimization framework for resolving this task.
no code implementations • 16 Apr 2022 • Bingzhe Wu, Zhipeng Liang, Yuxuan Han, Yatao Bian, Peilin Zhao, Junzhou Huang
In this paper, we propose a general framework to solve the above two challenges simultaneously.
1 code implementation • 6 Apr 2022 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yaofo Chen, Shijian Zheng, Peilin Zhao, Mingkui Tan
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and testing data by adapting a given model w.r.t.
no code implementations • 21 Mar 2022 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Guanghui Xu, Haokun Li, Peilin Zhao, Junzhou Huang, Yaowei Wang, Mingkui Tan
Motivated by this, we propose to predict those hard-classified test samples in a looped manner to boost the model performance.
1 code implementation • 3 Mar 2022 • Zijing Ou, Tingyang Xu, Qinliang Su, Yingzhen Li, Peilin Zhao, Yatao Bian
Learning neural set functions is becoming increasingly important in many applications like product recommendation and compound selection in AI-aided drug discovery.
1 code implementation • 17 Feb 2022 • Erxue Min, Runfa Chen, Yatao Bian, Tingyang Xu, Kangfei Zhao, Wenbing Huang, Peilin Zhao, Junzhou Huang, Sophia Ananiadou, Yu Rong
In this survey, we provide a comprehensive review of various Graph Transformer models from the architectural design perspective.
no code implementations • 25 Jan 2022 • Erxue Min, Yu Rong, Tingyang Xu, Yatao Bian, Peilin Zhao, Junzhou Huang, Da Luo, Kangyi Lin, Sophia Ananiadou
Although these methods have made great progress, they are often limited by the recommender system's direct exposure and inactive interactions, and thus fail to mine all potential user interests.
1 code implementation • 24 Jan 2022 • Yuanfeng Ji, Lu Zhang, Jiaxiang Wu, Bingzhe Wu, Long-Kai Huang, Tingyang Xu, Yu Rong, Lanqing Li, Jie Ren, Ding Xue, Houtim Lai, Shaoyong Xu, Jing Feng, Wei Liu, Ping Luo, Shuigeng Zhou, Junzhou Huang, Peilin Zhao, Yatao Bian
AI-aided drug discovery (AIDD) is gaining increasing popularity due to its promise of making the search for new pharmaceuticals quicker, cheaper and more efficient.
1 code implementation • 20 Dec 2021 • Chaochao Yan, Peilin Zhao, Chan Lu, Yang Yu, Junzhou Huang
To overcome this limitation, we propose an innovative retrosynthesis prediction framework that can compose novel templates beyond training templates.
Ranked #3 on Single-step retrosynthesis on USPTO-50k
1 code implementation • CVPR 2022 • Yicheng Qian, Weixin Luo, Dongze Lian, Xu Tang, Peilin Zhao, Shenghua Gao
In this paper, we propose a novel sequence verification task that aims to distinguish positive video pairs performing the same action sequence from negative ones with step-level transformations but still conducting the same task.
no code implementations • 1 Dec 2021 • Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, Chuang Gan
To this end, we propose a general graph convolutional module (GCM) that can be easily plugged into existing action localization methods, including two-stage and one-stage paradigms.
Ranked #2 on Temporal Action Localization on THUMOS’14 (mAP IOU@0.1 metric)
2 code implementations • 14 Nov 2021 • Guoji Fu, Peilin Zhao, Yatao Bian
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously.
2 code implementations • NeurIPS 2021 • Huaxiu Yao, Yu Wang, Ying Wei, Peilin Zhao, Mehrdad Mahdavi, Defu Lian, Chelsea Finn
In ATS, for the first time, we design a neural scheduler to decide which meta-training tasks to use next by predicting the probability of each candidate task being sampled, and train the scheduler to optimize the generalization capacity of the meta-model to unseen tasks.
no code implementations • 15 Oct 2021 • Chengqian Gao, Ke Xu, Kuangqi Zhou, Lanqing Li, Xueqian Wang, Bo Yuan, Peilin Zhao
To alleviate the action distribution shift problem in extracting RL policy from static trajectories, we propose Value Penalized Q-learning (VPQ), an uncertainty-based offline RL algorithm.
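One common way to realize an uncertainty-based value penalty, shown here as a generic sketch rather than VPQ's exact formulation, is to subtract the disagreement (standard deviation) of a Q-ensemble from the Bellman target, so that out-of-distribution actions on which the ensemble disagrees receive conservative value estimates.

```python
import numpy as np

def penalized_q_target(q_ensemble, reward, gamma=0.99, beta=1.0):
    """Bellman target with an uncertainty penalty: subtract beta times the
    ensemble's disagreement (std) from the mean next-state Q-value."""
    mean_q = q_ensemble.mean(axis=0)
    std_q = q_ensemble.std(axis=0)
    return reward + gamma * (mean_q - beta * std_q)

# Two candidate actions: the 3-member ensemble agrees on the first
# (in-distribution) and disagrees on the second (out-of-distribution).
q_ensemble = np.array([[1.0, 3.0],
                       [1.0, 1.0],
                       [1.0, 5.0]])
target = penalized_q_target(q_ensemble, reward=0.0)
print(target)  # the uncertain action's target is pulled well below
               # its unpenalized value 0.99 * 3.0
```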
no code implementations • 29 Sep 2021 • Tian Bian, Tingyang Xu, Yu Rong, Wenbing Huang, Xi Xiao, Peilin Zhao, Junzhou Huang, Hong Cheng
Graph Clustering, which clusters the nodes of a graph given its collection of node features and edge connections in an unsupervised manner, has long been researched in graph learning and is essential in certain applications.
1 code implementation • 8 Sep 2021 • Songtao Liu, Rex Ying, Hanze Dong, Lanqing Li, Tingyang Xu, Yu Rong, Peilin Zhao, Junzhou Huang, Dinghao Wu
To address this, we propose a simple and efficient data augmentation strategy, local augmentation, to learn the distribution of the node features of the neighbors conditioned on the central node's feature and enhance GNN's expressive power with generated features.
1 code implementation • 1 Jul 2021 • Shuaicheng Niu, Jiaxiang Wu, Guanghui Xu, Yifan Zhang, Yong Guo, Peilin Zhao, Peng Wang, Mingkui Tan
To address this, we present a neural architecture adaptation method, namely Adaptation eXpert (AdaXpert), to efficiently adjust previous architectures on the growing data.
no code implementations • 29 May 2021 • Hongteng Xu, Peilin Zhao, Junzhou Huang, Dixin Luo
A linear graphon factorization model works as a decoder, leveraging the latent representations to reconstruct the induced graphons (and the corresponding observed graphs).
1 code implementation • ICCV 2021 • Jinyu Yang, Chunyuan Li, Weizhi An, Hehuan Ma, Yuzhi Guo, Yu Rong, Peilin Zhao, Junzhou Huang
Recent studies imply that deep neural networks are vulnerable to adversarial examples -- inputs with a slight but intentional perturbation are incorrectly classified by the network.
no code implementations • 15 Apr 2021 • Dezhong Yao, Peilin Zhao, Chen Yu, Hai Jin, Bin Li
This is clearly inefficient for high dimensional tasks due to its high memory and computational complexity.
1 code implementation • 14 Apr 2021 • Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr
FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.
no code implementations • 27 Feb 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Qi Chen, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To this end, we propose a Pareto-Frontier-aware Neural Architecture Generator (NAG) which takes an arbitrary budget as input and produces the Pareto optimal architecture for the target budget.
2 code implementations • 20 Feb 2021 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang
To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
no code implementations • 1 Jan 2021 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
To find promising architectures under different budgets, existing methods may have to perform an independent search for each budget, which is very inefficient and unnecessary.
1 code implementation • 16 Dec 2020 • Jinyu Yang, Peilin Zhao, Yu Rong, Chaochao Yan, Chunyuan Li, Hehuan Ma, Junzhou Huang
Graph Neural Networks (GNNs) draw their strength from explicitly modeling the topological information of structured data.
1 code implementation • NeurIPS 2020 • Sifan Wu, Xi Xiao, Qianggang Ding, Peilin Zhao, Ying Wei, Junzhou Huang
Specifically, AST adopts a Sparse Transformer as the generator to learn a sparse attention map for time series forecasting, and uses a discriminator to improve the prediction performance from sequence level.
no code implementations • 25 Nov 2020 • Deheng Ye, Guibin Chen, Peilin Zhao, Fuhao Qiu, Bo Yuan, Wen Zhang, Sheng Chen, Mingfei Sun, Xiaoqian Li, Siqin Li, Jing Liang, Zhenjie Lian, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang
Unlike prior attempts, we integrate the macro-strategy and the micromanagement of MOBA-game-playing into neural networks in a supervised and end-to-end manner.
1 code implementation • NeurIPS 2020 • Chaochao Yan, Qianggang Ding, Peilin Zhao, Shuangjia Zheng, Jinyu Yang, Yang Yu, Junzhou Huang
Retrosynthesis is the process of recursively decomposing target molecules into available building blocks.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
no code implementations • 12 Jul 2020 • Tian Bian, Xi Xiao, Tingyang Xu, Yu Rong, Wenbing Huang, Peilin Zhao, Junzhou Huang
Upon a formal discussion of the variants of IGI, we choose a particular case study of node clustering by making use of the graph labels and node features, with the assistance of a hierarchical graph that further characterizes the connections between different graphs.
1 code implementation • ICML 2020 • Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou Huang, Mingkui Tan
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
1 code implementation • 5 Jul 2020 • Yifan Zhang, Ying Wei, Qingyao Wu, Peilin Zhao, Shuaicheng Niu, Junzhou Huang, Mingkui Tan
Deep learning based medical image diagnosis has shown great potential in clinical medicine.
2 code implementations • IJCAI 2020 • Ke Xu, Yifan Zhang, Deheng Ye, Peilin Zhao, Mingkui Tan
One of the key issues is how to represent the non-stationary price series of assets in a portfolio, which is important for portfolio decisions.
1 code implementation • ICLR 2020 • Dongze Lian, Yin Zheng, Yintao Xu, Yanxiong Lu, Leyu Lin, Peilin Zhao, Junzhou Huang, Shenghua Gao
Recently, Neural Architecture Search (NAS) has been successfully applied to multiple artificial intelligence areas and shows better performance compared with hand-designed networks.
1 code implementation • 30 Apr 2020 • Yifan Zhang, Shuaicheng Niu, Zhen Qiu, Ying Wei, Peilin Zhao, Jianhua Yao, Junzhou Huang, Qingyao Wu, Mingkui Tan
There are two main challenges: 1) the discrepancy of data distributions between domains; 2) the task difference between the diagnosis of typical pneumonia and COVID-19.
no code implementations • 29 Mar 2020 • Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Yong Guo, Peilin Zhao, Junzhou Huang, Mingkui Tan
To alleviate the performance disturbance issue, we propose a new disturbance-immune update strategy for model updating.
no code implementations • 12 Mar 2020 • Chaochao Chen, Ziqi Liu, Peilin Zhao, Jun Zhou, Xiaolong Li
However, existing MF approaches suffer from two major problems: (1) expensive computation and storage due to the centralized model training mechanism: the centralized learners have to maintain the whole user-item rating matrix, and potentially huge low-rank matrices.
no code implementations • 9 Mar 2020 • Jinyu Yang, Weizhi An, Chaochao Yan, Peilin Zhao, Junzhou Huang
To achieve this goal, we design two cross-domain attention modules to adapt context dependencies from both spatial and channel views.
Ranked #29 on Domain Adaptation on SYNTHIA-to-Cityscapes
no code implementations • 6 Mar 2020 • Yifan Zhang, Peilin Zhao, Qingyao Wu, Bin Li, Junzhou Huang, Mingkui Tan
This task, however, has two main difficulties: (i) the non-stationary price series and complex asset correlations make the learning of feature representation very hard; (ii) the practicality principle in financial markets requires controlling both transaction and risk costs.
no code implementations • 1 Mar 2020 • Jiezhang Cao, Langyuan Mo, Qing Du, Yong Guo, Peilin Zhao, Junzhou Huang, Mingkui Tan
However, the resultant optimization problem is still intractable.
2 code implementations • 17 Jan 2020 • Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, Junzhou Huang
Meanwhile, detecting rumors from such massive information in social media is becoming an arduous challenge.
no code implementations • 20 Dec 2019 • Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, Qiaobo Chen, Yinyuting Yin, Hao Zhang, Tengfei Shi, Liang Wang, Qiang Fu, Wei Yang, Lanxiao Huang
We study the reinforcement learning problem of complex action control in the Multi-player Online Battle Arena (MOBA) 1v1 games.
1 code implementation • 18 Nov 2019 • Yifan Zhang, Peilin Zhao, Shuaicheng Niu, Qingyao Wu, Jiezhang Cao, Junzhou Huang, Mingkui Tan
In these problems, there are two key challenges: the query budget is often limited, and the ratio between classes is highly imbalanced.
1 code implementation • 17 Nov 2019 • Yifan Zhang, Ying Wei, Peilin Zhao, Shuaicheng Niu, Qingyao Wu, Mingkui Tan, Junzhou Huang
In this paper, we seek to exploit rich labeled data from relevant domains to help the learning in the target task with unsupervised domain adaptation (UDA).
1 code implementation • NeurIPS 2019 • Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, Peilin Zhao, Junzhou Huang
To verify the effectiveness of the proposed strategies, we apply NAT on both hand-crafted architectures and NAS based architectures.
no code implementations • 21 Oct 2019 • Chao Zhang, Jiahao Xie, Zebang Shen, Peilin Zhao, Tengfei Zhou, Hui Qian
In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the Markov Chain Monte Carlo (MCMC) sampling.
no code implementations • 25 Sep 2019 • Kelong Mao, Peilin Zhao, Tingyang Xu, Yu Rong, Xi Xiao, Junzhou Huang
With massive possible synthetic routes in chemistry, retrosynthesis prediction is still a challenge for researchers.
Ranked #10 on Single-step retrosynthesis on USPTO-50k
no code implementations • 25 Sep 2019 • Jiezhang Cao, Jincheng Li, Xiping Hu, Peilin Zhao, Mingkui Tan
ii) the $W$-distance of a specific layer to the target distribution tends to decrease along training iterations.
1 code implementation • ICCV 2019 • Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, Chuang Gan
Then we apply the GCNs over the graph to model the relations among different proposals and learn powerful representations for the action classification and localization.
Ranked #4 on Temporal Action Localization on THUMOS’14 (mAP IOU@0.1 metric)
no code implementations • 7 Sep 2019 • Ying Wei, Peilin Zhao, Huaxiu Yao, Junzhou Huang
Automated machine learning aims to automate the whole process of machine learning, including model configuration.
no code implementations • 29 Jan 2019 • Yawei Zhao, Chen Yu, Peilin Zhao, Hanlin Tang, Shuang Qiu, Ji Liu
Decentralized Online Learning (online learning in decentralized networks) has attracted increasing attention, since it is believed to help data providers cooperatively solve their online problems without sharing their private data with a third party or other providers.
1 code implementation • NeurIPS 2019 • Ho Chung Leon Law, Peilin Zhao, Lucian Chan, Junzhou Huang, Dino Sejdinovic
Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial exploration even in cases where similar prior tasks have been solved.
no code implementations • 27 Sep 2018 • Jiezhang Cao, Yong Guo, Langyuan Mo, Peilin Zhao, Junzhou Huang, Mingkui Tan
We study the joint distribution matching problem which aims at learning bidirectional mappings to match the joint distribution of two domains.
no code implementations • 19 Sep 2018 • Yong Guo, Qi Chen, Jian Chen, Junzhou Huang, Yanwu Xu, Jiezhang Cao, Peilin Zhao, Mingkui Tan
However, most deep learning methods employ feed-forward architectures, and thus the dependencies between LR and HR images are not fully exploited, leading to limited learning performance.
no code implementations • ICML 2018 • Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian
Recently, the decentralized optimization problem has been attracting growing attention.
no code implementations • 17 Apr 2018 • Longfei Li, Peilin Zhao, Jun Zhou, Xiaolong Li
However, to choose the rank properly, one usually needs to run the algorithm many times with different ranks, which is clearly inefficient for some large-scale datasets.
no code implementations • 13 Apr 2018 • Chaochao Chen, Ziqi Liu, Peilin Zhao, Longfei Li, Jun Zhou, Xiaolong Li
The experimental results demonstrate that, compared with classic and state-of-the-art (distributed) latent factor models, DCH achieves comparable recommendation accuracy while offering both fast convergence in the offline model training procedure and real-time efficiency in the online recommendation procedure.
no code implementations • 6 Apr 2018 • Peilin Zhao, Yifan Zhang, Min Wu, Steven C. H. Hoi, Mingkui Tan, Junzhou Huang
Cost-Sensitive Online Classification has drawn extensive attention in recent years, where the main approach is to directly online optimize two well-known cost-sensitive metrics: (i) weighted sum of sensitivity and specificity; (ii) weighted misclassification cost.
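The first of the two metrics mentioned, the weighted sum of sensitivity and specificity, can be computed directly; the snippet below illustrates the metric itself (not the online optimization algorithm), with `alpha` trading off the two error types.

```python
import numpy as np

def weighted_sens_spec(y_true, y_pred, alpha=0.5):
    """alpha * sensitivity + (1 - alpha) * specificity,
    i.e. metric (i) from cost-sensitive online classification."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return alpha * sensitivity + (1 - alpha) * specificity

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
# sensitivity = 3/4, specificity = 2/4, so the balanced score is 0.625
print(weighted_sens_spec(y_true, y_pred, alpha=0.5))  # 0.625
```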
no code implementations • 8 Feb 2018 • Steven C. H. Hoi, Doyen Sahoo, Jing Lu, Peilin Zhao
Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time.
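The one-instance-at-a-time protocol described here can be illustrated with the classic Perceptron, the simplest online learner: predict with the current model, observe the true label, and update only on a mistake. This is a generic textbook sketch, not an algorithm from the surveyed work.

```python
import numpy as np

def perceptron(stream):
    """Classic online protocol: for each arriving (x, y) with y in {-1, +1},
    predict with the current hypothesis, then update only on a mistake."""
    w, mistakes = None, 0
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x, dtype=float)
        y_hat = 1 if w @ x >= 0 else -1
        if y_hat != y:        # mistake-driven update
            w = w + y * x
            mistakes += 1
    return w, mistakes
```

On linearly separable data the number of mistakes is bounded regardless of stream length, which is the prototypical regret-style guarantee in this family.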
no code implementations • 5 Feb 2018 • Wenpeng Zhang, Xiao Lin, Peilin Zhao
To address this subsequent challenge, we follow the general projection-free algorithmic framework of Online Conditional Gradient and propose an Online Compact Convex Factorization Machine (OCCFM) algorithm that eschews the projection operation with efficient linear optimization steps.
no code implementations • ICML 2017 • Wenpeng Zhang, Peilin Zhao, Wenwu Zhu, Steven C. H. Hoi, Tong Zhang
The conditional gradient algorithm has regained a surge of research interest in recent years due to its high efficiency in handling large-scale machine learning problems.
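As a reminder of why the conditional gradient (Frank-Wolfe) method is efficient at scale, here is a minimal sketch on the probability simplex: each iteration solves a cheap linear subproblem (pick the vertex with the smallest gradient coordinate) instead of a projection. The quadratic toy objective and step-size schedule are standard illustrative choices, not specifics of this paper.

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iters=2000):
    """Projection-free (conditional gradient) minimization over the
    probability simplex: each step solves a linear problem instead of
    projecting, then takes a convex combination."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        # Linear minimization over the simplex: the minimizer is the
        # vertex (coordinate) with the smallest gradient entry.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)  # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy problem: minimize ||x - c||^2 over the simplex; c is interior,
# so the optimum is c itself.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Note that every iterate is automatically a convex combination of simplex vertices, so feasibility is maintained without any projection step.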
no code implementations • 3 Jul 2017 • Peng Yang, Peilin Zhao, Xin Gao, Yong Liu
Moreover, the proposed algorithm can be scaled up to large datasets after a relaxation.
no code implementations • 6 Jun 2017 • Peng Yang, Peilin Zhao, Xin Gao
Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously.
no code implementations • 28 May 2016 • Chenghao Liu, Tao Jin, Steven C. H. Hoi, Peilin Zhao, Jianling Sun
In this paper, we propose a novel scheme of Online Bayesian Collaborative Topic Regression (OBCTR) which is efficient and scalable for learning from data streams.
no code implementations • 1 Feb 2016 • Yi Ding, Peilin Zhao, Steven C. H. Hoi, Yew-Soon Ong
Despite the encouraging results reported, existing online AUC maximization algorithms often adopt simple online gradient descent approaches that fail to exploit the geometric structure of the data observed during the online learning process, and thus can suffer from relatively large regret.
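To make the contrast concrete, the hypothetical sketch below runs an online update on (positive, negative) pairs with a pairwise hinge surrogate for AUC, but with AdaGrad-style per-coordinate step sizes rather than plain gradient descent. The surrogate loss and hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def pairwise_hinge_grad(w, x_pos, x_neg):
    """Subgradient of the pairwise hinge surrogate for AUC,
    loss = max(0, 1 - w.(x_pos - x_neg))."""
    diff = x_pos - x_neg
    return -diff if w @ diff < 1.0 else np.zeros_like(w)

def adaptive_auc(pairs, dim, eta=0.5, eps=1e-8):
    """Online updates on (positive, negative) pairs with AdaGrad-style
    per-coordinate step sizes: coordinates that have accumulated large
    gradients get smaller steps, unlike plain online gradient descent."""
    w = np.zeros(dim)
    G = np.zeros(dim)  # accumulated squared gradients per coordinate
    for x_pos, x_neg in pairs:
        g = pairwise_hinge_grad(w, x_pos, x_neg)
        G += g * g
        w -= eta * g / (np.sqrt(G) + eps)
    return w
```

The accumulated-gradient vector `G` is the simplest way to inject data-dependent geometry into the step sizes; second-order methods refine this idea further.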
no code implementations • 16 Nov 2015 • Jing Lu, Steven C. H. Hoi, Doyen Sahoo, Peilin Zhao
To overcome this drawback, we present a novel framework of Budget Online Multiple Kernel Learning (BOMKL) and propose a new Sparse Passive Aggressive learning to perform effective budget online learning.
no code implementations • 25 Jul 2015 • Dayong Wang, Pengcheng Wu, Peilin Zhao, Steven C. H. Hoi
Unlike some existing online data stream classification techniques, which are often based on first-order online learning, we propose a framework of Sparse Online Classification (SOC) for data stream classification. It includes several state-of-the-art first-order sparse online learning algorithms as special cases and allows us to derive a new, effective second-order online learning algorithm for data stream classification.
no code implementations • 13 May 2014 • Peilin Zhao, Tong Zhang
Stochastic Gradient Descent (SGD) is a popular optimization method which has been applied to many important machine learning tasks such as Support Vector Machines and Deep Neural Networks.
no code implementations • 13 Jan 2014 • Peilin Zhao, Tong Zhang
Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Gradient Descent (prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA).
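A minimal sketch of the contrast with uniform sampling: SGD for least squares where example i is drawn with probability proportional to ||x_i||^2 (a proxy for its smoothness constant) and its gradient is re-weighted by 1/(n p_i) so the update remains an unbiased estimate of the full gradient. The sampling proxy and step size are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def importance_sgd(X, y, n_steps=2000, eta=0.1, seed=1):
    """Least-squares SGD with non-uniform (importance) sampling:
    example i is drawn with probability p_i proportional to ||x_i||^2,
    and its gradient is re-weighted by 1/(n * p_i) to stay unbiased."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    p = np.sum(X * X, axis=1)
    p = p / p.sum()
    w = np.zeros(d)
    for _ in range(n_steps):
        i = rng.choice(n, p=p)
        g = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th squared loss
        w -= eta * g / (n * p[i])     # unbiased, re-weighted step
    return w
```

Because the re-weighting cancels the norm of the sampled example, every step has the same effective relaxation, avoiding the occasional huge steps that uniform sampling takes on large-norm examples.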
no code implementations • 16 Dec 2013 • Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li
The Alternating Direction Method of Multipliers (ADMM) has been studied for years.
no code implementations • 26 Sep 2013 • Peilin Zhao, Steven Hoi, Jinfeng Zhuang
In this paper, we address a new problem of active learning with expert advice, where the outcome of an instance is disclosed only when it is requested by the online learner.
1 code implementation • Machine Learning 2012 • Bin Li, Peilin Zhao, Steven C. H. Hoi
This article proposes a novel online portfolio selection strategy named “Passive Aggressive Mean Reversion” (PAMR).
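As a rough illustration of the mean-reversion idea behind PAMR, the sketch below implements a passive-aggressive style step: if the day's portfolio return b.x exceeds a threshold eps, shift weight away from assets whose price relatives just rose, then project back onto the simplex. The threshold and toy price relatives are illustrative choices, not values from the article.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def pamr_update(b, x, eps=0.5):
    """One mean-reversion step: stay passive while the return b.x is
    below eps; otherwise move aggressively against the assets that
    just went up, with step size tau = loss / ||x - mean(x)||^2."""
    loss = max(0.0, b @ x - eps)
    x_bar = x.mean()
    denom = np.sum((x - x_bar) ** 2)
    tau = loss / denom if denom > 0 else 0.0
    return project_simplex(b - tau * (x - x_bar))
```

With price relatives [1.2, 0.8], the update pushes wealth toward the asset that fell, which is exactly the mean-reversion bet the strategy's name describes.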
no code implementations • NeurIPS 2009 • Peilin Zhao, Steven C. Hoi, Rong Jin
This is clearly insufficient since when a new misclassified example is added to the pool of support vectors, we generally expect it to affect the weights for the existing support vectors.