no code implementations • ICML 2020 • Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Kwok
Sample selection approaches are popular in robust learning from noisy labels.
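For context, a common instance of sample selection is the small-loss criterion; the sketch below illustrates only that generic idea (the keep-rate value and function name are illustrative assumptions, not this paper's search-based method):

```python
import numpy as np

def select_small_loss(losses, keep_rate):
    """Keep the fraction of samples with the smallest loss.

    Small-loss selection is a common heuristic in learning with noisy labels:
    examples the model already fits well are more likely to be correctly
    labelled. The keep rate is a schedule chosen by the practitioner.
    """
    losses = np.asarray(losses)
    n_keep = max(1, int(round(keep_rate * len(losses))))
    return np.argsort(losses)[:n_keep]       # indices of "clean-looking" samples

# Toy usage: per-sample cross-entropy losses from one mini-batch.
batch_losses = [0.12, 2.31, 0.05, 1.87, 0.33, 0.08]
clean_idx = select_small_loss(batch_losses, keep_rate=0.5)
print(clean_idx)   # train on these samples only
```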
1 code implementation • 11 Sep 2024 • Buhua Liu, Shitong Shao, Bao Li, Lichen Bai, Zhiqiang Xu, Haoyi Xiong, James Kwok, Sumi Helal, Zeke Xie
Diffusion models have emerged as the leading paradigm in generative modeling, excelling in various applications.
1 code implementation • 25 Aug 2024 • Qiaolong Cai, Zhaowei Wang, Shizhe Diao, James Kwok, Yangqiu Song
Compared to the existing methods, CodeGraph demonstrates strong performance on arithmetic problems in graph tasks and offers a more controllable and interpretable approach to the reasoning process.
no code implementations • 22 Aug 2024 • WeiYu Chen, James Kwok
With only a single merging process, the proposed parameter-efficient structure can generate the whole Pareto set of merged models, each representing the Pareto-optimal model for a given user-specified preference.
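As a rough illustration of mapping a user preference to a merged model, here is a plain preference-weighted linear merge over task vectors (the function name and toy models are assumptions; the paper's learned parameter-efficient structure is more sophisticated):

```python
import numpy as np

def merge_by_preference(base, finetuned_models, preference):
    """Linearly merge task-specific models according to a preference vector.

    base / finetuned_models: dicts mapping parameter names to arrays.
    preference: non-negative weights over tasks. Sweeping the preference
    traces out a family of merged models.
    """
    w = np.asarray(preference, dtype=float)
    w = w / w.sum()
    merged = {}
    for name, theta0 in base.items():
        task_vectors = [m[name] - theta0 for m in finetuned_models]
        merged[name] = theta0 + sum(wi * tv for wi, tv in zip(w, task_vectors))
    return merged

# Toy usage with two single-parameter "models".
base = {"w": np.array([0.0, 0.0])}
model_a = {"w": np.array([1.0, 0.0])}   # good at task A
model_b = {"w": np.array([0.0, 1.0])}   # good at task B
print(merge_by_preference(base, [model_a, model_b], preference=[0.7, 0.3])["w"])
```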
no code implementations • 29 Jun 2024 • Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang
The brute-force scale-up of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models.
1 code implementation • NeurIPS 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In MINT, the teacher aims to instruct multiple learners, with each learner focusing on learning a scalar-valued target model.
no code implementations • 20 Oct 2023 • Hansi Yang, Yongqi Zhang, Quanming Yao, James Kwok
We also propose a regularizer to align the model with the graph structure.
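One standard example of a regularizer that aligns a model with graph structure is graph Laplacian smoothness; the sketch below shows only that generic form and is not necessarily the regularizer proposed here:

```python
import numpy as np

def laplacian_smoothness(adjacency, embeddings):
    """Penalize embeddings that differ across connected nodes:
    R = 0.5 * sum_{i,j} A_ij * ||z_i - z_j||^2 = trace(Z^T L Z),
    where L = D - A is the unnormalized graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    Z = np.asarray(embeddings, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(Z.T @ L @ Z)

# Toy usage: a 3-node path graph, with node 2 embedded far from its neighbour.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
Z = np.array([[0.0], [0.1], [5.0]])
print(laplacian_smoothness(A, Z))   # large value -> strong penalty
```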
no code implementations • 9 Oct 2023 • Zhili Liu, Kai Chen, Yifan Zhang, Jianhua Han, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-yan Yeung, James Kwok
Subsequently, the model is optimized to identify and disentangle this information, which is then adopted as negative prompts during generation.
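For background on how negative prompts typically enter generation, here is the standard classifier-free-guidance combination (a generic sketch with toy inputs; the paper's contribution is how the negative information is identified and disentangled, not this formula):

```python
import numpy as np

def guided_noise(eps_pos, eps_neg, scale=7.5):
    """Classifier-free guidance with a negative prompt: the negative-prompt
    noise prediction serves as the baseline, so sampling is pushed toward
    the positive prompt and away from the negative one."""
    return eps_neg + scale * (eps_pos - eps_neg)

eps_pos = np.array([0.2, -0.1])   # toy noise prediction for the positive prompt
eps_neg = np.array([0.5, 0.3])    # toy noise prediction for the negative prompt
print(guided_noise(eps_pos, eps_neg, scale=5.0))
```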
3 code implementations • 30 Sep 2023 • Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, Zhenguo Li
We hope PIXART-$\alpha$ will provide new insights to the AIGC community and startups to accelerate building their own high-quality yet low-cost generative models from scratch.
no code implementations • ICCV 2023 • Xinchi Deng, Han Shi, Runhui Huang, Changlin Li, Hang Xu, Jianhua Han, James Kwok, Shen Zhao, Wei Zhang, Xiaodan Liang
Compared with the existing methods, GrowCLIP improves average top-1 accuracy by 2.3% on zero-shot image classification across 9 downstream tasks.
no code implementations • 8 Jun 2023 • Lifeng Shen, James Kwok
In this paper, we propose TimeDiff, a non-autoregressive diffusion model that achieves high-quality time series prediction with the introduction of two novel conditioning mechanisms: future mixup and autoregressive initialization.
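A rough sketch of the future-mixup idea, as far as it can be inferred from the description above (the binary masking scheme and the 0.5 rate are assumptions):

```python
import numpy as np

def future_mixup(cond_forecast, true_future, rng=np.random.default_rng(0)):
    """Training-time conditioning: randomly replace part of an initial
    forecast with ground-truth future values (binary mask assumed here).
    At inference time only the forecast is available (all-zero mask)."""
    mask = (rng.random(cond_forecast.shape) < 0.5).astype(float)
    return mask * true_future + (1.0 - mask) * cond_forecast

cond = np.zeros(4)                          # e.g. output of a conditioning network
future = np.array([1.0, 2.0, 3.0, 4.0])     # ground-truth future values
print(future_mixup(cond, future))
```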
1 code implementation • 5 Jun 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model.
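A minimal sketch of the classical IMT loop with an omniscient teacher and a linear least-squares learner (the toy data and the `teach_one_step` helper are illustrative; the paper extends this basic setting):

```python
import numpy as np

def teach_one_step(w_learner, w_target, pool_X, pool_y, lr=0.1):
    """Greedy omniscient teacher for a linear learner with squared loss:
    pick the pool example whose single gradient step moves the learner
    closest to the target model."""
    best_w, best_dist = None, np.inf
    for x, y in zip(pool_X, pool_y):
        grad = (w_learner @ x - y) * x            # gradient of 0.5*(w.x - y)^2
        w_next = w_learner - lr * grad
        dist = np.linalg.norm(w_next - w_target)
        if dist < best_dist:
            best_w, best_dist = w_next, dist
    return best_w

rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])                    # target model the teacher knows
X = rng.normal(size=(50, 2)); y = X @ w_star      # teaching pool
w = np.zeros(2)
for _ in range(20):
    w = teach_one_step(w, w_star, X, y)
print(w)   # close to w_star after a handful of teaching iterations
```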
no code implementations • 28 Apr 2023 • Weisen Jiang, Hansi Yang, Yu Zhang, James Kwok
Sharpness-aware minimization (SAM), which searches for flat minima by min-max optimization, has been shown to be useful in improving model generalization.
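For reference, one SAM update is an inner ascent step along the normalized gradient followed by a descent step using the gradient at the perturbed weights; a minimal sketch on a toy quadratic (hyperparameters are illustrative):

```python
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05):
    """One SAM update: ascend by rho along the normalized gradient
    (first-order worst case), then descend using the gradient evaluated
    at the perturbed weights."""
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = loss_grad(w + eps)
    return w - lr * g_adv

# Toy quadratic loss 0.5 * a * w^2, whose gradient is a * w.
a = 4.0
grad = lambda w: a * w
w = np.array([2.0])
for _ in range(50):
    w = sam_step(w, grad)
print(w)   # settles near the minimum at 0
```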
no code implementations • CVPR 2023 • Yunhao Gou, Tom Ko, Hansi Yang, James Kwok, Yu Zhang, Mingxuan Wang
(2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked tokens but cannot simultaneously leverage the other tokens to learn vision-language associations.
no code implementations • 6 May 2022 • Quanming Yao, Yaqing Wang, Bo Han, James Kwok
While the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem.
no code implementations • NeurIPS 2021 • Weisen Jiang, James Kwok, Yu Zhang
We study the problem of meta-learning, which has proven advantageous for accelerating the learning of new tasks from only a few samples.
no code implementations • 29 Sep 2021 • Weisen Jiang, James Kwok, Yu Zhang
We propose a MUlti-Subspace structured Meta-Learning (MUSML) algorithm to learn the subspace bases.
no code implementations • 29 Sep 2021 • Lawrence Ki-On Chan, James Kwok
In this paper, we remove this inconsistency in the use of ER and improve continual learning representations by also integrating ER into meta-training.
no code implementations • 29 Sep 2021 • Runsheng Yu, Xinrun Wang, James Kwok
Most advanced Actor-Critic (AC) approaches update the actor and critic concurrently through (stochastic) gradient descent, which may become trapped in poor local optima due to the instability of such simultaneous update schemes.
no code implementations • NeurIPS 2020 • Lifeng Shen, Zhuocong Li, James Kwok
This allows the temporal dynamics to be well captured by a set of multi-resolution temporal clusters.
1 code implementation • NeurIPS 2019 • Lu Hou, Jinhua Zhu, James Kwok, Fei Gao, Tao Qin, Tie-Yan Liu
The long short-term memory (LSTM) network, though powerful, is memory- and computation-intensive.
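As background on why quantization helps, here is a generic weight-binarization sketch (alpha * sign(W)); it shows the kind of quantization studied, not the normalization technique the paper proposes:

```python
import numpy as np

def binarize(W):
    """Binarize weights as alpha * sign(W), with alpha = mean(|W|) minimizing
    the L2 quantization error; storing signs plus one scalar per matrix cuts
    memory roughly 32x versus float32."""
    alpha = np.abs(W).mean()
    return alpha * np.sign(W), alpha

W = 0.1 * np.random.default_rng(0).normal(size=(4, 4))   # e.g. an LSTM gate weight
W_q, alpha = binarize(W)
print(alpha, np.abs(W - W_q).mean())                     # scale and quantization error
```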
3 code implementations • 6 Nov 2019 • Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Kwok
Sample selection approaches are popular in robust learning from noisy labels.
no code implementations • 25 Sep 2019 • Zac Wellmer, Sepanta Zeighami, James Kwok
However, decision-time planning with implicit dynamics models in continuous action space has proven to be a difficult problem.
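For contrast with the implicit-model setting studied here, a minimal sketch of decision-time planning with an explicit toy dynamics model via random shooting (the point-mass dynamics, reward, and all names are assumptions):

```python
import numpy as np

def random_shooting_plan(state, dynamics, reward, horizon=10, n_candidates=256,
                         action_dim=1, rng=np.random.default_rng(0)):
    """Sample random action sequences, roll each out through the dynamics
    model, and return the first action of the highest-return sequence."""
    best_action, best_return = None, -np.inf
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = np.array(state, dtype=float), 0.0
        for a in actions:
            s = dynamics(s, a)
            total += reward(s, a)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

dyn = lambda s, a: s + 0.1 * a                 # toy 1-D point-mass dynamics
rew = lambda s, a: -float(np.abs(s).sum())     # reward for staying near the origin
print(random_shooting_plan(np.array([2.0]), dyn, rew))   # first action is near -1
```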
no code implementations • 15 Sep 2019 • Zac Wellmer, James Kwok
This paper proposes a novel deep reinforcement learning architecture inspired by previous tree-structured architectures, which were only usable in discrete action spaces.
2 code implementations • 28 Jun 2019 • Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh
Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF.
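To make "interaction function" concrete, a toy sketch of choosing among a few simple candidate interactions by validation error (the candidate set and selection rule are illustrative and far simpler than the SIF search space):

```python
import numpy as np

# Candidate interaction functions between a user embedding u and an item
# embedding v -- simple operations an AutoML search could choose among.
INTERACTIONS = {
    "inner": lambda u, v: float(u @ v),
    "plus":  lambda u, v: float((u + v).sum()),
    "minus": lambda u, v: float((u - v).sum()),
    "max":   lambda u, v: float(np.maximum(u, v).sum()),
}

def best_interaction(val_triples, U, V):
    """Pick the interaction with the smallest squared error on validation
    (user, item, rating) triples."""
    scores = {name: float(np.mean([(f(U[u], V[i]) - r) ** 2
                                   for u, i, r in val_triples]))
              for name, f in INTERACTIONS.items()}
    return min(scores, key=scores.get), scores

U = {0: np.array([0.3, 0.7])}
V = {0: np.array([0.5, 0.2]), 1: np.array([0.9, 0.1])}
print(best_interaction([(0, 0, 0.3), (0, 1, 0.4)], U, V)[0])   # -> "inner"
```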
no code implementations • 12 May 2019 • Quanming Yao, Hansi Yang, En-Liang Hu, James Kwok
In real-world applications, it is important for machine learning algorithms to be robust against data outliers or corruptions.
4 code implementations • 10 Apr 2019 • Yaqing Wang, Quanming Yao, James Kwok, Lionel M. Ni
Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small.
1 code implementation • 8 Jan 2018 • Huan Zhao, Quanming Yao, Yangqiu Song, James Kwok, Dik Lun Lee
Collaborative filtering (CF) has been one of the most important and popular recommendation methods, which aims to predict users' preferences (ratings) from their past behaviors.
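For context, a standard matrix-factorization CF baseline predicts ratings as inner products of learned user and item factors (a generic sketch with toy data, not the method of this paper):

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=8, lr=0.05, reg=0.05, epochs=200, seed=0):
    """Matrix-factorization CF: predict a rating as the inner product of a
    user factor and an item factor, fitted by SGD on observed ratings."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.normal(size=(n_users, k))   # user latent factors
    Q = 0.1 * rng.normal(size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            p, q = P[u].copy(), Q[i].copy()
            err = r - p @ q
            P[u] += lr * (err * q - reg * p)
            Q[i] += lr * (err * p - reg * q)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0)]  # (user, item, rating)
P, Q = train_mf(ratings, n_users=2, n_items=3)
print(P[0] @ Q[0])   # close to the observed rating of 5.0 (slightly shrunk by reg)
```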
no code implementations • NeurIPS 2015 • Kai Fan, Ziteng Wang, Jeff Beck, James Kwok, Katherine A. Heller
We propose a second-order (Hessian or Hessian-free) based optimization method for variational inference inspired by Gaussian backpropagation, and argue that quasi-Newton optimization can be developed as well.
no code implementations • 9 Sep 2015 • Kai Fan, Ziteng Wang, Jeff Beck, James Kwok, Katherine Heller
We propose a second-order (Hessian or Hessian-free) based optimization method for variational inference inspired by Gaussian backpropagation, and argue that quasi-Newton optimization can be developed as well.
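As background on the "Gaussian backpropagation" ingredient, a first-order reparameterization-gradient sketch for a Gaussian variational distribution (Monte Carlo estimator only; the paper's second-order machinery is not shown):

```python
import numpy as np

def gaussian_reparam_grads(f_grad, mu, log_sigma, n_samples=1000,
                           rng=np.random.default_rng(0)):
    """Monte Carlo gradients of E_{z ~ N(mu, sigma^2)}[f(z)] w.r.t. mu and
    log sigma, via the reparameterization z = mu + sigma * eps."""
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(n_samples,) + np.shape(mu))
    g = f_grad(mu + sigma * eps)                       # df/dz at the samples
    grad_mu = g.mean(axis=0)                           # dz/dmu = 1
    grad_log_sigma = (g * eps * sigma).mean(axis=0)    # dz/dlog_sigma = sigma*eps
    return grad_mu, grad_log_sigma

# Toy check: f(z) = 0.5*z^2 has E[f] = 0.5*(mu^2 + sigma^2), so the exact
# gradients are d/dmu = mu and d/dlog_sigma = sigma^2.
mu, log_sigma = np.array([1.0]), np.array([0.0])
print(gaussian_reparam_grads(lambda z: z, mu, log_sigma))   # approx (1.0, 1.0)
```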