1 code implementation • ACL 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
Based on the analysis, we propose KGTuner, an efficient two-stage search algorithm that explores HP configurations on a small subgraph in the first stage and transfers the top-performing configurations for fine-tuning on the large full graph in the second stage.
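A minimal sketch of the two-stage scheme, assuming hypothetical sample_config and evaluate helpers (placeholders, not KGTuner's actual API or HP space):

import random

def sample_config():
    # Hypothetical HP space; the real search space is larger.
    return {"lr": 10 ** random.uniform(-4, -2),
            "dim": random.choice([64, 128, 256]),
            "batch_size": random.choice([256, 512, 1024])}

def two_stage_search(evaluate, subgraph, full_graph, n_stage1=100, top_k=10):
    # Stage 1: cheap exploration of many configurations on the small subgraph.
    stage1 = [(cfg, evaluate(cfg, subgraph))
              for cfg in (sample_config() for _ in range(n_stage1))]
    stage1.sort(key=lambda pair: pair[1], reverse=True)
    # Stage 2: transfer only the top-k configurations to the large full graph.
    stage2 = [(cfg, evaluate(cfg, full_graph)) for cfg, _ in stage1[:top_k]]
    return max(stage2, key=lambda pair: pair[1])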
no code implementations • ICML 2020 • Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Kwok
Sample selection approaches are popular in robust learning from noisy labels.
no code implementations • 15 Feb 2025 • Haiquan Qiu, You Wu, Quanming Yao
Model merging is a critical technique for combining the capabilities of multiple fine-tuned models without requiring additional training.
no code implementations • 9 Nov 2024 • Yu Liu, Shu Yang, Jingtao Ding, Quanming Yao, Yong Li
To tackle this issue, in this paper, we generalize the hyperedge expansion in hypergraph learning and propose an equivalent transformation for HKG modeling, referred to as TransEQ.
1 code implementation • 3 Nov 2024 • Haotong Du, Quanming Yao, Juzheng Zhang, Yang Liu, Zhen Wang
Subgraph-based methods have proven to be effective and interpretable in predicting drug-drug interactions (DDIs), which are essential for medical practice and drug development.
no code implementations • 24 Oct 2024 • Zhenqian Shen, Mingyang Zhou, Yongqi Zhang, Quanming Yao
However, evaluating existing methods has several limitations, such as the absence of a unified comparison framework for DDI prediction methods, lack of assessments in meaningful real-world scenarios, and insufficient exploration of side information usage.
no code implementations • 15 Oct 2024 • Haiquan Qiu, ShuZhi Liu, Quanming Yao
Complex networks describe important structures in nature and society, composed of nodes and the edges that connect them.
no code implementations • 8 Oct 2024 • Shiguang Wu, Yaqing Wang, Yatao Bian, Quanming Yao
Meta-learning enables learning systems to adapt quickly to new tasks, similar to humans.
no code implementations • 1 Aug 2024 • Juzheng Zhang, Yatao Bian, Yongqiang Chen, Quanming Yao
Equipped with this tokenizer, UniMoT can unify molecule and text modalities under a shared token representation and an autoregressive training paradigm, enabling it to interpret molecules as a foreign language and generate them as text.
no code implementations • 14 Jul 2024 • Yaqing Wang, Hongming Piao, Daxiang Dong, Quanming Yao, Jingbo Zhou
While existing methods focus on enhancing item ID embeddings for new items within general CTR models, they tend to adopt a global feature interaction approach, which often lets items with abundant interactions overshadow new items with sparse data.
no code implementations • 29 Jun 2024 • Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang
The brute-force scale-up of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models.
no code implementations • 20 Jun 2024 • Yongqiang Chen, Quanming Yao, Juzheng Zhang, James Cheng, Yatao Bian
As LLMs are predominantly trained with 1D text data, most existing approaches adopt a graph neural network to represent a graph as a series of node tokens and feed these tokens to LLMs for graph-language alignment.
1 code implementation • 12 Jun 2024 • Juzheng Zhang, Lanning Wei, Zhen Xu, Quanming Yao
Link prediction is a fundamental task in graph learning, inherently shaped by the topology of the graph.
no code implementations • 3 Jun 2024 • Guangyi Liu, Yongqi Zhang, Yong Li, Quanming Yao
The task of reasoning over Knowledge Graphs (KGs) poses a significant challenge for Large Language Models (LLMs) due to the complex structure and large amounts of irrelevant information.
1 code implementation • 21 Mar 2024 • Guangyi Liu, Quanming Yao, Yongqi Zhang, Lei Chen
Recommendation systems, now widely deployed on various platforms, recommend relevant items to users based on their preferences.
no code implementations • 17 Mar 2024 • Haiquan Qiu, Yatao Bian, Quanming Yao
Then, the unitary adjacency matrix is obtained with a unitary projection algorithm, which is implemented by exploiting the intrinsic structure of the unitary adjacency matrix and allows GUMP to be permutation-equivariant.
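The paper's projection exploits the specific structure of the unitary adjacency matrix; as a generic illustration only, the nearest unitary matrix in Frobenius norm comes from the SVD (polar decomposition):

import numpy as np

def project_to_unitary(A):
    # If A = U S V^H, the closest unitary matrix (Frobenius norm) is U V^H.
    U, _, Vh = np.linalg.svd(A)
    return U @ Vh

A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
Q = project_to_unitary(A)
print(np.allclose(Q @ Q.conj().T, np.eye(4)))  # True: Q is unitary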
1 code implementation • 15 Mar 2024 • Zhanke Zhou, Yongqi Zhang, Jiangchao Yao, Quanming Yao, Bo Han
To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query.
1 code implementation • 29 Feb 2024 • Zhen Hao Wong, Hansi Yang, Xiaoyi Fu, Quanming Yao
Heterogeneous Graph Neural Networks (HGNNs) are a class of deep learning models designed specifically for heterogeneous graphs, which are graphs that contain different types of nodes and edges.
Ranked #2 on Node Property Prediction on ogbn-mag
no code implementations • 18 Feb 2024 • Lanning Wei, Jun Gao, Huan Zhao, Quanming Yao
This paper proposes a novel conceptual prototype for designing versatile graph learning methods with LLMs, with a particular focus on the "where" and "how" perspectives.
no code implementations • 16 Dec 2023 • Lebin Yu, Yunbo Qiu, Quanming Yao, Yuan Shen, Xudong Zhang, Jian Wang
We propose an active defense strategy, where agents automatically reduce the impact of potentially harmful messages on the final decision.
Tasks: Multi-agent Reinforcement Learning, Reinforcement Learning
1 code implementation • 25 Nov 2023 • Yaqing Wang, Zaifei Yang, Quanming Yao
Thus, the lack of DDIs is implicitly compensated by the enriched drug representations and propagated drug similarities.
1 code implementation • 15 Nov 2023 • Yongqi Zhang, Quanming Yao, Ling Yue, Xian Wu, Ziheng Zhang, Zhenxi Lin, Yefeng Zheng
Accurately predicting drug-drug interactions (DDIs) with computational methods for emerging drugs, which offer possibilities for treating and alleviating diseases, can improve patient care and contribute to efficient drug development.
1 code implementation • NeurIPS 2023 • Zhanke Zhou, Jiangchao Yao, Jiaxu Liu, Xiawei Guo, Quanming Yao, Li He, Liang Wang, Bo Zheng, Bo Han
To address this dilemma, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
2 code implementations • 22 Oct 2023 • Zhen Hao Wong, Ling Yue, Quanming Yao
Graph Neural Networks (GNNs) have shown success in various fields for learning from graph-structured data.
Ranked #1 on Link Property Prediction on ogbl-ddi
no code implementations • 20 Oct 2023 • Hansi Yang, Yongqi Zhang, Quanming Yao, James Kwok
We also propose a regularizer to align the model with graph structure.
2 code implementations • 13 Oct 2023 • Ling Yue, Yongqi Zhang, Quanming Yao, Yong Li, Xian Wu, Ziheng Zhang, Zhenxi Lin, Yefeng Zheng
Knowledge graph (KG) embedding is a fundamental task in natural language processing, and various methods have been proposed to explore semantic patterns in distinctive ways.
Ranked #1 on Link Property Prediction on ogbl-biokg
1 code implementation • 1 Oct 2023 • Shiguang Wu, Yaqing Wang, Quanming Yao
We then adopt a hierarchical adaptation mechanism to adapt the encoder at the task level and the predictor at the query level via the unified GNN adapter.
no code implementations • 8 Sep 2023 • Lanning Wei, Huan Zhao, Xiaohan Zheng, Zhiqiang He, Quanming Yao
In this paper, we propose to explore versatile graph learning approaches with LLM-based agents, and the key insight is customizing the graph learning procedures for diverse graphs and tasks.
1 code implementation • 15 Jun 2023 • Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han
Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored.
1 code implementation • 13 Jun 2023 • Xu Wang, Huan Zhao, WeiWei Tu, Quanming Yao
Next, to automatically fuse these three generative tasks, we design a surrogate metric based on the total energy to search for the weight distribution of the three pretext tasks, since the total energy corresponds to the quality of the 3D conformer. Extensive experiments on 2D molecular graphs demonstrate the accuracy, efficiency, and generalization ability of the proposed 3D PGT compared to various pre-training baselines.
1 code implementation • 6 Jun 2023 • Shiguang Wu, Yaqing Wang, Qinghe Jing, Daxiang Dong, Dejing Dou, Quanming Yao
Instead of using a fixed modulation function and deciding the modulation position by expertise, we propose a modulation framework called ColdNAS for the user cold-start problem, where we look for the proper modulation structure, including function and position, via neural architecture search.
1 code implementation • 22 Mar 2023 • Haiquan Qiu, Yongqi Zhang, Yong Li, Quanming Yao
These results further inspire us to propose a novel labeling strategy to learn more rules in KG reasoning.
1 code implementation • 1 Mar 2023 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han
Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.
1 code implementation • 17 Feb 2023 • Lanning Wei, Zhiqiang He, Huan Zhao, Quanming Yao
In recent years, Graph Neural Networks (GNNs) have been popular in the graph classification task.
no code implementations • ICLR 2023 • Hongzhi Shi, Jingtao Ding, Yufan Cao, Quanming Yao, Li Liu, Yong Li
The essence of our method is to model the formula skeleton with a message-passing flow, which helps transform the discovery of the skeleton into the search for the message-passing flow.
no code implementations • 20 Nov 2022 • Lanning Wei, Zhiqiang He, Huan Zhao, Quanming Yao
Despite the success, we observe two aspects that can be further improved: (a) enhancing ego feature extraction from the node itself, which is more reliable for extracting intra-class information; (b) designing node-wise GNNs that better adapt to nodes with different homophily ratios.
Ranked #4 on Node Classification on Actor
1 code implementation • 30 Oct 2022 • Zhen Wang, Haotong Du, Quanming Yao, Xuelong Li
In particular, we develop a generalized framework to explore topological and temporal information in TKGs.
Ranked #1 on Link Prediction on GDELT
no code implementations • 27 Sep 2022 • Hui Zhang, Quanming Yao, James T. Kwok, Xiang Bai
We design a domain-specific search space by exploring principles for having good feature extractors.
Tasks: Neural Architecture Search
1 code implementation • 14 Aug 2022 • Yinfeng Li, Chen Gao, Quanming Yao, Tong Li, Depeng Jin, Yong Li
In particular, we first unify the fine-grained user similarity and the complex matching between user preferences and spatiotemporal activity into a heterogeneous hypergraph.
no code implementations • 24 Jul 2022 • Hansi Yang, Yongqi Zhang, Quanming Yao
This scoring function, called AutoWeird, uses only the tail entity and the relation in a triplet to compute its plausibility score.
Ranked #2 on Link Property Prediction on ogbl-wikikg2
1 code implementation • 13 Jul 2022 • Xu Wang, Huan Zhao, Lanning Wei, Quanming Yao
Targeting two molecular graph datasets and one protein association subgraph dataset in the OGB graph classification task, we design a graph neural network framework for the graph classification task by introducing PAS (Pooling Architecture Search).
2 code implementations • 30 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, Bo Han
An important design component of GNN-based KG reasoning methods is called the propagation path, which contains a set of involved entities in each propagation step.
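A bare-bones sketch of a propagation path as step-wise entity sets; triples is a hypothetical list of (head, relation, tail) ids, and the pruning or sampling that actual methods apply at each step is omitted:

from collections import defaultdict

def propagation_path(triples, query_entity, steps=3):
    adj = defaultdict(set)
    for h, r, t in triples:  # relation labels ignored for brevity
        adj[h].add(t)
    frontier, path = {query_entity}, [{query_entity}]
    for _ in range(steps):
        frontier = {t for h in frontier for t in adj[h]}
        path.append(frontier)  # entities involved at this propagation step
    return path

GNN-based reasoners differ mainly in how each frontier is pruned or sampled rather than expanded exhaustively as above.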
no code implementations • 6 May 2022 • Quanming Yao, Yaqing Wang, Bo Han, James Kwok
While the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem.
2 code implementations • 5 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Yong Li
While hyper-parameters (HPs) are important for knowledge graph (KG) learning, existing methods fail to search them efficiently.
1 code implementation • 6 Apr 2022 • Zhen Xu, Lanning Wei, Huan Zhao, Rex Ying, Quanming Yao, Wei-Wei Tu, Isabelle Guyon
Researchers naturally adopt Automated Machine Learning for graph learning, aiming to reduce human effort and achieve generally top-performing GNNs, but existing methods focus more on architecture search.
no code implementations • 15 Dec 2021 • Huiming Chen, Huandong Wang, Quanming Yao, Yong Li, Depeng Jin, Qiang Yang
Federated optimization (FedOpt), which aims at collaboratively training a learning model across a large number of distributed clients, is vital for federated learning.
no code implementations • NeurIPS 2021 • Chen Gao, Yinfeng Li, Quanming Yao, Depeng Jin, Yong Li
Deep sparse networks (DSNs), whose crux is exploring high-order feature interactions, have become the state of the art on prediction tasks with high-sparsity features.
1 code implementation • EMNLP 2021 • Yaqing Wang, Song Wang, Quanming Yao, Dejing Dou
Short text classification is a fundamental task in natural language processing.
no code implementations • 16 Oct 2021 • Gen Shi, Yifan Zhu, Wenjin Liu, Quanming Yao, Xuesong Li
Other experiments also indicate that our proposed model with a pretraining strategy alleviates the problem of limited labelled data and yields a significant improvement in accuracy.
2 code implementations • 12 Oct 2021 • Zhen Xu, Sergio Escalera, Isabelle Guyon, Adrien Pavão, Magali Richard, Wei-Wei Tu, Quanming Yao, Huan Zhao
A public instance of Codabench (https://www.codabench.org/) is open to everyone, free of charge, and allows benchmark organizers to fairly compare submissions under the same setting (software, hardware, data, algorithms), with custom protocols and data formats.
3 code implementations • 24 Aug 2021 • Lanning Wei, Huan Zhao, Quanming Yao, Zhiqiang He
To address this problem, we propose to use neural architecture search (NAS) to search for adaptive pooling architectures for graph classification.
1 code implementation • 20 Aug 2021 • Xiawei Guo, Yuhan Quan, Huan Zhao, Quanming Yao, Yong Li, WeiWei Tu
Tabular data prediction (TDP) is one of the most popular industrial applications, and various methods have been designed to improve the prediction performance.
3 code implementations • 13 Aug 2021 • Yongqi Zhang, Quanming Yao
In this paper, we introduce a novel relational structure, i.e., the relational directed graph (r-digraph), which is composed of overlapped relational paths, to capture the KG's local evidence.
no code implementations • NeurIPS 2021 • Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, Dejing Dou
the target property, such that the limited labels can be effectively propagated among similar molecules.
3 code implementations • 1 Jul 2021 • Yongqi Zhang, Quanming Yao, James Tin-Yau Kwok
We first set up a search space for AutoBLM by analyzing existing scoring functions.
Ranked #6 on Link Property Prediction on ogbl-biokg
no code implementations • 14 Jun 2021 • Chen Gao, Quanming Yao, Depeng Jin, Yong Li
In this way, we can combinatorially generalize from SOTA models to data-specific CF models that have not been visited in the literature.
3 code implementations • 22 Apr 2021 • Shimin Di, Quanming Yao, Yongqi Zhang, Lei Chen
The scoring function, which measures the plausibility of triplets in knowledge graphs (KGs), is the key to ensure the excellent performance of KG embedding, and its design is also an important problem in the literature.
1 code implementation • 21 Apr 2021 • Shimin Di, Quanming Yao, Lei Chen
Recently, tensor decomposition methods have been introduced to N-ary relational data and have become the state of the art in embedding learning.
1 code implementation • 20 Apr 2021 • Yu Liu, Quanming Yao, Yong Li
N-ary relational knowledge bases (KBs) represent knowledge with binary and beyond-binary relational facts.
1 code implementation • 14 Apr 2021 • Huan Zhao, Quanming Yao, WeiWei Tu
In this work, to obtain data-specific GNN architectures and address the computational challenges faced by NAS approaches, we propose a framework, Search to Aggregate NEighborhood (SANE), that automates this design.
no code implementations • 4 Jan 2021 • Hansi Yang, Peiyu Zhang, Quanming Yao
The proposed topology-aware tensor decomposition is easy to use and simple to implement, and it can be taken as a plug-in part to upgrade many existing works, including node classification and recommendation on heterogeneous graphs.
no code implementations • 1 Jan 2021 • Hongzhi Shi, Quanming Yao, Yong Li
The score also helps relax the discrete space into a continuous one and can be uniformly transformed into matrix form by the Einstein summation convention.
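As a toy illustration of expressing a scoring function through the Einstein summation convention (a DistMult-style bilinear score, not the paper's actual search space):

import numpy as np

h = np.random.randn(32, 64)  # head embeddings (batch, dim)
r = np.random.randn(32, 64)  # relation embeddings
t = np.random.randn(32, 64)  # tail embeddings

# s_b = sum_d h_{bd} r_{bd} t_{bd}, written as a single einsum call
scores = np.einsum("bd,bd,bd->b", h, r, t)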
no code implementations • 1 Jan 2021 • Hansi Yang, Quanming Yao
Recently, a special kind of graph, i.e., the supernet, which allows two nodes to be connected by multi-choice edges, has exhibited its power in neural architecture search (NAS) by finding better architectures for computer vision (CV) and natural language processing (NLP) tasks.
no code implementations • 1 Jan 2021 • Huan Zhao, Lanning Wei, Quanming Yao, Zhiqiang He
To obtain state-of-the-art (SOTA) data-specific GNN architectures, researchers turn to neural architecture search (NAS) methods.
no code implementations • 16 Nov 2020 • Yongqi Zhang, Hui Zhang, Quanming Yao, Jun Wan
Thus, inspired by the observation that the classifier is more robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL), we design a new method, i.e., CS$^3$NL, which obtains representations by SSRL without labels and trains the classifier directly with noisy labels.
1 code implementation • 9 Nov 2020 • Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.
1 code implementation • NeurIPS 2021 • Fengli Xu, Quanming Yao, Pan Hui, Yong Li
Distinguishing the automorphic equivalence of nodes in a graph plays an essential role in many scientific domains, e.g., computational biology and social network analysis.
1 code implementation • 24 Oct 2020 • Yongqi Zhang, Quanming Yao, Lei Chen
In this paper, motivated by the observation that negative triplets with large gradients are important but rare, we propose to directly keep track of them with the cache.
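A minimal sketch of the caching idea, with a hypothetical score function standing in for gradient magnitude (not the paper's exact update rule):

import numpy as np

def sample_negative(cache, score, all_entities, n_candidates=64):
    # Mix fresh random candidates into the cache, keep only the
    # highest-scoring (hardest) negatives, then sample one from it.
    candidates = np.random.choice(all_entities, size=n_candidates)
    pool = np.concatenate([cache, candidates])
    cache = pool[np.argsort(-score(pool))][: len(cache)]
    return np.random.choice(cache), cache  # negative sample, updated cache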
1 code implementation • 24 Oct 2020 • Wei He, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao, Hongyan Zhang, Liangpei Zhang
Non-local low-rank tensor approximation has been developed as a state-of-the-art method for hyperspectral image (HSI) restoration, which includes the tasks of denoising, compressed HSI reconstruction and inpainting.
3 code implementations • 7 Oct 2020 • Yuhui Ding, Quanming Yao, Huan Zhao, Tong Zhang
Specifically, we search for a meta graph, which can capture more complex semantic relations than a meta path, to determine how graph neural networks (GNNs) propagate messages along different types of edges.
1 code implementation • NeurIPS 2020 • Jingtao Ding, Yuhan Quan, Quanming Yao, Yong Li, Depeng Jin
Negative sampling approaches are prevalent in implicit collaborative filtering for obtaining negative labels from massive unlabeled data.
3 code implementations • 26 Aug 2020 • Huan Zhao, Lanning Wei, Quanming Yao
Recent years have witnessed the popularity of Graph Neural Networks (GNN) in various scenarios.
no code implementations • 14 Aug 2020 • Yaqing Wang, Quanming Yao, James T. Kwok
Extensive low-rank matrix completion experiments on a number of synthetic and real-world data sets show that the proposed method obtains state-of-the-art recovery performance while being the fastest in comparison to existing low-rank matrix learning methods.
1 code implementation • 8 Jul 2020 • Yu Liu, Quanming Yao, Yong Li
With the rapid development of knowledge bases (KBs), the link prediction task, which completes KBs with missing facts, has been broadly studied, especially in binary relational KBs (a.k.a. knowledge graphs), with powerful tensor decomposition related methods.
1 code implementation • 2020 IEEE 36th International Conference on Data Engineering (ICDE) 2020 • Hongzhi Shi, Quanming Yao, Qi Guo, Yaguang Li, Lingyu Zhang, Jieping Ye, Yong Li, Yan Liu
Predicting Origin-Destination (OD) flow is a crucial problem for intelligent transportation.
2 code implementations • ECCV 2020 • Hui Zhang, Quanming Yao, Mingkun Yang, Yongchao Xu, Xiang Bai
In this work, inspired by the success of neural architecture search (NAS), which can identify better architectures than human-designed ones, we propose automated STR (AutoSTR) to search data-dependent backbones to boost text recognition performance.
4 code implementations • NeurIPS 2020 • Yongqi Zhang, Quanming Yao, Lei Chen
In this work, based on the relational paths, which are composed of a sequence of triplets, we define the Interstellar as a recurrent neural architecture search problem for the short-term and long-term information along the paths.
3 code implementations • 6 Nov 2019 • Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Kwok
Sample selection approaches are popular in robust learning from noisy labels.
2 code implementations • 28 Jun 2019 • Quanming Yao, Xiangning Chen, James Kwok, Yong Li, Cho-Jui Hsieh
Motivated by the recent success of automated machine learning (AutoML), we propose in this paper the search for simple neural interaction functions (SIF) in CF.
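A toy version of the idea: search over a small hypothetical set of elementwise interaction functions between user and item embeddings (the real SIF space and search strategy are richer):

import numpy as np

CANDIDATES = {
    "multiply": lambda u, v: u * v,
    "plus": lambda u, v: u + v,
    "min": lambda u, v: np.minimum(u, v),
    "max": lambda u, v: np.maximum(u, v),
}

def search_interaction(evaluate, candidates=CANDIDATES):
    # Exhaustive search over the candidate set; AutoML approaches replace
    # this loop with a smarter (e.g., differentiable) search.
    return max(candidates, key=lambda name: evaluate(candidates[name]))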
2 code implementations • 30 May 2019 • Quanming Yao, Ju Xu, Wei-Wei Tu, Zhanxing Zhu
Recently, DARTS, which constructs a differentiable search space and then optimizes it by gradient descent, has shown that it can obtain high-performance architectures and reduce the search time to several days.
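The core relaxation behind DARTS in a minimal PyTorch sketch: the discrete choice among candidate operations becomes a softmax-weighted mixture, so the architecture parameters receive gradients (toy operation set, not the real search space):

import torch
import torch.nn.functional as F

ops = [torch.nn.Identity(), torch.nn.ReLU(), torch.nn.Tanh()]
alpha = torch.zeros(len(ops), requires_grad=True)  # architecture parameters

def mixed_op(x):
    # Weighted sum over candidate ops; alpha is trained by gradient
    # descent jointly with the network weights.
    w = F.softmax(alpha, dim=0)
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = torch.randn(4, 8)
mixed_op(x).pow(2).mean().backward()  # gradients flow into alpha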
no code implementations • 12 May 2019 • Quanming Yao, Hansi Yang, En-Liang Hu, James Kwok
In real-world applications, it is important for machine learning algorithms to be robust against data outliers or corruptions.
no code implementations • 29 Apr 2019 • Yuanfei Luo, Mengshuo Wang, Hao Zhou, Quanming Yao, Wei-Wei Tu, Yuqiang Chen, Qiang Yang, Wenyuan Dai
Feature crossing captures interactions among categorical features and is useful to enhance learning from tabular data in real-world businesses.
3 code implementations • 26 Apr 2019 • Yongqi Zhang, Quanming Yao, Wenyuan Dai, Lei Chen
The algorithm is further sped up by a filter and a predictor, which avoid repeatedly training SFs with the same expressive ability and help remove bad candidates during the search before model training.
Ranked #1 on Link Prediction on FB15k
4 code implementations • 10 Apr 2019 • Yaqing Wang, Quanming Yao, James Kwok, Lionel M. Ni
Machine learning has been highly successful in data-intensive applications but is often hampered when the data set is small.
6 code implementations • 16 Dec 2018 • Yongqi Zhang, Quanming Yao, Yingxia Shao, Lei Chen
Negative sampling, which samples negative triplets from non-observed ones in the training data, is an important step in KG embedding.
Ranked #6 on Link Prediction on FB15k
2 code implementations • CVPR 2019 • Wei He, Quanming Yao, Chao Li, Naoto Yokoya, Qibin Zhao
This is done by first learning a low-dimensional projection and the related reduced image from the noisy HSI.
Ranked #10 on Hyperspectral Image Denoising on ICVL-HSI-Gaussian50
no code implementations • 23 Nov 2018 • Quanming Yao, Xiawei Guo, James T. Kwok, WeiWei Tu, Yuqiang Chen, Wenyuan Dai, Qiang Yang
To meet the standard of differential privacy, noise is usually added into the original data, which inevitably deteriorates the predicting performance of subsequent learning algorithms.
1 code implementation • 31 Oct 2018 • Zhenqian Shen, Yongqi Zhang, Lanning Wei, Huan Zhao, Quanming Yao
Machine learning (ML) methods have been developing rapidly, but configuring and selecting proper methods to achieve a desired performance is increasingly difficult and tedious.
1 code implementation • ICML 2020 • Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama
Given data with noisy labels, over-parameterized deep networks can gradually memorize the data, and fit everything in the end.
1 code implementation • 23 Jul 2018 • Quanming Yao, James T. Kwok, Bo Han
Due to its easy optimization, the convex overlapping nuclear norm has been popularly used for tensor completion.
no code implementations • ICML 2018 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni
Convolutional sparse coding (CSC) has been popularly used for the learning of shift-invariant dictionaries in image and signal processing.
5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama
Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
Ranked #9 on Learning with noisy labels on CIFAR-10N-Random3
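A condensed sketch of one co-teaching step for this entry, assuming criterion returns per-sample losses (reduction="none"); a sketch, not the paper's exact procedure:

import torch

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate, criterion):
    n_keep = int((1 - forget_rate) * len(y))
    with torch.no_grad():  # rank samples by loss without tracking gradients
        idx1 = torch.argsort(criterion(net1(x), y))[:n_keep]  # net1's small-loss pick
        idx2 = torch.argsort(criterion(net2(x), y))[:n_keep]  # net2's small-loss pick
    opt1.zero_grad()
    criterion(net1(x[idx2]), y[idx2]).mean().backward()  # net1 learns from net2's pick
    opt1.step()
    opt2.zero_grad()
    criterion(net2(x[idx1]), y[idx1]).mean().backward()  # net2 learns from net1's pick
    opt2.step()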
1 code implementation • 8 Jan 2018 • Huan Zhao, Quanming Yao, Yangqiu Song, James Kwok, Dik Lun Lee
Collaborative filtering (CF) has been one of the most important and popular recommendation methods, which aims at predicting users' preferences (ratings) based on their past behaviors.
no code implementations • 1 Aug 2017 • Quanming Yao, James T. Kwok, Taifeng Wang, Tie-Yan Liu
Based on it, we develop a proximal gradient algorithm (and its accelerated variant) with inexact proximal splitting and prove that a convergence rate of O(1/T), where T is the number of iterations, is guaranteed.
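For reference, the basic proximal gradient iteration this entry builds on, with an L1 soft-thresholding prox as a stand-in (the paper targets nonconvex regularizers with inexact proximal splitting):

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def proximal_gradient(grad_f, prox, x0, step, iters=100):
    # x_{k+1} = prox(x_k - step * grad f(x_k), step)
    x = x0
    for _ in range(iters):
        x = prox(x - step * grad_f(x), step)
    return x

# Usage: minimize 0.5 * ||x - b||^2 + ||x||_1
b = np.array([3.0, -0.2, 0.5])
x = proximal_gradient(lambda x: x - b, soft_threshold, np.zeros(3), step=0.5)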
no code implementations • 21 Jun 2017 • Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni
Convolutional sparse coding (CSC) improves sparse coding by learning a shift-invariant dictionary from the data.
no code implementations • 29 Dec 2016 • Quanming Yao, James T. Kwok, Fei Gao, Wei Chen, Tie-Yan Liu
The proximal gradient algorithm has been popularly used for convex optimization.
Optimization and Control
1 code implementation • 5 Nov 2016 • Lu Hou, Quanming Yao, James T. Kwok
Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time.
no code implementations • 29 Oct 2016 • Quanming Yao, James T. Kwok, Xiawei Guo
In this paper, we show that a closed-form solution can be derived for the proximal step associated with this regularizer.
no code implementations • 27 Jul 2016 • Quanming Yao, James T. Kwok
Learning of low-rank matrices is fundamental to many machine learning applications.
no code implementations • 13 Jun 2016 • Quanming Yao, James T. Kwok
The nonconvex regularizer is then transformed to a familiar convex regularizer, while the resultant loss function can still be guaranteed to be smooth.
1 code implementation • 3 Dec 2015 • Quanming Yao, James T. Kwok, Wenliang Zhong
This allows the use of power method to approximate the SVD efficiently.
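A rough sketch of the power-method idea referenced here, approximating the leading singular triplet (the paper's use within the solver is more refined):

import numpy as np

def power_method_svd(A, iters=50):
    # Power iteration on A^T A yields the top right singular vector;
    # the singular value and left vector follow from one multiplication.
    v = np.random.randn(A.shape[1])
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)
    return A @ v / sigma, sigma, v  # u, sigma, v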