3 code implementations • CVPR 2019 • Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, Yi Yang
In this paper, we analyze this norm-based criterion and point out that its effectiveness depends on two requirements that are not always met: (1) the norm deviation of the filters should be large; (2) the minimum norm of the filters should be small.
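Both requirements can be checked directly from the L2 norms of a layer's filters. The sketch below (NumPy, with hypothetical helper names) computes the two diagnostic quantities for a 4-D convolution weight tensor; it is an illustration of the criterion being analyzed, not the paper's code.

```python
import numpy as np

def filter_norms(conv_weight):
    """L2 norm of each filter in an (out_ch, in_ch, k, k) weight tensor."""
    return np.linalg.norm(conv_weight.reshape(conv_weight.shape[0], -1), axis=1)

def norm_criterion_diagnostics(conv_weight):
    """Return (norm deviation, minimum norm) for the norm-based criterion.

    Requirement (1): the deviation should be large, so small-norm filters
    are clearly separable from large-norm ones.
    Requirement (2): the minimum norm should be near zero, so the filters
    selected for pruning contribute little to the output.
    """
    norms = filter_norms(conv_weight)
    return norms.std(), norms.min()
```

When the deviation is small or the minimum norm is large, ranking filters by norm gives little signal about which ones are safe to remove.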
1 code implementation • 1 Mar 2023 • Yang He, Lingao Xiao
The remarkable performance of deep convolutional neural networks (CNNs) is generally attributed to their deeper and wider architectures, which can come with significant computational costs.
6 code implementations • 21 Aug 2018 • Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang
Therefore, the network trained by our method has a larger model capacity to learn from the training data.
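The capacity argument rests on keeping pruned filters updatable rather than removing them. A simplified "soft" pruning step (an illustration, not the paper's exact procedure) zeroes the lowest-norm filters while leaving their slots in the model:

```python
import numpy as np

def soft_prune_step(conv_weight, prune_ratio):
    """Zero the filters with the smallest L2 norms but keep their slots,
    so they continue to receive gradient updates and may recover later.
    Returns the updated weights and the indices that were zeroed."""
    norms = np.linalg.norm(conv_weight.reshape(conv_weight.shape[0], -1), axis=1)
    n_prune = int(prune_ratio * conv_weight.shape[0])
    pruned_idx = np.argsort(norms)[:n_prune]
    out = conv_weight.copy()
    out[pruned_idx] = 0.0
    return out, pruned_idx
```

Because the zeroed filters still receive gradients in subsequent epochs, the model retains its full parameter space during training, which is the sense in which capacity is larger than with hard pruning.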
1 code implementation • 29 May 2021 • Yang He, Ning Yu, Margret Keuper, Mario Fritz
The rapid advances in deep generative models over the past years have led to highly realistic media, known as deepfakes, that are often indistinguishable from real images to the human eye.
1 code implementation • CVPR 2017 • Yang He, Wei-Chen Chiu, Margret Keuper, Mario Fritz
The proposed network produces a high quality segmentation of a single image by leveraging information from additional views of the same scene.
Ranked #95 on Semantic Segmentation on NYU Depth v2
1 code implementation • 11 Oct 2021 • Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz
Federated learning is a powerful distributed learning scheme that allows numerous edge devices to collaboratively train a model without sharing their data.
2 code implementations • 22 Aug 2018 • Yang He, Xuanyi Dong, Guoliang Kang, Yanwei Fu, Chenggang Yan, Yi Yang
With asymptotic pruning, the information of the training set is gradually concentrated in the remaining filters, so the subsequent training and pruning process remains stable.
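One simple way to realize this gradual concentration is to ramp the pruning rate up over epochs instead of pruning at the full rate from the start. The linear ramp below is an illustrative stand-in; the paper's actual asymptotic schedule may differ.

```python
def asymptotic_prune_ratio(epoch, ramp_epochs, target_ratio):
    """Pruning ratio that grows from 0 to target_ratio over ramp_epochs,
    then stays constant. A linear ramp used purely for illustration."""
    return target_ratio * min(1.0, epoch / ramp_epochs)
```

Early epochs prune almost nothing, giving the network time to redistribute information before the full pruning pressure is applied.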
1 code implementation • NeurIPS 2023 • Yang He, Lingao Xiao, Joey Tianyi Zhou
However, these scenarios have two significant challenges: 1) the varying computational resources available on the devices require a dataset size different from the pre-defined condensed dataset, and 2) the limited computational resources often preclude the possibility of conducting additional condensation processes.
1 code implementation • ECCV 2020 • Yang He, Shadi Rahimian, Bernt Schiele, Mario Fritz
The success of today's state-of-the-art semantic segmentation methods is driven by large datasets.
1 code implementation • 6 Sep 2017 • Yang He, Margret Keuper, Bernt Schiele, Mario Fritz
In this paper, we present an approach for learning dilation parameters adaptively per channel, consistently improving semantic segmentation results on street-scene datasets like Cityscapes and Camvid.
1 code implementation • ECCV 2018 • Yang He, Bernt Schiele, Mario Fritz
Recent advances in Deep Learning and probabilistic modeling have led to strong improvements in generative models for images.
1 code implementation • 9 Oct 2023 • Junru Zhang, Lang Feng, Yang He, Yuhan Wu, Yabo Dong
While one-dimensional convolutional neural networks (1D-CNNs) have been empirically proven effective in time series classification tasks, we find that undesirable outcomes can still arise in their application, motivating us to further investigate and understand their underlying mechanisms.
no code implementations • 1 Sep 2017 • Yu Wang, Jixing Xu, Aohan Wu, Mantian Li, Yang He, Jinghe Hu, Weipeng P. Yan
This paper proposes Telepath, a vision-based bionic recommender system model, which understands users from such a perspective.
no code implementations • 18 Aug 2017 • Yu Wang, Jiayi Liu, Yuxiang Liu, Jun Hao, Yang He, Jinghe Hu, Weipeng P. Yan, Mantian Li
We present LADDER, the first deep reinforcement learning agent that can successfully learn control policies for large-scale real-world problems directly from raw inputs composed of high-level semantic information.
no code implementations • 5 Dec 2018 • Haipeng Jia, Xueshuang Xiang, Da Fan, Meiyu Huang, Changhao Sun, Yang He
Addressing these two issues, this paper proposes the Drop Pruning approach, which introduces stochastic optimization into the pruning process via a drop strategy at each pruning step: drop away, which stochastically deletes some unimportant weights, and drop back, which stochastically recovers some pruned weights.
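The two moves can be sketched as stochastic updates to a boolean keep-mask. This is a minimal illustration only; a real implementation would bias both choices by weight importance rather than flipping uniformly at random.

```python
import numpy as np

def drop_pruning_step(mask, p_away=0.2, p_back=0.1, rng=None):
    """One illustrative Drop Pruning step on a boolean keep-mask:
    - drop away: each currently kept weight is deleted with prob. p_away
    - drop back: each currently pruned weight is restored with prob. p_back
    Returns the updated mask."""
    rng = np.random.default_rng(0) if rng is None else rng
    away = mask & (rng.random(mask.shape) < p_away)
    back = ~mask & (rng.random(mask.shape) < p_back)
    return (mask & ~away) | back
```

The stochastic drop-back move is what lets the search escape a bad early pruning decision, analogous to how dropout perturbs training.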
no code implementations • 8 Apr 2019 • Yang He, Ping Liu, Linchao Zhu, Yi Yang
In addition, when evaluating the filter importance, only the magnitude information of the filters is considered.
no code implementations • 3 Nov 2019 • Yikai Wang, Liang Zhang, Quanyu Dai, Fuchun Sun, Bo Zhang, Yang He, Weipeng Yan, Yongjun Bao
In deep CTR models, exploiting users' historical data is essential for learning users' behaviors and interests.
no code implementations • 24 Jan 2020 • Xiaodong Wang, Zhedong Zheng, Yang He, Fei Yan, Zhiqiang Zeng, Yi Yang
To verify this, we evaluate our method on two widely-used image retrieval datasets, i.e., Oxford5k and Paris6K, and one person re-identification dataset, i.e., Market-1501.
no code implementations • CVPR 2020 • Yang He, Yuhang Ding, Ping Liu, Linchao Zhu, Hanwang Zhang, Yi Yang
Moreover, when evaluating the sampled criteria, LFPC comprehensively considers the contributions of all layers at the same time.
no code implementations • 18 Sep 2020 • Yang He, Bernt Schiele, Mario Fritz
Recently, learning-based image synthesis has made it possible to generate high-resolution images, either by applying popular adversarial training or a powerful perceptual loss.
no code implementations • 15 Dec 2020 • Yang He, Hui-Po Wang, Maximilian Zenk, Mario Fritz
Despite notable progress in gradient compression, existing quantization methods require further improvement at low bit widths; in particular, overall system performance often degrades substantially when quantization is applied in both directions, i.e., to compress both the model weights and the gradients.
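For context, a generic symmetric uniform quantizer (a standard baseline, not this paper's method) applied in both directions looks like the following:

```python
import numpy as np

def uniform_quantize(x, bits=4):
    """Symmetric uniform quantization of an array, with 2**(bits-1)-1
    levels per sign. A generic baseline scheme for illustration."""
    levels = 2 ** (bits - 1) - 1
    max_abs = np.abs(x).max()
    if max_abs == 0:
        return x.copy()
    scale = max_abs / levels
    return np.round(x / scale) * scale

# "Double direction" compression: quantize both sides of the exchange.
# weights_q = uniform_quantize(weights, bits=4)  # server -> client
# grads_q   = uniform_quantize(grads, bits=4)    # client -> server
```

At low bit widths the rounding error of each direction compounds across communication rounds, which is the degradation the entry refers to.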
no code implementations • 20 Jun 2021 • Ping Liu, Yuewei Lin, Yang He, Yunchao Wei, Liangli Zhen, Joey Tianyi Zhou, Rick Siow Mong Goh, Jingen Liu
In this paper, we propose to utilize Automated Machine Learning to adaptively search a neural architecture for deepfake detection.
no code implementations • 17 Jan 2022 • Xiaoxiao Xu, Chen Yang, Qian Yu, Zhiwei Fang, Jiaxing Wang, Chaosheng Fan, Yang He, Changping Peng, Zhangang Lin, Jingping Shao
We propose a general Variational Embedding Learning Framework (VELF) for alleviating the severe cold-start problem in CTR prediction.
no code implementations • 29 Apr 2022 • Xiaoxiao Xu, Zhiwei Fang, Qian Yu, Ruoran Huang, Chaosheng Fan, Yong Li, Yang He, Changping Peng, Zhangang Lin, Jingping Shao
The exposure sequence is being actively studied for user interest modeling in Click-Through Rate (CTR) prediction.
no code implementations • 28 Sep 2022 • Yang He, Yuheng Jia, Liyang Hu, Chengchuan An, Zhenbo Lu, Jingxin Xia
In this study, we propose a Parameter-Free Non-Convex Tensor Completion model (TC-PFNC) for traffic data recovery, in which a log-based relaxation term is designed to approximate the tensor algebraic rank.
no code implementations • 15 Dec 2022 • Anurag Das, Yongqin Xian, Yang He, Zeynep Akata, Bernt Schiele
For best performance, today's semantic segmentation methods use large and carefully labeled datasets, requiring expensive annotation budgets.
no code implementations • 6 Sep 2023 • Sichao Fu, Qinmu Peng, Yang He, Baokun Du, Xinge You
In recent years, graph neural networks (GNNs) have achieved remarkable progress in a variety of graph analytical tasks.
no code implementations • 30 Dec 2023 • Yao Wan, Yang He, Zhangqian Bi, JianGuo Zhang, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin, Philip S. Yu
We also benchmark several state-of-the-art neural models for code intelligence, and provide an open-source toolkit tailored for the rapid prototyping of deep-learning-based code intelligence models.
no code implementations • 10 Mar 2024 • Yang He, Lingao Xiao, Joey Tianyi Zhou, Ivor Tsang
These two challenges connect to the "subset degradation problem" in traditional dataset condensation: a subset from a larger condensed dataset is often unrepresentative compared to directly condensing the whole dataset to that smaller size.