no code implementations • 5 Apr 2022 • Bo Yuan, Danpei Zhao, Shuai Shao, Zehuan Yuan, Changhu Wang
In two typical cross-domain semantic segmentation tasks, i.e., GTA5 to Cityscapes and SYNTHIA to Cityscapes, our method achieves state-of-the-art segmentation accuracy.
no code implementations • 21 Feb 2022 • Zhecheng Yuan, Guozheng Ma, Yao Mu, Bo Xia, Bo Yuan, Xueqian Wang, Ping Luo, Huazhe Xu
One of the key challenges in visual Reinforcement Learning (RL) is to learn policies that can generalize to unseen environments.
no code implementations • 24 Jan 2022 • Yulin Chen, Zelai Yao, Haixiao Chi, Dov Gabbay, Bo Yuan, Bruno Bentzen, Beishui Liao
Then, we design a counterfactual verification module to verify the BTPK-based learning method.
no code implementations • 3 Jan 2022 • Yunhui Zeng, Zijun Liao, Yuanzhi Dai, Rong Wang, Xiu Li, Bo Yuan
The dynamic job-shop scheduling problem (DJSP) is a class of scheduling tasks that specifically consider the inherent uncertainties such as changing order requirements and possible machine breakdown in realistic smart manufacturing settings.
no code implementations • 1 Jan 2022 • Yuxing Wang, Tiantian Zhang, Yongzhe Chang, Bin Liang, Xueqian Wang, Bo Yuan
The integration of Reinforcement Learning (RL) and Evolutionary Algorithms (EAs) aims at simultaneously exploiting the sample efficiency as well as the diversity and robustness of the two paradigms.
no code implementations • 13 Dec 2021 • Yang Liu, Yongzhe Chang, Shilei Jiang, Xueqian Wang, Bin Liang, Bo Yuan
In general, IL methods can be categorized into Behavioral Cloning (BC) and Inverse Reinforcement Learning (IRL).
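Of the two, Behavioral Cloning is the simpler paradigm: it reduces imitation to supervised learning on expert state-action pairs. A minimal illustrative sketch of that reduction (the nearest-neighbor "policy" below is a toy stand-in for a trained model, not this paper's method):

```python
import numpy as np

# Toy expert demonstrations: (state, action) pairs collected from a demonstrator.
states = np.array([[0.0], [1.0], [2.0], [3.0]])
actions = np.array([0, 0, 1, 1])

def bc_policy(state):
    """Behavioral Cloning at its core: predict the expert's action for a state.

    A 1-nearest-neighbor lookup stands in for a supervised model here.
    """
    idx = np.argmin(np.abs(states[:, 0] - state))
    return actions[idx]

print(bc_policy(0.4))  # 0 - copies the closest demonstrated action
print(bc_policy(2.6))  # 1
```

IRL, by contrast, first infers a reward function that explains the demonstrations and then optimizes a policy against it.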
1 code implementation • 13 Dec 2021 • M. Vivienne Liu, Bo Yuan, Zongjie Wang, Jeffrey A. Sward, K. Max Zhang, C. Lindsay Anderson
Under the increasing need to decarbonize energy systems, there is a coupled acceleration in the connection of distributed and intermittent renewable resources to power grids.
1 code implementation • NeurIPS 2021 • Yang Sui, Miao Yin, Yi Xie, Huy Phan, Saman Zonouz, Bo Yuan
Filter pruning has been widely used for neural network compression because of the practical acceleration it enables.
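As a rough illustration of the idea (a common L1-norm baseline, not this paper's specific ranking criterion), filter pruning scores whole convolutional filters and drops the weakest ones, shrinking the layer's output channels:

```python
import numpy as np

def prune_filters(weight, keep_ratio=0.5):
    """Rank conv filters by L1 norm and keep the strongest fraction.

    weight: array of shape (out_channels, in_channels, kH, kW).
    Returns the pruned weight tensor and the indices of the kept filters.
    Illustrative toy only; real pipelines also adjust the next layer and fine-tune.
    """
    norms = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weight.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # strongest filters, original order
    return weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))       # a toy 8-filter conv layer
pruned, kept = prune_filters(w, keep_ratio=0.5)
print(pruned.shape)                     # (4, 3, 3, 3)
```

Because entire filters are removed, the remaining tensor is still dense and maps directly onto standard hardware, which is where the practical speedup comes from.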
no code implementations • 29 Sep 2021 • Wanzhao Yang, Miao Yin, Yang Sui, Bo Yuan
Based on the observations and outcomes from our analysis, we then propose SPARK, a unified DNN compression framework that can simultaneously capture model SPArsity and low-RanKness in an efficient way.
no code implementations • 29 Sep 2021 • Yuxiang Sun, Bo Yuan, Yufan Xue, Jiawei Zhou, Leonardo Stella, Xianzhong Zhou
It is also the first time in this field that an algorithm design for intelligent wargaming combines multi-attribute decision making with reinforcement learning.
no code implementations • 6 Sep 2021 • Yuxiang Sun, Bo Yuan, Yufan Xue, Jiawei Zhou, XiaoYu Zhang, Xianzhong Zhou
Researchers are increasingly focusing on intelligent games as a hot research area. This article proposes an algorithm that combines multi-attribute management with reinforcement learning and applies it to wargaming; it solves the problem of the agent's low win rate against specific rules and its inability to converge quickly during intelligent-wargame training. We also study a multi-attribute decision-making and reinforcement learning algorithm in a wargame simulation environment and obtain data on red-blue conflict. The weight of each attribute is then calculated from intuitionistic fuzzy numbers.
no code implementations • 1 Sep 2021 • Tiantian Zhang, Xueqian Wang, Bin Liang, Bo Yuan
In CDaKD, we exploit online clustering to achieve context division, and interference is further alleviated by a knowledge distillation regularization term on the output layers for learned contexts.
no code implementations • CVPR 2021 • Miao Yin, Yang Sui, Siyu Liao, Bo Yuan
Notably, on CIFAR-100, with 2.3X and 2.4X compression ratios, our models have 1.96% and 2.21% higher top-1 accuracy than the original ResNet-20 and ResNet-32, respectively.
no code implementations • 19 Jun 2021 • Hua Wei, Deheng Ye, Zhao Liu, Hao Wu, Bo Yuan, Qiang Fu, Wei Yang, Zhenhui Li
While most research focuses on the state-action function part through reducing the bootstrapping error in value function approximation induced by the distribution shift of training data, the effects of error propagation in generative modeling have been neglected.
no code implementations • 8 Jun 2021 • Jiayan Gu, Yan Wu, Ashiq Anjum, John Panneerselvam, Yao Lu, Bo Yuan
With the development of Edge Computing and Artificial Intelligence (AI) technologies, edge devices are generating data at an unprecedented volume.
no code implementations • 1 May 2021 • Chen Zhang, Siwei Wang, Wenxuan Tu, Pei Zhang, Xinwang Liu, Changwang Zhang, Bo Yuan
Multi-view clustering is an important yet challenging task in the machine learning and data mining communities.
1 code implementation • 13 Apr 2021 • Weiqi Ji, Bo Yuan, Ciyue Shen, Aviv Regev, Chris Sander, Sili Deng
While there is no analogous ground truth for real life biological systems, this work demonstrates the ability to construct and parameterize a considerable diversity of network models with high predictive ability.
no code implementations • CVPR 2021 • Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan
Although various prior works have been proposed to reduce the RNN model sizes, executing RNN models in resource-restricted environments is still a very challenging problem.
no code implementations • 28 Mar 2021 • Xiao Zang, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan
In this paper, we, for the first time, perform a systematic investigation of noise-injection-based regularization for point-cloud-domain DNNs.
no code implementations • 8 Feb 2021 • Siyu Liao, Chunhua Deng, Miao Yin, Bo Yuan
Recently, deep neural networks have been successfully applied in channel coding to improve decoding performance.
no code implementations • 8 Jan 2021 • Liang Xu, Liying Zheng, Weijun Li, Zhenbo Chen, Weishun Song, Yue Deng, Yongzhe Chang, Jing Xiao, Bo Yuan
In recent studies, much work has been done to solve time series anomaly detection by applying Variational Auto-Encoders (VAEs).
no code implementations • 30 Dec 2020 • Li Zhong, Zhen Fang, Feng Liu, Jie Lu, Bo Yuan, Guangquan Zhang
Experiments show that the proxy can effectively curb the increase of the combined risk when minimizing the source risk and distribution discrepancy.
no code implementations • 25 Nov 2020 • Deheng Ye, Guibin Chen, Peilin Zhao, Fuhao Qiu, Bo Yuan, Wen Zhang, Sheng Chen, Mingfei Sun, Xiaoqian Li, Siqin Li, Jing Liang, Zhenjie Lian, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang
Unlike prior attempts, we integrate the macro-strategy and the micromanagement of MOBA-game-playing into neural networks in a supervised and end-to-end manner.
no code implementations • NeurIPS 2020 • Deheng Ye, Guibin Chen, Wen Zhang, Sheng Chen, Bo Yuan, Bo Liu, Jia Chen, Zhao Liu, Fuhao Qiu, Hongsheng Yu, Yinyuting Yin, Bei Shi, Liang Wang, Tengfei Shi, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu
However, existing work falls short in handling the raw game complexity caused by the explosion of agent combinations, i.e., lineups, when expanding the hero pool; notably, OpenAI's Dota AI limits play to a pool of only 17 heroes.
1 code implementation • 4 Aug 2020 • Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu
We consider two cases of this setting, one is that the source domain only contains complementary-label data (completely complementary unsupervised domain adaptation, CC-UDA), and the other is that the source domain has plenty of complementary-label data and a small amount of true-label data (partly complementary unsupervised domain adaptation, PC-UDA).
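For intuition, a complementary label simply marks one class an instance does not belong to, which is far cheaper to annotate than a true label. A toy sketch of generating such data (illustrative of the setting only, not the paper's CC-UDA/PC-UDA training procedure):

```python
import numpy as np

def sample_complementary_labels(true_labels, num_classes, rng):
    """For each instance, draw uniformly a class it does NOT belong to."""
    comp = np.empty_like(true_labels)
    for i, y in enumerate(true_labels):
        choices = [c for c in range(num_classes) if c != y]
        comp[i] = rng.choice(choices)
    return comp

rng = np.random.default_rng(0)
y = np.array([0, 1, 2, 2, 0])               # hidden true labels
ybar = sample_complementary_labels(y, num_classes=3, rng=rng)
assert np.all(ybar != y)                    # never equals the true class
```

Training then has to recover an ordinary classifier from labels that only rule classes out, which is what makes the setting budget-friendly but statistically harder.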
1 code implementation • 29 Jul 2020 • Yiyang Zhang, Feng Liu, Zhen Fang, Bo Yuan, Guangquan Zhang, Jie Lu
To mitigate this problem, we consider a novel problem setting where the classifier for the target domain has to be trained with complementary-label data from the source domain and unlabeled data from the target domain, a setting we name budget-friendly UDA (BFUDA).
no code implementations • 27 Jun 2020 • Wei Wang, Gangqiang Hu, Bo Yuan, Shandong Ye, Chao Chen, YaYun Cui, Xi Zhang, Liting Qian
To illustrate the importance of prior knowledge, the result of the algorithm without prior knowledge is also investigated.
no code implementations • 23 Jun 2020 • Li Zhong, Zhen Fang, Feng Liu, Bo Yuan, Guangquan Zhang, Jie Lu
To achieve this aim, a previous study has proven an upper bound of the target-domain risk, and the open set difference, as an important term in the upper bound, is used to measure the risk on unknown target data.
no code implementations • 9 May 2020 • Miao Yin, Siyu Liao, Xiao-Yang Liu, Xiaodong Wang, Bo Yuan
Recurrent Neural Networks (RNNs) have been widely used in sequence analysis and modeling.
no code implementations • 26 Apr 2020 • Yi Xie, Zhuohang Li, Cong Shi, Jian Liu, Yingying Chen, Bo Yuan
These idealized assumptions, however, make the existing audio adversarial attacks mostly impossible to launch in a timely fashion in practice (e.g., playing unnoticeable adversarial perturbations along with the user's streaming input).
no code implementations • 23 Apr 2020 • Chunhua Deng, Siyu Liao, Yi Xie, Keshab K. Parhi, Xuehai Qian, Bo Yuan
On the other hand, the recent structured matrix-based approach (i.e., CirCNN) is limited by the relatively complex arithmetic computation (i.e., FFT), less flexible compression ratio, and its inability to fully utilize input sparsity.
no code implementations • 4 Mar 2020 • Yi Xie, Cong Shi, Zhuohang Li, Jian Liu, Yingying Chen, Bo Yuan
As the popularity of voice user interfaces (VUIs) has exploded in recent years, speaker recognition systems have emerged as an important medium for identifying speakers in many security-required applications and services.
1 code implementation • 12 Feb 2020 • Xiao Zang, Yi Xie, Jie Chen, Bo Yuan
Worse, the bad actors found for one graph model severely compromise other models as well.
no code implementations • 11 Jan 2020 • Siyu Liao, Jie Chen, Yanzhi Wang, Qinru Qiu, Bo Yuan
Continuous representation of words is a standard component in deep learning-based NLP models.
no code implementations • 16 Dec 2019 • Huy Phan, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan
In addition, CAG exhibits high transferability across different DNN classifier models in the black-box attack scenario by introducing random dropout in the process of generating perturbations.
no code implementations • 16 Sep 2019 • Rayan Mosli, Matthew Wright, Bo Yuan, Yin Pan
In this paper, we present AdversarialPSO, a black-box attack that uses fewer queries to create adversarial examples with high success rates.
no code implementations • 10 Aug 2019 • Tiantian Zhang, Li Zhong, Bo Yuan
Experimental evaluation is a major research methodology for investigating clustering algorithms and many other machine learning algorithms.
no code implementations • 28 Feb 2019 • Siyu Liao, Zhe Li, Liang Zhao, Qinru Qiu, Yanzhi Wang, Bo Yuan
Deep neural networks (DNNs), especially deep convolutional neural networks (CNNs), have emerged as a powerful technique in various machine learning applications.
1 code implementation • 29 Jan 2019 • Guoji Fu, Bo Yuan, Qiqi Duan, Xin Yao
Network representation learning (NRL) has been widely used to help analyze large-scale networks through mapping original networks into a low-dimensional vector space.
Ranked #1 on Link Prediction on IMDb
no code implementations • 4 Jul 2018 • Zhisheng Wang, Fangxuan Sun, Jun Lin, Zhongfeng Wang, Bo Yuan
Based on the developed guideline and adaptive dropping mechanism, an innovative soft-guided adaptively-dropped (SGAD) neural network is proposed in this paper.
no code implementations • 10 May 2018 • Zhe Li, Ji Li, Ao Ren, Caiwen Ding, Jeffrey Draper, Qinru Qiu, Bo Yuan, Yanzhi Wang
Recently, Deep Convolutional Neural Networks (DCNNs) have achieved tremendous success in many machine learning applications.
no code implementations • 28 Mar 2018 • Caiwen Ding, Ao Ren, Geng Yuan, Xiaolong Ma, Jiayu Li, Ning Liu, Bo Yuan, Yanzhi Wang
For FPGA implementations of deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively, using the SWM-based framework, compared with the baseline of the IBM TrueNorth processor under the same accuracy constraints on the MNIST, SVHN, and CIFAR-10 datasets.
no code implementations • 14 Mar 2018 • Yanzhi Wang, Zheng Zhan, Jiayu Li, Jian Tang, Bo Yuan, Liang Zhao, Wujie Wen, Siyue Wang, Xue Lin
Based on the universal approximation property, we further prove that SCNNs and BNNs exhibit the same energy complexity.
no code implementations • 14 Mar 2018 • Shuo Wang, Zhe Li, Caiwen Ding, Bo Yuan, Yanzhi Wang, Qinru Qiu, Yun Liang
Previous work proposes using a pruning-based compression technique to reduce the model size and thus speed up inference on FPGAs.
no code implementations • 18 Feb 2018 • Yanzhi Wang, Caiwen Ding, Zhe Li, Geng Yuan, Siyu Liao, Xiaolong Ma, Bo Yuan, Xuehai Qian, Jian Tang, Qinru Qiu, Xue Lin
Hardware accelerations of deep learning systems have been extensively investigated in industry and academia.
no code implementations • 29 Aug 2017 • Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yi-Peng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan
As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy.
no code implementations • ICML 2017 • Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, Victor Pan, Bo Yuan
Recently low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks.
no code implementations • 18 Nov 2016 • Ao Ren, Ji Li, Zhe Li, Caiwen Ding, Xuehai Qian, Qinru Qiu, Bo Yuan, Yanzhi Wang
Stochastic Computing (SC), which uses a bit-stream to represent a number within [-1, 1] by counting the number of ones in the bit-stream, has a high potential for implementing DCNNs with high scalability and ultra-low hardware footprint.
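The bipolar encoding described here can be sketched in a few lines (the XNOR multiply below is a standard SC identity, not something specific to this paper):

```python
import numpy as np

def sc_encode(x, length, rng):
    """Bipolar stochastic encoding: P(bit = 1) = (x + 1) / 2 for x in [-1, 1]."""
    return (rng.random(length) < (x + 1) / 2).astype(np.uint8)

def sc_decode(stream):
    """Recover x from the fraction of ones: x ~ 2 * mean(stream) - 1."""
    return 2 * stream.mean() - 1

rng = np.random.default_rng(42)
length = 20_000
a = sc_encode(0.6, length, rng)
b = sc_encode(-0.5, length, rng)

# For independent bipolar streams, bitwise XNOR multiplies the encoded values:
# a single gate replaces a hardware multiplier, hence the ultra-low footprint.
prod = 1 - (a ^ b)

print(round(sc_decode(a), 2))     # close to 0.6
print(round(sc_decode(prod), 2))  # close to 0.6 * (-0.5) = -0.3
```

Accuracy improves with stream length at the cost of latency, which is the central scalability trade-off in SC-based DCNN designs.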