Search Results for author: Zidong Du

Found 25 papers, 7 papers with code

Pushing the Limits of Machine Design: Automated CPU Design with AI

1 code implementation • 21 Jun 2023 • Shuyao Cheng, Pengwei Jin, Qi Guo, Zidong Du, Rui Zhang, Yunhao Tian, Xing Hu, Yongwei Zhao, Yifan Hao, Xiangtao Guan, Husheng Han, Zhengyue Zhao, Ximing Liu, Ling Li, Xishan Zhang, Yuejie Chu, Weilong Mao, Tianshi Chen, Yunji Chen

By efficiently exploring a search space of unprecedented size, $10^{10^{540}}$, which is to the best of our knowledge the largest among all machine-designed objects, and thus pushing the limits of machine design, our approach generates an industrial-scale RISC-V CPU within only 5 hours.

Online Prototype Alignment for Few-shot Policy Transfer

1 code implementation • 12 Jun 2023 • Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Yunkai Gao, Kaizhao Yuan, Ruizhi Chen, Siming Lan, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen

Domain adaptation in reinforcement learning (RL) mainly deals with the changes of observation when transferring the policy to a new environment.

Domain Adaptation, Reinforcement Learning (RL)

Flew Over Learning Trap: Learn Unlearnable Samples by Progressive Staged Training

1 code implementation • 3 Jun 2023 • Pucheng Dang, Xing Hu, Kaidi Xu, Jinhao Duan, Di Huang, Husheng Han, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

Unlearning techniques have been proposed to prevent third parties from exploiting unauthorized data; they generate unlearnable samples by adding imperceptible perturbations to data before it is publicly released.

Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

no code implementations • 2 Jun 2023 • Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen

This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality and diverse images related to the protected training data.

Denoising, Image Generation

ANPL: Towards Natural Programming with Interactive Decomposition

1 code implementation • NeurIPS 2023 • Di Huang, Ziyuan Nan, Xing Hu, Pengwei Jin, Shaohui Peng, Yuanbo Wen, Rui Zhang, Zidong Du, Qi Guo, Yewen Pu, Yunji Chen

We deploy ANPL on the Abstraction and Reasoning Corpus (ARC), a set of unique tasks that are challenging for state-of-the-art AI systems, and show that it outperforms baseline programming systems that (a) cannot decompose tasks interactively and (b) cannot guarantee that the modules are composed together correctly.

Code Generation, Program Synthesis

Conceptual Reinforcement Learning for Language-Conditioned Tasks

no code implementations • 9 Mar 2023 • Shaohui Peng, Xing Hu, Rui Zhang, Jiaming Guo, Qi Yi, Ruizhi Chen, Zidong Du, Ling Li, Qi Guo, Yunji Chen

Recently, language-conditioned policies have been proposed to facilitate policy transfer by learning a joint representation of observation and text that captures the compact, invariant information shared across environments.

reinforcement-learning, Reinforcement Learning (RL)

Ultra-low Precision Multiplication-free Training for Deep Neural Networks

no code implementations • 28 Feb 2023 • Chang Liu, Rui Zhang, Xishan Zhang, Yifan Hao, Zidong Du, Xing Hu, Ling Li, Qi Guo

Energy-efficient approaches decrease the precision of multiplications or replace them with cheaper operations such as additions or bitwise shifts in order to reduce the energy consumption of FP32 multiplications.

Quantization
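The snippet above refers to replacing FP32 multiplications with cheaper operations such as bitwise shifts. As a generic illustration of that idea only (not the paper's training scheme), the minimal sketch below quantizes a weight to the nearest signed power of two so that multiplying by it reduces to a shift; the values and function names are assumptions made for the example.

```python
import math

def power_of_two_quantize(w):
    """Quantize a nonzero weight to the nearest signed power of two so that
    multiplying by it reduces to a bitwise shift (illustrative only)."""
    sign = 1 if w >= 0 else -1
    exp = round(math.log2(abs(w)))          # nearest power-of-two exponent
    return sign, exp

def shift_multiply(x_int, sign, exp):
    """Multiply an integer activation by sign * 2**exp using shifts only."""
    return sign * (x_int << exp) if exp >= 0 else sign * (x_int >> -exp)

# 13 * 0.26 is approximated as 13 * 2**-2 = 13 >> 2 = 3 (exact value: 3.38)
sign, exp = power_of_two_quantize(0.26)
print(shift_multiply(13, sign, exp))        # -> 3
```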

Online Symbolic Regression with Informative Query

no code implementations • 21 Feb 2023 • Pengwei Jin, Di Huang, Rui Zhang, Xing Hu, Ziyuan Nan, Zidong Du, Qi Guo, Yunji Chen

Symbolic regression, the task of extracting mathematical expressions from the observed data $\{ \mathbf{x}_i, y_i \}$, plays a crucial role in scientific discovery.

regression, Symbolic Regression
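As a minimal illustration of the symbolic regression task itself (not the paper's informative-query method), the toy sketch below recovers a hidden expression from observed $(x_i, y_i)$ pairs by brute-force search over a tiny assumed family of candidate expressions.

```python
import itertools
import numpy as np

# Toy symbolic regression: search expressions of the form f(x) = a * x**p + b
# over small integer coefficients and keep the one that best explains the
# observed data {x_i, y_i}. The candidate family is an assumption for the demo.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=50)
y = 3 * x**2 + 1                       # hidden ground-truth expression

best, best_err = None, float("inf")
for a, p, b in itertools.product(range(-5, 6), range(0, 4), range(-5, 6)):
    err = np.mean((a * x**p + b - y) ** 2)
    if err < best_err:
        best, best_err = (a, p, b), err

a, p, b = best
print(f"recovered: f(x) = {a} * x^{p} + {b}  (mse={best_err:.3g})")
```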

Causality-driven Hierarchical Structure Discovery for Reinforcement Learning

no code implementations • 13 Oct 2022 • Shaohui Peng, Xing Hu, Rui Zhang, Ke Tang, Jiaming Guo, Qi Yi, Ruizhi Chen, Xishan Zhang, Zidong Du, Ling Li, Qi Guo, Yunji Chen

To address this issue, we propose CDHRL, a causality-driven hierarchical reinforcement learning framework, leveraging a causality-driven discovery instead of a randomness-driven exploration to effectively build high-quality hierarchical structures in complicated environments.

Hierarchical Reinforcement Learning, reinforcement-learning +1

Object-Category Aware Reinforcement Learning

no code implementations • 13 Oct 2022 • Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen

Object-oriented reinforcement learning (OORL) is a promising way to improve the sample efficiency and generalization ability over standard RL.

Feature Engineering, Object +3

Neural Program Synthesis with Query

no code implementations • ICLR 2022 • Di Huang, Rui Zhang, Xing Hu, Xishan Zhang, Pengwei Jin, Nan Li, Zidong Du, Qi Guo, Yunji Chen

In this work, we propose a query-based framework that trains a query neural network to generate informative input-output examples automatically and interactively from a large query space.

Program Synthesis

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers

no code implementations • NeurIPS 2021 • Husheng Han, Kaidi Xu, Xing Hu, Xiaobing Chen, Ling Liang, Zidong Du, Qi Guo, Yanzhi Wang, Yunji Chen

Our experimental results show that the certified accuracy is increased from 36.3% (the state-of-the-art certified detection) to 60.4% on the ImageNet dataset, significantly advancing certified defenses toward practical use.

Hindsight Value Function for Variance Reduction in Stochastic Dynamic Environment

1 code implementation • 26 Jul 2021 • Jiaming Guo, Rui Zhang, Xishan Zhang, Shaohui Peng, Qi Yi, Zidong Du, Xing Hu, Qi Guo, Yunji Chen

In this paper, we propose to replace the state value function with a novel hindsight value function, which leverages the information from the future to reduce the variance of the gradient estimate for stochastic dynamic environments.

Policy Gradient Methods
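For context on why a value-function baseline matters here, the minimal sketch below (assuming a two-armed bandit rather than the paper's stochastic dynamic environments) compares the variance of REINFORCE gradient estimates with and without a reward baseline. The hindsight value function replaces the usual state-value baseline with a future-informed one; this example demonstrates only the generic baseline mechanism.

```python
import numpy as np

# Toy illustration of how a baseline reduces policy-gradient variance.
rng = np.random.default_rng(0)
theta = 0.0                                   # logit of picking arm 1

def grad_estimates(use_baseline, n=10_000):
    p1 = 1.0 / (1.0 + np.exp(-theta))         # probability of arm 1
    actions = (rng.random(n) < p1).astype(float)
    rewards = np.where(actions == 1,
                       rng.normal(10.0, 1.0, n),   # arm 1: mean 10, noisy
                       rng.normal(9.0, 1.0, n))    # arm 0: mean 9, noisy
    baseline = rewards.mean() if use_baseline else 0.0
    # REINFORCE: grad log pi(a) * (R - b), with grad log pi = a - p1
    return (actions - p1) * (rewards - baseline)

for flag in (False, True):
    g = grad_estimates(flag)
    print(f"baseline={flag}: mean={g.mean():+.3f}, var={g.var():.3f}")
```

Both estimators have the same mean gradient, but the baselined one has far lower variance, which is the effect the hindsight value function is designed to strengthen in stochastic dynamic environments.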

Rubik: A Hierarchical Architecture for Efficient Graph Learning

no code implementations • 26 Sep 2020 • Xiaobing Chen, Yuke Wang, Xinfeng Xie, Xing Hu, Abanti Basak, Ling Liang, Mingyu Yan, Lei Deng, Yufei Ding, Zidong Du, Yunji Chen, Yuan Xie

Graph convolutional networks (GCNs) have emerged as a promising direction for learning inductive representations of graph data, which is widely used in applications such as e-commerce, social networks, and knowledge graphs.

Hardware Architecture

Balancing Efficiency and Flexibility for DNN Acceleration via Temporal GPU-Systolic Array Integration

no code implementations • 18 Feb 2020 • Cong Guo, Yangjie Zhou, Jingwen Leng, Yuhao Zhu, Zidong Du, Quan Chen, Chao Li, Bin Yao, Minyi Guo

We propose Simultaneous Multi-mode Architecture (SMA), a novel architecture design and execution model that offers general-purpose programmability on DNN accelerators in order to accelerate end-to-end applications.

DWM: A Decomposable Winograd Method for Convolution Acceleration

no code implementations • 3 Feb 2020 • Di Huang, Xishan Zhang, Rui Zhang, Tian Zhi, Deyuan He, Jiaming Guo, Chang Liu, Qi Guo, Zidong Du, Shaoli Liu, Tianshi Chen, Yunji Chen

In this paper, we propose a novel Decomposable Winograd Method (DWM), which breaks through the limitations of the original Winograd minimal filtering algorithm and extends it to wide and general convolutions.
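For reference, the classic Winograd F(2, 3) minimal filtering transform that DWM generalizes is sketched below (this is the textbook algorithm, not the DWM decomposition itself); it computes two outputs of a 1-D convolution with a 3-tap filter using 4 multiplications instead of 6.

```python
import numpy as np

# Standard Winograd F(2, 3) transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input elements, g: 3 filter taps -> 2 outputs of the valid correlation."""
    return AT @ ((G @ g) * (BT @ d))     # elementwise product in the transform domain

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))                # Winograd result
print([d[0:3] @ g, d[1:4] @ g])          # direct correlation, for comparison
```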

CompactNet: Platform-Aware Automatic Optimization for Convolutional Neural Networks

1 code implementation • 28 May 2019 • Weicheng Li, Rui Wang, Zhongzhi Luan, Di Huang, Zidong Du, Yunji Chen, Depei Qian

Convolutional Neural Network (CNN) based Deep Learning (DL) has achieved great progress in many real-life applications.

Image Classification

BENCHIP: Benchmarking Intelligence Processors

no code implementations • 23 Oct 2017 • Jinhua Tao, Zidong Du, Qi Guo, Huiying Lan, Lei Zhang, Shengyuan Zhou, Lingjie Xu, Cong Liu, Haifeng Liu, Shan Tang, Allen Rush, Willian Chen, Shaoli Liu, Yunji Chen, Tianshi Chen

The variety of emerging intelligence processors requires standard benchmarks for fair comparison and system optimization (in both software and hardware).

Benchmarking
