Search Results for author: Yifan Hao

Found 9 papers, 1 paper with code

On the Benefits of Over-parameterization for Out-of-Distribution Generalization

no code implementations26 Mar 2024 Yifan Hao, Yong Lin, Difan Zou, Tong Zhang

We demonstrate that in this scenario, further increasing the model's parameterization can significantly reduce the OOD loss.

Out-of-Distribution Generalization

The Surprising Harmfulness of Benign Overfitting for Adversarial Robustness

no code implementations19 Jan 2024 Yifan Hao, Tong Zhang

Recent empirical and theoretical studies have established the generalization capabilities of large machine learning models that are trained to (approximately or exactly) fit noisy data.

Adversarial Robustness

Spurious Feature Diversification Improves Out-of-distribution Generalization

no code implementations29 Sep 2023 Yong Lin, Lu Tan, Yifan Hao, Honam Wong, Hanze Dong, Weizhong Zhang, Yujiu Yang, Tong Zhang

Contrary to the conventional wisdom that focuses on learning invariant features for better OOD performance, our findings suggest that incorporating a large number of diverse spurious features weakens their individual contributions, leading to improved overall OOD generalization performance.

Out-of-Distribution Generalization
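The claim above can be illustrated with a small toy simulation (not from the paper; the feature model, weights, and shift mechanism below are illustrative assumptions): a classifier that spreads its weight over many diverse spurious features is barely hurt when a shift flips any one of them, whereas a classifier leaning on a single strong spurious feature is fragile.

```python
# Toy illustration (not from the paper): averaging many diverse spurious
# features dilutes each one's contribution, so an OOD shift that flips a
# random subset of them hurts far less than relying on one spurious feature.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 2000, 200            # test samples per trial, shift draws

def ood_accuracy(n_spurious):
    """Mean OOD accuracy when unit spurious weight is spread over
    `n_spurious` features (plus one core feature), averaged over shifts."""
    accs = []
    for _ in range(trials):
        y = rng.choice([-1.0, 1.0], size=n)
        core = y + 0.5 * rng.standard_normal(n)          # invariant feature
        # OOD shift: each spurious feature independently keeps or flips sign.
        flip = rng.choice([-1.0, 1.0], size=n_spurious)
        spur = flip[None, :] * y[:, None] + rng.standard_normal((n, n_spurious))
        score = core + spur.mean(axis=1)                 # unit total spurious weight
        accs.append(np.mean(np.sign(score) == y))
    return np.mean(accs)

print("1 spurious feature  :", ood_accuracy(1))
print("50 spurious features:", ood_accuracy(50))
```

In this toy setup the single-feature model loses its margin whenever its one spurious feature flips, while the 50-feature average stays close to the core signal.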

Reverse Diffusion Monte Carlo

no code implementations5 Jul 2023 Xunpeng Huang, Hanze Dong, Yifan Hao, Yi-An Ma, Tong Zhang

We propose a Monte Carlo sampler from the reverse diffusion process.
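For orientation only, below is a generic reverse-diffusion sampling sketch in numpy. It is not the paper's estimator (the paper is about obtaining the score via Monte Carlo); here a Gaussian target is assumed so the score of the diffused density is available in closed form.

```python
# Generic reverse-diffusion sampling sketch: run an Ornstein-Uhlenbeck forward
# process "in reverse" with Euler-Maruyama, using the score of the diffused
# density. The Gaussian target is an assumption that makes the score analytic.
import numpy as np

mu, sigma = 3.0, 0.5                 # target N(mu, sigma^2)
T, n_steps, n_chains = 5.0, 500, 20000
dt = T / n_steps
rng = np.random.default_rng(1)

def score(x, t):
    """Score of the OU-diffused target p_t = N(mu*e^{-t}, sigma^2*e^{-2t} + 1 - e^{-2t})."""
    var_t = sigma**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return -(x - mu * np.exp(-t)) / var_t

# Start from N(0, 1), which approximates p_T for large T.
y = rng.standard_normal(n_chains)
for k in range(n_steps):
    t = T - k * dt                   # current forward time
    drift = y + 2.0 * score(y, t)    # reverse-time drift for dX = -X dt + sqrt(2) dW
    y = y + dt * drift + np.sqrt(2.0 * dt) * rng.standard_normal(n_chains)

print("sample mean/std:", y.mean(), y.std(), " target:", mu, sigma)
```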

Pushing the Limits of Machine Design: Automated CPU Design with AI

1 code implementation21 Jun 2023 Shuyao Cheng, Pengwei Jin, Qi Guo, Zidong Du, Rui Zhang, Yunhao Tian, Xing Hu, Yongwei Zhao, Yifan Hao, Xiangtao Guan, Husheng Han, Zhengyue Zhao, Ximing Liu, Ling Li, Xishan Zhang, Yuejie Chu, Weilong Mao, Tianshi Chen, Yunji Chen

By efficiently exploring a search space of unprecedented size 10^{10^{540}}, to the best of our knowledge the largest of any machine-designed object, and thus pushing the limits of machine design, our approach generates an industrial-scale RISC-V CPU within only 5 hours.

Ultra-low Precision Multiplication-free Training for Deep Neural Networks

no code implementations28 Feb 2023 Chang Liu, Rui Zhang, Xishan Zhang, Yifan Hao, Zidong Du, Xing Hu, Ling Li, Qi Guo

Energy-efficient approaches reduce the precision of multiplications or replace them with cheaper operations such as addition or bitwise shifts, thereby cutting the energy cost of FP32 multiplications.

Quantization
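As a rough illustration of the multiplication-free idea (a sketch under assumed details, not this paper's training scheme), one operand can be quantized to a signed power of two so that the multiplication degenerates into a bitwise shift:

```python
# Hedged sketch: quantize a weight to the nearest signed power of two, then
# replace the multiplication with an integer shift. Illustrative only.
import numpy as np

def quantize_pow2(w):
    """Round a nonzero scalar weight to the nearest signed power of two."""
    sign = 1 if w >= 0 else -1
    exp = int(np.round(np.log2(abs(w))))
    return sign, exp

def shift_multiply(x_int, sign, exp):
    """Approximate x * w with a shift: x << exp for exp >= 0, else x >> -exp."""
    shifted = x_int << exp if exp >= 0 else x_int >> (-exp)
    return sign * shifted

# Example: int64 activations, a weight of 0.23 quantized to 2^-2 = 0.25.
x = np.array([64, -128, 512], dtype=np.int64)
s, e = quantize_pow2(0.23)
print(shift_multiply(x, s, e))   # shift-based approximation of x * 0.23
print((x * 0.23).round())        # reference using a real multiplication
```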

Class-Specific Attention (CSA) for Time-Series Classification

no code implementations19 Nov 2022 Yifan Hao, Huiping Cao, K. Selcuk Candan, Jiefei Liu, Huiying Chen, Ziwei Ma

In this paper, we propose a novel class-specific attention (CSA) module to capture significant class-specific features and improve the overall classification performance of time series.

Classification, Time Series +2
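A minimal sketch of what a class-specific attention module could look like, assuming encoder output with one feature vector per time step; the query-per-class design below is an assumption for illustration, not the paper's exact CSA architecture.

```python
# Hedged sketch: one learned attention query per class over the time axis, so
# each class pools its own weighted summary of the sequence before scoring.
import torch
import torch.nn as nn

class ClassSpecificAttention(nn.Module):
    def __init__(self, d_model: int, n_classes: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_classes, d_model))  # one query per class
        self.scorer = nn.Linear(d_model, 1)                           # per-class logit head

    def forward(self, h):                    # h: (batch, time, d_model) encoder features
        # attn[b, c, t] = softmax over t of <query_c, h[b, t]>
        attn = torch.softmax(torch.einsum("cd,btd->bct", self.queries, h), dim=-1)
        class_repr = torch.einsum("bct,btd->bcd", attn, h)   # (batch, n_classes, d_model)
        return self.scorer(class_repr).squeeze(-1)           # (batch, n_classes) logits

# Toy usage on random features standing in for a CNN/RNN encoder's output.
x = torch.randn(4, 128, 64)                  # batch=4, 128 time steps, 64-dim features
model = ClassSpecificAttention(d_model=64, n_classes=5)
print(model(x).shape)                        # torch.Size([4, 5])
```

Each class thus attends to the time steps most relevant to it, so its logit depends on a class-specific summary rather than one shared pooled vector.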

Scheduling Real-time Deep Learning Services as Imprecise Computations

no code implementations2 Nov 2020 Shuochao Yao, Yifan Hao, Yiran Zhao, Huajie Shao, Dongxin Liu, Shengzhong Liu, Tianshi Wang, Jinyang Li, Tarek Abdelzaher

The paper presents an efficient real-time scheduling algorithm for intelligent real-time edge services, defined as those that perform machine intelligence tasks, such as voice recognition, LIDAR processing, or machine vision, on behalf of local embedded devices that are themselves unable to support extensive computations.

Scheduling
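The imprecise-computation framing can be sketched as follows (illustrative only; the task names, numbers, and greedy rule are assumptions, not the paper's algorithm): each service has a mandatory portion that must meet the deadline and an optional portion whose partial execution improves result quality.

```python
# Hedged sketch: reserve time for all mandatory stages, then give the remaining
# slack greedily to optional stages with the best quality gain per millisecond.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    mandatory_ms: float      # must run before the deadline
    optional_ms: float       # may be skipped or truncated
    gain_per_ms: float       # quality gained per ms of optional work

def schedule(tasks, budget_ms):
    mandatory = sum(t.mandatory_ms for t in tasks)
    if mandatory > budget_ms:
        raise ValueError("infeasible: mandatory work alone misses the deadline")
    slack = budget_ms - mandatory
    plan = {t.name: t.mandatory_ms for t in tasks}
    for t in sorted(tasks, key=lambda t: t.gain_per_ms, reverse=True):
        extra = min(t.optional_ms, slack)    # spend slack on the best gain first
        plan[t.name] += extra
        slack -= extra
        if slack <= 0:
            break
    return plan

tasks = [Task("voice", 10, 20, 0.4), Task("lidar", 25, 30, 0.9), Task("vision", 15, 40, 0.6)]
print(schedule(tasks, budget_ms=80))   # {'voice': 10, 'lidar': 55, 'vision': 15}
```

Greedy allocation by gain density is adequate here only because the toy reward is linear in the optional time granted; richer reward models would need a different allocation rule.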
