no code implementations • 20 Dec 2023 • Zhaojian Yu, Xin Zhang, Ning Shang, Yangyu Huang, Can Xu, Yishujie Zhao, Wenxiang Hu, Qiufeng Yin
This paper thus offers a significant contribution to the field of instruction data generation and fine-tuning models, providing new insights and tools for enhancing performance in code-related tasks.
1 code implementation • 24 Apr 2023 • QiHao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu
TransMix computes mixed attention labels from attention maps that can be unreliable, and these noisy labels can harm the model.
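To make the attention-based label mixing concrete, here is a minimal sketch of the TransMix-style idea: the mixed-label weight for the pasted image is the share of class-token attention that falls inside the pasted region. This is an illustration under assumed inputs, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def transmix_lambda(attention, paste_mask):
    """Attention-based label-mixing weight (TransMix-style sketch).

    `attention`  : class-token attention over image patches (2D array).
    `paste_mask` : 1.0 where patches come from the second (pasted) image,
                   0.0 elsewhere, as in CutMix.
    Returns the fraction of total attention inside the pasted region,
    used as the mixing weight for the pasted image's label.
    """
    attention = attention / attention.sum()  # normalize to a distribution
    return float((attention * paste_mask).sum())

# Toy example: 4x4 patch grid, pasted region = right half of the image.
att = np.ones((4, 4))            # uniform attention over 16 patches
mask = np.zeros((4, 4))
mask[:, 2:] = 1.0                # 8 of 16 patches are pasted
lam = transmix_lambda(att, mask)  # uniform attention -> area fraction 0.5
```

If the attention map is noisy (e.g. it fires on background patches inside the pasted region), this weight drifts away from the semantically correct mixing ratio, which is the failure mode the snippet above criticizes.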
Ranked #1 on Data Augmentation on ImageNet
no code implementations • 19 Dec 2022 • Yangyu Huang, Xi Chen, Jongyoo Kim, Hao Yang, Chong Li, Jiaolong Yang, Dong Chen
To evaluate our method, we manually label the dense landmarks on 300W testset.
Ranked #1 on Face Alignment on 300W
2 code implementations • CVPR 2022 • Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, Fang Wen
In this paper, we study the transfer performance of pre-trained models on face analysis tasks and introduce a framework, called FaRL, for general Facial Representation Learning in a visual-linguistic manner.
Ranked #1 on Face Parsing on CelebAMask-HQ (using extra training data)
1 code implementation • ICCV 2021 • Yangyu Huang, Hao Yang, Chong Li, Jongyoo Kim, Fangyun Wei
On the other hand, AAM is an attention module that produces an anisotropic attention mask focusing on a point and the local edge connecting it to its adjacent points. The mask responds more strongly in the tangent direction than in the normal direction, which means relaxed constraints along the tangent.
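The tangent-vs-normal anisotropy can be sketched with an elongated Gaussian: the response decays slowly along the edge tangent and quickly along the normal. This is an illustrative construction, not the paper's AAM module; the function name and sigma values are assumptions.

```python
import numpy as np

def anisotropic_mask(h, w, point, tangent, sigma_t=6.0, sigma_n=2.0):
    """Anisotropic attention-mask sketch (illustrative, not the paper's AAM).

    A 2D Gaussian centered at `point`, elongated along the edge tangent
    (sigma_t > sigma_n), so constraints are looser in the tangent
    direction than in the normal direction.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - point[0], ys - point[1]
    tx, ty = tangent / np.linalg.norm(tangent)
    # Project pixel offsets onto the tangent and normal directions.
    t = dx * tx + dy * ty
    n = -dx * ty + dy * tx
    return np.exp(-(t**2 / (2 * sigma_t**2) + n**2 / (2 * sigma_n**2)))

mask = anisotropic_mask(32, 32, point=(16, 16), tangent=np.array([1.0, 0.0]))
# At the same 4-px distance, the tangent-direction response is stronger.
along_tangent = mask[16, 20]   # 4 px along the tangent
along_normal = mask[20, 16]    # 4 px along the normal
```

The stronger tangent response is exactly the "relaxed constraint" described above: a predicted point sliding along the edge is penalized less than one drifting off it.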
Ranked #5 on Face Alignment on 300W
1 code implementation • 14 Sep 2020 • Wei Hu, QiHao Zhao, Yangyu Huang, Fan Zhang
Learning a deep neural network (DNN) classifier with noisy labels is a challenging task because the DNN can easily over-fit these noisy labels due to its high capacity.
no code implementations • 9 Apr 2020 • Yibin Xu, Yangyu Huang, Jianhua Shao, George Theodorakopoulos
First, in a non-sharding blockchain, nodes can have different weights (power or stake) for reaching consensus, so an adversary needs to control half of the overall weight in order to manipulate the system ($p/2$ security level).
Distributed, Parallel, and Cluster Computing; Cryptography and Security
no code implementations • 15 Jan 2020 • Yibin Xu, Yangyu Huang
Traditional blockchain sharding approaches can tolerate only up to n/3 of nodes being adversarial, because they rely on the hypergeometric distribution to make a failure unlikely (a failure being an adversary who does not control n/3 of nodes globally but can still manipulate the consensus of a shard).
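The hypergeometric argument can be made concrete: when t of n nodes are adversarial overall and a shard of m nodes is sampled uniformly at random, the chance that the shard contains at least a given number of adversarial nodes is a hypergeometric tail. A minimal sketch, with illustrative parameters (not the paper's code):

```python
from math import comb

def shard_failure_prob(n, t, m, threshold):
    """Probability that a uniformly sampled shard of size m contains at
    least `threshold` adversarial nodes, given t adversarial nodes among
    n total (hypergeometric tail). Illustrative sketch only.
    """
    total = comb(n, m)
    return sum(comb(t, k) * comb(n - t, m - k)
               for k in range(threshold, min(t, m) + 1)) / total

# Example: 1000 nodes, adversary just under n/3, shards of 100 nodes;
# a shard fails if the adversary reaches a 2/3 majority inside it.
p = shard_failure_prob(n=1000, t=333, m=100, threshold=67)
```

With the adversary held below n/3, the expected adversarial share of a shard is about a third, so reaching a 2/3 in-shard majority sits many standard deviations into the tail and the failure probability is negligible; push the global adversarial fraction past n/3 and the tail probability grows, which is the tolerance limit described above.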
Cryptography and Security; Distributed, Parallel, and Cluster Computing; MSC 68M12
2 code implementations • CVPR 2019 • Wei Hu, Yangyu Huang, Fan Zhang, Ruirui Li
Benefiting from large-scale training datasets, deep convolutional neural networks (CNNs) have achieved impressive results in face recognition (FR).
1 code implementation • 17 Mar 2018 • Wei Hu, Yangyu Huang, Fan Zhang, Ruirui Li, Wei Li, Guodong Yuan
Deep convolutional neural networks (CNNs) have greatly improved the Face Recognition (FR) performance in recent years.
Ranked #1 on Face Verification on YouTube Faces DB