Search Results for author: Hao Ge

Found 16 papers, 4 papers with code

Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning

no code implementations10 Apr 2025 ByteDance Seed, :, Jiaze Chen, Tiantian Fan, Xin Liu, Lingjun Liu, Zhiqi Lin, Mingxuan Wang, Chengyi Wang, Xiangpeng Wei, Wenyuan Xu, Yufeng Yuan, Yu Yue, Lin Yan, Qiying Yu, Xiaochen Zuo, Chi Zhang, Ruofei Zhu, Zhecheng An, Zhihao Bai, Yu Bao, Xingyan Bin, Jiangjie Chen, Feng Chen, Hongmin Chen, Riwei Chen, Liangqiang Chen, Zixin Chen, Jinsong Chen, Siyan Chen, Kaiyuan Chen, Zhi Chen, Jin Chen, Jiecao Chen, Jinxin Chi, Weinan Dai, Ning Dai, Jiahui Dai, Shihan Dou, Yantao Du, Zhengyin Du, Jianhui Duan, Chen Dun, Ting-Han Fan, Jiazhan Feng, Junda Feng, Ziyuan Feng, Yuwei Fu, Wenqi Fu, Hanjie Fu, Hao Ge, Hongyi Guo, Mingji Han, Li Han, Wenhao Hao, Xintong Hao, Qianyu He, Jerry He, Feng He, Wen Heng, Zehua Hong, Qi Hou, Liang Hu, Shengding Hu, Nan Hu, Kai Hua, Qi Huang, Ziyue Huang, Hongzhi Huang, Zihao Huang, Ting Huang, Wenhao Huang, Wei Jia, Bin Jia, Xiaoying Jia, Yuhua Jiang, Haobin Jiang, Ziheng Jiang, Kaihua Jiang, Chengquan Jiang, Jianpeng Jiao, Xiaoran Jin, Xing Jin, Xunhao Lai, Xiang Li, Liyi Li, Hongkai Li, Zheng Li, Shengxian Wan, Ya Wang, Yunshui Li, Chenggang Li, Niuniu Li, Siyu Li, Xi Li, Xiao Li, Aoyan Li, Yuntao Li, Nianning Liang, Xinnian Liang, Haibin Lin, Weijian Lin, Ye Lin, Zhicheng Liu, Guanlin Liu, Chenxiao Liu, Yan Liu, Gaohong Liu, Juncai Liu, Chundian Liu, Deyi Liu, Kaibo Liu, Siyao Liu, Qi Liu, Yongfei Liu, Kang Liu, Gan Liu, Boyi Liu, Rui Long, Weiqiang Lou, Chenwei Lou, Xiang Luo, Yao Luo, Caiping Lv, Heyang Lv, Bole Ma, Qianli Ma, Hongzhi Ma, Yiyuan Ma, Jin Ma, Wenchang Ma, Tingting Ma, Chen Mao, Qiyang Min, Zhe Nan, Guanghan Ning, Jinxiang Ou, Haojie Pan, Renming Pang, Yanghua Peng, Tao Peng, Lihua Qian, Mu Qiao, Meng Qu, Cheng Ren, Hongbin Ren, Yong Shan, Wei Shen, Ke Shen, Kai Shen, Guangming Sheng, Jinlong Shi, Wenlei Shi, Guang Shi, Shuai Shuai Cao, Yuxin Song, Zuquan Song, Jing Su, Yifan Sun, Tao Sun, Zewei Sun, Borui Wan, Xiaohui Wang, Xi Wang, Shuguang Wang, Jun Wang, Qinlong Wang, Chenyuan Wang, Shuai Wang, Zihan Wang, Changbao Wang, Jiaqiang Wang, Shihang Wang, Xuwu Wang, Zaiyuan Wang, Yuxuan Wang, Wenqi Wang, Taiqing Wang, Chengzhi Wei, Houmin Wei, Ziyun Wei, Shufa Wei, Zheng Wu, Yonghui Wu, Yangjun Wu, Bohong Wu, Shuang Wu, Jingqiao Wu, Ning Wu, Shuangzhi Wu, Jianmin Wu, Chenguang Xi, Fan Xia, Yuqiao Xian, Liang Xiang, Boren Xiang, Bowen Xiao, Zhen Xiao, Xia Xiao, Yongsheng Xiao, Chao Xin, Shulin Xin, Yuwen Xiong, Jingjing Xu, Ziwen Xu, Chenyin Xu, Jiayi Xu, Yifan Xu, Wei Xu, Yufei Xu, Shikun Xu, Shipeng Yan, Shen Yan, Qingping Yang, Xi Yang, Tianhao Yang, Yuehang Yang, Yuan Yang, Ximing Yang, Zeyu Yang, Guang Yang, Yifan Yang, Xuesong Yao, Bairen Yi, Fan Yin, Jianian Yin, Ziqiang Ying, Xiangyu Yu, Hongli Yu, Song Yu, Menghan Yu, Huan Yu, Siyu Yuan, Jun Yuan, Yutao Zeng, Tianyang Zhan, Zheng Zhang, Yun Zhang, Mofan Zhang, Wang Zhang, Ru Zhang, Zhi Zhang, Tianqi Zhang, Xinyi Zhang, Zhexi Zhang, Sijun Zhang, Wenqiang Zhang, Xiangxiang Zhang, Yongtao Zhang, Yuyu Zhang, Ge Zhang, He Zhang, Yue Zhang, Renjie Zheng, Ningxin Zheng, Zhuolin Zheng, Yaowei Zheng, Chen Zheng, Xiaoyun Zhi, Wanjun Zhong, Cheng Zhong, Zheng Zhong, Baoquan Zhong, Xun Zhou, Na Zhou, Huan Zhou, Hang Zhu, Defa Zhu, Wenjia Zhu, Lei Zuo

We introduce Seed1.5-Thinking, capable of reasoning through thinking before responding, resulting in improved performance on a wide range of benchmarks.

Mixture-of-Experts, Reinforcement Learning +1

ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs

no code implementations 28 Feb 2025 Hao Ge, Junda Feng, Qi Huang, Fangcheng Fu, Xiaonan Nie, Lei Zuo, Haibin Lin, Bin Cui, Xin Liu

The mismatch between data heterogeneity and the static mesh causes redundant communication and imbalanced computation, degrading training efficiency.
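To make the imbalance concrete, here is a toy, hypothetical illustration (not ByteScale's code): with a static data-parallel mesh, every rank receives the same number of sequences, so heavy-tailed sequence lengths skew per-rank token counts, and the slowest rank sets the step time.

```python
# Hypothetical illustration (not ByteScale's implementation): a static data-parallel mesh
# assigns the same number of sequences to every rank, so variable-length data skews token counts.
import random

random.seed(0)
NUM_RANKS, SEQS_PER_RANK = 8, 4

# Heavy-tailed sequence lengths, mimicking a mix of short documents and long-context samples.
lengths = [random.choice([512, 1024, 4096, 65536]) for _ in range(NUM_RANKS * SEQS_PER_RANK)]

# Static mesh: fixed round-robin assignment of sequences to ranks, regardless of length.
tokens_per_rank = [0] * NUM_RANKS
for i, length in enumerate(lengths):
    tokens_per_rank[i % NUM_RANKS] += length

print("tokens per rank:", tokens_per_rank)
print(f"max/min imbalance: {max(tokens_per_rank) / min(tokens_per_rank):.1f}x")
# In synchronous training the most-loaded rank sets the step time, so the other ranks idle.
```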

Demystifying Workload Imbalances in Large Transformer Model Training over Variable-length Sequences

no code implementations 10 Dec 2024 Haoyang Li, Fangcheng Fu, Sheng Lin, Hao Ge, XuanYu Wang, Jiawen Niu, Jie Jiang, Bin Cui

To optimize large Transformer model training, efficient parallel computing and advanced data management are essential.

Management

Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems

2 code implementations 11 Jul 2024 Yuxin Cao, Yumeng Zhu, Derui Wang, Sheng Wen, Minhui Xue, Jin Lu, Hao Ge

In contrast to widely studied sophisticated attacks in the field, we propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines in the physical world.

Adversarial Attack, Face Recognition

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code implementations 18 Mar 2024 Yuxin Cao, Jinghao Li, Xi Xiao, Derui Wang, Minhui Xue, Hao Ge, Wei Liu, Guangwu Hu

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.

Adversarial Attack, Style Transfer +2
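As a rough illustration of the region-extraction step described in this entry, the sketch below uses the public segment-anything package to produce region masks for one frame; the checkpoint and frame paths are placeholders, and the tracking and style-transfer stages of LocalStyleFool are not shown.

```python
# A minimal sketch of per-frame region extraction with SAM (assumes the `segment_anything`
# package and a downloaded checkpoint; file paths below are placeholders).
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

frame = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2RGB)  # placeholder frame
masks = mask_generator.generate(frame)  # list of dicts with 'segmentation', 'area', 'bbox', ...

# Keep the largest regions as candidate areas for a regional perturbation.
masks = sorted(masks, key=lambda m: m["area"], reverse=True)[:5]
for i, m in enumerate(masks):
    print(f"region {i}: area={m['area']}, bbox={m['bbox']}")
```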

3D Face Reconstruction Using A Spectral-Based Graph Convolution Encoder

1 code implementation 8 Mar 2024 Haoxin Xu, Zezheng Zhao, Yuxin Cao, Chunyu Chen, Hao Ge, Ziyao Liu

To overcome this limitation and enhance the reconstruction of 3D structural features, we propose an innovative approach that integrates existing 2D features with 3D features to guide the model learning process.

3D Face Reconstruction
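The spectral-based graph convolution named in the title is commonly realized as a Chebyshev polynomial filter over the mesh Laplacian; the layer below is a generic reference sketch of that building block, not the paper's exact encoder.

```python
# A generic Chebyshev spectral graph convolution layer, a common choice for mesh-vertex
# encoders; this is a reference sketch under that assumption, not the paper's architecture.
import torch
import torch.nn as nn


class ChebConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, K: int = 3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.01)
        self.K = K

    def forward(self, x: torch.Tensor, lap: torch.Tensor) -> torch.Tensor:
        # x: (num_vertices, in_dim); lap: scaled graph Laplacian, (num_vertices, num_vertices)
        tx = [x, lap @ x]
        for _ in range(2, self.K):
            tx.append(2 * lap @ tx[-1] - tx[-2])  # Chebyshev recurrence T_k = 2*L*T_{k-1} - T_{k-2}
        return sum(t @ w for t, w in zip(tx, self.weight))


# Tiny usage example on a random 4-vertex graph.
lap = torch.eye(4) - torch.full((4, 4), 0.25)   # stand-in for a scaled mesh Laplacian
feats = torch.randn(4, 16)                      # per-vertex input features
print(ChebConv(16, 32)(feats, lap).shape)       # torch.Size([4, 32])
```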

Unsupervised multiple choices question answering via universal corpus

no code implementations 27 Feb 2024 Qin Zhang, Hao Ge, Xiaojun Chen, Meng Fang

Unsupervised question answering is a promising yet challenging task, which alleviates the burden of building large-scale annotated data in a new domain.

Form, Knowledge Graphs +2

Influential Recommender System

no code implementations 18 Nov 2022 Haoren Zhu, Hao Ge, Xiaodong Gu, Pengfei Zhao, Dik Lun Lee

Traditional recommender systems are typically passive in that they try to adapt their recommendations to the user's historical interests.

Recommendation Systems

What's the relationship between CNNs and communication systems?

no code implementations 3 Mar 2020 Hao Ge, Xiaoguang Tu, Yanxiang Gong, Mei Xie, Zheng Ma

The interpretability of Convolutional Neural Networks (CNNs) is an important topic in the field of computer vision.

NN-PARS: A Parallelized Neural Network Based Circuit Simulation Framework

no code implementations 13 Feb 2020 Mohammad Saeed Abrishami, Hao Ge, Justin F. Calderon, Massoud Pedram, Shahin Nazarian

The shrinking of transistor geometries, as well as the increasing complexity of integrated circuits, significantly aggravates nonlinear design behavior.

Scheduling

The Nonequilibrium Mechanism of Noise Enhancer synergizing with Activator in HIV Latency Reactivation

no code implementations 15 Jan 2020 Xiaolu Guo, Tao Tang, Minxuan Duan, Lei Zhang, Hao Ge

Noise-modulating chemicals can synergize with transcriptional activators in reactivating latent HIV to eliminate latent HIV reservoirs.

Translation

Defending from adversarial examples with a two-stream architecture

no code implementations 30 Dec 2019 Hao Ge, Xiaoguang Tu, Mei Xie, Zheng Ma

We demonstrate that our two-stream architecture is robust to adversarial examples built by currently known attacking algorithms.

Vocal Bursts Valence Prediction

Fictitious GAN: Training GANs with Historical Models

1 code implementation ECCV 2018 Hao Ge, Yin Xia, Xu Chen, Randall Berry, Ying Wu

Inspired by the fictitious play learning process, a novel training method, referred to as Fictitious GAN, is introduced.
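Fictitious play has each player best-respond to the time average of the opponent's past strategies; the toy sketch below transplants that idea onto a 1-D GAN by training the generator against a buffer of historical discriminator snapshots. It is an illustrative reduction under those assumptions, not the authors' implementation.

```python
# Toy sketch of the fictitious-play idea in GAN training: the generator best-responds to the
# *average* of historical discriminator snapshots. Illustrative only, not the authors' code.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_out))

G, D = mlp(2, 1), mlp(1, 1)                 # generator maps 2-D noise to a scalar sample
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
history = [copy.deepcopy(D)]                # buffer of historical discriminator "strategies"

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # toy target distribution N(3, 0.5)
    fake = G(torch.randn(64, 2))

    # Discriminator: standard update against the current generator.
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: best-respond to the average logit of all historical discriminators.
    avg_logit = torch.stack([h(fake) for h in history]).mean(dim=0)
    loss_g = bce(avg_logit, torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    if step % 200 == 0:                     # periodically snapshot D into the buffer
        history.append(copy.deepcopy(D))

print("mean of generated samples:", G(torch.randn(1000, 2)).mean().item())
```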

Training Generative Adversarial Networks via Primal-Dual Subgradient Methods: A Lagrangian Perspective on GAN

no code implementations ICLR 2018 Xu Chen, Jiang Wang, Hao Ge

This formulation shows the connection between the standard GAN training process and the primal-dual subgradient methods for convex optimization.
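For reference, the textbook primal-dual subgradient iteration alluded to here is the projected descent/ascent pair below, with step sizes \alpha_t and projections onto the feasible sets X and \Lambda; how the generator and discriminator map onto the primal and dual variables is specified in the paper and not assumed here.

```latex
% Generic primal-dual subgradient updates for a saddle point of a Lagrangian L(x, \lambda):
% primal variables take a projected subgradient descent step, dual variables a projected ascent step.
\begin{aligned}
x^{(t+1)}       &= \Pi_{X}\big(x^{(t)} - \alpha_t\, g_x^{(t)}\big),
  &\quad g_x^{(t)} &\in \partial_x L\big(x^{(t)}, \lambda^{(t)}\big),\\
\lambda^{(t+1)} &= \Pi_{\Lambda}\big(\lambda^{(t)} + \alpha_t\, g_\lambda^{(t)}\big),
  &\quad g_\lambda^{(t)} &\in \partial_\lambda L\big(x^{(t)}, \lambda^{(t)}\big).
\end{aligned}
```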

A Parameter-Free Learning Automaton Scheme

no code implementations 28 Nov 2017 Hao Ge

For a learning automaton, properly configuring its learning parameters, which are crucial to the automaton's performance, is relatively difficult because manual parameter tuning is required before real applications.

Bayesian Inference
