Search Results for author: Longhui Yu

Found 11 papers, 8 papers with code

Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision

1 code implementation • 14 Mar 2024 • Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, Chuang Gan

This paper studies how to tackle hard reasoning tasks (e.g., level 4-5 MATH problems) via learning from human annotations on easier tasks (e.g., level 1-3 MATH problems), which we term easy-to-hard generalization.

Math • Reinforcement Learning (RL) +1

Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization

1 code implementation • 10 Nov 2023 • Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf

We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT).
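The snippet above only names the idea; the core of a butterfly factorization — parameterizing an orthogonal matrix as a product of log2(d) sparse rotation factors, each with d/2 angles — can be sketched in a few lines. Everything below (the factor construction, the angle values) is an illustrative assumption, not the BOFT implementation:

```python
import math

def rotation_factor(d, pairs_stride, angles):
    """d x d butterfly factor: a 2x2 Givens rotation on each index pair
    (i, i + pairs_stride). Returns the matrix as nested lists."""
    m = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]
    k = 0
    for i in range(d):
        j = i + pairs_stride
        if j < d and (i // pairs_stride) % 2 == 0:
            c, s = math.cos(angles[k]), math.sin(angles[k])
            m[i][i], m[i][j], m[j][i], m[j][j] = c, -s, s, c
            k += 1
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

d = 4
# log2(d) = 2 butterfly factors, each with d/2 = 2 rotation angles:
# 4 parameters total, versus d*(d-1)/2 = 6 for a dense orthogonal matrix.
b1 = rotation_factor(d, 1, [0.3, 1.1])   # rotates pairs (0,1) and (2,3)
b2 = rotation_factor(d, 2, [0.7, 0.2])   # rotates pairs (0,2) and (1,3)
q = matmul(b2, b1)

# q is orthogonal by construction: q^T q should be the identity.
qt = [list(row) for row in zip(*q)]
qtq = matmul(qt, q)
ok = all(abs(qtq[i][j] - (1.0 if i == j else 0.0)) < 1e-9
         for i in range(d) for j in range(d))
print(ok)  # True
```

The parameter saving grows with dimension: for d = 1024, the butterfly product needs d/2 · log2(d) = 5120 angles instead of ~524k entries for a dense orthogonal parameterization.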

MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models

1 code implementation • 21 Sep 2023 • Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu

Our MetaMath-7B model achieves 66.4% on GSM8K and 19.4% on MATH, exceeding the state-of-the-art models of the same size by 11.5% and 8.7%.

Ranked #57 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning • GSM8K +4

Forward-Backward Reasoning in Large Language Models for Mathematical Verification

no code implementations • 15 Aug 2023 • Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu, Yu Zhang, Zhenguo Li, James T. Kwok

Instead of using forward or backward reasoning alone, we propose FOBAR to combine FOrward and BAckward Reasoning for verification.

Mathematical Reasoning
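A minimal sketch of combining forward and backward signals for answer verification might look as follows. The scoring rule, the `backward_check` interface, and the weight `alpha` are illustrative assumptions, not the paper's actual FOBAR formulation:

```python
from collections import Counter

def fobar_select(candidates, backward_check, alpha=0.5):
    """Pick an answer by combining FOrward and BAckward signals.

    candidates: answers sampled by forward reasoning (majority-vote signal).
    backward_check: answer -> score in [0, 1], e.g. the fraction of
        masked-question probes the model recovers when given that answer.
    alpha: weight between forward vote share and backward verification.
    """
    votes = Counter(candidates)
    n = len(candidates)
    best, best_score = None, -1.0
    for ans, count in votes.items():
        score = alpha * (count / n) + (1 - alpha) * backward_check(ans)
        if score > best_score:
            best, best_score = ans, score
    return best

# Toy usage: forward sampling slightly favours 18 and 20 equally often
# in votes, but backward verification strongly favours 20.
samples = [18, 18, 20, 20, 20, 7]
backward = {18: 0.2, 20: 0.9, 7: 0.1}
print(fobar_select(samples, lambda a: backward[a]))  # 20
```

The design point is that forward voting and backward checking fail in different ways, so a combined score can reject a frequently sampled but unverifiable answer.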

ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis

1 code implementation • 18 May 2023 • Shoukang Hu, Kaichen Zhou, Kaiyu Li, Longhui Yu, Lanqing Hong, Tianyang Hu, Zhenguo Li, Gim Hee Lee, Ziwei Liu

In this paper, we propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.

3D Reconstruction • SSIM

DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality

1 code implementation • CVPR 2023 • Yuqing Wang, Yizhi Wang, Longhui Yu, Yuesheng Zhu, Zhouhui Lian

First, we adopt Transformers instead of RNNs to process sequential data and design a relaxation representation for vector outlines, markedly improving the model's capability and stability in synthesizing long and complex outlines.

Decoder • Vector Graphics

Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap

3 code implementations • 11 Mar 2023 • Weiyang Liu, Longhui Yu, Adrian Weller, Bernhard Schölkopf

We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives.
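As a toy illustration of quantifying how uniformly points spread on the unit hypersphere (not the paper's exact objective), one can measure pairwise Riesz energy, which is lower when the points are more evenly spread:

```python
import math

def hyperspherical_energy(vectors, s=1.0):
    """Mean pairwise Riesz s-energy of unit vectors: lower energy means
    a more uniform spread on the hypersphere. Inputs are normalized first."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    us = [normalize(v) for v in vectors]
    energy, pairs = 0.0, 0
    for i in range(len(us)):
        for j in range(i + 1, len(us)):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(us[i], us[j])))
            energy += 1.0 / (dist ** s)
            pairs += 1
    return energy / pairs

# Two antipodal points are maximally uniform on the circle, so they
# have much lower energy than two nearly collapsed points.
uniform = hyperspherical_energy([[1, 0], [-1, 0]])
collapsed = hyperspherical_energy([[1, 0], [0.99, 0.01]])
print(uniform < collapsed)  # True
```

Minimizing such an energy over class prototypes or features pushes them apart on the sphere, which is the intuition behind using uniformity as a training objective.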

Dual-Curriculum Teacher for Domain-Inconsistent Object Detection in Autonomous Driving

no code implementations • 17 Oct 2022 • Longhui Yu, Yifan Zhang, Lanqing Hong, Fei Chen, Zhenguo Li

Specifically, DucTeacher consists of two curriculums: (1) a domain evolving curriculum, which learns from the data progressively and handles data distribution discrepancy by estimating the similarity between domains, and (2) a distribution matching curriculum, which estimates the class distribution of each unlabeled domain to handle class distribution shifts.

Autonomous Driving • object-detection +2

Continual Learning by Modeling Intra-Class Variation

1 code implementation • 11 Oct 2022 • Longhui Yu, Tianyang Hu, Lanqing Hong, Zhen Liu, Adrian Weller, Weiyang Liu

It has been observed that neural networks perform poorly when the data or tasks are presented sequentially.

Continual Learning

Multi-Teacher Knowledge Distillation for Incremental Implicitly-Refined Classification

no code implementations • 23 Feb 2022 • Longhui Yu, Zhenyu Weng, Yuqing Wang, Yuesheng Zhu

However, distilling knowledge from two teacher models could result in the student model making some redundant predictions.

Classification • Incremental Learning +1
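A minimal sketch of distilling from several teachers, assuming the simple strategy of averaging the teachers' softened distributions before taking a KL divergence against the student (the paper's actual loss and weighting may differ):

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / t) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def multi_teacher_kd_loss(student_logits, teacher_logits_list, t=2.0):
    """KL(avg-teacher || student) on temperature-softened distributions,
    scaled by t^2 as is conventional in knowledge distillation."""
    p_student = softmax(student_logits, t)
    teacher_ps = [softmax(tl, t) for tl in teacher_logits_list]
    k = len(teacher_ps)
    p_avg = [sum(ps[i] for ps in teacher_ps) / k
             for i in range(len(p_student))]
    return (t ** 2) * sum(p * math.log(p / q)
                          for p, q in zip(p_avg, p_student))

# Toy usage: a student distilled from two teachers on a 3-class problem.
loss = multi_teacher_kd_loss([2.0, 0.5, 0.1],
                             [[3.0, 1.0, 0.2], [2.5, 0.8, 0.3]])
print(loss >= 0.0)  # True; KL divergence is nonnegative
```

Averaging the teachers before the KL is one way to avoid the redundant-prediction issue the snippet mentions: the student matches a single merged target rather than two possibly conflicting ones.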

Memory Replay with Data Compression for Continual Learning

1 code implementation • ICLR 2022 • Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu

In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase the number that can be kept in the memory buffer.

Autonomous Driving • class-incremental learning +6
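The trade-off MRDC exploits can be illustrated with a toy replay buffer: compressing each stored sample lets a fixed byte budget hold more of them. The byte-string "samples" and the zlib choice below are illustrative assumptions, not the paper's pipeline (which compresses images):

```python
import random
import zlib

def fill_buffer(samples, budget_bytes, compress=True, level=6):
    """Pack serialized samples into a fixed-size replay buffer, optionally
    compressing each one first. Returns the list of stored blobs."""
    stored, used = [], 0
    for s in samples:
        blob = zlib.compress(s, level) if compress else s
        if used + len(blob) > budget_bytes:
            break
        stored.append(blob)
        used += len(blob)
    return stored

random.seed(0)
# Toy "images": highly redundant 1 KiB byte strings that compress well.
samples = [bytes([random.randrange(4)]) * 1024 for _ in range(100)]

raw = fill_buffer(samples, budget_bytes=8192, compress=False)
packed = fill_buffer(samples, budget_bytes=8192, compress=True)
print(len(raw), len(packed))  # compression fits many more samples
```

More stored (if slightly degraded) samples per byte is exactly the quantity-vs-quality trade-off the paper studies for replay-based continual learning.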
