Search Results for author: Tianyu Fu

Found 17 papers, 6 papers with code

Exclusivity-Consistency Regularized Knowledge Distillation for Face Recognition

no code implementations ECCV 2020 Xiaobo Wang, Tianyu Fu, Shengcai Liao, Shuo Wang, Zhen Lei, Tao Mei

Knowledge distillation is an effective tool to compress large pre-trained Convolutional Neural Networks (CNNs) or their ensembles into models applicable to mobile and embedded devices.

Diversity Face Recognition +2

MetaFE-DE: Learning Meta Feature Embedding for Depth Estimation from Monocular Endoscopic Images

no code implementations 5 Feb 2025 Dawei Lu, Deqiang Xiao, Danni Ai, Jingfan Fan, Tianyu Fu, Yucong Lin, Hong Song, Xujiong Ye, Lei Zhang, Jian Yang

Given that RGB and depth images are two views of the same endoscopic surgery scene, in this paper, we introduce a novel concept referred to as "meta feature embedding (MetaFE)", in which the physical entities (e.g., tissues and surgical instruments) of endoscopic surgery are represented using shared features that can be alternatively decoded into RGB or depth images.

Monocular Depth Estimation Self-Supervised Learning

FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models

1 code implementation 30 Dec 2024 Tianyu Fu, Tengxuan Liu, Qinghao Han, Guohao Dai, Shengen Yan, Huazhong Yang, Xuefei Ning, Yu Wang

Leveraging the unique properties of similarity over importance, we introduce FrameFusion, a novel approach that combines similarity-based merging with importance-based pruning for better token reduction in LVLMs.

Question Answering Token Reduction +1
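The similarity-then-importance pipeline described in the FrameFusion snippet above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the `reduce_tokens` helper, the neighbour-merging rule, and the max-based score update are all assumptions made for the sketch.

```python
import numpy as np

def reduce_tokens(tokens, importance, sim_threshold=0.9, keep_ratio=0.5):
    """Sketch: merge similar neighbouring tokens, then prune by importance.

    tokens: (N, D) array of token embeddings
    importance: (N,) array of per-token importance scores
    """
    # 1) Similarity-based merging: fold each token into its predecessor
    #    when their cosine similarity exceeds the threshold.
    merged, scores = [tokens[0]], [importance[0]]
    for t, s in zip(tokens[1:], importance[1:]):
        prev = merged[-1]
        cos = t @ prev / (np.linalg.norm(t) * np.linalg.norm(prev) + 1e-8)
        if cos > sim_threshold:
            merged[-1] = (prev + t) / 2          # average similar neighbours
            scores[-1] = max(scores[-1], s)      # keep the stronger score
        else:
            merged.append(t)
            scores.append(s)
    merged, scores = np.stack(merged), np.array(scores)

    # 2) Importance-based pruning: keep only the top-k remaining tokens.
    k = max(1, int(len(merged) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])      # preserve temporal order
    return merged[keep]
```

With two near-duplicate pairs in a four-token sequence, merging alone already halves the token count before any pruning is applied.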

Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis

no code implementations 3 Nov 2024 Liang Bai, Hong Song, Yucong Lin, Tianyu Fu, Deqiang Xiao, Danni Ai, Jingfan Fan, Jian Yang

Additionally, we introduce a similarity-based feature compensation mechanism that integrates generated old class features with similar new class features to synthesize robust retrospective representations.

class-incremental learning Class Incremental Learning +1
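The similarity-based feature compensation described in the snippet above can be sketched as a convex blend between a synthesized old-class feature and its most similar new-class feature. The function name, the cosine-similarity criterion, and the blending weight `alpha` are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def compensate(old_feat, new_feats, alpha=0.7):
    """Blend a synthesized old-class feature with its most similar
    (by cosine similarity) new-class feature to form a more robust
    retrospective representation."""
    sims = new_feats @ old_feat / (
        np.linalg.norm(new_feats, axis=1) * np.linalg.norm(old_feat) + 1e-8)
    nearest = new_feats[np.argmax(sims)]
    return alpha * old_feat + (1 - alpha) * nearest
```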

Double-Shot 3D Shape Measurement with a Dual-Branch Network for Structured Light Projection Profilometry

no code implementations 19 Jul 2024 Mingyang Lei, Jingfan Fan, Long Shao, Hong Song, Deqiang Xiao, Danni Ai, Tianyu Fu, Ying Gu, Jian Yang

Within PDCNet, a Transformer branch is used to capture global perception in the fringe images, while a CNN branch is designed to collect local details in the speckle images.

Hypergraph Multi-modal Large Language Model: Exploiting EEG and Eye-tracking Modalities to Evaluate Heterogeneous Responses for Video Understanding

1 code implementation 11 Jul 2024 Minghui Wu, Chenxu Zhao, Anyang Su, Donglin Di, Tianyu Fu, Da An, Min He, Ya Gao, Meng Ma, Kun Yan, Ping Wang

Along with the dataset, we designed a Hypergraph Multi-modal Large Language Model (HMLLM) to explore the associations among different demographics, video elements, EEG, and eye-tracking indicators.

EEG Language Modeling +4

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

1 code implementation 21 Jun 2024 Tianyu Fu, Haofeng Huang, Xuefei Ning, Genghan Zhang, Boju Chen, Tianqi Wu, Hongyi Wang, Zixiao Huang, Shiyao Li, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang

Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths.

Language Modeling Language Modelling +3
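The contrast drawn in the MoA snippet above, between one uniform sparse mask and per-head heterogeneous masks, can be illustrated with causal sliding windows. This is a minimal sketch under assumed window sizes; the `heterogeneous_masks` helper is illustrative and does not reproduce MoA's automatic mask search.

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Causal sliding-window attention mask: token i attends only to
    the `window` most recent tokens (including itself)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def heterogeneous_masks(seq_len, windows):
    """One mask per head, each with its own window size, in contrast
    to a single uniform sparse pattern shared by every head."""
    return np.stack([sliding_window_mask(seq_len, w) for w in windows])

# three heads with different attention spans over an 8-token sequence
masks = heterogeneous_masks(seq_len=8, windows=[2, 4, 8])
```

Mask density grows with window size: the widest head here recovers the full causal pattern, while the narrowest keeps under half of it.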

Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study

1 code implementation 20 Jun 2024 Xuefei Ning, Zifu Wang, Shiyao Li, Zinan Lin, Peiran Yao, Tianyu Fu, Matthew B. Blaschko, Guohao Dai, Huazhong Yang, Yu Wang

We reveal some findings: (1) Teaching materials that make it easier for students to learn have clearer and more accurate logic when using in-context learning as the student's "learning" method; (2) Weak-to-strong generalization: LbT might help improve strong models by teaching weak models; (3) Diversity in students might help: teaching multiple students could be better than teaching one student or the teacher itself.

In-Context Learning Knowledge Distillation

Representation Learning for Frequent Subgraph Mining

no code implementations 22 Feb 2024 Rex Ying, Tianyu Fu, Andrew Wang, Jiaxuan You, Yu Wang, Jure Leskovec

SPMiner combines graph neural networks, order embedding space, and an efficient search strategy to identify network subgraph patterns that appear most frequently in the target graph.

Representation Learning Subgraph Counting
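The order embedding space mentioned in the SPMiner snippet above admits a simple containment test: if a query's embedding is component-wise no greater than a target's, the query is predicted to occur as a subgraph. A rough sketch of that test under assumed 2-D embeddings; the `likely_subgraph` name and margin are illustrative, not SPMiner's implementation.

```python
import numpy as np

def likely_subgraph(z_query, z_target, margin=0.0):
    """Order-embedding violation penalty: zero when the query embedding
    is component-wise <= the target embedding (containment predicted),
    positive otherwise."""
    violation = np.maximum(z_query - z_target - margin, 0.0)
    return float((violation ** 2).sum())
```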

FlightLLM: Efficient Large Language Model Inference with a Complete Mapping Flow on FPGAs

no code implementations 8 Jan 2024 Shulin Zeng, Jun Liu, Guohao Dai, Xinhao Yang, Tianyu Fu, Hongyi Wang, Wenheng Ma, Hanbo Sun, Shiyao Li, Zixiao Huang, Yadong Dai, Jintao Li, Zehao Wang, Ruoyu Zhang, Kairui Wen, Xuefei Ning, Yu Wang

However, existing GPU and transformer-based accelerators cannot efficiently process compressed LLMs, due to the following unresolved challenges: low computational efficiency, underutilized memory bandwidth, and large compilation overheads.

Computational Efficiency Language Modeling +3

DeSCo: Towards Generalizable and Scalable Deep Subgraph Counting

1 code implementation 16 Aug 2023 Tianyu Fu, Chiyue Wei, Yu Wang, Rex Ying

We introduce DeSCo, a scalable neural deep subgraph counting pipeline, designed to accurately predict both the count and occurrence position of queries on target graphs after a single training run.

Graph Neural Network Graph Regression +2
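For context on the task the DeSCo snippet above addresses, here is a brute-force subgraph-isomorphism counter; its cost is exponential in pattern size, which is the kind of overhead learned counting pipelines aim to sidestep. This is a generic baseline written for the sketch, not DeSCo code, and it counts ordered injective mappings, so each triangle is counted once per automorphism.

```python
from itertools import permutations

def count_subgraphs(target_edges, n_target, query_edges, n_query):
    """Count (non-induced) subgraph isomorphisms: injective node
    mappings under which every query edge maps to a target edge.
    Exhaustive over all permutations -- exponential in n_query."""
    target = {frozenset(e) for e in target_edges}
    count = 0
    for mapping in permutations(range(n_target), n_query):
        if all(frozenset((mapping[u], mapping[v])) in target
               for u, v in query_edges):
            count += 1
    return count

# complete graph K4 and a triangle query
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
triangle = [(0, 1), (1, 2), (2, 0)]
```

K4 contains 4 distinct triangles; since each triangle has 6 automorphisms, the ordered count is 24.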

Mis-classified Vector Guided Softmax Loss for Face Recognition

no code implementations 26 Nov 2019 Xiaobo Wang, Shifeng Zhang, Shuo Wang, Tianyu Fu, Hailin Shi, Tao Mei

Face recognition has witnessed significant progress due to advances in deep convolutional neural networks (CNNs), the central task of which is improving feature discrimination.

Face Recognition

Improved Selective Refinement Network for Face Detection

no code implementations 20 Jan 2019 Shifeng Zhang, Rui Zhu, Xiaobo Wang, Hailin Shi, Tianyu Fu, Shuo Wang, Tao Mei, Stan Z. Li

With the availability of the face detection benchmark WIDER FACE dataset, much progress has been made by various algorithms in recent years.

Data Augmentation Face Detection +1

Support Vector Guided Softmax Loss for Face Recognition

4 code implementations 29 Dec 2018 Xiaobo Wang, Shuo Wang, Shifeng Zhang, Tianyu Fu, Hailin Shi, Tao Mei

Face recognition has witnessed significant progress due to advances in deep convolutional neural networks (CNNs), the central challenge of which is feature discrimination.

Face Recognition
