Search Results for author: Hanyu Wang

Found 27 papers, 16 papers with code

LLM-DSE: Searching Accelerator Parameters with LLM Agents

1 code implementation18 May 2025 Hanyu Wang, Xinrui Wu, Zijian Ding, Su Zheng, Chengyue Wang, Tony Nowatzki, Yizhou Sun, Jason Cong

Even though high-level synthesis (HLS) tools mitigate the challenges of programming domain-specific accelerators (DSAs) by raising the abstraction level, optimizing hardware directive parameters remains a significant hurdle.

High-Level Synthesis

MoC: Mixtures of Text Chunking Learners for Retrieval-Augmented Generation System

1 code implementation12 Mar 2025 Jihao Zhao, Zhiyuan Ji, Zhaoxin Fan, Hanyu Wang, Simin Niu, Bo Tang, Feiyu Xiong, Zhiyu Li

Retrieval-Augmented Generation (RAG), while serving as a viable complement to large language models (LLMs), often overlooks the crucial aspect of text chunking within its pipeline.

Chunking Computational Efficiency +3

SEAP: Training-free Sparse Expert Activation Pruning Unlock the Brainpower of Large Language Models

1 code implementation10 Mar 2025 Xun Liang, Hanyu Wang, Huayi Lai, Simin Niu, Shichao Song, Jiawei Yang, Jihao Zhao, Feiyu Xiong, Bo Tang, Zhiyu Li

Notably, at 50% pruning, SEAP surpasses both WandA and FLAP by over 20%, and at 20% pruning, it incurs only a 2.2% performance drop compared to the dense model.

Computational Efficiency

Primer C-VAE: An interpretable deep learning primer design method to detect emerging virus variants

no code implementations3 Mar 2025 Hanyu Wang, Emmanuel K. Tsinda, Anthony J. Dunn, Francis Chikweto, Alain B. Zemkoho

For organisms with large, similar genomes like Escherichia coli and Shigella flexneri, differentiating between species is also difficult but crucial.

Epidemiology

SurveyX: Academic Survey Automation via Large Language Models

1 code implementation20 Feb 2025 Xun Liang, Jiawei Yang, Yezhaohui Wang, Chen Tang, Zifan Zheng, Simin Niu, Shichao Song, Hanyu Wang, Bo Tang, Feiyu Xiong, Keming Mao, Zhiyu Li

Large Language Models (LLMs) have demonstrated exceptional comprehension capabilities and a vast knowledge base, suggesting that LLMs can serve as efficient tools for automated survey generation.

Survey

TruthFlow: Truthful LLM Generation via Representation Flow Correction

no code implementations6 Feb 2025 Hanyu Wang, Bochuan Cao, Yuanpu Cao, Jinghui Chen

Large language models (LLMs) are known to struggle with consistently generating truthful responses.

Hallucination TruthfulQA

SafeRAG: Benchmarking Security in Retrieval-Augmented Generation of Large Language Models

1 code implementation28 Jan 2025 Xun Liang, Simin Niu, Zhiyu Li, Sensen Zhang, Hanyu Wang, Feiyu Xiong, Jason Zhaoxin Fan, Bo Tang, Shichao Song, Mengwei Wang, Jiawei Yang

However, the incorporation of external and unverified knowledge increases the vulnerability of LLMs because attackers can perform attack tasks by manipulating knowledge.

Benchmarking Language Modeling +4

LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior

1 code implementation28 Oct 2024 Hanyu Wang, Saksham Suri, Yixuan Ren, Hao Chen, Abhinav Shrivastava

By incorporating the prior model during training, LARP learns a latent space that is not only optimized for video reconstruction but is also structured in a way that is more conducive to autoregressive generation.

Video Generation Video Reconstruction

SaVe-TAG: Semantic-aware Vicinal Risk Minimization for Long-Tailed Text-Attributed Graphs

no code implementations22 Oct 2024 Leyao Wang, Yu Wang, Bo Ni, Yuying Zhao, Hanyu Wang, Yao Ma, Tyler Derr

Real-world graph data often follows long-tailed distributions, making it difficult for Graph Neural Networks (GNNs) to generalize well across both head and tail classes.

Classification Data Augmentation +5

TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles

1 code implementation7 Oct 2024 Qingchen Yu, Shichao Song, Ke Fang, Yunfeng Shi, Zifan Zheng, Hanyu Wang, Simin Niu, Zhiyu Li

This approach allows for the relatively dynamic generation of evaluation datasets, mitigating the risk of model cheating while aligning assessments more closely with genuine user needs for reasoning capabilities, thus enhancing the reliability of evaluations.

Logical Reasoning

Controllable Text Generation for Large Language Models: A Survey

1 code implementation22 Aug 2024 Xun Liang, Hanyu Wang, Yezhaohui Wang, Shichao Song, Jiawei Yang, Simin Niu, Jie Hu, Dan Liu, Shunyu Yao, Feiyu Xiong, Zhiyu Li

This paper systematically reviews the latest advancements in CTG for LLMs, offering a comprehensive definition of its core concepts and clarifying the requirements for control conditions and text quality.

Attribute Prompt Engineering +2

Internal Consistency and Self-Feedback in Large Language Models: A Survey

1 code implementation19 Jul 2024 Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, Xunkai Li, Rong-Hua Li, Yi Wang, Zhonghao Wang, Feiyu Xiong, Zhiyu Li

In this paper, we use a unified perspective of internal consistency, offering explanations for reasoning deficiencies and hallucinations.

Empowering Large Language Models to Set up a Knowledge Retrieval Indexer via Self-Learning

1 code implementation27 May 2024 Xun Liang, Simin Niu, Zhiyu Li, Sensen Zhang, Shichao Song, Hanyu Wang, Jiawei Yang, Feiyu Xiong, Bo Tang, Chenyang Xi

Retrieval-Augmented Generation (RAG) offers a cost-effective approach to injecting real-time knowledge into large language models (LLMs).

Question Answering RAG +3

Solving General Noisy Inverse Problem via Posterior Sampling: A Policy Gradient Viewpoint

no code implementations15 Mar 2024 Haoyue Tang, Tian Xie, Aosong Feng, Hanyu Wang, Chenyang Zhang, Yang Bai

Solving image inverse problems (e.g., super-resolution and inpainting) requires generating a high-fidelity image that matches the given input (the low-resolution image or the masked image).

Image Restoration Super-Resolution

Controlled Text Generation for Large Language Models with Dynamic Attribute Graphs

1 code implementation17 Feb 2024 Xun Liang, Hanyu Wang, Shichao Song, Mengting Hu, Xunzhi Wang, Zhiyu Li, Feiyu Xiong, Bo Tang

In this study, we introduce a pluggable CTG framework for Large Language Models (LLMs) named Dynamic Attribute Graphs-based controlled text generation (DATG).

Attribute Language Modeling +3

Multimodality-guided Image Style Transfer using Cross-modal GAN Inversion

1 code implementation4 Dec 2023 Hanyu Wang, Pengxiang Wu, Kevin Dela Rosa, Chen Wang, Abhinav Shrivastava

Compared to IIST, such approaches provide more flexibility with text-specified styles, which are useful in scenarios where the style is hard to define with reference images.

Style Transfer

PathRL: An End-to-End Path Generation Method for Collision Avoidance via Deep Reinforcement Learning

no code implementations20 Oct 2023 Wenhao Yu, Jie Peng, Quecheng Qiu, Hanyu Wang, Lu Zhang, Jianmin Ji

However, two roadblocks arise when training a DRL policy that outputs paths: (1) the action space for potential paths often involves higher dimensions compared to low-level commands, which increases the difficulty of training; (2) it takes multiple time steps to track a path instead of a single time step, which requires the path to predict the interactions of the robot w.r.t.

Collision Avoidance Deep Reinforcement Learning +1

Towards Scalable Neural Representation for Diverse Videos

no code implementations CVPR 2023 Bo He, Xitong Yang, Hanyu Wang, Zuxuan Wu, Hao Chen, Shuaiyi Huang, Yixuan Ren, Ser-Nam Lim, Abhinav Shrivastava

Implicit neural representations (INR) have gained increasing attention in representing 3D scenes and images, and have been recently applied to encode videos (e.g., NeRV, E-NeRV).

Action Recognition Video Compression

Deep learning forward and reverse primer design to detect SARS-CoV-2 emerging variants

no code implementations25 Sep 2022 Hanyu Wang, Emmanuel K. Tsinda, Anthony J. Dunn, Francis Chikweto, Nusreen Ahmed, Emanuela Pelosi, Alain B. Zemkoho

Hence, in this paper, we develop a semi-automated method to design both forward and reverse primer sets to detect SARS-CoV-2 variants.

Neural Space-filling Curves

no code implementations18 Apr 2022 Hanyu Wang, Kamal Gupta, Larry Davis, Abhinav Shrivastava

We present Neural Space-filling Curves (SFCs), a data-driven approach to infer a context-based scan order for a set of images.

Image Compression

NeRV: Neural Representations for Videos

3 code implementations NeurIPS 2021 Hao Chen, Bo He, Hanyu Wang, Yixuan Ren, Ser-Nam Lim, Abhinav Shrivastava

In contrast, with NeRV, we can use any neural network compression method as a proxy for video compression, and achieve comparable performance to traditional frame-based video compression approaches (H.264, HEVC, etc.).

Denoising Neural Network Compression +3

Learning 3D Keypoint Descriptors for Non-Rigid Shape Matching

no code implementations ECCV 2018 Hanyu Wang, Jianwei Guo, Dong-Ming Yan, Weize Quan, Xiaopeng Zhang

In this paper, we present a novel deep learning framework that derives discriminative local descriptors for 3D surface shapes.

Metric Learning Triplet
