1 code implementation • EMNLP 2021 • Yingya Li, Jun Wang, Bei Yu
We also conducted a case study that applied this prediction model to retrieve specific health advice on COVID-19 treatments from LitCovid, a large COVID research literature portal, demonstrating the usefulness of retrieving health advice sentences as an advanced research literature navigation function for health researchers and the general public.
no code implementations • NAACL (unimplicit) 2022 • Yingya Li, Bei Yu
Prior studies have raised concerns over specificity issues in clinical advice.
1 code implementation • 16 Mar 2025 • Tianyuan Qu, Longxiang Tang, Bohao Peng, Senqiao Yang, Bei Yu, Jiaya Jia
Together, our LSDBench and RHS framework address the unique challenges of high-NSD long-video tasks, setting a new standard for evaluating and improving LVLMs in this domain.
no code implementations • 11 Mar 2025 • Mingkang Zhu, Xi Chen, Zhongdao Wang, Bei Yu, Hengshuang Zhao, Jiaya Jia
In contrast, instant merging methods often cause identity loss and interference among the individual merged concepts, and are usually limited to a small number of concepts.
2 code implementations • 9 Mar 2025 • Yuqi Liu, Bohao Peng, Zhisheng Zhong, Zihao Yue, Fanbin Lu, Bei Yu, Jiaya Jia
Traditional methods for reasoning segmentation rely on supervised fine-tuning with categorical labels and simple descriptions, limiting their out-of-domain generalization and lacking explicit reasoning processes.
no code implementations • 5 Mar 2025 • Mingkang Zhu, Xi Chen, Zhongdao Wang, Bei Yu, Hengshuang Zhao, Jiaya Jia
We found and verified that knowledge learning for LLMs can be viewed as an implicit supervised task hidden in the autoregressive pre-training objective.
no code implementations • 15 Feb 2025 • Haoyuan Wu, Haisheng Zheng, Zhuolun He, Bei Yu
Recently, with the development of tool-calling capabilities in large language models (LLMs), these models have demonstrated significant potential for automating electronic design automation (EDA) flows by interacting with EDA tool APIs via EDA scripts.
1 code implementation • 6 Feb 2025 • Zixiao Wang, Jieya Zhou, Su Zheng, Shuo Yin, Kaichao Liang, Shoubo Hu, Xiao Chen, Bei Yu
Recent decades have witnessed remarkable advancements in artificial intelligence (AI), including large language models (LLMs), image and video generative models, and embodied AI systems.
1 code implementation • 6 Feb 2025 • Zehua Pei, Lancheng Zou, Hui-Ling Zhen, Xianzhi Yu, Wulong Liu, Sinno Jialin Pan, Mingxuan Yuan, Bei Yu
Large language models (LLMs) achieve impressive performance by scaling model parameters, but this comes with significant inference overhead.
no code implementations • 4 Feb 2025 • Jun Wang, Bei Yu
The prevalence of drawing causal conclusions from observational studies has raised concerns about potential exaggeration in science communication.
no code implementations • 27 Dec 2024 • Shaoteng Liu, Tianyu Wang, Jui-Hsien Wang, Qing Liu, Zhifei Zhang, Joon-Young Lee, Yijun Li, Bei Yu, Zhe Lin, Soo Ye Kim, Jiaya Jia
Large-scale video generation models have the inherent ability to realistically model natural scenes.
no code implementations • 22 Dec 2024 • Bin Xia, Yuechen Zhang, Jingyao Li, Chengyao Wang, Yitong Wang, Xinglong Wu, Bei Yu, Jiaya Jia
We begin by analyzing existing frameworks and the requirements of downstream tasks, proposing a unified framework that integrates both T2I models and various editing tasks.
no code implementations • 18 Dec 2024 • Zixiao Wang, Junwu Weng, Mengyuan Liu, Bei Yu
We treat the representation of human pose joint coordinates as a skeleton image and transfer a pre-trained pose annotation generator with only a small amount of annotation guidance.
1 code implementation • 5 Dec 2024 • Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia
To address this, we introduce VisionZip, a simple yet effective method that selects a set of informative tokens for input to the language model, reducing visual token redundancy and improving efficiency while maintaining model performance.
Ranked #176 on Visual Question Answering on MM-Vet
no code implementations • 25 Nov 2024 • Yu Zhang, Mingzi Wang, Lancheng Zou, Wulong Liu, Hui-Ling Zhen, Mingxuan Yuan, Bei Yu
Transformer-based large language models (LLMs) have achieved remarkable success as model sizes continue to grow, yet their deployment remains challenging due to significant computational and memory demands.
1 code implementation • 21 Nov 2024 • Zehua Pei, Hui-Ling Zhen, Xianzhi Yu, Sinno Jialin Pan, Mingxuan Yuan, Bei Yu
In this paper, we propose FuseGPT, a novel methodology to recycle the pruned transformer blocks to further recover the model performance.
1 code implementation • 4 Sep 2024 • Xufeng Yao, Yiwen Wang, Xing Li, Yingzhao Lian, Ran Chen, Lei Chen, Mingxuan Yuan, Hong Xu, Bei Yu
Our comparative analysis with established compilers such as Yosys and E-graph demonstrates significant improvements, highlighting the benefits of integrating large models into the early stages of circuit design.
no code implementations • 23 Aug 2024 • Guojin Chen, HaoYu Yang, Bei Yu, Haoxing Ren
Advancements in chip design and manufacturing have enabled the processing of complex tasks such as deep learning and natural language processing, paving the way for the development of artificial general intelligence (AGI).
no code implementations • 16 Aug 2024 • Guojin Chen, HaoYu Yang, Haoxing Ren, Bei Yu, David Z. Pan
Optical proximity correction (OPC) is crucial for pushing the boundaries of semiconductor manufacturing and enabling the continued scaling of integrated circuits.
no code implementations • 22 Jul 2024 • Yuan Pu, Zhuolun He, Tairu Qiu, Haoyuan Wu, Bei Yu
Retrieval augmented generation (RAG) enhances the accuracy and reliability of generative AI models by sourcing factual information from external databases, which is extensively employed in document-grounded question-answering (QA) tasks.
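To make the retrieval-then-generation flow concrete, here is a minimal sketch of a document-grounded RAG loop, assuming a toy corpus and a placeholder `embed` function; none of these names or choices come from the paper, and a real system would call a sentence encoder and an LLM instead of the stubs used here.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real system would call a sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list, k: int = 3) -> list:
    """Return the k corpus passages most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_grounded_prompt(query: str, corpus: list) -> str:
    """Ground the prompt in retrieved passages before calling a generative model."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

corpus = ["Doc A about timing closure.", "Doc B about RAG pipelines.", "Doc C about QA."]
print(build_grounded_prompt("How does RAG ground answers?", corpus))  # prompt passed to an LLM
```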
1 code implementation • 11 Jun 2024 • Zixiao Wang, Jingwei Zhang, Wenqian Zhao, Farzan Farnia, Bei Yu
Our numerical results suggest that MoreauPruner is robust against weight perturbations and achieves competitive accuracy compared with several existing pruning methods.
1 code implementation • 7 Jun 2024 • Guojin Chen, Keren Zhu, Seunggeun Kim, Hanqing Zhu, Yao Lai, Bei Yu, David Z. Pan
Analog layout synthesis faces significant challenges due to its dependence on manual processes, considerable time requirements, and performance instability.
no code implementations • 15 Apr 2024 • Tong Qiao, Jianlei Yang, Yingjie Qi, Ao Zhou, Chen Bai, Bei Yu, Weisheng Zhao, Chunming Hu
Graph Neural Networks (GNNs) have recently achieved significant success in many applications.
no code implementations • 1 Apr 2024 • Xiaoxiao Liang, HaoYu Yang, Kang Liu, Bei Yu, Yuzhe Ma
Optical proximity correction (OPC) is a vital step to ensure printability in modern VLSI manufacturing.
no code implementations • 18 Mar 2024 • Xufeng Yao, Haoyang Li, Tsz Ho Chan, Wenyi Xiao, Mingxuan Yuan, Yu Huang, Lei Chen, Bei Yu
In the domain of chip design, Hardware Description Languages (HDLs) play a pivotal role.
no code implementations • 15 Mar 2024 • Zixiao Wang, Yunheng Shen, Xufeng Yao, Wenqian Zhao, Yang Bai, Farzan Farnia, Bei Yu
Existing works focus on fixed-size layout pattern generation, while the more practical free-size pattern generation receives limited attention.
no code implementations • 13 Mar 2024 • Yuyang Ye, Peng Xu, Lizheng Ren, Tinghuan Chen, Hao Yan, Bei Yu, Longxing Shi
Gate sizing plays an important role in timing optimization after physical design.
1 code implementation • CVPR 2024 • Jiequan Cui, Beier Zhu, Xin Wen, Xiaojuan Qi, Bei Yu, Hanwang Zhang
Second, with the proposed concept of Model Prediction Bias, we investigate the origins of problematic representation during optimization.
no code implementations • 19 Feb 2024 • Yu Zhang, Hui-Ling Zhen, Zehua Pei, Yingzhao Lian, Lihao Yin, Mingxuan Yuan, Bei Yu
In this paper, we propose a novel differential logic layer-aided language modeling (DiLA) approach, where logical constraints are integrated into the forward and backward passes of a network layer, to provide another option for LLM tool learning.
no code implementations • 3 Feb 2024 • Zehua Pei, Hui-Ling Zhen, Mingxuan Yuan, Yu Huang, Bei Yu
In this work, we propose a Verilog generation framework, BetterV, which fine-tunes the large language models (LLMs) on processed domain-specific datasets and incorporates generative discriminators for guidance on particular design demands.
2 code implementations • 5 Jan 2024 • Haoyuan Wu, Haisheng Zheng, Zhuolun He, Bei Yu
Using PESC during instruction tuning, our best sparse model outperforms other sparse and dense models and exhibits superior general capabilities compared to GPT-3.5.
Ranked #6 on Common Sense Reasoning on ARC (Easy)
1 code implementation • 17 Dec 2023 • Haoyuan Wu, Xinyun Zhang, Peng Xu, Peiyu Liao, Xufeng Yao, Bei Yu
In this paper, we present a novel modeling framework that recasts adapter tuning after attention as a graph message passing process on attention graphs, where the projected query and value features and attention matrix constitute the node features and the graph adjacency matrix, respectively.
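To make this framing concrete, here is a schematic numpy sketch under stated assumptions: the softmax attention matrix plays the role of a graph adjacency matrix, the projected value features act as node features, and a small bottleneck adapter is applied after the aggregation step; the dimensions and the adapter placement are illustrative, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 6, 8                               # toy sequence length and feature width

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

X  = rng.standard_normal((n_tokens, d))          # token features
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(d))                # attention matrix = graph adjacency

# One round of "message passing": each node aggregates the value features of its
# neighbours, weighted by the soft adjacency A.
messages = A @ V

# A small bottleneck adapter applied to the aggregated messages (placement here is
# a hypothetical illustration of adapter tuning after attention).
W_down = rng.standard_normal((d, 2))
W_up   = rng.standard_normal((2, d))
adapted = messages + np.maximum(messages @ W_down, 0.0) @ W_up   # residual adapter

print(adapted.shape)                             # (6, 8)
```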
no code implementations • 3 Nov 2023 • Jianlei Yang, Jiacheng Liao, Fanding Lei, Meichen Liu, Junyi Chen, Lingkun Long, Han Wan, Bei Yu, Weisheng Zhao
To the best of our knowledge, SparseEngine is the first deployment framework capable of performing inference of sparse transformer models on MCUs.
no code implementations • 18 Oct 2023 • Zixiao Wang, Farzan Farnia, Zhenghao Lin, Yunheng Shen, Bei Yu
On the other hand, several applications of generative models concern distributed settings, e.g., the federated learning setting, where the reference data for evaluation are provided by several clients in a network.
1 code implementation • 20 Aug 2023 • Zhuolun He, Haoyuan Wu, Xinyun Zhang, Xufeng Yao, Su Zheng, Haisheng Zheng, Bei Yu
The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers.
1 code implementation • 23 May 2023 • Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu
Neural architecture search (NAS) for Graph Neural Networks (GNNs), called NAS-GNNs, has achieved significant performance gains over manually designed GNN architectures.
4 code implementations • 23 May 2023 • Jiequan Cui, Zhuotao Tian, Zhisheng Zhong, Xiaojuan Qi, Bei Yu, Hanwang Zhang
In this paper, we delve deeper into the Kullback-Leibler (KL) Divergence loss and mathematically prove that it is equivalent to the Decoupled Kullback-Leibler (DKL) Divergence loss that consists of 1) a weighted Mean Square Error (wMSE) loss and 2) a Cross-Entropy loss incorporating soft labels.
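For reference, the standard KL divergence between a teacher distribution p^t and a student distribution p^s is given below, together with a schematic statement of the claimed decomposition; the weights α, β, w_k, the logits z^t, z^s, and the soft labels p̃ are placeholder notation for the terms defined in the paper, so this is indicative rather than the paper's exact derivation.

```latex
\mathrm{KL}\!\left(p^{t}\,\|\,p^{s}\right)
  = \sum_{k} p^{t}_{k}\,\log\frac{p^{t}_{k}}{p^{s}_{k}}
  \;\;\longrightarrow\;\;
  \underbrace{\alpha \sum_{k} w_{k}\,\bigl(z^{t}_{k}-z^{s}_{k}\bigr)^{2}}_{\text{weighted MSE (wMSE)}}
  \;+\;
  \underbrace{\beta\,\mathrm{CE}\bigl(\tilde{p},\,p^{s}\bigr)}_{\text{cross-entropy with soft labels}}
```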
no code implementations • 12 May 2023 • Xinyun Zhang, Haochen Tan, Han Wu, Bei Yu
To inject visual knowledge into PLMs, existing methods incorporate either the text or image encoder of vision-language models (VLMs) to encode the visual information and update all the original parameters of PLMs for knowledge fusion.
no code implementations • 25 Mar 2023 • Guojin Chen, HaoYu Yang, Bei Yu
Multiple patterning lithography (MPL) is regarded as one of the most promising ways of overcoming the resolution limitations of conventional optical lithography due to the delay of next-generation lithography technology.
no code implementations • 23 Mar 2023 • Zixiao Wang, Yunheng Shen, Wenqian Zhao, Yang Bai, Guojin Chen, Farzan Farnia, Bei Yu
Deep generative models dominate the existing literature in layout pattern generation.
no code implementations • 18 Mar 2023 • Guojin Chen, Ziyang Yu, Hongduo Liu, Yuzhe Ma, Bei Yu
To further enhance printability and fast iterative convergence, we propose a novel deep neural network carefully designed around intrinsic level-set principles to facilitate the joint optimization of the DNN and a GPU-accelerated level-set optimizer.
no code implementations • 16 Mar 2023 • Wenqian Zhao, Qi Sun, Yang Bai, Wenbo Li, Haisheng Zheng, Bei Yu, Martin D. F. Wong
Recent years have witnessed impressive progress in super-resolution (SR) processing.
no code implementations • 15 Mar 2023 • Guojin Chen, Zehua Pei, HaoYu Yang, Yuzhe Ma, Bei Yu, Martin D. F. Wong
Lithography is fundamental to integrated circuit fabrication but necessitates a large computational overhead.
no code implementations • 15 Mar 2023 • Wenqian Zhao, Xufeng Yao, Ziyang Yu, Guojin Chen, Yuzhe Ma, Bei Yu, Martin D. F. Wong
We inspect the pattern distribution on a design layer and find that different sub-regions have different pattern complexity.
no code implementations • ICCV 2023 • Wanli Chen, Xufeng Yao, Xinyun Zhang, Bei Yu
By modeling the pixel grid as a graph, they first adopt a GNN to predict the edge weights and then generate a minimum spanning tree (MST) based on the predictions, which is further used to construct the SFC.
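As an illustration of the MST step only, the sketch below builds a 4-connected pixel-grid graph and extracts a minimum spanning tree with Kruskal's algorithm; the random edge weights stand in for the GNN predictions, and the subsequent SFC construction is not shown.

```python
import numpy as np

def kruskal_mst(n_nodes, edges):
    """edges: list of (weight, u, v); returns the MST edge list via union-find."""
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):           # process edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                         # keep the edge only if it joins two components
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

H, W = 4, 4                                  # toy pixel grid
rng = np.random.default_rng(0)
node = lambda r, c: r * W + c

edges = []
for r in range(H):
    for c in range(W):
        if c + 1 < W:                        # horizontal grid edge
            edges.append((float(rng.random()), node(r, c), node(r, c + 1)))
        if r + 1 < H:                        # vertical grid edge
            edges.append((float(rng.random()), node(r, c), node(r + 1, c)))

mst = kruskal_mst(H * W, edges)
print(len(mst))                              # 15 edges for a 16-node spanning tree
```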
4 code implementations • 26 Sep 2022 • Jiequan Cui, Zhisheng Zhong, Zhuotao Tian, Shu Liu, Bei Yu, Jiaya Jia
Based on theoretical analysis, we observe that supervised contrastive loss tends to bias high-frequency classes and thus increases the difficulty of imbalanced learning.
Ranked #7 on Long-tail Learning on iNaturalist 2018
no code implementations • 15 Aug 2022 • Wei Li, Ruxuan Li, Yuzhe Ma, Siu On Chan, David Pan, Bei Yu
Graph coloring, a classical and critical NP-hard problem, is the problem of assigning colors to nodes so that adjacent nodes receive different colors while using as few colors as possible.
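As a concrete illustration of the constraint (adjacent nodes must receive different colors, with as few colors as feasible), here is a classical greedy-coloring baseline; it is not the GNN-based approach studied in the paper.

```python
def greedy_coloring(adjacency):
    """adjacency: dict mapping node -> set of neighbouring nodes.
    Assigns each node the smallest color index unused by its neighbours."""
    colors = {}
    for node in adjacency:                     # visiting order affects the color count
        used = {colors[nbr] for nbr in adjacency[node] if nbr in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# A 5-cycle (odd cycle) requires 3 colors; the greedy heuristic finds such a coloring here.
graph = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(greedy_coloring(graph))
```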
no code implementations • 4 Jul 2022 • Xiaogang Xu, Yitong Yu, Nianjuan Jiang, Jiangbo Lu, Bei Yu, Jiaya Jia
Moreover, we also propose a new video denoising framework, called Recurrent Video Denoising Transformer (RVDT), which can achieve SOTA performance on PVDD and other current video denoising benchmarks.
1 code implementation • 6 Apr 2022 • Yilun Chen, Shijia Huang, Shu Liu, Bei Yu, Jiaya Jia
First, to effectively lift the 2D information to stereo volume, we propose depth-wise plane sweeping (DPS) that allows denser connections and extracts depth-guided features.
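As a rough illustration of the plane-sweep idea (not the paper's exact DPS operator), the sketch below builds a stereo cost volume by shifting right-view features across candidate disparities and stacking them against the left-view features; the shapes and the concatenation scheme are assumptions.

```python
import numpy as np

def build_cost_volume(feat_left, feat_right, max_disp):
    """feat_*: (C, H, W) feature maps; returns a (2C, max_disp, H, W) volume
    by pairing left features with right features shifted by each candidate disparity."""
    C, H, W = feat_left.shape
    volume = np.zeros((2 * C, max_disp, H, W), dtype=feat_left.dtype)
    for d in range(max_disp):
        volume[:C, d, :, :] = feat_left
        if d == 0:
            volume[C:, d, :, :] = feat_right
        else:
            volume[C:, d, :, d:] = feat_right[:, :, :-d]   # shift right view by disparity d
    return volume

fl = np.random.default_rng(0).standard_normal((8, 16, 32))
fr = np.random.default_rng(1).standard_normal((8, 16, 32))
print(build_cost_volume(fl, fr, max_disp=4).shape)          # (16, 4, 16, 32)
```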
no code implementations • 29 Mar 2022 • Mingjun Li, Jianlei Yang, Yingjie Qi, Meng Dong, Yuhao Yang, Runze Liu, Weitao Pan, Bei Yu, Weisheng Zhao
In this paper, Eventor is proposed as a fast and efficient EMVS accelerator by realizing the most critical and time-consuming stages including event back-projection and volumetric ray-counting on FPGA.
1 code implementation • CVPR 2022 • Xufeng Yao, Yang Bai, Xinyun Zhang, Yuechen Zhang, Qi Sun, Ran Chen, Ruiyu Li, Bei Yu
Domain generalization refers to the problem of training a model from a collection of different source domains that can directly generalize to the unseen target domains.
Ranked #23 on Domain Generalization on PACS
1 code implementation • 28 Sep 2021 • Xiaoliu Luo, Zhuotao Tian, Taiping Zhang, Bei Yu, Yuan Yan Tang, Jiaya Jia
In this work, we revisit the prior mask guidance proposed in "Prior Guided Feature Enrichment Network for Few-Shot Segmentation".
no code implementations • 12 Aug 2021 • Xiaogang Xu, Yi Wang, LiWei Wang, Bei Yu, Jiaya Jia
To synthesize a realistic action sequence based on a single human image, it is crucial to model both motion patterns and diversity in the action video.
5 code implementations • ICCV 2021 • Jiequan Cui, Zhisheng Zhong, Shu Liu, Bei Yu, Jiaya Jia
In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition.
Ranked #14 on Long-tail Learning on iNaturalist 2018
1 code implementation • 14 Jul 2021 • Jun Wang, Bei Yu
Accurately linking news articles to scientific research works is a critical component in a number of applications, such as measuring the social impact of a research work and detecting inaccuracies or distortions in science news.
1 code implementation • NAACL 2021 • Jun Wang, Kelly Cui, Bei Yu
Prior studies have found that women self-promote less than men due to gender stereotypes.
no code implementations • 7 Mar 2021 • HaoYu Yang, Shuhe Li, Bei Yu
The activation of lower-layer capsules affects the behavior of the following capsules through routing links that are constructed during training by certain routing algorithms.
1 code implementation • 10 Jan 2021 • Guyue Huang, Jingbo Hu, Yifan He, Jialong Liu, Mingyuan Ma, Zhaoyang Shen, Juejian Wu, Yuanfan Xu, Hengrui Zhang, Kai Zhong, Xuefei Ning, Yuzhe Ma, HaoYu Yang, Bei Yu, Huazhong Yang, Yu Wang
With the down-scaling of CMOS technology, the design complexity of very large-scale integrated (VLSI) circuits is increasing.
no code implementations • ICCV 2021 • Qi Sun, Chen Bai, Tinghuan Chen, Hao Geng, Xinyun Zhang, Yang Bai, Bei Yu
Firstly, a deep Gaussian process (DGP) model is built on the historical data to learn empirical knowledge.
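A minimal sketch of this first step under loose assumptions: a plain scikit-learn Gaussian process regressor stands in for the deep Gaussian process, and the historical design data are synthetic placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder historical data: design-configuration features -> measured metric.
rng = np.random.default_rng(0)
X_hist = rng.uniform(0.0, 1.0, size=(20, 3))
y_hist = np.sin(3 * X_hist[:, 0]) + 0.1 * rng.standard_normal(20)

# Fit a GP surrogate on the historical data (a stand-in for the paper's deep GP).
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_hist, y_hist)

# Query the surrogate on new configurations: predictive mean and uncertainty.
X_new = rng.uniform(0.0, 1.0, size=(5, 3))
mean, std = gp.predict(X_new, return_std=True)
print(mean.shape, std.shape)
```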
1 code implementation • ICCV 2021 • RuiXing Wang, Xiaogang Xu, Chi-Wing Fu, Jiangbo Lu, Bei Yu, Jiaya Jia
Low-light video enhancement is an important task.
no code implementations • 1 Jan 2021 • Wei Li, Ruxuan Li, Yuzhe Ma, Siu On Chan, Bei Yu
To characterize the power of GNNs for the graph coloring problem, we first formalize the discrimination power of GNNs as the capability to assign nodes different colors.
1 code implementation • COLING 2020 • Bei Yu, Jun Wang, Lu Guo, Yingya Li
By comparing the claims made in a press release with the corresponding claims in the original research paper, we found that 22% of press releases made exaggerated causal claims from correlational findings in observational studies.
no code implementations • ECCV 2020 • Wanli Chen, Xinge Zhu, Ruoqi Sun, Junjun He, Ruiyu Li, Xiaoyong Shen, Bei Yu
Then we use these rank-1 tensors to recover the high-rank context features through our proposed tensor reconstruction module (TRM).
no code implementations • ECCV 2020 • Ran Chen, Yong Liu, Mengdan Zhang, Shu Liu, Bei Yu, Yu-Wing Tai
Anchor-free methods have defined the new frontier in state-of-the-art object detection research, where accurate bounding box estimation is the key to their success.
no code implementations • 8 Jul 2020 • Haocheng Li, Satwik Patnaik, Abhrajit Sengupta, Hao-Yu Yang, Johann Knechtel, Bei Yu, Evangeline F. Y. Young, Ozgur Sinanoglu
The notion of integrated circuit split manufacturing, which delegates the front-end-of-line (FEOL) and back-end-of-line (BEOL) parts to different foundries, is to prevent overproduction, piracy of the intellectual property (IP), or targeted insertion of hardware Trojans by adversaries in the FEOL facility.
no code implementations • 16 Dec 2019 • Haoyu Yang, Wei Zhong, Yuzhe Ma, Hao Geng, Ran Chen, Wanli Chen, Bei Yu
VLSI mask optimization is one of the most critical stages in manufacturability-aware design, and it is costly due to complicated mask optimization and lithography simulation.
1 code implementation • 12 Dec 2019 • Haoyu Yang, Wen Chen, Piyush Pathak, Frank Gennari, Ya-Chieh Lai, Bei Yu
The tool currently supports both metal layer and via layer generation.
no code implementations • IJCNLP 2019 • Bei Yu, Yingya Li, Jun Wang
We then applied the prediction model to measure the causal language use in the research conclusions of about 38,000 observational studies in PubMed.
no code implementations • 25 Jun 2019 • Kang Liu, Hao-Yu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, Siddharth Garg
There is substantial interest in the use of machine learning (ML) based techniques throughout the electronic computer-aided design (CAD) flow, particularly those based on deep learning.
no code implementations • 27 Dec 2018 • Husheng Zhou, Wei Li, Yuankun Zhu, Yuqun Zhang, Bei Yu, Lingming Zhang, Cong Liu
Furthermore, DeepBillboard is sufficiently robust and resilient for generating physical-world adversarial billboard tests for real-world driving under various weather conditions.
no code implementations • 1 Sep 2018 • Xiaowei Xu, Xinyi Zhang, Bei Yu, X. Sharon Hu, Christopher Rowen, Jingtong Hu, Yiyu Shi
The 55th Design Automation Conference (DAC) held its first System Design Contest (SDC) in 2018.
no code implementations • COLING 2018 • Shi Yuan, Bei Yu
This study evaluates the performance of four information extraction tools (extractors) on identifying health claims in health news headlines.
no code implementations • 26 Jul 2018 • Yuzhe Ma, Ran Chen, Wei Li, Fanhua Shang, Wenjian Yu, Minsik Cho, Bei Yu
To address this issue, various approximation techniques have been investigated, which seek a lightweight network with little performance degradation in exchange for a smaller model size or faster inference.
no code implementations • 23 Jul 2018 • Qianru Zhang, Meng Zhang, Tinghuan Chen, Zhifei Sun, Yuzhe Ma, Bei Yu
We propose a taxonomy of acceleration methods in terms of three levels, i.e., the structure level, algorithm level, and implementation level.
1 code implementation • 18 Jul 2018 • Yuzhe Ma, Subhendu Roy, Jin Miao, Jiamin Chen, Bei Yu
Despite the maturity of modern electronic design automation (EDA) tools, designs optimized at the architectural stage may become sub-optimal after going through the physical design flow.
no code implementations • 13 Jul 2018 • Haoyu Yang, Shuhe Li, Cyrus Tabery, Bingqing Lin, Bei Yu
Layout hotspot detection is one of the main steps in modern VLSI design.
no code implementations • WS 2017 • Yingya Li, Jieke Zhang, Bei Yu
The discrepancy between science and the media has been affecting the effectiveness of science communication.