Search Results for author: Chia-Yuan Chang

Found 14 papers, 3 papers with code

Learning to Compress Prompt in Natural Language Formats

no code implementations28 Feb 2024 Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu

In this work, we propose a Natural Language Prompt Encapsulation (Nano-Capsulator) framework that compresses original prompts into NL-formatted Capsule Prompts while maintaining prompt utility and transferability.

Large Language Models As Faithful Explainers

no code implementations7 Feb 2024 Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Fan Yang, Mengnan Du, Xuanting Cai, Xia Hu

In this work, we introduce a generative explanation framework, xLLM, to improve the faithfulness of the explanations provided in natural language formats for LLMs.

Decision Making

LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning

2 code implementations2 Jan 2024 Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu

To achieve this goal, we propose SelfExtend, which extends the context window of LLMs by constructing bi-level attention information: grouped attention and neighbor attention.
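The bi-level idea can be illustrated with a small sketch: nearby tokens keep their exact relative positions (neighbor attention), while distant tokens have their positions floor-divided by a group size so the model only ever sees relative positions it encountered during pretraining (grouped attention). This is an illustrative reconstruction, not the paper's code; the names `group_size`, `neighbor_window`, and the exact shift term are assumptions.

```python
def self_extend_rel_pos(q_pos: int, k_pos: int,
                        group_size: int = 4,
                        neighbor_window: int = 512) -> int:
    """Relative position used for attention between a query at q_pos
    and a key at k_pos (causal setting: k_pos <= q_pos)."""
    dist = q_pos - k_pos
    if dist < neighbor_window:
        # Neighbor attention: exact relative positions for nearby tokens.
        return dist
    # Grouped attention: floor-divide absolute positions so distant tokens
    # map onto relative positions seen during pretraining; the shift keeps
    # grouped positions from colliding with the neighbor-attention range.
    shift = neighbor_window - neighbor_window // group_size
    return q_pos // group_size - k_pos // group_size + shift
```

With these toy settings, a token 2000 positions away is attended to at a much smaller effective relative position than 2000, which is the mechanism that lets the context window grow without fine-tuning.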

LETA: Learning Transferable Attribution for Generic Vision Explainer

no code implementations23 Dec 2023 Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, Shaochen Zhong, Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Xia Hu

To address this problem, we develop a pre-trained, DNN-based, generic explainer on large-scale image datasets, and leverage its transferability to explain various vision models for downstream tasks.

CODA: Temporal Domain Generalization via Concept Drift Simulator

no code implementations2 Oct 2023 Chia-Yuan Chang, Yu-Neng Chuang, Zhimeng Jiang, Kwei-Herng Lai, Anxiao Jiang, Na Zou

In real-world applications, machine learning models often become obsolete due to shifts in the joint distribution arising from underlying temporal trends, a phenomenon known as "concept drift".

Domain Generalization, Feature Correlation

GrowLength: Accelerating LLMs Pretraining by Progressively Growing Training Length

no code implementations1 Oct 2023 Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Chia-Yuan Chang, Xia Hu

Our method progressively increases the training length throughout the pretraining phase, thereby mitigating computational costs and enhancing efficiency.
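A progressive-length schedule of this kind can be sketched as a simple step function from optimizer step to training sequence length; the stage boundaries and lengths below are illustrative placeholders, not the paper's actual schedule.

```python
def growlength_schedule(step: int,
                        stages=((0, 128), (10_000, 512), (30_000, 2048))):
    """Return the training sequence length for a given optimizer step.

    `stages` is a sequence of (start_step, seq_len) pairs in increasing
    order. Shorter sequences early in pretraining reduce per-step compute
    (attention cost grows with sequence length), and the length grows in
    stages until the full context window is reached.
    """
    seq_len = stages[0][1]
    for start, length in stages:
        if step >= start:
            seq_len = length
    return seq_len
```

In a training loop, the data loader would truncate or pack each batch to `growlength_schedule(step)` tokens before the forward pass.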

DiscoverPath: A Knowledge Refinement and Retrieval System for Interdisciplinarity on Biomedical Research

1 code implementation4 Sep 2023 Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Kwei-Herng Lai, Daochen Zha, Ruixiang Tang, Fan Yang, Alfredo Costilla Reyes, Kaixiong Zhou, Xiaoqian Jiang, Xia Hu

The exponential growth in scholarly publications necessitates advanced tools for efficient article retrieval, especially in interdisciplinary fields where diverse terminologies are used to describe similar research.

Named Entity Recognition

DISPEL: Domain Generalization via Domain-Specific Liberating

no code implementations14 Jul 2023 Chia-Yuan Chang, Yu-Neng Chuang, Guanchu Wang, Mengnan Du, Na Zou

Domain generalization aims to learn a model that performs well on unseen test domains by training only on limited source domains.

Domain Generalization

Towards Assumption-free Bias Mitigation

no code implementations9 Jul 2023 Chia-Yuan Chang, Yu-Neng Chuang, Kwei-Herng Lai, Xiaotian Han, Xia Hu, Na Zou

Existing studies face challenges: either inaccurate prediction of sensitive attributes, or the need to mitigate an unequal distribution of manually defined non-sensitive attributes related to bias.


Mitigating Relational Bias on Knowledge Graphs

no code implementations26 Nov 2022 Yu-Neng Chuang, Kwei-Herng Lai, Ruixiang Tang, Mengnan Du, Chia-Yuan Chang, Na Zou, Xia Hu

Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning.

Graph Representation Learning, Knowledge Graphs

Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture

no code implementations27 May 2022 Yicheng Wang, Xiaotian Han, Chia-Yuan Chang, Daochen Zha, Ulisses Braga-Neto, Xia Hu

Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.

Hyperparameter Optimization, Neural Architecture Search

EPSNet: Efficient Panoptic Segmentation Network with Cross-layer Attention Fusion

1 code implementation23 Mar 2020 Chia-Yuan Chang, Shuo-En Chang, Pei-Yung Hsiao, Li-Chen Fu

In this work, we propose an Efficient Panoptic Segmentation Network (EPSNet) to tackle the panoptic segmentation tasks with fast inference speed.

Instance Segmentation, Panoptic Segmentation
