Search Results for author: JunHao Chen

Found 31 papers, 13 papers with code

Jailbreaking? One Step Is Enough!

no code implementations17 Dec 2024 Weixiong Zheng, Peijian Zeng, Yiwei Li, Hongyan Wu, Nankai Lin, JunHao Chen, Aimin Yang, Yongmei Zhou

Specifically, REDA starts from the target response, guiding the model to embed harmful content within its defensive measures, thereby relegating harmful content to a secondary role and making the model believe it is performing a defensive task.

In-Context Learning

QueEn: A Large Language Model for Quechua-English Translation

no code implementations6 Dec 2024 JunHao Chen, Peng Shu, Yiwei Li, Huaqin Zhao, Hanqi Jiang, Yi Pan, Yifan Zhou, Zhengliang Liu, Lewis C Howe, Tianming Liu

Recent studies show that large language models (LLMs) are powerful tools for working with natural language, bringing advances in many areas of computational linguistics.

Computational Efficiency Language Modeling +4

OracleSage: Towards Unified Visual-Linguistic Understanding of Oracle Bone Scripts through Cross-Modal Knowledge Fusion

no code implementations26 Nov 2024 Hanqi Jiang, Yi Pan, JunHao Chen, Zhengliang Liu, Yifan Zhou, Peng Shu, Yiwei Li, Huaqin Zhao, Stephen Mihm, Lewis C Howe, Tianming Liu

Oracle bone script (OBS), as China's earliest mature writing system, presents significant challenges in automatic recognition due to its complex pictographic structures and divergence from modern Chinese characters.

Transcending Language Boundaries: Harnessing LLMs for Low-Resource Language Translation

no code implementations18 Nov 2024 Peng Shu, JunHao Chen, Zhengliang Liu, Hui Wang, Zihao Wu, Tianyang Zhong, Yiwei Li, Huaqin Zhao, Hanqi Jiang, Yi Pan, Yifan Zhou, Constance Owl, Xiaoming Zhai, Ninghao Liu, Claudio Saunt, Tianming Liu

Our comparison with the zero-shot performance of GPT-4o and LLaMA 3.1 405B highlights the significant challenges these models face when translating into low-resource languages.

Retrieval Translation

Legal Evalutions and Challenges of Large Language Models

no code implementations15 Nov 2024 Jiaqi Wang, Huan Zhao, Zhenyuan Yang, Peng Shu, JunHao Chen, Haobo Sun, Ruixi Liang, Shixin Li, Pengcheng Shi, Longjun Ma, Zongjia Liu, Zhengliang Liu, Tianyang Zhong, Yutong Zhang, Chong Ma, Xin Zhang, Tuo Zhang, Tianli Ding, Yudan Ren, Tianming Liu, Xi Jiang, Shu Zhang

In this paper, we review legal testing methods based on Large Language Models (LLMs), using the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.

Legal Reasoning

ReLayout: Towards Real-World Document Understanding via Layout-enhanced Pre-training

no code implementations14 Oct 2024 Zhouqiang Jiang, Bowen Wang, JunHao Chen, Yuta Nakashima

Recent approaches for visually-rich document understanding (VrDU) use manually annotated semantic groups, where a semantic group encompasses all semantically relevant but not obviously grouped words.

document understanding Optical Character Recognition (OCR)

EG-SpikeFormer: Eye-Gaze Guided Transformer on Spiking Neural Networks for Medical Image Analysis

no code implementations12 Oct 2024 Yi Pan, Hanqi Jiang, JunHao Chen, Yiwei Li, Huaqin Zhao, Yifan Zhou, Peng Shu, Zihao Wu, Zhengliang Liu, Dajiang Zhu, Xiang Li, Yohannes Abate, Tianming Liu

Neuromorphic computing has emerged as a promising energy-efficient alternative to traditional artificial intelligence, predominantly utilizing spiking neural networks (SNNs) implemented on neuromorphic hardware.

Image Classification Medical Image Analysis +1

IW-Bench: Evaluating Large Multimodal Models for Converting Image-to-Web

no code implementations14 Sep 2024 Hongcheng Guo, Wei zhang, JunHao Chen, Yaonan Gu, Jian Yang, Junjia Du, Binyuan Hui, Tianyu Liu, Jianxin Ma, Chang Zhou, Zhoujun Li

We have conducted extensive experiments on existing large multimodal models, offering insights into their performance and areas for improvement in the image-to-web domain.

Image Comprehension

Putting People in LLMs' Shoes: Generating Better Answers via Question Rewriter

1 code implementation20 Aug 2024 JunHao Chen, Bowen Wang, Zhouqiang Jiang, Yuta Nakashima

By enhancing the intelligibility of human questions for black-box LLMs, our question rewriter improves the quality of generated answers.

Long Form Question Answering

DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models

1 code implementation4 Aug 2024 Bowen Wang, Jiuyang Chang, Yiming Qian, Guoxin Chen, JunHao Chen, Zhouqiang Jiang, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara

Large language models (LLMs) have recently showcased remarkable capabilities, spanning a wide range of tasks and applications, including those in the medical domain.

Diagnostic Question Answering

States Hidden in Hidden States: LLMs Emerge Discrete State Representations Implicitly

no code implementations16 Jul 2024 JunHao Chen, Shengding Hu, Zhiyuan Liu, Maosong Sun

Our work presents a novel exploration of LLMs' symbolic calculation abilities and the underlying mechanisms.

Varying Manifolds in Diffusion: From Time-varying Geometries to Visual Saliency

no code implementations7 Jun 2024 JunHao Chen, Manyi Li, Zherong Pan, Xifeng Gao, Changhe Tu

Our key contribution is the introduction of generation rate, which corresponds to the local deformation of manifold over time around an image component.

Image Manipulation

Idea23D: Collaborative LMM Agents Enable 3D Model Generation from Interleaved Multimodal Inputs

1 code implementation5 Apr 2024 JunHao Chen, Xiang Li, Xiaojun Ye, Chao Li, Zhaoxin Fan, Hao Zhao

Recently, this success has been extended to 3D AIGC, with state-of-the-art methods generating textured 3D models from single images or text.

3D Generation Image to 3D +1

Ultraman: Single Image 3D Human Reconstruction with Ultra Speed and Detail

1 code implementation18 Mar 2024 Mingjin Chen, JunHao Chen, Xiaojun Ye, Huan-ang Gao, Xiaoxue Chen, Zhaoxin Fan, Hao Zhao

In this paper, we propose a new method called \emph{Ultraman} for fast reconstruction of textured 3D human models from a single image.

Lifelike 3D Human Generation

$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens

4 code implementations21 Feb 2024 Xinrong Zhang, Yingfa Chen, Shengding Hu, Zihang Xu, JunHao Chen, Moo Khai Hao, Xu Han, Zhen Leng Thai, Shuo Wang, Zhiyuan Liu, Maosong Sun

Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction.

Fine-Grained Stateful Knowledge Exploration: A Novel Paradigm for Integrating Knowledge Graphs with Large Language Models

1 code implementation24 Jan 2024 Dehao Tao, Congqi Wang, Feng Huang, JunHao Chen, Yongfeng Huang, Minghu Jiang

Most existing methods use a paradigm that treats the question as the objective, with relevant knowledge being incrementally retrieved from the knowledge graph.

Knowledge Base Question Answering Knowledge Graphs +1
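The question-as-objective paradigm this paper contrasts itself with can be sketched as iterative retrieval: starting from a seed entity, expand along knowledge-graph relations hop by hop, accumulating triples until one satisfies the question. The sketch below is a hypothetical minimal version, not the paper's actual algorithm; the graph, function names, and stopping predicate are all illustrative assumptions.

```python
# Minimal sketch of incremental knowledge-graph retrieval: breadth-first
# expansion over (head, relation, tail) triples, stopping once a triple
# satisfies the question's answer predicate.
from collections import deque

def retrieve(kg, seed, is_answer, max_hops=3):
    """Expand outward from `seed`, collecting triples until `is_answer` fires."""
    frontier = deque([(seed, 0)])
    visited = {seed}
    collected = []
    while frontier:
        entity, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for relation, tail in kg.get(entity, []):
            triple = (entity, relation, tail)
            collected.append(triple)
            if is_answer(triple):
                return collected  # objective met: stop exploring
            if tail not in visited:
                visited.add(tail)
                frontier.append((tail, hops + 1))
    return collected

# Toy graph: adjacency lists of (relation, tail) pairs.
kg = {
    "Paris": [("capital_of", "France")],
    "France": [("continent", "Europe")],
}
# "Which continent?" needs two hops from the seed entity.
path = retrieve(kg, "Paris", lambda t: t[1] == "continent")
```

The collected triples would then be handed to an LLM as retrieved context; the paper's "stateful" refinement of this loop is not reproduced here.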

MaxQ: Multi-Axis Query for N:M Sparsity Network

1 code implementation CVPR 2024 Jingyang Xiang, Siqi Li, JunHao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong Liu

Meanwhile, a sparsity strategy that gradually increases the percentage of N:M weight blocks is applied, which allows the network to heal from the pruning-induced damage progressively.

Image Classification Instance Segmentation +3
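The abstract's two ingredients, N:M block sparsity and a schedule that ramps up the fraction of sparsified blocks, can be illustrated with a small sketch. This is a generic illustration of N:M pruning under assumed N=2, M=4, not the MaxQ method itself.

```python
# Illustrative 2:4 structured sparsity: within each block of m=4 weights,
# keep the n=2 largest-magnitude entries and zero out the rest.
def nm_sparsify(weights, n=2, m=4):
    """Zero all but the n largest-magnitude values in each block of m."""
    out = list(weights)
    for start in range(0, len(out), m):
        block = out[start:start + m]
        # indices of the n largest-magnitude entries within this block
        keep = sorted(range(len(block)), key=lambda i: -abs(block[i]))[:n]
        for i in range(len(block)):
            if i not in keep:
                out[start + i] = 0.0
    return out

def gradual_nm_sparsify(weights, fraction, n=2, m=4):
    """Apply N:M sparsity to only the first `fraction` of blocks,
    mimicking a schedule that ramps sparsity up during training."""
    num_blocks = (len(weights) + m - 1) // m
    cutoff = int(num_blocks * fraction) * m
    return nm_sparsify(weights[:cutoff], n, m) + list(weights[cutoff:])

w = [0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.01, 0.3]
full = nm_sparsify(w)               # every block pruned to its 2 largest magnitudes
partial = gradual_nm_sparsify(w, 0.5)  # only the first half of blocks pruned
```

Ramping `fraction` from 0 to 1 over training steps gives the gradual schedule the abstract describes, letting the network recover between pruning increments.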

Asca: less audio data is more insightful

1 code implementation23 Sep 2023 Xiang Li, JunHao Chen, Chao Li, Hongwu Lv

Audio recognition in specialized areas such as birdsong and submarine acoustics faces challenges in large-scale pre-training due to the limitations in available samples imposed by sampling environments and specificity requirements.

Specificity

ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large Language Models

no code implementations28 Aug 2023 Baoli Zhang, Haining Xie, Pengfan Du, JunHao Chen, Pengfei Cao, Yubo Chen, Shengping Liu, Kang Liu, Jun Zhao

To this end, we propose the ZhuJiu benchmark, which has the following strengths: (1) Multi-dimensional ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions covering 51 tasks.

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning

1 code implementation23 Aug 2023 Feiyu Zhang, Liangzhi Li, JunHao Chen, Zhouqiang Jiang, Bowen Wang, Yiming Qian

This approach is different from the pruning method as it is not limited by the initial number of training parameters, and each parameter matrix has a higher rank upper bound for the same training overhead.

parameter-efficient fine-tuning
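The core idea the abstract hints at, a low-rank update whose rank is not fixed up front but grows by allocating new components, can be sketched as follows. This is a hypothetical minimal illustration of incremental rank allocation for an update of the form B @ A, not the IncreLoRA implementation; all class and method names are assumptions.

```python
# Sketch of an incrementally-growing low-rank update W + B @ A:
# B is d x r and A is r x d, with r starting at 0 and growing by
# appending a column to B and a row to A per allocation step.
def matmul(B, A):
    """Multiply a (d x r) list-of-rows B by an (r x d) list-of-rows A."""
    r = len(A)
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(len(A[0]))]
            for i in range(len(B))]

class IncrementalLowRank:
    def __init__(self, d):
        self.d = d
        self.B = [[] for _ in range(d)]  # d x r, starts with r = 0
        self.A = []                      # r x d

    @property
    def rank(self):
        return len(self.A)

    def grow(self, b_col, a_row):
        """Allocate one more rank-1 component to this parameter matrix."""
        for i in range(self.d):
            self.B[i].append(b_col[i])
        self.A.append(a_row)

    def delta(self):
        """Current low-rank update B @ A (all zeros while rank is 0)."""
        if self.rank == 0:
            return [[0.0] * self.d for _ in range(self.d)]
        return matmul(self.B, self.A)

lr = IncrementalLowRank(d=2)
lr.grow([1.0, 0.0], [0.0, 2.0])  # rank grows 0 -> 1; delta is an outer product
```

Because components are added rather than pruned away, the rank upper bound per matrix is set by how many allocations it receives, which matches the abstract's contrast with pruning-based methods.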

Data-driven multinomial random forest

no code implementations9 Apr 2023 JunHao Chen, Xueli Wang

In this article, we strengthen the proof methods of some previously weakly consistent variants of random forests into strongly consistent proof methods, and improve the data utilization of these variants, in order to obtain better theoretical properties and experimental performance.

Data-driven multinomial random forest: A new random forest variant with strong consistency

no code implementations28 Nov 2022 JunHao Chen

In this paper, we modify the proof methods of some previously weakly consistent variants of random forests into strongly consistent proof methods, and improve the data utilization of these variants in order to obtain better theoretical properties and experimental performance.
