Search Results for author: Peng Shu

Found 28 papers, 4 papers with code

Target-driven Self-Distillation for Partial Observed Trajectories Forecasting

no code implementations · 28 Jan 2025 · Pengfei Zhu, Peng Shu, Mengshi Qi, Liang Liu, Huadong Ma

This involves first training a fully observed model and then using a distillation process to create the final model.

Knowledge Distillation · Motion Forecasting
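The two-stage recipe above (train a teacher on fully observed trajectories, then distill it into a student that only sees partial observations) can be sketched with a standard soft-target distillation loss. The function names, temperature value, and KL form below are illustrative assumptions following common knowledge-distillation practice, not the paper's actual objective:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    The teacher stands in for a model trained on fully observed trajectories;
    the student only sees partial observations. The T^2 factor is the usual
    scaling from Hinton-style knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 0.5, -1.0]
# A student that matches the teacher exactly incurs zero loss;
# any mismatch yields a positive KL penalty.
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss([0.1, 0.2, 0.3], teacher) > 0)  # True
```

In practice the student would also carry a task loss on the partial observations, with the distillation term acting as a regularizer toward the teacher's predictions.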

QueEn: A Large Language Model for Quechua-English Translation

no code implementations · 6 Dec 2024 · JunHao Chen, Peng Shu, Yiwei Li, Huaqin Zhao, Hanqi Jiang, Yi Pan, Yifan Zhou, Zhengliang Liu, Lewis C Howe, Tianming Liu

Recent studies show that large language models (LLMs) are powerful tools for working with natural language, bringing advances in many areas of computational linguistics.

Computational Efficiency · Language Modeling +4

OracleSage: Towards Unified Visual-Linguistic Understanding of Oracle Bone Scripts through Cross-Modal Knowledge Fusion

no code implementations · 26 Nov 2024 · Hanqi Jiang, Yi Pan, JunHao Chen, Zhengliang Liu, Yifan Zhou, Peng Shu, Yiwei Li, Huaqin Zhao, Stephen Mihm, Lewis C Howe, Tianming Liu

Oracle bone script (OBS), China's earliest mature writing system, presents significant challenges for automatic recognition due to its complex pictographic structures and divergence from modern Chinese characters.

Transcending Language Boundaries: Harnessing LLMs for Low-Resource Language Translation

no code implementations · 18 Nov 2024 · Peng Shu, JunHao Chen, Zhengliang Liu, Hui Wang, Zihao Wu, Tianyang Zhong, Yiwei Li, Huaqin Zhao, Hanqi Jiang, Yi Pan, Yifan Zhou, Constance Owl, Xiaoming Zhai, Ninghao Liu, Claudio Saunt, Tianming Liu

Our comparison with the zero-shot performance of GPT-4o and LLaMA 3.1 405B highlights the significant challenges these models face when translating into low-resource languages.

Retrieval · Translation

Legal Evalutions and Challenges of Large Language Models

no code implementations · 15 Nov 2024 · Jiaqi Wang, Huan Zhao, Zhenyuan Yang, Peng Shu, JunHao Chen, Haobo Sun, Ruixi Liang, Shixin Li, Pengcheng Shi, Longjun Ma, Zongjia Liu, Zhengliang Liu, Tianyang Zhong, Yutong Zhang, Chong Ma, Xin Zhang, Tuo Zhang, Tianli Ding, Yudan Ren, Tianming Liu, Xi Jiang, Shu Zhang

In this paper, we review legal testing methods based on Large Language Models (LLMs), using the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.

Legal Reasoning

Large Language Models for Manufacturing

no code implementations · 28 Oct 2024 · Yiwei Li, Huaqin Zhao, Hanqi Jiang, Yi Pan, Zhengliang Liu, Zihao Wu, Peng Shu, Jie Tian, Tianze Yang, Shaochen Xu, Yanjun Lyu, Parker Blenk, Jacob Pence, Jason Rupram, Eliza Banu, Ninghao Liu, Linbing Wang, WenZhan Song, Xiaoming Zhai, Kenan Song, Dajiang Zhu, Beiwen Li, Xianqiao Wang, Tianming Liu

The rapid advances in Large Language Models (LLMs) have the potential to transform the manufacturing industry, offering new opportunities to optimize processes, improve efficiency, and drive innovation.

Retrieval Instead of Fine-tuning: A Retrieval-based Parameter Ensemble for Zero-shot Learning

no code implementations · 13 Oct 2024 · Pengfei Jin, Peng Shu, Sekeun Kim, Qing Xiao, Sifan Song, Cheng Chen, Tianming Liu, Xiang Li, Quanzheng Li

Foundation models have become a cornerstone in deep learning, with techniques like Low-Rank Adaptation (LoRA) offering efficient fine-tuning of large models.

Computational Efficiency · Image Segmentation +5
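As background for the LoRA mention above, here is a minimal sketch of how a low-rank update adapts a frozen weight matrix. The shapes, scaling factor `alpha`, and zero-initialized `B` follow the common LoRA convention; all names are illustrative and not taken from the paper:

```python
import random

def matmul(A, B):
    """Naive matrix multiply for lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x (W + alpha * A B): frozen base weight W plus a low-rank update.

    A is d_in x r and B is r x d_out with r << min(d_in, d_out), so only
    r * (d_in + d_out) parameters are trained instead of d_in * d_out.
    """
    delta = matmul(A, B)  # rank-r update to the weight matrix
    W_eff = [[w + alpha * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)

d_in, r, d_out = 4, 1, 3
W = [[0.1 * (i + j) for j in range(d_out)] for i in range(d_in)]  # frozen
A = [[random.random() for _ in range(r)] for _ in range(d_in)]    # trainable
B = [[0.0] * d_out for _ in range(r)]  # B starts at zero, as in LoRA
x = [[1.0, 2.0, 3.0, 4.0]]

# With B zero-initialized, the adapted model reproduces the base model
# exactly at the start of fine-tuning.
assert lora_forward(x, W, A, B) == matmul(x, W)
```

The retrieval-based ensemble idea in the paper would operate over a library of such low-rank adapters rather than training a new one per task.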

EG-SpikeFormer: Eye-Gaze Guided Transformer on Spiking Neural Networks for Medical Image Analysis

no code implementations · 12 Oct 2024 · Yi Pan, Hanqi Jiang, JunHao Chen, Yiwei Li, Huaqin Zhao, Yifan Zhou, Peng Shu, Zihao Wu, Zhengliang Liu, Dajiang Zhu, Xiang Li, Yohannes Abate, Tianming Liu

Neuromorphic computing has emerged as a promising energy-efficient alternative to traditional artificial intelligence, predominantly utilizing spiking neural networks (SNNs) implemented on neuromorphic hardware.

Image Classification · Medical Image Analysis +1

Examining the Commitments and Difficulties Inherent in Multimodal Foundation Models for Street View Imagery

no code implementations · 23 Aug 2024 · Zhenyuan Yang, Xuhui Lin, Qinyi He, Ziye Huang, Zhengliang Liu, Hanqi Jiang, Peng Shu, Zihao Wu, Yiwei Li, Stephen Law, Gengchen Mai, Tianming Liu, Tao Yang

The emergence of Large Language Models (LLMs) and multimodal foundation models (FMs) has generated heightened interest in their applications that integrate vision and language.

Question Answering · Zero-Shot Learning

MGH Radiology Llama: A Llama 3 70B Model for Radiology

no code implementations · 13 Aug 2024 · Yucheng Shi, Peng Shu, Zhengliang Liu, Zihao Wu, Quanzheng Li, Tianming Liu, Ninghao Liu, Xiang Li

In recent years, the field of radiology has increasingly harnessed the power of artificial intelligence (AI) to enhance diagnostic accuracy, streamline workflows, and improve patient care.

Language Modeling · Language Modelling +1

Reasoning before Comparison: LLM-Enhanced Semantic Similarity Metrics for Domain Specialized Text Analysis

no code implementations · 17 Feb 2024 · Shaochen Xu, Zihao Wu, Huaqin Zhao, Peng Shu, Zhengliang Liu, Wenxiong Liao, Sheng Li, Andrea Sikora, Tianming Liu, Xiang Li

In this study, we leverage LLMs to enhance semantic analysis and develop similarity metrics for texts, addressing the limitations of traditional unsupervised NLP metrics like ROUGE and BLEU.

Semantic Similarity · Semantic Textual Similarity +1
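The ROUGE/BLEU limitation the abstract refers to is lexical: overlap-based metrics score paraphrases low even when meaning is preserved. A toy unigram-overlap F1 (a simplified stand-in for ROUGE-1, not the paper's metric) makes this concrete:

```python
def rouge1_f(candidate, reference):
    """Unigram-overlap F1, a simplified stand-in for ROUGE-1."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    if overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Two semantically equivalent sentences with little lexical overlap
# score poorly -- the gap that LLM-based semantic metrics target.
score = rouge1_f("the medication reduced blood pressure",
                 "the drug lowered hypertension")
print(score)  # only "the" overlaps, so the score is low (~0.22)
```

An LLM-based metric would instead compare the two sentences at the level of meaning, for example by reasoning over them before scoring, which is the direction this paper pursues.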

LLMs for Coding and Robotics Education

no code implementations · 9 Feb 2024 · Peng Shu, Huaqin Zhao, Hanqi Jiang, Yiwei Li, Shaochen Xu, Yi Pan, Zihao Wu, Zhengliang Liu, Guoyu Lu, Le Guan, Gong Chen, Xianqiao Wang, Tianming Liu

Large language models are being utilized for robot code explanation, generation, and modification to teach young children how to code and compete in robot challenges.

Code Generation · Explanation Generation

Assessing Large Language Models in Mechanical Engineering Education: A Study on Mechanics-Focused Conceptual Understanding

no code implementations · 13 Jan 2024 · Jie Tian, Jixin Hou, Zihao Wu, Peng Shu, Zhengliang Liu, Yujie Xiang, Beikang Gu, Nicholas Filla, Yiwei Li, Ning Liu, Xianyan Chen, Keke Tang, Tianming Liu, Xianqiao Wang

This study is a pioneering endeavor to investigate the capabilities of Large Language Models (LLMs) in addressing conceptual questions within the domain of mechanical engineering with a focus on mechanics.

Multiple-choice · Prompt Engineering

Large Language Models for Robotics: Opportunities, Challenges, and Perspectives

no code implementations · 9 Jan 2024 · Jiaqi Wang, Zihao Wu, Yiwei Li, Hanqi Jiang, Peng Shu, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Huaqin Zhao, Zhengliang Liu, Haixing Dai, Lin Zhao, Bao Ge, Xiang Li, Tianming Liu, Shu Zhang

Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions.

Task Planning

DeID-GPT: Zero-shot Medical Text De-Identification by GPT-4

1 code implementation · 20 Mar 2023 · Zhengliang Liu, Yue Huang, Xiaowei Yu, Lu Zhang, Zihao Wu, Chao Cao, Haixing Dai, Lin Zhao, Yiwei Li, Peng Shu, Fang Zeng, Lichao Sun, Wei Liu, Dinggang Shen, Quanzheng Li, Tianming Liu, Dajiang Zhu, Xiang Li

The digitization of healthcare has facilitated the sharing and re-using of medical data but has also raised concerns about confidentiality and privacy.

Benchmarking · De-identification +4

Domain-Constrained Advertising Keyword Generation

no code implementations · 27 Feb 2019 · Hao Zhou, Minlie Huang, Yishun Mao, Changlei Zhu, Peng Shu, Xiaoyan Zhu

Second, the inefficient ad impression issue: a large proportion of search queries, which are unpopular yet relevant to many ad keywords, have no ads presented on their search result pages.

Reinforcement Learning · Retrieval

Skipping Word: A Character-Sequential Representation based Framework for Question Answering

no code implementations · 2 Sep 2016 · Lingxun Meng, Yan Li, Mengyi Liu, Peng Shu

Recent works using artificial neural networks based on distributed word representations greatly boost the performance of various natural language learning tasks, especially question answering.

Answer Selection
