Search Results for author: Yongqiang Zhao

Found 15 papers, 3 papers with code

Full-Time Monocular Road Detection Using Zero-Distribution Prior of Angle of Polarization

1 code implementation • ECCV 2020 • Ning Li, Yongqiang Zhao, Quan Pan, Seong G. Kong, Jonathan Cheung-Wai Chan

The zero-distribution prior embodies the zero-distribution of the Angle of Polarization (AoP) of a road scene image, which provides significant contrast between the road and the background.
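
Since the prior is defined on AoP, it may help to recall how AoP is conventionally computed from the Stokes parameters. Below is a minimal NumPy sketch assuming four intensity images captured behind polarizers at 0°, 45°, 90°, and 135°; the function and its inputs are illustrative, not the paper's code.

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Per-pixel AoP from intensities behind 0/45/90/135-degree polarizers."""
    s1 = i0.astype(np.float64) - i90    # Stokes parameter S1
    s2 = i45.astype(np.float64) - i135  # Stokes parameter S2
    return 0.5 * np.arctan2(s2, s1)     # AoP in radians, in (-pi/2, pi/2]
```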

Autonomous Navigation

Exploring LLM-based Data Annotation Strategies for Medical Dialogue Preference Alignment

no code implementations • 5 Oct 2024 • Chengfeng Dou, Ying Zhang, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao, Zhengwei Tao

We argue that the primary challenges in current RLAIF research for healthcare are the limitations of automated evaluation methods and the difficulties in accurately representing physician preferences.

Inter and Intra Prior Learning-based Hyperspectral Image Reconstruction Using Snapshot SWIR Metasurface

no code implementations • 10 Jul 2024 • Linqiang Li, Jinglei Hao, Yongqiang Zhao, Pan Liu, Haofang Yan, Ziqin Zhang, Seong G. Kong

Shortwave-infrared (SWIR) spectral information, ranging from 1 μm to 2.5 μm, overcomes the limitations of traditional color cameras in acquiring scene information.

Decoder • Image Reconstruction

Integrating Physician Diagnostic Logic into Large Language Models: Preference Learning from Process Feedback

1 code implementation • 11 Jan 2024 • Chengfeng Dou, Zhi Jin, Wenpin Jiao, Haiyan Zhao, Yongqiang Zhao, Zhenwei Tao

The use of large language models in medical dialogue generation has garnered significant attention, with a focus on improving response quality and fluency.

Dialogue Generation

Enhancing the Spatial Awareness Capability of Multi-Modal Large Language Model

no code implementations • 31 Oct 2023 • Yongqiang Zhao, Zhenyu Li, Zhi Jin, Feng Zhang, Haiyan Zhao, Chengfeng Dou, Zhengwei Tao, Xinhai Xu, Donghong Liu

The Multi-Modal Large Language Model (MLLM) refers to an extension of the Large Language Model (LLM) equipped with the capability to receive and infer multi-modal data.

Autonomous Driving • Language Modelling +1

Multi-granularity Backprojection Transformer for Remote Sensing Image Super-Resolution

no code implementations • 19 Oct 2023 • Jinglei Hao, Wukai Li, Binglu Wang, Shunzhou Wang, Yuting Lu, Ning Li, Yongqiang Zhao

Backprojection networks have achieved promising super-resolution performance for natural images but have not been well explored in the remote sensing image super-resolution (RSISR) field due to high computation costs.
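
For context, the back-projection idea iteratively refines upsampled features with the low-resolution reconstruction error. A minimal PyTorch sketch of a DBPN-style up-projection unit follows; the layer sizes and scale handling are illustrative assumptions, not this paper's multi-granularity transformer.

```python
import torch.nn as nn

class UpProjection(nn.Module):
    """DBPN-style up-projection: upsample, project back, correct with the error."""

    def __init__(self, channels, scale=2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2  # kernel/stride/padding for `scale`
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)

    def forward(self, lr):
        hr0 = self.up1(lr)               # initial HR features
        lr0 = self.down(hr0)             # back-project to LR space
        residual = lr0 - lr              # LR-space reconstruction error
        return hr0 + self.up2(residual)  # correct HR features with the error
```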

Image Reconstruction • Image Super-Resolution

Enhancing Subtask Performance of Multi-modal Large Language Model

no code implementations • 31 Aug 2023 • Yongqiang Zhao, Zhenyu Li, Feng Zhang, Xinhai Xu, Donghong Liu

Finally, the results from multiple pre-trained models for the same subtask are compared using the LLM, and the best result is chosen as the outcome for that subtask.
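A rough sketch of that selection step is given below, with `llm` as a hypothetical text-in/text-out callable; none of the names are from the paper.

```python
def select_best(llm, subtask: str, candidates: dict[str, str]) -> str:
    """Ask an LLM to pick the best candidate output for a subtask."""
    listing = "\n".join(f"[{name}] {out}" for name, out in candidates.items())
    prompt = (
        f"Subtask: {subtask}\n"
        f"Candidate results:\n{listing}\n"
        "Reply with only the bracketed name of the single best result."
    )
    choice = llm(prompt).strip().strip("[]")
    # Fall back to the first candidate if the LLM's reply is unrecognized.
    return candidates.get(choice, next(iter(candidates.values())))
```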

Language Modelling • Large Language Model

CORE: Cooperative Reconstruction for Multi-Agent Perception

1 code implementation • ICCV 2023 • Binglu Wang, Lei Zhang, Zhaozhong Wang, Yongqiang Zhao, Tianfei Zhou

This paper presents CORE, a conceptually simple, effective and communication-efficient model for multi-agent cooperative perception.

3D Object Detection • object-detection +1

Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution

no code implementations • 6 Jul 2023 • Yuting Lu, Lingtong Min, Binglu Wang, Le Zheng, Xiaoxu Wang, Yongqiang Zhao, Teng Long

The model incorporates cross-spatial pixel integration attention (CSPIA) to introduce contextual information into a local window, while cross-stage feature fusion attention (CSFFA) adaptively fuses features from the previous stage to improve feature expression in line with the requirements of the current stage.
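One simple way to read "adaptively fuses features from the previous stage" is a learned per-pixel gate; the sketch below is an assumption in that spirit, not the paper's CSFFA module.

```python
import torch
import torch.nn as nn

class CrossStageFusion(nn.Module):
    """Gate previous-stage features before adding them to the current stage."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, current, previous):
        g = self.gate(torch.cat([current, previous], dim=1))  # per-pixel weights
        return current + g * previous                         # adaptive fusion
```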

Image Super-Resolution

Unsupervised Spectral Demosaicing with Lightweight Spectral Attention Networks

no code implementations • 5 Jul 2023 • Kai Feng, Yongqiang Zhao, Seong G. Kong, Haijin Zeng

This paper presents a deep learning-based spectral demosaicing technique trained in an unsupervised manner.

Benchmarking • Demosaicking

PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning

no code implementations • 19 May 2023 • Chengfeng Dou, Zhi Jin, Wenping Jiao, Haiyan Zhao, Zhenwei Tao, Yongqiang Zhao

PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhance LLMs' dialogue strategies and improve the specificity of the dialogue.
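
A rough sketch of such a PG-then-RR pipeline appears below; the retriever, ranker, and sampling count are hypothetical stand-ins, not PlugMed's actual interfaces.

```python
def plugmed_style_reply(llm, retriever, ranker, dialogue_history, n_samples=5):
    """PG: build an in-context prompt from similar dialogues; RR: rank replies."""
    exemplars = retriever(dialogue_history, k=3)          # PG: retrieve dialogues
    prompt = "\n\n".join(exemplars) + "\n\n" + dialogue_history
    candidates = [llm(prompt) for _ in range(n_samples)]  # sample several replies
    return max(candidates, key=ranker)                    # RR: keep the best-ranked
```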

Common Sense Reasoning • Dialogue Generation +2

Inheriting Bayer's Legacy-Joint Remosaicing and Denoising for Quad Bayer Image Sensor

no code implementations • 23 Mar 2023 • Haijin Zeng, Kai Feng, JieZhang Cao, Shaoguang Huang, Yongqiang Zhao, Hiep Luong, Jan Aelterman, Wilfried Philips

DJRD includes a newly designed Quad Bayer remosaicing (QB-Re) block and integrated denoising modules based on the Swin Transformer and a multi-scale wavelet transform.

Denoising

Learning Pixel-Adaptive Weights for Portrait Photo Retouching

no code implementations • 7 Dec 2021 • Binglu Wang, Chengzhe Lu, Dawei Yan, Yongqiang Zhao

Secondly, as neighboring pixels exhibit different affinities to the center pixel, we estimate a local attention mask to modulate the influence of neighboring pixels.
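A minimal PyTorch sketch of a learned local attention mask over a small neighborhood follows; the unfold-based windowing and 1×1 mask predictor are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalAttentionMask(nn.Module):
    """Reweight each pixel's neighbors with a predicted per-position mask."""

    def __init__(self, channels, window=3):
        super().__init__()
        self.window = window
        self.to_mask = nn.Conv2d(channels, window * window, 1)  # one weight per neighbor

    def forward(self, feat):
        b, c, h, w = feat.shape
        mask = torch.softmax(self.to_mask(feat), dim=1)            # (B, k*k, H, W)
        neigh = F.unfold(feat, self.window, padding=self.window // 2)
        neigh = neigh.view(b, c, self.window ** 2, h, w)           # gather neighbors
        return (neigh * mask.unsqueeze(1)).sum(dim=2)              # weighted sum
```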

Photo Retouching

Real-time division-of-focal-plane polarization imaging system with progressive networks

no code implementations • 26 Oct 2021 • Rongyuan Wu, Yongqiang Zhao, Ning Li, Seong G. Kong

Division-of-focal-plane (DoFP) polarization imaging techniques have recently been applied in many fields.

Demosaicking

DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification

no code implementations • ACL 2020 • Lianwei Wu, Yuan Rao, Yongqiang Zhao, Hao Liang, Ambreen Nazir

Meanwhile, the discovered evidence only roughly addresses the interpretability of the whole sequence of claims and is insufficient to focus on the false parts of claims.

Claim Verification
