Search Results for author: XiaoMing Zhang

Found 25 papers, 11 papers with code

Refining Interactions: Enhancing Anisotropy in Graph Neural Networks with Language Semantics

no code implementations • 2 Apr 2025 • Zhaoxing Li, XiaoMing Zhang, Haifeng Zhang, Chengxiang Liu

The integration of Large Language Models (LLMs) with Graph Neural Networks (GNNs) has recently been explored to enhance learning on Text-Attributed Graphs (TAGs).

Attribute Graph Neural Network

Collaborative Evolution: Multi-Round Learning Between Large and Small Language Models for Emergent Fake News Detection

no code implementations • 27 Mar 2025 • Ziyi Zhou, XiaoMing Zhang, Shenghan Tan, Litian Zhang, Chaozhuo Li

The proliferation of fake news on social media platforms has had a substantial and harmful impact on society.

Fake News Detection • In-Context Learning

Asteroid shape inversion with light curves using deep learning

no code implementations • 23 Feb 2025 • YiJun Tang, ChenChen Ying, ChengZhe Xia, XiaoMing Zhang, XiaoJun Jiang

In addition, we used 3D point clouds to represent asteroid shapes and utilized the deviation between the light curves of non-convex asteroids and their convex hulls to predict the concave areas of non-convex asteroids.
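For a sense of the deviation cue mentioned above, here is a toy NumPy sketch; the light curves are synthetic stand-ins (the paper renders them from 3D point-cloud shapes), so this illustrates the signal rather than the authors' pipeline.

```python
import numpy as np

# Toy stand-ins for rendered light curves sampled over one rotation; the real
# pipeline renders brightness from 3D point-cloud shapes, which is omitted here.
phase = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
lc_convex_hull = 1.0 + 0.10 * np.sin(2.0 * phase)                               # convex-hull model
lc_nonconvex = lc_convex_hull - 0.04 * np.clip(np.sin(5.0 * phase), 0.0, None)  # concavities dim the body

# The deviation between the two curves is the cue used to locate concave regions;
# in the paper it is fed to a deep model rather than thresholded directly.
deviation = lc_convex_hull - lc_nonconvex
print(deviation.max(), np.argmax(deviation))  # strongest dimming and its rotation-phase bin
```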

Deep Learning

Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems

no code implementations • 20 Feb 2025 • Bingyu Yan, XiaoMing Zhang, Litian Zhang, Lian Zhang, Ziyi Zhou, Dezhuang Miao, Chaozhuo Li

Large Language Models (LLMs) have recently demonstrated remarkable capabilities in reasoning, planning, and decision-making.

Decision Making • Survey

Hierarchical Retrieval-Augmented Generation Model with Rethink for Multi-hop Question Answering

1 code implementation • 20 Aug 2024 • XiaoMing Zhang, Ming Wang, Xiaocui Yang, Daling Wang, Shi Feng, Yifei Zhang

Multi-hop Question Answering (QA) necessitates complex reasoning by integrating multiple pieces of information to resolve intricate questions.

Multi-hop Question Answering • Question Answering • +2

Efficient Single Image Super-Resolution with Entropy Attention and Receptive Field Augmentation

no code implementations • 8 Aug 2024 • Xiaole Zhao, Linze Li, Chengxing Xie, XiaoMing Zhang, Ting Jiang, Wenjie Lin, Shuaicheng Liu, Tianrui Li

Transformer-based deep models for single image super-resolution (SISR) have greatly improved the performance of lightweight SISR tasks in recent years.

Image Super-Resolution

Diff-Shadow: Global-guided Diffusion Model for Shadow Removal

1 code implementation • 23 Jul 2024 • Jinting Luo, Ru Li, Chengzhi Jiang, XiaoMing Zhang, Mingyan Han, Ting Jiang, Haoqiang Fan, Shuaicheng Liu

Specifically, we propose a parallel UNets architecture: 1) the local branch performs the patch-based noise estimation in the diffusion process, and 2) the global branch recovers the low-resolution shadow-free images.
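As a rough sketch of the two-branch layout described in this excerpt, the PyTorch code below wires a patch-based local branch and a low-resolution global branch in parallel; `TinyUNet`, the patch size, and the missing fusion step are simplifications for illustration, not the Diff-Shadow implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Stand-in for a real denoising UNet (kept tiny for illustration)."""
    def __init__(self, ch=3, width=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(width, ch, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

class ParallelBranches(nn.Module):
    """Two-branch layout: patch-level noise estimation + low-resolution global branch."""
    def __init__(self, patch=64, down=4):
        super().__init__()
        self.patch, self.down = patch, down
        self.local_branch = TinyUNet()   # operates on patches (diffusion noise estimate)
        self.global_branch = TinyUNet()  # operates on a downsampled full image

    def forward(self, x):
        b, c, h, w = x.shape
        p = self.patch
        # Local branch: split into non-overlapping patches, denoise, reassemble.
        patches = x.unfold(2, p, p).unfold(3, p, p)                 # (B, C, nH, nW, p, p)
        patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, p, p)
        noise = self.local_branch(patches)
        noise = noise.view(b, h // p, w // p, c, p, p)
        noise = noise.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
        # Global branch: estimate a low-resolution shadow-free image, upsample as guidance.
        low = F.interpolate(x, scale_factor=1.0 / self.down, mode='bilinear', align_corners=False)
        guide = F.interpolate(self.global_branch(low), size=(h, w),
                              mode='bilinear', align_corners=False)
        return noise, guide  # how the two outputs are fused is paper-specific

noise_est, global_guide = ParallelBranches()(torch.randn(1, 3, 128, 128))
```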

Noise Estimation • Shadow Removal

Large Kernel Distillation Network for Efficient Single Image Super-Resolution

1 code implementation • 19 Jul 2024 • Chengxing Xie, XiaoMing Zhang, Linze Li, Haiteng Meng, Tianlin Zhang, Tianrui Li, Xiaole Zhao

Efficient and lightweight single-image super-resolution (SISR) has achieved remarkable performance in recent years.

Image Super-Resolution

Improved Esophageal Varices Assessment from Non-Contrast CT Scans

no code implementations • 18 Jul 2024 • Chunli Li, XiaoMing Zhang, Yuan Gao, Xiaoli Yin, Le Lu, Ling Zhang, Ke Yan, Yu Shi

Esophageal varices (EV), a serious health concern resulting from portal hypertension, are traditionally diagnosed through invasive endoscopic procedures.

Diagnostic

LIDIA: Precise Liver Tumor Diagnosis on Multi-Phase Contrast-Enhanced CT via Iterative Fusion and Asymmetric Contrastive Learning

no code implementations • 18 Jul 2024 • Wei Huang, Wei Liu, XiaoMing Zhang, Xiaoli Yin, Xu Han, Chunli Li, Yuan Gao, Yu Shi, Le Lu, Ling Zhang, Lei Zhang, Ke Yan

The early detection and precise diagnosis of liver tumors are tasks of critical clinical value, yet they pose significant challenges due to the high heterogeneity and variability of liver tumors.

Contrastive Learning

FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection

1 code implementation • 30 Mar 2024 • Ziyi Zhou, XiaoMing Zhang, Litian Zhang, Jiacheng Liu, Senzhang Wang, Zheng Liu, Xi Zhang, Chaozhuo Li, Philip S. Yu

Existing benchmarks for fake news detection have significantly contributed to the advancement of models in assessing the authenticity of news content.

Domain Adaptation • Fake News Detection

MixMobileNet: A Mixed Mobile Network for Edge Vision Applications

1 code implementation • Electronics 2024 • Yanju Meng, Peng Wu, Jian Feng, XiaoMing Zhang

For the global pathway, we propose the global-feature aggregation encoder (GFAE), which employs a pooling strategy and computes the covariance matrix between channels instead of over the spatial dimensions, reducing the computational complexity from quadratic to linear and thereby accelerating model inference.
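A minimal sketch of the channel-covariance idea, assuming PyTorch; the block name and projection layer are placeholders rather than the GFAE as published, but it shows why covariance over C channels stays linear in the number of pixels while spatial self-attention is quadratic.

```python
import torch
import torch.nn as nn

class ChannelCovarianceBlock(nn.Module):
    """Global context from a C x C channel covariance matrix.

    Attention over spatial positions costs O((H*W)^2); the covariance here is
    O(C^2 * H * W), i.e. linear in the number of pixels."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                              # (B, C, H*W)
        feat = feat - feat.mean(dim=2, keepdim=True)     # center per channel
        cov = feat @ feat.transpose(1, 2) / (h * w - 1)  # (B, C, C)
        weights = torch.softmax(cov, dim=-1)
        out = (weights @ feat).view(b, c, h, w)          # mix channels by covariance stats
        return x + self.proj(out)                        # residual connection

y = ChannelCovarianceBlock(64)(torch.randn(2, 64, 56, 56))
print(y.shape)  # torch.Size([2, 64, 56, 56])
```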

Image Classification • +3

Enhancing Cognitive Diagnosis using Un-interacted Exercises: A Collaboration-aware Mixed Sampling Approach

no code implementations • 15 Dec 2023 • Haiping Ma, Changqian Wang, HengShu Zhu, Shangshang Yang, XiaoMing Zhang, Xingyi Zhang

Finally, we demonstrate the effectiveness and interpretability of our framework through comprehensive experiments on real-world datasets.

Cognitive Diagnosis

Deep learning acceleration of iterative model-based light fluence correction for photoacoustic tomography

no code implementations • 4 Dec 2023 • Zhaoyong Liang, Shuangyang Zhang, Zhichao Liang, Zhongxin Mo, XiaoMing Zhang, Yutian Zhong, Wufan Chen, Li Qi

Photoacoustic tomography (PAT) is a promising imaging technique that can visualize the distribution of chromophores within biological tissue.

Hi-ResNet: Edge Detail Enhancement for High-Resolution Remote Sensing Segmentation

no code implementations • 22 May 2023 • Yuxia Chen, Pengcheng Fang, Jianhui Yu, Xiaoling Zhong, XiaoMing Zhang, Tianrui Li

In this work, we address the above problems by proposing a High-resolution remote sensing network (Hi-ResNet) with an efficient network structure design, consisting sequentially of a funnel module, a multi-branch module with stacks of information aggregation (IA) blocks, and a feature refinement module, together with a Class-agnostic Edge Aware (CEA) loss.

Semantic Segmentation

CISum: Learning Cross-modality Interaction to Enhance Multimodal Semantic Coverage for Multimodal Summarization

no code implementations • 20 Feb 2023 • Litian Zhang, XiaoMing Zhang, Ziming Guo, Zhipeng Liu

Then, the visual description and text content are fused to generate the textual summary to capture the semantics of the multimodal content, and the most relevant image is selected as the visual summary.

Boosting Single Image Super-Resolution via Partial Channel Shifting

1 code implementation • ICCV 2023 • XiaoMing Zhang, Tianrui Li, Xiaole Zhao

Specifically, it is inspired by the temporal shifting in video understanding and displaces part of the channels along the spatial dimensions, thus allowing the effective receptive field to be amplified and the feature diversity to be augmented at almost zero cost.
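The shifting operation can be illustrated in a few lines; the sketch below assumes PyTorch and arbitrary shift/fraction values, and uses circular rolls for brevity rather than whatever padding the paper's partial channel shifting actually uses.

```python
import torch

def partial_channel_shift(x, shift=1, frac=0.25):
    """Shift a fraction of channels along H and W; the rest pass through untouched.

    x: (B, C, H, W) feature map. Displacing a channel subset spatially enlarges the
    effective receptive field at essentially zero parameter/FLOP cost. Circular
    rolls are used here for brevity; zero-padded shifts are an equally valid choice."""
    c = x.shape[1]
    g = int(c * frac) // 4                 # channels per shift direction
    out = x.clone()
    out[:, 0 * g:1 * g] = torch.roll(x[:, 0 * g:1 * g], shifts=shift, dims=2)   # down
    out[:, 1 * g:2 * g] = torch.roll(x[:, 1 * g:2 * g], shifts=-shift, dims=2)  # up
    out[:, 2 * g:3 * g] = torch.roll(x[:, 2 * g:3 * g], shifts=shift, dims=3)   # right
    out[:, 3 * g:4 * g] = torch.roll(x[:, 3 * g:4 * g], shifts=-shift, dims=3)  # left
    return out                             # remaining (C - 4g) channels are unshifted

shifted = partial_channel_shift(torch.randn(1, 64, 48, 48))
```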

Diversity • Image Super-Resolution • +1

Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization

no code implementations • 16 Dec 2021 • Litian Zhang, XiaoMing Zhang, Junshu Pan, Feiran Huang

In this paper, we propose a hierarchical cross-modality semantic correlation learning model (HCSCL) to learn the intra- and inter-modal correlation existing in the multimodal data.

Diversity

Group-based Interleaved Pipeline Parallelism for Large-scale DNN Training

1 code implementation • ICLR 2022 • Pengcheng Yang, XiaoMing Zhang, Wenpeng Zhang, Ming Yang, Hong Wei

The recent trend of using large-scale deep neural networks (DNNs) to boost performance has propelled the development of pipeline parallelism for efficient DNN training, giving rise to several prominent systems such as GPipe, PipeDream, and PipeDream-2BW.

K-XLNet: A General Method for Combining Explicit Knowledge with Language Model Pretraining

no code implementations • 25 Mar 2021 • Ruiqing Yan, Lanchang Sun, Fang Wang, XiaoMing Zhang

Though pre-trained language models such as BERT and XLNet have rapidly advanced the state of the art on many NLP tasks, they capture only implicit semantics, relying on surface-level information between words in the corpus.

Common Sense Reasoning • Language Modeling • +1
