Search Results for author: XiaoMing Zhang

Found 13 papers, 5 papers with code

FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection

no code implementations · 30 Mar 2024 · Ziyi Zhou, XiaoMing Zhang, Litian Zhang, Jiacheng Liu, Xi Zhang, Chaozhuo Li

Existing benchmarks for fake news detection have contributed significantly to advancing models that assess the authenticity of news content.

Domain Adaptation · Fake News Detection

MixMobileNet: A Mixed Mobile Network for Edge Vision Applications

1 code implementation · Electronics 2024 · Yanju Meng, Peng Wu, Jian Feng, XiaoMing Zhang

For the global branch, we propose the global-feature aggregation encoder (GFAE), which employs a pooling strategy and computes the covariance matrix between channels rather than across the spatial dimensions, reducing the computational complexity from quadratic to linear and accelerating model inference.

Image Classification · Inductive Bias · +2
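The abstract's complexity claim can be made concrete: a C × C covariance across channels costs O(C²N) in the number of spatial positions N, whereas spatial self-attention costs O(N²C). Below is a minimal PyTorch sketch of channel-wise covariance attention in that spirit; the function name and details are illustrative assumptions, not the paper's actual GFAE.

```python
import torch

def channel_covariance_attention(x):
    """Hypothetical sketch of channel-wise covariance attention
    (not the paper's exact GFAE). x: (B, C, H, W)."""
    b, c, h, w = x.shape
    n = h * w
    feat = x.flatten(2)                    # (B, C, N)
    feat = feat - feat.mean(dim=2, keepdim=True)
    cov = feat @ feat.transpose(1, 2) / n  # (B, C, C); O(C^2 * N), linear in N
    attn = torch.softmax(cov, dim=-1)      # channel-to-channel weights
    out = attn @ feat                      # (B, C, N)
    return out.reshape(b, c, h, w)
```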

Enhancing Cognitive Diagnosis using Un-interacted Exercises: A Collaboration-aware Mixed Sampling Approach

no code implementations · 15 Dec 2023 · Haiping Ma, Changqian Wang, HengShu Zhu, Shangshang Yang, XiaoMing Zhang, Xingyi Zhang

Finally, we demonstrate the effectiveness and interpretability of our framework through comprehensive experiments on real-world datasets.

Cognitive Diagnosis

Deep learning acceleration of iterative model-based light fluence correction for photoacoustic tomography

no code implementations · 4 Dec 2023 · Zhaoyong Liang, Shuangyang Zhang, Zhichao Liang, Zhongxin Mo, XiaoMing Zhang, Yutian Zhong, Wufan Chen, Li Qi

Photoacoustic tomography (PAT) is a promising imaging technique that can visualize the distribution of chromophores within biological tissue.

Hi-ResNet: A High-Resolution Remote Sensing Network for Semantic Segmentation

no code implementations · 22 May 2023 · Yuxia Chen, Pengcheng Fang, Jianhui Yu, Xiaoling Zhong, XiaoMing Zhang, Tianrui Li

In this work, we address the above problems by proposing a high-resolution remote sensing network (Hi-ResNet) with an efficient structure design: a funnel module, a multi-branch module built from stacks of information aggregation (IA) blocks, and a feature refinement module applied in sequence, trained with a class-agnostic edge-aware (CEA) loss.

Semantic Segmentation
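As a rough illustration of the described pipeline (funnel module → stacked IA blocks → feature refinement, with the CEA loss applied separately), here is a minimal PyTorch skeleton; every module body is a placeholder stand-in, not the paper's actual layers, and upsampling and the loss are omitted.

```python
import torch.nn as nn

class HiResNetSketch(nn.Module):
    """Placeholder skeleton of the funnel -> multi-branch IA -> refinement
    pipeline described in the abstract; internals are assumptions."""
    def __init__(self, in_ch=3, width=64, num_classes=6):
        super().__init__()
        # Funnel module: early downsampling stem (assumed form).
        self.funnel = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True))
        # Stand-in for the multi-branch module of stacked IA blocks.
        self.body = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1),
                          nn.BatchNorm2d(width), nn.ReLU(inplace=True))
            for _ in range(4)])
        # Feature refinement reduced here to a 1x1 classification head.
        self.refine = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):  # logits at half resolution; upsampling omitted
        return self.refine(self.body(self.funnel(x)))
```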

CISum: Learning Cross-modality Interaction to Enhance Multimodal Semantic Coverage for Multimodal Summarization

no code implementations · 20 Feb 2023 · Litian Zhang, XiaoMing Zhang, Ziming Guo, Zhipeng Liu

Then, the visual description and text content are fused to generate a textual summary that captures the semantics of the multimodal content, and the most relevant image is selected as the visual summary.
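The image-selection step described above is commonly implemented as relevance ranking between the generated summary and the candidate images. A minimal sketch under that assumption (cosine similarity in a shared embedding space; the function is hypothetical, not CISum's actual scorer):

```python
import torch
import torch.nn.functional as F

def select_visual_summary(summary_emb: torch.Tensor, image_embs: torch.Tensor) -> int:
    """Pick the candidate image most relevant to the textual summary.
    summary_emb: (D,), image_embs: (K, D) in a shared embedding space."""
    sims = F.cosine_similarity(summary_emb.unsqueeze(0), image_embs, dim=1)  # (K,)
    return int(sims.argmax())
```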

Boosting Single Image Super-Resolution via Partial Channel Shifting

1 code implementation · ICCV 2023 · XiaoMing Zhang, Tianrui Li, Xiaole Zhao

Specifically, inspired by temporal shifting in video understanding, it displaces part of the channels along the spatial dimensions, amplifying the effective receptive field and augmenting feature diversity at almost zero cost.

Image Super-Resolution · Video Understanding
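The shifting operation itself is simple enough to sketch. Assuming a TSM-style split where a fraction of channels is rolled one pixel along each spatial direction (the paper's exact grouping and padding may differ; torch.roll wraps around, whereas a zero-fill variant is also plausible):

```python
import torch

def partial_channel_shift(x: torch.Tensor, ratio: float = 0.25, shift: int = 1):
    """Shift a fraction of channels along H and W; remaining channels stay put.
    A sketch inspired by the abstract, not the paper's exact PCS. x: (B, C, H, W)."""
    b, c, h, w = x.shape
    g = int(c * ratio) // 4          # channels per direction (assumed 4-way split)
    out = x.clone()
    out[:, 0*g:1*g] = torch.roll(x[:, 0*g:1*g], shifts=shift,  dims=2)  # down
    out[:, 1*g:2*g] = torch.roll(x[:, 1*g:2*g], shifts=-shift, dims=2)  # up
    out[:, 2*g:3*g] = torch.roll(x[:, 2*g:3*g], shifts=shift,  dims=3)  # right
    out[:, 3*g:4*g] = torch.roll(x[:, 3*g:4*g], shifts=-shift, dims=3)  # left
    return out
```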

Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization

no code implementations · 16 Dec 2021 · Litian Zhang, XiaoMing Zhang, Junshu Pan, Feiran Huang

In this paper, we propose a hierarchical cross-modality semantic correlation learning model (HCSCL) to learn the intra- and inter-modal correlations present in multimodal data.
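Inter-modal correlation objectives of this kind are often instantiated as a contrastive alignment loss over paired text and image embeddings. A generic InfoNCE-style sketch follows; this is a common stand-in, not HCSCL's actual hierarchical objective:

```python
import torch
import torch.nn.functional as F

def inter_modal_alignment_loss(text_emb, image_emb, tau=0.07):
    """Generic InfoNCE alignment over row-wise paired embeddings
    (illustrative only). text_emb, image_emb: (B, D)."""
    t = F.normalize(text_emb, dim=1)
    v = F.normalize(image_emb, dim=1)
    logits = t @ v.t() / tau                            # (B, B) similarities
    targets = torch.arange(t.size(0), device=t.device)  # matched pairs on diagonal
    return F.cross_entropy(logits, targets)
```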

Group-based Interleaved Pipeline Parallelism for Large-scale DNN Training

1 code implementation · ICLR 2022 · Pengcheng Yang, XiaoMing Zhang, Wenpeng Zhang, Ming Yang, Hong Wei

The recent trend of using large-scale deep neural networks (DNNs) to boost performance has propelled the development of pipeline parallelism for efficient DNN training, giving rise to several prominent systems such as GPipe, PipeDream, and PipeDream-2BW.

K-XLNet: A General Method for Combining Explicit Knowledge with Language Model Pretraining

no code implementations · 25 Mar 2021 · Ruiqing Yan, Lanchang Sun, Fang Wang, XiaoMing Zhang

Though pre-trained language models such as BERT and XLNet have rapidly advanced the state of the art on many NLP tasks, they capture semantics only implicitly, relying on surface-level information between words in the corpus.

Common Sense Reasoning · Language Modelling
