Search Results for author: Xiang Liu

Found 82 papers, 25 papers with code

Integrating Generative Adversarial Networks and Convolutional Neural Networks for Enhanced Traffic Accidents Detection and Analysis

no code implementations19 Jun 2025 Zhenghao Xi, Xiang Liu, Yaqi Liu, Yitong Cai, Yangyu Zheng

Accident detection using Closed Circuit Television (CCTV) footage is one of the most important capabilities for enhancing transport safety and enabling efficient traffic control.

Combining Self-attention and Dilation Convolutional for Semantic Segmentation of Coal Maceral Groups

no code implementations15 Jun 2025 Zhenghao Xi, Zhengnan Lv, Yang Zheng, Xiang Liu, Zhuang Yu, Junran Chen, Jing Hu, Yaqi Liu

The segmentation of coal maceral groups can be described as a semantic segmentation process of coal maceral group images, which is of great significance for studying the chemical properties of coal.

Segmentation Semantic Segmentation

Topological Machine Learning for Protein-Nucleic Acid Binding Affinity Changes Upon Mutation

1 code implementation28 May 2025 Xiang Liu, JunJie Wee, Guo-Wei Wei

Understanding how protein mutations affect protein-nucleic acid binding is critical for unraveling disease mechanisms and advancing therapies.

Topological Data Analysis

Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression

1 code implementation26 May 2025 Peijie Dong, Zhenheng Tang, Xiang Liu, Lujun Li, Xiaowen Chu, Bo Li

Post-training compression reduces the computational and memory costs of large language models (LLMs), enabling resource-efficient deployment.

Language Modeling Language Modelling +3

Efficient Reasoning via Chain of Unconscious Thought

1 code implementation26 May 2025 Ruihan Gong, Yue Liu, Wenjie Qu, Mingzhe Du, Yufei He, Yingwei Ma, Yulin Chen, Xiang Liu, Yi Wen, Xinfeng Li, Ruidong Wang, Xinzhong Zhu, Bryan Hooi, Jiaheng Zhang

Inspired by UTT, we propose a new reasoning paradigm, termed Chain of Unconscious Thought (CoUT), to improve the token efficiency of LRMs by guiding them to mimic human unconscious thought and internalize reasoning processes.

Multimodal Online Federated Learning with Modality Missing in Internet of Things

no code implementations22 May 2025 Heqiang Wang, Xiang Liu, Xiaoxiong Zhong, Lixing Chen, Fangming Liu, Weizhe Zhang

Furthermore, the real-time nature of data collection and limited local storage on edge devices in IoT call for an online learning paradigm.

Federated Learning

FlowKV: Enhancing Multi-Turn Conversational Coherence in LLMs via Isolated Key-Value Cache Management

no code implementations21 May 2025 Xiang Liu, Hong Chen, Xuming Hu, Xiaowen Chu

FlowKV's core innovation is a multi-turn isolation mechanism that preserves the accumulated compressed KV cache from past turns.

Instruction Following Management

SSR: Speculative Parallel Scaling Reasoning in Test-time

no code implementations21 May 2025 Yuanlin Chu, Bo Wang, Xiang Liu, Hong Chen, Aiwei Liu, Xuming Hu

Large language models (LLMs) have achieved impressive results on multi-step mathematical reasoning, yet at the cost of high computational overhead.

Diversity Math +1

CAFES: A Collaborative Multi-Agent Framework for Multi-Granular Multimodal Essay Scoring

1 code implementation20 May 2025 Jiamin Su, Yibo Yan, Zhuoran Gao, Han Zhang, Xiang Liu, Xuming Hu

Automated Essay Scoring (AES) is crucial for modern education, particularly with the increasing prevalence of multimodal assessments.

Automated Essay Scoring Diversity +4

Quotient Complex Transformer (QCformer) for Perovskite Data Analysis

no code implementations14 May 2025 Xinyu You, Xiang Liu, Chuan-Shen Hu, Kelin Xia, Tze Chien Sum

To address these limitations, we propose a novel representation based on quotient complexes (QCs) and introduce the Quotient Complex Transformer (QCformer) for material property prediction.

Property Prediction

TSP-OCS: A Time-Series Prediction for Optimal Camera Selection in Multi-Viewpoint Surgical Video Analysis

no code implementations9 Apr 2025 Xinyu Liu, Xiaoguang Lin, Xiang Liu, Yong Yang, Hongqian Wang, Qilong Sun

This study addresses these limitations by employing a multi-viewpoint camera recording system, capturing the surgical procedure from six different angles to mitigate occlusions.

Prediction Time Series +1

State Space Model Meets Transformer: A New Paradigm for 3D Object Detection

1 code implementation International Conference on Learning Representations 2025 Chuxin Wang, Wenfei Yang, Xiang Liu, Tianzhu Zhang

To the best of our knowledge, this is the first method to model queries as system states and scene points as system inputs, which can simultaneously update scene point features and query features with linear complexity.

3D Object Detection Decoder +2

A Multimodal Benchmark Dataset and Model for Crop Disease Diagnosis

1 code implementation10 Mar 2025 Xiang Liu, Zhaoxiang Liu, Huan Hu, Zezhou Chen, Kohou Wang, Kai Wang, Shiguo Lian

We demonstrate the utility of the dataset by finetuning state-of-the-art multimodal models, showcasing significant improvements in crop disease diagnosis.

Question Answering

Optimizing for the Shortest Path in Denoising Diffusion Model

1 code implementation CVPR 2025 Ping Chen, Xingpeng Zhang, Zhaoxiang Liu, Huan Hu, Xiang Liu, Kai Wang, Min Wang, Yanlin Qian, Shiguo Lian

In this research, we propose a novel denoising diffusion model based on shortest-path modeling that optimizes residual propagation to enhance both denoising efficiency and quality. Drawing on Denoising Diffusion Implicit Models (DDIM) and insights from graph theory, our model, termed the Shortest Path Diffusion Model (ShortDF), treats the denoising process as a shortest-path problem aimed at minimizing reconstruction error.

Denoising

Manifold Topological Deep Learning for Biomedical Data

no code implementations28 Feb 2025 Xiang Liu, Zhe Su, Yongyi Shi, Yiying Tong, Ge Wang, Guo-Wei Wei

Recently, topological deep learning (TDL), which integrates algebraic topology with deep neural networks, has achieved tremendous success in processing point-cloud data, emerging as a promising paradigm in data science.

Deep Learning

The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?

no code implementations24 Feb 2025 Zhenheng Tang, Xiang Liu, Qian Wang, Peijie Dong, Bingsheng He, Xiaowen Chu, Bo Li

In this blog, we present a brief review of recent advancements in LLMs related to retrieval-augmented generation, multi-step reasoning, external tools, and computational expressivity, all of which substantially enhance LLM performance.

Arithmetic Reasoning Common Sense Reasoning +2

Road Traffic Sign Recognition method using Siamese network Combining Efficient-CNN based Encoder

no code implementations21 Feb 2025 Zhenghao Xi, Yuchao Shao, Yang Zheng, Xiang Liu, Yaqi Liu, Yitong Cai

Traffic sign recognition (TSR) plays an essential role in assisted driving and intelligent transportation systems.

Traffic Sign Recognition

Perovskite-LLM: Knowledge-Enhanced Large Language Models for Perovskite Solar Cell Research

no code implementations18 Feb 2025 Xiang Liu, Penglei Sun, Shuyan Chen, Longhan Zhang, Peijie Dong, Huajie You, Yongqi Zhang, Chang Yan, Xiaowen Chu, Tong-Yi Zhang

The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain.

Experimental Design

Automatic Pruning via Structured Lasso with Class-wise Information

no code implementations13 Feb 2025 Xiang Liu, Mingchen Li, Xia Li, Leigang Qu, Zifan Peng, Yijun Song, Zemin Liu, Linshan Jiang, Jialin Li

For instance, using the VGG16 model on the CIFAR-10 dataset, we achieve a parameter reduction of 85%, a decrease in FLOPs by 61%, and maintain an accuracy of 94.10% (0.14% higher than the original model); we reduce the parameters by 55% with the accuracy at 76.12% using the ResNet architecture on ImageNet (only drops 0.03%).

Network Pruning

One-shot Federated Learning Methods: A Practical Guide

1 code implementation13 Feb 2025 Xiang Liu, Zhenheng Tang, Xia Li, Yijun Song, Sijie Ji, Zemin Liu, Bo Han, Linshan Jiang, Jialin Li

One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing privacy and communication overhead issues associated with multiple rounds of data exchange in traditional Federated Learning (FL).

Federated Learning
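The single-round constraint described above can be illustrated with a minimal sketch, assuming a naive parameter-averaging server and a least-squares local objective (the surveyed OFL methods are considerably more sophisticated; all names and dimensions here are illustrative):

```python
import numpy as np

def client_update(weights, local_data, lr=0.1, epochs=5):
    """Hypothetical local training: gradient descent on a least-squares loss."""
    X, y = local_data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def one_shot_fl(global_weights, clients):
    """One-shot FL: every client uploads exactly once, the server aggregates once."""
    uploads = [client_update(global_weights, data) for data in clients]
    return np.mean(uploads, axis=0)  # single aggregation step, no further rounds

# Synthetic demo: three clients sharing the same underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))
w = one_shot_fl(np.zeros(2), clients)
```

The point of the sketch is the communication pattern, not the aggregation rule: each client's model leaves the device exactly once, which is what removes the multi-round privacy and communication overhead the snippet mentions.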

HODDI: A Dataset of High-Order Drug-Drug Interactions for Computational Pharmacovigilance

1 code implementation10 Feb 2025 Zhaoying Wang, Yingdan Shi, Xiang Liu, Can Chen, Jun Wen, Ren Wang

However, the scarcity of higher-order datasets that capture the combinatorial effects of multiple drugs severely limits progress in this field.

Pharmacovigilance

Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing

no code implementations6 Feb 2025 Kunfeng Lai, Zhenheng Tang, Xinglin Pan, Peijie Dong, Xiang Liu, Haolan Chen, Li Shen, Bo Li, Xiaowen Chu

To further reduce storage costs, inspired by task arithmetic sparsity, we decouple multiple fine-tuned experts into a dense expert and several sparse experts.

Task Arithmetic

Can LLMs Maintain Fundamental Abilities under KV Cache Compression?

no code implementations4 Feb 2025 Xiang Liu, Zhenheng Tang, Hong Chen, Peijie Dong, Zeyu Li, Xiuze Zhou, Bo Li, Xuming Hu, Xiaowen Chu

This paper investigates an underexplored challenge in large language models (LLMs): the impact of KV cache compression methods on LLMs' fundamental capabilities.

Arithmetic Reasoning Code Generation +2

ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference

no code implementations1 Feb 2025 Xiang Liu, Zhenheng Tang, Peijie Dong, Zeyu Li, Yue Liu, Bo Li, Xuming Hu, Xiaowen Chu

Large Language Models (LLMs) require significant GPU memory when processing long texts, with the key value (KV) cache consuming up to 70% of total memory during inference.

GSM8K In-Context Learning
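The memory pressure motivating KV cache compression can be checked with a back-of-the-envelope size estimate. The formula below is the standard one (K and V tensors per layer); the 7B-class model dimensions are assumptions for illustration, not figures from the paper:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len,
                   batch=1, bytes_per_elem=2):
    """KV cache size in bytes: 2 tensors (K and V) per layer, fp16 by default."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Illustrative 7B-class configuration (assumed): 32 layers, 32 KV heads,
# head dim 128, 32k-token context, fp16 storage.
gib = kv_cache_bytes(n_layers=32, n_kv_heads=32,
                     head_dim=128, seq_len=32768) / 2**30
```

Under these assumptions a single 32k-token sequence already needs 16 GiB of KV cache, which is why compressing it, here by preserving semantically coherent chunks, matters for long-context inference.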

Rethinking Noisy Video-Text Retrieval via Relation-aware Alignment

no code implementations CVPR 2025 Huakai Lai, Guoxin Xiong, Huayu Mai, Xiang Liu, Tianzhu Zhang

Video-Text Retrieval (VTR) is a core task in multi-modal understanding, drawing growing attention from both academia and industry in recent years.

Relation Text Retrieval +1

Dual-Agent Optimization framework for Cross-Domain Few-Shot Segmentation

no code implementations CVPR 2025 Zhaoyang Li, YuAn Wang, Wangkai Li, Tianzhu Zhang, Xiang Liu

In the consistent mutual aggregation module, we employ a set of agents to learn domain-invariant features across domains, and then use these features to enhance the original representations for feature adaptation.

Cross-Domain Few-Shot Domain Adaptation

Learning Radiance Fields from a Single Snapshot Compressive Image

no code implementations27 Dec 2024 Yunhao Li, Xiang Liu, Xiaodong Wang, Xin Yuan, Peidong Liu

SCI is a cost-effective method that enables the recording of high-dimensional data, such as hyperspectral or temporal information, into a single image using low-cost 2D imaging sensors.

3DGS 3D Scene Reconstruction +3

SILMM: Self-Improving Large Multimodal Models for Compositional Text-to-Image Generation

no code implementations CVPR 2025 Leigang Qu, Haochuan Li, Wenjie Wang, Xiang Liu, Juncheng Li, Liqiang Nie, Tat-Seng Chua

To adapt SILMM to LMMs with continuous features, we propose a diversity mechanism to obtain diverse representations and a kernel-based continuous DPO for alignment.

Diversity Prompt Engineering +2

S3TU-Net: Structured Convolution and Superpixel Transformer for Lung Nodule Segmentation

no code implementations19 Nov 2024 Yuke Wu, Xiang Liu, Yunyu Shi, Xinyi Chen, Zhenglei Wang, YuQing Xu, Shuo Hong Wang

In addition to comparison and ablation studies, we validated the generalization ability of our model on the EPDB private dataset, achieving a DSC of 86.40%.

Computational Efficiency Computed Tomography (CT) +1

Should We Really Edit Language Models? On the Evaluation of Edited Language Models

1 code implementation24 Oct 2024 Qi Li, Xiang Liu, Zhenheng Tang, Peijie Dong, Zeyu Li, Xinglin Pan, Xiaowen Chu

Our findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models, which motivates further research on more practical and reliable editing methods.

General Knowledge Model Editing

Efficient Partitioning Vision Transformer on Edge Devices for Distributed Inference

no code implementations15 Oct 2024 Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li

To further reduce computational overhead and inference latency, we introduce a class-wise pruning technique that decreases the size of each sub-model.

LPZero: Language Model Zero-cost Proxy Search from Zero

no code implementations7 Oct 2024 Peijie Dong, Lujun Li, Xiang Liu, Zhenheng Tang, Xuebo Liu, Qiang Wang, Xiaowen Chu

Specifically, we model the ZC proxy as a symbolic equation and incorporate a unified proxy search space that encompasses existing ZC proxies, which are composed of a predefined set of mathematical symbols.

Language Modeling Language Modelling +1

LongGenBench: Long-context Generation Benchmark

1 code implementation5 Oct 2024 Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu

Current long-context benchmarks primarily focus on retrieval-based tests, requiring Large Language Models (LLMs) to locate specific information within extensive input contexts, such as the needle-in-a-haystack (NIAH) benchmark.

Language Modelling Retrieval

DIIT: A Domain-Invariant Information Transfer Method for Industrial Cross-Domain Recommendation

no code implementations29 Sep 2024 Heyuan Huang, Xingyu Lou, Chaochao Chen, Pengxiang Cheng, Yue Xin, Chengwei He, Xiang Liu, Jun Wang

Finally, to improve efficiency, we design a migrator to transfer the extracted information to the latest target domain model, which only needs the target domain model for inference.

Recommendation Systems

Piculet: Specialized Models-Guided Hallucination Decrease for MultiModal Large Language Models

no code implementations2 Aug 2024 Kohou Wang, Xiang Liu, Zhaoxiang Liu, Kai Wang, Shiguo Lian

Multimodal Large Language Models (MLLMs) have made significant progress in bridging the gap between visual and language modalities.

Hallucination

3D Question Answering for City Scene Understanding

no code implementations24 Jul 2024 Penglei Sun, Yaoxian Song, Xiang Liu, Xiaofei Yang, Qiang Wang, Tiefeng Li, Yang Yang, Xiaowen Chu

3D multimodal question answering (MQA) plays a crucial role in scene understanding by enabling intelligent agents to comprehend their surroundings in 3D environments.

Autonomous Driving Question Answering +1

Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models

1 code implementation5 Jun 2024 Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, Xiaowen Chu

In particular, we devise an elaborate search space encompassing the existing pruning metrics to discover the potential symbolic pruning metric.

Diversity Language Modeling +1

Gaussian Primitives for Deformable Image Registration

no code implementations5 Jun 2024 Jihe Li, Xiang Liu, Fabian Zhang, Xia Li, Xixin Cao, Ye Zhang, Joachim Buhmann

Furthermore, the movement of individual voxel is derived via blending the local rigid transformation of the neighboring Gaussian primitives.

Computational Efficiency Image Registration

FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning

no code implementations20 May 2024 Liuzhi Zhou, Yu He, Kun Zhai, Xiang Liu, Sen Liu, Xingjun Ma, Guangnan Ye, Yu-Gang Jiang, Hongfeng Chai

This comparative analysis revealed that, because client models carry limited information from other clients during the initial stages of federated learning, more substantial constraints need to be imposed on the parameters of the adaptive algorithm.

Federated Learning

Computation and Communication Efficient Lightweighting Vertical Federated Learning for Smart Building IoT

no code implementations30 Mar 2024 Heqiang Wang, Xiang Liu, Yucheng Liu, Jia Zhou, Weihong Yang, Xiaoxiong Zhong

With the increasing number and enhanced capabilities of IoT devices in smart buildings, these devices are evolving beyond basic data collection and control to actively participate in deep learning tasks.

Computational Efficiency image-classification +2

LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning

1 code implementation26 Mar 2024 Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang

Attempting to complement this deficiency, we investigate the layerwise properties of LoRA on fine-tuning tasks and observe an unexpected but consistent skewness of weight norms across different layers.

GSM8K Language Modeling +5

ParZC: Parametric Zero-Cost Proxies for Efficient NAS

no code implementations3 Feb 2024 Peijie Dong, Lujun Li, Xinglin Pan, Zimian Wei, Xiang Liu, Qiang Wang, Xiaowen Chu

Recent advancements in Zero-shot Neural Architecture Search (NAS) highlight the efficacy of zero-cost proxies in various NAS benchmarks.

Neural Architecture Search

An Efficient Implicit Neural Representation Image Codec Based on Mixed Autoregressive Model for Low-Complexity Decoding

no code implementations23 Jan 2024 Xiang Liu, Jiahong Chen, Bin Chen, Zimo Liu, Baoyi An, Shu-Tao Xia, Zhi Wang

To the best of our knowledge, our method is the first INR-based codec comparable with Hyperprior in both decoding speed and quality while maintaining low complexity.

Computational Efficiency Image Compression

MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization

1 code implementation12 Jan 2024 Shuaijie She, Wei Zou, ShuJian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, Jiajun Chen

To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO), aiming to align the reasoning processes in other languages with the dominant language.

Mathematical Reasoning

Plum: Prompt Learning using Metaheuristic

1 code implementation14 Nov 2023 Rui Pan, Shuo Xing, Shizhe Diao, Wenhe Sun, Xiang Liu, Kashun Shum, Renjie Pi, Jipeng Zhang, Tong Zhang

Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models.

Image Generation Prompt Learning

Dissecting the Runtime Performance of the Training, Fine-tuning, and Inference of Large Language Models

no code implementations7 Nov 2023 Longteng Zhang, Xiang Liu, Zeyu Li, Xinglin Pan, Peijie Dong, Ruibo Fan, Rui Guo, Xin Wang, Qiong Luo, Shaohuai Shi, Xiaowen Chu

For end users, our benchmark and findings help better understand different optimization techniques, training and inference frameworks, together with hardware platforms in choosing configurations for deploying LLMs.

Quantization

$R^3$-NL2GQL: A Model Coordination and Knowledge Graph Alignment Approach for NL2GQL

1 code implementation3 Nov 2023 YuHang Zhou, Yu He, Siyu Tian, Yuchen Ni, Zhangyue Yin, Xiang Liu, Chuanjun Ji, Sen Liu, Xipeng Qiu, Guangnan Ye, Hongfeng Chai

While current tasks of converting natural language to SQL (NL2SQL) using Foundation Models have shown impressive achievements, adapting these approaches for converting natural language to Graph Query Language (NL2GQL) encounters hurdles due to the distinct nature of GQL compared to SQL, alongside the diverse forms of GQL.

Knowledge Graphs Natural Language Queries +3

FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation

1 code implementation30 Sep 2023 Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li

Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning.

Federated Learning

Torsion Graph Neural Networks

1 code implementation23 Jun 2023 Cong Shen, Xiang Liu, Jiawei Luo, Kelin Xia

This demonstrates that analytic torsion is a highly efficient topological invariant in the characterization of graph structures and can significantly boost the performance of GNNs.

Graph Neural Network Link Prediction +1

Active Prompting with Chain-of-Thought for Large Language Models

2 code implementations23 Feb 2023 Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, Tong Zhang

For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful ones to annotate from a pool of task-specific queries.

Active Learning Zero-Shot Learning

AI of Brain and Cognitive Sciences: From the Perspective of First Principles

no code implementations20 Jan 2023 Luyao Chen, Zhiqiang Chen, Longsheng Jiang, Xiang Liu, Linlu Xu, Bo Zhang, Xiaolong Zou, Jinying Gao, Yu Zhu, Xizi Gong, Shan Yu, Sen Song, Liangyi Chen, Fang Fang, Si Wu, Jia Liu

Nowadays, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation.

Few-Shot Learning image-classification +1

Research on College Students' Innovation and Entrepreneurship Education from The Perspective of Artificial Intelligence Knowledge-Based Crowdsourcing

no code implementations12 Dec 2022 Yufei Xie, Xiang Liu, Qizhong Yuan

Based on the practical process of innovation and entrepreneurship education for college students at the author's university, this study analyzes and deconstructs the key concepts of AI knowledge-based crowdsourcing on the basis of literature research, examines the need for combining AI knowledge-based crowdsourcing with college students' innovation and entrepreneurship education practice through a survey of a random sample of college students, and verifies students' awareness and application of AI knowledge-based crowdsourcing in their learning and practice of innovation and entrepreneurship.

Cross-Modality Transformer for Visible-Infrared Person Re-Identification

no code implementations ECCV 2022 Kongzhu Jiang, Tianzhu Zhang, Xiang Liu, Bingqiao Qian, Yongdong Zhang, Feng Wu

To alleviate the above issues, we propose a novel Cross-Modality Transformer (CMT) to jointly explore a modality-level alignment module and an instance-level module for VI-ReID.

Decoder Person Re-Identification

DiVAE: Photorealistic Images Synthesis with Denoising Diffusion Decoder

no code implementations1 Jun 2022 Jie Shi, Chenfei Wu, Jian Liang, Xiang Liu, Nan Duan

Our work proposes a VQ-VAE architecture model with a diffusion decoder (DiVAE) to work as the reconstructing component in image synthesis.

Decoder Denoising +1

Transmit Design for Joint MIMO Radar and Multiuser Communications with Transmit Covariance Constraint

no code implementations2 Sep 2021 Xiang Liu, Tianyao Huang, Yimin Liu

With this constraint, we formulate and solve the signal-to-interference-plus-noise ratio (SINR) balancing problem for multiuser transmit beamforming via convex optimization.

Diverse Part Discovery: Occluded Person Re-identification with Part-Aware Transformer

no code implementations CVPR 2021 Yulin Li, Jianfeng He, Tianzhu Zhang, Xiang Liu, Yongdong Zhang, Feng Wu

To address these issues, we propose a novel end-to-end Part-Aware Transformer (PAT) for occluded person Re-ID through diverse part discovery via a transformer encoderdecoder architecture, including a pixel context based transformer encoder and a part prototype based transformer decoder.

Decoder Diversity +1

Newly observed $X(4630)$: a new charmoniumlike molecule

no code implementations4 Mar 2021 Xin-Dian Yang, Fu-Lai Wang, Zhan-Wei Liu, Xiang Liu

Very recently, the LHCb Collaboration at the Large Hadron Collider at CERN observed a new resonance, $X(4630)$.

High Energy Physics - Phenomenology

Hidden-charm pentaquarks with triple strangeness due to the $Ω_{c}^{(*)}\bar{D}_s^{(*)}$ interactions

no code implementations27 Jan 2021 Fu-Lai Wang, Xin-Dian Yang, Rui Chen, Xiang Liu

Our results suggest that the $\Omega_{c}\bar D_s^*$ state with $J^P={3}/{2}^{-}$ and the $\Omega_{c}^{*}\bar D_s^*$ state with $J^P={5}/{2}^{-}$ can be recommended as the candidates of the hidden-charm molecular pentaquark with triple strangeness.

High Energy Physics - Phenomenology High Energy Physics - Experiment

Universal behavior of mass gaps existing in the single heavy baryon family

no code implementations26 Jan 2021 Bing Chen, Si-Qiang Luo, Xiang Liu

The mass gaps existing in the discovered single heavy flavor baryons are analyzed, which show some universal behaviors.

High Energy Physics - Phenomenology

Time-Series Regeneration with Convolutional Recurrent Generative Adversarial Network for Remaining Useful Life Estimation

no code implementations11 Jan 2021 Xuewen Zhang, Yan Qin, Chau Yuen, Lahiru Jayasinghe, Xiang Liu

With this consideration in mind, an enhanced RUL framework focusing on data self-generation is put forward, for the first time, for both non-cyclic and cyclic degradation patterns.

Generative Adversarial Network Time Series +1

Unambiguous Delay-Doppler Recovery from Random Phase Coded Pulses

no code implementations22 Dec 2020 Xiang Liu, Deborah Cohen, Tianyao Huang, Yimin Liu, Yonina C. Eldar

Our method encodes each pulse with a random phase, varying from pulse to pulse, and then processes the received samples jointly to resolve the range ambiguity.

compressed sensing

Multi-Features Guidance Network for partial-to-partial point cloud registration

1 code implementation24 Nov 2020 Hongyuan Wang, Xiang Liu, Wen Kang, Zhiqiang Yan, Bingwen Wang, Qianhao Ning

In the correspondences credibility computation module, based on the conflicted relationship between the features matching matrix and the coordinates matching matrix, we score the reliability for each correspondence, which can reduce the impact of mismatched or non-matched points.

Computational Efficiency Point Cloud Registration

Establishing the first hidden-charm pentaquark with strangeness

no code implementations2 Nov 2020 Hua-Xing Chen, Wei Chen, Xiang Liu, Xiao-Hai Liu

We study the $P_{cs}(4459)^0$ recently observed by LHCb using the method of QCD sum rules.

High Energy Physics - Phenomenology High Energy Physics - Experiment

Neural Network-based Automatic Factor Construction

no code implementations14 Aug 2020 Jie Fang, Jian-Wu Lin, Shu-Tao Xia, Yong Jiang, Zhikang Xia, Xiang Liu

This paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a tailored neural network framework that can automatically construct diversified financial factors based on financial domain knowledge and a variety of neural network structures.

Time Series Time Series Analysis

FSD-10: A Dataset for Competitive Sports Content Analysis

no code implementations9 Feb 2020 Shenlan Liu, Xiang Liu, Gao Huang, Lin Feng, Lianyu Hu, Dong Jiang, Aibin Zhang, Yang Liu, Hong Qiao

To promote the research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for finegrained sports content analysis.

Action Recognition Benchmarking +1

PENet: Object Detection using Points Estimation in Aerial Images

no code implementations22 Jan 2020 Ziyang Tang, Xiang Liu, Guangyu Shen, Baijian Yang

Aerial imagery has been increasingly adopted in mission-critical tasks, such as traffic surveillance, smart cities, and disaster assistance.

object-detection Object Detection

Alpha Discovery Neural Network based on Prior Knowledge

no code implementations26 Dec 2019 Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Zhikang Xia, Xiang Liu, Yong Jiang

This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators based on prior knowledge.

Time Series Time Series Analysis

Multiple Learning for Regression in big data

no code implementations3 Mar 2019 Xiang Liu, Ziyang Tang, Huyunting Huang, Tonglin Zhang, Baijian Yang

Results showed our approaches can obtain closed-form solutions for multiple models at half the training time that traditional methods require for a single model.

Form regression
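The shared-computation idea behind fitting multiple regression models in closed form can be sketched as follows, assuming the savings come from reusing one Gram matrix across all target vectors (an illustrative guess at the shared structure, not the paper's exact algorithm):

```python
import numpy as np

def fit_multiple_ols(X, Y):
    """Closed-form normal-equations fit for several models sharing the design X.

    The Gram matrix X^T X is computed and factored once; each column of Y
    (one model per column) only adds a cheap extra right-hand side.
    """
    gram = X.T @ X                    # shared across all models
    rhs = X.T @ Y                     # one column per target/model
    return np.linalg.solve(gram, rhs)  # closed-form solution w = (X^T X)^{-1} X^T y

# Noise-free demo: two linear models over the same 100x3 design matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
W_true = np.array([[1.0, 0.5],
                   [2.0, -1.0],
                   [0.0, 3.0]])
W_hat = fit_multiple_ols(X, X @ W_true)
```

Because the expensive part (forming and solving against `X.T @ X`) is amortized over all columns of `Y`, fitting k models costs little more than fitting one, which is the flavor of speedup the abstract claims.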

Color Recognition for Rubik's Cube Robot

1 code implementation11 Jan 2019 Shenglan Liu, Dong Jiang, Lin Feng, Feilong Wang, Zhanbo Feng, Xiang Liu, Shuai Guo, Bingjun Li, Yuchen Cong

We finally design a Rubik's cube robot and construct a dataset to illustrate the efficiency and effectiveness of our online methods, and to demonstrate the ineffectiveness of the offline method caused by color drifting in our dataset.

Rubik's Cube

Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

1 code implementation5 Nov 2018 Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. 
Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. 
Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. 
Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze

This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.

Brain Tumor Segmentation Prognosis +2

A Sparse Graph-Structured Lasso Mixed Model for Genetic Association with Confounding Correction

1 code implementation11 Nov 2017 Wenting Ye, Xiang Liu, Tianwei Yue, Wenping Wang

We proposed the sparse graph-structured linear mixed model (sGLMM) that can incorporate the relatedness information from traits in a dataset with confounding correction.
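The core idea of graph-guided sparse regression over related traits can be sketched in a few lines. The following is a minimal illustration, not the authors' sGLMM implementation: it solves a multi-trait least-squares problem with an L1 sparsity penalty and a smooth trait-graph Laplacian penalty via proximal gradient (ISTA). All function names, the penalty form, and the hyperparameters here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of the L1 norm: shrink entries toward zero by t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def graph_sparse_regression(X, Y, L, lam1=0.1, lam2=0.1, iters=500):
    """Minimize 0.5*||Y - XB||_F^2 + lam2*tr(B L B^T) + lam1*||B||_1.

    X: (n, p) genotypes/features; Y: (n, k) traits;
    L: (k, k) symmetric Laplacian of the trait-relatedness graph.
    """
    p, k = X.shape[1], Y.shape[1]
    B = np.zeros((p, k))
    # step size from a Lipschitz bound on the smooth part of the objective
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 + 2 * lam2 * np.linalg.norm(L, 2))
    for _ in range(iters):
        grad = X.T @ (X @ B - Y) + 2 * lam2 * B @ L  # gradient of smooth terms
        B = soft_threshold(B - lr * grad, lr * lam1)  # proximal (L1) step
    return B

# Tiny usage sketch on synthetic data: one causal feature shared across 3 traits.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 8))
B_true = np.zeros((8, 3)); B_true[0] = 1.0
Y = X @ B_true + 0.01 * rng.standard_normal((30, 3))
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # chain graph on traits
B_hat = graph_sparse_regression(X, Y, L, lam1=0.05, lam2=0.01)
```

The Laplacian term encourages coefficient profiles of graph-connected traits to agree, which is the intuition behind borrowing relatedness information across traits; the paper's model additionally handles confounding via a mixed-model component, which this sketch omits.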
