no code implementations • 19 Jun 2025 • Zhenghao Xi, Xiang Liu, Yaqi Liu, Yitong Cai, Yangyu Zheng
Accident detection from Closed-Circuit Television (CCTV) footage is one of the most important capabilities for enhancing transport safety and efficient traffic control.
no code implementations • 15 Jun 2025 • Zhenghao Xi, Zhengnan Lv, Yang Zheng, Xiang Liu, Zhuang Yu, Junran Chen, Jing Hu, Yaqi Liu
Segmenting coal maceral groups can be framed as semantic segmentation of coal maceral group images, and it is of great significance for studying the chemical properties of coal.
1 code implementation • 28 May 2025 • Xiang Liu, JunJie Wee, Guo-Wei Wei
Understanding how protein mutations affect protein-nucleic acid binding is critical for unraveling disease mechanisms and advancing therapies.
1 code implementation • 26 May 2025 • Peijie Dong, Zhenheng Tang, Xiang Liu, Lujun Li, Xiaowen Chu, Bo Li
Post-training compression reduces the computational and memory costs of large language models (LLMs), enabling resource-efficient deployment.
1 code implementation • 26 May 2025 • Ruihan Gong, Yue Liu, Wenjie Qu, Mingzhe Du, Yufei He, Yingwei Ma, Yulin Chen, Xiang Liu, Yi Wen, Xinfeng Li, Ruidong Wang, Xinzhong Zhu, Bryan Hooi, Jiaheng Zhang
Inspired by UTT, we propose a new reasoning paradigm, termed Chain of Unconscious Thought (CoUT), to improve the token efficiency of LRMs by guiding them to mimic human unconscious thought and internalize reasoning processes.
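A minimal sketch of the idea behind such token-efficient prompting, assuming a generic chat-style interface; the templates and crude token counter below are illustrative placeholders, not the paper's CoUT prompts:

```python
# Minimal sketch (not the authors' code): contrasting an explicit chain-of-thought
# prompt with a CoUT-style prompt that asks the model to reason internally and
# emit only the final answer, so fewer output tokens are generated.

COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step, writing out every intermediate step."
)

COUT_TEMPLATE = (
    "Question: {question}\n"
    "Think silently; do not show your reasoning. Reply with the final answer only."
)

def rough_token_count(text: str) -> int:
    """Crude whitespace proxy for a token count, used only for illustration."""
    return len(text.split())

if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    for name, template in [("CoT", COT_TEMPLATE), ("CoUT", COUT_TEMPLATE)]:
        prompt = template.format(question=q)
        print(f"{name}: {rough_token_count(prompt)} prompt tokens\n{prompt}\n")
```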
no code implementations • 22 May 2025 • Heqiang Wang, Xiang Liu, Xiaoxiong Zhong, Lixing Chen, Fangming Liu, Weizhe Zhang
Furthermore, the real-time nature of data collection and limited local storage on edge devices in IoT call for an online learning paradigm.
no code implementations • 21 May 2025 • Xiang Liu, Hong Chen, Xuming Hu, Xiaowen Chu
FlowKV's core innovation is a multi-turn isolation mechanism that preserves the accumulated compressed KV cache from past turns.
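A toy sketch of a multi-turn isolation scheme of this kind, under the assumption that only the newest turn's keys and values are compressed while earlier turns stay frozen; the norm-based eviction rule and class names here are invented for illustration and are not FlowKV's actual algorithm:

```python
# Minimal sketch (assumptions, not the paper's implementation): a multi-turn KV
# store where caches compressed in earlier turns are kept frozen, and only the
# newest turn's keys/values are passed to a compression routine.
import numpy as np

def compress_kv(keys: np.ndarray, values: np.ndarray, keep_ratio: float = 0.5):
    """Toy eviction: keep the entries with the largest key L2 norm."""
    n_keep = max(1, int(len(keys) * keep_ratio))
    idx = np.argsort(-np.linalg.norm(keys, axis=-1))[:n_keep]
    idx.sort()  # preserve the temporal order of the retained tokens
    return keys[idx], values[idx]

class MultiTurnKVCache:
    def __init__(self):
        self.frozen_k, self.frozen_v = [], []  # compressed caches from past turns

    def end_turn(self, new_k: np.ndarray, new_v: np.ndarray):
        # Only the current turn is compressed; earlier turns are never recompressed.
        ck, cv = compress_kv(new_k, new_v)
        self.frozen_k.append(ck)
        self.frozen_v.append(cv)

    def full_cache(self):
        return np.concatenate(self.frozen_k), np.concatenate(self.frozen_v)

cache = MultiTurnKVCache()
for _ in range(3):  # three dialogue turns of 8 tokens each, head_dim = 4
    cache.end_turn(np.random.randn(8, 4), np.random.randn(8, 4))
k, v = cache.full_cache()
print(k.shape, v.shape)  # (12, 4) (12, 4) with keep_ratio = 0.5
```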
no code implementations • 21 May 2025 • Yuanlin Chu, Bo Wang, Xiang Liu, Hong Chen, Aiwei Liu, Xuming Hu
Large language models (LLMs) have achieved impressive results on multi-step mathematical reasoning, yet at the cost of high computational overhead.
1 code implementation • 20 May 2025 • Jiamin Su, Yibo Yan, Zhuoran Gao, Han Zhang, Xiang Liu, Xuming Hu
Automated Essay Scoring (AES) is crucial for modern education, particularly with the increasing prevalence of multimodal assessments.
1 code implementation • 18 May 2025 • YuHang Zhou, Xutian Chen, Yixin Cao, Yuchen Ni, Yu He, Siyu Tian, Xiang Liu, Jian Zhang, Chuanjun Ji, Guangnan Ye, Xipeng Qiu
Recent progress in large language models (LLMs) has outpaced the development of effective evaluation methods.
no code implementations • 14 May 2025 • Xinyu You, Xiang Liu, Chuan-Shen Hu, Kelin Xia, Tze Chien Sum
To address these limitations, we propose a novel representation based on quotient complexes (QCs) and introduce the Quotient Complex Transformer (QCformer) for material property prediction.
no code implementations • 9 Apr 2025 • Xinyu Liu, Xiaoguang Lin, Xiang Liu, Yong Yang, Hongqian Wang, Qilong Sun
This study addresses these limitations by employing a multi-viewpoint camera recording system, capturing the surgical procedure from six different angles to mitigate occlusions.
no code implementations • 18 Mar 2025 • Huan Ren, Wenfei Yang, Xiang Liu, Shifeng Zhang, Tianzhu Zhang
Category-level object pose estimation aims to determine the pose and size of novel objects in specific categories.
1 code implementation • International Conference on Learning Representations 2025 • Chuxin Wang, Wenfei Yang, Xiang Liu, Tianzhu Zhang
To the best of our knowledge, this is the first method to model queries as system states and scene points as system inputs, which can simultaneously update scene point features and query features with linear complexity.
Ranked #1 on 3D Object Detection on ScanNetV2
1 code implementation • 10 Mar 2025 • Xiang Liu, Zhaoxiang Liu, Huan Hu, Zezhou Chen, Kohou Wang, Kai Wang, Shiguo Lian
We demonstrate the utility of the dataset by finetuning state-of-the-art multimodal models, showcasing significant improvements in crop disease diagnosis.
1 code implementation • CVPR 2025 • Ping Chen, Xingpeng Zhang, Zhaoxiang Liu, Huan Hu, Xiang Liu, Kai Wang, Min Wang, Yanlin Qian, Shiguo Lian
In this research, we propose a novel denoising diffusion model based on shortest-path modeling that optimizes residual propagation to enhance both denoising efficiency and quality. Drawing on Denoising Diffusion Implicit Models (DDIM) and insights from graph theory, our model, termed the Shortest Path Diffusion Model (ShortDF), treats the denoising process as a shortest-path problem aimed at minimizing reconstruction error.
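For context, a minimal sketch of the standard deterministic DDIM update (eta = 0) that ShortDF builds on; this is the textbook step, not the paper's shortest-path formulation:

```python
# Standard (eta = 0) DDIM update used as the starting point for ShortDF-style
# methods; x0 is first predicted from the noise estimate, then re-noised to the
# previous timestep.
import numpy as np

def ddim_step(x_t, eps_pred, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM step from timestep t to the previous timestep."""
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps_pred

# Toy usage with a random "image" and a dummy noise prediction.
x_t = np.random.randn(3, 8, 8)
eps = np.random.randn(3, 8, 8)
x_prev = ddim_step(x_t, eps, alpha_bar_t=0.5, alpha_bar_prev=0.7)
print(x_prev.shape)
```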
no code implementations • 28 Feb 2025 • Xiang Liu, Zhe Su, Yongyi Shi, Yiying Tong, Ge Wang, Guo-Wei Wei
Recently, topological deep learning (TDL), which integrates algebraic topology with deep neural networks, has achieved tremendous success in processing point-cloud data, emerging as a promising paradigm in data science.
no code implementations • 24 Feb 2025 • Zhenheng Tang, Xiang Liu, Qian Wang, Peijie Dong, Bingsheng He, Xiaowen Chu, Bo Li
In this blog, we present a brief review of recent advancements in LLMs related to retrieval-augmented generation, multi-step reasoning, external tools, and computational expressivity, all of which substantially enhance LLM performance.
no code implementations • 21 Feb 2025 • Zhenghao Xi, Yuchao Shao, Yang Zheng, Xiang Liu, Yaqi Liu, Yitong Cai
Traffic sign recognition (TSR) plays an essential role in assisted driving and intelligent transportation systems.
no code implementations • 18 Feb 2025 • Xiang Liu, Penglei Sun, Shuyan Chen, Longhan Zhang, Peijie Dong, Huajie You, Yongqi Zhang, Chang Yan, Xiaowen Chu, Tong-Yi Zhang
The rapid advancement of perovskite solar cells (PSCs) has led to an exponential growth in research publications, creating an urgent need for efficient knowledge management and reasoning systems in this domain.
1 code implementation • 17 Feb 2025 • Jiamin Su, Yibo Yan, Fangteng Fu, Han Zhang, Jingheng Ye, Xiang Liu, Jiahao Huo, Huiyu Zhou, Xuming Hu
Automated Essay Scoring (AES) plays a crucial role in educational assessment by providing scalable and consistent evaluations of writing tasks.
no code implementations • 13 Feb 2025 • Xiang Liu, Mingchen Li, Xia Li, Leigang Qu, Zifan Peng, Yijun Song, Zemin Liu, Linshan Jiang, Jialin Li
For instance, using the VGG16 model on the CIFAR-10 dataset, we achieve a parameter reduction of 85%, a decrease in FLOPs of 61%, and an accuracy of 94.10% (0.14% higher than the original model); with the ResNet architecture on ImageNet, we reduce the parameters by 55% while keeping the accuracy at 76.12% (a drop of only 0.03%).
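As a rough illustration of how such sparsity targets translate into parameter counts, here is a generic L1-magnitude pruning sketch in PyTorch; the tiny CNN is a stand-in for VGG16, and this is not the paper's pruning method:

```python
# Generic magnitude pruning sketch: prune 85% of conv/linear weights and report
# how many parameters remain non-zero.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in network (VGG16 via torchvision would work the same way).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

prunable = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]
for module in prunable:
    prune.l1_unstructured(module, name="weight", amount=0.85)

total = sum(m.weight.numel() for m in prunable)
nonzero = sum(int(m.weight.count_nonzero()) for m in prunable)
print(f"remaining weights: {nonzero}/{total} ({100 * nonzero / total:.1f}%)")
```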
1 code implementation • 13 Feb 2025 • Xiang Liu, Zhenheng Tang, Xia Li, Yijun Song, Sijie Ji, Zemin Liu, Bo Han, Linshan Jiang, Jialin Li
One-shot Federated Learning (OFL) is a distributed machine learning paradigm that constrains client-server communication to a single round, addressing privacy and communication overhead issues associated with multiple rounds of data exchange in traditional Federated Learning (FL).
1 code implementation • 10 Feb 2025 • Zhaoying Wang, Yingdan Shi, Xiang Liu, Can Chen, Jun Wen, Ren Wang
However, the scarcity of higher-order datasets that capture the combinatorial effects of multiple drugs severely limits progress in this field.
no code implementations • 6 Feb 2025 • Kunfeng Lai, Zhenheng Tang, Xinglin Pan, Peijie Dong, Xiang Liu, Haolan Chen, Li Shen, Bo Li, Xiaowen Chu
To further reduce storage costs, inspired by task arithmetic sparsity, we decouple multiple fine-tuned experts into a dense expert and several sparse experts.
no code implementations • 4 Feb 2025 • Xiang Liu, Zhenheng Tang, Hong Chen, Peijie Dong, Zeyu Li, Xiuze Zhou, Bo Li, Xuming Hu, Xiaowen Chu
This paper investigates an underexplored challenge in large language models (LLMs): the impact of KV cache compression methods on LLMs' fundamental capabilities.
no code implementations • 1 Feb 2025 • Xiang Liu, Zhenheng Tang, Peijie Dong, Zeyu Li, Yue Liu, Bo Li, Xuming Hu, Xiaowen Chu
Large Language Models (LLMs) require significant GPU memory when processing long texts, with the key value (KV) cache consuming up to 70% of total memory during inference.
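A back-of-the-envelope calculation makes this memory pressure concrete; the model shape below is an assumed example, not a specific model from the paper:

```python
# KV-cache size estimate for a generic decoder-only LLM in fp16.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    # Keys and values are both cached, hence the factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Example: a 32-layer model, 32 KV heads of dim 128, 32k-token context, batch 1.
gib = kv_cache_bytes(32, 32, 128, 32_768, batch=1) / 2**30
print(f"{gib:.1f} GiB of KV cache")  # 16.0 GiB
```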
no code implementations • CVPR 2025 • Huakai Lai, Guoxin Xiong, Huayu Mai, Xiang Liu, Tianzhu Zhang
Video-Text Retrieval (VTR) is a core task in multi-modal understanding, drawing growing attention from both academia and industry in recent years.
no code implementations • CVPR 2025 • Zhaoyang Li, YuAn Wang, Wangkai Li, Tianzhu Zhang, Xiang Liu
In the consistent mutual aggregation module, we employ a set of agents to learn domain-invariant features across domains, and then use these features to enhance the original representations for feature adaptation.
no code implementations • 27 Dec 2024 • Yunhao Li, Xiang Liu, Xiaodong Wang, Xin Yuan, Peidong Liu
SCI is a cost-effective method that enables the recording of high-dimensional data, such as hyperspectral or temporal information, into a single image using low-cost 2D imaging sensors.
no code implementations • CVPR 2025 • Leigang Qu, Haochuan Li, Wenjie Wang, Xiang Liu, Juncheng Li, Liqiang Nie, Tat-Seng Chua
To adapt SILMM to LMMs with continuous features, we propose a diversity mechanism to obtain diverse representations and a kernel-based continuous DPO for alignment.
no code implementations • 19 Nov 2024 • Yuke Wu, Xiang Liu, Yunyu Shi, Xinyi Chen, Zhenglei Wang, YuQing Xu, Shuo Hong Wang
In addition to comparison and ablation studies, we validated the generalization ability of our model on the EPDB private dataset, achieving a DSC of 86.40%.
1 code implementation • 24 Oct 2024 • Qi Li, Xiang Liu, Zhenheng Tang, Peijie Dong, Zeyu Li, Xinglin Pan, Xiaowen Chu
Our findings indicate that current editing methods are only suitable for small-scale knowledge updates within language models, which motivates further research on more practical and reliable editing methods.
no code implementations • 15 Oct 2024 • Xiang Liu, Yijun Song, Xia Li, Yifei Sun, Huiying Lan, Zemin Liu, Linshan Jiang, Jialin Li
To further reduce computational overhead and inference latency, we introduce a class-wise pruning technique that decreases the size of each sub-model.
no code implementations • 7 Oct 2024 • Peijie Dong, Lujun Li, Xiang Liu, Zhenheng Tang, Xuebo Liu, Qiang Wang, Xiaowen Chu
Specifically, we model the ZC proxy as a symbolic equation and incorporate a unified proxy search space that encompasses existing ZC proxies, which are composed of a predefined set of mathematical symbols.
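A simplified sketch of treating a zero-cost proxy as a symbolic expression over primitive operations; the primitive set and candidate expressions here are illustrative stand-ins rather than the paper's actual search space:

```python
# Toy symbolic-proxy evaluation: a proxy is a sequence of primitive ops applied
# to a SNIP-like base term |w * g|, and candidate expressions are scored on
# random tensors purely for illustration.
import numpy as np

PRIMITIVES = {
    "abs": np.abs,
    "square": np.square,
    "sum": lambda x: np.array(x.sum()),
}

def evaluate_proxy(expr, weights, grads):
    """expr is a list of primitive names applied, in order, to weights * grads."""
    x = weights * grads
    for op in expr:
        x = PRIMITIVES[op](x)
    return float(np.atleast_1d(x).sum())

rng = np.random.default_rng(0)
w, g = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
for expr in (["abs", "sum"], ["square", "sum"]):
    print(expr, evaluate_proxy(expr, w, g))
```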
1 code implementation • 5 Oct 2024 • Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu
Current long-context benchmarks primarily focus on retrieval-based tests, requiring Large Language Models (LLMs) to locate specific information within extensive input contexts, such as the needle-in-a-haystack (NIAH) benchmark.
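For reference, a minimal needle-in-a-haystack style probe of the kind described, with a hypothetical needle and filler text; this is a generic NIAH construction, not the proposed benchmark:

```python
# Hide one fact ("the needle") at a controllable depth inside filler text and
# ask the model to retrieve it.
def build_niah_prompt(needle: str, filler_sentence: str, n_filler: int, depth: float) -> str:
    """depth in [0, 1]: 0 places the needle at the start, 1 at the end."""
    filler = [filler_sentence] * n_filler
    position = int(depth * n_filler)
    haystack = filler[:position] + [needle] + filler[position:]
    context = " ".join(haystack)
    return f"{context}\n\nQuestion: What is the secret passphrase mentioned above?"

prompt = build_niah_prompt(
    needle="The secret passphrase is 'blue-heron-42'.",
    filler_sentence="The quick brown fox jumps over the lazy dog.",
    n_filler=200,
    depth=0.5,
)
print(prompt[:120], "...")
```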
no code implementations • 29 Sep 2024 • Heyuan Huang, Xingyu Lou, Chaochao Chen, Pengxiang Cheng, Yue Xin, Chengwei He, Xiang Liu, Jun Wang
Finally, to improve efficiency, we design a migrator to transfer the extracted information to the latest target domain model, which only needs the target domain model for inference.
no code implementations • 2 Aug 2024 • Kohou Wang, Xiang Liu, Zhaoxiang Liu, Kai Wang, Shiguo Lian
Multimodal Large Language Models (MLLMs) have made significant progress in bridging the gap between visual and language modalities.
no code implementations • 24 Jul 2024 • Penglei Sun, Yaoxian Song, Xiang Liu, Xiaofei Yang, Qiang Wang, Tiefeng Li, Yang Yang, Xiaowen Chu
3D multimodal question answering (MQA) plays a crucial role in scene understanding by enabling intelligent agents to comprehend their surroundings in 3D environments.
1 code implementation • 5 Jun 2024 • Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, Xiaowen Chu
In particular, we devise an elaborate search space encompassing the existing pruning metrics to discover the potential symbolic pruning metric.
no code implementations • 5 Jun 2024 • Jihe Li, Xiang Liu, Fabian Zhang, Xia Li, Xixin Cao, Ye Zhang, Joachim Buhmann
Furthermore, the movement of each individual voxel is derived by blending the local rigid transformations of the neighboring Gaussian primitives.
no code implementations • 20 May 2024 • Liuzhi Zhou, Yu He, Kun Zhai, Xiang Liu, Sen Liu, Xingjun Ma, Guangnan Ye, Yu-Gang Jiang, Hongfeng Chai
This comparative analysis revealed that, because client models carry limited information from other clients during the initial stages of federated learning, more substantial constraints need to be imposed on the parameters of the adaptive algorithm.
no code implementations • 30 Mar 2024 • Heqiang Wang, Xiang Liu, Yucheng Liu, Jia Zhou, Weihong Yang, Xiaoxiong Zhong
With the increasing number and enhanced capabilities of IoT devices in smart buildings, these devices are evolving beyond basic data collection and control to actively participate in deep learning tasks.
1 code implementation • 26 Mar 2024 • Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang
Attempting to complement this deficiency, we investigate the layerwise properties of LoRA on fine-tuning tasks and observe an unexpected but consistent skewness of weight norms across different layers.
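A small sketch of the kind of per-layer statistic this refers to, computing the Frobenius norm of each layer's effective LoRA update B @ A; the shapes, rank, and random initialization are made up for illustration (real LoRA initializes B to zero, so in practice one would load trained adapters):

```python
# Per-layer norm of the effective LoRA update, to inspect how it is distributed
# across depths; all values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_layers, d, r = 12, 768, 8
norms = []
for layer in range(n_layers):
    A = rng.normal(scale=0.02, size=(r, d))   # LoRA "A" (down-projection)
    B = rng.normal(scale=0.02, size=(d, r))   # LoRA "B" (up-projection)
    norms.append(np.linalg.norm(B @ A))       # norm of the effective update B @ A

print([round(n, 3) for n in norms])
# With real checkpoints, comparing these norms across layers reveals whether the
# updates are skewed toward particular depths.
```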
no code implementations • 20 Feb 2024 • YuHang Zhou, Yuchen Ni, Yunhui Gan, Zhangyue Yin, Xiang Liu, Jian Zhang, Sen Liu, Xipeng Qiu, Guangnan Ye, Hongfeng Chai
Results show varying degrees of financial irrationality among models, influenced by their design and training.
no code implementations • 3 Feb 2024 • Peijie Dong, Lujun Li, Xinglin Pan, Zimian Wei, Xiang Liu, Qiang Wang, Xiaowen Chu
Recent advancements in Zero-shot Neural Architecture Search (NAS) highlight the efficacy of zero-cost proxies in various NAS benchmarks.
no code implementations • 23 Jan 2024 • Xiang Liu, Jiahong Chen, Bin Chen, Zimo Liu, Baoyi An, Shu-Tao Xia, Zhi Wang
To the best of our knowledge, our method is the first INR-based codec comparable with Hyperprior in both decoding speed and quality while maintaining low complexity.
1 code implementation • 12 Jan 2024 • Shuaijie She, Wei Zou, ShuJian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, Jiajun Chen
To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO), aiming to align the reasoning processes in other languages with the dominant language.
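To make "alignment as preference optimization" concrete, here is the standard DPO-style pairwise loss in its usual form; this is the generic formulation, not MAPO itself:

```python
# Generic DPO loss on one preference pair: -log sigmoid of the beta-scaled margin
# between policy-vs-reference log-probability gaps of the chosen and rejected answers.
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Toy numbers: the policy prefers the answer whose reasoning aligns with the
# dominant language (the "chosen" side).
print(dpo_loss(logp_chosen=-12.0, logp_rejected=-15.0,
               ref_logp_chosen=-13.0, ref_logp_rejected=-14.0))
```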
1 code implementation • 14 Nov 2023 • Rui Pan, Shuo Xing, Shizhe Diao, Wenhe Sun, Xiang Liu, Kashun Shum, Renjie Pi, Jipeng Zhang, Tong Zhang
Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models.
no code implementations • 7 Nov 2023 • Longteng Zhang, Xiang Liu, Zeyu Li, Xinglin Pan, Peijie Dong, Ruibo Fan, Rui Guo, Xin Wang, Qiong Luo, Shaohuai Shi, Xiaowen Chu
For end users, our benchmark and findings help them better understand different optimization techniques, training and inference frameworks, and hardware platforms when choosing configurations for deploying LLMs.
1 code implementation • 3 Nov 2023 • YuHang Zhou, Yu He, Siyu Tian, Yuchen Ni, Zhangyue Yin, Xiang Liu, Chuanjun Ji, Sen Liu, Xipeng Qiu, Guangnan Ye, Hongfeng Chai
While current tasks of converting natural language to SQL (NL2SQL) using Foundation Models have shown impressive achievements, adapting these approaches for converting natural language to Graph Query Language (NL2GQL) encounters hurdles due to the distinct nature of GQL compared to SQL, alongside the diverse forms of GQL.
1 code implementation • 30 Sep 2023 • Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
Efficiently aggregating trained neural networks from local clients into a global model on a server is a widely researched topic in federated learning.
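As baseline context, a standard FedAvg-style aggregation sketch (weighted averaging of client parameters); this is the textbook baseline, not the method proposed in the paper:

```python
# FedAvg-style server aggregation: average client weights, weighted by the number
# of local samples each client holds.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of dicts {param_name: np.ndarray}; client_sizes: list of ints."""
    total = float(sum(client_sizes))
    agg = {}
    for name in client_weights[0]:
        agg[name] = sum(w[name] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
    return agg

clients = [{"fc.weight": np.full((2, 2), float(i))} for i in range(3)]
print(fed_avg(clients, client_sizes=[10, 20, 70])["fc.weight"])  # all entries 1.6
```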
1 code implementation • 23 Jun 2023 • Cong Shen, Xiang Liu, Jiawei Luo, Kelin Xia
This demonstrates that analytic torsion is a highly efficient topological invariant in the characterization of graph structures and can significantly boost the performance of GNNs.
no code implementations • 6 Mar 2023 • Wenli Zhang, Jiaheng Xie, Zhu Zhang, Xiang Liu
Depression is a common disease worldwide.
2 code implementations • 23 Feb 2023 • Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, Tong Zhang
For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful ones to annotate from a pool of task-specific queries.
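A sketch of one common way to rank questions by uncertainty, using disagreement among repeated samples; the sampling stub and scoring rule are illustrative assumptions rather than the paper's exact procedure:

```python
# Disagreement-based selection: sample several answers per question and pick the
# questions whose answers disagree the most as the ones worth annotating.
import random
from collections import Counter

def sample_answers(question: str, k: int = 5):
    """Stub standing in for k stochastic LLM completions."""
    return [random.choice(["A", "B", "C"]) for _ in range(k)]

def disagreement(answers):
    """Fraction of samples not matching the majority answer (0 = full agreement)."""
    most_common = Counter(answers).most_common(1)[0][1]
    return 1.0 - most_common / len(answers)

pool = [f"question {i}" for i in range(20)]
scored = sorted(pool, key=lambda q: disagreement(sample_answers(q)), reverse=True)
print("most uncertain questions to annotate:", scored[:3])
```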
no code implementations • 20 Jan 2023 • Luyao Chen, Zhiqiang Chen, Longsheng Jiang, Xiang Liu, Linlu Xu, Bo Zhang, Xiaolong Zou, Jinying Gao, Yu Zhu, Xizi Gong, Shan Yu, Sen Song, Liangyi Chen, Fang Fang, Si Wu, Jia Liu
Nowadays, we have witnessed the great success of AI in various applications, including image classification, game playing, protein structure analysis, language translation, and content generation.
no code implementations • 12 Dec 2022 • Yufei Xie, Xiang Liu, Qizhong Yuan
Based on the practical process of innovation and entrepreneurship education for college students at the author's university, this study analyzes and deconstructs the key concepts of AI knowledge-based crowdsourcing through literature research, examines, via a survey of a random sample of college students, the objective need to combine AI knowledge-based crowdsourcing with students' innovation and entrepreneurship education practice, and verifies students' awareness and application of AI knowledge-based crowdsourcing in their innovation and entrepreneurship learning and practice.
no code implementations • ECCV 2022 • Kongzhu Jiang, Tianzhu Zhang, Xiang Liu, Bingqiao Qian, Yongdong Zhang, Feng Wu
To alleviate the above issues, we propose a novel Cross-Modality Transformer (CMT) to jointly explore a modality-level alignment module and an instance-level module for VI-ReID.
no code implementations • 8 Nov 2022 • Jiaheng Xie, Xiaohang Zhao, Xiang Liu, Xiao Fang
We draw on the medical literature to support depression detection using motion sensor data.
no code implementations • 30 Aug 2022 • Xiang Liu, Hongyuan Wang, Zhiqiang Yan, Yu Chen, Xinlong Chen, WeiChun Chen
In this way, the background interference to foreground spacecraft depth completion is effectively avoided.
no code implementations • 1 Jun 2022 • Jie Shi, Chenfei Wu, Jian Liang, Xiang Liu, Nan Duan
Our work proposes a VQ-VAE model with a diffusion decoder (DiVAE) to serve as the reconstruction component in image synthesis.
no code implementations • 2 Sep 2021 • Xiang Liu, Tianyao Huang, Yimin Liu
With this constraint, we formulate and solve the signal-to-interference-plus-noise ratio (SINR) balancing problem for multiuser transmit beamforming via convex optimization.
no code implementations • CVPR 2021 • Yulin Li, Jianfeng He, Tianzhu Zhang, Xiang Liu, Yongdong Zhang, Feng Wu
To address these issues, we propose a novel end-to-end Part-Aware Transformer (PAT) for occluded person Re-ID through diverse part discovery via a transformer encoder-decoder architecture, including a pixel context based transformer encoder and a part prototype based transformer decoder.
Ranked #18 on Person Re-Identification on Occluded-DukeMTMC
no code implementations • 4 Mar 2021 • Xin-Dian Yang, Fu-Lai Wang, Zhan-Wei Liu, Xiang Liu
Very recently, the LHCb Collaboration at the Large Hadron Collider at CERN observed a new resonance, $X(4630)$.
High Energy Physics - Phenomenology
no code implementations • 27 Jan 2021 • Fu-Lai Wang, Xin-Dian Yang, Rui Chen, Xiang Liu
Our results suggest that the $\Omega_{c}\bar D_s^*$ state with $J^P={3}/{2}^{-}$ and the $\Omega_{c}^{*}\bar D_s^*$ state with $J^P={5}/{2}^{-}$ can be recommended as the candidates of the hidden-charm molecular pentaquark with triple strangeness.
High Energy Physics - Phenomenology High Energy Physics - Experiment
no code implementations • 26 Jan 2021 • Bing Chen, Si-Qiang Luo, Xiang Liu
The mass gaps among the discovered single heavy flavor baryons are analyzed and are found to exhibit some universal behaviors.
High Energy Physics - Phenomenology
no code implementations • 11 Jan 2021 • Xuewen Zhang, Yan Qin, Chau Yuen, Lahiru Jayasinghe, Xiang Liu
Motivated by this consideration, an enhanced remaining useful life (RUL) framework focusing on data self-generation is put forward, for the first time, to handle both non-cyclic and cyclic degradation patterns.
no code implementations • 22 Dec 2020 • Xiang Liu, Deborah Cohen, Tianyao Huang, Yimin Liu, Yonina C. Eldar
Our method encodes each pulse with a random phase, varying from pulse to pulse, and then processes the received samples jointly to resolve the range ambiguity.
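A toy numerical illustration of why per-pulse random phase coding separates ambiguous returns: echoes from the intended pulse add coherently after phase compensation, while echoes delayed by a full pulse-repetition interval decorrelate. This is a simplified demonstration, not the paper's joint processing algorithm:

```python
# Each transmitted pulse carries a random phase; after multiplying the received
# samples by the conjugate code, only returns from the matching pulse index add
# coherently, while range-ambiguous returns average down.
import numpy as np

rng = np.random.default_rng(0)
n_pulses = 64
phases = np.exp(1j * 2 * np.pi * rng.random(n_pulses))   # one random phase per pulse

echo_near = phases               # return arriving within the same PRI
echo_far = np.roll(phases, 1)    # return arriving one PRI late (ambiguous range)

for name, echo in [("near", echo_near), ("far", echo_far)]:
    coherent_gain = np.abs(np.sum(echo * np.conj(phases))) / n_pulses
    print(name, round(float(coherent_gain), 3))   # near = 1.0, far is small
```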
1 code implementation • 24 Nov 2020 • Hongyuan Wang, Xiang Liu, Wen Kang, Zhiqiang Yan, Bingwen Wang, Qianhao Ning
In the correspondences credibility computation module, based on the conflicting relationship between the feature matching matrix and the coordinate matching matrix, we score the reliability of each correspondence, which can reduce the impact of mismatched or non-matched points.
no code implementations • 2 Nov 2020 • Hua-Xing Chen, Wei Chen, Xiang Liu, Xiao-Hai Liu
We study the $P_{cs}(4459)^0$ recently observed by LHCb using the method of QCD sum rules.
High Energy Physics - Phenomenology High Energy Physics - Experiment
no code implementations • 14 Aug 2020 • Jie Fang, Jian-Wu Lin, Shu-Tao Xia, Yong Jiang, Zhikang Xia, Xiang Liu
This paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a tailored neural network framework that can automatically construct diversified financial factors based on financial domain knowledge and a variety of neural network structures.
no code implementations • 6 Aug 2020 • Qianru Zhang, Meng Zhang, Chinthaka Gamanayake, Chau Yuen, Zehao Geng, Hirunima Jayasekara, Xuewen Zhang, Chia-wei Woo, Jenny Low, Xiang Liu
Combined with advanced algorithms, these new technologies are expected to control production quality based on digital images.
no code implementations • 9 Feb 2020 • Shenlan Liu, Xiang Liu, Gao Huang, Lin Feng, Lianyu Hu, Dong Jiang, Aibin Zhang, Yang Liu, Hong Qiao
To promote research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for fine-grained sports content analysis.
no code implementations • 22 Jan 2020 • Ziyang Tang, Xiang Liu, Guangyu Shen, Baijian Yang
Aerial imagery has been increasingly adopted in mission-critical tasks, such as traffic surveillance, smart cities, and disaster assistance.
no code implementations • 26 Dec 2019 • Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Zhikang Xia, Xiang Liu, Yong Jiang
This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators based on prior knowledge.
no code implementations • 3 Mar 2019 • Xiang Liu, Ziyang Tang, Huyunting Huang, Tonglin Zhang, Baijian Yang
Results showed that our approaches can obtain closed-form solutions for multiple models at half the training cost that traditional methods require for a single model.
1 code implementation • 11 Jan 2019 • Shenglan Liu, Dong Jiang, Lin Feng, Feilong Wang, Zhanbo Feng, Xiang Liu, Shuai Guo, Bingjun Li, Yuchen Cong
We finally design a Rubik's cube robot and construct a dataset to illustrate the efficiency and effectiveness of our online methods, and to show that the offline method is ineffective due to color drifting in our dataset.
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. 
Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
no code implementations • 25 Oct 2018 • Shenglan Liu, Xiang Liu, Yang Liu, Lin Feng, Hong Qiao, Jian Zhou, Yang Wang
Supervised learning methods are widely used in machine learning.
1 code implementation • 11 Nov 2017 • Wenting Ye, Xiang Liu, Tianwei Yue, Wenping Wang
We proposed the sparse graph-structured linear mixed model (sGLMM) that can incorporate the relatedness information from traits in a dataset with confounding correction.
no code implementations • 7 May 2017 • Dong Wu, Xiang Liu, Feng Xue, Hanqing Zheng, Yehang Shou, Wen Jiang
In this paper, a new decision making methodology based on Z-numbers is presented.