no code implementations • 25 Sep 2024 • Yihong Tang, Bo wang, Xu Wang, Dongming Zhao, Jing Liu, Jijun Zhang, Ruifang He, Yuexian Hou
Role-playing systems powered by large language models (LLMs) have become increasingly influential in emotional communication applications.
no code implementations • 19 Sep 2024 • Weichao Pan, Xu Wang, Wenqing Huan
Compared with existing mainstream models (e.g., YOLOv5, YOLOv8, YOLOv9, and YOLOv10), EFA-YOLO exhibits a significant enhancement in detection accuracy (mAP) and inference speed, with the model parameter count reduced by 94.6% and the inference speed improved by 88 times.
no code implementations • 4 Sep 2024 • Weichao Pan, Xu Wang, Wenqing Huan
Experimental results on the UAV-PDD2023 public dataset show that our model RT-DSAFDet achieves a mAP50 of 54.2%, which is 11.1% higher than that of YOLOv10-m, an efficient variant of the latest real-time object detection model YOLOv10, while the number of parameters is reduced to 1.8M and FLOPs to 4.6G, decreases of 88% and 93%, respectively.
no code implementations • 3 Sep 2024 • Weichao Pan, Jiaju Kang, Xu Wang, Zhihao Chen, Yiyuan Ge
Current road damage detection methods, relying on manual inspections or sensor-mounted vehicles, are inefficient, limited in coverage, and often inaccurate, especially for minor damages, leading to delays and safety hazards.
1 code implementation • 19 Aug 2024 • Kang Xiao, Xu Wang, Yulin He, Baoliang Chen, Xuelin Shen
Full-reference image quality assessment (FR-IQA) models generally operate by measuring the visual differences between a degraded image and its reference.
no code implementations • 17 Aug 2024 • Xiaojie Lin, Baihe Ma, Xu Wang, Guangsheng Yu, Ying He, Ren Ping Liu, Wei Ni
As the primary standard protocol for modern cars, the Controller Area Network (CAN) is a critical research target for automotive cybersecurity threats and autonomous applications.
no code implementations • 16 Aug 2024 • Yubao Zhao, Tian Zhang, Xu Wang, Puyu Han, Tong Chen, Linlin Huang, Youzhu Jin, Jiaju Kang
Additionally, expanding existing datasets, we constructed a 19k ECG diagnosis dataset and a 25k multi-turn dialogue dataset for training and fine-tuning ECG-Chat, which provides professional diagnostic and conversational capabilities.
no code implementations • 16 Aug 2024 • Xiao Liu, Mingyuan Li, Xu Wang, Guangsheng Yu, Wei Ni, Lixiang Li, Haipeng Peng, Renping Liu
A key enabler is the new Unified Model Inheritance Graph (UMIG), which captures the inheritance using a Directed Acyclic Graph (DAG). Central to our framework is the new Fisher Inheritance Unlearning (FIUn) algorithm, which utilizes the Fisher Information Matrix (FIM) from initial unlearning models to pinpoint impacted parameters in inherited models.
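The entry does not detail how FIUn applies the FIM; as a minimal, hypothetical sketch of the core step, the diagonal Fisher Information Matrix can be accumulated from squared gradients on the data to be forgotten and thresholded to flag impacted parameters (the function names and threshold below are illustrative, not from the paper):

```python
import torch

def diagonal_fim(model, loss_fn, data_loader):
    """Approximate the diagonal Fisher Information Matrix by accumulating
    squared gradients of the loss over a data loader."""
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2
    return {n: v / max(len(data_loader), 1) for n, v in fim.items()}

def impacted_parameters(fim, threshold=1e-4):
    """Flag parameters whose Fisher information exceeds a threshold,
    i.e. parameters strongly influenced by the data to be forgotten."""
    return {n: (v > threshold) for n, v in fim.items()}
```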
no code implementations • 10 Aug 2024 • Xu Wang, Jiangxia Cao, Zhiyi Fu, Kun Gai, Guorui Zhou
(3) Expert Underfitting: In our services, we have dozens of behavior tasks that need to be predicted, but we find that some data-sparse prediction tasks tend to ignore their specific-experts and assign large weights to shared-experts.
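For context, the "expert underfitting" symptom concerns the per-task gate in a multi-gate mixture-of-experts (MMoE/PLE-style) model: for data-sparse tasks the gate's softmax can put most of its mass on shared experts. A minimal sketch of such a gate, assuming standard shared and task-specific experts (this illustrates the setup being diagnosed, not the authors' remedy):

```python
import torch
import torch.nn as nn

class TaskGate(nn.Module):
    """Per-task gate over shared and task-specific experts (MMoE/PLE-style).
    For data-sparse tasks the softmax can concentrate on shared experts,
    which is the 'expert underfitting' symptom described above."""
    def __init__(self, dim, n_shared, n_specific):
        super().__init__()
        self.gate = nn.Linear(dim, n_shared + n_specific)

    def forward(self, x, shared_out, specific_out):
        # shared_out: (B, n_shared, D), specific_out: (B, n_specific, D)
        w = torch.softmax(self.gate(x), dim=-1)           # (B, n_shared + n_specific)
        experts = torch.cat([shared_out, specific_out], 1)
        return torch.einsum('be,bed->bd', w, experts)
```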
1 code implementation • 13 Jul 2024 • Xiaoxu Xu, Yitian Yuan, Jinlong Li, Qiudan Zhang, Zequn Jie, Lin Ma, Hao Tang, Nicu Sebe, Xu Wang
In this paper, we propose 3DSS-VLG, a weakly supervised approach for 3D Semantic Segmentation with 2D Vision-Language Guidance, an alternative approach in which a 3D model predicts a dense embedding for each point that is co-embedded with both the aligned image and text spaces from the 2D vision-language model.
1 code implementation • 5 May 2024 • Xu Wang, Cheng Li, Yi Chang, Jindong Wang, Yuan Wu
The results are revealing: NegativePrompt markedly enhances the performance of LLMs, evidenced by relative improvements of 12.89% in Instruction Induction tasks and 46.25% in BIG-Bench tasks.
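The paper's actual negative emotional stimuli are defined in its own materials; a minimal sketch of the prompting pattern, with a placeholder stimulus that is not taken from the paper:

```python
def negative_prompt(task_prompt: str, stimulus: str) -> str:
    """Append a negative emotional stimulus to a task prompt
    (NegativePrompt-style; the stimulus passed in below is a placeholder,
    not one of the paper's actual stimuli)."""
    return f"{task_prompt}\n\n{stimulus}"

prompt = negative_prompt(
    "Translate the following sentence to French: 'The weather is nice.'",
    "If you answer incorrectly, people may doubt your competence.",
)
```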
no code implementations • 23 Apr 2024 • Zhe Zhao, Pengkun Wang, Xu Wang, Haibin Wen, Xiaolong Xie, Zhengyang Zhou, Qingfu Zhang, Yang Wang
Pre-training GNNs to extract transferable knowledge and apply it to downstream tasks has become the de facto standard of graph representation learning.
no code implementations • 4 Apr 2024 • Xu Wang, Tian Ye, Rajgopal Kannan, Viktor Prasanna
FACTUAL consists of two components: (1) a novel perturbation scheme that, unlike existing works, incorporates realistic physical adversarial attacks (such as OTSA) to build a supervised adversarial pre-training network.
no code implementations • 3 Apr 2024 • Xu Wang, YiFan Li, Qiudan Zhang, Wenhui Wu, Mark Junjie Li, Jianmin Jiang
However, previous 3D scene graph generation methods adopt a fully supervised learning paradigm and require a large amount of entity-level annotation data for objects and relations, which is extremely resource-consuming and tedious to obtain.
no code implementations • 1 Mar 2024 • Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Xu Wang, Qing Yang, Dongliang Xu, Wanxiang Che
Presently, two dominant paradigms for collecting tuning data are natural-instruct (human-written) and self-instruct (automatically generated).
no code implementations • 27 Feb 2024 • Jingying Wang, Haoran Tang, Taylor Kantor, Tandis Soltani, Vitaliy Popov, Xu Wang
The segmentation pipeline enables functionalities for creating visual questions and feedback that surgeons requested in a formative study.
no code implementations • 26 Feb 2024 • Xiao Liu, Mingyuan Li, Xu Wang, Guangsheng Yu, Wei Ni, Lixiang Li, Haipeng Peng, Renping Liu
Unlearning in Federated Learning (FL) presents significant challenges, as models grow and evolve with complex inheritance relationships.
no code implementations • 29 Jan 2024 • Nahyun Kwon, Tong Sun, Yuyang Gao, Liang Zhao, Xu Wang, Jeeeun Kim, Sungsoo Ray Hong
While troubleshooting is an essential part of 3D printing, the process remains challenging for many remote novices even with the help of well-developed online resources, such as online troubleshooting archives and online community help.
1 code implementation • 27 Jan 2024 • Yue Zhou, Chenlu Guo, Xu Wang, Yi Chang, Yuan Wu
Leveraging large models, these data augmentation techniques have outperformed traditional approaches.
no code implementations • 26 Jan 2024 • Yipin Lei, Xu Wang, Meng Fang, Han Li, Xiang Li, Jianyang Zeng
In summary, our proposed frameworks can serve as potent tools to facilitate peptide early drug discovery.
no code implementations • 25 Dec 2023 • Xu Wang, Jiawei Huang, Qingyuan Yang, Jinpeng Zhang
Firstly, we improve efficiency through model reduction: we reduce RWB to an augmented Wasserstein barycenter problem, which works for both fixed-RWB and free-RWB.
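For reference, the standard Wasserstein barycenter objective that the RWB reduction builds on (the robust and augmented variants are defined in the paper) is:

```latex
\min_{\nu} \; \sum_{i=1}^{m} \lambda_i \, W_p^p(\mu_i, \nu),
\qquad \lambda_i \ge 0, \quad \sum_{i=1}^{m} \lambda_i = 1 .
```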
no code implementations • 15 Dec 2023 • Xiaoxu Xu, Yitian Yuan, Qiudan Zhang, Wenhui Wu, Zequn Jie, Lin Ma, Xu Wang
During the inference stage, the learned text-3D correspondence will help us ground the text queries to the 3D target objects even without 2D images.
no code implementations • 14 Nov 2023 • Wei Wen, Kuang-Hung Liu, Igor Fedorov, Xin Zhang, Hang Yin, Weiwei Chu, Kaveh Hassani, Mengying Sun, Jiang Liu, Xu Wang, Lin Jiang, Yuxin Chen, Buyun Zhang, Xi Liu, Dehua Cheng, Zhengxing Chen, Guang Zhao, Fangqiu Han, Jiyan Yang, Yuchen Hao, Liang Xiong, Wen-Yen Chen
In industrial systems, such as the ranking system at Meta, it is unclear whether NAS algorithms from the literature can outperform production baselines because of: (1) scale - Meta ranking systems serve billions of users, (2) strong baselines - the baselines are production models optimized by hundreds to thousands of world-class engineers for years since the rise of deep learning, (3) dynamic baselines - engineers may have established new and stronger baselines during NAS search, and (4) efficiency - the search pipeline must yield results quickly in alignment with the productionization life cycle.
1 code implementation • 12 Oct 2023 • Dong Yang, Xu Wang, Remzi Celebi
To address this, we present Vocabulary Expandable BERT for knowledge base construction, which expands the language model's vocabulary while preserving semantic embeddings for newly added words.
Ranked #1 on Knowledge Base Population on LM-KBC 2023
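The abstract does not specify the exact expansion procedure; a common recipe with the Hugging Face transformers API, where each new word's embedding is initialized from the mean of its existing subword embeddings (an assumption for illustration, not necessarily the authors' method), looks like:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

new_words = ["exampleentity"]  # placeholder domain terms
# Record each word's current subword tokenization before expanding the vocabulary.
subword_ids = [tokenizer(w, add_special_tokens=False)["input_ids"] for w in new_words]

tokenizer.add_tokens(new_words)
model.resize_token_embeddings(len(tokenizer))

# Initialize each new embedding row from the mean of its subword embeddings,
# roughly preserving semantics instead of starting from a random vector.
emb = model.get_input_embeddings().weight
with torch.no_grad():
    for word, ids in zip(new_words, subword_ids):
        new_id = tokenizer.convert_tokens_to_ids(word)
        emb[new_id] = emb[torch.tensor(ids)].mean(dim=0)
```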
1 code implementation • 11 Sep 2023 • Li Chen, Mengyi Zhao, Yiheng Liu, Mingxu Ding, Yangyang Song, Shizun Wang, Xu Wang, Hao Yang, Jing Liu, Kang Du, Min Zheng
Personalized text-to-image generation has emerged as a powerful and sought-after tool, empowering users to create customized images based on their specific concepts and prompts.
1 code implementation • 7 Aug 2023 • Huichao Zhang, Bowen Chen, Hao Yang, Liao Qu, Xu Wang, Li Chen, Chao Long, Feida Zhu, Kang Du, Min Zheng
We present AvatarVerse, a stable pipeline for generating expressive high-quality 3D avatars from nothing but text descriptions and pose guidance.
no code implementations • 19 Jul 2023 • Yaran Chen, Xueyu Chen, Yu Han, Haoran Li, Dongbin Zhao, Jingzhong Li, Xu Wang
From the dataset, we quantitatively analyze and select clinical metadata that most contribute to NAFLD prediction.
no code implementations • 17 Jul 2023 • Yanna Jiang, Baihe Ma, Xu Wang, Guangsheng Yu, Caijun Sun, Wei Ni, Ren Ping Liu
As a distributed learning paradigm, Federated Learning (FL) faces two challenges: the unbalanced distribution of training data among participants, and model attacks by Byzantine nodes.
1 code implementation • 7 Jul 2023 • Ling Chen, Chaodu Song, Xu Wang, Dachao Fu, Feifei Li
To this end, we propose CSCLog, a Component Subsequence Correlation-Aware Log anomaly detection method, which not only captures the sequential dependencies in subsequences, but also models the implicit correlations of subsequences.
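A minimal sketch of the subsequence idea, assuming logs are already parsed into (component, template) events; the implicit correlation modeling that CSCLog adds on top is omitted:

```python
import torch
import torch.nn as nn
from collections import defaultdict

def component_subsequences(events):
    """Group a parsed log stream into per-component subsequences.
    `events` is an iterable of (component_id, template_id) pairs."""
    subseqs = defaultdict(list)
    for comp, template in events:
        subseqs[comp].append(template)
    return subseqs

class SubsequenceEncoder(nn.Module):
    """Encode each component subsequence with a shared LSTM to capture its
    sequential dependencies (subsequence correlation modeling is omitted)."""
    def __init__(self, n_templates, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_templates, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, template_ids):               # (B, T) long tensor
        _, (h, _) = self.lstm(self.embed(template_ids))
        return h[-1]                                # (B, dim) subsequence embedding
```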
1 code implementation • 6 Jul 2023 • Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, Xing Xie
Large language models (LLMs) are gaining increasing popularity in both academia and industry, owing to their unprecedented performance in various applications.
no code implementations • 14 Jun 2023 • Weidong Ji, Shijie Zan, Guohui Zhou, Xu Wang
To address the issue of poor generalization ability in end-to-end speech recognition models within deep learning, this study proposes a new Conformer-based speech recognition model called "Conformer-R" that incorporates the R-drop structure.
no code implementations • 14 Jun 2023 • Weidong Ji, Yousheng Zhang, Guohui Zhou, Xu Wang
To enhance the generalization ability of the model and improve the effectiveness of the transformer for named entity recognition tasks, the XLNet-Transformer-R model is proposed in this paper.
1 code implementation • 13 Jun 2023 • Xu Wang, Huan Zhao, WeiWei Tu, Quanming Yao
Next, to automatically fuse these three generative tasks, we design a surrogate metric based on the total energy to search for the weight distribution of the three pretext tasks, since the total energy corresponds to the quality of the 3D conformer. Extensive experiments on 2D molecular graphs are conducted to demonstrate the accuracy, efficiency, and generalization ability of the proposed 3D PGT compared to various pre-training baselines.
no code implementations • 24 May 2023 • Zefan Cai, Xin Zheng, Tianyu Liu, Xu Wang, Haoran Meng, Jiaqi Han, Gang Yuan, Binghuai Lin, Baobao Chang, Yunbo Cao
In the constant updates of product dialogue systems, we need to retrain the natural language understanding (NLU) model as new data from real users is merged into the existing data accumulated from previous updates.
no code implementations • 8 May 2023 • Yanna Jiang, Baihe Ma, Xu Wang, Ping Yu, Guangsheng Yu, Zhe Wang, Wei Ni, Ren Ping Liu
The demand for intelligent industries and smart services based on big data is rising rapidly with the increasing digitization and intelligence of the modern world.
no code implementations • 4 May 2023 • Xu Wang, Jun Ma, Jing Li
Coronary CT angiography (CCTA) scans are widely used for diagnosis of coronary artery diseases.
1 code implementation • IEEE Transactions on Pattern Analysis and Machine Intelligence 2023 • Peng Hu, Zhenyu Huang, Dezhong Peng, Xu Wang, Xi Peng
On the one hand, our method only utilizes the negative information, which is much less likely to be false than the positive information, thus avoiding the overfitting issue to PMPs.
Contrastive Learning, Cross-modal retrieval with noisy correspondence
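A rough sketch of the "negative information only" idea described above: a loss that only pushes apart pairs treated as mismatched and never pulls pairs together, so noisy positives cannot dominate training (the mask construction and the paper's actual loss are not reproduced here):

```python
import torch

def negative_only_loss(img_emb, txt_emb, neg_mask, margin=0.2):
    """Penalize high similarity only for cross-modal pairs marked as negatives,
    without pulling any pair together.
    neg_mask[i, j] = True where image i and text j are treated as mismatched."""
    sim = img_emb @ txt_emb.t()                      # cosine sims if inputs are L2-normalized
    hinge = torch.clamp(sim - (1.0 - margin), min=0.0)
    return (hinge * neg_mask.float()).sum() / neg_mask.float().sum().clamp(min=1.0)
```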
1 code implementation • 13 Feb 2023 • Xu Wang, Dezhong Peng, Ming Yan, Peng Hu
Thanks to the ISS and CCA, our method could encode the discrimination into the domain-invariant embedding space for unsupervised cross-domain image retrieval.
no code implementations • 27 Jan 2023 • Xu Wang, Pengfei Gu, Pengkun Wang, Binwu Wang, Zhengyang Zhou, Lei Bai, Yang Wang
In this paper, with extensive and in-depth experiments, we comprehensively analyze existing spatiotemporal graph learning models and reveal that extracting adjacency matrices with carefully designed strategies, which is viewed as the key to enhancing performance on graph learning, is largely ineffective.
no code implementations • 19 Jan 2023 • Shizun Wang, Weihong Zeng, Xu Wang, Hao Yang, Li Chen, Yi Yuan, Yunzhao Zeng, Min Zheng, Chuang Zhang, Ming Wu
To this end, we propose SwiftAvatar, a novel avatar auto-creation framework that is evidently superior to previous works.
no code implementations • 7 Jan 2023 • Guangsheng Yu, Xu Wang, Caijun Sun, Qin Wang, Ping Yu, Wei Ni, Ren Ping Liu, Xiwei Xu
Federated learning (FL) provides an effective machine learning (ML) architecture to protect data privacy in a distributed manner.
no code implementations • ICCV 2023 • Yingfan Tao, Jingna Sun, Hao Yang, Li Chen, Xu Wang, Wenming Yang, Daniel Du, Min Zheng
LGLA consists of two core components: a Class-aware Logit Adjustment (CLA) strategy and an Adaptive Angular Weighted (AAW) loss.
no code implementations • 14 Dec 2022 • Xin Zheng, Tianyu Liu, Haoran Meng, Xu Wang, Yufan Jiang, Mengliang Rao, Binghuai Lin, Zhifang Sui, Yunbo Cao
Harvesting question-answer (QA) pairs from customer service chatlog in the wild is an efficient way to enrich the knowledge base for customer service chatbots in the cold start or continuous integration scenarios.
no code implementations • 15 Nov 2022 • Jiali Zeng, Yufan Jiang, Yongjing Yin, Xu Wang, Binghuai Lin, Yunbo Cao
We present DualNER, a simple and effective framework to make full use of both annotated source language corpus and unlabeled target language text for zero-shot cross-lingual named entity recognition (NER).
3 code implementations • 31 Oct 2022 • Ling Sun, Guiqiong Liu, Xunping Jiang, Junrui Liu, Xu Wang, Han Yang, Shiping Yang
However, there has been no study on normalizing animal face images with arbitrary orientations.
1 code implementation • ACM International Conference on Multimedia 2022 • Yang Qin, Dezhong Peng, Xi Peng, Xu Wang, Peng Hu
However, it will unavoidably introduce noise (i.e., mismatched pairs) into training data, dubbed noisy correspondence.
Ranked #2 on Text-based Person Retrieval with Noisy Correspondence on RSTPReid (using extra training data)
Cross-modal retrieval with noisy correspondence, Retrieval
no code implementations • 20 Sep 2022 • Haohong Liao, Silin Zheng, Xuelin Shen, Mark Junjie Li, Xu Wang
However, according to our investigation, the lack of datasets focusing on real-world application scenarios limits further improvements of current learning-based MTMCT models.
1 code implementation • 16 Sep 2022 • Jinlong Li, Zequn Jie, Xu Wang, Xiaolin Wei, Lin Ma
To tackle this issue, this paper proposes an Expansion and Shrinkage scheme based on offset learning in the deformable convolution, to sequentially improve the recall and precision of the located object in the two respective stages.
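The Expansion and Shrinkage scheme itself is defined in the paper; the deformable-convolution building block it relies on is available in torchvision, and scaling the learned offsets is one simple way to picture expanding or shrinking the sampled region (illustrative only, not the paper's training procedure):

```python
import torch
from torchvision.ops import DeformConv2d

# Offsets are predicted by a regular conv; larger offsets let the kernel's
# sampling locations spread out to cover more of an object, smaller offsets
# pull the receptive field back toward its most discriminative part.
offset_pred = torch.nn.Conv2d(64, 2 * 3 * 3, kernel_size=3, padding=1)
deform_conv = DeformConv2d(64, 64, kernel_size=3, padding=1)

x = torch.randn(1, 64, 32, 32)
offsets = offset_pred(x)                  # (N, 2*kh*kw, H, W)
expanded = deform_conv(x, offsets * 2.0)  # scaled-up offsets -> expansion
shrunk   = deform_conv(x, offsets * 0.5)  # scaled-down offsets -> shrinkage
```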
1 code implementation • 16 Sep 2022 • Jinlong Li, Zequn Jie, Xu Wang, Yu Zhou, Xiaolin Wei, Lin Ma
"Progressive Patch Learning" further extends the feature destruction and patch learning to multi-level granularities in a progressive manner.
Weakly-Supervised Semantic Segmentation
1 code implementation • COLING 2022 • Tianyi Lei, Honghui Hu, Qiaoyang Luo, Dezhong Peng, Xu Wang
To address this issue, we propose a novel Adaptive Meta-learner via Gradient Similarity (AMGS) method to improve the model generalization ability to a new task.
no code implementations • 8 Aug 2022 • Longzhao Huang, Yujie Li, Xu Wang, Haoyu Wang, Ahmed Bouridane, Ahmad Chaddad
We propose a differential residual model (DRNet) combined with a new loss function to make use of the difference information of two eye images.
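A minimal, hypothetical two-branch sketch of using the difference between two eye images for gaze regression; DRNet's actual architecture and its new loss function are described in the paper:

```python
import torch
import torch.nn as nn

class EyeDifferenceNet(nn.Module):
    """Encode both eye crops with a shared CNN and feed one eye's features
    plus the feature difference to a gaze regressor (a simplified stand-in
    for the differential-residual idea, not the paper's DRNet)."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        self.head = nn.Linear(2 * dim, 2)   # yaw, pitch

    def forward(self, left_eye, right_eye):
        f_l, f_r = self.encoder(left_eye), self.encoder(right_eye)
        return self.head(torch.cat([f_l, f_l - f_r], dim=-1))
```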
1 code implementation • 13 Jul 2022 • Xu Wang, Huan Zhao, Lanning Wei, Quanming Yao
Targeting two molecular graph datasets and one protein association subgraph dataset in the OGB graph classification task, we design a graph neural network framework for graph classification by introducing PAS (Pooling Architecture Search).
1 code implementation • NAACL 2022 • Xu Wang, Simin Fan, Jessica Houghton, Lu Wang
NLP-powered automatic question generation (QG) techniques carry great pedagogical potential of saving educators' time and benefiting student learning.
no code implementations • 8 Mar 2022 • Yinghui Tao, Min Gao, Junliang Yu, Zongwei Wang, Qingyu Xiong, Xu Wang
To explore recommendation-specific auxiliary tasks, we first quantitatively analyze the heterogeneous interaction data and find a strong positive correlation between the interactions and the number of user-item paths induced by meta-paths.
no code implementations • 1 Mar 2022 • Tianjiao Jiang, Yi Jin, Tengfei Liang, Xu Wang, Yidong Li
Image semantic segmentation aims at the pixel-level classification of images, which has requirements for both accuracy and speed in practical application.
no code implementations • 20 Feb 2022 • Pingping Zhang, Xu Wang, Linwei Zhu, Yun Zhang, Shiqi Wang, Sam Kwong
In this paper, we propose a distortion-aware loop filtering model to improve the performance of intra coding for 360° videos projected via the equirectangular projection (ERP) format.
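For context, ERP frames oversample content near the poles; the standard WS-PSNR-style latitude weighting below quantifies that distortion, which a distortion-aware filter must account for (this is background, not the paper's loop filter):

```python
import numpy as np

def erp_latitude_weights(height: int, width: int) -> np.ndarray:
    """Cosine-latitude weights for an equirectangular (ERP) frame, as used in
    WS-PSNR: rows near the poles are over-sampled and receive smaller weights."""
    rows = np.arange(height)
    w = np.cos((rows + 0.5 - height / 2.0) * np.pi / height)  # (H,)
    return np.tile(w[:, None], (1, width))                    # (H, W)
```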
no code implementations • 13 Feb 2022 • Xu Wang, Yi Jin, Yigang Cen, Tao Wang, Bowen Tang, Yidong Li
Compared with traditional task-irrelevant downsampling methods, task-oriented neural networks have shown improved performance in point cloud downsampling.
no code implementations • 5 Feb 2022 • Ching-Chun Chang, Xu Wang, Sisheng Chen, Hitoshi Kiya, Isao Echizen
The core strength of neural networks is the ability to render accurate predictions for a bewildering variety of data.
no code implementations • 14 Jan 2022 • Gongjin Lan, Ting Liu, Xu Wang, Xueli Pan, Zhisheng Huang
In this paper, we propose an SW technology index to standardize development, ensuring that SW technology work is well designed, and to quantitatively evaluate the quality of that work.
no code implementations • 4 Jan 2022 • Xu Wang, Huan Zhao, WeiWei Tu, Hao Li, Yu Sun, Xiaochen Bo
Double-strand DNA breaks (DSBs) are a form of DNA damage that can cause abnormal chromosomal rearrangements.
1 code implementation • CVPR 2022 • Wenhui Wu, Jian Weng, Pingping Zhang, Xu Wang, Wenhan Yang, Jianmin Jiang
Retinex model-based methods have proven effective for layer-wise manipulation with well-designed priors for low-light image enhancement.
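For reference, the Retinex decomposition these methods build on, with a simple gamma-based illumination adjustment as an example (the paper's specific priors are not shown):

```latex
I(x) = R(x) \odot L(x), \qquad
\hat{I}(x) = R(x) \odot L(x)^{1/\gamma}, \quad \gamma > 1 .
```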
no code implementations • 23 Dec 2021 • Jun Wan, Hui Xi, Jie Zhou, Zhihui Lai, Witold Pedrycz, Xu Wang, Hang Sun
We show that by integrating the BALI fields and SCPA model into a novel self-calibrated pose attention network, more facial prior knowledge can be learned and the detection accuracy and robustness of our method for faces with large poses and heavy occlusions have been improved.
no code implementations • 26 Oct 2021 • Junning Liu, Zijie Xia, Yu Lei, Xinjian Li, Xu Wang
For example, when using MTL to model various user behaviors in RS, if we differentiate new users and new items from old ones, there will be a Cartesian-product-style increase of tasks with multi-dimensional relations.
no code implementations • 9 Oct 2021 • Qi Zhao, Xu Wang, Shuchang Lyu, Binghao Liu, Yifan Yang
To handle these two issues, we propose a feature consistency driven attention erasing network (FCAENet) for fine-grained image retrieval.
1 code implementation • 29 Sep 2021 • Yueming Lyu, Peibin Chen, Jingna Sun, Bo Peng, Xu Wang, Jing Dong
To evaluate the effectiveness and show the general use of our method, we conduct a set of experiments on makeup transfer and semantic image synthesis.
no code implementations • 23 Sep 2021 • Xu Wang, Yuyan Li, Ye Duan
Each layer has two parallel branches, namely the voxel branch and the point branch.
1 code implementation • 23 Sep 2021 • Xu Wang, Ali Shojaie
Modern high-dimensional point process data, especially those from neuroscience experiments, often involve observations from multiple conditions and/or experiments.
no code implementations • 23 Sep 2021 • Yuyan Li, Chuanmao Fan, Xu Wang, Ye Duan
Experimental results show that SPConv is effective in local shape encoding, and our SPNet is able to achieve top-ranking performances in semantic segmentation tasks.
no code implementations • 22 Sep 2021 • Xu Wang, Ali Shojaie
Thanks to technological advances leading to near-continuous time observations, emerging multivariate point process data offer new opportunities for causal discovery.
no code implementations • Findings (EMNLP) 2021 • Xu Wang, Hainan Zhang, Shuai Zhao, Yanyan Zou, Hongshen Chen, Zhuoye Ding, Bo Cheng, Yanyan Lan
Furthermore, the consistency signals between each candidate and the speaker's own history are considered to drive a model to prefer a candidate that is logically consistent with the speaker's history logic.
2 code implementations • 26 Aug 2021 • Xingkui Zhu, Shuchang Lyu, Xu Wang, Qi Zhao
Object detection in drone-captured scenarios is a recently popular task.
no code implementations • 13 Jun 2021 • Ching-Chun Chang, Xu Wang, Sisheng Chen, Isao Echizen, Victor Sanchez, Chang-Tsun Li
Given that reversibility is governed independently by the coding module, we narrow our focus to the incorporation of neural networks into the analytics module, which serves the purpose of predicting pixel intensities and plays a pivotal role in determining capacity and imperceptibility.
no code implementations • 17 May 2021 • Etienne David, Mario Serouart, Daniel Smith, Simon Madec, Kaaviya Velumani, Shouyang Liu, Xu Wang, Francisco Pinto Espinosa, Shahameh Shafiee, Izzat S. A. Tahir, Hisashi Tsujimoto, Shuhei Nasuda, Bangyou Zheng, Norbert Kichgessner, Helge Aasen, Andreas Hund, Pouria Sadhegi-Tehran, Koichi Nagasawa, Goro Ishikawa, Sébastien Dandrifosse, Alexis Carlier, Benoit Mercatoris, Ken Kuroki, Haozhou Wang, Masanori Ishii, Minhajul A. Badhon, Curtis Pozniak, David Shaner LeBauer, Morten Lilimo, Jesse Poland, Scott Chapman, Benoit de Solan, Frédéric Baret, Ian Stavness, Wei Guo
We now release a new version of the Global Wheat Head Detection (GWHD) dataset in 2021, which is bigger, more diverse, and less noisy than the 2020 version.
no code implementations • 5 May 2021 • Yunhua Xiang, Tianyu Zhang, Xu Wang, Ali Shojaie, Noah Simon
Originally developed for imputing missing entries in low rank, or approximately low rank matrices, matrix completion has proven widely effective in many problems where there is no reason to assume low-dimensional linear structure in the underlying matrix, as would be imposed by rank constraints.
no code implementations • 1 Apr 2021 • Xu Wang, Shuai Zhao, Bo Cheng, Jiale Han, Yingting Li, Hao Yang, Ivan Sekulic, Guoshun Nan
Question Answering (QA) models over Knowledge Bases (KBs) are capable of providing more precise answers by utilizing relation information among entities.
no code implementations • 22 Feb 2021 • Xu Wang, Yi Jin, Yigang Cen, Tao Wang, Yidong Li
Recently, the advancement of 3D point clouds in deep learning has attracted intensive research in different application domains such as computer vision and robotic tasks.
no code implementations • 8 Feb 2021 • Abudushataer Kuerban, Yong-Feng Huang, Jin-Jun Geng, Bing Li, Fan Xu, Xu Wang
Fast radio bursts (FRBs) are mysterious transient phenomena.
High Energy Astrophysical Phenomena
no code implementations • COLING 2020 • Xu Wang, Shuai Zhao, Jiale Han, Bo Cheng, Hao Yang, Jianchang Ao, Zhenzi Li
The structural information of Knowledge Bases (KBs) has proven effective to Question Answering (QA).
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiale Han, Bo Cheng, Xu Wang
The incompleteness of knowledge base (KB) is a vital factor limiting the performance of question answering (QA).
no code implementations • 22 Oct 2020 • Hong Shen, Wesley Hanwen Deng, Aditi Chattopadhyay, Zhiwei Steven Wu, Xu Wang, Haiyi Zhu
In this paper, we present Value Card, an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.
1 code implementation • 15 Jul 2020 • Xu Wang, Mladen Kolar, Ali Shojaie
The key ingredient for this inference procedure is a new concentration inequality on the first- and second-order statistics for integrated stochastic processes, which summarize the entire history of the process.
no code implementations • 13 Jun 2020 • Longlong Feng, Xu Wang
Identifying sleep problem severity from overnight polysomnography (PSG) recordings plays an important role in diagnosing and treating sleep disorders such as Obstructive Sleep Apnea (OSA).
1 code implementation • 28 Apr 2020 • Xin Yang, Xu Wang, Yi Wang, Haoran Dou, Shengli Li, Huaxuan Wen, Yi Lin, Pheng-Ann Heng, Dong Ni
In this paper, we propose the first fully-automated solution to segment the whole fetal head in US volumes.
no code implementations • 17 Feb 2020 • Emily T. Winn, Marilyn Vazquez, Prachi Loliencar, Kaisa Taipale, Xu Wang, Giseon Heo
Pediatric obstructive sleep apnea affects an estimated 1-5% of elementary-school aged children and can lead to other detrimental health problems.
1 code implementation • NeurIPS 2019 • Xu Wang, Jingming He, Lin Ma
In this paper, we propose one novel model for point cloud semantic segmentation, which exploits both the local and global structures within the point cloud based on the contextual point representations.
1 code implementation • 10 Oct 2019 • Haoran Dou, Xin Yang, Jikuan Qian, Wufeng Xue, Hao Qin, Xu Wang, Lequan Yu, Shujun Wang, Yi Xiong, Pheng-Ann Heng, Dong Ni
In this study, we propose a novel reinforcement learning (RL) framework to automatically localize fetal brain standard planes in 3D US.
no code implementations • 31 Aug 2019 • Xu Wang, Xin Yang, Haoran Dou, Shengli Li, Pheng-Ann Heng, Dong Ni
In this paper, we propose an effective framework for simultaneous segmentation and landmark localization in prenatal ultrasound volumes.
2 code implementations • 25 Jul 2019 • Xiaotao Song, Hailong Sun, Xu Wang, Jiafei Yan
Finally, we summarize some future directions for advancing the techniques of automatic generation of code comments and the quality assessment of comments.
Software Engineering
no code implementations • 29 Jun 2019 • XiaoYu Chen, Xu Wang, Lianfa Bai, Jing Han, Zhuang Zhao
In this paper, we present a convolutional neural network-based method to recover the light intensity distribution from the overlapped dispersive spectra, for the first time, instead of adding an extra light path to capture it directly.
no code implementations • 12 Jun 2019 • Sheng Chen, Xu Wang, Chao Chen, Yifan Lu, Xijin Zhang, Linfu Wen
In this paper, we pursue very efficient neural network modules which can significantly boost the learning power of deep convolutional neural networks with negligible extra computational cost.
no code implementations • 14 Dec 2018 • Xin Yang, Na Wang, Yi Wang, Xu Wang, Reza Nezafat, Dong Ni, Pheng-Ann Heng
In this paper, we propose a fully-automated framework to segment left atrium in gadolinium-enhanced MR volumes.
no code implementations • 18 Apr 2017 • Gaurav Singh Tomar, Sreecharan Sankaranarayanan, Xu Wang, Carolyn Penstein Rosé
An earlier study of a collaborative chat intervention in a Massive Open Online Course (MOOC) identified negative effects on attrition stemming from a requirement for students to be matched with exactly one partner prior to beginning the activity.
no code implementations • 28 Oct 2015 • Xu Wang, Gilad Lerman
Kernel methods obtain superb performance in terms of accuracy for various machine learning tasks since they can effectively extract nonlinear relations.
no code implementations • 27 Oct 2015 • Xu Wang
Laplacian Eigenvectors of the graph constructed from a data set are used in many spectral manifold learning algorithms such as diffusion maps and spectral clustering.
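A minimal computation of these eigenvectors with SciPy, as used by spectral clustering and diffusion maps (a generic illustration, not the paper's contribution):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def laplacian_eigenvectors(W: np.ndarray, k: int = 4):
    """Smallest-eigenvalue eigenpairs of the normalized graph Laplacian built
    from a symmetric affinity matrix W."""
    L = laplacian(csr_matrix(W), normed=True)
    vals, vecs = eigsh(L, k=k, which="SM")   # k smallest eigenpairs
    return vals, vecs
```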
no code implementations • 1 Oct 2014 • Xu Wang, Konstantinos Slavakis, Gilad Lerman
This paper advocates a novel framework for segmenting a dataset in a Riemannian manifold $M$ into clusters lying around low-dimensional submanifolds of $M$.