no code implementations • 30 May 2023 • Xiaofeng Liu, Helen A. Shih, Fangxu Xing, Emiliano Santarnecchi, Georges El Fakhri, Jonghye Woo
Deep learning (DL) models for segmenting various anatomical structures have achieved great success via a static DL model that is trained in a single source domain.
no code implementations • 23 May 2023 • Xiaofeng Liu, Jerry L. Prince, Fangxu Xing, Jiachen Zhuo, Reese Timothy, Maureen Stone, Georges El Fakhri, Jonghye Woo
We evaluated our framework on two cross-scanner/center, inter-subject translation tasks, including tagged-to-cine magnetic resonance (MR) image translation and T1-weighted MR-to-fractional anisotropy translation.
no code implementations • 17 May 2023 • Xiaofeng Liu, Jiaxin Gao, Yaohua Liu, Risheng Liu, Nenggan Zheng
Recently, significant progress has been made in human action recognition and behavior prediction using deep learning techniques, leading to improved vision-based semantic understanding.
no code implementations • 17 May 2023 • Xiaofeng Liu, Jiaxin Gao, Ziyu Yue, Xin Fan, Risheng Liu
Low-light situations severely restrict the pursuit of aesthetic quality in consumer photography.
no code implementations • 17 Mar 2023 • Xiaofeng Liu, Thibault Marin, Tiss Amal, Jonghye Woo, Georges El Fakhri, Jinsong Ouyang
Purpose: This work aims at using deep learning to efficiently estimate posterior distributions of imaging parameters, which in turn can be used to derive the most probable parameters as well as their uncertainties.
no code implementations • 14 Feb 2023 • Xiaofeng Liu, Fangxu Xing, Jerry L. Prince, Maureen Stone, Georges El Fakhri, Jonghye Woo
However, elucidating the relationship between these two sources of information is challenging, due in part to the disparity in data structure between spatiotemporal motion fields (i.e., 4D motion fields) and one-dimensional audio waveforms.
no code implementations • 21 Jan 2023 • Xiaofeng Liu, Fangxu Xing, Hanna K. Gaggin, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Cardiac cine magnetic resonance imaging (MRI) has been used to characterize cardiovascular diseases (CVD), often providing a noninvasive phenotyping tool. While recently flourishing deep learning-based approaches using cine MRI yield accurate characterization results, their performance is often degraded by the small number of training samples.
no code implementations • 10 Jan 2023 • Chaopeng Shen, Alison P. Appling, Pierre Gentine, Toshiyuki Bandai, Hoshin Gupta, Alexandre Tartakovsky, Marco Baity-Jesi, Fabrizio Fenicia, Daniel Kifer, Li Li, Xiaofeng Liu, Wei Ren, Yi Zheng, Ciaran J. Harman, Martyn Clark, Matthew Farthing, Dapeng Feng, Praveen Kumar, Doaa Aboelyazeed, Farshid Rahmani, Hylke E. Beck, Tadd Bindas, Dipankar Dwivedi, Kuai Fang, Marvin Höge, Chris Rackauckas, Tirthankar Roy, Chonggang Xu, Kathryn Lawson
Here we present differentiable geoscientific modeling as a powerful pathway toward dissolving the perceived barrier between them and ushering in a paradigm shift.
1 code implementation • 22 Dec 2022 • Yongsong Huang, Tomo Miyazaki, Xiaofeng Liu, Shinichiro Omachi
Image Super-Resolution (SR) is essential for a wide range of computer vision and image processing tasks.
no code implementations • 16 Sep 2022 • Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Unsupervised domain adaptation (UDA) has been a vital protocol for transferring information learned in a labeled source domain to facilitate deployment in an unlabeled, heterogeneous target domain.
no code implementations • 27 Aug 2022 • Qing Wang, Jing Jin, Xiaofeng Liu, Huixuan Zong, Yunfeng Shao, Yinchuan Li
Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.
no code implementations • 26 Aug 2022 • Lingsheng Kong, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Xiaofeng Liu
Deep learning is usually data-hungry, and unsupervised domain adaptation (UDA) was developed to transfer the knowledge in a labeled source domain to an unlabeled target domain.
no code implementations • 16 Aug 2022 • Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Unsupervised domain adaptation (UDA) has been widely used to transfer knowledge from a labeled source domain to an unlabeled target domain to counter the difficulty of labeling in a new domain.
no code implementations • 16 Aug 2022 • Xiaofeng Liu, Fangxu Xing, Jia You, Jun Lu, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
In TPN, while the closeness of class centers between source and target domains is explicitly enforced in a latent space, the underlying fine-grained subtype structure and the cross-domain within-class compactness have not been fully investigated.
no code implementations • 15 Aug 2022 • Xiaofeng Liu, Chaehwa Yoo, Fangxu Xing, Hyejin Oh, Georges El Fakhri, Je-Won Kang, Jonghye Woo
Unsupervised domain adaptation (UDA) is proposed to counter this, by leveraging both labeled source domain data and unlabeled target domain data to carry out various tasks in the target domain.
no code implementations • 16 Jun 2022 • Han Xiao, Zhiqin Wang, Dexin Li, Wenqiang Tian, Xiaofeng Liu, Wendong Liu, Shi Jin, Jia Shen, Zhi Zhang, Ning Yang
This paper is based on the background of the 2nd Wireless Communication Artificial Intelligence (AI) Competition (WAIC), which is hosted by the IMT-2020(5G) Promotion Group 5G+AI Work Group, where the framework of the eigenvector-based channel state information (CSI) feedback problem is first provided.
no code implementations • 5 Jun 2022 • Xiaofeng Liu, Fangxu Xing, Jerry L. Prince, Jiachen Zhuo, Maureen Stone, Georges El Fakhri, Jonghye Woo
Understanding the underlying relationship between tongue and oropharyngeal muscle deformation seen in tagged-MRI and intelligible speech plays an important role in advancing speech motor control theories and treatment of speech-related disorders.
no code implementations • 5 Jun 2022 • Xiaofeng Liu, Fangxu Xing, Nadya Shusharina, Ruth Lim, C-C Jay Kuo, Georges El Fakhri, Jonghye Woo
Unsupervised domain adaptation (UDA) has been vastly explored to alleviate domain shifts between source and target domains, by applying a well-performing model in an unlabeled target domain via supervision of a labeled source domain.
no code implementations • 25 Mar 2022 • Xiaofeng Liu, Yinchuan Li, Yunfeng Shao, Qing Wang
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
1 code implementation • 5 Mar 2022 • Xiaofeng Liu, Yalan Song, Chaopeng Shen
We also found that the surrogate architecture (whether outputting both velocity and water surface elevation, or velocity only) does not significantly impact the inversion results.
no code implementations • 25 Feb 2022 • Xiaofeng Liu, Fangxu Xing, Jerry L. Prince, Maureen Stone, Georges El Fakhri, Jonghye Woo
Specifically, we propose a novel input-output image patches self-training scheme to achieve a disentanglement of underlying anatomical structures and imaging modalities.
no code implementations • 22 Jan 2022 • Kaiwen Tan, Weixian Huang, Xiaofeng Liu, Jinlong Hu, Shoubin Dong
By integrating these heterogeneous but complementary data, many multi-modal methods have been proposed to study the complex mechanisms of cancers, and most of them achieve comparable or better results than previous single-modal methods.
no code implementations • 18 Jan 2022 • Xiaofeng Liu, Fangxu Xing, Thibault Marin, Georges El Fakhri, Jonghye Woo
Then, we apply a variational autoencoder network and optimize its evidence lower bound (ELBO) to efficiently approximate the distribution of the segmentation map, given an MR image.
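The ELBO objective mentioned above is the standard variational autoencoder bound. As an illustrative sketch (not the authors' implementation), with a Gaussian approximate posterior and a standard-normal prior, the bound can be computed as:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(recon_log_likelihood, mu, logvar):
    """Evidence lower bound = reconstruction log-likelihood - KL term.
    Maximizing this tightens the approximation to the true posterior."""
    return recon_log_likelihood - gaussian_kl(mu, logvar)
```

With `mu = 0` and `logvar = 0`, the KL term vanishes and the ELBO equals the reconstruction term, which is a quick sanity check for an implementation.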
no code implementations • 13 Jan 2022 • Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Unsupervised domain adaptation (UDA) between two significantly disparate domains to learn high-level semantic alignment is a crucial yet challenging task. To this end, in this work, we propose exploiting low-level edge information to facilitate the adaptation as a precursor task, which has a small cross-domain gap, compared with semantic segmentation. The precise contour then provides spatial information to guide the semantic adaptation.
1 code implementation • 20 Dec 2021 • Yalan Song, Chaopeng Shen, Xiaofeng Liu
The new method was evaluated and compared against existing methods based on convolutional neural networks (CNNs), which can only make image-to-image predictions on structured or regular meshes.
no code implementations • ACL 2021 • Yubin Ge, Ly Dinh, Xiaofeng Liu, Jinsong Su, Ziyao Lu, Ante Wang, Jana Diesner
In this paper, we focus on the problem of citing sentence generation, which entails generating a short text to capture the salient information in a cited paper and the connection between the citing and cited paper.
no code implementations • ICCV 2021 • Xiaofeng Liu, Zhenhua Guo, Site Li, Fangxu Xing, Jane You, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
In this work, we propose an adversarial unsupervised domain adaptation (UDA) approach with the inherent conditional and label shifts, in which we aim to align the distributions w.r.t.
no code implementations • ICCV 2021 • Xiaofeng Liu, Site Li, Yubin Ge, Pengyi Ye, Jane You, Jun Lu
The UDA for ordinal classification requires inducing a non-trivial ordinal distribution prior over the latent space.
no code implementations • 22 Jul 2021 • Wanqing Xie, Lizhong Liang, Yao Lu, Hui Luo, Xiaofeng Liu
The superior performance of our system shows the validity of combining facial video recording with the SDS score for more accurate self-diagnosis.
no code implementations • 22 Jul 2021 • Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang, Jun Lu, Georges El Fakhri, Jonghye Woo
In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training.
no code implementations • 22 Jul 2021 • Xiaofeng Liu, Fangxu Xing, Hanna K. Gaggin, Weichung Wang, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Assessment of cardiovascular disease (CVD) with cine magnetic resonance imaging (MRI) has been used to non-invasively evaluate detailed cardiac structure and function.
no code implementations • 12 Jul 2021 • Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng
Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss.
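As a hedged illustration of structured pruning in general (not this paper's perturbation-based method), a common magnitude-based variant zeroes out whole output channels with the smallest norms:

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """Structured pruning sketch: zero out the output channels (rows)
    with the smallest L2 norm, keeping a `keep_ratio` fraction of them."""
    norms = np.linalg.norm(weight.reshape(weight.shape[0], -1), axis=1)
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.argsort(norms)[-k:]          # indices of the largest-norm channels
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weight.copy()
    pruned[~mask] = 0.0                    # whole channels removed, not scattered weights
    return pruned, mask
```

Because entire channels are removed, the pruned layer can be physically shrunk afterward, which is what yields real compute savings compared with unstructured sparsity.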
no code implementations • 12 Jul 2021 • Xiaofeng Liu, Yinchuan Li, Qing Wang, Xu Zhang, Yunfeng Shao, Yanhui Geng
By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, the performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced compared with non-sparse FL.
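An L1-norm penalty of this kind is typically optimized with the soft-thresholding proximal operator, which is what produces the sparsity that cuts communication. A minimal sketch (illustrative, not the paper's exact update rule):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of lam * ||w||_1: shrink each weight toward zero
    by lam and clip at zero, yielding exactly-sparse client updates."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

Applied after each local gradient step, small weights become exactly zero, so only the non-zero entries need to be transmitted to the server.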
no code implementations • 25 Jun 2021 • Wanqing Xie, Lizhong Liang, Yao Lu, Chen Wang, Jihong Shen, Hui Luo, Xiaofeng Liu
To automatically interpret depression from the SDS evaluation and the paired video, we propose an end-to-end hierarchical framework for the long-term variable-length video, which is also conditioned on the questionnaire results and the answering time.
no code implementations • 23 Jun 2021 • Xiaofeng Liu, Fangxu Xing, Chao Yang, Georges El Fakhri, Jonghye Woo
To alleviate this, in this work, we target source free UDA for segmentation, and propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain, with an adaptive batch-wise normalization statistics adaptation framework.
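Batch-wise normalization statistics adaptation can be sketched as replacing the source model's batch-norm running statistics with an exponential moving average over target-domain batches; the momentum value and update rule below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def adapt_bn_stats(mean_src, var_src, target_batches, momentum=0.1):
    """Source-free adaptation sketch: update BN running mean/variance
    toward target-domain batch statistics via an exponential moving average."""
    mean, var = mean_src.copy(), var_src.copy()
    for x in target_batches:            # x: array of shape (batch, channels)
        mean = (1 - momentum) * mean + momentum * x.mean(axis=0)
        var = (1 - momentum) * var + momentum * x.var(axis=0)
    return mean, var
```

Only the normalization statistics change; the learned weights stay frozen, which is what makes this usable without access to any source-domain data.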
no code implementations • 23 Jun 2021 • Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jiachen Zhuo, Reese Timothy, Jerry L. Prince, Georges El Fakhri, Jonghye Woo
Self-training based unsupervised domain adaptation (UDA) has shown great potential to address the problem of domain shift, when applying a trained deep learning model in a source domain to unlabeled target domains.
no code implementations • 12 Jun 2021 • Han Xiao, Zhiqin Wang, Wenqiang Tian, Xiaofeng Liu, Wendong Liu, Shi Jin, Jia Shen, Zhi Zhang, Ning Yang
In this paper, we give a systematic description of the 1st Wireless Communication Artificial Intelligence (AI) Competition (WAIC) which is hosted by IMT-2020(5G) Promotion Group 5G+AI Work Group.
no code implementations • 30 Apr 2021 • Yubin Ge, Site Li, Xuyang Li, Fangfang Fan, Wanqing Xie, Jane You, Xiaofeng Liu
The ground distance matrix can be pre-defined following a prior on hierarchical semantic risk.
no code implementations • 9 Apr 2021 • Rodrigo Cabrera, Xiaofeng Liu, Mohammadreza Ghodsi, Zebulun Matteson, Eugene Weinstein, Anjuli Kannan
Streaming processing of speech audio is required for many contemporary practical speech recognition tasks.
no code implementations • 17 Jan 2021 • Xiaofeng Liu, Fangxu Xing, Chao Yang, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Deformable registration of magnetic resonance images between patients with brain tumors and healthy subjects has been an important tool to specify tumor geometry through location alignment and facilitate pathological analysis.
no code implementations • 14 Jan 2021 • Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
Our framework hinges on a cycle-constrained conditional adversarial training approach, where it can extract a modality-invariant anatomical feature with a modality-agnostic encoder and generate a target modality with a conditioned decoder.
no code implementations • 14 Jan 2021 • Xiaofeng Liu, Fangxu Xing, Jerry L. Prince, Aaron Carass, Maureen Stone, Georges El Fakhri, Jonghye Woo
Tagged magnetic resonance imaging (MRI) is a widely used imaging technique for measuring tissue deformation in moving organs.
no code implementations • 13 Jan 2021 • Xiaofeng Liu, Fangxu Xing, Chao Yang, C. -C. Jay Kuo, Suma Babu, Georges El Fakhri, Thomas Jenkins, Jonghye Woo
Deep learning has great potential for accurate detection and classification of diseases with medical imaging data, but the performance is often limited by the number of training datasets and memory requirements.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Lingsheng Kong
Unsupervised domain adaptation (UDA) aims to transfer the knowledge on a labeled source domain distribution to perform well on an unlabeled target domain.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Linghao Jin, Xu Han, Jun Lu, Jane You, Lingsheng Kong
In the up to two orders of magnitude compressed domain, we can explicitly infer the expression from the residual frames and possibly extract identity factors from the I frame with a pre-trained face recognition network.
no code implementations • 1 Jan 2021 • Xiaofeng Liu, Xiongchang Liu, Bo Hu, Wenxuan Ji, Fangxu Xing, Jun Lu, Jane You, C. -C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Recent advances in unsupervised domain adaptation (UDA) show that transferable prototypical learning presents a powerful means for class conditional alignment, which encourages the closeness of cross-domain class centroids.
no code implementations • 21 Oct 2020 • Xiaofeng Liu, Yuzhuo Han, Song Bai, Yi Ge, Tianxing Wang, Xu Han, Site Li, Jane You, Ju Lu
However, the cross-entropy loss cannot take into account the differing importance of each class in a self-driving system.
no code implementations • 20 Oct 2020 • Xiaofeng Liu, Linghao Jin, Xu Han, Jane You
In the up to two orders of magnitude compressed domain, we can explicitly infer the expression from the residual frames and possibly extract identity factors from the I frame with a pre-trained face recognition network.
no code implementations • 11 Aug 2020 • Xiaofeng Liu, Yimeng Zhang, Xiongchang Liu, Song Bai, Site Li, Jane You
The ground metric of Wasserstein distance can be pre-defined following the experience on a specific task.
no code implementations • ECCV 2020 • Xiaofeng Liu, Tong Che, Yiqun Lu, Chao Yang, Site Li, Jane You
This paper targets learning-based novel view synthesis from a single or a limited number of 2D images without pose supervision.
no code implementations • 14 Jun 2020 • Xiaofeng Liu
This chapter systematically summarizes the detrimental factors as task-relevant/irrelevant semantic variations and unspecified latent variations.
no code implementations • CVPR 2020 • Xiaofeng Liu, Wenxuan Ji, Jane You, Georges El Fakhri, Jonghye Woo
In addition, our method can adaptively learn the ground metric in a high-fidelity simulator, following a reinforcement alternative optimization scheme.
no code implementations • 4 Feb 2020 • Biao Yang, Caizhen He, Pin Wang, Ching-Yao Chan, Xiaofeng Liu, Yang Chen
A latent variable predictor is proposed to estimate latent variable distributions from observed and ground-truth trajectories.
no code implementations • 12 Dec 2019 • Chao Yang, Xiaofeng Liu, Qingming Tang, C. -C. Jay Kuo
We study the problem of learning disentangled representations for data across multiple domains and its applications in human retargeting.
no code implementations • 18 Nov 2019 • Tong Che, Xiaofeng Liu, Site Li, Yubin Ge, Ruixiang Zhang, Caiming Xiong, Yoshua Bengio
We test the verifier network on out-of-distribution detection and adversarial example detection problems, as well as anomaly detection problems in structured prediction tasks such as image caption generation.
no code implementations • ICCV 2019 • Xiaofeng Liu, Yang Zou, Tong Che, Peng Ding, Ping Jia, Jane You, Kumar B. V. K
We propose to incorporate inter-class correlations in a Wasserstein training framework by pre-defining (i.e., using the arc length of a circle) or adaptively learning the ground metric.
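A minimal sketch of the pre-defined ground metric idea, assuming classes placed uniformly on a unit circle and a one-hot target label (illustrative only, not the authors' code):

```python
import numpy as np

def circle_ground_metric(n):
    """Ground metric M[i, j] = arc length between classes i and j
    placed uniformly on a unit circle (shorter way around)."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                  # wrap around the circle
    return 2 * np.pi * d / n

def wasserstein_loss(probs, target, M):
    """With a one-hot target, the Wasserstein loss reduces to the expected
    ground distance from the predicted distribution to the target class."""
    return float(probs @ M[:, target])
```

Unlike cross entropy, this loss penalizes a prediction more when its probability mass sits on classes that are far from the true class under the ground metric.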
no code implementations • 3 Nov 2019 • Xiaofeng Liu, Xu Han, Yukai Qiao, Yi Ge, Lu Jun
In this paper, we address this task from the perspective of the loss function.
2 code implementations • ICCV 2019 • Yang Zou, Zhiding Yu, Xiaofeng Liu, B. V. K. Vijaya Kumar, Jinsong Wang
Recent advances in domain adaptation show that deep self-training presents a powerful means for unsupervised domain adaptation.
Ranked #15 on Domain Adaptation on VisDA2017
no code implementations • 5 Aug 2019 • Xiaofeng Liu, Zhenhua Guo, Jane You, B. V. K. Vijaya Kumar
The importance of each image is usually considered either equal or based on a quality assessment of that image independent of other images and/or videos in that image set.
no code implementations • ICCV 2019 • Xiaofeng Liu, Zhenhua Guo, Site Li, Lingsheng Kong, Ping Jia, Jane You, B. V. K. Kumar
We consider the problem of comparing the similarity of image sets containing a variable quantity of un-ordered, heterogeneous images of varying quality.
no code implementations • ECCV 2018 • Xiaofeng Liu, B. V. K. Vijaya Kumar, Chao Yang, Qingming Tang, Jane You
This paper targets the problem of image set-based face verification and identification.
no code implementations • CVPR 2019 • Xiaofeng Liu, Site Li, Lingsheng Kong, Wanqing Xie, Ping Jia, Jane You, B.V.K. Kumar
Recent successes of deep learning-based recognition rely on maintaining the content related to the main-task label.
no code implementations • 26 Nov 2018 • Qihao Liu, Yujia Wang, Xiaofeng Liu
To balance exploration and exploitation, Novelty Search (NS) is employed in every chief agent to encourage policies with high novelty while maximizing per-episode performance.
no code implementations • 23 Mar 2018 • Chao Yang, Yuhang Song, Xiaofeng Liu, Qingming Tang, C. -C. Jay Kuo
We present a new approach to address the difficulty of training a very deep generative model to synthesize high-quality photo-realistic inpainting.
no code implementations • ECCV 2018 • Yuhang Song, Chao Yang, Zhe Lin, Xiaofeng Liu, Qin Huang, Hao Li, C. -C. Jay Kuo
We study the task of image inpainting, which is to fill in the missing region of an incomplete image with plausible contents.
no code implementations • 10 Jun 2015 • Krzysztof Choromanski, Sanjiv Kumar, Xiaofeng Liu
To achieve fast clustering, we propose to represent each cluster by a skeleton set which is updated continuously as new data is seen.