Search Results for author: Mingming Gong

Found 81 papers, 31 papers with code

Label-Noise Robust Domain Adaptation

no code implementations ICML 2020 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.

Denoising Domain Adaptation

LTF: A Label Transformation Framework for Correcting Label Shift

no code implementations ICML 2020 Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, DaCheng Tao

Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.

Detecting Deepfake by Creating Spatio-Temporal Regularity Disruption

no code implementations21 Jul 2022 Jiazhi Guan, Hang Zhou, Mingming Gong, Youjian Zhao, Errui Ding, Jingdong Wang

Specifically, by carefully examining the spatial and temporal properties, we propose to disrupt a real video through a Pseudo-fake Generator and create a wide range of pseudo-fake videos for training.

DeepFake Detection Face Swapping

MetaComp: Learning to Adapt for Online Depth Completion

no code implementations21 Jul 2022 Yang Chen, Shanshan Zhao, Wei Ji, Mingming Gong, Liping Xie

However, facing a new environment where the test data occurs online and differs from the training data in RGB image content and depth sparsity, the trained model might suffer a severe performance drop.

Depth Completion Meta-Learning +1

Style Interleaved Learning for Generalizable Person Re-identification

no code implementations7 Jul 2022 Wentao Tan, Pengfei Wang, Changxing Ding, Mingming Gong, Kui Jia

We employ the features of interleaved styles to update the feature extractor and classifiers using different forward propagations, which helps the model avoid overfitting to certain domain styles.

Domain Generalization Generalizable Person Re-identification +1

Harnessing Out-Of-Distribution Examples via Augmenting Content and Style

no code implementations7 Jul 2022 Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu

Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and this problem has drawn much attention.

Data Augmentation Disentanglement +3

Adversarial Consistency for Single Domain Generalization in Medical Image Segmentation

no code implementations28 Jun 2022 Yanwu Xu, Shaoan Xie, Maxwell Reynolds, Matthew Ragoza, Mingming Gong, Kayhan Batmanghelich

An organ segmentation method that can generalize to unseen contrasts and scanner settings can significantly reduce the need for retraining of deep learning models.

Contrastive Learning Domain Generalization +2

Understanding Robust Overfitting of Adversarial Training and Beyond

1 code implementation17 Jun 2022 Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu

Here, we explore the causes of robust overfitting by comparing the data distribution of \emph{non-overfit} (weak adversary) and \emph{overfitted} (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by the weak adversary mainly contains small-loss data.

Adversarial Robustness Data Ablation

A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning

1 code implementation ICLR 2022 Jixian Guo, Mingming Gong, DaCheng Tao

However, because environments are not labelled, the extracted information inevitably contains redundant information unrelated to the dynamics in transition segments and thus fails to maintain a crucial property of $Z$: $Z$ should be similar in the same environment and dissimilar in different ones.

Model-based Reinforcement Learning reinforcement-learning

Robust Weight Perturbation for Adversarial Training

1 code implementation30 May 2022 Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu

Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation.

Classification

Counterfactual Fairness with Partially Known Causal Graph

no code implementations27 May 2022 Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong

Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.

BIG-bench Machine Learning Causal Inference +1

MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models

no code implementations27 May 2022 Erdun Gao, Ignavier Ng, Mingming Gong, Li Shen, Wei Huang, Tongliang Liu, Kun Zhang, Howard Bondell

One straightforward way to address the missing data problem is first to impute the data using off-the-shelf imputation methods and then apply existing causal discovery methods.

Causal Discovery Imputation +1
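
The "impute, then discover" baseline that this excerpt mentions can be sketched in a few lines; this is my own illustration, and the `causal_discovery` callable is a hypothetical stand-in for any off-the-shelf structure-learning method, not part of MissDAG.

```python
import numpy as np
from sklearn.impute import SimpleImputer

def impute_then_discover(X_incomplete: np.ndarray, causal_discovery):
    """Baseline sketch: mean-impute missing entries, then run any
    off-the-shelf causal discovery method on the completed data."""
    X_completed = SimpleImputer(strategy="mean").fit_transform(X_incomplete)
    return causal_discovery(X_completed)  # e.g. a score-based DAG learner
```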

Few-Shot Font Generation by Learning Fine-Grained Local Styles

1 code implementation CVPR 2022 Licheng Tang, Yiyang Cai, Jiaming Liu, Zhibin Hong, Mingming Gong, Minhu Fan, Junyu Han, Jingtuo Liu, Errui Ding, Jingdong Wang

Instead of explicitly disentangling global or component-wise modeling, the cross-attention mechanism can attend to the right local styles in the reference glyphs and aggregate the reference styles into a fine-grained style representation for the given content glyphs.

Font Generation

On Causality in Domain Adaptation and Semi-Supervised Learning: an Information-Theoretic Analysis

no code implementations10 May 2022 Xuetong Wu, Mingming Gong, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

We show that in causal learning, the excess risk depends on the size of the source sample at a rate of O(1/m) only if the labelling distribution between the source and target domains remains unchanged.

Unsupervised Domain Adaptation

MP2: A Momentum Contrast Approach for Recommendation with Pointwise and Pairwise Learning

no code implementations18 Apr 2022 Menghan Wang, Yuchen Guo, Zhenqi Zhao, Guangzheng Hu, Yuming Shen, Mingming Gong, Philip Torr

To alleviate the influence of the annotation bias, we perform a momentum update to ensure a consistent item representation.
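
As an illustration of the momentum-update idea mentioned above (a generic MoCo-style exponential moving average, not necessarily MP2's exact rule), one encoder can be made to trail another so that item representations drift slowly and stay consistent:

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder: torch.nn.Module,
                    key_encoder: torch.nn.Module,
                    m: float = 0.999) -> None:
    """EMA update: the key (item) encoder slowly follows the query encoder."""
    for q_param, k_param in zip(query_encoder.parameters(),
                                key_encoder.parameters()):
        k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)
```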

Federated Causal Discovery

1 code implementation7 Dec 2021 Erdun Gao, Junjia Chen, Li Shen, Tongliang Liu, Mingming Gong, Howard Bondell

In this paper, under the additive noise model assumption on the data, we take the first step in developing a gradient-based learning framework named DAG-Shared Federated Causal Discovery (DS-FCD), which can learn the causal graph without directly touching local data and naturally handle the data heterogeneity.

Causal Discovery

Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?

1 code implementation NeurIPS 2021 Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, Kun Zhang

We provide reasoning for why, when the supports of the source and target data differ, any map of $X$ that is fixed across domains may not be suitable for domain adaptation via invariant features.

Representation Learning Unsupervised Domain Adaptation

CRIS: CLIP-Driven Referring Image Segmentation

1 code implementation CVPR 2022 Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, Tongliang Liu

In addition, we present text-to-pixel contrastive learning to explicitly enforce the text feature to be similar to the related pixel-level features and dissimilar to the irrelevant ones.

Contrastive Learning Referring Expression Segmentation +1
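
A generic text-to-pixel contrastive term of the kind described above might look like the following sketch; this is my own illustration, and the shapes and names are assumptions rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def text_pixel_contrast(text_feat: torch.Tensor,    # (C,) sentence feature
                        pixel_feats: torch.Tensor,  # (C, H, W) pixel features
                        mask: torch.Tensor,         # (H, W) referred-region mask
                        temperature: float = 0.07) -> torch.Tensor:
    """Pull pixels inside the referred mask toward the text feature and
    push the remaining pixels away, via per-pixel sigmoid similarity."""
    sim = torch.einsum('c,chw->hw', text_feat, pixel_feats) / temperature
    return F.binary_cross_entropy_with_logits(sim, mask.float())
```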

Learning Domain-Invariant Relationship with Instrumental Variable for Domain Generalization

no code implementations4 Oct 2021 Junkun Yuan, Xu Ma, Kun Kuang, Ruoxuan Xiong, Mingming Gong, Lanfen Lin

Specifically, it first learns the conditional distribution of input features of one domain given input features of another domain, and then it estimates the domain-invariant relationship by predicting labels with the learned conditional distribution.

Domain Generalization

Co-variance: Tackling Noisy Labels with Sample Selection by Emphasizing High-variance Examples

no code implementations29 Sep 2021 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.

Learning with noisy labels

Can Label-Noise Transition Matrix Help to Improve Sample Selection and Label Correction?

no code implementations29 Sep 2021 Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama

Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.

Learning with noisy labels

Unaligned Image-to-Image Translation by Learning to Reweight

1 code implementation ICCV 2021 Shaoan Xie, Mingming Gong, Yanwu Xu, Kun Zhang

An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned, e.g., for the selfie2anime task, the anime (selfie) domain must contain only anime (selfie) face images that can be translated to some images in the other domain.

Translation Unsupervised Image-To-Image Translation

Instance-dependent Label-noise Learning under a Structural Causal Model

1 code implementation NeurIPS 2021 Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang

In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.

Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification

1 code implementation22 Aug 2021 Pengfei Wang, Changxing Ding, Wentao Tan, Mingming Gong, Kui Jia, DaCheng Tao

In particular, the performance of our unsupervised UCF method in the MSMT17$\to$Market1501 task is better than that of the fully supervised setting on Market1501.

Box-Adapt: Domain-Adaptive Medical Image Segmentation using Bounding Box Supervision

no code implementations19 Aug 2021 Yanwu Xu, Mingming Gong, Shaoan Xie, Kayhan Batmanghelich

In this paper, we propose a weakly supervised domain adaptation setting, in which we can partially label new datasets with bounding boxes, which are easier and cheaper to obtain than segmentation masks.

Domain Adaptation Liver Segmentation +1

Exploring Set Similarity for Dense Self-supervised Representation Learning

no code implementations CVPR 2022 Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu

By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.

Instance Segmentation Keypoint Detection +4

Kernel Mean Estimation by Marginalized Corrupted Distributions

no code implementations10 Jul 2021 Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu

Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
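
For reference, the standard empirical estimator of the kernel mean that such algorithms rely on is

$$ \hat{\mu}_{\mathbb{P}} \;=\; \frac{1}{n}\sum_{i=1}^{n} k(x_i,\cdot), $$

which converges to the population mean embedding $\mu_{\mathbb{P}} = \mathbb{E}_{x\sim\mathbb{P}}[k(x,\cdot)]$ at the usual $O_p(1/\sqrt{n})$ rate; the paper is concerned with improving on this plain average via marginalized corrupted distributions.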

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels

Instance Correction for Learning with Open-set Noisy Labels

no code implementations1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

Learning with Group Noise

no code implementations17 Mar 2021 Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han

Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.

Learning with noisy labels

Minimal Geometry-Distortion Constraint for Unsupervised Image-to-Image Translation

no code implementations1 Jan 2021 Jiaxian Guo, Jiachen Li, Mingming Gong, Huan Fu, Kun Zhang, DaCheng Tao

Unsupervised image-to-image (I2I) translation, which aims to learn a domain mapping function without paired data, is very challenging because the function is highly under-constrained.

Translation Unsupervised Image-To-Image Translation

Improving Robustness of Softmax Cross-Entropy Loss via Inference Information

no code implementations1 Jan 2021 Bingbing Song, wei he, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou

Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that such approaches ignore adversarial samples and therefore yield only limited robustness to adversarial attacks.

Not All Operations Contribute Equally: Hierarchical Operation-Adaptive Predictor for Neural Architecture Search

no code implementations ICCV 2021 Ziye Chen, Yibing Zhan, Baosheng Yu, Mingming Gong, Bo Du

Despite their efficiency, current graph-based predictors treat all operations equally, resulting in biased topological knowledge of cell architectures.

Neural Architecture Search

Contextual Graph Reasoning Networks

no code implementations1 Jan 2021 Zhaoqing Wang, Jiaming Liu, Yangyuxuan Kang, Mingming Gong, Chuang Zhang, Ming Lu, Ming Wu

Graph Reasoning has shown great potential recently in modeling long-range dependencies, which are crucial for various computer vision tasks.

Instance Segmentation Pose Estimation +1

Score-based Causal Discovery from Heterogeneous Data

no code implementations1 Jan 2021 Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao

Most algorithms in causal discovery consider a single domain with a fixed distribution.

Causal Discovery

Domain Generalization via Entropy Regularization

1 code implementation NeurIPS 2020 Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, DaCheng Tao

To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains.

Domain Generalization

Deep Learning is Singular, and That's Good

1 code implementation22 Oct 2020 Daniel Murfet, Susan Wei, Mingming Gong, Hui Li, Jesse Gell-Redman, Thomas Quella

In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied to such models.

Learning Theory

Class2Simi: A New Perspective on Learning with Label Noise

no code implementations28 Sep 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.

Adaptive Context-Aware Multi-Modal Network for Depth Completion

1 code implementation25 Aug 2020 Shanshan Zhao, Mingming Gong, Huan Fu, DaCheng Tao

Furthermore, considering the multi-modality of the input data, we exploit graph propagation on the two modalities respectively to extract multi-modal representations.

Depth Completion

Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN

no code implementations5 Aug 2020 Li Sun, Junxiang Chen, Yanwu Xu, Mingming Gong, Ke Yu, Kayhan Batmanghelich

During training, we adopt a hierarchical structure that simultaneously generates a low-resolution version of the image and a randomly selected sub-volume of the high-resolution image.

Data Augmentation Domain Adaptation +3

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
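
The factorization referred to above can be written, with an intermediate class $Y'$ and assuming $\bar{Y}\perp Y \mid Y'$ (my notation, not necessarily the paper's), as

$$ T_{ij} \;=\; P(\bar{Y}=j \mid Y=i) \;=\; \sum_{l} P(Y'=l \mid Y=i)\, P(\bar{Y}=j \mid Y'=l), \qquad \text{i.e.}\quad T = A\,B, $$

where $A_{il}=P(Y'=l\mid Y=i)$ and $B_{lj}=P(\bar{Y}=j\mid Y'=l)$ are the two easier-to-estimate matrices.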

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.

Contrastive Learning Learning with noisy labels +1
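
A minimal sketch of the transformation described above (my own illustration, not the authors' code): pairs of examples inherit a similarity label that is 1 exactly when their (possibly noisy) class labels agree.

```python
import itertools
from typing import List, Tuple

def class_to_similarity(labels: List[int]) -> List[Tuple[int, int, int]]:
    """Return (i, j, s) for every pair of indices, where s = 1 iff the
    (possibly noisy) class labels of i and j agree."""
    return [(i, j, int(labels[i] == labels[j]))
            for i, j in itertools.combinations(range(len(labels)), 2)]

# e.g. class_to_similarity([0, 0, 1]) == [(0, 1, 1), (0, 2, 0), (1, 2, 0)]
```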

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with the \textit{instance-dependent} label noise is challenging, because it is hard to model such real-world noise.

Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification General Classification +1

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning

no code implementations ICLR 2022 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.

Domain Adaptation as a Problem of Inference on Graphical Models

1 code implementation NeurIPS 2020 Kun Zhang, Mingming Gong, Petar Stojanov, Biwei Huang, Qingsong Liu, Clark Glymour

Such a graphical model distinguishes between constant and varied modules of the distribution and specifies the properties of the changes across domains, which serves as prior knowledge of the changing modules for the purpose of deriving the posterior of the target variable $Y$ in the target domain.

Bayesian Inference Unsupervised Domain Adaptation

Twin Auxiliary Classifiers GAN

1 code implementation NeurIPS 2019 Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich

One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN) that generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.

Conditional Image Generation

Specific and Shared Causal Relation Modeling and Mechanism-Based Clustering

1 code implementation NeurIPS 2019 Biwei Huang, Kun Zhang, Pengtao Xie, Mingming Gong, Eric P. Xing, Clark Glymour

The learned SSCM gives the specific causal knowledge for each individual as well as the general trend over the population.

Causal Discovery

Learning Multi-level Weight-centric Features for Few-shot Learning

no code implementations28 Nov 2019 Mingjiang Liang, Shaoli Huang, Shirui Pan, Mingming Gong, Wei Liu

Few-shot learning is currently enjoying a considerable resurgence of interest, aided by the recent advance of deep learning.

Few-Shot Learning

Learning Depth from Monocular Videos Using Synthetic Data: A Temporally-Consistent Domain Adaptation Approach

no code implementations16 Jul 2019 Yipeng Mou, Mingming Gong, Huan Fu, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao

Due to the stylistic difference between synthetic and real images, we propose a temporally-consistent domain adaptation (TCDA) approach that simultaneously explores labels in the synthetic domain and temporal constraints in the videos to improve style transfer and depth prediction.

Depth Prediction Domain Adaptation +4

Twin Auxiliary Classifiers GAN

4 code implementations5 Jul 2019 Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich

One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN), which generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.

Conditional Image Generation
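
For context, the AC-GAN objective being extended (as introduced by Odena et al.; restated here from memory, so treat it as a sketch) combines a source-of-data term and a class-prediction term:

$$ L_S = \mathbb{E}[\log P(S=\mathrm{real}\mid X_{\mathrm{real}})] + \mathbb{E}[\log P(S=\mathrm{fake}\mid X_{\mathrm{fake}})], \qquad L_C = \mathbb{E}[\log P(C=c\mid X_{\mathrm{real}})] + \mathbb{E}[\log P(C=c\mid X_{\mathrm{fake}})], $$

with the discriminator trained to maximize $L_S + L_C$ and the generator trained to maximize $L_C - L_S$; the paper's twin-classifier variant builds on this formulation.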

Causal Discovery and Forecasting in Nonstationary Environments with State-Space Models

no code implementations26 May 2019 Biwei Huang, Kun Zhang, Mingming Gong, Clark Glymour

In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and concerned with both finding causal relations and forecasting the values of variables of interest, both of which are particularly challenging in such nonstationary environments.

Bayesian Inference Causal Discovery +1

Generative-Discriminative Complementary Learning

no code implementations2 Apr 2019 Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich

The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases.

Robust Angular Local Descriptor Learning

1 code implementation21 Jan 2019 Yanwu Xu, Mingming Gong, Tongliang Liu, Kayhan Batmanghelich, Chaohui Wang

In recent years, the learned local descriptors have outperformed handcrafted ones by a large margin, due to the powerful deep convolutional neural network architectures such as L2-Net [1] and triplet based metric learning [2].

Metric Learning

Modeling Dynamic Missingness of Implicit Feedback for Recommendation

no code implementations NeurIPS 2018 Menghan Wang, Mingming Gong, Xiaolin Zheng, Kun Zhang

Recent studies modeled \emph{exposure}, a latent missingness variable which indicates whether an item is missing to a user, to give each missing entry a confidence of being negative feedback.

Collaborative Filtering Recommendation Systems

Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

1 code implementation5 Nov 2018 Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. 
Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze

This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.

Brain Tumor Segmentation Survival Prediction +1

Geometry-Consistent Generative Adversarial Networks for One-Sided Unsupervised Domain Mapping

no code implementations CVPR 2019 Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao

Unsupervised domain mapping aims to learn a function $G_{XY}$ that translates domain $X$ to domain $Y$ in the absence of paired examples.

Correcting the Triplet Selection Bias for Triplet Loss

1 code implementation ECCV 2018 Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao

Considering that the number of triplets grows cubically with the size of training data, triplet mining is thus indispensable for efficiently training with triplet loss.

Face Recognition Fine-Grained Image Classification +4
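
Because the number of triplets grows cubically with the training-set size, training typically mines triplets within each mini-batch; a generic batch-hard mining sketch (standard practice, not this paper's specific bias-correction scheme) looks like:

```python
import torch

def batch_hard_triplet_loss(embeddings: torch.Tensor,  # (B, D)
                            labels: torch.Tensor,      # (B,)
                            margin: float = 0.2) -> torch.Tensor:
    """For each anchor, use its hardest in-batch positive and negative
    instead of enumerating all O(n^3) triplets."""
    dist = torch.cdist(embeddings, embeddings)            # (B, B) distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # positive-pair mask
    hardest_pos = dist.masked_fill(~same, float('-inf')).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()
```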

Deep Domain Generalization via Conditional Invariant Adversarial Networks

no code implementations ECCV 2018 Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao

Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.

Domain Generalization Representation Learning

Domain Generalization via Conditional Invariant Representation

1 code implementation23 Jul 2018 Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, DaCheng Tao

With the conditional invariant representation, the invariance of the joint distribution $\mathbb{P}(h(X), Y)$ can be guaranteed if the class prior $\mathbb{P}(Y)$ does not change across training and test domains.

Domain Generalization
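
The one-line reasoning behind the excerpt's claim is simply the factorization of the joint distribution:

$$ \mathbb{P}(h(X), Y) \;=\; \mathbb{P}(h(X)\mid Y)\,\mathbb{P}(Y), $$

so if the representation $h$ makes the class-conditional distribution $\mathbb{P}(h(X)\mid Y)$ identical across domains and the class prior $\mathbb{P}(Y)$ also does not change, then the joint distribution of $(h(X), Y)$ is invariant as well.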

Subject2Vec: Generative-Discriminative Approach from a Set of Image Patches to a Vector

no code implementations28 Jun 2018 Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich

Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.

Anatomy

MoE-SPNet: A Mixture-of-Experts Scene Parsing Network

no code implementations19 Jun 2018 Huan Fu, Mingming Gong, Chaohui Wang, DaCheng Tao

In the proposed networks, different levels of features at each spatial location are adaptively re-weighted according to the local structure and surrounding contextual information before aggregation.

Scene Parsing

Deep Ordinal Regression Network for Monocular Depth Estimation

5 code implementations CVPR 2018 Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, DaCheng Tao

These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions.

Monocular Depth Estimation
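
One common way to sidestep plain MSE regression is to discretize the depth range into ordered bins and classify; the sketch below uses log-spaced thresholds, and the specific range and bin count are illustrative assumptions, not the paper's exact discretization.

```python
import numpy as np

def depth_to_ordinal(depth: np.ndarray,
                     alpha: float = 1.0,   # assumed minimum depth (m)
                     beta: float = 80.0,   # assumed maximum depth (m)
                     K: int = 80) -> np.ndarray:
    """Map continuous depths to K ordered bins with log-spaced thresholds,
    so nearby depths are resolved more finely than distant ones."""
    thresholds = np.exp(np.linspace(np.log(alpha), np.log(beta), K + 1))
    return np.clip(np.digitize(depth, thresholds) - 1, 0, K - 1)
```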

An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption

no code implementations CVPR 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, DaCheng Tao

In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution.
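
In notation (mine, not necessarily the paper's), the setting is

$$ F \;=\; \sum_{k=1}^{K} \kappa_k\, H_k, \qquad \kappa_k \ge 0,\ \ \sum_{k=1}^{K}\kappa_k = 1, $$

where samples from the mixture $F$ and from each component $H_k$ are observed and the goal is to recover the proportions $\kappa_k$; linear independence of the components (the assumption in the title) is what makes the $\kappa_k$ unique.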

Causal Generative Domain Adaptation Networks

no code implementations12 Apr 2018 Mingming Gong, Kun Zhang, Biwei Huang, Clark Glymour, DaCheng Tao, Kayhan Batmanghelich

For this purpose, we first propose a flexible Generative Domain Adaptation Network (G-DAN) with specific latent variables to capture changes in the generating process of features across domains.

Domain Adaptation

Learning with Biased Complementary Labels

1 code implementation ECCV 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, DaCheng Tao

We therefore reason that the transition probabilities will be different.

A Coarse-Fine Network for Keypoint Localization

no code implementations ICCV 2017 Shaoli Huang, Mingming Gong, DaCheng Tao

To target this problem, we develop a keypoint localization network composed of several coarse detector branches, each of which is built on top of a feature layer in a CNN, and a fine detector branch built on top of multiple feature layers.

Pose Estimation

A Compromise Principle in Deep Monocular Depth Estimation

no code implementations28 Aug 2017 Huan Fu, Mingming Gong, Chaohui Wang, DaCheng Tao

However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions.

Classification Data Augmentation +2

Transfer Learning with Label Noise

no code implementations31 Jul 2017 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

However, when learning this invariant knowledge, existing methods assume that the labels in source domain are uncontaminated, while in reality, we often have access to source data with noisy labels.

Denoising Transfer Learning

Causal Discovery in the Presence of Measurement Error: Identifiability Conditions

no code implementations10 Jun 2017 Kun Zhang, Mingming Gong, Joseph Ramsey, Kayhan Batmanghelich, Peter Spirtes, Clark Glymour

This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance.

Causal Discovery

Causal Inference by Identification of Vector Autoregressive Processes with Hidden Components

no code implementations14 Nov 2014 Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf

A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.

Causal Inference Time Series
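
A minimal numerical sketch of the "(linear) Granger" recipe the excerpt describes, i.e., fit a VAR(1) model by least squares and read off the coefficient matrix; this is illustrative only, while the paper itself addresses identification when hidden components are present.

```python
import numpy as np

def fit_var1(X: np.ndarray) -> np.ndarray:
    """OLS fit of X_t = B X_{t-1} + e_t.  X has shape (T, d); returns B_hat
    of shape (d, d), the matrix that Granger-style analysis reads causally."""
    past, present = X[:-1], X[1:]
    # Least squares: find W minimizing ||past @ W - present||; B_hat = W.T
    W, *_ = np.linalg.lstsq(past, present, rcond=None)
    return W.T
```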
