Search Results for author: Mingming Gong

Found 59 papers, 17 papers with code

LTF: A Label Transformation Framework for Correcting Label Shift

no code implementations ICML 2020 Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, DaCheng Tao

Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.

Label-Noise Robust Domain Adaptation

no code implementations ICML 2020 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.

Denoising Domain Adaptation

Learning Domain-Invariant Relationship with Instrumental Variable for Domain Generalization

no code implementations 4 Oct 2021 Junkun Yuan, Xu Ma, Kun Kuang, Ruoxuan Xiong, Mingming Gong, Lanfen Lin

Specifically, it first learns the conditional distribution of input features of one domain given input features of another domain, and then it estimates the domain-invariant relationship by predicting labels with the learned conditional distribution.

Domain Generalization
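
The two-step procedure described in this entry can be sketched in a linear toy form. Everything below (the synthetic data, dimensions, and the use of plain least squares in place of the learned conditional distribution) is illustrative, not the paper's actual method.

```python
# Linear toy sketch of the two-step idea above: stage 1 models E[X_a | X_b] across
# domains, stage 2 regresses labels on that estimate.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X_b = rng.normal(size=(n, d))                           # features of one domain
M = rng.normal(size=(d, d))
X_a = X_b @ M + 0.1 * rng.normal(size=(n, d))           # features of another domain
w_true = rng.normal(size=d)
y = X_a @ w_true + 0.1 * rng.normal(size=n)             # labels generated from X_a

# Stage 1: learn the conditional mean of X_a given X_b.
A, *_ = np.linalg.lstsq(X_b, X_a, rcond=None)
X_a_hat = X_b @ A

# Stage 2: estimate the label relationship from the predicted features.
w_hat, *_ = np.linalg.lstsq(X_a_hat, y, rcond=None)
print(np.round(w_hat, 2))
print(np.round(w_true, 2))   # the two should roughly agree
```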

Unaligned Image-to-Image Translation by Learning to Reweight

no code implementations ICCV 2021 Shaoan Xie, Mingming Gong, Yanwu Xu, Kun Zhang

An essential yet restrictive assumption for unsupervised image translation is that the two domains are aligned, e.g., for the selfie2anime task, the anime (selfie) domain must contain only anime (selfie) face images that can be translated to some images in the other domain.

Translation Unsupervised Image-To-Image Translation

Instance-dependent Label-noise Learning under a Structural Causal Model

no code implementations 7 Sep 2021 Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang

In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.

Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification

no code implementations 22 Aug 2021 Pengfei Wang, Changxing Ding, Wentao Tan, Mingming Gong, Kui Jia, DaCheng Tao

In particular, the performance of our unsupervised UCF method in the MSMT17$\to$Market1501 task is better than that of the fully supervised setting on Market1501.

Box-Adapt: Domain-Adaptive Medical Image Segmentation using Bounding Box Supervision

no code implementations 19 Aug 2021 Yanwu Xu, Mingming Gong, Shaoan Xie, Kayhan Batmanghelich

In this paper, we propose a weakly supervised domain adaptation setting, in which we can partially label new datasets with bounding boxes, which are easier and cheaper to obtain than segmentation masks.

Domain Adaptation Liver Segmentation

Exploring Set Similarity for Dense Self-supervised Representation Learning

no code implementations 19 Jul 2021 Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu

By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.

Instance Segmentation Keypoint Detection +3

Kernel Mean Estimation by Marginalized Corrupted Distributions

no code implementations 10 Jul 2021 Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu

Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
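
For context, the plain empirical kernel mean estimator that such algorithms rely on looks as follows; this is only the standard baseline, not the marginalized-corrupted-distributions estimator the paper proposes, and the kernel width, sample size, and evaluation grid are made up.

```python
# Empirical kernel mean embedding: mu_hat(t) = (1/n) * sum_i k(x_i, t).
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # k(a, b) = exp(-gamma * ||a - b||^2), evaluated pairwise
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))              # samples from the underlying distribution
grid = np.linspace(-3.0, 3.0, 7)[:, None]  # points at which to evaluate the embedding

mu_hat = rbf_kernel(X, grid).mean(axis=0)  # empirical kernel mean at each grid point
print(mu_hat.round(3))
```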

Adversarial Robustness through the Lens of Causality

no code implementations 11 Jun 2021 Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang

The spurious correlation implies that the adversarial distribution is constructed by making the statistical conditional association between style information and labels drastically different from that in the natural distribution.

Adversarial Attack

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels
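
The baseline this paper refines is plain small-loss sample selection; a minimal sketch (with a made-up keep ratio and losses, and without the paper's uncertainty term) is shown below.

```python
# Plain small-loss sample selection: keep the samples whose current loss is smallest.
import numpy as np

def select_small_loss(per_sample_loss, keep_ratio=0.7):
    """Indices of the keep_ratio fraction of samples with the smallest loss."""
    k = int(len(per_sample_loss) * keep_ratio)
    return np.argsort(per_sample_loss)[:k]

losses = np.array([0.2, 2.5, 0.1, 1.8, 0.4, 0.3])
print(select_small_loss(losses, keep_ratio=0.5))   # indices of the cleanest-looking half
```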

Instance Correction for Learning with Open-set Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

Learning with Group Noise

no code implementations 17 Mar 2021 Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han

Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.

Learning with noisy labels

Improving robustness of softmax cross-entropy loss via inference information

no code implementations 1 Jan 2021 Bingbing Song, Wei He, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou

Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that adversarial samples are ignored in this process, which results in limited robustness to adversarial attacks.

Not All Operations Contribute Equally: Hierarchical Operation-Adaptive Predictor for Neural Architecture Search

no code implementations ICCV 2021 Ziye Chen, Yibing Zhan, Baosheng Yu, Mingming Gong, Bo Du

Despite their efficiency, current graph-based predictors treat all operations equally, resulting in biased topological knowledge of cell architectures.

Neural Architecture Search

Minimal Geometry-Distortion Constraint for Unsupervised Image-to-Image Translation

no code implementations 1 Jan 2021 Jiaxian Guo, Jiachen Li, Mingming Gong, Huan Fu, Kun Zhang, DaCheng Tao

Unsupervised image-to-image (I2I) translation, which aims to learn a domain mapping function without paired data, is very challenging because the function is highly under-constrained.

Translation Unsupervised Image-To-Image Translation

Score-based Causal Discovery from Heterogeneous Data

no code implementations 1 Jan 2021 Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao

Most algorithms in causal discovery consider a single domain with a fixed distribution.

Causal Discovery

Contextual Graph Reasoning Networks

no code implementations 1 Jan 2021 Zhaoqing Wang, Jiaming Liu, Yangyuxuan Kang, Mingming Gong, Chuang Zhang, Ming Lu, Ming Wu

Graph Reasoning has shown great potential recently in modeling long-range dependencies, which are crucial for various computer vision tasks.

Instance Segmentation Pose Estimation +1

Domain Generalization via Entropy Regularization

1 code implementation NeurIPS 2020 Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, DaCheng Tao

To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains.

Domain Generalization
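
The domain-discriminator idea mentioned in this entry can be sketched as a minimal adversarial objective; the layer sizes and the explicit sign flip (used here in place of a gradient-reversal layer) are illustrative, and the paper's entropy-regularization term is not shown.

```python
# Minimal domain-adversarial sketch: a discriminator guesses the source domain of a
# feature, while the feature extractor is trained to make domains indistinguishable.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(nn.Linear(16, 8), nn.ReLU())  # T(.)
domain_discriminator = nn.Linear(8, 3)                          # 3 source domains

x = torch.randn(32, 16)
domain = torch.randint(0, 3, (32,))

z = feature_extractor(x)
d_loss = F.cross_entropy(domain_discriminator(z), domain)  # discriminator: guess the domain
g_loss = -d_loss                                           # extractor: maximize domain confusion
print(d_loss.item(), g_loss.item())
```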

Deep Learning is Singular, and That's Good

no code implementations 22 Oct 2020 Daniel Murfet, Susan Wei, Mingming Gong, Hui Li, Jesse Gell-Redman, Thomas Quella

In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied to such models.

Learning Theory
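
A textbook toy example from singular learning theory (not drawn from this abstract) makes the statement concrete:

```latex
% Toy singular model: f_{a,b}(x) = a b x, with the true function f \equiv 0.
\[
  W_0 \;=\; \{(a,b)\in\mathbb{R}^2 : ab = 0\}
  \quad\text{(the optimal set: the union of the two coordinate axes, singular at the origin)}
\]
\[
  I(a,b) \;\propto\; \begin{pmatrix} b^2 & ab \\ ab & a^2 \end{pmatrix},
  \qquad \det I(a,b) = 0 \ \text{for all } (a,b),
\]
% so the Fisher information is everywhere rank-deficient (and vanishes at the
% origin), and classical asymptotics such as BIC do not apply.
```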

Adaptive Context-Aware Multi-Modal Network for Depth Completion

1 code implementation 25 Aug 2020 Shanshan Zhao, Mingming Gong, Huan Fu, DaCheng Tao

Furthermore, considering the multi-modality of the input data, we exploit graph propagation on the two modalities respectively to extract multi-modal representations.

Depth Completion

Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN

no code implementations 5 Aug 2020 Li Sun, Junxiang Chen, Yanwu Xu, Mingming Gong, Ke Yu, Kayhan Batmanghelich

During training, we adopt a hierarchical structure that simultaneously generates a low-resolution version of the image and a randomly selected sub-volume of the high-resolution image.

Data Augmentation Domain Adaptation +4
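
The two training targets described above can be sketched with plain array operations; the volume shape, pooling factor, and crop size below are made up for illustration.

```python
# A low-resolution copy of a 3D volume plus a randomly located full-resolution sub-volume.
import numpy as np

rng = np.random.default_rng(0)
volume = rng.normal(size=(128, 128, 128))          # full-resolution 3D image

# Low-resolution version via 4x average pooling along each axis.
low_res = volume.reshape(32, 4, 32, 4, 32, 4).mean(axis=(1, 3, 5))

# Randomly selected 32^3 sub-volume kept at full resolution.
c = rng.integers(0, 128 - 32, size=3)
sub_volume = volume[c[0]:c[0] + 32, c[1]:c[1] + 32, c[2]:c[2] + 32]

print(low_res.shape, sub_volume.shape)             # (32, 32, 32) (32, 32, 32)
```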

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
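
The factorization itself is easy to state in code; the matrices below are made-up examples, and estimating the two factors from noisy data is the paper's actual contribution.

```python
# A clean-to-noisy transition matrix written as the product of two row-stochastic
# factors through an intermediate class.
import numpy as np

T_clean_to_mid = np.array([[0.9, 0.1, 0.0],    # clean label -> intermediate class
                           [0.1, 0.8, 0.1],
                           [0.0, 0.2, 0.8]])
T_mid_to_noisy = np.array([[0.95, 0.05, 0.0],  # intermediate class -> noisy label
                           [0.05, 0.9, 0.05],
                           [0.0, 0.1, 0.9]])

T = T_clean_to_mid @ T_mid_to_noisy            # overall clean -> noisy transition matrix
print(T.round(3))
print(np.allclose(T.sum(axis=1), 1.0))         # rows still sum to one
```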

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations 14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.

Contrastive Learning Learning with noisy labels +1
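
The class-to-similarity transformation can be sketched directly on toy labels: each pair of examples gets a (noisy) similarity label recording whether their (noisy) class labels agree. The exhaustive pair enumeration below is illustrative.

```python
# Class2Simi-style label transformation on toy labels.
from itertools import combinations

noisy_class_labels = [0, 2, 0, 1, 2]

similarity_pairs = [((i, j), int(noisy_class_labels[i] == noisy_class_labels[j]))
                    for i, j in combinations(range(len(noisy_class_labels)), 2)]

for (i, j), sim in similarity_pairs:
    print(f"pair ({i}, {j}) -> similarity label {sim}")
```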

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.

Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations 16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification General Classification +1

Towards Mixture Proportion Estimation without Irreducibility

no code implementations 10 Feb 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

It is worthwhile to change the problem: we prove that if the assumption holds, our method is not affected; if the assumption does not hold, the bias introduced by changing the problem is smaller than the bias caused by violating the irreducibility assumption in the original problem.

Domain Adaptation as a Problem of Inference on Graphical Models

1 code implementation NeurIPS 2020 Kun Zhang, Mingming Gong, Petar Stojanov, Biwei Huang, Qingsong Liu, Clark Glymour

Such a graphical model distinguishes between constant and varied modules of the distribution and specifies the properties of the changes across domains, which serves as prior knowledge of the changing modules for the purpose of deriving the posterior of the target variable $Y$ in the target domain.

Bayesian Inference Unsupervised Domain Adaptation

Twin Auxiliary Classifiers GAN

1 code implementation NeurIPS 2019 Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich

One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN) that generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.

Conditional Image Generation

Specific and Shared Causal Relation Modeling and Mechanism-Based Clustering

1 code implementation NeurIPS 2019 Biwei Huang, Kun Zhang, Pengtao Xie, Mingming Gong, Eric P. Xing, Clark Glymour

The learned SSCM gives the specific causal knowledge for each individual as well as the general trend over the population.

Causal Discovery

Learning Multi-level Weight-centric Features for Few-shot Learning

no code implementations 28 Nov 2019 Mingjiang Liang, Shaoli Huang, Shirui Pan, Mingming Gong, Wei Liu

Few-shot learning is currently enjoying a considerable resurgence of interest, aided by the recent advance of deep learning.

Few-Shot Learning

Learning Depth from Monocular Videos Using Synthetic Data: A Temporally-Consistent Domain Adaptation Approach

no code implementations 16 Jul 2019 Yipeng Mou, Mingming Gong, Huan Fu, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao

Due to the style difference between synthetic and real images, we propose a temporally-consistent domain adaptation (TCDA) approach that simultaneously explores labels in the synthetic domain and temporal constraints in the videos to improve style transfer and depth prediction.

Domain Adaptation Monocular Depth Estimation +3

Twin Auxiliary Classifiers GAN

4 code implementations 5 Jul 2019 Mingming Gong, Yanwu Xu, Chunyuan Li, Kun Zhang, Kayhan Batmanghelich

One of the popular conditional models is Auxiliary Classifier GAN (AC-GAN), which generates highly discriminative images by extending the loss function of GAN with an auxiliary classifier.

Conditional Image Generation
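
For reference, the way AC-GAN extends the GAN objective with an auxiliary classifier can be written as a discriminator-side loss; this is a generic sketch on toy tensors of the baseline both twin-auxiliary-classifier entries start from, not the twin-classifier correction itself.

```python
# Generic AC-GAN discriminator loss: adversarial real/fake term plus an auxiliary
# classification term on both real and generated samples.
import torch
import torch.nn.functional as F

def acgan_d_loss(real_src, real_cls_logits, real_labels,
                 fake_src, fake_cls_logits, fake_labels):
    adv = F.binary_cross_entropy_with_logits(real_src, torch.ones_like(real_src)) + \
          F.binary_cross_entropy_with_logits(fake_src, torch.zeros_like(fake_src))
    aux = F.cross_entropy(real_cls_logits, real_labels) + \
          F.cross_entropy(fake_cls_logits, fake_labels)
    return adv + aux

src = torch.randn(4, 1)                 # real/fake score logits for a batch of 4
cls = torch.randn(4, 10)                # class logits over 10 classes
labels = torch.randint(0, 10, (4,))
print(acgan_d_loss(src, cls, labels, src, cls, labels))
```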

Causal Discovery and Forecasting in Nonstationary Environments with State-Space Models

no code implementations 26 May 2019 Biwei Huang, Kun Zhang, Mingming Gong, Clark Glymour

In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and concerned with both finding causal relations and forecasting the values of variables of interest, both of which are particularly challenging in such nonstationary environments.

Bayesian Inference Causal Discovery +1

Generative-Discriminative Complementary Learning

no code implementations 2 Apr 2019 Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich

The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases.

Robust Angular Local Descriptor Learning

1 code implementation 21 Jan 2019 Yanwu Xu, Mingming Gong, Tongliang Liu, Kayhan Batmanghelich, Chaohui Wang

In recent years, the learned local descriptors have outperformed handcrafted ones by a large margin, due to the powerful deep convolutional neural network architectures such as L2-Net [1] and triplet based metric learning [2].

Metric Learning

Modeling Dynamic Missingness of Implicit Feedback for Recommendation

no code implementations NeurIPS 2018 Menghan Wang, Mingming Gong, Xiaolin Zheng, Kun Zhang

Recent studies modeled exposure, a latent missingness variable which indicates whether an item is missing to a user, to give each missing entry a confidence of being negative feedback.

Collaborative Filtering Recommendation Systems

Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge

1 code implementation5 Nov 2018 Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. 
Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze

This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.

Brain Tumor Segmentation Survival Prediction +1

Geometry-Consistent Generative Adversarial Networks for One-Sided Unsupervised Domain Mapping

no code implementations CVPR 2019 Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, Kun Zhang, DaCheng Tao

Unsupervised domain mapping aims to learn a function $G_{XY}$ that translates domain $X$ to domain $Y$ in the absence of paired examples.

Deep Domain Generalization via Conditional Invariant Adversarial Networks

no code implementations ECCV 2018 Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao

Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.

Domain Generalization Representation Learning
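
A crude, hedged stand-in for matching class-conditional (rather than marginal) feature distributions is to compare per-class feature means across source domains; the paper uses conditional-invariant adversarial networks, and the linear summary below is only illustrative.

```python
# Per-class feature-mean discrepancy between two source domains.
import numpy as np

def conditional_mean_discrepancy(feat_a, y_a, feat_b, y_b):
    classes = np.intersect1d(np.unique(y_a), np.unique(y_b))
    gaps = [np.linalg.norm(feat_a[y_a == c].mean(axis=0) - feat_b[y_b == c].mean(axis=0))
            for c in classes]
    return float(np.mean(gaps))

rng = np.random.default_rng(0)
feat_a, y_a = rng.normal(size=(100, 8)), rng.integers(0, 3, size=100)
feat_b, y_b = rng.normal(size=(100, 8)), rng.integers(0, 3, size=100)
print(conditional_mean_discrepancy(feat_a, y_a, feat_b, y_b))
```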

Correcting the Triplet Selection Bias for Triplet Loss

1 code implementation ECCV 2018 Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao

Considering that the number of triplets grows cubically with the size of training data, triplet mining is thus indispensable for efficiently training with triplet loss.

Face Recognition Fine-Grained Image Classification +4
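
The triplet loss with a simple hardest-positive / hardest-negative mining step, the standard recipe whose selection bias this paper analyzes, can be sketched as follows; the embeddings, labels, and margin are made up.

```python
# Triplet loss with hardest-example mining inside a toy batch.
import numpy as np

def mined_triplet_losses(emb, labels, margin=0.2):
    losses = []
    for a in range(len(emb)):
        pos = [i for i in range(len(emb)) if labels[i] == labels[a] and i != a]
        neg = [i for i in range(len(emb)) if labels[i] != labels[a]]
        d = np.linalg.norm(emb - emb[a], axis=1)
        hardest_pos, hardest_neg = d[pos].max(), d[neg].min()   # mine within the batch
        losses.append(max(0.0, hardest_pos - hardest_neg + margin))
    return np.array(losses)

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(6, 8))
labels = np.array([0, 0, 1, 1, 2, 2])
print(mined_triplet_losses(embeddings, labels).round(3))
```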

Domain Generalization via Conditional Invariant Representation

1 code implementation 23 Jul 2018 Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, DaCheng Tao

With the conditional invariant representation, the invariance of the joint distribution $\mathbb{P}(h(X), Y)$ can be guaranteed if the class prior $\mathbb{P}(Y)$ does not change across training and test domains.

Domain Generalization

Subject2Vec: Generative-Discriminative Approach from a Set of Image Patches to a Vector

no code implementations 28 Jun 2018 Sumedha Singla, Mingming Gong, Siamak Ravanbakhsh, Frank Sciurba, Barnabas Poczos, Kayhan N. Batmanghelich

Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps them to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages the diversity of the local latent features.

MoE-SPNet: A Mixture-of-Experts Scene Parsing Network

no code implementations 19 Jun 2018 Huan Fu, Mingming Gong, Chaohui Wang, DaCheng Tao

In the proposed networks, different levels of features at each spatial location are adaptively re-weighted according to the local structure and surrounding contextual information before aggregation.

Scene Parsing

Deep Ordinal Regression Network for Monocular Depth Estimation

5 code implementations CVPR 2018 Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, DaCheng Tao

These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions.

Monocular Depth Estimation
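
DORN instead discretizes depth and treats prediction as ordinal classification; below is a sketch of the commonly used spacing-increasing discretization, with an illustrative depth range and bin count.

```python
# Spacing-increasing discretization (SID) of a depth range into K ordinal bins.
import numpy as np

def sid_bin_edges(alpha, beta, K):
    """K + 1 bin edges spaced uniformly in log-depth between alpha and beta."""
    i = np.arange(K + 1)
    return np.exp(np.log(alpha) + i * (np.log(beta) - np.log(alpha)) / K)

edges = sid_bin_edges(alpha=1.0, beta=80.0, K=8)
depths = np.array([1.5, 4.0, 12.0, 60.0])
ordinal_targets = np.digitize(depths, edges) - 1   # bin index used as the ordinal label
print(edges.round(2))
print(ordinal_targets)
```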

An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption

no code implementations CVPR 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, DaCheng Tao

In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution.

Causal Generative Domain Adaptation Networks

no code implementations 12 Apr 2018 Mingming Gong, Kun Zhang, Biwei Huang, Clark Glymour, DaCheng Tao, Kayhan Batmanghelich

For this purpose, we first propose a flexible Generative Domain Adaptation Network (G-DAN) with specific latent variables to capture changes in the generating process of features across domains.

Domain Adaptation

Learning with Biased Complementary Labels

1 code implementation ECCV 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, DaCheng Tao

We therefore reason that the transition probabilities will be different.

A Coarse-Fine Network for Keypoint Localization

no code implementations ICCV 2017 Shaoli Huang, Mingming Gong, DaCheng Tao

To target this problem, we develop a keypoint localization network composed of several coarse detector branches, each of which is built on top of a feature layer in a CNN, and a fine detector branch built on top of multiple feature layers.

Pose Estimation

A Compromise Principle in Deep Monocular Depth Estimation

no code implementations 28 Aug 2017 Huan Fu, Mingming Gong, Chaohui Wang, DaCheng Tao

However, we find that training a network to predict a high spatial resolution continuous depth map often suffers from poor local solutions.

Classification Data Augmentation +2

Transfer Learning with Label Noise

no code implementations 31 Jul 2017 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

However, when learning this invariant knowledge, existing methods assume that the labels in source domain are uncontaminated, while in reality, we often have access to source data with noisy labels.

Denoising Transfer Learning

Causal Discovery in the Presence of Measurement Error: Identifiability Conditions

no code implementations 10 Jun 2017 Kun Zhang, Mingming Gong, Joseph Ramsey, Kayhan Batmanghelich, Peter Spirtes, Clark Glymour

This problem has received much attention in multiple fields, but it is not clear to what extent the causal model for the measurement-error-free variables can be identified in the presence of measurement error with unknown variance.

Causal Discovery

Causal Inference by Identification of Vector Autoregressive Processes with Hidden Components

no code implementations 14 Nov 2014 Philipp Geiger, Kun Zhang, Mingming Gong, Dominik Janzing, Bernhard Schölkopf

A widely applied approach to causal inference from a non-experimental time series $X$, often referred to as "(linear) Granger causal analysis", is to regress present on past and interpret the regression matrix $\hat{B}$ causally.

Causal Inference Time Series
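
The "regress present on past" step can be written out for a VAR(1) model $X_t = B X_{t-1} + \text{noise}$, fitted by ordinary least squares; the toy coefficient matrix and noise scale below are made up.

```python
# Least-squares estimation of the VAR(1) coefficient matrix on a simulated series.
import numpy as np

rng = np.random.default_rng(0)
B_true = np.array([[0.5, 0.2],
                   [0.0, 0.7]])
T = 2000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = X[t - 1] @ B_true.T + 0.1 * rng.normal(size=2)

past, present = X[:-1], X[1:]
B_hat = np.linalg.lstsq(past, present, rcond=None)[0].T   # present ~ past @ B_hat.T
print(B_hat.round(2))   # close to B_true; Granger analysis reads its entries causally
```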
