no code implementations • ECCV 2020 • Fan Wang, Huidong Liu, Dimitris Samaras, Chao Chen
We show in experiments that our method generates synthetic images with realistic topology.
no code implementations • 12 Sep 2023 • Moyan Li, Jinmiao Fu, Shaoyuan Xu, Huidong Liu, Jia Liu, Bryan Wang
Unlike public data, another practical challenge on shopping websites is that some paired images are of low quality.
1 code implementation • 11 Mar 2023 • Lei Zhou, Huidong Liu, Joseph Bae, Junjun He, Dimitris Samaras, Prateek Prasanna
To this end, we reformulate segmentation as a sparse encoding → token completion → dense decoding (SCD) pipeline.
1 code implementation • 10 Mar 2022 • Lei Zhou, Huidong Liu, Joseph Bae, Junjun He, Dimitris Samaras, Prateek Prasanna
Masked Autoencoder (MAE) has recently been shown to be effective in pre-training Vision Transformers (ViT) for natural image analysis.
1 code implementation • 18 Jan 2022 • Lei Zhou, Joseph Bae, Huidong Liu, Gagandeep Singh, Jeremy Green, Amit Gupta, Dimitris Samaras, Prateek Prasanna
Well-labeled datasets of chest radiographs (CXRs) are difficult to acquire due to the high cost of annotation.
no code implementations • 7 Dec 2021 • Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, Yi Sun
In this paper, we propose the Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new framework that unifies two types of cross-modality attention, sequence-wise attention and modality-wise attention, to effectively fuse information from image and text pairs.
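A minimal NumPy sketch of the two attention types named above, assuming toy shapes and hypothetical function names (this is an illustration of the general idea, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sequence_wise_attention(img_tokens, txt_tokens):
    # Concatenate image patches and text tokens, then self-attend
    # across the joint sequence so the modalities can exchange information.
    seq = np.concatenate([img_tokens, txt_tokens], axis=0)  # (n_img + n_txt, d)
    scores = seq @ seq.T / np.sqrt(seq.shape[1])
    return softmax(scores) @ seq

def modality_wise_attention(img_vec, txt_vec, w):
    # Score each modality's pooled vector and take a weighted sum,
    # so an uninformative modality can be down-weighted.
    feats = np.stack([img_vec, txt_vec])                    # (2, d)
    weights = softmax(feats @ w)                            # (2,)
    return weights @ feats

rng = np.random.default_rng(0)
d = 8
img, txt = rng.normal(size=(4, d)), rng.normal(size=(6, d))
fused_seq = sequence_wise_attention(img, txt)               # (10, d)
fused_vec = modality_wise_attention(fused_seq[:4].mean(0),
                                    fused_seq[4:].mean(0),
                                    rng.normal(size=d))
print(fused_seq.shape, fused_vec.shape)
```

Sequence-wise attention mixes tokens across modalities; modality-wise attention then decides how much each modality contributes to the final representation.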
no code implementations • 29 Sep 2021 • Huidong Liu, Ke Ma, Lei Zhou, Dimitris Samaras
If the MRE is smaller than 1, then every target point is guaranteed to have an area in the source distribution that is mapped to it.
1 code implementation • 9 Feb 2021 • Yikai Zhang, Hui Qu, Qi Chang, Huidong Liu, Dimitris Metaxas, Chao Chen
A federated GAN jointly trains a centralized generator and multiple private discriminators hosted at different sites.
1 code implementation • NeurIPS 2020 • Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai
Existing crowd counting methods either use a Gaussian to smooth each annotated dot or estimate the likelihood of every pixel given the annotated points.
Ranked #4 on Crowd Counting on UCF CC 50
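A short sketch of the Gaussian-smoothing step this paper argues against: each annotated dot is replaced by a normalized Gaussian, so the resulting density map sums to the crowd count (toy sizes; names are illustrative):

```python
import numpy as np

def dots_to_density(points, h, w, sigma=4.0):
    # Place a normalized Gaussian at each annotated head location;
    # the density map then sums to the number of people.
    ys, xs = np.mgrid[0:h, 0:w]
    density = np.zeros((h, w))
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        density += g / g.sum()  # each person contributes exactly 1 in total
    return density

dmap = dots_to_density([(10, 12), (30, 25), (50, 40)], h=64, w=64)
print(round(dmap.sum(), 4))  # → 3.0, the number of annotated dots
```

The choice of `sigma` is a hand-tuned hyperparameter, which is one motivation for loss functions that work directly on the dot annotations instead.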
no code implementations • ICCV 2019 • Huidong Liu, Xianfeng Gu, Dimitris Samaras
Based on the quadratic transport cost, we propose an Optimal Transport Regularizer (OTR) to stabilize the training process of WGAN-QC.
no code implementations • 16 Sep 2018 • Huidong Liu, Yang Guo, Na Lei, Zhixin Shu, Shing-Tung Yau, Dimitris Samaras, Xianfeng Gu
Experimental results on an eight-Gaussian dataset show that the proposed OT can handle multi-cluster distributions.
no code implementations • ICML 2018 • Huidong Liu, Xianfeng Gu, Dimitris Samaras
In this paper, we propose a two-step method to compute the Wasserstein distance in Wasserstein Generative Adversarial Networks (WGANs): 1) The convex part of our objective can be solved by linear programming; 2) The non-convex residual can be approximated by a deep neural network.
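For intuition about the quantity being computed, the 1-D case has a closed form: the Wasserstein-1 distance between two equal-size empirical distributions is the mean gap between their sorted samples. This is not the paper's two-step method, just a sanity-check sketch of what a WGAN critic approximates variationally in higher dimensions:

```python
import numpy as np

def wasserstein1_1d(x, y):
    # Closed-form W1 between two equal-size empirical 1-D distributions:
    # the optimal coupling matches sorted samples pairwise.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=10000)
b = rng.normal(2.0, 1.0, size=10000)
print(wasserstein1_1d(a, b))  # ~2.0: W1 between N(0,1) and N(2,1) is the mean shift
```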
1 code implementation • 7 May 2018 • Yingkai Li, Huidong Liu
In this paper, we implement the Stochastic Damped LBFGS (SdLBFGS) for stochastic non-convex optimization.
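The "damped" part refers to a Powell-style correction of the curvature pair (s, y): with noisy stochastic gradients, s·y can go negative, which would break the quasi-Newton update. A minimal sketch of the damping step, assuming a scaled-identity Hessian approximation (the paper's SdLBFGS details may differ):

```python
import numpy as np

def damped_pair(s, y, H_diag=1.0, delta=0.25):
    # Powell-style damping: blend y with B s so that s^T y_bar stays
    # sufficiently positive, keeping the L-BFGS update well defined
    # even when stochastic gradient noise produces negative curvature.
    Bs = s / H_diag           # B approximated by a scaled identity
    sBs = s @ Bs
    sy = s @ y
    if sy >= delta * sBs:
        theta = 1.0           # curvature already good; keep y unchanged
    else:
        theta = (1 - delta) * sBs / (sBs - sy)
    return theta * y + (1 - theta) * Bs

s = np.array([1.0, 0.0])
y = np.array([-0.5, 0.2])     # negative curvature: s @ y < 0
yb = damped_pair(s, y)
print(s @ y < 0, s @ yb > 0)  # → True True: damping restores s^T y_bar > 0
```

In the else branch, the blend is chosen so that s·y_bar lands exactly at delta · sᵀBs, the minimum curvature the update tolerates.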