4 code implementations • NeurIPS 2019 • Xiaosong Zhang, Fang Wan, Chang Liu, Rongrong Ji, Qixiang Ye
In this study, we propose a learning-to-match approach to break the IoU restriction, allowing objects to match anchors in a flexible manner.
Ranked #125 on Object Detection on COCO test-dev
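The contrast between a fixed-IoU assignment and such flexible matching can be sketched in a few lines (the boxes, the 0.5 threshold, and the top-k bag size below are made-up illustrative values; the actual method learns the match end-to-end):

```python
import numpy as np

def iou_matrix(a, b):
    """Pairwise IoU for boxes given as (x1, y1, x2, y2)."""
    lt = np.maximum(a[:, None, :2], b[None, :, :2])
    rb = np.minimum(a[:, None, 2:], b[None, :, 2:])
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a[:, None] + area_b[None, :] - inter)

objects = np.array([[10, 10, 50, 50]], dtype=float)
anchors = np.array([[8, 8, 48, 48], [30, 30, 70, 70],
                    [0, 0, 20, 20], [100, 100, 140, 140]], dtype=float)

ious = iou_matrix(objects, anchors)

# Hard IoU rule: only anchors above a fixed threshold count as positives.
hard_positives = np.where(ious[0] >= 0.5)[0]

# Learning-to-match style: keep a *bag* of top-k candidate anchors per
# object and let training decide which candidate becomes the match.
k = 2
candidate_bag = np.argsort(-ious[0])[:k]
print(hard_positives, candidate_bag)
```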
1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
However, it remains challenging to determine which method is suitable for a given application, since each is built with certain priors or biases.
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
1 code implementation • CVPR 2023 • Chang Liu, Weiming Zhang, Xiangru Lin, Wei zhang, Xiao Tan, Junyu Han, Xiaomao Li, Errui Ding, Jingdong Wang
It employs a "divide-and-conquer" strategy and separately exploits positives for the classification and localization task, which is more robust to the assignment ambiguity.
Ranked #1 on Semi-Supervised Object Detection on COCO 10% labeled data (detector metric)
1 code implementation • International Conference on Computer Vision Workshops 2019 • Dawei Du, Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Lin, QinGhua Hu, Tao Peng, Jiayu Zheng, Xinyao Wang, Yue Zhang, Liefeng Bo, Hailin Shi, Rui Zhu, Aashish Kumar, Aijin Li, Almaz Zinollayev, Anuar Askergaliyev, Arne Schumann, Binjie Mao, Byeongwon Lee, Chang Liu, Changrui Chen, Chunhong Pan, Chunlei Huo, Da Yu, Dechun Cong, Dening Zeng, Dheeraj Reddy Pailla, Di Li, Dong Wang, Donghyeon Cho, Dongyu Zhang, Furui Bai, George Jose, Guangyu Gao, Guizhong Liu, Haitao Xiong, Hao Qi, Haoran Wang, Heqian Qiu, Hongliang Li, Huchuan Lu, Ildoo Kim, Jaekyum Kim, Jane Shen, Jihoon Lee, Jing Ge, Jingjing Xu, Jingkai Zhou, Jonas Meier, Jun Won Choi, Junhao Hu, Junyi Zhang, Junying Huang, Kaiqi Huang, Keyang Wang, Lars Sommer, Lei Jin, Lei Zhang
Results of 33 object detection algorithms are presented.
1 code implementation • 11 Mar 2024 • Pengchong Qiao, Lei Shang, Chang Liu, Baigui Sun, Xiangyang Ji, Jie Chen
In this paper, motivated by object-oriented programming, we model the subject as a derived class whose base class is its semantic category.
2 code implementations • CVPR 2020 • Shi-Xue Zhang, Xiaobin Zhu, Jie-Bo Hou, Chang Liu, Chun Yang, Hongfa Wang, Xu-Cheng Yin
In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection.
3 code implementations • 9 Mar 2022 • Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu
This technical note describes the recent updates of Graphormer, including architecture design modifications and the adaptation to 3D molecular dynamics simulation.
2 code implementations • CVPR 2021 • Zonghao Guo, Chang Liu, Xiaosong Zhang, Jianbin Jiao, Xiangyang Ji, Qixiang Ye
Detecting oriented and densely packed objects remains challenging due to spatial feature aliasing caused by the overlap of receptive fields between objects.
Ranked #34 on Object Detection In Aerial Images on DOTA (using extra training data)
13 code implementations • ICLR 2018 • Xiaojun Xu, Chang Liu, Dawn Song
Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations.
2 code implementations • 16 Dec 2020 • Chang Liu, Zetian Jiang, Runzhong Wang, Junchi Yan, Lingxiao Huang, Pinyan Lu
As such, the agent can terminate inlier matching in a timely manner once the affinity score stops growing; otherwise, an additional parameter, i.e., the number of inliers, would be needed to avoid matching outliers.
1 code implementation • ICLR 2022 • Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu
Denoising diffusion probabilistic models have been recently proposed to generate high-quality samples by estimating the gradient of the data density.
1 code implementation • 13 May 2019 • Huichu Zhang, Siyuan Feng, Chang Liu, Yaoyao Ding, Yichen Zhu, Zihan Zhou, Wei-Nan Zhang, Yong Yu, Haiming Jin, Zhenhui Li
The most commonly used open-source traffic simulator, SUMO, is, however, not scalable to large road networks and heavy traffic flows, which hinders the study of reinforcement learning on traffic scenarios.
Multi-agent Reinforcement Learning • reinforcement-learning • +1
2 code implementations • CVPR 2023 • Chang Liu, Henghui Ding, Xudong Jiang
Existing classic RES datasets and methods commonly support single-target expressions only, i.e., one expression refers to one target object.
Generalized Referring Expression Segmentation • Referring Expression • +1
10 code implementations • ECCV 2020 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu
High-resolution digital images are usually downscaled to fit various display screens or to save the cost of storage and bandwidth, while post-upscaling is adopted to recover the original resolution or the details in zoomed-in images.
1 code implementation • 9 Oct 2022 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Zhouchen Lin, Tie-Yan Liu
To be specific, we develop invertible models to generate valid degraded images and meanwhile transform the distribution of lost contents to the fixed distribution of a latent variable during the forward degradation.
2 code implementations • 2 Mar 2023 • Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu
Context clusters (CoCs) view an image as a set of unorganized points and extract features via a simplified clustering algorithm.
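The point-set view can be roughly illustrated as follows (the image size, the 5-D point layout, and the number of centers are hypothetical; the real model learns its centers and similarity inside a network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image as a set of points": each pixel becomes a 5-D point
# (RGB colour plus normalised x/y position).
H, W = 8, 8
rgb = rng.random((H * W, 3))
xy = np.stack(np.meshgrid(np.linspace(0, 1, W),
                          np.linspace(0, 1, H)), -1).reshape(-1, 2)
points = np.concatenate([rgb, xy], axis=1)           # (64, 5)

# A handful of cluster centres (learned in the real model; sampled here).
centers = points[rng.choice(len(points), 4, replace=False)]

def normalize(a):
    return a / np.linalg.norm(a, axis=-1, keepdims=True)

# Assign every point to its most similar centre (cosine similarity),
# then aggregate each cluster by averaging its members' features.
sim = normalize(points) @ normalize(centers).T       # (64, 4)
assign = sim.argmax(axis=1)
aggregated = np.stack([points[assign == k].mean(axis=0) for k in range(4)])
print(aggregated.shape)   # one aggregated feature vector per cluster
```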
1 code implementation • ICCV 2023 • Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Chen Change Loy
To investigate the feasibility of using motion expressions to ground and segment objects in videos, we propose a large-scale dataset called MeViS, which contains numerous motion expressions to indicate target objects in complex environments.
Ranked #2 on Referring Video Object Segmentation on MeViS
1 code implementation • ICCV 2021 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We introduce transformer and multi-head attention to build a network with an encoder-decoder attention mechanism architecture that "queries" the given image with the language expression.
Generalized Referring Expression Comprehension • Generalized Referring Expression Segmentation • +1
1 code implementation • 28 Oct 2022 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We propose a Vision-Language Transformer (VLT) framework for referring segmentation to facilitate deep interactions among multi-modal information and enhance the holistic understanding of vision-language features.
Ranked #3 on Referring Video Object Segmentation on MeViS
Referring Expression Segmentation • Referring Video Object Segmentation
2 code implementations • 15 Dec 2017 • Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, Dawn Song
In this work, we consider a new type of attacks, called backdoor attacks, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor.
1 code implementation • ICCV 2023 • Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Philip H. S. Torr, Song Bai
However, since the target objects in these existing datasets are usually relatively salient, dominant, and isolated, VOS under complex scenes has rarely been studied.
5 code implementations • 11 May 2020 • Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao
Existing face restoration research typically relies on either a degradation prior or explicit guidance labels for training, which often results in limited generalization over real-world images with heterogeneous degradations and rich background contents.
1 code implementation • ECCV 2020 • Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, Qixiang Ye
Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose.
1 code implementation • 30 Aug 2023 • Shuting He, Henghui Ding, Chang Liu, Xudong Jiang
This dataset encompasses a range of expressions: those referring to multiple targets, expressions with no specific target, and single-target expressions.
Generalized Referring Expression Comprehension • Referring Expression • +1
1 code implementation • 1 Jun 2023 • Chang Liu, HaoNing Wu, Yujie Zhong, Xiaoyun Zhang, Yanfeng Wang, Weidi Xie
Generative models have recently exhibited exceptional capabilities in text-to-image generation, but still struggle to generate image sequences coherently.
1 code implementation • 22 Aug 2017 • Xiaojun Xu, Chang Liu, Qian Feng, Heng Yin, Le Song, Dawn Song
The problem of cross-platform binary code similarity detection aims at detecting whether two binary functions coming from different platforms are similar or not.
1 code implementation • 29 Jun 2023 • Zhao Wang, Chang Liu, Shaoting Zhang, Qi Dou
Foundation models have exhibited remarkable success in various applications, such as disease diagnosis and text report generation.
1 code implementation • 3 May 2023 • Chang Liu, Hiromasa Tamaki, Tomoyasu Yokoyama, Kensuke Wakasugi, Satoshi Yotsuhashi, Minoru Kusaba, Ryo Yoshida
Stable or metastable crystal structures of assembled atoms can be predicted by finding the global or local minima of the energy surface defined on the space of the atomic configurations.
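A minimal illustration of this energy-minimisation view, using a toy Lennard-Jones cluster rather than a real crystal energy surface (atom count, starting geometry, and step size are arbitrary choices for the sketch):

```python
import numpy as np

# Toy stand-in for the energy surface over atomic configurations:
# a Lennard-Jones pair potential on a 4-atom cluster.
def energy(flat, n_atoms=4):
    x = flat.reshape(n_atoms, 3)
    e = 0.0
    for i in range(n_atoms):
        for j in range(i + 1, n_atoms):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * (r ** -12 - r ** -6)
    return e

def num_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Start near a loose tetrahedron-like arrangement and descend the
# energy surface to a (local) minimum with plain gradient descent.
rng = np.random.default_rng(1)
x = (np.array([[0, 0, 0], [1.2, 0, 0], [0, 1.2, 0], [0, 0, 1.2]], float)
     + 0.01 * rng.standard_normal((4, 3))).ravel()
e0 = energy(x)
for _ in range(300):
    x -= 5e-3 * num_grad(energy, x)
print(e0, "->", energy(x))    # energy decreases toward a local minimum
```

Real crystal-structure prediction replaces the toy potential with a first-principles or surrogate energy model and searches many basins, but the local-descent step is the same in spirit.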
2 code implementations • 13 May 2018 • Qi-Zhi Cai, Min Du, Chang Liu, Dawn Song
The existence of adversarial examples hinders such applications.
1 code implementation • 8 Nov 2020 • Chang Liu, Yunjie Tian, Jianbin Jiao, Qixiang Ye
Conventional networks for object skeleton detection are usually hand-crafted.
1 code implementation • ICCV 2023 • Pengxu Wei, Yujing Sun, Xingbei Guo, Chang Liu, Jie Chen, Xiangyang Ji, Liang Lin
Despite substantial advances, single-image super-resolution (SISR) always faces the dilemma of reconstructing high-quality images from the limited information in a single input image, especially in realistic scenarios.
4 code implementations • ICCV 2023 • Peng Jin, Hao Li, Zesen Cheng, Kehan Li, Xiangyang Ji, Chang Liu, Li Yuan, Jie Chen
Existing text-video retrieval solutions are, in essence, discriminant models focused on maximizing the conditional likelihood, i.e., p(candidates|query).
Ranked #15 on Video Retrieval on MSVD
4 code implementations • CVPR 2023 • Peng Jin, Jinfa Huang, Pengfei Xiong, Shangxuan Tian, Chang Liu, Xiangyang Ji, Li Yuan, Jie Chen
Contrastive learning-based video-language representation learning approaches, e.g., CLIP, have achieved outstanding performance by pursuing semantic interaction over pre-defined video-text pairs.
Ranked #8 on Video Question Answering on MSRVTT-QA
4 code implementations • 20 May 2023 • Peng Jin, Hao Li, Zesen Cheng, Jinfa Huang, Zhennan Wang, Li Yuan, Chang Liu, Jie Chen
In this paper, we propose the Disentangled Conceptualization and Set-to-set Alignment (DiCoSA) to simulate the conceptualizing and reasoning process of human beings.
1 code implementation • 27 Nov 2023 • Aiyu Cui, Jay Mahajan, Viraj Shah, Preeti Gomathinayagam, Chang Liu, Svetlana Lazebnik
By contrast, it is hard to collect paired data for in-the-wild scenes, and therefore, virtual try-on for casual images of people with more diverse poses against cluttered backgrounds is rarely studied.
Ranked #1 on Virtual Try-on on StreetTryOn
1 code implementation • 29 Aug 2022 • Chang Liu, Yujie Zhong, Andrew Zisserman, Weidi Xie
In this paper, we consider the problem of generalised visual object counting, with the goal of developing a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars", i.e., zero-shot or few-shot counting.
Ranked #3 on Object Counting on FSC147
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.
2 code implementations • CVPR 2021 • Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, Qixiang Ye
Few-shot object detection has made substantial progress by representing novel class objects using the feature representation learned upon a set of base class objects.
Ranked #14 on Few-Shot Object Detection on MS-COCO (10-shot)
1 code implementation • CVPR 2019 • Fang Wan, Chang Liu, Wei Ke, Xiangyang Ji, Jianbin Jiao, Qixiang Ye
Weakly supervised object detection (WSOD) is a challenging task when provided with image category supervision but required to simultaneously learn object locations and object detectors.
2 code implementations • 8 Nov 2016 • Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels.
2 code implementations • CVPR 2023 • Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly.
1 code implementation • 1 Apr 2018 • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.
1 code implementation • NeurIPS 2021 • Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu
This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for extracting representation, but they rely on a usually uninformative prior distribution $p(z)$ to define a joint distribution, which may render problems like posterior collapse and manifold mismatch.
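The three ingredients named here — a likelihood model p(x|z), an inference model q(z|x), and a prior p(z) — can be sketched with a single-sample evidence lower bound (all Gaussians and weights below are toy constants for illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, std):
    # Sum of elementwise diagonal-Gaussian log densities.
    return (-0.5 * np.log(2 * np.pi) - np.log(std)
            - 0.5 * ((v - mean) / std) ** 2).sum()

x = np.array([0.5, -1.0])

# Inference model q(z|x): an amortised Gaussian (toy constant weights).
q_mean, q_std = 0.3 * x, np.array([0.8, 0.8])
z = q_mean + q_std * rng.standard_normal(2)     # reparameterised sample

# Joint p(x, z) = p(z) * p(x|z): standard-normal prior, Gaussian likelihood.
log_p_z = log_normal(z, np.zeros(2), np.ones(2))
log_p_x_given_z = log_normal(x, 0.9 * z, np.full(2, 0.5))
log_q_z_given_x = log_normal(z, q_mean, q_std)

# Single-sample ELBO: E_q[log p(x, z) - log q(z|x)].
elbo = log_p_z + log_p_x_given_z - log_q_z_given_x
print(elbo)
```

The abstract's point is that the uninformative prior p(z) in this joint is what the proposed method replaces.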
1 code implementation • 3 Feb 2022 • Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu
Molecular conformation generation aims to generate three-dimensional coordinates of all the atoms in a molecule and is an important task in bioinformatics and pharmacology.
1 code implementation • 23 Nov 2023 • Chang Liu, Yuanhe Tian, Yan Song
Specifically, we first cover pivotal RRG approaches based on the task-specific features of radiographs, reports, and the cross-modal relations between them; we then describe the benchmark datasets conventionally used for this task together with their evaluation metrics, analyze the performance of different approaches, and finally summarize the challenges and trends for future research.
1 code implementation • 24 Sep 2019 • Dongling Xiao, Chang Liu, Qi Wang, Chao Wang, Xin Zhang
For general supervised deep learning classification algorithms, the pixel-by-pixel algorithm achieves precise yet inefficient classification with a small number of labeled pixels, whereas the pixel mapping algorithm achieves efficient yet edge-rough classification with more prior labels required.
1 code implementation • 20 Jun 2020 • Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, Qixiang Ye
The generative perception model acts as a feature decoder to focus on comprehending high temporal resolution and short-term representation by introducing a motion-attention mechanism.
1 code implementation • 10 Mar 2020 • Shen Gao, Xiuying Chen, Chang Liu, Li Liu, Dongyan Zhao, Rui Yan
Stickers with vivid and engaging expressions are becoming increasingly popular in online messaging apps, and some works are dedicated to automatically selecting sticker responses by matching the text labels of stickers with previous utterances.
1 code implementation • 3 Sep 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
To this end, we first develop a versatile DReL-based channel estimation framework where a deep residual network (DRN)-based MMSE estimator is derived in terms of Bayesian philosophy.
1 code implementation • 1 Dec 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
Channel estimation is of great importance in realizing practical intelligent reflecting surface-assisted multi-user communication (IRS-MC) systems.
2 code implementations • 22 Feb 2017 • Feng Wang, Xiang Xiang, Chang Liu, Trac D. Tran, Austin Reiter, Gregory D. Hager, Harry Quon, Jian Cheng, Alan L. Yuille
In this way, the expression intensity regression task can benefit from the rich feature representations trained on a huge amount of data for face verification.
1 code implementation • 9 Jun 2022 • Si Shen, Jiangfeng Liu, Litao Lin, Ying Huang, Lin Zhang, Chang Liu, Yutong Feng, Dongbo Wang
The academic literature of social sciences records human civilization and studies human social problems.
1 code implementation • CVPR 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models.
1 code implementation • 16 Nov 2021 • Hengzhi Pei, Kan Ren, Yuqing Yang, Chang Liu, Tao Qin, Dongsheng Li
In this paper, we propose a novel generative framework for RTS data - RTSGAN to tackle the aforementioned challenges.
1 code implementation • CVPR 2023 • Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, Jun Zhu
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
1 code implementation • 20 Jul 2022 • Chang Liu, Xiaoyan Qian, Binxiao Huang, Xiaojuan Qi, Edmund Lam, Siew-Chong Tan, Ngai Wong
By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler.
1 code implementation • 19 May 2023 • Chang Liu, Dong Liu
Specifically, we train a lightweight condition adapter to establish the correlation between external conditions and internal representations of diffusion models.
Ranked #1 on Conditional Text-to-Image Synthesis on COCO 2017 val
Conditional Image Generation • Conditional Text-to-Image Synthesis
1 code implementation • CVPR 2022 • Chang Liu, Chun Yang, Xu-Cheng Yin
Contextual information can be decomposed into temporal information and linguistic information.
1 code implementation • 14 May 2023 • Xi Yang, Kejiang Chen, Weiming Zhang, Chang Liu, Yuang Qi, Jie Zhang, Han Fang, Nenghai Yu
To allow third-parties to autonomously inject watermarks into generated text, we develop a watermarking framework for black-box language model usage scenarios.
1 code implementation • NeurIPS 2021 • Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations.
1 code implementation • NeurIPS 2021 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM), which specify the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only the causal factor for prediction.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
1 code implementation • 27 Mar 2024 • Qiran Zou, Shangyuan Yuan, Shian Du, Yu Wang, Chang Liu, Yi Xu, Jie Chen, Xiangyang Ji
However, these methods encounter challenges such as the lack of coordination between different part motions and difficulties for networks to understand part concepts.
Ranked #8 on Motion Synthesis on HumanML3D
1 code implementation • 4 Jul 2018 • Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin
Particle-based variational inference methods (ParVIs) have gained attention in the Bayesian inference literature, for their capacity to yield flexible and accurate approximations.
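As a rough sketch of what a ParVI does, here is a minimal Stein variational gradient descent update for a 1-D standard-normal target (the kernel bandwidth, step size, and particle count are arbitrary choices for the sketch):

```python
import numpy as np

# Target density: standard normal, so grad log p(x) = -x.
def grad_log_p(x):
    return -x

def rbf(x, y, h=1.0):
    return np.exp(-((x - y) ** 2) / (2 * h * h))

rng = np.random.default_rng(0)
particles = 3.0 + rng.standard_normal(50)   # start far from the target

# Each particle follows a kernelised gradient that mixes attraction
# toward high density with mutual repulsion between particles.
for _ in range(500):
    updated = np.empty_like(particles)
    for i, xi in enumerate(particles):
        k = rbf(particles, xi)
        drift = k * grad_log_p(particles)   # pull toward high-density regions
        repulse = k * (xi - particles)      # gradient of the RBF kernel (h=1)
        updated[i] = xi + 0.05 * (drift + repulse).mean()
    particles = updated
print(particles.mean(), particles.std())    # spread around the target mean 0
```

The repulsion term is what keeps the particle set an approximation of the whole distribution rather than collapsing onto the mode.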
1 code implementation • 7 Jul 2020 • Yunjie Tian, Chang Liu, Lingxi Xie, Jianbin Jiao, Qixiang Ye
The search cost of neural architecture search (NAS) has been largely reduced by weight-sharing methods.
1 code implementation • CVPR 2023 • Yu Wang, Pengchong Qiao, Chang Liu, Guoli Song, Xiawu Zheng, Jie Chen
We argue that an overlooked problem of robust SSL is its corrupted information at the semantic level, which practically limits the development of the field.
1 code implementation • 1 Jun 2023 • Chang Liu, Shunxin Xu, Jialun Peng, Kaidong Zhang, Dong Liu
To address this problem, we propose a two-stage image inpainting method termed SketchRefiner.
1 code implementation • 5 Dec 2023 • Hao Li, Curise Jia, Peng Jin, Zesen Cheng, Kehan Li, Jialu Sui, Chang Liu, Li Yuan
In this paper, we propose the Style-Diversified Query-Based Image Retrieval task, which enables retrieval based on various query styles.
1 code implementation • 21 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Chang Liu, Peipei Fu, Zhen Li
6GAN forces multiple generators to train with a multi-class discriminator and an alias detector to generate non-aliased active targets with different addressing pattern types.
1 code implementation • ICCV 2021 • Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Fu
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a model of good performance using unlabeled samples and a few labeled samples in the target domain, with the help of labeled samples from a source domain.
1 code implementation • 26 Jan 2022 • Minoru Kusaba, Chang Liu, Ryo Yoshida
The prediction of energetically stable crystal structures formed by a given chemical composition is a central problem in solid-state physics.
1 code implementation • 15 Dec 2022 • Bohao Li, Chang Liu, Mengnan Shi, Xiaozhong Chen, Xiangyang Ji, Qixiang Ye
Adapting object detectors learned with sufficient supervision to novel classes under low-data regimes is appealing yet challenging.
1 code implementation • 18 Jan 2024 • Zesen Cheng, Kehan Li, Hao Li, Peng Jin, Chang Liu, Xiawu Zheng, Rongrong Ji, Jie Chen
To mold instance queries to follow Brownian bridge and accomplish alignment with class texts, we design Bridge-Text Alignment (BTA) to learn discriminative bridge-level representations of instances via contrastive objectives.
1 code implementation • 17 Jun 2016 • Chang Liu, Yu Cao, Yan Luo, Guanling Chen, Vinod Vokkarane, Yunsheng Ma
We applied our proposed approach to two real-world food image data sets (UEC-256 and Food-101) and achieved impressive results.
1 code implementation • 20 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Zhen Li, Mingxin Cui, Chang Liu
To do this, we propose an IPv6 address correlation model - SiamHAN.
1 code implementation • ICCV 2023 • Runyi Yu, Zhennan Wang, Yinhuai Wang, Kehan Li, Chang Liu, Haoyi Duan, Xiangyang Ji, Jie Chen
A typical way to introduce position information is adding the absolute Position Embedding (PE) to patch embedding before entering VTs.
1 code implementation • 5 Dec 2023 • Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu
Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.
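A minimal sketch of estimating such a class-confusion transition matrix from predicted versus observed labels, and reading off the most probable noisy class for each class (the label arrays are fabricated for illustration):

```python
import numpy as np

# Toy predicted labels vs. observed (possibly noisy) labels, 3 classes.
preds = np.array([0, 0, 0, 0, 1, 1, 1, 2, 2, 2])
noisy = np.array([0, 0, 1, 1, 1, 1, 2, 2, 2, 0])

n_cls = 3
T = np.zeros((n_cls, n_cls))
for p, y in zip(preds, noisy):
    T[p, y] += 1
T = T / T.sum(axis=1, keepdims=True)   # row i: P(observed = j | class = i)

# For each class, the most probable *other* class it is confused with.
confusion = T.copy()
np.fill_diagonal(confusion, -1)        # exclude the correct label
most_confused = confusion.argmax(axis=1)
print(T)
print(most_confused)
```

In the method described above, that most-confused class would supply the erroneous exemplar paired with a correct one when building the instruction.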
1 code implementation • 1 Feb 2019 • Chang Liu, Jingwei Zhuo, Jun Zhu
It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
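The Langevin dynamics referred to here can be written as the discretised chain x_{t+1} = x_t + ε ∇log p(x_t) + sqrt(2ε) ξ_t; a minimal sketch for a standard-normal target follows (step size, chain length, and burn-in are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unadjusted Langevin dynamics targeting a standard normal,
# for which grad log p(x) = -x.
eps = 0.01
x = 5.0
samples = []
for t in range(20000):
    x = x + eps * (-x) + np.sqrt(2 * eps) * rng.standard_normal()
    if t > 2000:                      # discard burn-in
        samples.append(x)
samples = np.asarray(samples)
print(samples.mean(), samples.std())  # close to the target's 0 and 1
```

Viewing this chain as the gradient flow of the KL divergence on Wasserstein space is exactly the perspective the abstract says inspires ParVI methods.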
1 code implementation • 1 Nov 2022 • Chang Liu, Yuwen Yang, Zhe Xie, Hongtao Lu, Yue Ding
2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure.
1 code implementation • CVPR 2020 • Guanlin Li, Shuya Ding, Jun Luo, Chang Liu
Whereas adversarial training is employed as the main defence strategy against specific adversarial samples, it has limited generalization capability and incurs excessive time complexity.
1 code implementation • 2 Jan 2020 • Dezhao Luo, Chang Liu, Yu Zhou, Dongbao Yang, Can Ma, Qixiang Ye, Weiping Wang
As a proxy task, it converts rich self-supervised representations into video clip operations (options), which enhances the flexibility and reduces the complexity of representation learning.
Ranked #11 on Self-supervised Video Retrieval on HMDB51
1 code implementation • 31 Dec 2022 • Xin Ma, Chang Liu, Chunyu Xie, Long Ye, Yafeng Deng, Xiangyang Ji
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency.
1 code implementation • 22 Aug 2023 • Chang Liu, Bo Wu
Yet, the application of LLMs to graph data remains under-explored.
2 code implementations • 17 May 2019 • Hao Liu, Chang Liu, Jason T. L. Wang, Haimin Wang
The essence of our approach is to model data samples in an AR as time series and use LSTMs to capture temporal information of the data samples.
1 code implementation • ICCV 2023 • Bo Fang, Wenhao Wu, Chang Liu, Yu Zhou, Yuxin Song, Weiping Wang, Xiangbo Shu, Xiangyang Ji, Jingdong Wang
In the refined embedding space, we represent text-video pairs as probabilistic distributions where prototypes are sampled for matching evaluation.
1 code implementation • 5 Nov 2018 • Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, Markus Rempfler, Alessandro Crimi, Russell Takeshi Shinohara, Christoph Berger, Sung Min Ha, Martin Rozycki, Marcel Prastawa, Esther Alberts, Jana Lipkova, John Freymann, Justin Kirby, Michel Bilello, Hassan Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Benedikt Wiestler, Rivka Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-Andre Weber, Abhishek Mahajan, Ujjwal Baid, Elizabeth Gerstner, Dongjin Kwon, Gagan Acharya, Manu Agarwal, Mahbubul Alam, Alberto Albiol, Antonio Albiol, Francisco J. Albiol, Varghese Alex, Nigel Allinson, Pedro H. A. Amorim, Abhijit Amrutkar, Ganesh Anand, Simon Andermatt, Tal Arbel, Pablo Arbelaez, Aaron Avery, Muneeza Azmat, Pranjal B., W Bai, Subhashis Banerjee, Bill Barth, Thomas Batchelder, Kayhan Batmanghelich, Enzo Battistella, Andrew Beers, Mikhail Belyaev, Martin Bendszus, Eze Benson, Jose Bernal, Halandur Nagaraja Bharath, George Biros, Sotirios Bisdas, James Brown, Mariano Cabezas, Shilei Cao, Jorge M. Cardoso, Eric N Carver, Adrià Casamitjana, Laura Silvana Castillo, Marcel Catà, Philippe Cattin, Albert Cerigues, Vinicius S. Chagas, Siddhartha Chandra, Yi-Ju Chang, Shiyu Chang, Ken Chang, Joseph Chazalon, Shengcong Chen, Wei Chen, Jefferson W. Chen, Zhaolin Chen, Kun Cheng, Ahana Roy Choudhury, Roger Chylla, Albert Clérigues, Steven Colleman, Ramiro German Rodriguez Colmeiro, Marc Combalia, Anthony Costa, Xiaomeng Cui, Zhenzhen Dai, Lutao Dai, Laura Alexandra Daza, Eric Deutsch, Changxing Ding, Chao Dong, Shidu Dong, Wojciech Dudzik, Zach Eaton-Rosen, Gary Egan, Guilherme Escudero, Théo Estienne, Richard Everson, Jonathan Fabrizio, Yong Fan, Longwei Fang, Xue Feng, Enzo Ferrante, Lucas Fidon, Martin Fischer, Andrew P. French, Naomi Fridman, Huan Fu, David Fuentes, Yaozong Gao, Evan Gates, David Gering, Amir Gholami, Willi Gierke, Ben Glocker, Mingming Gong, Sandra González-Villá, T. 
Grosges, Yuanfang Guan, Sheng Guo, Sudeep Gupta, Woo-Sup Han, Il Song Han, Konstantin Harmuth, Huiguang He, Aura Hernández-Sabaté, Evelyn Herrmann, Naveen Himthani, Winston Hsu, Cheyu Hsu, Xiaojun Hu, Xiaobin Hu, Yan Hu, Yifan Hu, Rui Hua, Teng-Yi Huang, Weilin Huang, Sabine Van Huffel, Quan Huo, Vivek HV, Khan M. Iftekharuddin, Fabian Isensee, Mobarakol Islam, Aaron S. Jackson, Sachin R. Jambawalikar, Andrew Jesson, Weijian Jian, Peter Jin, V Jeya Maria Jose, Alain Jungo, B Kainz, Konstantinos Kamnitsas, Po-Yu Kao, Ayush Karnawat, Thomas Kellermeier, Adel Kermi, Kurt Keutzer, Mohamed Tarek Khadir, Mahendra Khened, Philipp Kickingereder, Geena Kim, Nik King, Haley Knapp, Urspeter Knecht, Lisa Kohli, Deren Kong, Xiangmao Kong, Simon Koppers, Avinash Kori, Ganapathy Krishnamurthi, Egor Krivov, Piyush Kumar, Kaisar Kushibar, Dmitrii Lachinov, Tryphon Lambrou, Joon Lee, Chengen Lee, Yuehchou Lee, M Lee, Szidonia Lefkovits, Laszlo Lefkovits, James Levitt, Tengfei Li, Hongwei Li, Hongyang Li, Xiaochuan Li, Yuexiang Li, Heng Li, Zhenye Li, Xiaoyu Li, Zeju Li, Xiaogang Li, Wenqi Li, Zheng-Shen Lin, Fengming Lin, Pietro Lio, Chang Liu, Boqiang Liu, Xiang Liu, Mingyuan Liu, Ju Liu, Luyan Liu, Xavier Llado, Marc Moreno Lopez, Pablo Ribalta Lorenzo, Zhentai Lu, Lin Luo, Zhigang Luo, Jun Ma, Kai Ma, Thomas Mackie, Anant Madabushi, Issam Mahmoudi, Klaus H. Maier-Hein, Pradipta Maji, CP Mammen, Andreas Mang, B. S. Manjunath, Michal Marcinkiewicz, S McDonagh, Stephen McKenna, Richard McKinley, Miriam Mehl, Sachin Mehta, Raghav Mehta, Raphael Meier, Christoph Meinel, Dorit Merhof, Craig Meyer, Robert Miller, Sushmita Mitra, Aliasgar Moiyadi, David Molina-Garcia, Miguel A. B. Monteiro, Grzegorz Mrukwa, Andriy Myronenko, Jakub Nalepa, Thuyen Ngo, Dong Nie, Holly Ning, Chen Niu, Nicholas K Nuechterlein, Eric Oermann, Arlindo Oliveira, Diego D. C. Oliveira, Arnau Oliver, Alexander F. I. Osman, Yu-Nian Ou, Sebastien Ourselin, Nikos Paragios, Moo Sung Park, Brad Paschke, J. 
Gregory Pauloski, Kamlesh Pawar, Nick Pawlowski, Linmin Pei, Suting Peng, Silvio M. Pereira, Julian Perez-Beteta, Victor M. Perez-Garcia, Simon Pezold, Bao Pham, Ashish Phophalia, Gemma Piella, G. N. Pillai, Marie Piraud, Maxim Pisov, Anmol Popli, Michael P. Pound, Reza Pourreza, Prateek Prasanna, Vesna Prkovska, Tony P. Pridmore, Santi Puch, Élodie Puybareau, Buyue Qian, Xu Qiao, Martin Rajchl, Swapnil Rane, Michael Rebsamen, Hongliang Ren, Xuhua Ren, Karthik Revanuru, Mina Rezaei, Oliver Rippel, Luis Carlos Rivera, Charlotte Robert, Bruce Rosen, Daniel Rueckert, Mohammed Safwan, Mostafa Salem, Joaquim Salvi, Irina Sanchez, Irina Sánchez, Heitor M. Santos, Emmett Sartor, Dawid Schellingerhout, Klaudius Scheufele, Matthew R. Scott, Artur A. Scussel, Sara Sedlar, Juan Pablo Serrano-Rubio, N. Jon Shah, Nameetha Shah, Mazhar Shaikh, B. Uma Shankar, Zeina Shboul, Haipeng Shen, Dinggang Shen, Linlin Shen, Haocheng Shen, Varun Shenoy, Feng Shi, Hyung Eun Shin, Hai Shu, Diana Sima, M Sinclair, Orjan Smedby, James M. Snyder, Mohammadreza Soltaninejad, Guidong Song, Mehul Soni, Jean Stawiaski, Shashank Subramanian, Li Sun, Roger Sun, Jiawei Sun, Kay Sun, Yu Sun, Guoxia Sun, Shuang Sun, Yannick R Suter, Laszlo Szilagyi, Sanjay Talbar, DaCheng Tao, Zhongzhao Teng, Siddhesh Thakur, Meenakshi H Thakur, Sameer Tharakan, Pallavi Tiwari, Guillaume Tochon, Tuan Tran, Yuhsiang M. Tsai, Kuan-Lun Tseng, Tran Anh Tuan, Vadim Turlapov, Nicholas Tustison, Maria Vakalopoulou, Sergi Valverde, Rami Vanguri, Evgeny Vasiliev, Jonathan Ventura, Luis Vera, Tom Vercauteren, C. A. Verrastro, Lasitha Vidyaratne, Veronica Vilaplana, Ajeet Vivekanandan, Qian Wang, Chiatse J. 
Wang, Wei-Chung Wang, Duo Wang, Ruixuan Wang, Yuanyuan Wang, Chunliang Wang, Guotai Wang, Ning Wen, Xin Wen, Leon Weninger, Wolfgang Wick, Shaocheng Wu, Qiang Wu, Yihong Wu, Yong Xia, Yanwu Xu, Xiaowen Xu, Peiyuan Xu, Tsai-Ling Yang, Xiaoping Yang, Hao-Yu Yang, Junlin Yang, Haojin Yang, Guang Yang, Hongdou Yao, Xujiong Ye, Changchang Yin, Brett Young-Moxon, Jinhua Yu, Xiangyu Yue, Songtao Zhang, Angela Zhang, Kun Zhang, Xue-jie Zhang, Lichi Zhang, Xiaoyue Zhang, Yazhuo Zhang, Lei Zhang, Jian-Guo Zhang, Xiang Zhang, Tianhao Zhang, Sicheng Zhao, Yu Zhao, Xiaomei Zhao, Liang Zhao, Yefeng Zheng, Liming Zhong, Chenhong Zhou, Xiaobing Zhou, Fan Zhou, Hongtu Zhu, Jin Zhu, Ying Zhuge, Weiwei Zong, Jayashree Kalpathy-Cramer, Keyvan Farahani, Christos Davatzikos, Koen van Leemput, Bjoern Menze
This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018.
1 code implementation • 4 Jul 2022 • Chang Liu, Gang Yang, Shuo Wang, Hangxu Wang, Yunhua Zhang, Yutao Wang
We employ the powerful feature extraction capability of Transformer (PVTv2) to extract global semantic information from RGB data and design a lightweight CNN backbone (LWDepthNet) to extract spatial structure information from depth data without pre-training.
1 code implementation • 13 May 2022 • Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu
Thus, it is still possible that those methods can overfit to source domains and perform poorly on target domains.
1 code implementation • 6 Nov 2022 • Yu Yang, Xiaotian Cheng, Chang Liu, Hakan Bilen, Xiangyang Ji
In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains.
2 code implementations • ICLR 2022 • Jiaxin Shi, Chang Liu, Lester Mackey
We introduce a new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries.
1 code implementation • ACL 2022 • Chang Liu, Chongyang Tao, Jiazhan Feng, Dongyan Zhao
Transferring the knowledge to a small model through distillation has raised great interest in recent years.
1 code implementation • 14 Jul 2023 • Jingna Qiu, Frauke Wilm, Mathias Öttl, Maja Schlereth, Chang Liu, Tobias Heimann, Marc Aubreville, Katharina Breininger
We find that the efficiency of this method highly depends on the choice of AL step size (i.e., the combination of region size and the number of selected regions per WSI), and a suboptimal AL step size can result in redundant annotation requests or inflated computation costs.
1 code implementation • 30 Nov 2017 • Chang Liu, Jun Zhu
The benefits are twofold: (i) for inference tasks in Euclidean spaces, RSVGD has the advantage over SVGD of utilizing information geometry, and (ii) for inference tasks on Riemann manifolds, RSVGD brings the unique advantages of SVGD to the Riemannian world.
1 code implementation • 25 Nov 2022 • Qiran Zou, Yu Yang, Wing Yin Cheung, Chang Liu, Xiangyang Ji
Unsupervised foreground-background segmentation aims at extracting salient objects from cluttered backgrounds, where Generative Adversarial Network (GAN) approaches, especially layered GANs, show great promise.
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics such as power consumption, electromagnetic leaks and sound can be exploited by the side-channel attack to compromise the system.
4 code implementations • 27 Aug 2020 • Haodi Jiang, Jiasheng Wang, Chang Liu, Ju Jing, Hao Liu, Jason T. L. Wang, Haimin Wang
Deep learning has drawn a lot of interest in recent years due to its effectiveness in processing big and complex observational data gathered from diverse instruments.
1 code implementation • 17 Mar 2024 • Shantanu Kodgirwar, Lars Loetgering, Chang Liu, Aleena Joseph, Leona Licht, Daniel S. Penagos Molina, Wilhelm Eschen, Jan Rothhardt, Michael Habeck
Image information is restricted by the dynamic range of the detector, which can be addressed using multi-exposure image fusion (MEF).
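A common MEF baseline (not the paper's Bayesian formulation) simply averages registered exposures with per-pixel "well-exposedness" weights, favoring pixels near mid-gray; a minimal illustrative numpy sketch:

```python
import numpy as np

def mef_fuse(exposures, sigma=0.2):
    """Fuse registered exposures of one scene using per-pixel
    'well-exposedness' weights: pixels near mid-gray (0.5) count most.
    A common MEF baseline, not the paper's Bayesian formulation."""
    stack = np.stack(exposures)                    # (num_exposures, ...)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights = weights / weights.sum(axis=0)        # normalize across exposures
    return (weights * stack).sum(axis=0)

# A dark and a well-exposed capture of the same (flat) scene:
dark = np.full(4, 0.05)
bright = np.full(4, 0.55)
fused = mef_fuse([dark, bright])   # dominated by the well-exposed capture
```

Production MEF methods additionally weight by local contrast and saturation, but the weighted-average structure is the same.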
3 code implementations • 22 Feb 2020 • Hao Liu, Chang Liu, Jason T. L. Wang, Haimin Wang
We present two recurrent neural networks (RNNs), one based on gated recurrent units and the other based on long short-term memory, for predicting whether an active region (AR) that produces an M- or X-class flare will also produce a coronal mass ejection (CME).
1 code implementation • 29 Nov 2021 • Mengnan Shi, Chang Liu, Qixiang Ye, Jianbin Jiao
Gating modules have been widely explored in dynamic network pruning to reduce the run-time computational cost of deep neural networks while preserving the representation of features.
1 code implementation • 1 Jun 2023 • Arnaud Deza, Chang Liu, Pashootan Vaezipoor, Elias B. Khalil
In this work, we propose a simple yet novel Constraint Programming approach to find non-commutative algorithms for fast matrix multiplication or provide proof of infeasibility otherwise.
1 code implementation • Pattern Recognition and Computer Vision 2022 • Le Zhang, Qi Feng, Yao Lu, Chang Liu, and Guangming Lu
Attention mechanisms can effectively improve the performance of the mobile networks with a limited computational complexity cost.
1 code implementation • 14 Nov 2023 • Jinpei Guo, Shaofeng Zhang, Runzhong Wang, Chang Liu, Junchi Yan
Meanwhile, on Pascal VOC, QueryTrans improves the accuracy of NGMv2 from $80.1\%$ to $\mathbf{83.3\%}$, and BBGM from $79.0\%$ to $\mathbf{84.5\%}$.
Ranked #1 on Graph Matching on PASCAL VOC (matching accuracy metric)
1 code implementation • 17 May 2020 • Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan
We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.
1 code implementation • 6 Jul 2020 • Yifei Zhang, Chang Liu, Yu Zhou, Wei Wang, Weiping Wang, Qixiang Ye
In this work, we propose a novel clustering based method, which, by iteratively excluding class inconsistent samples during progressive cluster formation, alleviates the impact of noise samples in a simple-yet-effective manner.
1 code implementation • 4 Sep 2020 • Yasser Abduallah, Jason T. L. Wang, Yang Nie, Chang Liu, Haimin Wang
Solar flare prediction plays an important role in understanding and forecasting space weather.
1 code implementation • 24 Jun 2021 • Xianlong Zeng, Fanghao Song, Zhongen Li, Krerkkiat Chusap, Chang Liu
Our method can be divided into three stages: 1) a neighborhood generation stage, which generates instances based on the given sample; 2) a classification stage, which yields classifications on the generated instances to carve out the local decision boundary and delineate the model behavior; and 3) a human-in-the-loop stage, which involves humans in refining and exploring the neighborhood of interest.
BIG-bench Machine Learning • Explainable artificial intelligence +1
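Stages 1 and 2 of the pipeline above can be approximated in spirit by a LIME-style sketch: sample a neighborhood around the instance, label it with the black-box model, and fit a linear surrogate whose weights outline the local decision boundary. This is an illustrative simplification, not the paper's method; `local_boundary_sketch` and the toy model are invented names.

```python
import numpy as np

def local_boundary_sketch(model, x0, n=500, scale=0.1, seed=0):
    """Sample a Gaussian neighborhood around x0 (stage 1), label it with
    the black-box model (stage 2), and fit a least-squares linear
    surrogate whose coefficients sketch the local decision boundary."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(scale=scale, size=(n, x0.size))   # neighborhood
    y = model(X)                                          # black-box labels
    A = np.hstack([X, np.ones((n, 1))])                   # add intercept
    coef, *_ = np.linalg.lstsq(A, y.astype(float), rcond=None)
    return coef[:-1]           # per-feature local importance (no intercept)

# Toy black box: predicts class 1 iff 2*x0 - x1 > 0.
model = lambda X: (2 * X[:, 0] - X[:, 1] > 0).astype(int)
w = local_boundary_sketch(model, np.array([0.0, 0.0]))
# Surrogate weights recover the boundary orientation: w[0] > 0, w[1] < 0.
```

The third, human-in-the-loop stage would then steer where the neighborhood is generated next.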
1 code implementation • 18 May 2023 • Yitong Li, Chang Liu, Jie Ma
Weakly supervised learning based on scribble annotations in target extraction of remote sensing images has drawn much interest due to scribbles' flexibility in denoting winding objects and low cost of manually labeling.
1 code implementation • 17 Apr 2024 • Ziyu Zhou, Wenyuan Shen, Chang Liu
Colorectal cancer (CRC), which frequently originates from initially benign polyps, remains a significant contributor to global cancer-related mortality.
no code implementations • ICML 2018 • Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, Bo Zhang
Stein variational gradient descent (SVGD) is a recently proposed particle-based Bayesian inference method, which has attracted a lot of interest due to its remarkable approximation ability and particle efficiency compared to traditional variational inference and Markov Chain Monte Carlo methods.
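For readers unfamiliar with SVGD, the basic particle update it refers to can be sketched in a few lines. This is an illustrative numpy implementation with an assumed fixed RBF bandwidth (practitioners often use the median heuristic), not the paper's code.

```python
import numpy as np

def svgd_step(x, score, eps=0.05, h=1.0):
    """One Stein variational gradient descent update.

    x:     (n, d) particle positions
    score: function returning grad log p at the particles, shape (n, d)
    h:     RBF kernel bandwidth (fixed here for simplicity)
    """
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]         # diff[i, j] = x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h))   # RBF kernel matrix (n, n)
    drive = k @ score(x) / n                     # kernel-smoothed scores
    repulse = (diff * k[..., None]).sum(1) / (h * n)   # keeps particles apart
    return x + eps * (drive + repulse)

# Example: particles drift from around 3.0 toward a standard normal target.
rng = np.random.default_rng(0)
x = rng.normal(3.0, 0.5, size=(50, 1))
for _ in range(500):
    x = svgd_step(x, lambda z: -z)               # grad log N(0, 1) = -z
```

The repulsive term is what distinguishes SVGD from plain gradient ascent on log-density: it prevents all particles from collapsing onto the mode.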
no code implementations • 16 May 2018 • Chang Liu, Xiangrui Zeng, Kaiwen Wang, Qiang Guo, Min Xu
Cellular Electron Cryo-Tomography (CECT) is a powerful 3D imaging tool for studying the native structure and organization of macromolecules inside single cells.
no code implementations • CVPR 2018 • Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, Dawn Song
Our work sheds new light on understanding adversarial attacks on vision systems which have a language component and shows that attention, bounding box localization, and compositional internal structures are vulnerable to adversarial attacks.
no code implementations • 5 Mar 2018 • Chang Liu, Xiaolin Wu, Xiao Shu
All existing image enhancement methods, such as HDR tone mapping, cannot recover A/D quantization losses due to insufficient or excessive lighting (underflow and overflow problems).
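The underflow/overflow loss is easy to reproduce: once the A/D converter clips, distinct radiances map to the same code, so no post-hoc enhancement can separate them. A small numpy illustration (the radiance values are made up):

```python
import numpy as np

# An 8-bit A/D conversion of a scene whose radiance exceeds the sensor range.
scene = np.array([0.001, 0.2, 0.9, 1.7, 3.0])    # relative radiances
codes = np.clip(np.round(scene * 255), 0, 255).astype(np.uint8)

# Underflow: 0.001 quantizes to code 0, indistinguishable from black.
# Overflow: 1.7 and 3.0 both saturate at code 255 -- their difference is
# destroyed at capture time, which is why tone mapping cannot restore it.
```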
no code implementations • ICLR 2018 • Xinyun Chen, Chang Liu, Dawn Song
In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500x longer than the training samples.
no code implementations • 22 Feb 2018 • Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model.
no code implementations • 20 Feb 2018 • Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, Xiaodong He
This paper proposes a new architecture - Attentive Tensor Product Learning (ATPL) - to represent grammatical structures in deep learning models.
no code implementations • 14 Feb 2018 • Jaime F. Fisac, Chang Liu, Jessica B. Hamrick, S. Shankar Sastry, J. Karl Hedrick, Thomas L. Griffiths, Anca D. Dragan
We introduce $t$-predictability: a measure that quantifies the accuracy and confidence with which human observers can predict the remaining robot plan from the overall task goal and the observed initial $t$ actions in the plan.
no code implementations • 12 Feb 2018 • Chang Liu, Xiangrui Zeng, Ruogu Lin, Xiaodan Liang, Zachary Freyberg, Eric Xing, Min Xu
Cellular Electron Cryo-Tomography (CECT) is a powerful imaging technique for the 3D visualization of cellular structure and organization at submolecular resolution.
no code implementations • ICLR 2018 • Xinyun Chen, Chang Liu, Dawn Song
We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step.
no code implementations • 6 Feb 2018 • Chang Liu, Jessica B. Hamrick, Jaime F. Fisac, Anca D. Dragan, J. Karl Hedrick, S. Shankar Sastry, Thomas L. Griffiths
The study of human-robot interaction is fundamental to the design and use of robotics in real-world applications.
no code implementations • 20 Jul 2017 • Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry, Thomas L. Griffiths, Anca D. Dragan
In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users' objectives as they go.
no code implementations • 19 Jan 2018 • He-Liang Huang, Xi-Lin Wang, Peter P. Rohde, Yi-Han Luo, You-Wei Zhao, Chang Liu, Li Li, Nai-Le Liu, Chao-Yang Lu, Jian-Wei Pan
Topological data analysis offers a robust way to extract useful information from noisy, unstructured data by identifying its underlying structure.
no code implementations • 11 Sep 2017 • Sattar Vakili, Qing Zhao, Chang Liu, Chen-Nee Chuah
We consider the problem of detecting a few targets among a large number of hierarchical data streams.
no code implementations • 18 Feb 2017 • Chang Liu, Fuchun Sun, Changhu Wang, Feng Wang, Alan Yuille
In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model.
no code implementations • 23 Jun 2017 • Wenshuo Wang, Chang Liu, Ding Zhao
For projects that cost millions of dollars, it is critical to determine the right amount of data needed.
no code implementations • 28 Mar 2017 • Shun Yang, Wenshuo Wang, Chang Liu, Kevin Deng, J. Karl Hedrick
We collect a large set of data using The Open Racing Car Simulator (TORCS) and classify the image features into three categories (sky-related, roadside-related, and road-related features). We then design two experimental frameworks to investigate the importance of each single feature for training a CNN controller. The first framework uses the training data with all three features included to train a controller, which is then tested with data that has one feature removed to evaluate the feature's effects.
no code implementations • 11 Dec 2016 • Xiaolin Wu, Xi Zhang, Chang Liu
This article is a sequel to our earlier work [25].
no code implementations • NeurIPS 2016 • Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, Mingcheng Chen
Automatic translation from natural language descriptions into programs is a longstanding challenging problem.
no code implementations • 7 Aug 2016 • Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice.
no code implementations • 20 Oct 2014 • Chang Liu, Yi Xu
We propose a filter method for unsupervised feature selection which is based on the Confidence Machine.
no code implementations • 29 Oct 2014 • Xudong Liu, Bin Zhang, Ting Zhang, Chang Liu
Rating prediction is a basic problem in recommender systems, and one of the most widely used methods is Factorization Machines (FM).
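For reference, the second-order FM prediction mentioned here has a standard closed form that can be evaluated in O(nk) rather than O(n²k); a minimal numpy sketch (dimensions and values below are made up):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine score for one sample.

    x:  (n,) feature vector     w0: bias     w: (n,) linear weights
    V:  (n, k) latent factors; the pairwise weight of features i, j
        is the inner product <V[i], V[j]>.
    Uses the O(nk) identity:
        sum_{i<j} <v_i, v_j> x_i x_j
          = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i (V[i,f] x_i)^2 ]
    """
    s = V.T @ x                                            # (k,)
    pairwise = 0.5 * float((s ** 2 - (V ** 2).T @ (x ** 2)).sum())
    return w0 + float(w @ x) + pairwise

rng = np.random.default_rng(1)
n, k = 6, 3
x, w, V = rng.normal(size=n), rng.normal(size=n), rng.normal(size=(n, k))
score = fm_predict(x, 0.5, w, V)
```

The latent-factor parameterization is what lets FM estimate interaction weights for feature pairs that never co-occur in the training data.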
no code implementations • ECCV 2018 • Chang Liu, Wei Ke, Fei Qin, Qixiang Ye
Hinted by this, we formalize a Linear Span framework and propose the Linear Span Network (LSN), modified by Linear Span Units (LSUs), which minimizes the reconstruction error of the convolutional network.
no code implementations • 12 Nov 2018 • Sophia Collet, Robert Dadashi, Zahi N. Karam, Chang Liu, Parinaz Sobhani, Yevgeniy Vahlis, Ji Chao Zhang
In this work, two approaches for private model aggregation are proposed that enable the transfer of knowledge from existing models trained on other companies' datasets to a new company with limited labeled data while protecting each client company's underlying individual sensitive information.
no code implementations • 21 Dec 2018 • Chang Liu, Zhaowei Shang, Anyong Qin
To address this issue, here we propose a novel deep residual learning model that combines the dilated residual convolution and multi-scale convolution groups.
no code implementations • NeurIPS 2016 • Chang Liu, Jun Zhu, Yang Song
We propose two stochastic gradient MCMC methods for sampling from Bayesian posterior distributions defined on Riemann manifolds with a known geodesic flow, e.g., hyperspheres.
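The closed-form geodesic that such samplers exploit on a hypersphere can be written down directly; a small illustrative sketch (not the authors' sampler):

```python
import numpy as np

def sphere_geodesic(x, v, t):
    """Geodesic flow on the unit hypersphere: starting at unit vector x
    with tangent velocity v (so v @ x == 0), the exact geodesic is
        x(t) = x * cos(|v| t) + (v / |v|) * sin(|v| t).
    Having this in closed form is what lets geodesic-flow MCMC move on
    the manifold without discretization error (illustrative sketch)."""
    speed = np.linalg.norm(v)
    return x * np.cos(speed * t) + (v / speed) * np.sin(speed * t)

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])        # tangent direction at x
y = sphere_geodesic(x, v, 0.3)       # stays exactly on the unit sphere
```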
no code implementations • ICLR 2019 • Xinyun Chen, Chang Liu, Dawn Song
Most existing neural program synthesis approaches employ an encoder-decoder architecture, which uses an encoder to compute the embedding of the given input-output examples, as well as a decoder to generate the program from the embedding following a given syntax.
no code implementations • 29 Mar 2019 • Weiwei Zong, Joon Lee, Chang Liu, Eric Carver, Aharon Feldman, Branislava Janic, Mohamed Elshaikh, Milan Pantelic, David Hearshen, Indrin Chetty, Benjamin Movsas, Ning Wen
Deep learning models have had a great success in disease classifications using large data pools of skin cancer images or lung X-rays.
no code implementations • 4 Apr 2019 • Zhenzhen Dai, Eric Carver, Chang Liu, Joon Lee, Aharon Feldman, Weiwei Zong, Milan Pantelic, Mohamed Elshaikh, Ning Wen
Prostate cancer (PCa) is the most common cancer in men in the United States.
no code implementations • CVPR 2019 • Chang Liu, Fang Wan, Wei Ke, Zhuowei Xiao, Yuan Yao, Xiaosong Zhang, Qixiang Ye
The weight sharing scheme and spatial pooling operations in Convolutional Neural Networks (CNNs) introduce semantic correlation to neighboring pixels on feature maps and therefore deteriorate their pixel-wise classification performance.
no code implementations • WS 2019 • Xinze Guo, Chang Liu, Xiaolong Li, Yiran Wang, Guoliang Li, Feng Wang, Zhitao Xu, Liuyi Yang, Li Ma, Changliang Li
This paper describes the Kingsoft AI Lab{'}s submission to the WMT2019 news translation shared task.
no code implementations • 27 Aug 2019 • Eddie S. J. Du, Chang Liu, David H. Wayne
The ability to accurately predict the fit of fashion items and recommend the correct size is key to reducing merchandise returns in e-commerce.
no code implementations • 30 Aug 2019 • Chang Liu, Yi Dong, Han Yu, Zhiqi Shen, Zhanning Gao, Pan Wang, Changgong Zhang, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Video contents have become a critical tool for promoting products in E-commerce.
no code implementations • 13 Sep 2019 • Xianlong Zeng, Soheil Moosavinasab, En-Ju D Lin, Simon Lin, Razvan Bunescu, Chang Liu
Efficient representation of patients is very important in the healthcare domain and can help with many tasks such as medical risk prediction.
no code implementations • 25 Sep 2019 • Chang Liu, Yanan Xu, Yanmin Zhu
In this paper, we study the problem of inferring fine-grained bike demands anywhere in a new city before the deployment of bikes.
no code implementations • 5 Oct 2019 • Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao, Lawrence Carin
The Straight-Through (ST) estimator is a widely used technique for back-propagating gradients through discrete random variables.
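The ST estimator is easy to state concretely: discretize on the forward pass, but backpropagate as if the operation were the identity. A framework-free illustrative sketch (the toy loss is invented for the example):

```python
import numpy as np

def st_binarize_forward(logits):
    """Forward pass: hard threshold to {0, 1} (non-differentiable)."""
    return (logits > 0).astype(float)

def st_binarize_backward(grad_out):
    """Backward pass: Straight-Through -- pass the incoming gradient
    through unchanged, as if the forward step had been the identity."""
    return grad_out

# Toy chain: loss = sum(b * w) with b = binarize(logits).
logits = np.array([-1.0, 0.5, 2.0])
w = np.array([3.0, -1.0, 2.0])
b = st_binarize_forward(logits)                   # [0., 1., 1.]
dloss_db = w                                      # exact gradient w.r.t. b
dloss_dlogits = st_binarize_backward(dloss_db)    # ST estimate: just w
```

The true gradient of the threshold is zero almost everywhere, so without ST (or a smoothed relaxation) no learning signal would reach `logits` at all.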
no code implementations • 16 Oct 2019 • Wenqiang Xu, Yanjun Fu, Yuchen Luo, Chang Liu, Cewu Lu
Fine-grained recognition task deals with sub-category classification problem, which is important for real-world applications.
no code implementations • 1 Nov 2019 • Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, Yunji Chen
Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers.
no code implementations • 10 Mar 2019 • Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, Timothy Sherwood, Yuan Xie
As neural networks continue their reach into nearly every aspect of software operations, the details of those networks become an increasingly sensitive subject.
Cryptography and Security • Hardware Architecture
no code implementations • 23 Dec 2019 • Minoru Kusaba, Chang Liu, Yukinori Koyama, Kiyoyuki Terakura, Ryo Yoshida
In 1869, the first draft of the periodic table was published by Russian chemist Dmitri Mendeleev.
no code implementations • 3 Feb 2020 • Di Huang, Xishan Zhang, Rui Zhang, Tian Zhi, Deyuan He, Jiaming Guo, Chang Liu, Qi Guo, Zidong Du, Shaoli Liu, Tianshi Chen, Yunji Chen
In this paper, we propose a novel Decomposable Winograd Method (DWM), which breaks through the limitations of the original Winograd minimal filtering algorithm, extending it to wide and general convolutions.
no code implementations • 28 Feb 2020 • Yao Mu, Shengbo Eben Li, Chang Liu, Qi Sun, Bingbing Nie, Bo Cheng, Baiyu Peng
This paper presents a mixed reinforcement learning (mixed RL) algorithm by simultaneously using dual representations of environmental dynamics to search the optimal policy with the purpose of improving both learning accuracy and training speed.
no code implementations • 19 Apr 2020 • Chao Qu, Hui Li, Chang Liu, Junwu Xiong, James Zhang, Wei Chu, Weiqiang Wang, Yuan Qi, Le Song
We propose a \emph{collaborative} multi-agent reinforcement learning algorithm named variational policy propagation (VPP) to learn a \emph{joint} policy through the interactions over agents.
Multi-agent Reinforcement Learning • reinforcement-learning +2
no code implementations • 12 Dec 2018 • Zhe Li, Caiwen Ding, Siyue Wang, Wujie Wen, Youwei Zhuo, Chang Liu, Qinru Qiu, Wenyao Xu, Xue Lin, Xuehai Qian, Yanzhi Wang
It is a challenging task to have real-time, efficient, and accurate hardware RNN implementations because of the high sensitivity to imprecision accumulation and the requirement of special activation function implementations.
Automatic Speech Recognition (ASR) +3
no code implementations • CVPR 2020 • Gaurav Mittal, Chang Liu, Nikolaos Karianakis, Victor Fragoso, Mei Chen, Yun Fu
To reduce HPO time, we present HyperSTAR (System for Task Aware Hyperparameter Recommendation), a task-aware method to warm-start HPO for deep neural networks.
no code implementations • 26 May 2020 • Lingbo Yang, Pan Wang, Chang Liu, Zhanning Gao, Peiran Ren, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Xian-Sheng Hua, Wen Gao
Human pose transfer (HPT) is an emerging research topic with huge potential in fashion design, media production, online advertising and virtual reality.
no code implementations • 8 May 2020 • Hao Liu, Yan Xu, Jiasheng Wang, Ju Jing, Chang Liu, Jason T. L. Wang, Haimin Wang
By learning the latent patterns in the training data prepared by the physics-based ME tool, the proposed CNN method is able to infer vector magnetic fields from the Stokes profiles of GST/NIRIS.
Solar and Stellar Astrophysics
no code implementations • 4 Jun 2020 • Weihao Jiang, Zhaozhi Xie, Yaoyi Li, Chang Liu, Hongtao Lu
Many of these applications need to perform a real-time and efficient prediction for semantic segmentation with a light-weighted network.
no code implementations • 22 Jun 2020 • Gelu Nita, Manolis Georgoulis, Irina Kitiashvili, Viacheslav Sadykov, Enrico Camporeale, Alexander Kosovichev, Haimin Wang, Vincent Oria, Jason Wang, Rafal Angryk, Berkay Aydin, Azim Ahmadzadeh, Xiaoli Bai, Timothy Bastian, Soukaina Filali Boubrahimi, Bin Chen, Alisdair Davey, Sheldon Fereira, Gregory Fleishman, Dale Gary, Andrew Gerrard, Gregory Hellbourg, Katherine Herbert, Jack Ireland, Egor Illarionov, Natsuha Kuroda, Qin Li, Chang Liu, Yuexin Liu, Hyomin Kim, Dustin Kempton, Ruizhe Ma, Petrus Martens, Ryan McGranaghan, Edward Semones, John Stefan, Andrey Stejko, Yaireska Collado-Vega, Meiqi Wang, Yan Xu, Sijie Yu
The authors of this white paper met on 16-17 January 2020 at the New Jersey Institute of Technology, Newark, NJ, for a 2-day workshop that brought together a group of heliophysicists, data providers, expert modelers, and computer/data scientists.
no code implementations • 22 Jun 2020 • Yaolong Wang, Mingqing Xiao, Chang Liu, Shuxin Zheng, Tie-Yan Liu
Specifically, ILC introduces an invertible encoding module to replace the encoder-decoder structure, producing a low-dimensional informative latent representation while transforming the lost information into an auxiliary latent variable that won't be further coded or stored.
no code implementations • 20 Jun 2020 • Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
This paper investigates capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.
no code implementations • ICML 2020 • Michael Zhu, Chang Liu, Jun Zhu
Particle-based Variational Inference methods (ParVIs), like Stein Variational Gradient Descent, are nonparametric variational inference methods that optimize a set of particles to best approximate a target distribution.
no code implementations • 11 Sep 2020 • Jianan Li, Jimei Yang, Jianming Zhang, Chang Liu, Christina Wang, Tingfa Xu
In this paper, we introduce Attribute-conditioned Layout GAN to incorporate the attributes of design elements for graphic layout generation by forcing both the generator and the discriminator to meet attribute conditions.
no code implementations • 17 Sep 2020 • Chang Liu, Huichu Zhang, Wei-Nan Zhang, Guanjie Zheng, Yong Yu
The heavy traffic congestion problem has always been a concern for modern cities.
no code implementations • 1 Jan 2021 • Chang Liu, Kai Li, Yun Fu
Unsupervised domain adaptation (UDA) is to make predictions for unlabeled data in a target domain with labeled data from source domain available.
no code implementations • 4 Nov 2020 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid spurious correlation, we propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
no code implementations • 10 Nov 2020 • Chang Liu, Wenzhong Yan, Ankur Mehta
Based on an equivalent plate model, we develop and validate analytical formulas for the behavioral specifications of OADLC mechanisms; the analytical formulas can be described as expressions of design parameters.
Robotics
no code implementations • 19 Nov 2020 • Yuanqiang Cai, Chang Liu, Weiqiang Wang, Qixiang Ye
With only bounding-box annotations in the spatial domain, existing video scene text detection (VSTD) benchmarks lack temporal relation of text instances among video frames, which hinders the development of video text-related applications.
no code implementations • 2 Dec 2020 • Tangqing Cao, Wenqi Guo, Wang Lu, Yunfei Xue, Wenjun Lu, Jing Su, Christian H. Liebscher, Chang Liu, Gerhard Dehm
Such a softening behavior can be related to the interaction of dislocations with short-range clustering.
Materials Science
no code implementations • 11 Sep 2020 • Chang Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
To eliminate the requirement of channel estimation and to improve the system performance, in this paper, we adopt a deep transfer learning (DTL) approach to implicitly extract the features of channel and directly recover tag symbols.
no code implementations • 16 Sep 2020 • Chang Liu, Weijie Yuan, Zhiqiang Wei, Xuemeng Liu, Derrick Wing Kwan Ng
Unmanned aerial vehicle (UAV)-assisted communication becomes a promising technique to realize the beyond fifth generation (5G) wireless networks, due to the high mobility and maneuverability of UAVs which can adapt to heterogeneous requirements of different applications.
no code implementations • 10 Nov 2020 • Chang Liu, Xuemeng Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
Existing tag signal detection algorithms inevitably suffer from a high bit error rate (BER) due to the difficulties in estimating the channel state information (CSI).
no code implementations • 7 Dec 2020 • Chang Liu, Yixing Huang, Joscha Maier, Laura Klein, Marc Kachelrieß, Andreas Maier
For organ-specific AEC, a preliminary CT reconstruction is necessary to estimate organ shapes for dose optimization, where only a few projections are allowed for real-time reconstruction.
no code implementations • 4 Jan 2021 • Ya'nan Wang, Zhuqing Jiang, Chang Liu, Kai Li, Aidong Men, Haiying Wang
This paper proposes a neural network for multi-level low-light image enhancement, which is user-friendly to meet various requirements by selecting different images as brightness reference.
no code implementations • 6 Jan 2021 • Bernhard Kliem, Jeongwoo Lee, Rui Liu, Stephen M. White, Chang Liu, Satoshi Masuda
We present evidence that a magnetic flux rope was formed before a coronal mass ejection (CME) and its associated long-duration flare during a pair of preceding confined eruptions and associated impulsive flares in a compound event in NOAA Active Region 12371.
Solar and Stellar Astrophysics
no code implementations • 20 Jan 2021 • Zhuqing Jiang, Chang Liu, Ya'nan Wang, Kai Li, Aidong Men, Haiying Wang, Haiyong Luo
With the goal of tuning up the brightness, low-light image enhancement enjoys numerous applications, such as surveillance, remote sensing and computational photography.
no code implementations • 22 Jan 2021 • Chang Liu, Henghui Ding, Xudong Jiang
In this paper, we argue that recovering these microscopic details relies on low-level but high-definition texture features.
no code implementations • 5 Mar 2021 • Chang Liu, Xiaoguang Li, Guohao Cai, Zhenhua Dong, Hong Zhu, Lifeng Shang
It is still an open question to leverage various types of information under the BERT framework.
no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
Being expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach by transferring knowledge from the abundant typical pneumonia datasets for COVID-19 image classification.
no code implementations • 6 Mar 2021 • Jeremy Beauchamp, Razvan Bunescu, Cindy Marling, Zhongen Li, Chang Liu
In this work, we invert the "what-if" scenario and introduce a similar architecture based on chaining two LSTMs that can be trained to make either insulin or carbohydrate recommendations aimed at reaching a desired BG level in the future.
no code implementations • 16 Mar 2021 • Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang
This paper proposes a novel ternary hash encoding for learning to hash methods, which provides a principled more efficient coding scheme with performances better than those of the state-of-the-art binary hashing counterparts.
no code implementations • 17 Mar 2021 • Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan
To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN).
no code implementations • 22 Mar 2021 • Hua Wei, Chacha Chen, Chang Liu, Guanjie Zheng, Zhenhui Li
Simulation of the real-world traffic can be used to help validate the transportation policies.
no code implementations • 22 Mar 2021 • Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong
The neuromorphic event cameras, which capture the optical changes of a scene, have drawn increasing attention due to their high speed and low power consumption.
no code implementations • 8 Oct 2020 • Yunfan Jiang, Jingjing Si, Rui Zhang, Godwin Enemali, Bin Zhou, Hugh McCann, Chang Liu
Chemical Species Tomography (CST) has been widely used for in situ imaging of critical parameters, e.g., species concentration and temperature, in reactive flows.
no code implementations • 6 Apr 2021 • Boyu Yang, Mingbao Lin, Binghao Liu, Mengying Fu, Chang Liu, Rongrong Ji, Qixiang Ye
By tentatively expanding network nodes, LEC-Net enlarges the representation capacity of features, alleviating feature drift of old network from the perspective of model regularization.
no code implementations • 29 Nov 2020 • Yan He, Jifang Qiu, Chang Liu, Yue Liu, Jian Wu
The latest theoretical advances in the field of unlimited sampling framework (USF) show the potential to avoid clipping problems of analog-to-digital converters (ADC).
no code implementations • 20 Nov 2020 • Godwin Enemali, Rui Zhang, Hugh McCann, Chang Liu
Although a fully parallel data acquisition (DAQ) and signal processing system can achieve these functionalities with maximised temporal response, it leads to a highly complex, expensive and power-consuming instrumentation system with high potential for inconsistency between the sampled beams due to the electronics alone.
no code implementations • 4 Sep 2020 • Chang Liu, Jiahui Sun, Haiming Jin, Meng Ai, Qun Li, Cheng Zhang, Kehua Sheng, Guobin Wu, XiaoHu Qie, Xinbing Wang
Thus, in this paper, we exploit adaptive dispatching intervals to boost the platform's profit under a guarantee of the maximum passenger waiting time.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
no code implementations • 18 May 2021 • Chang Liu, Guanjie Zheng, Zhenhui Li
Therefore, in this paper, we propose to learn the human routing model, which is one of the most essential parts of a traffic simulator.
no code implementations • 10 Feb 2021 • Rui Zhang, Jingjing Si, Godwin Enemali, Yong Bao, Chang Liu
The proposed scheme was both numerically and experimentally validated using a CST sensor with 32 laser beams using a variety of computational tomographic algorithms.
no code implementations • 5 Jun 2021 • Jingjing Si, Guoliang Li, Yinbo Cheng, Rui Zhang, Godwin Enemali, Chang Liu
As an in situ combustion diagnostic tool, Tunable Diode Laser Absorption Spectroscopy (TDLAS) tomography has been widely used for imaging of two-dimensional temperature distributions in reactive flows.
no code implementations • SEMEVAL 2020 • Chang Liu, Dong Yu
We demonstrate the effectiveness of our approaches, which achieve an F1 of 0.95 on subtask 1 while using only a subset of the given training set to fine-tune the BERT model; our official submission achieves an F1 of 0.802, ranking us 16th in the competition.
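As a reminder of the metric quoted here, F1 is the harmonic mean of precision and recall; a minimal sketch with invented counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 80 true positives, 10 false positives, 25 false negatives:
score = f1_score(80, 10, 25)   # precision ~0.889, recall ~0.762
```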
no code implementations • 18 Jun 2021 • Chang Liu, Xiaolin Wu
Nighttime photographers are often troubled by light pollution of unwanted artificial lights.
no code implementations • 24 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
In addition, our framework showed a great generalizability potential to transfer learned knowledge from one institution to another, paving the way for future healthcare model pre-training across institutions.
no code implementations • 23 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
The claims data, containing medical codes, services information, and incurred expenditure, can be a good resource for estimating an individual's health condition and medical risk level.
no code implementations • 25 Jun 2021 • Tianle Yue, Hang Yang, Zongliang Du, Chang Liu, Khalil I. Elkhodary, Shan Tang, Xu Guo
During offline training, a mapping function is built between high and low resolution representations of a given design domain.