no code implementations • CCL 2020 • Hongrui Wang, Chang Liu, Dong Yu
The construction of moral lexicon resources is a research focus of ethical computing for artificial intelligence. Because moral behaviors are complex and diverse, the taxonomies of existing English moral lexicons are incomplete, and no comparable lexicon resource yet exists for Chinese; both the theoretical framework and the construction method remain to be explored. To address these problems, this paper proposes the task of building a Chinese moral lexicon for AI ethical computing, designs four categories of labels and four types, and obtains a Chinese moral lexicon containing 25,012 words. Experimental results show that this lexicon resource not only enables machines to learn moral knowledge and judge the moral labels and types of words, but also provides data support for sentence-level moral text analysis.
no code implementations • ACL 2022 • Chang Liu, Xu Tan, Chongyang Tao, Zhenxin Fu, Dongyan Zhao, Tie-Yan Liu, Rui Yan
To enable the chatbot to foresee the dialogue future, we design a beam-search-like roll-out strategy for dialogue future simulation using a typical dialogue generation model and a dialogue selector.
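To make the idea concrete, a beam-search-like roll-out can be sketched as below; this is an illustrative toy, not the paper's implementation, and `step_fn`, `select_fn`, and the toy dialogue model are hypothetical stand-ins for the dialogue generation model and the dialogue selector:

```python
import math

def rollout(step_fn, select_fn, prefix, beam=2, depth=2):
    """Beam-search-like roll-out: simulate candidate dialogue futures with a
    generation model (step_fn), then pick the best one with a selector."""
    frontier = [(prefix, 0.0)]
    for _ in range(depth):
        expanded = []
        for seq, logp in frontier:
            for token, p in step_fn(seq):          # generation-model proposals
                expanded.append((seq + [token], logp + math.log(p)))
        # Keep only the `beam` most likely partial futures.
        frontier = sorted(expanded, key=lambda t: -t[1])[:beam]
    return max(frontier, key=lambda t: select_fn(t[0]))[0]

# Toy "dialogue model": always proposes two next utterances with fixed probs.
step = lambda seq: [("sure", 0.6), ("bye", 0.4)]
# Toy "selector": prefer futures that keep the conversation going.
select = lambda seq: -seq.count("bye")
print(rollout(step, select, ["hi"]))   # → ['hi', 'sure', 'sure']
```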
no code implementations • ICML 2020 • Michael Zhu, Chang Liu, Jun Zhu
Particle-based Variational Inference methods (ParVIs), like Stein Variational Gradient Descent, are nonparametric variational inference methods that optimize a set of particles to best approximate a target distribution.
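For readers new to ParVIs, a minimal NumPy sketch of the plain SVGD update (the method this line of work generalizes) is shown below; the median-heuristic bandwidth and all names are illustrative choices, not the paper's code:

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size=0.2):
    """One Stein Variational Gradient Descent update with an RBF kernel.

    particles: (n, d) array; grad_log_p maps an (n, d) array to the score
    of the target density evaluated at each particle.
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]      # (n, n, d)
    sq_dists = (diffs ** 2).sum(axis=-1)                       # (n, n)
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8             # bandwidth
    k = np.exp(-sq_dists / h)                                  # kernel matrix
    drive = k @ grad_log_p(particles) / n                      # pull toward high density
    repulse = (-(2.0 / h) * k[:, :, None] * diffs).sum(0) / n  # keep particles spread
    return particles + step_size * (drive + repulse)

# Usage: transport particles initialized far away toward N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=0.5, size=(50, 1))
for _ in range(1000):
    x = svgd_step(x, lambda p: -p)     # score of N(0, 1) is -x
print(float(x.mean()))                 # mean has moved from 5 toward 0
```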
no code implementations • COLING 2022 • Jiazhan Feng, Chongyang Tao, Zhen Li, Chang Liu, Tao Shen, Dongyan Zhao
In this paper, we propose a reciprocal learning approach to jointly optimize a knowledge retriever and a response ranker for knowledge-grounded response retrieval without ground-truth knowledge labels.
1 code implementation • ACL 2022 • Chang Liu, Chongyang Tao, Jiazhan Feng, Dongyan Zhao
Transferring the knowledge to a small model through distillation has raised great interest in recent years.
no code implementations • CCL 2020 • Chang Liu, Shengxiang Gao, Zhengtao Yu, Yuxin Huang, Congcong You
Chinese-Vietnamese parallel sentence pair extraction is an important way to alleviate the scarcity of Chinese-Vietnamese parallel corpora. Parallel sentence pair extraction can be cast as a sentence-similarity classification task in a shared semantic space, whose core is bilingual semantic space alignment. Traditional semantic space alignment methods rely on large-scale bilingual parallel corpora, but Vietnamese is a low-resource language for which such corpora are hard to obtain. To address this problem, this paper proposes a Chinese-Vietnamese parallel sentence pair extraction method that uses a seed dictionary for cross-lingual bilingual pre-training together with Bi-LSTM (Bi-directional Long Short-Term Memory). Pre-training requires only large amounts of Chinese and Vietnamese monolingual text and a Chinese-Vietnamese seed dictionary, which maps the two languages into a common semantic space for word alignment. Bi-LSTM and CNN (Convolutional Neural Networks) are then used to extract global and local sentence features, respectively, to maximize the semantic relevance between Chinese-Vietnamese sentence pairs. Experimental results show that the proposed model improves the F1 score by 7.1% over the baseline.
no code implementations • 28 Mar 2023 • Xiao Yang, Chang Liu, Longlong Xu, Yikai Wang, Yinpeng Dong, Ning Chen, Hang Su, Jun Zhu
The goal of this work is to develop a more reliable technique that can carry out an end-to-end evaluation of adversarial robustness for commercial systems.
no code implementations • 27 Mar 2023 • Xiaoyan Qian, Chang Liu, Xiaojuan Qi, Siew-Chong Tan, Edmund Lam, Ngai Wong
3D automatic annotation has received increased attention since manually annotating 3D point clouds is laborious.
1 code implementation • 27 Mar 2023 • Chang Liu, Weiming Zhang, Xiangru Lin, Wei zhang, Xiao Tan, Junyu Han, Xiaomao Li, Errui Ding, Jingdong Wang
It employs a "divide-and-conquer" strategy and separately exploits positives for the classification and localization task, which is more robust to the assignment ambiguity.
1 code implementation • 26 Mar 2023 • Yitian Zhang, Yue Bai, Chang Liu, Huan Wang, Sheng Li, Yun Fu
To fix this issue, we propose a general framework, named Frame Flexible Network (FFN), which not only enables the model to be evaluated at different frames to adjust its computation, but also reduces the memory costs of storing multiple models significantly.
1 code implementation • 25 Mar 2023 • Peng Jin, Jinfa Huang, Pengfei Xiong, Shangxuan Tian, Chang Liu, Xiangyang Ji, Li Yuan, Jie Chen
Contrastive learning-based video-language representation learning approaches, e.g., CLIP, have achieved outstanding performance; they pursue semantic interaction upon pre-defined video-text pairs.
no code implementations • 23 Mar 2023 • Kehan Li, Yian Zhao, Zhennan Wang, Zesen Cheng, Peng Jin, Xiangyang Ji, Li Yuan, Chang Liu, Jie Chen
Interactive segmentation enables users to segment as needed by providing cues of objects, which introduces human-computer interaction for many fields, such as image editing and medical image analysis.
no code implementations • 17 Mar 2023 • Peng Jin, Hao Li, Zesen Cheng, Kehan Li, Xiangyang Ji, Chang Liu, Li Yuan, Jie Chen
Existing text-video retrieval solutions are, in essence, discriminant models focused on maximizing the conditional likelihood, i.e., p(candidates|query).
Ranked #8 on Video Retrieval on ActivityNet
no code implementations • 13 Mar 2023 • Zesen Cheng, Kehan Li, Peng Jin, Xiangyang Ji, Li Yuan, Chang Liu, Jie Chen
An intuitive materialization of our paradigm is Parallel Vertex Diffusion (PVD) to directly set vertex coordinates as the generation target and use a diffusion model to train and infer.
no code implementations • 7 Mar 2023 • Chang Liu, Sandra Paterlini
Stock price prediction is a crucial element in financial trading as it allows traders to make informed decisions about buying, selling, and holding stocks.
1 code implementation • 2 Mar 2023 • Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu
Context clusters (CoCs) view an image as a set of unorganized points and extract features via a simplified clustering algorithm.
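A heavily stripped-down sketch of the points-plus-clustering view (assumptions: pixel coordinates and colors treated as 5-D points, a few k-means-style assignment steps; this is not the CoCs implementation):

```python
import numpy as np

def cluster_features(img, centers, iters=3):
    """View an image as a set of points and extract one feature per cluster.

    img: (H, W, 3) array with values in [0, 1]; centers: (k, 5) initial centers.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel becomes an unorganized 5-D point: (x, y, r, g, b).
    pts = np.concatenate([xs[..., None] / w, ys[..., None] / h, img],
                         axis=-1).reshape(-1, 5)
    for _ in range(iters):
        # Assign every point to its nearest center, then re-estimate centers.
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        centers = np.stack([pts[assign == k].mean(0) if (assign == k).any()
                            else centers[k] for k in range(len(centers))])
    return centers          # one aggregated 5-D feature per cluster

# Usage: a toy image, half dark and half bright, aggregated into two clusters.
img = np.zeros((8, 8, 3)); img[:, 4:] = 1.0
rng = np.random.default_rng(0)
feats = cluster_features(img, rng.random((2, 5)))
print(feats.shape)   # (2, 5)
```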
no code implementations • 28 Feb 2023 • Chang Liu, Rui Zhang, Xishan Zhang, Yifan Hao, Zidong Du, Xing Hu, Ling Li, Qi Guo
Energy-efficient works try to decrease the precision of multiplication, or to replace multiplication with cheaper operations such as addition or bitwise shift, so as to reduce the energy consumption of FP32 multiplications.
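As a toy illustration of the replace-multiplication idea (not the paper's method): if a weight is quantized to the nearest power of two, multiplying by it reduces to a bit shift, which is far cheaper in hardware than an FP32 multiply. `quantize_pow2` and `shift_mul` are hypothetical helper names:

```python
import math

def quantize_pow2(w):
    """Round a positive weight to the nearest power of two; return the exponent."""
    return round(math.log2(w))

def shift_mul(x, exp):
    """x * 2**exp for an integer x, implemented with bitwise shifts."""
    return x << exp if exp >= 0 else x >> -exp

# 3.7 is quantized to 2**2 = 4, so 12 * 3.7 is approximated by 12 << 2.
exp = quantize_pow2(3.7)
print(exp, shift_mul(12, exp))   # 2 48
```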
no code implementations • 28 Feb 2023 • Chang Liu, Wenzhao Xiang, Yuan He, Hui Xue, Shibao Zheng, Hang Su
To address this issue, we propose a novel method of Augmenting data with Adversarial examples via a Wavelet module (AdvWavAug), an on-manifold adversarial data augmentation technique that is simple to implement.
no code implementations • 28 Feb 2023 • Chang Liu, Yinpeng Dong, Wenzhao Xiang, Xiao Yang, Hang Su, Jun Zhu, Yuefeng Chen, Yuan He, Hui Xue, Shibao Zheng
In our benchmark, we evaluate the robustness of 55 typical deep learning models on ImageNet with diverse architectures (e.g., CNNs, Transformers) and learning algorithms (e.g., normal supervised training, pre-training, adversarial training) under numerous adversarial attacks and out-of-distribution (OOD) datasets.
1 code implementation • 28 Feb 2023 • Yifan Yang, Chang Liu, Zheng Zhang
Online optimization has gained increasing interest due to its capability of tracking real-world streaming data.
1 code implementation • 3 Feb 2023 • Henghui Ding, Chang Liu, Shuting He, Xudong Jiang, Philip H. S. Torr, Song Bai
However, since the target objects in these existing datasets are usually relatively salient, dominant, and isolated, VOS under complex scenes has rarely been studied.
no code implementations • 16 Jan 2023 • Bo Fang, Wenhao Wu, Chang Liu, Yu Zhou, Min Yang, Yuxin Song, Fu Li, Weiping Wang, Xiangyang Ji, Wanli Ouyang
In the refined embedding space, we represent text-video pairs as probabilistic distributions where prototypes are sampled for matching evaluation.
1 code implementation • 31 Dec 2022 • Xin Ma, Chang Liu, Chunyu Xie, Long Ye, Yafeng Deng, Xiangyang Ji
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency.
no code implementations • 28 Dec 2022 • Chang Liu, Shuangyang Li, Weijie Yuan, Xuemeng Liu, Derrick Wing Kwan Ng
This paper investigates the orthogonal time frequency space (OTFS) transmission for enabling ultra-reliable low-latency communications (URLLC).
no code implementations • 20 Dec 2022 • Chang Liu, Chongyang Tao, Xiubo Geng, Tao Shen, Dongyan Zhao, Can Xu, Binxing Jiao, Daxin Jiang
Unlike previous works that rely on only one positive and hard negatives as candidate passages, we create dark examples that all have moderate relevance to the query through mixing-up and masking in discrete space.
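A minimal sketch of how mixing-up and masking in discrete token space could yield such moderate-relevance candidates; the mixing scheme, rates, and names below are illustrative assumptions, not the paper's recipe:

```python
import random

def mixup_tokens(pos_tokens, neg_tokens, lam, rng):
    """Mix a positive and a hard-negative passage position-by-position: each
    token comes from the positive with probability lam, else the negative."""
    n = min(len(pos_tokens), len(neg_tokens))
    return [pos_tokens[i] if rng.random() < lam else neg_tokens[i]
            for i in range(n)]

def mask_tokens(tokens, rate, rng, mask="[MASK]"):
    """Randomly mask a fraction of tokens to dilute relevance to the query."""
    return [mask if rng.random() < rate else t for t in tokens]

rng = random.Random(0)
pos = "coffee improves alertness in adults".split()
neg = "tea ceremonies originated in east asia".split()
dark = mask_tokens(mixup_tokens(pos, neg, 0.5, rng), 0.3, rng)
print(dark)   # a candidate of moderate relevance to a coffee-related query
```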
no code implementations • 16 Dec 2022 • Wei Sun, Chengao Liu, Linyan Zhang, Yu Li, Pengxu Wei, Chang Liu, Jialing Zou, Jianbin Jiao, Qixiang Ye
Optimizing a convolutional neural network (CNN) for camouflaged object detection (COD) tends to activate local discriminative regions while ignoring complete object extent, causing the partial activation issue which inevitably leads to missing or redundant regions of objects.
1 code implementation • 15 Dec 2022 • Bohao Li, Chang Liu, Mengnan Shi, Xiaozhong Chen, Xiangyang Ji, Qixiang Ye
Adapting object detectors learned with sufficient supervision to novel classes under low-data regimes is appealing yet challenging.
no code implementations • 14 Dec 2022 • Kun Tang, Xu Cao, Zhipeng Cao, Tong Zhou, Erlong Li, Ao Liu, Shengtao Zou, Chang Liu, Shuqi Mei, Elena Sizikova, Chao Zheng
THMA has been deployed by the Tencent Map team to provide services to downstream companies and users, serving over 1,000 labeling workers and producing up to 30,000 kilometers of HD map data per day.
no code implementations • 30 Nov 2022 • Chang Liu
We study a two-period moral hazard problem; there are two agents, with identical action sets that are unknown to the principal.
no code implementations • 25 Nov 2022 • Qiran Zou, Yu Yang, Wing Yin Cheung, Chang Liu, Xiangyang Ji
Unsupervised foreground-background segmentation aims at extracting salient objects from cluttered backgrounds, where Generative Adversarial Network (GAN) approaches, especially layered GANs, show great promise.
no code implementations • 23 Nov 2022 • Chang Liu, Xuemeng Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Robert Schober
With the proposed predictive approach, we can avoid full-scale CSI estimation and facilitate low-dimensional CE for transmit beamforming design such that the signaling overhead is reduced by a scale of $\frac{1}{N}$, where $N$ is the number of IRS elements.
no code implementations • 22 Nov 2022 • Zesen Cheng, Pengchong Qiao, Kehan Li, Siheng Li, Pengxu Wei, Xiangyang Ji, Li Yuan, Chang Liu, Jie Chen
Weakly supervised semantic segmentation is typically inspired by class activation maps, which serve as pseudo masks with class-discriminative regions highlighted.
Optical Character Recognition (OCR) • Weakly Supervised Semantic Segmentation • +1
no code implementations • 19 Nov 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
While most existing message-passing graph neural networks (MPNNs) are permutation-invariant in graph-level representation learning and permutation-equivariant in node- and edge-level representation learning, their expressive power is commonly limited by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test.
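The 1-WL limitation can be demonstrated in a few lines (illustrative code, independent of the paper): color refinement assigns the same signature to a 6-cycle and to two disjoint triangles, so any MPNN bounded by 1-WL also confuses these non-isomorphic graphs:

```python
from collections import Counter

def wl_signature(adj, rounds=3):
    """1-WL (color refinement) signature of a graph given as adjacency lists."""
    colors = {v: 0 for v in adj}                     # uniform initial coloring
    for _ in range(rounds):
        # New color: own color plus the multiset of neighbor colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        table = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: table[sigs[v]] for v in adj}    # compress to small ints
    return Counter(colors.values())                  # final color histogram

# C6 (a 6-cycle) and 2xC3 (two triangles) are non-isomorphic 2-regular graphs.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
p6 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}  # a path
print(wl_signature(c6) == wl_signature(two_c3))  # True: 1-WL cannot tell them apart
print(wl_signature(c6) == wl_signature(p6))      # False: degrees already differ
```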
no code implementations • 16 Nov 2022 • Xinyu Zhou, Chang Liu, Jun Zhao
The Metaverse has received much attention recently.
no code implementations • 15 Nov 2022 • Kun He, Chang Liu, Stephen Lin, John E. Hopcroft
Furthermore, combination with our feature augmentation techniques, termed LOMA_IF&FO, can further strengthen the model and outperform advanced intensity transformation methods for data augmentation.
1 code implementation • 6 Nov 2022 • Yu Yang, Xiaotian Cheng, Chang Liu, Hakan Bilen, Xiangyang Ji
In recent years, generative adversarial networks (GANs) have been an actively studied topic and shown to successfully produce high-quality realistic images in various domains.
no code implementations • 5 Nov 2022 • Yu Yang, Wing Yin Cheung, Chang Liu, Xiangyang Ji
Multiview self-supervised representation learning roots in exploring semantic consistency across data of complex intra-class variation.
no code implementations • 2 Nov 2022 • Yifei Zhang, Chang Liu, Yu Zhou, Weiping Wang, Qixiang Ye, Xiangyang Ji
In this paper, we present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations, i.e., global distribution relation and local interpolation relation, into the CSL framework in a plug-and-play fashion.
1 code implementation • 1 Nov 2022 • Chang Liu, Yuwen Yang, Zhe Xie, Hongtao Lu, Yue Ding
2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs because a subgraph contains fewer nodes but richer information such as position, neighbor, and structure.
1 code implementation • 28 Oct 2022 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We propose a Vision-Language Transformer (VLT) framework for referring segmentation to facilitate deep interactions among multi-modal information and enhance the holistic understanding to vision-language features.
Ranked #2 on Referring Video Object Segmentation on Refer-YouTube-VOS (using extra training data)
Referring Expression Segmentation • Referring Video Object Segmentation
no code implementations • 28 Oct 2022 • Chang Liu, Yuwen Yang, Xun Cai, Yue Ding, Hongtao Lu
Federated learning (FL) faces three major difficulties: cross-domain settings, heterogeneous models, and non-i.i.d. data.
no code implementations • 28 Oct 2022 • Ligen Shi, Chang Liu, Di He, Xing Zhao, Jun Qiu
A major challenge for matching-based depth estimation is to prevent mismatches in occlusion and smooth regions.
no code implementations • 16 Oct 2022 • Pengchong Qiao, Zhidan Wei, Yu Wang, Zhennan Wang, Guoli Song, Fan Xu, Xiangyang Ji, Chang Liu, Jie Chen
Semi-supervised learning (SSL) essentially pursues class boundary exploration with less dependence on human annotations.
no code implementations • 13 Oct 2022 • Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
The normalizing layer has become one of the basic configurations of deep learning models, but it still suffers from computational inefficiency, interpretability difficulties, and low generality.
no code implementations • 12 Oct 2022 • Kehan Li, Zhennan Wang, Zesen Cheng, Runyi Yu, Yian Zhao, Guoli Song, Chang Liu, Li Yuan, Jie Chen
Recently, self-supervised large-scale visual pre-training models have shown great promise in representing pixel-level semantic relationships, significantly promoting the development of unsupervised dense prediction tasks, e.g., unsupervised semantic segmentation (USS).
1 code implementation • 9 Oct 2022 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Zhouchen Lin, Tie-Yan Liu
To be specific, we develop invertible models to generate valid degraded images and meanwhile transform the distribution of lost contents to the fixed distribution of a latent variable during the forward degradation.
no code implementations • 7 Oct 2022 • Chang Liu, Terence Jie Chua, Jun Zhao
Therefore, we formulate a joint learning and communication optimization problem to minimize total model parameter communication and computation delay, by optimizing local iteration counts and edge iteration counts.
no code implementations • 5 Oct 2022 • Wenhan Cao, Chang Liu, Zhiqian Lan, Yingxi Piao, Shengbo Eben Li
Then we design a robust loss function by leveraging the $\beta$-divergence and propose the $\beta$ moving horizon estimator robust to outliers.
no code implementations • 26 Sep 2022 • Chang Liu, Xuemeng Liu, Shuangyang Li, Weijie Yuan, Derrick Wing Kwan Ng
Predictive beamforming design is an essential task in realizing high-mobility integrated sensing and communication (ISAC), which highly depends on the accuracy of the channel prediction (CP), i.e., predicting the angular parameters of users.
1 code implementation • 29 Aug 2022 • Chang Liu, Yujie Zhong, Andrew Zisserman, Weidi Xie
In this paper, we consider the problem of generalised visual object counting, with the goal of developing a computational model for counting the number of objects from arbitrary semantic categories, using an arbitrary number of "exemplars", i.e., zero-shot or few-shot counting.
Ranked #2 on Object Counting on FSC147
1 code implementation • 20 Jul 2022 • Chang Liu, Xiaoyan Qian, Binxiao Huang, Xiaojuan Qi, Edmund Lam, Siew-Chong Tan, Ngai Wong
By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler.
no code implementations • 6 Jul 2022 • Zhennan Wang, Kehan Li, Runyi Yu, Yian Zhao, Pengchong Qiao, Chang Liu, Fan Xu, Xiangyang Ji, Guoli Song, Jie Chen
In this paper, we analyze batch normalization from the perspective of discriminability and find the disadvantages ignored by previous studies: the difference in $l_2$ norms of sample features can hinder batch normalization from obtaining more distinguished inter-class features and more compact intra-class features.
no code implementations • 4 Jul 2022 • Chang Liu, Yugong Luo, Pengfei Li, Chunhui Xing, Weiwei Kong
To deal with this problem, this paper introduces a two-dimensional maneuver management framework with a fault-tolerant mechanism on the basis of the proposed hierarchical architecture for the platoon control system.
1 code implementation • 4 Jul 2022 • Chang Liu, Gang Yang, Shuo Wang, Hangxu Wang, Yunhua Zhang, Yutao Wang
We employ the powerful feature extraction capability of Transformer (PVTv2) to extract global semantic information from RGB data and design a lightweight CNN backbone (LWDepthNet) to extract spatial structure information from depth data without pre-training.
no code implementations • 25 Jun 2022 • HongBing Zhang, Xinyi Liu, Chang Liu, HongTao Fan, YaJing Li, Xinyun Zhu
The proposed function is generalized to tensor cases, yielding the tensor MLCP and the weighted tensor $L_\gamma$-norm.
1 code implementation • 9 Jun 2022 • Si Shen, Jiangfeng Liu, Litao Lin, Ying Huang, Lin Zhang, Chang Liu, Yutong Feng, Dongbo Wang
The academic literature of social sciences records human civilization and studies human social problems.
no code implementations • 2 Jun 2022 • Chang Liu, Zhen-Hua Ling, Ling-Hui Chen
This paper proposes a multilingual speech synthesis method which combines unsupervised phonetic representations (UPR) and supervised phonetic representations (SPR) to avoid reliance on the pronunciation dictionaries of target languages.
Automatic Speech Recognition (ASR) • +2
no code implementations • 25 May 2022 • Eduardo Pérez-Pellitero, Sibi Catley-Chandar, Richard Shaw, Aleš Leonardis, Radu Timofte, Zexin Zhang, Cen Liu, Yunbo Peng, Yue Lin, Gaocheng Yu, Jin Zhang, Zhe Ma, Hongbin Wang, Xiangyu Chen, Xintao Wang, Haiwei Wu, Lin Liu, Chao Dong, Jiantao Zhou, Qingsen Yan, Song Zhang, Weiye Chen, Yuhang Liu, Zhen Zhang, Yanning Zhang, Javen Qinfeng Shi, Dong Gong, Dan Zhu, Mengdi Sun, Guannan Chen, Yang Hu, Haowei Li, Baozhu Zou, Zhen Liu, Wenjie Lin, Ting Jiang, Chengzhi Jiang, Xinpeng Li, Mingyan Han, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Juan Marín-Vega, Michael Sloth, Peter Schneider-Kamp, Richard Röttger, Chunyang Li, Long Bao, Gang He, Ziyao Xu, Li Xu, Gen Zhan, Ming Sun, Xing Wen, Junlin Li, Shuang Feng, Fei Lei, Rui Liu, Junxiang Ruan, Tianhong Dai, Wei Li, Zhan Lu, Hengyan Liu, Peian Huang, Guangyu Ren, Yonglin Luo, Chang Liu, Qiang Tu, Fangya Li, Ruipeng Gang, Chenghua Li, Jinjing Li, Sai Ma, Chenming Liu, Yizhen Cao, Steven Tel, Barthelemy Heyrman, Dominique Ginhac, Chul Lee, Gahyeon Kim, Seonghyun Park, An Gia Vien, Truong Thanh Nhat Mai, Howoon Yoon, Tu Vo, Alexander Holston, Sheir Zaheer, Chan Y. Park
The challenge is composed of two tracks with an emphasis on fidelity and complexity constraints: in Track 1, participants are asked to optimize objective fidelity scores while imposing a low-complexity constraint (i.e., solutions cannot exceed a given number of operations).
no code implementations • 13 May 2022 • Xingchen Zhao, Chang Liu, Anthony Sicilia, Seong Jae Hwang, Yun Fu
Thus, it is still possible that those methods can overfit to source domains and perform poorly on target domains.
no code implementations • 28 Apr 2022 • Sijia Li, Gaopeng Gou, Chang Liu, Chengshang Hou, Zhenzhen Li, Gang Xiong
In this paper, we propose a Temporal Transaction Aggregation Graph Network (TTAGN) to enhance phishing scams detection performance on Ethereum.
no code implementations • 26 Apr 2022 • Chang Liu, Xudong Jiang, Henghui Ding
In this work, we propose a novel framework that simultaneously detects the target-of-interest via feature propagation and generates a fine-grained segmentation mask.
1 code implementation • 21 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Chang Liu, Peipei Fu, Zhen Li
6GAN forces multiple generators to train with a multi-class discriminator and an alias detector to generate non-aliased active targets with different addressing pattern types.
1 code implementation • 20 Apr 2022 • Tianyu Cui, Gaopeng Gou, Gang Xiong, Zhen Li, Mingxin Cui, Chang Liu
To do this, we propose an IPv6 address correlation model - SiamHAN.
1 code implementation • CVPR 2022 • Chang Liu, Chun Yang, Xu-Cheng Yin
Contextual information can be decomposed into temporal information and linguistic information.
no code implementations • 6 Apr 2022 • Wenhan Cao, Jingliang Duan, Shengbo Eben Li, Chen Chen, Chang Liu, Yu Wang
Both the primal and dual estimators are learned from data using supervised learning techniques, and the explicit sample size is provided, which enables us to guarantee the quality of each learned estimator in terms of feasibility and optimality.
no code implementations • 29 Mar 2022 • Chang Liu, Xiaoyan Qian, Xiaojuan Qi, Edmund Y. Lam, Siew-Chong Tan, Ngai Wong
While a few previous studies tried to automatically generate 3D bounding boxes from weak labels such as 2D boxes, the quality is sub-optimal compared to human annotators.
no code implementations • 10 Mar 2022 • Chang Liu, Chun Yang, Hai-Bo Qin, Xiaobin Zhu, Cheng-Lin Liu, Xu-Cheng Yin
Scene text recognition is a popular topic and extensively used in the industry.
2 code implementations • 9 Mar 2022 • Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu
This technical note describes the recent updates of Graphormer, including architecture design modifications, and the adaption to 3D molecular dynamics simulation.
Ranked #5 on Graph Regression on PCQM4Mv2-LSC
no code implementations • 6 Mar 2022 • Jiayi Zhang, Chang Liu, Junchi Yan, Xijun Li, Hui-Ling Zhen, Mingxuan Yuan
This paper surveys the trend of leveraging machine learning to solve mixed integer programming (MIP) problems.
no code implementations • 1 Mar 2022 • Qi Zhang, Chang Liu, Stephen Wu, Ryo Yoshida
The design variables consist of a set of reactants in a reaction network and its network topology.
no code implementations • 28 Feb 2022 • Yu Shi, Shuxin Zheng, Guolin Ke, Yifei Shen, Jiacheng You, Jiyan He, Shengjie Luo, Chang Liu, Di He, Tie-Yan Liu
This technical note describes the recent updates of Graphormer, including architecture design modifications, and the adaption to 3D molecular dynamics simulation.
no code implementations • 26 Feb 2022 • Vikram Shree, Carlos Diaz-Ruiz, Chang Liu, Bharath Hariharan, Mark Campbell
This paper focuses on the problem of decentralized pedestrian tracking using a sensor network.
no code implementations • 9 Feb 2022 • Jie Chen, Chang Liu, Jiawu Xie, Jie An, Nan Huang
In particular, this method breaks through the limitations of existing methods: it not only achieves good results in multivariate separation but also effectively separates signals mixed with 40 dB Gaussian noise.
1 code implementation • 3 Feb 2022 • Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu
Molecular conformation generation aims to generate three-dimensional coordinates of all the atoms in a molecule and is an important task in bioinformatics and pharmacology.
1 code implementation • 26 Jan 2022 • Minoru Kusaba, Chang Liu, Ryo Yoshida
The prediction of energetically stable crystal structures formed by a given chemical composition is a central problem in solid-state physics.
no code implementations • 19 Jan 2022 • Zhongyuan Guo, Hong Zheng, Changhui You, Tianyu Wang, Chang Liu
We first analyze the production principle of anti-counterfeiting QR code, and convert the identification of copy forgery to device category forensics, and then a Dual-Branch Multi-Scale Feature Fusion network is proposed.
1 code implementation • NeurIPS 2021 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid such a spurious correlation, we propose Latent Causal Invariance Models (LaCIM), which specify the underlying causal structure of the data and the source of distributional shifts, guiding us to pursue only causal factors for prediction.
1 code implementation • 29 Nov 2021 • Mengnan Shi, Chang Liu, Qixiang Ye, Jianbin Jiao
Gating modules have been widely explored in dynamic network pruning to reduce the run-time computational cost of deep neural networks while preserving the representation of features.
no code implementations • 20 Nov 2021 • Long Gao, Chang Liu, Dooman Arefan, Ashok Panigrahy, Margarita L. Zuley, Shandong Wu
To address this challenge, we propose a medical-knowledge-guided one-class classification approach that leverages domain-specific knowledge of classification tasks to boost the model's performance.
no code implementations • 20 Nov 2021 • Long Gao, Chang Liu, Dooman Arefan, Ashok Panigrahy, Shandong Wu
These methods mainly focus on capturing either compact or descriptive features, where the information of the samples of a given one class is not sufficiently utilized.
1 code implementation • 16 Nov 2021 • Hengzhi Pei, Kan Ren, Yuqing Yang, Chang Liu, Tao Qin, Dongsheng Li
In this paper, we propose a novel generative framework for RTS data - RTSGAN to tackle the aforementioned challenges.
no code implementations • 1 Nov 2021 • Haoji Liu, Weichao Zhuang, Guodong Yin, Rongcan Li, Chang Liu, Shanxing Zhou
We first formulate the optimal merging control problem, which includes the constraints of safety and vehicle dynamics, with the objectives of minimizing travel time and energy consumption.
no code implementations • 1 Nov 2021 • Chang Liu, Chen Gao, Depeng Jin, Yong Li
We first conduct information propagation on two sub-graphs to learn the representations of POIs and users.
1 code implementation • NeurIPS 2021 • Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations.
1 code implementation • 17 Oct 2021 • Yuefeng Chen, Xiaofeng Mao, Yuan He, Hui Xue, Chao Li, Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Jun Zhu, Fangcheng Liu, Chao Zhang, Hongyang Zhang, Yichi Zhang, Shilong Liu, Chang Liu, Wenzhao Xiang, Yajie Wang, Huipeng Zhou, Haoran Lyu, Yidan Xu, Zixuan Xu, Taoyu Zhu, Wenjun Li, Xianfeng Gao, Guoqiu Wang, Huanqian Yan, Ying Guo, Chaoning Zhang, Zheng Fang, Yang Wang, Bingyang Fu, Yunfei Zheng, Yekui Wang, Haorong Luo, Zhen Yang
Many works have investigated the adversarial attacks or defenses under the settings where a bounded and imperceptible perturbation can be added to the input.
no code implementations • 14 Oct 2021 • Chang Liu, Hairong Tang, Luyan Ji, Yongchao Zhao
Based on the mapping results, we analyzed the changes of Miyun Reservoir from 1984 to 2020 and their driving factors.
no code implementations • 11 Oct 2021 • Chang Liu
The principal knows the reward of the task and provides information to the agent over time.
no code implementations • 1 Oct 2021 • Chongyang Tao, Jiazhan Feng, Chang Liu, Juntao Li, Xiubo Geng, Daxin Jiang
For this task, the adoption of pre-trained language models (such as BERT) has led to remarkable progress in a number of benchmarks.
no code implementations • 30 Sep 2021 • Zijian Zhu, Hang Su, Chang Liu, Wenzhao Xiang, Shibao Zheng
Fortunately, most existing adversarial patches can be outwitted, disabled and rejected by a simple classification network called an adversarial patch detector, which distinguishes adversarial patches from original images.
no code implementations • 29 Sep 2021 • Qiwei Ye, Yuxuan Song, Chang Liu, Fangyun Wei, Tao Qin, Tie-Yan Liu
Stochastic policies have been widely applied for their good properties in exploration and uncertainty quantification.
Ranked #1 on MuJoCo Games on Ant-v3
no code implementations • CVPR 2022 • Chang Liu, Xiang Yu, Yi-Hsuan Tsai, Ramin Moslemi, Masoud Faraki, Manmohan Chandraker, Yun Fu
Convolutional Neural Networks have achieved remarkable success in face recognition, in part due to the abundant availability of data.
no code implementations • 13 Sep 2021 • Wenzhao Xiang, Hang Su, Chang Liu, Yandong Guo, Shibao Zheng
As designers of artificial intelligence try to outwit hackers, both sides continue to home in on AI's inherent vulnerabilities.
no code implementations • 26 Aug 2021 • Chang Liu, Weijie Yuan, Shuangyang Li, Xuemeng Liu, Husheng Li, Derrick Wing Kwan Ng, Yonghui Li
Specifically, the convolution and LSTM modules are successively adopted in the proposed HCL-Net to exploit the spatial and temporal dependencies of communication channels to further improve the learning performance.
no code implementations • ICML Workshop AML 2021 • Wenzhao Xiang, Chang Liu, Shibao Zheng
Traditional adversarial examples are typically generated by adding perturbation noise to the input image within a small matrix norm.
1 code implementation • ICCV 2021 • Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
We introduce transformer and multi-head attention to build a network with an encoder-decoder attention mechanism architecture that "queries" the given image with the language expression.
Ranked #4 on Referring Expression Segmentation on RefCOCOg-test
no code implementations • 3 Aug 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
Noisy labels are commonly found in real-world data, which cause performance degradation of deep neural networks.
no code implementations • 16 Jul 2021 • Haodi Jiang, Ju Jing, Jiasheng Wang, Chang Liu, Qin Li, Yan Xu, Jason T. L. Wang, Haimin Wang
Our method consists of a data pre-processing component that prepares training data from a threshold-based tool, a deep learning model implemented as a Bayesian convolutional neural network for probabilistic image segmentation with uncertainty quantification to predict fibrils, and a post-processing component containing a fibril-fitting algorithm to determine fibril orientations.
1 code implementation • NeurIPS 2021 • Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu
This is motivated by the observation that deep generative models, in addition to a likelihood model $p(x|z)$, often also use an inference model $q(z|x)$ for extracting representation, but they rely on a usually uninformative prior distribution $p(z)$ to define a joint distribution, which may render problems like posterior collapse and manifold mismatch.
no code implementations • 25 Jun 2021 • Tianle Yue, Hang Yang, Zongliang Du, Chang Liu, Khalil I. Elkhodary, Shan Tang, Xu Guo
During offline training, a mapping function is built between high and low resolution representations of a given design domain.
no code implementations • 24 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
In addition, our framework showed a great generalizability potential to transfer learned knowledge from one institution to another, paving the way for future healthcare model pre-training across institutions.
1 code implementation • 24 Jun 2021 • Xianlong Zeng, Fanghao Song, Zhongen Li, Krerkkiat Chusap, Chang Liu
Our method can be divided into three stages: 1) a neighborhood generation stage, which generates instances based on the given sample; 2) a classification stage, which yields classifications on the generated instances to carve out the local decision boundary and delineate the model behavior; and 3) a human-in-the-loop stage, which involves humans in refining and exploring the neighborhood of interest.
BIG-bench Machine Learning • Explainable Artificial Intelligence
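The first two stages can be sketched with a toy black-box model (illustrative only; names such as `generate_neighborhood` and `probe_boundary` are assumptions, and the human-in-the-loop stage is inherently manual):

```python
import random

def generate_neighborhood(sample, n, scale, rng):
    """Stage 1: perturb the given sample to generate nearby instances."""
    return [[x + rng.gauss(0.0, scale) for x in sample] for _ in range(n)]

def probe_boundary(model, sample, n=200, scale=0.5, rng=None):
    """Stage 2: classify the generated instances to carve out the local
    decision boundary around `sample` and delineate the model's behavior."""
    rng = rng or random.Random(0)
    neighbors = generate_neighborhood(sample, n, scale, rng)
    return [(x, model(x)) for x in neighbors]

# Toy black-box model: classifies by the sign of the first feature.
model = lambda x: int(x[0] > 0.0)
labeled = probe_boundary(model, [0.1, 2.0])
frac_positive = sum(lbl for _, lbl in labeled) / len(labeled)
print(frac_positive)   # far from 0 or 1: the sample sits near the boundary
```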
no code implementations • 23 Jun 2021 • Xianlong Zeng, Simon Lin, Chang Liu
The claims data, containing medical codes, services information, and incurred expenditure, can be a good resource for estimating an individual's health condition and medical risk level.
1 code implementation • ICLR 2022 • Jiaxin Shi, Chang Liu, Lester Mackey
We introduce a new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries.
2 code implementations • CVPR 2021 • Zonghao Guo, Chang Liu, Xiaosong Zhang, Jianbin Jiao, Xiangyang Ji, Qixiang Ye
Detecting oriented and densely packed objects remains challenging for spatial feature aliasing caused by the intersection of reception fields between objects.
Ranked #31 on Object Detection In Aerial Images on DOTA
no code implementations • 18 Jun 2021 • Chang Liu, Xiaolin Wu
Nighttime photographers are often troubled by light pollution of unwanted artificial lights.
1 code implementation • ICLR 2022 • Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu
Denoising diffusion probabilistic models have been recently proposed to generate high-quality samples by estimating the gradient of the data density.
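For context, the "gradient of the data density" that these models estimate is the score function $\nabla_x \log p(x)$. A standard noise-prediction form of the denoising diffusion objective (background material, not taken from this paper) is:

```latex
% x_t is a noised sample; eps_theta predicts the added noise
\mathcal{L}(\theta) = \mathbb{E}_{t,\,x_0,\,\epsilon}
  \left[ \left\| \epsilon - \epsilon_\theta\!\left(
    \sqrt{\bar\alpha_t}\, x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon,\; t
  \right) \right\|^2 \right],
\quad \epsilon \sim \mathcal{N}(0, I)
```

The trained noise predictor relates to the score via $\nabla_{x_t}\log p(x_t) \approx -\epsilon_\theta(x_t, t)/\sqrt{1-\bar\alpha_t}$.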
no code implementations • 5 Jun 2021 • Jingjing Si, Guoliang Li, Yinbo Cheng, Rui Zhang, Godwin Enemali, Chang Liu
As an in situ combustion diagnostic tool, Tunable Diode Laser Absorption Spectroscopy (TDLAS) tomography has been widely used for imaging of two-dimensional temperature distributions in reactive flows.
no code implementations • 18 May 2021 • Chang Liu, Guanjie Zheng, Zhenhui Li
Therefore, in this paper, we propose to learn the human routing model, which is one of the most essential part in the traffic simulator.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
1 code implementation • ICCV 2021 • Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Fu
This paper studies Semi-Supervised Domain Adaptation (SSDA), a practical yet under-investigated research topic that aims to learn a model of good performance using unlabeled samples and a few labeled samples in the target domain, with the help of labeled samples from a source domain.
no code implementations • 6 Apr 2021 • Boyu Yang, Mingbao Lin, Binghao Liu, Mengying Fu, Chang Liu, Rongrong Ji, Qixiang Ye
By tentatively expanding network nodes, LEC-Net enlarges the representation capacity of features, alleviating feature drift of old network from the perspective of model regularization.
1 code implementation • CVPR 2021 • Chang Liu, Han Yu, Boyang Li, Zhiqi Shen, Zhanning Gao, Peiran Ren, Xuansong Xie, Lizhen Cui, Chunyan Miao
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models.
no code implementations • 22 Mar 2021 • Hua Wei, Chacha Chen, Chang Liu, Guanjie Zheng, Zhenhui Li
Simulation of the real-world traffic can be used to help validate the transportation policies.
no code implementations • 22 Mar 2021 • Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong
The neuromorphic event cameras, which capture the optical changes of a scene, have drawn increasing attention due to their high speed and low power consumption.
no code implementations • 17 Mar 2021 • Juntao Li, Chang Liu, Chongyang Tao, Zhangming Chan, Dongyan Zhao, Min Zhang, Rui Yan
To fill the gap between these up-to-date methods and the real-world applications, we incorporate user-specific dialogue history into the response selection and propose a personalized hybrid matching network (PHMN).
no code implementations • 16 Mar 2021 • Chang Liu, Lixin Fan, Kam Woh Ng, Yilun Jin, Ce Ju, Tianyu Zhang, Chee Seng Chan, Qiang Yang
This paper proposes a novel ternary hash encoding for learning to hash methods, which provides a principled more efficient coding scheme with performances better than those of the state-of-the-art binary hashing counterparts.
2 code implementations • CVPR 2021 • Bohao Li, Boyu Yang, Chang Liu, Feng Liu, Rongrong Ji, Qixiang Ye
Few-shot object detection has made substantial progress by representing novel class objects using the feature representation learned upon a set of base class objects.
Ranked #10 on Few-Shot Object Detection on MS-COCO (10-shot)
no code implementations • 6 Mar 2021 • Jeremy Beauchamp, Razvan Bunescu, Cindy Marling, Zhongen Li, Chang Liu
In this work, we invert the "what-if" scenario and introduce a similar architecture based on chaining two LSTMs that can be trained to make either insulin or carbohydrate recommendations aimed at reaching a desired BG level in the future.
no code implementations • 5 Mar 2021 • Chang Liu, Xiaoguang Li, Guohao Cai, Zhenhua Dong, Hong Zhu, Lifeng Shang
It is still an open question to leverage various types of information under the BERT framework.
no code implementations • 3 Mar 2021 • Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
Being expensive and time-consuming to collect massive COVID-19 image samples to train deep classification models, transfer learning is a promising approach by transferring knowledge from the abundant typical pneumonia datasets for COVID-19 image classification.
1 code implementation • 2 Mar 2021 • Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, Philip S. Yu
Domain generalization deals with a challenging setting where one or several different but related domain(s) are given, and the goal is to learn a model that can generalize to an unseen test domain.
no code implementations • 10 Feb 2021 • Rui Zhang, Jingjing Si, Godwin Enemali, Yong Bao, Chang Liu
The proposed scheme was both numerically and experimentally validated using a CST sensor with 32 laser beams using a variety of computational tomographic algorithms.
no code implementations • 22 Jan 2021 • Chang Liu, Henghui Ding, Xudong Jiang
In this paper, we argue that recovering these microscopic details relies on low-level but high-definition texture features.
no code implementations • 20 Jan 2021 • Zhuqing Jiang, Chang Liu, Ya'nan Wang, Kai Li, Aidong Men, Haiying Wang, Haiyong Luo
With the goal of tuning up the brightness, low-light image enhancement enjoys numerous applications, such as surveillance, remote sensing and computational photography.
no code implementations • 6 Jan 2021 • Bernhard Kliem, Jeongwoo Lee, Rui Liu, Stephen M. White, Chang Liu, Satoshi Masuda
We present evidence that a magnetic flux rope was formed before a coronal mass ejection (CME) and its associated long-duration flare during a pair of preceding confined eruptions and associated impulsive flares in a compound event in NOAA Active Region 12371.
Solar and Stellar Astrophysics
no code implementations • 4 Jan 2021 • Ya'nan Wang, Zhuqing Jiang, Chang Liu, Kai Li, Aidong Men, Haiying Wang
This paper proposes a neural network for multi-level low-light image enhancement, which is user-friendly to meet various requirements by selecting different images as brightness reference.
no code implementations • 1 Jan 2021 • Chang Liu, Kai Li, Yun Fu
Unsupervised domain adaptation (UDA) aims to make predictions for unlabeled data in a target domain using labeled data available from a source domain.
2 code implementations • 16 Dec 2020 • Chang Liu, Zetian Jiang, Runzhong Wang, Junchi Yan, Lingxiao Huang, Pinyan Lu
As such, the agent can finish inlier matching in a timely manner once the affinity score stops growing; otherwise, an additional parameter, i.e., the number of inliers, is needed to avoid matching outliers.
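The stopping criterion described above can be illustrated with a greedy sketch: grow the matching one candidate pair at a time and stop as soon as the affinity score stops increasing, so outliers are never matched. This is illustrative only (the paper trains an RL agent rather than a fixed greedy rule), and all names are hypothetical:

```python
def progressive_match(score_fn, candidates):
    """Greedily add candidate pairs; terminate once the affinity
    score stops growing, without needing the inlier count."""
    matched, best = [], score_fn([])
    for pair in candidates:
        s = score_fn(matched + [pair])
        if s <= best:  # affinity stopped growing: treat the rest as outliers
            break
        matched, best = matched + [pair], s
    return matched

# Toy affinity: candidates 0-2 are inliers (+1 each), the rest are outliers (-1)
score_fn = lambda m: sum(1.0 if p < 3 else -1.0 for p in m)
print(progressive_match(score_fn, [0, 1, 2, 3, 4]))  # -> [0, 1, 2]
```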
no code implementations • 7 Dec 2020 • Chang Liu, Yixing Huang, Joscha Maier, Laura Klein, Marc Kachelrieß, Andreas Maier
For organ-specific AEC, a preliminary CT reconstruction is necessary to estimate organ shapes for dose optimization, where only a few projections are allowed for real-time reconstruction.
no code implementations • 2 Dec 2020 • Tangqing Cao, Wenqi Guo, Wang Lu, Yunfei Xue, Wenjun Lu, Jing Su, Christian H. Liebscher, Chang Liu, Gerhard Dehm
Such a softening behavior can be related to the interaction of dislocations with short-range clustering.
Materials Science
1 code implementation • 1 Dec 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
Channel estimation is of great importance in realizing practical intelligent reflecting surface-assisted multi-user communication (IRS-MC) systems.
no code implementations • SEMEVAL 2020 • Chang Liu, Dong Yu
We demonstrate the effectiveness of our approaches, which achieve an F1 of 0.95 on subtask 1 while using only a subset of the given training set to fine-tune the BERT model; our official submission achieves an F1 of 0.802, ranking us 16th in the competition.
no code implementations • 29 Nov 2020 • Yan He, Jifang Qiu, Chang Liu, Yue Liu, Jian Wu
The latest theoretical advances in the field of unlimited sampling framework (USF) show the potential to avoid clipping problems of analog-to-digital converters (ADC).
no code implementations • 20 Nov 2020 • Godwin Enemali, Rui Zhang, Hugh McCann, Chang Liu
Although a fully parallel data acquisition (DAQ) and signal processing system can achieve these functionalities with maximised temporal response, it leads to a highly complex, expensive and power-consuming instrumentation system with high potential for inconsistency between the sampled beams due to the electronics alone.
no code implementations • 19 Nov 2020 • Yuanqiang Cai, Chang Liu, Weiqiang Wang, Qixiang Ye
With only bounding-box annotations in the spatial domain, existing video scene text detection (VSTD) benchmarks lack temporal relation of text instances among video frames, which hinders the development of video text-related applications.
no code implementations • 10 Nov 2020 • Chang Liu, Xuemeng Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
Existing tag signal detection algorithms inevitably suffer from a high bit error rate (BER) due to the difficulties in estimating the channel state information (CSI).
no code implementations • 10 Nov 2020 • Chang Liu, Wenzhong Yan, Ankur Mehta
Based on an equivalent plate model, we develop and validate analytical formulas for the behavioral specifications of OADLC mechanisms; the analytical formulas can be described as expressions of design parameters.
Robotics
1 code implementation • 8 Nov 2020 • Chang Liu, Yunjie Tian, Jianbin Jiao, Qixiang Ye
Conventional networks for object skeleton detection are usually hand-crafted.
no code implementations • 4 Nov 2020 • Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu
To avoid spurious correlation, we propose a Latent Causal Invariance Model (LaCIM) which pursues causal prediction.
1 code implementation • NeurIPS 2021 • Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output.
no code implementations • 8 Oct 2020 • Yunfan Jiang, Jingjing Si, Rui Zhang, Godwin Enemali, Bin Zhou, Hugh McCann, Chang Liu
Chemical Species Tomography (CST) has been widely used for in situ imaging of critical parameters, e.g., species concentration and temperature, in reactive flows.
no code implementations • 17 Sep 2020 • Chang Liu, Huichu Zhang, Wei-Nan Zhang, Guanjie Zheng, Yong Yu
The heavy traffic congestion problem has always been a concern for modern cities.
no code implementations • 16 Sep 2020 • Chang Liu, Weijie Yuan, Zhiqiang Wei, Xuemeng Liu, Derrick Wing Kwan Ng
Unmanned aerial vehicle (UAV)-assisted communication is a promising technique for realizing beyond fifth generation (5G) wireless networks, owing to the high mobility and maneuverability of UAVs, which can adapt to the heterogeneous requirements of different applications.
no code implementations • 11 Sep 2020 • Jianan Li, Jimei Yang, Jianming Zhang, Chang Liu, Christina Wang, Tingfa Xu
In this paper, we introduce Attribute-conditioned Layout GAN to incorporate the attributes of design elements for graphic layout generation by forcing both the generator and the discriminator to meet attribute conditions.
no code implementations • 11 Sep 2020 • Chang Liu, Zhiqiang Wei, Derrick Wing Kwan Ng, Jinhong Yuan, Ying-Chang Liang
To eliminate the requirement of channel estimation and to improve the system performance, in this paper, we adopt a deep transfer learning (DTL) approach to implicitly extract the features of channel and directly recover tag symbols.
1 code implementation • 4 Sep 2020 • Yasser Abduallah, Jason T. L. Wang, Yang Nie, Chang Liu, Haimin Wang
Solar flare prediction plays an important role in understanding and forecasting space weather.
no code implementations • 4 Sep 2020 • Chang Liu, Jiahui Sun, Haiming Jin, Meng Ai, Qun Li, Cheng Zhang, Kehua Sheng, Guobin Wu, XiaoHu Qie, Xinbing Wang
Thus, in this paper, we exploit adaptive dispatching intervals to boost the platform's profit under a guarantee of the maximum passenger waiting time.
1 code implementation • 3 Sep 2020 • Chang Liu, Xuemeng Liu, Derrick Wing Kwan Ng, Jinhong Yuan
To this end, we first develop a versatile DReL-based channel estimation framework where a deep residual network (DRN)-based MMSE estimator is derived in terms of Bayesian philosophy.
4 code implementations • 27 Aug 2020 • Haodi Jiang, Jiasheng Wang, Chang Liu, Ju Jing, Hao Liu, Jason T. L. Wang, Haimin Wang
Deep learning has drawn a lot of interest in recent years due to its effectiveness in processing big and complex observational data gathered from diverse instruments.
1 code implementation • ECCV 2020 • Boyu Yang, Chang Liu, Bohao Li, Jianbin Jiao, Qixiang Ye
Few-shot segmentation is challenging because objects within the support and query images could significantly differ in appearance and pose.
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics such as power consumption, electromagnetic leaks and sound can be exploited by the side-channel attack to compromise the system.
1 code implementation • 17 Jul 2020 • Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu
However, it remains challenging to determine which method is suitable for a given application since they are built with certain priors or bias.
1 code implementation • 7 Jul 2020 • Yunjie Tian, Chang Liu, Lingxi Xie, Jianbin Jiao, Qixiang Ye
The search cost of neural architecture search (NAS) has been largely reduced by weight-sharing methods.
1 code implementation • 6 Jul 2020 • Yifei Zhang, Chang Liu, Yu Zhou, Wei Wang, Weiping Wang, Qixiang Ye
In this work, we propose a novel clustering based method, which, by iteratively excluding class inconsistent samples during progressive cluster formation, alleviates the impact of noise samples in a simple-yet-effective manner.
no code implementations • 22 Jun 2020 • Yaolong Wang, Mingqing Xiao, Chang Liu, Shuxin Zheng, Tie-Yan Liu
Specifically, ILC introduces an invertible encoding module to replace the encoder-decoder structure, producing a low-dimensional informative latent representation while transforming the lost information into an auxiliary latent variable that will not be further coded or stored.
no code implementations • 22 Jun 2020 • Gelu Nita, Manolis Georgoulis, Irina Kitiashvili, Viacheslav Sadykov, Enrico Camporeale, Alexander Kosovichev, Haimin Wang, Vincent Oria, Jason Wang, Rafal Angryk, Berkay Aydin, Azim Ahmadzadeh, Xiaoli Bai, Timothy Bastian, Soukaina Filali Boubrahimi, Bin Chen, Alisdair Davey, Sheldon Fereira, Gregory Fleishman, Dale Gary, Andrew Gerrard, Gregory Hellbourg, Katherine Herbert, Jack Ireland, Egor Illarionov, Natsuha Kuroda, Qin Li, Chang Liu, Yuexin Liu, Hyomin Kim, Dustin Kempton, Ruizhe Ma, Petrus Martens, Ryan McGranaghan, Edward Semones, John Stefan, Andrey Stejko, Yaireska Collado-Vega, Meiqi Wang, Yan Xu, Sijie Yu
The authors of this white paper met on 16-17 January 2020 at the New Jersey Institute of Technology, Newark, NJ, for a 2-day workshop that brought together a group of heliophysicists, data providers, expert modelers, and computer/data scientists.
1 code implementation • 20 Jun 2020 • Yuan Yao, Chang Liu, Dezhao Luo, Yu Zhou, Qixiang Ye
The generative perception model acts as a feature decoder to focus on comprehending high temporal resolution and short-term representation by introducing a motion-attention mechanism.
no code implementations • 20 Jun 2020 • Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
This paper investigates capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.
no code implementations • 4 Jun 2020 • Weihao Jiang, Zhaozhi Xie, Yaoyi Li, Chang Liu, Hongtao Lu
Many of these applications need to perform a real-time and efficient prediction for semantic segmentation with a light-weighted network.
no code implementations • 26 May 2020 • Lingbo Yang, Pan Wang, Chang Liu, Zhanning Gao, Peiran Ren, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Xian-Sheng Hua, Wen Gao
Human pose transfer (HPT) is an emerging research topic with huge potential in fashion design, media production, online advertising and virtual reality.
no code implementations • CVPR 2020 • Gaurav Mittal, Chang Liu, Nikolaos Karianakis, Victor Fragoso, Mei Chen, Yun Fu
To reduce HPO time, we present HyperSTAR (System for Task Aware Hyperparameter Recommendation), a task-aware method to warm-start HPO for deep neural networks.
1 code implementation • 17 May 2020 • Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan
We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.
7 code implementations • ECCV 2020 • Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu
High-resolution digital images are usually downscaled to fit various display screens or to save the cost of storage and bandwidth, while post-upscaling is adopted to recover the original resolution or the details in zoomed-in images.
1 code implementation • 11 May 2020 • Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao
Existing face restoration research typically relies on either a degradation prior or explicit guidance labels for training, which often results in limited generalization ability over real-world images with heterogeneous degradations and rich background contents.
no code implementations • 8 May 2020 • Hao Liu, Yan Xu, Jiasheng Wang, Ju Jing, Chang Liu, Jason T. L. Wang, Haimin Wang
By learning the latent patterns in the training data prepared by the physics-based ME tool, the proposed CNN method is able to infer vector magnetic fields from the Stokes profiles of GST/NIRIS.
Solar and Stellar Astrophysics