no code implementations • ICML 2020 • Liu Liu, Lei Deng, Zhaodong Chen, Yuke Wang, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie
Using Deep Neural Networks (DNNs) in machine learning tasks promises high-quality results but makes it challenging to meet stringent latency requirements and energy constraints because of the memory-bound and compute-bound execution patterns of DNNs.
1 code implementation • ICML 2020 • Zhen Wang, Liu Liu, DaCheng Tao
In order to fill in these research gaps, we propose a novel deep neural network (DNN) based framework, Deep Streaming Label Learning (DSLL), to classify instances with newly emerged labels effectively.
no code implementations • 16 May 2023 • Hai-Miao Hu, Zhenbo Xu, Wenshuai Xu, You Song, YiTao Zhang, Liu Liu, Zhilin Han, Ajin Meng
To solve this ill-posed inverse problem, most band selection methods adopt hand-crafted priors or exploit clustering or sparse regularization constraints to find the most prominent bands.
no code implementations • 9 Apr 2023 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Qinghua Hu, Bingzhe Wu, Changqing Zhang, Jianhua Yao
Subpopulation shift exists widely in many real-world applications; it refers to training and test distributions that contain the same subpopulation groups but in different proportions.
no code implementations • 13 Mar 2023 • Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, Peilin Zhao
To address this issue, we propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase.
no code implementations • 6 Jan 2023 • Liu Liu, Yukai Lin, Xiao Liang, Qichao Xu, Miao Jia, Yangdong Liu, Yuxiang Wen, Wei Luo, Jiangwei Li
Second, a single-image-based localization pipeline (retrieval--matching--PnP) is performed to estimate 6-DoF camera poses for each query image, one for each 3D map.
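The retrieval stage of such a retrieval--matching--PnP pipeline can be sketched as a nearest-neighbour search over global image descriptors (a minimal illustration, not the authors' implementation; the descriptor dimension and function name are assumptions):

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=5):
    """Return indices of the k database images closest to the query,
    using cosine similarity between L2-normalised global descriptors."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                     # cosine similarity per database image
    return np.argsort(-sims)[:k]      # best matches first

# Toy usage: 100 random 256-D descriptors; the query is a noisy copy of entry 42.
rng = np.random.default_rng(0)
db = rng.standard_normal((100, 256))
query = db[42] + 0.01 * rng.standard_normal(256)
print(retrieve_top_k(query, db, k=3)[0])
```

The retrieved candidates would then go through 2D-3D matching and a PnP solver to produce the 6-DoF pose.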
no code implementations • 5 Jan 2023 • Yan Yang, Liyuan Pan, Liu Liu
Our model is a self-supervised learning framework, and uses paired event camera data and natural RGB images for training.
no code implementations • CVPR 2023 • Yan Yang, Liyuan Pan, Liu Liu, Miaomiao Liu
It estimates a disparity feature map, which is used to query a trainable kernel set to estimate a blur kernel that best describes the spatially-varying blur.
no code implementations • 30 Oct 2022 • Yan Yang, Liyuan Pan, Liu Liu, Eric A Stone
Instead, we present the ISG framework, which harnesses interactions among discriminative features from texture-abundant regions through three new modules: 1) a Shannon Selection module, based on Shannon information content and Solomonoff's theory, to filter out textureless image regions; 2) a Feature Extraction network to extract expressive low-dimensional feature representations for efficient region interactions across a high-resolution image; 3) a Dual Attention network that attends to regions with desired gene expression features and aggregates them for the prediction task.
no code implementations • 19 Oct 2022 • Chengqian Gao, Ke Xu, Liu Liu, Deheng Ye, Peilin Zhao, Zhiqiang Xu
A promising paradigm for offline reinforcement learning (RL) is to constrain the learned policy to stay close to the dataset behaviors, known as policy constraint offline RL.
1 code implementation • 19 Sep 2022 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao
Importance reweighting is a normal way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset.
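Constant importance reweighting of this kind can be illustrated in a few lines (a hedged sketch; the function name is hypothetical, and group labels and target proportions are assumed known):

```python
import numpy as np

def reweight(group_ids, target_props):
    """Per-sample weights w_i = target proportion / empirical proportion
    of the sample's subpopulation group, so that the weighted training
    distribution matches the target subpopulation mix."""
    group_ids = np.asarray(group_ids)
    weights = np.empty(len(group_ids), dtype=float)
    for g, p_target in target_props.items():
        mask = group_ids == g
        p_train = mask.mean()          # empirical group proportion
        weights[mask] = p_target / p_train
    return weights

# Toy usage: group 0 is over-represented in training (80%) but should be 50%.
w = reweight([0] * 8 + [1] * 2, {0: 0.5, 1: 0.5})
print(w[0], w[-1])  # 0.625 2.5
```

Multiplying each sample's loss by its weight makes the minority group count as much as its target proportion demands; adaptive schemes replace the constant ratio with learned weights.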
no code implementations • 25 Jul 2022 • Yajing Kong, Liu Liu, Zhen Wang, DaCheng Tao
Continual learning is a learning paradigm that learns tasks sequentially under resource constraints, where the key challenge is the stability-plasticity dilemma, i.e., it is difficult to simultaneously have the stability to prevent catastrophic forgetting of old tasks and the plasticity to learn new tasks well.
no code implementations • 24 Jul 2022 • Zhen Wang, Liu Liu, Yajing Kong, Jiaxian Guo, DaCheng Tao
Based on the learnable focuses, we design a focal contrastive loss to rebalance contrastive learning between new and past classes and consolidate previously learned representations.
1 code implementation • 23 Jul 2022 • Siyuan Zhou, Liu Liu, Li Niu, Liqing Zhang
Object placement aims to place a foreground object over a background image with a suitable location and size.
1 code implementation • CVPR 2022 • Lixin Yang, Kailin Li, Xinyu Zhan, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
We first collect 1,800 common household objects and annotate their affordances to construct the first knowledge base: Oak.
1 code implementation • 26 Mar 2022 • Yujiao Shi, Xin Yu, Liu Liu, Dylan Campbell, Piotr Koniusz, Hongdong Li
We address the problem of ground-to-satellite image geo-localization, that is, estimating the camera latitude, longitude and orientation (azimuth angle) by matching a query image captured at the ground level against a large-scale database with geotagged satellite images.
no code implementations • 22 Mar 2022 • Guangqian Yang, Yibing Zhan, Jinlong Li, Baosheng Yu, Liu Liu, Fengxiang He
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness which further contributes to an efficient new adversarial defensive algorithm for GNNs.
no code implementations • 17 Mar 2022 • Kaining Zhang, Liu Liu, Min-Hsiu Hsieh, DaCheng Tao
Experimental results verify our theoretical findings in quantum simulation and quantum chemistry.
no code implementations • 28 Feb 2022 • Zhaodong Chen, Yuying Quan, Zheng Qu, Liu Liu, Yufei Ding, Yuan Xie
We evaluate 1:2 and 2:4 sparsity under different configurations and achieve 1.27x-1.89x speedups over the full-attention mechanism.
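N:M structured sparsity of this kind keeps the N largest-magnitude values in every contiguous group of M. A minimal sketch of building a 2:4 mask (illustration only, not the paper's kernel-level implementation):

```python
import numpy as np

def nm_sparsity_mask(w, n=2, m=4):
    """Boolean mask keeping the n largest-|w| entries in each group of m
    along the last axis (the last dimension must be divisible by m)."""
    groups = np.abs(w).reshape(-1, m)
    order = np.argsort(-groups, axis=1)        # rank entries within each group
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, order[:, :n], True, axis=1)  # keep top-n per group
    return mask.reshape(w.shape)

w = np.array([[0.1, -0.9, 0.4, 0.05, 2.0, -0.2, 0.3, 1.5]])
mask = nm_sparsity_mask(w)
print(mask.astype(int))  # [[0 1 1 0 1 0 0 1]]
```

Because every group of four retains exactly two values, the pattern maps onto hardware that accelerates 2:4 sparse matrix multiplication.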
no code implementations • CVPR 2022 • Liu Liu, Wenqiang Xu, Haoyuan Fu, Sucheng Qian, Yang Han, Cewu Lu
To bridge the gap, we present AKB-48: a large-scale Articulated object Knowledge Base which consists of 2,037 real-world 3D articulated object models in 48 categories.
no code implementations • 29 Jan 2022 • Liu Liu, Ziyang Tang, Lanqing Li, Dijun Luo
We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of data can be noise or even arbitrary outliers.
1 code implementation • 18 Jan 2022 • Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du
To address this problem, we propose Resistance Training using Prior Bias (RTPB) for scene graph generation.
no code implementations • CVPR 2022 • Zhen Wang, Liu Liu, Yiqun Duan, Yajing Kong, DaCheng Tao
Continual learning methods aim at training a neural network from sequential data with streaming labels, relieving catastrophic forgetting.
no code implementations • CVPR 2022 • Fuxiang Wu, Liu Liu, Fusheng Hao, Fengxiang He, Jun Cheng
Object-guided text-to-image synthesis aims to generate images from natural language descriptions via two-step frameworks, i.e., the model first generates the layout and then synthesizes images from the layout and captions.
no code implementations • 24 Dec 2021 • Sucheng Qian, Liu Liu, Wenqiang Xu, Cewu Lu
It can obtain a satisfactory segmentation result with minimal human clicks (<10).
no code implementations • 22 Dec 2021 • Weigang Lu, Yibing Zhan, Binbin Lin, Ziyu Guan, Liu Liu, Baosheng Yu, Wei Zhao, Yaming Yang, DaCheng Tao
In this paper, we conduct theoretical and experimental analysis to explore the fundamental causes of performance degradation in deep GCNs: over-smoothing and gradient vanishing have a mutually reinforcing effect that causes the performance to deteriorate more quickly in deep GCNs.
no code implementations • 17 Dec 2021 • Dongxu Li, Chenchen Xu, Liu Liu, Yiran Zhong, Rong Wang, Lars Petersson, Hongdong Li
This work studies the task of glossification, whose aim is to transcribe natural spoken language sentences for the Deaf (hard-of-hearing) community into ordered sign language glosses.
no code implementations • 14 Dec 2021 • Han Xue, Liu Liu, Wenqiang Xu, Haoyuan Fu, Cewu Lu
With the full representation of the object shape and joint states, we can address several tasks including category-level object pose estimation and the articulated object retrieval.
no code implementations • NeurIPS 2021 • Sheng Wan, Yibing Zhan, Liu Liu, Baosheng Yu, Shirui Pan, Chen Gong
Essentially, our CGPN can enhance the learning performance of GNNs under extremely limited labels by contrastively propagating the limited labels to the entire graph.
no code implementations • 21 Oct 2021 • Liu Liu, Zheng Qu, Zhaodong Chen, Yufei Ding, Yuan Xie
We demonstrate that the sparse patterns are dynamic, depending on input sequences.
no code implementations • 29 Sep 2021 • Zhaodong Chen, Liu Liu, Yuying Quan, Zheng Qu, Yufei Ding, Yuan Xie
Transformers are becoming mainstream solutions for various tasks in NLP and computer vision.
no code implementations • 29 Sep 2021 • Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
Graph pooling is essential in learning effective graph-level representations.
no code implementations • 29 Sep 2021 • Zhihao Cheng, Li Shen, Meng Fang, Liu Liu, DaCheng Tao
Imitation Learning (IL) merely concentrates on reproducing expert behaviors and could take dangerous actions, which is unacceptable in safety-critical scenarios.
no code implementations • 7 Jul 2021 • Sachihiro Youoku, Takahisa Yamamoto, Junya Saito, Akiyoshi Uchida, Xiaoyu Mi, Ziqiang Shi, Liu Liu, Zhongling Liu, Osafumi Nakayama, Kentaro Murase
Therefore, after learning the common features for each frame, we construct a facial expression estimation model and a valence-arousal model on time-series data that combine the common features with the standardized features for each video.
2 code implementations • 5 Jul 2021 • Liu Liu, Zhenchen Liu, Bo Zhang, Jiangtong Li, Li Niu, Qingyang Liu, Liqing Zhang
Image composition aims to generate a realistic composite image by inserting an object from one image into another background image; the placement (e.g., location, size, occlusion) of the inserted object may be unreasonable, which significantly degrades the quality of the composite image.
1 code implementation • 28 Jun 2021 • Li Niu, Wenyan Cong, Liu Liu, Yan Hong, Bo Zhang, Jing Liang, Liqing Zhang
We also point out the limitations of existing methods in each sub-task and the problem of the whole image composition task.
no code implementations • CVPR 2021 • Liu Liu, Hongdong Li, Haodong Yao, Ruyi Zha
Aligning two partially-overlapped 3D line reconstructions in Euclidean space is challenging, as we need to simultaneously solve line correspondences and relative pose between reconstructions.
no code implementations • 7 May 2021 • Liu Liu, Han Xue, Wenqiang Xu, Haoyuan Fu, Cewu Lu
This setting allows varied kinematic structures within a semantic category and multiple instances to co-exist in a real-world observation.
no code implementations • 24 Feb 2021 • Liu Liu, Tieyong Zeng, Zecheng Zhang
In our framework, the solution is approximated by a neural network that satisfies both the governing equation and other constraints.
Numerical Analysis
1 code implementation • 19 Jan 2021 • Ye Huang, Di Kang, Wenjing Jia, Xiangjian He, Liu Liu
Spatial and channel attentions, modelling the semantic interdependencies in spatial and channel dimensions respectively, have recently been widely used for semantic segmentation.
Ranked #9 on Semantic Segmentation on COCO-Stuff test
no code implementations • 1 Jan 2021 • Zhen Wang, Liu Liu, Yiqun Duan, DaCheng Tao
In this work, we formulate and study few-shot streaming label learning (FSLL), which models emerging new labels with only a few annotated examples by utilizing the knowledge learned from past labels.
no code implementations • ICCV 2021 • Yajing Kong, Liu Liu, Jun Wang, DaCheng Tao
Therefore, in contrast to recent works using a fixed curriculum, we devise a new curriculum learning method, Adaptive Curriculum Learning (Adaptive CL), adapting the difficulty of examples to the current state of the model.
2 code implementations • 2 Dec 2020 • Liu Liu, Hongdong Li, Haodong Yao, Ruyi Zha
Aligning two partially-overlapped 3D line reconstructions in Euclidean space is challenging, as we need to simultaneously solve correspondences and relative pose between line reconstructions.
no code implementations • 16 Oct 2020 • Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, DaCheng Tao
By contrast, this paper proves that LfO is almost equivalent to LfD in the deterministic robot environment, and more generally even in the robot environment with bounded randomness.
no code implementations • 23 Sep 2020 • Ziqiang Shi, Liu Liu, Zhongling Liu, Rujie Liu, Xiaoyu Mi, Kentaro Murase
The features are input to a bidirectional long short-term memory (BiLSTM) layer for learning the intensity distribution.
1 code implementation • NeurIPS 2021 • Junjie Chen, Li Niu, Liu Liu, Liqing Zhang
In this setting, we propose a method called SimTrans to transfer pairwise semantic similarity from base categories to novel categories.
2 code implementations • ECCV 2020 • Dylan Campbell, Liu Liu, Stephen Gould
We instead propose the first fully end-to-end trainable network for solving the blind PnP problem efficiently and globally, that is, without the need for pose priors.
no code implementations • 7 Jul 2020 • Jiacheng Zhuo, Liu Liu, Constantine Caramanis
However, existing CG-type methods are not robust to data corruption.
1 code implementation • NeurIPS 2020 • Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
In analogy to classical compressed sensing, here we assume a generative model as a prior, that is, we assume the vector is represented by a deep generative model $G: \mathbb{R}^k \rightarrow \mathbb{R}^n$.
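Recovery under such a generative prior is typically posed as gradient descent over the latent code, minimising $\|AG(z) - y\|^2$. A toy sketch with a fixed linear "generator" $G(z) = Wz$ standing in for a deep model (all names and dimensions are illustrative assumptions):

```python
import numpy as np

def recover_latent(A, y, W, k, steps=3000):
    """Gradient descent on f(z) = 0.5 * ||A @ W @ z - y||^2 over the
    latent code z; the signal estimate is x_hat = W @ z."""
    M = A @ W                                   # composed measurement operator
    lr = 1.0 / np.linalg.norm(M, 2) ** 2        # safe step size for least squares
    z = np.zeros(k)
    for _ in range(steps):
        z -= lr * M.T @ (M @ z - y)             # gradient step on z
    return W @ z

rng = np.random.default_rng(1)
k, n, m = 5, 50, 20                             # latent, signal, measurement dims
W = rng.standard_normal((n, k))
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = W @ rng.standard_normal(k)             # signal in the generator's range
y = A @ x_true                                  # compressed measurements, m << n
x_hat = recover_latent(A, y, W, k)
print(np.linalg.norm(x_hat - x_true))
```

With a deep nonlinear $G$ the objective is non-convex and the optimization is run from several random latent initializations; the linear case above just makes the measurement-versus-latent-dimension tradeoff concrete.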
no code implementations • CVPR 2020 • Xibin Song, Yuchao Dai, Dingfu Zhou, Liu Liu, Wei Li, Hongdong Li, Ruigang Yang
Second, we propose a new framework for real-world DSR, which consists of four modules: 1) An iterative residual learning module with deep supervision to learn effective high-frequency components of depth maps in a coarse-to-fine manner; 2) A channel attention strategy to enhance channels with abundant high-frequency components; 3) A multi-stage fusion module to effectively re-exploit the results in the coarse-to-fine process; and 4) A depth refinement module to improve the depth map by TGV regularization and input loss.
no code implementations • 24 Apr 2020 • Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-Kuang Chen, Yuan Xie
We show that for practically complicated problems, it is more beneficial to search large and sparse models in the weight-dominated region.
1 code implementation • 15 Mar 2020 • Liu Liu, Dylan Campbell, Hongdong Li, Dingfu Zhou, Xibin Song, Ruigang Yang
Conventional absolute camera pose estimation via a Perspective-n-Point (PnP) solver often assumes that the correspondences between 2D image pixels and 3D points are given.
1 code implementation • NeurIPS 2019 • Yujiao Shi, Liu Liu, Xin Yu, Hongdong Li
The first step is to apply a regular polar transform to warp an aerial image such that its domain is closer to that of a ground-view panorama.
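A polar transform of this kind resamples the aerial image along rays from its centre, so range becomes the vertical axis and azimuth the horizontal one, roughly matching a ground panorama's layout. A nearest-neighbour sketch (illustrative only; the paper's warp parameterization is not reproduced here):

```python
import numpy as np

def polar_transform(img, out_h=64, out_w=128):
    """Resample a square aerial image into (range, azimuth) coordinates:
    row = distance from the centre, column = angle, nearest-neighbour lookup."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), out_h)              # radius per output row
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    ys = np.clip(np.round(cy + np.outer(r, np.sin(theta))).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + np.outer(r, np.cos(theta))).astype(int), 0, w - 1)
    return img[ys, xs]

aerial = np.arange(100 * 100).reshape(100, 100)
pano_like = polar_transform(aerial)
print(pano_like.shape)  # (64, 128)
```

After this warp, horizontal shifts in the output correspond to camera orientation changes, which is what makes the subsequent cross-view feature alignment tractable.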
1 code implementation • CVPR 2020 • Wenyan Cong, Jianfu Zhang, Li Niu, Liu Liu, Zhixin Ling, Weiyuan Li, Liqing Zhang
Image composition is an important operation in image processing, but the inconsistency between foreground and background significantly degrades the quality of composite image.
Ranked #5 on Image Harmonization on HAdobe5k (1024×1024)
no code implementations • 25 Sep 2019 • Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao
The strategy is also accompanied by a mini-batch version of the proposed method that improves query complexity with respect to the size of the mini-batch.
no code implementations • 25 Sep 2019 • Kaining Zhang, Min-Hsiu Hsieh, Liu Liu, DaCheng Tao
Moreover, we propose an efficient algorithm to achieve the classical read-out of the target state.
no code implementations • 25 Sep 2019 • Liu Liu, Lei Deng, Shuangchen Li, Jingwei Zhang, Yihua Yang, Zhenyu Gu, Yufei Ding, Yuan Xie
Using Recurrent Neural Networks (RNNs) in sequence modeling tasks promises high-quality results but makes it challenging to meet stringent latency requirements because of the memory-bound execution pattern of RNNs.
no code implementations • 18 Sep 2019 • Liu Liu, Miroslaw Truszczynski
It is common for search and optimization problems to have alternative equivalent encodings in ASP.
no code implementations • 17 Sep 2019 • Kaining Zhang, Min-Hsiu Hsieh, Liu Liu, DaCheng Tao
Moreover, we propose an efficient quantum algorithm to achieve the classical read-out of the target state.
no code implementations • 11 Sep 2019 • Hao Guan, Liu Liu, Sean Moran, Fenglong Song, Gregory Slabaugh
In this paper, we propose a multi-task deep neural network called Noise Decomposition (NODE) that explicitly and separately estimates defective pixel noise, in conjunction with Gaussian and Poisson noise, to denoise an extreme low light image.
1 code implementation • 28 Aug 2019 • Wenyan Cong, Jianfu Zhang, Li Niu, Liu Liu, Zhixin Ling, Weiyuan Li, Liqing Zhang
Image composition is an important operation in image processing, but the inconsistency between foreground and background significantly degrades the quality of composite image.
1 code implementation • 11 Jul 2019 • Yujiao Shi, Xin Yu, Liu Liu, Tong Zhang, Hongdong Li
This paper proposes a novel Cross-View Feature Transport (CVFT) technique to explicitly establish cross-view domain transfer that facilitates feature alignment between ground and aerial images.
1 code implementation • CVPR 2019 • Liu Liu, Hongdong Li
The goal is to predict the spatial location of a ground-level query image by matching it to a large geotagged aerial image database (e.g., satellite imagery).
no code implementations • 12 Feb 2019 • Ziqiang Shi, Huibin Lin, Liu Liu, Rujie Liu, Jiqing Han, Anyan Shi
Deep dilated temporal convolutional networks (TCNs) have proven to be very effective in sequence modeling.
Sound • Audio and Speech Processing
no code implementations • 24 Jan 2019 • Liu Liu, Tianyang Li, Constantine Caramanis
We define a natural condition we call the Robust Descent Condition (RDC), and show that if a gradient estimator satisfies the RDC, then Robust Hard Thresholding (IHT using this gradient estimator), is guaranteed to obtain good statistical rates.
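The idea — iterative hard thresholding driven by a robust gradient estimator — can be sketched with a coordinate-wise trimmed mean standing in for an estimator satisfying the RDC (a simplified illustration under noiseless data, not the authors' exact algorithm):

```python
import numpy as np

def trimmed_mean(grads, trim=0.1):
    """Coordinate-wise mean after discarding the largest and smallest
    trim-fraction of per-sample values -- a simple robust estimator."""
    n = grads.shape[0]
    k = int(n * trim)
    return np.sort(grads, axis=0)[k:n - k].mean(axis=0)

def robust_iht(X, y, sparsity, steps=500, lr=0.5, trim=0.1):
    """Iterative hard thresholding: a robust gradient step followed by
    projection onto the set of `sparsity`-sparse vectors."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        per_sample = (X @ w - y)[:, None] * X      # per-sample squared-loss gradients
        w -= lr * trimmed_mean(per_sample, trim)
        keep = np.argsort(-np.abs(w))[:sparsity]   # top-k magnitudes survive
        mask = np.zeros(d, dtype=bool)
        mask[keep] = True
        w[~mask] = 0.0
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20))
beta = np.zeros(20); beta[[2, 7, 11]] = [1.0, -2.0, 0.5]
w_hat = robust_iht(X, X @ beta, sparsity=3)
print(np.abs(w_hat - beta).max())
```

With corrupted samples, the trimming step is what discards the outliers' gradient contributions; any estimator meeting the RDC could be dropped into the same loop.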
no code implementations • ICLR 2019 • Liu Liu, Lei Deng, Xing Hu, Maohua Zhu, Guoqi Li, Yufei Ding, Yuan Xie
We propose to execute deep neural networks (DNNs) with dynamic and sparse graph (DSG) structure for compressive memory and accelerative execution during both training and inference.
no code implementations • 26 Sep 2018 • Liu Liu, Xuanqing Liu, Cho-Jui Hsieh, DaCheng Tao
Trust region and cubic regularization methods have demonstrated good performance in small scale non-convex optimization, showing the ability to escape from saddle points.
no code implementations • 6 Sep 2018 • Liu Liu, Ji Liu, Cho-Jui Hsieh, DaCheng Tao
In this paper, we consider the convex and non-convex composition problem with the structure $\frac{1}{n}\sum_{i=1}^n F_i(G(x))$, where $G(x)=\frac{1}{n}\sum_{j=1}^n G_j(x)$ is the inner function and $F_i(\cdot)$ is the outer function.
5 code implementations • ICCV 2019 • Liu Liu, Hongdong Li, Yuchao Dai
This paper tackles the problem of large-scale image-based localization (IBL) where the spatial location of a query image is determined by finding out the most similar reference images in a large database.
no code implementations • 30 May 2018 • Liu Liu, Minhao Cheng, Cho-Jui Hsieh, DaCheng Tao
However, due to the variance in the search direction, the convergence rates and query complexities of existing methods suffer from a factor of $d$, where $d$ is the problem dimension.
no code implementations • 29 May 2018 • Liu Liu, Yanyao Shen, Tianyang Li, Constantine Caramanis
Our algorithm recovers the true sparse parameters with sub-linear sample complexity, in the presence of a constant fraction of arbitrary corruptions.
no code implementations • 23 May 2018 • Tianyang Li, Anastasios Kyrillidis, Liu Liu, Constantine Caramanis
We present a novel statistical inference framework for convex empirical risk minimization, using approximate stochastic Newton steps.
no code implementations • 4 Apr 2018 • Liu Liu, Hairong Qi
Learning compact representation is vital and challenging for large scale multimedia data.
no code implementations • 27 Feb 2018 • Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Yuan Xie, Luping Shi
Batch Normalization (BN) has been proven to be quite effective at accelerating and improving the training of deep neural networks (DNNs).
no code implementations • 13 Nov 2017 • Liu Liu, Ji Liu, DaCheng Tao
In this paper, we apply the variance-reduced technique to derive two variance reduced algorithms that significantly improve the query complexity if the number of inner component functions is large.
no code implementations • 26 Oct 2017 • Liu Liu, Ji Liu, DaCheng Tao
We consider composition optimization with two expected-value functions of the form $\frac{1}{n}\sum_{i=1}^n F_i\big(\frac{1}{m}\sum_{j=1}^m G_j(x)\big)+R(x)$, which formulates many important problems in statistical learning and machine learning, such as solving Bellman equations in reinforcement learning and nonlinear embedding.
no code implementations • ICCV 2017 • Liu Liu, Hongdong Li, Yuchao Dai
In this paper, we introduce a global method which harnesses global contextual information exhibited both within the query image and among all the 3D points in the map.
no code implementations • 23 Jul 2017 • Alireza Rahimpour, Liu Liu, Ali Taalimi, Yang Song, Hairong Qi
Despite recent attempts for solving the person re-identification problem, it remains a challenging task since a person's appearance can vary significantly when large variations in view angle, human pose, and illumination are involved.
no code implementations • 30 May 2017 • Ali Taalimi, Alireza Rahimpour, Liu Liu, Hairong Qi
Nowadays, distributed smart cameras are deployed for a wide set of tasks in several application scenarios, ranging from object recognition and image retrieval to forensic applications.
no code implementations • 30 May 2017 • Ali Taalimi, Liu Liu, Hairong Qi
We use a network flow approach to link detections at the low level and tracklets at the high level.
no code implementations • 21 May 2017 • Tianyang Li, Liu Liu, Anastasios Kyrillidis, Constantine Caramanis
We present a novel method for frequentist statistical inference in $M$-estimation problems, based on stochastic gradient descent (SGD) with a fixed step size: we demonstrate that the average of such SGD sequences can be used for statistical inference, after proper scaling.
no code implementations • 20 Apr 2017 • Ziqiang Shi, Liu Liu, Mengjiao Wang, Rujie Liu
However, in practical use, when a multi-task-learned network is used as a feature extractor, the extracted features are always attached to several labels.
no code implementations • 15 Mar 2017 • Liu Liu, Alireza Rahimpour, Ali Taalimi, Hairong Qi
Furthermore, to handle multi-label images, we design a joint cross-entropy loss that includes both softmax cross entropy and weighted binary cross entropy, in consideration of the correlation and independence of labels, respectively.
no code implementations • 20 Jun 2016 • Maohua Zhu, Liu Liu, Chao Wang, Yuan Xie
To improve performance and maintain scalability, we present CNNLab, a novel deep learning framework using GPU- and FPGA-based accelerators.
no code implementations • 12 May 2016 • Liu Liu, Hongdong Li, Yuchao Dai
When the solver is used in combination with RANSAC, we are able to quickly prune unpromising hypotheses, significantly improving the chance of finding inliers.
no code implementations • 18 Dec 2014 • Yunchao Gong, Liu Liu, Ming Yang, Lubomir Bourdev
In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs.
no code implementations • NeurIPS 2010 • Ni Lao, Jun Zhu, Liu Liu, Yandong Liu, William W. Cohen
Markov networks (MNs) can incorporate arbitrarily complex features in modeling relational data.