no code implementations • COLING 2022 • Li Gao, Lingyun Song, Jie Liu, Bolin Chen, Xuequn Shang
However, little attention is paid to the authenticity of the relationships and the topology imbalance in the structure of NPG, both of which mislead existing methods and thus lead to incorrect prediction results.
1 code implementation • ECCV 2020 • Miao Zhang, Sun Xiao Fei, Jie Liu, Shuang Xu, Yongri Piao, Huchuan Lu
In this paper, we propose an asymmetric two-stream architecture taking account of the inherent differences between RGB and depth data for saliency detection.
Ranked #19 on Thermal Image Segmentation on RGB-T-Glass-Segmentation
no code implementations • 26 Mar 2023 • Zhuoying Zhao, Ziling Tan, Pinghui Mo, Xiaonan Wang, Dan Zhao, Xin Zhang, Ming Tao, Jie Liu
This paper proposes a special-purpose system to achieve high-accuracy and high-efficiency machine learning (ML) molecular dynamics (MD) calculations.
no code implementations • 12 Mar 2023 • Hao Chen, Zhe-Ming Lu, Jie Liu
This paper focuses on proposing a deep learning-based monkey swing counting algorithm.
no code implementations • 9 Mar 2023 • Jie Liu, Yixuan Liu, Xue Han, Chao Deng, Junlan Feng
Previous contrastive learning methods for sentence representations often focus on insensitive transformations to produce positive pairs, but neglect the role of sensitive transformations that are harmful to semantic representations.
no code implementations • 8 Feb 2023 • Xubo Qin, Xiyuan Liu, Xiongfeng Zheng, Jie Liu, Yutao Zhu
Specifically, when the student models are in cross-encoder architecture, a pairwise loss of hard labels is critical for training student models, whereas the distillation objectives of intermediate Transformer layers may hurt performance.
no code implementations • 16 Jan 2023 • Shanshan Chen, Jie Liu, Yixiang Wu
In this paper, we study a three-patch two-species Lotka-Volterra competition patch model over a stream network.
no code implementations • 16 Jan 2023 • Shanshan Chen, Jie Liu, Yixiang Wu
In this paper, we study a Lotka-Volterra competition patch model for two stream species, with the patches aligned along a line.
no code implementations • 9 Jan 2023 • Jie Liu, Yanqi Bao, Wenzhe Yin, Haochen Wang, Yang Gao, Jan-Jakob Sonke, Efstratios Gavves
However, the appearance variations between objects from the same category could be extremely large, leading to unreliable feature matching and query mask prediction.
Ranked #22 on Few-Shot Semantic Segmentation on PASCAL-5i (1-Shot)
1 code implementation • 2 Jan 2023 • Jie Liu, Yixiao Zhang, Jie-Neng Chen, Junfei Xiao, Yongyi Lu, Bennett A. Landman, Yixuan Yuan, Alan Yuille, Yucheng Tang, Zongwei Zhou
The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets.
Ranked #1 on Organ Segmentation on BTCV
no code implementations • 28 Dec 2022 • Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
Federated learning (FL) is a method for training models with distributed data from numerous participants, such as IoT devices.
no code implementations • 30 Nov 2022 • Yue Li, Li Zhang, Namin Wang, Jie Liu, Lei Xie
Specifically, the weight transfer fine-tuning constrains the distance between the weights of the pre-trained model and those of the fine-tuned model, which retains the discriminative ability previously acquired from large-scale out-of-domain datasets while avoiding catastrophic forgetting and overfitting.
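A minimal sketch of such a weight-distance constraint, assuming a simple L2 penalty between the fine-tuned and pre-trained parameters (the exact form and weighting used in the paper may differ):

```python
import torch

def weight_transfer_penalty(model, pretrained_state, weight=1e-3):
    """Sum of squared distances between current and pre-trained weights."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in pretrained_state:
            penalty = penalty + ((param - pretrained_state[name].detach()) ** 2).sum()
    return weight * penalty

# usage sketch: total_loss = task_loss + weight_transfer_penalty(model, pretrained_state)
```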
1 code implementation • 30 Nov 2022 • Jie Liu, Chao Chen, Jie Tang, Gangshan Wu
In the fine area, we use an Intra-Patch Self-Attention (IPSA) module to model long-range pixel dependencies in a local patch, and then a $3\times3$ convolution is applied to process the finest details.
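A rough sketch of that idea: attend within non-overlapping local patches, then apply a 3x3 convolution. The module name, patch size, and single-head attention are placeholder assumptions, not the paper's exact IPSA implementation:

```python
import torch
import torch.nn as nn

class LocalPatchAttention(nn.Module):
    """Self-attention inside non-overlapping p x p patches, followed by a 3x3 conv."""
    def __init__(self, channels, patch=8, heads=1):   # heads must divide channels
        super().__init__()
        self.patch = patch
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):            # x: (B, C, H, W), H and W divisible by patch
        b, c, h, w = x.shape
        p = self.patch
        # split the feature map into patches and flatten each patch into a token sequence
        tokens = (x.reshape(b, c, h // p, p, w // p, p)
                    .permute(0, 2, 4, 3, 5, 1)
                    .reshape(b * (h // p) * (w // p), p * p, c))
        tokens, _ = self.attn(tokens, tokens, tokens)
        # fold the patches back into the feature map layout
        x = (tokens.reshape(b, h // p, w // p, p, p, c)
                   .permute(0, 5, 1, 3, 2, 4)
                   .reshape(b, c, h, w))
        return self.conv(x)

x = torch.randn(1, 32, 64, 64)
print(LocalPatchAttention(32, patch=8)(x).shape)      # torch.Size([1, 32, 64, 64])
```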
1 code implementation • 29 Nov 2022 • Chuming Li, Jie Liu, Yinmin Zhang, Yuhong Wei, Yazhe Niu, Yaodong Yang, Yu Liu, Wanli Ouyang
In the learning phase, each agent minimizes the TD error that is dependent on how the subsequent agents have reacted to their chosen action.
Ranked #1 on SMAC on SMAC 3s5z_vs_3s6z
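For reference, a generic tabular TD(0) error and update in NumPy; this is the standard single-agent form, not the sequential multi-agent scheme described in the entry above:

```python
import numpy as np

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) step: move V(s) toward the bootstrapped target r + gamma * V(s')."""
    td_error = r + gamma * V[s_next] - V[s]
    V[s] += alpha * td_error
    return td_error

V = np.zeros(5)                      # toy value table over 5 states
print(td_update(V, s=0, r=1.0, s_next=1))
```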
no code implementations • 7 Nov 2022 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Ganzorig Gankhuyag, Jingang Huh, Myeong Kyun Kim, Kihwan Yoon, Hyeon-Cheol Moon, Seungho Lee, Yoonsik Choe, Jinwoo Jeong, Sungjei Kim, Maciej Smyl, Tomasz Latkowski, Pawel Kubik, Michal Sokolski, Yujie Ma, Jiahao Chao, Zhou Zhou, Hongfan Gao, Zhengfeng Yang, Zhenbing Zeng, Zhengyang Zhuge, Chenghua Li, Dan Zhu, Mengdi Sun, Ran Duan, Yan Gao, Lingshun Kong, Long Sun, Xiang Li, Xingdong Zhang, Jiawei Zhang, Yaqi Wu, Jinshan Pan, Gaocheng Yu, Jin Zhang, Feng Zhang, Zhe Ma, Hongbin Wang, Hojin Cho, Steve Kim, Huaen Li, Yanbo Ma, Ziwei Luo, Youwei Li, Lei Yu, Zhihong Wen, Qi Wu, Haoqiang Fan, Shuaicheng Liu, Lize Zhang, Zhikai Zong, Jeremy Kwon, Junxi Zhang, Mengyuan Li, Nianxiang Fu, Guanchen Ding, Han Zhu, Zhenzhong Chen, Gen Li, Yuanfan Zhang, Lei Sun, Dafeng Zhang, Neo Yang, Fitz Liu, Jerry Zhao, Mustafa Ayazoglu, Bahri Batuhan Bilecen, Shota Hirose, Kasidis Arunruangsirilert, Luo Ao, Ho Chun Leung, Andrew Wei, Jie Liu, Qiang Liu, Dahai Yu, Ao Li, Lei Luo, Ce Zhu, Seongmin Hong, Dongwon Park, Joonhee Lee, Byeong Hyun Lee, Seunggyu Lee, Se Young Chun, Ruiyuan He, Xuhao Jiang, Haihang Ruan, Xinjian Zhang, Jing Liu, Garas Gendy, Nabil Sabor, Jingchao Hou, Guanghui He
While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints.
no code implementations • 6 Nov 2022 • Jixun Yao, Qing Wang, Yi Lei, Pengcheng Guo, Lei Xie, Namin Wang, Jie Liu
By directly scaling the formant and F0, the speaker distinguishability degradation of the anonymized speech caused by the introduction of other speakers is prevented.
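A toy illustration of scaling an F0 contour by a constant factor in NumPy; the actual anonymization system also modifies formants and chooses its scale factors carefully, none of which is reproduced here:

```python
import numpy as np

def scale_f0(f0, scale=1.2):
    """Scale voiced F0 values; zeros (unvoiced frames) are left untouched."""
    f0 = np.asarray(f0, dtype=float)
    return np.where(f0 > 0, f0 * scale, 0.0)

print(scale_f0([0.0, 110.0, 115.0, 0.0, 120.0]))
```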
no code implementations • 28 Oct 2022 • Peipei Liu, Xin Zheng, Hong Li, Jie Liu, Yimo Ren, Hongsong Zhu, Limin Sun
At the second stage, a self-supervised contrastive learning is designed for the improvement of the distilled unimodal representations after cross-modal interaction.
1 code implementation • 19 Oct 2022 • Peipei Liu, Gaosheng Wang, Hong Li, Jie Liu, Yimo Ren, Hongsong Zhu, Limin Sun
With social media posts tending to be multimodal, Multimodal Named Entity Recognition (MNER) for the text with its accompanying image is attracting more and more attention since some textual components can only be understood in combination with visual information.
no code implementations • 19 Oct 2022 • Peipei Liu, Hong Li, Zhiyu Wang, Yimo Ren, Jie Liu, Fei Lyu, Hongsong Zhu, Limin Sun
Enterprise relation extraction aims to detect pairs of enterprise entities and identify the business relations between them from unstructured or semi-structured text data, and it is crucial for several real-world applications such as risk analysis, rating research and supply chain security.
no code implementations • 18 Oct 2022 • Xinhai Chen, Jie Liu, Junjun Yan, Zhichao Wang, Chunye Gong
To improve the prediction accuracy of the neural network, we also introduce a novel auxiliary line strategy and an efficient network model during meshing.
no code implementations • 17 Oct 2022 • Joey Wang, Yingcan Wei, Minseok Lee, Matthias Langer, Fan Yu, Jie Liu, Alex Liu, Daniel Abel, Gems Guo, Jianbing Dong, Jerry Shi, Kunlun Li
In this talk, we introduce Merlin HugeCTR.
no code implementations • 8 Oct 2022 • Jie Liu, Jingjing Wang, Peng Zhang, Chunmao Wang, Di Xie, ShiLiang Pu
To overcome these limitations, we propose a multi-scale wavelet transformer framework for face forgery detection.
no code implementations • 23 Sep 2022 • Tan Yu, Zhipeng Jin, Jie Liu, Yi Yang, Hongliang Fei, Ping Li
To overcome the limitations of behavior ID features in modeling new ads, we exploit the visual content in ads to boost the performance of CTR prediction models.
no code implementations • 19 Sep 2022 • Tan Yu, Jie Liu, Yi Yang, Yi Li, Hongliang Fei, Ping Li
How to pair the video ads with the user search is the core task of Baidu video advertising.
1 code implementation • 1 Aug 2022 • Yilan Zhang, Fengying Xie, Xuedong Song, Hangning Zhou, Yiguang Yang, Haopeng Zhang, Jie Liu
As such, they have brought great improvements to many dermoscopy image analysis tasks.
1 code implementation • 1 Jul 2022 • Peipei Liu, Hong Li, Zuoguang Wang, Jie Liu, Yimo Ren, Hongsong Zhu
Extracting cybersecurity entities such as attackers and vulnerabilities from unstructured network texts is an important part of security analysis.
1 code implementation • 7 Jun 2022 • Zhifeng Ma, Hao Zhang, Jie Liu
Spatiotemporal predictive learning, which predicts future frames through historical prior knowledge with the aid of deep learning, is widely used in many fields.
1 code implementation • ICLR 2022 • Wei Ji, Jingjing Li, Qi Bi, Chuan Guo, Jie Liu, Li Cheng
The laborious and time-consuming manual annotation has become a real bottleneck in various practical scenarios.
2 code implementations • 11 May 2022 • Yawei Li, Kai Zhang, Radu Timofte, Luc van Gool, Fangyuan Kong, Mingxi Li, Songwei Liu, Zongcai Du, Ding Liu, Chenhui Zhou, Jingyi Chen, Qingrui Han, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Haoming Cai, Yu Qiao, Chao Dong, Long Sun, Jinshan Pan, Yi Zhu, Zhikai Zong, Xiaoxiao Liu, Zheng Hui, Tao Yang, Peiran Ren, Xuansong Xie, Xian-Sheng Hua, Yanbo Wang, Xiaozhong Ji, Chuming Lin, Donghao Luo, Ying Tai, Chengjie Wang, Zhizhong Zhang, Yuan Xie, Shen Cheng, Ziwei Luo, Lei Yu, Zhihong Wen, Qi Wu, Youwei Li, Haoqiang Fan, Jian Sun, Shuaicheng Liu, Yuanfei Huang, Meiguang Jin, Hua Huang, Jing Liu, Xinjian Zhang, Yan Wang, Lingshun Long, Gen Li, Yuanfan Zhang, Zuowei Cao, Lei Sun, Panaetov Alexander, Yucong Wang, Minjie Cai, Li Wang, Lu Tian, Zheyuan Wang, Hongbing Ma, Jie Liu, Chao Chen, Yidong Cai, Jie Tang, Gangshan Wu, Weiran Wang, Shirui Huang, Honglei Lu, Huan Liu, Keyan Wang, Jun Chen, Shi Chen, Yuchun Miao, Zimo Huang, Lefei Zhang, Mustafa Ayazoğlu, Wei Xiong, Chengyi Xiong, Fei Wang, Hao Li, Ruimian Wen, Zhijing Yang, Wenbin Zou, Weixin Zheng, Tian Ye, Yuncheng Zhang, Xiangzhen Kong, Aditya Arora, Syed Waqas Zamir, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Dandan Gao, Dengwen Zhou, Qian Ning, Jingzhu Tang, Han Huang, YuFei Wang, Zhangheng Peng, Haobo Li, Wenxue Guan, Shenghua Gong, Xin Li, Jun Liu, Wanjun Wang, Dengwen Zhou, Kun Zeng, Hanjiang Lin, Xinyu Chen, Jinsheng Fang
The aim was to design a network for single image super-resolution that improves efficiency, measured by several metrics including runtime, parameters, FLOPs, activations, and memory consumption, while at least maintaining a PSNR of 29.00 dB on the DIV2K validation set.
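A small helper showing how a PSNR constraint like this is typically checked, assuming images in the [0, 255] range; this is the generic definition, not the challenge's official evaluation script:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.random.randint(0, 256, (64, 64, 3))
b = np.clip(a + np.random.randint(-3, 4, a.shape), 0, 255)
print(psnr(a, b) >= 29.0)        # challenge entries had to stay above 29.00 dB on DIV2K
```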
no code implementations • CVPR 2022 • Jie Liu, Yanqi Bao, Guo-Sen Xie, Huan Xiong, Jan-Jakob Sonke, Efstratios Gavves
Specifically, in DPCN, a dynamic convolution module (DCM) is firstly proposed to generate dynamic kernels from support foreground, then information interaction is achieved by convolution operations over query features using these kernels.
Ranked #17 on Few-Shot Semantic Segmentation on PASCAL-5i (1-Shot)
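A bare-bones sketch of the idea in the entry above: derive a convolution kernel from masked support features and apply it to the query features. The pooling-based kernel generation, depthwise grouping, and shapes are illustrative assumptions, not the paper's DCM:

```python
import torch
import torch.nn.functional as F

def dynamic_conv(query_feat, support_feat, support_mask, ksize=3):
    """Build a per-channel kernel from masked support features, convolve the query with it."""
    b, c, h, w = support_feat.shape
    masked = support_feat * support_mask                 # keep only support foreground
    kernel = F.adaptive_avg_pool2d(masked, ksize)        # (B, C, k, k) dynamic kernel
    kernel = kernel.reshape(b * c, 1, ksize, ksize)
    q = query_feat.reshape(1, b * c, *query_feat.shape[-2:])
    out = F.conv2d(q, kernel, padding=ksize // 2, groups=b * c)
    return out.reshape_as(query_feat)

q = torch.randn(2, 16, 32, 32)
s = torch.randn(2, 16, 32, 32)
m = (torch.rand(2, 1, 32, 32) > 0.5).float()
print(dynamic_conv(q, s, m).shape)                       # torch.Size([2, 16, 32, 32])
```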
1 code implementation • 18 Apr 2022 • Zongcai Du, Ding Liu, Jie Liu, Jie Tang, Gangshan Wu, Lean Fu
In addition, FMEN-S achieves the lowest memory consumption and the second shortest runtime in the NTIRE 2022 challenge on efficient super-resolution.
no code implementations • 7 Apr 2022 • Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
On the other hand, it enlarges the distances between local models, resulting in an aggregated global model with poor performance.
1 code implementation • CVPR 2022 • Xiaoqing Guo, Jie Liu, Tongliang Liu, Yixuan Yuan
By exploiting computational geometry analysis and properties of segmentation, we design three complementary regularizers, i.e., volume regularization, anchor guidance, and convex guarantee, to approximate the true SimT.
1 code implementation • 16 Mar 2022 • Feiyang Cai, Zhenkai Zhang, Jie Liu, Xenofon Koutsoukos
However, in a more realistic open set scenario, traditional classifiers with incomplete knowledge cannot tackle test data that are not from the training classes.
no code implementations • 13 Feb 2022 • Hao Wang, Yu Bai, Guangmin Sun, Jie Liu
Powerful recognition algorithms are widely used on the Internet and in critical medical systems, which poses a serious threat to personal privacy.
no code implementations • 7 Dec 2021 • Huiling Zhou, Jie Liu, Zhikang Li, Jin Yu, Hongxia Yang
With user history represented by a domain-aware sequential model, a frequency encoder is applied to the underlying tags for user content preference learning.
1 code implementation • 27 Nov 2021 • Jie Liu, Jie Tang, Gangshan Wu
We found that the standard deviation of the residual features shrinks a lot after normalization layers, which causes performance degradation in SR networks.
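A quick diagnostic in the spirit of that observation: compare the standard deviation of a residual feature before and after a normalization layer. This toy BatchNorm example is only an illustration, not the paper's exact measurement:

```python
import torch
import torch.nn as nn

residual = torch.randn(8, 64, 32, 32) * 5.0        # toy residual features with a large spread
bn = nn.BatchNorm2d(64)
normalized = bn(residual)

print("std before norm:", residual.std().item())
print("std after norm: ", normalized.std().item())  # shrinks toward 1 under BatchNorm
```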
no code implementations • 24 Nov 2021 • Shiqi Liu, Lu Wang, Jie Lian, Ting Chen, Cong Liu, Xuchen Zhan, Jintao Lu, Jie Liu, Ting Wang, Dong Geng, Hongwei Duan, Yuze Tian
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and map-making tasks.
no code implementations • 9 Nov 2021 • Shang Li, GuiXuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
In this paper, instead of directly applying the LR guidance, we propose an additional invertible flow guidance module (FGM), which can transform the downscaled representation to the visually plausible image during downscaling and transform it back during upscaling.
no code implementations • 24 Sep 2021 • Tai-Hsien Wu, Chunfeng Lian, Sanghee Lee, Matthew Pastewait, Christian Piers, Jie Liu, Fang Wang, Li Wang, Chiung-Ying Chiu, Wenchi Wang, Christina Jackson, Wei-Lun Chao, Dinggang Shen, Ching-Chang Ko
Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with both improved accuracy and efficiency) to label each tooth on the downsampled scan.
no code implementations • 29 Aug 2021 • Zhiqiang Cao, Zhijun Li, Pan Heng, Yongrui Chen, Daqi Xie, Jie Liu
To address this challenge, we propose a small-big model framework that deploys a big model in the cloud and a small model on the edge devices.
no code implementations • 9 Jul 2021 • Zhenhou Hong, Jianzong Wang, Xiaoyang Qu, Jie Liu, Chendong Zhao, Jing Xiao
Text to speech (TTS) is a crucial task for user interaction, but TTS model training relies on a sizable set of high-quality original datasets.
no code implementations • 6 Jul 2021 • Shang Li, GuiXuan Zhang, Zhengxiong Luo, Jie Liu, Zhi Zeng, Shuwu Zhang
As a result, most previous methods may suffer a performance drop when the degradations of test images are unknown and various (i.e., the case of blind SR).
1 code implementation • CVPR 2021 • Guo-Sen Xie, Jie Liu, Huan Xiong, Ling Shao
However, they fail to fully leverage the high-order appearance relationships between multi-scale features among the support-query image pairs, thus leading to an inaccurate localization of the query objects.
no code implementations • 9 Jun 2021 • Chunzhi Yi, Feng Jiang, Baichun Wei, Chifu Yang, Zhen Ding, Jubo Jin, Jie Liu
The results demonstrate that our method is a promising solution for detecting and correcting IMU movements during JAE.
2 code implementations • 20 May 2021 • Zongcai Du, Jie Liu, Jie Tang, Gangshan Wu
Along with the rapid development of real-world applications, higher requirements on the accuracy and efficiency of image super-resolution (SR) are brought forward.
1 code implementation • 17 May 2021 • Andrey Ignatov, Radu Timofte, Maurizio Denna, Abdel Younes, Andrew Lek, Mustafa Ayazoglu, Jie Liu, Zongcai Du, Jiaming Guo, Xueyi Zhou, Hao Jia, Youliang Yan, Zexin Zhang, Yixin Chen, Yunbo Peng, Yue Lin, Xindong Zhang, Hui Zeng, Kun Zeng, Peirong Li, Zhihuang Liu, Shiqi Xue, Shengpeng Wang
Image super-resolution is one of the most popular computer vision problems with many important applications to mobile devices.
no code implementations • 26 Apr 2021 • Jie Chen, Jie Liu, Chang Liu, Jian Zhang, Bing Han
To overcome this issue and to further improve the recognition performance, we adopt a deep learning approach for underwater target recognition and propose a LOFAR spectrum enhancement (LSE)-based underwater target recognition scheme, which consists of preprocessing, offline training, and online testing.
no code implementations • 16 Apr 2021 • Weiqi Shu, Ling Wang, Bolong Liu, Jie Liu
How to measure LAI accurately and efficiently is the key to the crop yield estimation problem.
2 code implementations • 13 Mar 2021 • Shaowei Chen, Yu Wang, Jie Liu, Yuelin Wang
Aspect sentiment triplet extraction (ASTE), which aims to identify aspects from review sentences along with their corresponding opinion expressions and sentiments, is an emerging task in fine-grained opinion mining.
Aspect Sentiment Triplet Extraction • Machine Reading Comprehension • +2
no code implementations • 1 Mar 2021 • Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang
In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9 TB of images and 292 GB of texts covering a wide range of domains.
no code implementations • 18 Feb 2021 • Jin Li, Jie Liu, Shangzhou Li, Yao Xu, Ran Cao, Qi Li, Biye Jiang, Guan Wang, Han Zhu, Kun Gai, Xiaoqiang Zhu
When receiving a user request, matching system (i) finds the crowds that the user belongs to; (ii) retrieves all ads that have targeted those crowds.
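A toy illustration of that two-step lookup with plain Python dictionaries; the crowd and ad identifiers are made up for the example:

```python
# hypothetical crowd-targeting tables
user_to_crowds = {"user_42": {"sports_fans", "new_parents"}}
crowd_to_ads = {
    "sports_fans": {"ad_1", "ad_7"},
    "new_parents": {"ad_3"},
}

def match_ads(user_id):
    """(i) find the crowds the user belongs to; (ii) union the ads targeting those crowds."""
    ads = set()
    for crowd in user_to_crowds.get(user_id, set()):
        ads |= crowd_to_ads.get(crowd, set())
    return ads

print(match_ads("user_42"))      # {'ad_1', 'ad_3', 'ad_7'}
```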
no code implementations • ICCV 2021 • Guo-Sen Xie, Huan Xiong, Jie Liu, Yazhou Yao, Ling Shao
Specifically, we first generate N pairs (key and value) of multi-resolution query features guided by the support feature and its mask.
1 code implementation • ICCV 2021 • Miao Zhang, Jie Liu, Yifei Wang, Yongri Piao, Shunyu Yao, Wei Ji, Jingjing Li, Huchuan Lu, Zhongxuan Luo
Our bidirectional dynamic fusion strategy encourages the interaction of spatial and temporal information in a dynamic manner.
Ranked #11 on Video Polyp Segmentation on SUN-SEG-Easy (Unseen)
no code implementations • 31 Dec 2020 • Zhi-Qin Zhan, Huazhu Fu, Yan-Yao Yang, Jingjing Chen, Jie Liu, Yu-Gang Jiang
However, there are several issues between the image-based training and video-based inference, including domain differences, lack of positive samples, and temporal smoothness.
1 code implementation • CVPR 2021 • Jie Liu, Chuming Li, Feng Liang, Chen Lin, Ming Sun, Junjie Yan, Wanli Ouyang, Dong Xu
To develop a practical method for learning complex inception convolution based on the data, a simple but effective search algorithm, referred to as efficient dilation optimization (EDO), is developed.
no code implementations • 27 Oct 2020 • Yitong Meng, Jie Liu, Xiao Yan, James Cheng
When a new user just signs up on a website, we usually have no information about him/her, i.e., no interactions with items, no user profile, and no social links with other users.
no code implementations • 21 Oct 2020 • Jie Liu, Chen Lin, Chuming Li, Lu Sheng, Ming Sun, Junjie Yan, Wanli Ouyang
Several variants of stochastic gradient descent (SGD) have been proposed to improve the learning effectiveness and efficiency when training deep neural networks, among which some recent influential attempts adaptively control the parameter-wise learning rate (e.g., Adam and RMSProp).
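For context, a minimal NumPy version of the RMSProp-style parameter-wise step size that such methods adapt; the paper analyzes and modifies this family rather than introducing it:

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: each parameter gets lr scaled by its own gradient history."""
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

w, cache = np.zeros(3), np.zeros(3)
w, cache = rmsprop_step(w, grad=np.array([0.5, -2.0, 0.1]), cache=cache)
print(w)
```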
no code implementations • 15 Oct 2020 • Ling Wang, Cheng Zhang, Zejian Luo, ChenGuang Liu, Jie Liu, Xi Zheng, Athanasios Vasilakos
To reduce the computational cost without loss of generality, we present a defense strategy, called progressive defense against adversarial attacks (PDAAA), for efficiently and effectively filtering out the adversarial pixel mutations that could mislead the neural network towards erroneous outputs, without a priori knowledge of the attack type.
2 code implementations • 24 Sep 2020 • Jie Liu, Jie Tang, Gangshan Wu
Thanks to FDC, we can rethink the information multi-distillation network (IMDN) and propose a lightweight and accurate SISR model called residual feature distillation network (RFDN).
3 code implementations • 15 Sep 2020 • Kai Zhang, Martin Danelljan, Yawei Li, Radu Timofte, Jie Liu, Jie Tang, Gangshan Wu, Yu Zhu, Xiangyu He, Wenjie Xu, Chenghua Li, Cong Leng, Jian Cheng, Guangyang Wu, Wenyi Wang, Xiaohong Liu, Hengyuan Zhao, Xiangtao Kong, Jingwen He, Yu Qiao, Chao Dong, Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan, Xiaochuan Li, Zhiqiang Lang, Jiangtao Nie, Wei Wei, Lei Zhang, Abdul Muqeet, Jiwon Hwang, Subin Yang, JungHeum Kang, Sung-Ho Bae, Yongwoo Kim, Geun-Woo Jeon, Jun-Ho Choi, Jun-Hyuk Kim, Jong-Seok Lee, Steven Marty, Eric Marty, Dongliang Xiong, Siang Chen, Lin Zha, Jiande Jiang, Xinbo Gao, Wen Lu, Haicheng Wang, Vineeth Bhaskara, Alex Levinshtein, Stavros Tsogkas, Allan Jepson, Xiangzhen Kong, Tongtong Zhao, Shanshan Zhao, Hrishikesh P. S, Densen Puthussery, Jiji C. V, Nan Nan, Shuai Liu, Jie Cai, Zibo Meng, Jiaming Ding, Chiu Man Ho, Xuehui Wang, Qiong Yan, Yuzhi Zhao, Long Chen, Jiangtao Zhang, Xiaotong Luo, Liang Chen, Yanyun Qu, Long Sun, Wenhao Wang, Zhenbing Liu, Rushi Lan, Rao Muhammad Umer, Christian Micheloni
This paper reviews the AIM 2020 challenge on efficient single image super-resolution with focus on the proposed solutions and results.
no code implementations • 26 Aug 2020 • Wenqian Dong, Jie Liu, Zhen Xie, Dong Li
Evaluating on 20,480 input problems, we show that Smartfluidnet achieves 1.46x and 590x speedups compared with a state-of-the-art neural network model and the original fluid simulation, respectively, on an NVIDIA Titan X Pascal GPU, while providing better simulation quality than the state-of-the-art model.
1 code implementation • 16 Aug 2020 • Shengyu Zhang, Ziqi Tan, Jin Yu, Zhou Zhao, Kun Kuang, Jie Liu, Jingren Zhou, Hongxia Yang, Fei Wu
Then, based on the aspects of the video-associated product, we perform knowledge-enhanced spatial-temporal inference on those graphs for capturing the dynamic change of fine-grained product-part characteristics.
no code implementations • ACL 2020 • Liting Liu, Jie Liu, Wenzheng Zhang, Ziming Chi, Wenxuan Shi, YaLou Huang
To deal with this task, we devise a data-driven global Skill-Aware Multi-Attention generation model, named SAMA.
1 code implementation • ACL 2020 • Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, Ziming Chi
The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously.
no code implementations • CVPR 2020 • Jie Liu, Wenjie Zhang, Yuting Tang, Jie Tang, Gangshan Wu
To maximize the power of the RFA framework, we further propose an enhanced spatial attention (ESA) block to make the residual features more focused on critical spatial contents.
no code implementations • 24 May 2020 • Zhongxu Hu, Yang Xing, Chen Lv, Peng Hang, Jie Liu
This paper proposes a novel Bernoulli heatmap for head pose estimation from a single RGB image.
no code implementations • 13 May 2020 • Forrest Sheng Bao, Youbiao He, Jie Liu, Yuanfang Chen, Qian Li, Christina R. Zhang, Lei Han, Baoli Zhu, Yaorong Ge, Shi Chen, Ming Xu, Liu Ouyang
COVID-19 is sweeping the world with deadly consequences.
no code implementations • 12 May 2020 • Sinong Geng, Zhaobin Kuang, Jie Liu, Stephen Wright, David Page
We study the $L_1$-regularized maximum likelihood estimator/estimation (MLE) problem for discrete Markov random fields (MRFs), where efficient and scalable learning requires both sparse regularization and approximate inference.
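Schematically, the estimator being studied has the familiar form below, where $\ell(\theta)$ is the MRF log-likelihood (itself requiring approximate inference to evaluate) and $\lambda$ controls sparsity:

```latex
\hat{\theta} \;=\; \operatorname*{arg\,min}_{\theta}\; \Bigl\{ -\ell(\theta) \;+\; \lambda \,\lVert \theta \rVert_1 \Bigr\}
```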
no code implementations • 2 May 2020 • Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu
More importantly, we discuss four kinds of basic approaches (statistical machine translation based, neural machine translation based, classification based, and language model based), six commonly applied performance-boosting techniques for GEC systems, and two data augmentation methods.
1 code implementation • ACL 2021 • He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, Jie Liu, Ming Li
Experimental results show that the Chinese GPT2 can generate better essay endings with \eop.
no code implementations • 1 Apr 2020 • Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji
With the booming of next generation sequencing technology and its implementation in clinical practice and life science research, the need for faster and more efficient data analysis methods becomes pressing in the field of sequencing.
no code implementations • 30 Mar 2020 • Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, Hongxia Yang
We pretrain the model with three pretraining tasks, including masked segment modeling (MSM), masked region modeling (MRM) and image-text matching (ITM); and finetune the model on a series of vision-and-language downstream tasks.
no code implementations • 3 Mar 2020 • Jie Liu, Jiawen Liu, Zhen Xie, Dong Li
How to accurately and efficiently label data on a mobile device is critical for the success of training machine learning models on mobile devices.
no code implementations • 8 Feb 2020 • Qian Liu, Tao Wang, Jie Liu, Yang Guan, Qi Bu, Longfei Yang
In order to learn powerful features of videos, we propose a Collaborative Temporal Modeling (CTM) block (Figure 1) to learn temporal information for action recognition.
no code implementations • 7 Feb 2020 • Qian Liu, Dongyang Cai, Jie Liu, Nan Ding, Tao Wang
The standard non-local (NL) module is effective in aggregating frame-level features for video classification but exhibits low parameter efficiency and high computational cost.
no code implementations • 1 Dec 2019 • Tinghao Zhang, Jing Luo, Ping Chen, Jie Liu
At high latitudes, many cities adopt a centralized heating system to improve the energy generation efficiency and to reduce pollution.
no code implementations • 1 Dec 2019 • Tinghao Zhang, Jingxu Li, Jingfeng Li, Ling Wang, Feng Li, Jie Liu
The greenhouse environment is key to influencing crop production.
no code implementations • 18 Nov 2019 • XiaoQian Li, Jie Liu, Shuwu Zhang, GuiXuan Zhang
At present, multi-oriented text detection methods based on deep neural networks have achieved promising performance on various benchmarks.
2 code implementations • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kelvin K. W. Ng, Jie Liu, James Cheng
In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error.
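A small NumPy sketch of that decomposition for a vector and its quantized reconstruction; the exact definitions and normalizations in the paper may differ:

```python
import numpy as np

def quantization_error_decomposition(x, q):
    """Split the error between x and its quantized version q into norm and direction parts."""
    norm_error = abs(np.linalg.norm(x) - np.linalg.norm(q))
    cosine = np.dot(x, q) / (np.linalg.norm(x) * np.linalg.norm(q))
    direction_error = 1.0 - cosine               # 0 when q points exactly along x
    return norm_error, direction_error

x = np.array([3.0, 4.0])
q = np.array([2.9, 4.2])                          # hypothetical quantized vector
print(quantization_error_decomposition(x, q))
```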
no code implementations • 12 Nov 2019 • Liangyi Kang, Jie Liu, Lingqiao Liu, Qinfeng Shi, Dan Ye
Thus, we propose to create auxiliary fact representations from charge definitions to augment fact descriptions representation.
no code implementations • 30 Sep 2019 • Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, Ming-Chang Yang
Then we explain the good performance of ip-NSW as matching the norm bias of the MIPS problem: large-norm items have large in-degrees in the ip-NSW proximity graph, and a walk on the graph spends the majority of its computation on these items, thus effectively avoiding unnecessary computation on small-norm items.
no code implementations • 20 Aug 2019 • Yuan Liu, Zhongwei Cheng, Jie Liu, Bourhan Yassin, Zhe Nan, Jiebo Luo
Saving rainforests is key to halting adverse climate change.
1 code implementation • 1 Jul 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
In this work, we alleviate the NAS search cost down to less than 3 hours, while achieving state-of-the-art image classification results under mobile latency constraints.
no code implementations • 10 Jun 2019 • Jie Liu, Jiawen Liu, Wan Du, Dong Li
In this paper, we perform a variety of experiments on a representative mobile device (the NVIDIA TX2) to study the performance of training deep learning models.
no code implementations • 10 May 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the latency constraint of a mobile device?
5 code implementations • 5 Apr 2019 • Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, Diana Marculescu
Can we automatically design a Convolutional Network (ConvNet) with the highest image classification accuracy under the runtime constraint of a mobile device?
Ranked #787 on Image Classification on ImageNet
no code implementations • 23 Mar 2019 • Zhixin Zhang, Xudong Chen, Jie Liu, Kaibo Zhou
General detectors follow the pipeline that feature maps extracted from ConvNets are shared between classification and regression tasks.
1 code implementation • 7 Jan 2019 • Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, Tong Zhang
In this work, we propose to train CNNs from images annotated with multiple tags, to enhance the quality of visual representation of the trained CNN model.
1 code implementation • 22 Oct 2018 • Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng
Recently, locality sensitive hashing (LSH) was shown to be effective for MIPS and several algorithms including $L_2$-ALSH, Sign-ALSH and Simple-LSH have been proposed.
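For illustration, the asymmetric transform used by Simple-LSH to reduce MIPS to cosine similarity search, sketched in NumPy (data vectors are first scaled by the dataset's maximum norm so that all norms are at most 1):

```python
import numpy as np

def simple_lsh_item(x, max_norm):
    """Append sqrt(1 - ||x||^2) to the scaled item vector."""
    x = np.asarray(x, dtype=float) / max_norm
    return np.append(x, np.sqrt(max(0.0, 1.0 - np.dot(x, x))))

def simple_lsh_query(q):
    """Queries get a zero appended, so inner products are preserved up to a constant scale."""
    return np.append(np.asarray(q, dtype=float), 0.0)

items = np.array([[1.0, 2.0], [0.5, -0.5]])
max_norm = np.max(np.linalg.norm(items, axis=1))
p = np.array([simple_lsh_item(x, max_norm) for x in items])
print(p @ simple_lsh_query([0.3, 0.7]))           # proportional to the original inner products
```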
no code implementations • 3 Oct 2018 • Andrey Ignatov, Radu Timofte, Thang Van Vu, Tung Minh Luu, Trung X. Pham, Cao Van Nguyen, Yongwoo Kim, Jae-Seok Choi, Munchurl Kim, Jie Huang, Jiewen Ran, Chen Xing, Xingguang Zhou, Pengfei Zhu, Mingrui Geng, Yawei Li, Eirikur Agustsson, Shuhang Gu, Luc van Gool, Etienne de Stoutz, Nikolay Kobyshev, Kehui Nie, Yan Zhao, Gen Li, Tong Tong, Qinquan Gao, Liu Hanwen, Pablo Navarrete Michelini, Zhu Dan, Hu Fengshuo, Zheng Hui, Xiumei Wang, Lirui Deng, Rang Meng, Jinghui Qin, Yukai Shi, Wushao Wen, Liang Lin, Ruicheng Feng, Shixiang Wu, Chao Dong, Yu Qiao, Subeesh Vasu, Nimisha Thekke Madam, Praveen Kandula, A. N. Rajagopalan, Jie Liu, Cheolkon Jung
This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones.
no code implementations • 14 Jul 2018 • Jie Liu, Yu Rong, Martin Takac, Junzhou Huang
This paper proposes a framework of L-BFGS based on the (approximate) second-order information with stochastic batches, as a novel approach to finite-sum minimization problems.
no code implementations • 3 Jul 2018 • Jie Liu, Cheng Sun, Xiang Xu, Baomin Xu, Shuangyuan Yu
In this paper we propose a novel Spatial and Temporal Features Mixture Model (STFMM) based on convolutional neural network (CNN) and recurrent neural network (RNN), in which the human body is split into $N$ parts in horizontal direction so that we can obtain more specific features.
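A trivial NumPy helper illustrating the horizontal split of a person image into $N$ parts; the part count is arbitrary here, and the CNN/RNN branches of STFMM that consume the parts are not shown:

```python
import numpy as np

def split_horizontal(image, n_parts=4):
    """Cut an (H, W, C) image into n_parts horizontal stripes along the height axis."""
    return np.array_split(image, n_parts, axis=0)

image = np.zeros((128, 64, 3))
parts = split_horizontal(image, n_parts=4)
print([p.shape for p in parts])      # four (32, 64, 3) stripes
```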
no code implementations • 10 Apr 2018 • Yingqi Qu, Jie Liu, Liangyi Kang, Qinfeng Shi, Dan Ye
To preserve more original information, we propose an attentive recurrent neural network with similarity matrix based convolutional neural network (AR-SMCNN) model, which is able to capture comprehensive hierarchical information utilizing the advantages of both RNN and CNN.
no code implementations • 27 Mar 2018 • Jie Liu, Hao Zheng
Especially as the size of the MRF increases, both the numerical performance and the computational cost of our approach remain consistently satisfactory, whereas Laplace approximation deteriorates and pseudolikelihood becomes computationally unbearable.
no code implementations • 20 May 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses.
no code implementations • ICML 2017 • Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Takáč
In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to finite-sum minimization problems.
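A compact NumPy sketch of the SARAH inner loop on a toy least-squares objective, following the recursive gradient estimate $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}$; the step size, loop lengths, and data here are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)      # least-squares data

def grad_i(w, i):
    """Gradient of the i-th component 0.5 * (a_i . w - b_i)^2."""
    return (A[i] @ w - b[i]) * A[i]

def sarah_epoch(w, lr=0.02, inner_steps=50):
    v = A.T @ (A @ w - b) / len(b)                          # outer step: full gradient
    w_prev, w = w, w - lr * v
    for _ in range(inner_steps):
        i = rng.integers(len(b))
        v = grad_i(w, i) - grad_i(w_prev, i) + v            # recursive gradient estimate
        w_prev, w = w, w - lr * v
    return w

w = np.zeros(5)
for _ in range(10):
    w = sarah_epoch(w)
print(np.linalg.norm(A.T @ (A @ w - b)) / len(b))           # gradient norm shrinks
```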
no code implementations • 16 Dec 2016 • Jie Liu, Martin Takac
We propose a projected semi-stochastic gradient descent method with mini-batch for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD).
1 code implementation • 21 Jun 2016 • Chen Xing, Wei Wu, Yu Wu, Jie Liu, YaLou Huang, Ming Zhou, Wei-Ying Ma
We consider incorporating topic information into the sequence-to-sequence framework to generate informative and interesting responses for chatbots.
no code implementations • 16 Apr 2015 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.
no code implementations • 17 Oct 2014 • Jakub Konečný, Jie Liu, Peter Richtárik, Martin Takáč
Our method first performs a deterministic step (computation of the gradient of the objective function at the starting point), followed by a large number of stochastic steps.
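A minimal NumPy sketch of that structure (one full-gradient computation followed by many cheap variance-reduced stochastic steps), written in the generic SVRG/S2GD style; the random inner-loop length and other details of the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)

def grad_i(w, i):
    """Gradient of the i-th component 0.5 * (a_i . w - b_i)^2."""
    return (A[i] @ w - b[i]) * A[i]

def semi_stochastic_epoch(w_start, lr=0.02, inner_steps=200):
    g_full = A.T @ (A @ w_start - b) / len(b)    # deterministic step: full gradient once
    w = w_start.copy()
    for _ in range(inner_steps):                 # many stochastic steps
        i = rng.integers(len(b))
        v = grad_i(w, i) - grad_i(w_start, i) + g_full
        w -= lr * v
    return w

w = np.zeros(10)
for _ in range(15):
    w = semi_stochastic_epoch(w)
print(np.linalg.norm(A.T @ (A @ w - b)) / len(b))
```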
no code implementations • BioMedical Engineering OnLine 2014 • Huifang Huang, Jie Liu, Qiang Zhu, Ruiping Wang, Guangshu Hu
This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods.
Ranked #2 on Heartbeat Classification on MIT-BIH AR
no code implementations • 20 Mar 2014 • Yunpeng Li, Ya Li, Jie Liu, Yong Deng
The results of defuzzification at the first step do not coincide with the results of defuzzification at the final step. It seems that the better alternative is to defuzzify at the final step in fuzzy DEMATEL.
no code implementations • NeurIPS 2013 • Jie Liu, David Page
In large-scale applications of undirected graphical models, such as social networks and biological networks, similar patterns occur frequently and give rise to similar parameters.
no code implementations • 23 Nov 2013 • Yunpeng Li, Jie Liu, Yong Deng
In this paper, we present an illustration of the history of Artificial Intelligence (AI) with a statistical analysis of publications since 1940.