1 code implementation • 4 Dec 2023 • Lei Wang, Jiabang He, Shenshen Li, Ning Liu, Ee-Peng Lim
Fine-grained object attributes and behaviors that are non-existent in the image may still be generated, yet go unmeasured by current evaluation methods.
no code implementations • 13 Nov 2023 • Fanlong Zeng, Wensheng Gan, Yongheng Wang, Ning Liu, Philip S. Yu
Understanding and assessing this intelligence is a complex task.
1 code implementation • 22 Oct 2023 • Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity.
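As a quick illustration of the transparency this line of work builds on (an illustrative sketch only, not the paper's method; the scikit-learn usage and dataset choice are my own), a fitted decision tree can be printed as explicit if/then rules:

```python
# Illustrative sketch: a decision tree's "transparent inner structure"
# is visible by exporting its root-to-leaf paths as readable rules.
# Dataset and depth are arbitrary choices for demonstration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Each path from root to leaf is a human-readable rule.
rules = export_text(tree, feature_names=data.feature_names)
print(rules)
```

Each printed branch corresponds to one interpretable rule, which is the property that rule-based models trade against the expressivity of black-box networks.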
1 code implementation • 23 Aug 2023 • Zhenyu Li, Sunqi Fan, Yu Gu, Xiuxing Li, Zhichao Duan, Bowen Dong, Ning Liu, Jianyong Wang
Knowledge base question answering (KBQA) is a critical yet challenging task due to the vast number of entities within knowledge bases and the diversity of natural language questions posed by users.
no code implementations • 15 Aug 2023 • Wenzhuo Zheng, Junhao Zhao, Xiaohong Liu, Yongyang Pan, Zhenghao Gan, Haozhe Han, Ning Liu
Our work addresses the problem of combining parametric face models with multi-view 3D face reconstruction, and explores the implementation of a FLAME-based multi-view training and testing framework to contribute to the field of 3D face reconstruction.
1 code implementation • 9 Aug 2023 • Tengchuan Kou, Xiaohong Liu, Wei Sun, Jun Jia, Xiongkuo Min, Guangtao Zhai, Ning Liu
Indeed, most existing quality assessment models evaluate video quality as a whole without specifically taking the subjective experience of video stability into consideration.
1 code implementation • 25 Jul 2023 • Hongzuo Xu, Yijie Wang, Guansong Pang, Songlei Jian, Ning Liu, Yongjun Wang
Semi-supervised Anomaly Detection • Supervised Anomaly Detection
1 code implementation • 5 Jun 2023 • Jiabang He, Yi Hu, Lei Wang, Xing Xu, Ning Liu, Hui Liu, Heng Tao Shen
Results from the experiments demonstrate that there is a significant performance gap between the in-distribution (ID) and OOD settings for document images, and that fine-grained analysis of distribution shifts can reveal the brittle nature of existing pre-trained VDU models and OOD generalization algorithms.
no code implementations • 2 Jun 2023 • Zhuo Wang, Rongzhen Li, Bowen Dong, Jie Wang, Xiuxing Li, Ning Liu, Chenhui Mao, Wei Zhang, Liling Dong, Jing Gao, Jianyong Wang
In this paper, we explore the potential of LLMs such as GPT-4 to outperform traditional AI tools in dementia diagnosis.
2 code implementations • 25 May 2023 • Hongzuo Xu, Yijie Wang, Juhui Wei, Songlei Jian, Yizhou Li, Ning Liu
Due to the unsupervised nature of anomaly detection, the key to fueling deep models is finding supervisory signals.
no code implementations • 5 May 2023 • Lei Wang, Yi Hu, Jiabang He, Xing Xu, Ning Liu, Hui Liu, Heng Tao Shen
To address these issues, we propose a novel method termed \emph{T-SciQ} that aims at teaching science question answering with LLM signals.
no code implementations • 6 Apr 2023 • Zhichao Duan, Xiuxing Li, Zhengyan Zhang, Zhenyu Li, Ning Liu, Jianyong Wang
As a popular topic in natural language processing, extractive question answering (extractive QA) has gained extensive attention in the past few years.
no code implementations • 23 Mar 2023 • Mingze Wei, Yaomin Huang, Zhiyuan Xu, Ning Liu, Zhengping Che, Xinyu Zhang, Chaomin Shen, Feifei Feng, Chun Shan, Jian Tang
Our work significantly outperforms the state-of-the-art for three-finger robotic hands.
no code implementations • 23 Mar 2023 • Yaomin Huang, Ning Liu, Zhengping Che, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Guixu Zhang, Xinmei Liu, Feifei Feng, Jian Tang
CP$^3$ is elaborately designed to leverage the characteristics of point clouds and PNNs in order to enable 2D channel pruning methods for PNNs.
1 code implementation • ICCV 2023 • Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, Heng Tao Shen
To this end, we propose a simple but effective in-context learning framework called ICL-D3IE, which enables LLMs to perform DIE with different types of demonstration examples.
no code implementations • 9 Mar 2023 • Ning Liu, Benjamin Grimmer
We consider feasibility and constrained optimization problems defined over smooth and/or strongly convex sets.
no code implementations • CVPR 2023 • Yaomin Huang, Ning Liu, Zhengping Che, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Guixu Zhang, Xinmei Liu, Feifei Feng, Jian Tang
Directly applying 2D CNN channel pruning methods to PNNs undermines the performance of PNNs because of the different representations of 2D images and 3D point clouds, as well as the disparity in network architectures.
no code implementations • CVPR 2023 • Yichen Zhu, Qiqi Zhou, Ning Liu, Zhiyuan Xu, Zhicai Ou, Xiaofeng Mou, Jian Tang
Unlike existing works that struggle to balance the trade-off between inference speed and SOD performance, in this paper, we propose a novel Scale-aware Knowledge Distillation (ScaleKD), which transfers knowledge of a complex teacher model to a compact student model.
no code implementations • 29 Dec 2022 • Ning Liu, Yue Yu, Huaiqian You, Neeraj Tatikola
Neural operators, which emerge as implicit solution operators of hidden governing equations, have recently become popular tools for learning responses of complex real-world physical systems.
1 code implementation • 20 Dec 2022 • Zhenyu Li, Xiuxing Li, Zhichao Duan, Bowen Dong, Ning Liu, Jianyong Wang
To bridge the gap between the programs and natural language sentences, we design a powerful "NL-Generator" module to generate natural language sentences with complex logic from these programs.
1 code implementation • 27 Nov 2022 • Lei Wang, Jiabang He, Xing Xu, Ning Liu, Hui Liu
In this paper, we propose a new model architecture with alignment-enriched tuning (dubbed AETNet) upon pre-trained document image models, to adapt downstream tasks with the joint task-specific supervised and alignment-aware contrastive objective.
no code implementations • 29 Aug 2022 • Rui Ma, Ning Liu, Jingsong Yuan, Huafeng Yang, Jiandong Zhang
Traditional recommendation systems mainly focus on modeling user interests.
no code implementations • 26 Aug 2022 • Lichen Jia, Bowen Tang, Chenggang Wu, Zhe Wang, Zihan Jiang, Yuanming Lai, Yan Kang, Ning Liu, Jingfeng Zhang
The binary code similarity detection (BCSD) method measures the similarity of two binary executables.
1 code implementation • 12 Jul 2022 • Xiuxing Li, Zhenyu Li, Zhengyan Zhang, Ning Liu, Haitao Yuan, Wei Zhang, Zhiyuan Liu, Jianyong Wang
In this paper, we endeavor to solve the problem of few-shot entity linking, which only requires a minimal amount of in-domain labeled data and is more practical in real situations.
no code implementations • 21 Jun 2022 • Shaoyi Huang, Ning Liu, Yueying Liang, Hongwu Peng, Hongjia Li, Dongkuan Xu, Mimi Xie, Caiwen Ding
On MRPC, we obtain a 4.6 higher score than the SOTA at the same overall pruning ratio of 0.5.
no code implementations • 9 Jun 2022 • Yu Fan, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Tao Wang, Ning Liu, Guangtao Zhai
The point cloud is one of the most widely used digital formats for 3D models, and its visual quality is quite sensitive to distortions such as downsampling, noise, and compression.
no code implementations • ACL 2022 • Sanjeev Kumar Karn, Ning Liu, Hinrich Schuetze, Oladimeji Farri
A cascade of tasks is required to automatically generate an abstractive summary of the typical information-rich radiology report.
no code implementations • 4 Dec 2021 • Ziqiang Wang, Yimao Sun, Qun Wan, Lei Xie, Ning Liu
Emitter localization is widely applied in the military and civilian fields.
1 code implementation • NeurIPS 2021 • Geng Yuan, Xiaolong Ma, Wei Niu, Zhengang Li, Zhenglun Kong, Ning Liu, Yifan Gong, Zheng Zhan, Chaoyang He, Qing Jin, Siyue Wang, Minghai Qin, Bin Ren, Yanzhi Wang, Sijia Liu, Xue Lin
Systematic evaluations of accuracy, training speed, and memory footprint are conducted, where the proposed MEST framework consistently outperforms representative SOTA works.
no code implementations • 21 Oct 2021 • Wenzheng Hu, Zhengping Che, Ning Liu, Mingyang Li, Jian Tang, ChangShui Zhang, Jianqiang Wang
Deep convolutional neural networks are shown to be overkill with high parametric and computational redundancy in many application scenarios, and an increasing number of works have explored model pruning to obtain lightweight and efficient networks.
2 code implementations • NeurIPS 2021 • Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity.
2 code implementations • NeurIPS 2021 • Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, Xiaohan Chen, Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang
Based on our analysis, we summarize a guideline for parameter settings with regard to specific architecture characteristics, which we hope will catalyze research progress on the topic of the lottery ticket hypothesis.
no code implementations • 16 Jun 2021 • Geng Yuan, Zhiheng Liao, Xiaolong Ma, Yuxuan Cai, Zhenglun Kong, Xuan Shen, Jingyan Fu, Zhengang Li, Chengming Zhang, Hongwu Peng, Ning Liu, Ao Ren, Jinhui Wang, Yanzhi Wang
More importantly, our method does not require extra hardware cost compared to the traditional two-column mapping scheme.
1 code implementation • 19 Apr 2021 • Hongzuo Xu, Yijie Wang, Songlei Jian, Zhenyu Huang, Ning Liu, Yongjun Wang, Fei Li
We obtain an optimal attention-guided embedding space with expanded high-level information and rich semantics, and thus outlying behaviors of the queried outlier can be better unfolded.
no code implementations • 13 Apr 2021 • Ning Liu, Songlei Jian, Dongsheng Li, Yiming Zhang, Zhiquan Lai, Hongzuo Xu
Graph neural networks (GNNs) have been proven mature enough for handling graph-structured data in node-level graph representation learning tasks.
no code implementations • 22 Mar 2021 • Wei Zhang, Yunfeng Zhang, Ning Liu, Kai Ren, Pengfei Wang
This paper studies how to improve the generalization performance and learning speed of the navigation agents trained with deep reinforcement learning (DRL).
no code implementations • 19 Feb 2021 • Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang
In deep model compression, the recent finding "Lottery Ticket Hypothesis" (LTH) (Frankle & Carbin, 2018) pointed out that there could exist a winning ticket (i.e., a properly pruned sub-network together with the original weight initialization) that can achieve performance competitive with the original dense network.
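The winning-ticket idea described above can be sketched as one round of magnitude pruning followed by rewinding the surviving weights to their initialization. This is an illustrative toy with hypothetical tensors and prune ratio, not the paper's networks or training loop:

```python
# Toy sketch of one Lottery Ticket Hypothesis pruning round
# (Frankle & Carbin, 2018): magnitude-prune a "trained" weight tensor,
# then rewind surviving weights to their original initialization.
import numpy as np

def find_winning_ticket(w_init, w_trained, prune_ratio=0.8):
    """Return (mask, rewound_weights) for one magnitude-pruning round."""
    flat = np.abs(w_trained).ravel()
    k = int(len(flat) * prune_ratio)            # number of weights to remove
    threshold = np.sort(flat)[k]                # magnitude cutoff
    mask = (np.abs(w_trained) >= threshold).astype(w_trained.dtype)
    return mask, w_init * mask                  # rewind to init, keep sparsity

rng = np.random.default_rng(0)
w0 = rng.normal(size=(64, 64))                        # hypothetical init
wt = w0 + rng.normal(scale=0.1, size=(64, 64))        # stand-in "trained" weights
mask, ticket = find_winning_ticket(w0, wt, prune_ratio=0.8)
print(mask.mean())  # ≈ 0.2: roughly 20% of weights survive
```

In the full procedure this round is iterated (train, prune, rewind) until the target sparsity is reached; here only the prune-and-rewind arithmetic is shown.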
no code implementations • 1 Jan 2021 • Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity.
no code implementations • ICCV 2021 • Ye Chen, Jinxian Liu, Bingbing Ni, Hang Wang, Jiancheng Yang, Ning Liu, Teng Li, Qi Tian
Then the destroyed shape and the normal shape are sent into a point cloud network to get representations, which are employed to segment points that belong to distorted parts and further reconstruct them to restore the shape to normal.
no code implementations • 4 Nov 2020 • Yushuo Guan, Ning Liu, Pengyu Zhao, Zhengping Che, Kaigui Bian, Yanzhi Wang, Jian Tang
The convolutional neural network has achieved great success in computer vision tasks, despite the large computation overhead that hinders efficient deployment.
no code implementations • 20 Oct 2020 • Wei Zhang, Ning Liu, Yunfeng Zhang
Deep reinforcement learning (DRL) demonstrates great potential in mapless navigation domain.
no code implementations • 25 Apr 2020 • Ning Liu, Zhanchun Tu
We investigate the momentum distribution and dynamical structure factor of a weakly interacting Bose gas under a time-dependent periodic modulation, within the Bogoliubov treatment.
Quantum Gases
1 code implementation • 10 Dec 2019 • Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
In this paper, we propose a new hierarchical rule-based model for classification tasks, named Concept Rule Sets (CRS), which has both a strong expressive ability and a transparent inner structure.
no code implementations • 4 Nov 2019 • Hongjia Li, Sheng Lin, Ning Liu, Caiwen Ding, Yanzhi Wang
Deep neural networks (DNNs) have expanded into medical fields and triggered a revolution in some medical applications by extracting complex features and achieving high accuracy and performance.
no code implementations • 29 Sep 2019 • Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang
To achieve real-time, highly efficient implementations on FPGA, we present the detailed hardware implementation of block circulant matrices on CONV layers, and develop an efficient processing element (PE) structure supporting heterogeneous weight quantization, CONV dataflow and pipelining techniques, design optimization, and a template-based automatic synthesis framework to optimally exploit hardware resources.
2 code implementations • CVPR 2020 • Lifeng Huang, Chengying Gao, Yuyin Zhou, Cihang Xie, Alan Yuille, Changqing Zou, Ning Liu
In this paper, we study physical adversarial attacks on object detectors in the wild.
no code implementations • 9 Aug 2019 • Qi He, Qijun Zhao, Ning Liu, Peng Chen, Zhihe Zhang, Rong Hou
We will release our database and model in the public domain to promote research on automatic animal identification and, in particular, on techniques for protecting red pandas.
no code implementations • 22 Jul 2019 • Ruizhe Cai, Ao Ren, Olivia Chen, Ning Liu, Caiwen Ding, Xuehai Qian, Jie Han, Wenhui Luo, Nobuyuki Yoshikawa, Yanzhi Wang
Further, prior work has investigated the application of SC to DNNs and illustrated its suitability, as SC is more compatible with approximate computation.
no code implementations • 6 Jul 2019 • Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye
This work proposes AutoCompress, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-the-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique, which has an underlying incompatibility with the target pruning problem.
no code implementations • 2019 IEEE 35th International Conference on Data Engineering (ICDE) 2019 • Ning Liu, Pan Lu, Wei Zhang, Jianyong Wang
To address the above issues, we propose novel Knowledge-aware Deep Dual Networks (K-DDN) for the text-based mortality prediction task.
1 code implementation • CVPR 2019 • Ning Liu, Yongchao Long, Changqing Zou, Qun Niu, Li Pan, Hefeng Wu
We propose an attention-injective deformable convolutional network called ADCrowdNet for crowd understanding that can address the accuracy degradation problem of highly congested noisy scenes.
Ranked #2 on Crowd Counting on TRANCOS
1 code implementation • 29 Jul 2018 • Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Xiaolong Ma, Ning Liu, Linfeng Zhang, Jian Tang, Kaisheng Ma, Xue Lin, Makan Fardad, Yanzhi Wang
Without loss of accuracy on the AlexNet model, we achieve 2.58X and 3.65X average measured speedup on two GPUs, clearly outperforming the prior work.
no code implementations • 28 Mar 2018 • Caiwen Ding, Ao Ren, Geng Yuan, Xiaolong Ma, Jiayu Li, Ning Liu, Bo Yuan, Yanzhi Wang
For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively, using the SWM-based framework, compared with the baseline of the IBM TrueNorth processor under the same accuracy constraints on the MNIST, SVHN, and CIFAR-10 datasets.
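The structured weight matrix (SWM) compression behind this line of work represents weight matrices as grids of b×b circulant blocks, so each block's matrix-vector product collapses to an FFT-based circular convolution. Below is a minimal numpy sketch of that arithmetic only (not the FPGA design; block count, block size, and values are arbitrary):

```python
# Sketch of block-circulant weight compression: each b×b block is
# defined by a single length-b vector, and its matvec is a circular
# convolution computable in O(b log b) via the FFT.
import numpy as np

def circulant(c):
    """Dense b×b circulant matrix whose first column is c (for checking)."""
    return np.stack([np.roll(c, i) for i in range(len(c))], axis=1)

def block_circulant_matvec(blocks, x, b):
    """y = Wx where W consists of circulant b×b blocks.
    blocks[i][j] is the defining vector of block (i, j)."""
    xs = x.reshape(-1, b)
    y = np.zeros((len(blocks), b))
    for i, row in enumerate(blocks):
        for j, c in enumerate(row):
            # circular convolution via FFT instead of a dense b×b matvec
            y[i] += np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(xs[j])))
    return y.ravel()

rng = np.random.default_rng(1)
b = 4
blocks = [[rng.normal(size=b) for _ in range(2)] for _ in range(2)]
x = rng.normal(size=2 * b)
dense = np.block([[circulant(c) for c in row] for row in blocks])
assert np.allclose(block_circulant_matvec(blocks, x, b), dense @ x)
```

The storage drops from b² to b parameters per block, which is the source of the compression; the hardware work above maps the per-block FFT pipeline onto FPGA processing elements.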
no code implementations • 2 Feb 2018 • Ruizhe Cai, Ao Ren, Ning Liu, Caiwen Ding, Luhao Wang, Xuehai Qian, Massoud Pedram, Yanzhi Wang
In this paper, we propose VIBNN, an FPGA-based hardware accelerator design for variational inference on BNNs.
no code implementations • 28 Jan 2018 • Ning Liu, Ying Liu, Brent Logan, Zhiyuan Xu, Jian Tang, Yanzhi Wang
This paper presents the first deep reinforcement learning (DRL) framework to estimate the optimal Dynamic Treatment Regimes from observational medical data.
no code implementations • 13 Dec 2017 • Sheng Lin, Ning Liu, Mahdi Nazemi, Hongjia Li, Caiwen Ding, Yanzhi Wang, Massoud Pedram
The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage.
no code implementations • 29 Aug 2017 • Caiwen Ding, Siyu Liao, Yanzhi Wang, Zhe Li, Ning Liu, Youwei Zhuo, Chao Wang, Xuehai Qian, Yu Bai, Geng Yuan, Xiaolong Ma, Yi-Peng Zhang, Jian Tang, Qinru Qiu, Xue Lin, Bo Yuan
As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy.
no code implementations • 13 Mar 2017 • Ning Liu, Zhe Li, Zhiyuan Xu, Jielong Xu, Sheng Lin, Qinru Qiu, Jian Tang, Yanzhi Wang
Automatic decision-making approaches, such as reinforcement learning (RL), have been applied to (partially) solve the resource allocation problem adaptively in cloud computing systems.