Search Results for author: Jie Lin

Found 40 papers, 7 papers with code

Mask-Guided Divergence Loss Improves the Generalization and Robustness of Deep Neural Network

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

In adversarial training, the maximum improvement is $1.68\%$ on natural data, $4.03\%$ under the FGSM attack and $2.65\%$ under the PGD attack.
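The FGSM attack referenced in the numbers above is simple enough to sketch. The toy logistic-regression model, the `fgsm_attack` helper, and all values below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM against a toy logistic-regression model (hypothetical).
    For binary cross-entropy, the input-gradient is (sigmoid(w.x+b) - y) * w,
    so the attack moves eps along the sign of that gradient."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    grad_x = (p - y) * w                      # d(BCE)/dx for this model
    return x + eps * np.sign(grad_x)          # signed-gradient perturbation

# Usage: a point classified as class 1 is nudged toward class 0.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                      # logit w.x+b = 1.5
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=0.3)
```

PGD, also mentioned above, is essentially this step applied iteratively with a projection back into the eps-ball.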

FACM: Correct the Output of Deep Neural Network with Middle Layers Features against Adversarial Samples

no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

In strong adversarial attacks against deep neural networks (DNNs), the output of a DNN is misclassified if and only if the last feature layer of the DNN is completely destroyed by the adversarial samples; our studies found that under these attacks, the middle feature layers of the DNN can still extract effective features of the original, correct category.

Long-tailed Recognition by Learning from Latent Categories

no code implementations • 2 Jun 2022 • Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin

Previous long-tailed recognition methods commonly focus on data augmentation or re-balancing strategies that give more attention to the tail classes during model training.

Data Augmentation

OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization

1 code implementation • 23 May 2022 • Peng Hu, Xi Peng, Hongyuan Zhu, Mohamed M. Sabry Aly, Jie Lin

Numerous network compression methods such as pruning and quantization have been proposed to reduce the model size significantly; the key is to find a suitable compression allocation (e.g., pruning sparsity and quantization codebook) for each layer.

Quantization
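The per-layer "compression allocation" the snippet describes can be sketched (very loosely) as magnitude pruning to a target sparsity followed by uniform quantization. The `prune_and_quantize` interface and all constants are illustrative assumptions, not the paper's one-shot OPQ procedure:

```python
import numpy as np

def prune_and_quantize(w, sparsity, n_bits):
    """Illustrative per-layer compression (a sketch, not the OPQ algorithm):
    drop the smallest-magnitude weights to reach `sparsity`, then snap the
    survivors to a symmetric uniform codebook with 2**n_bits levels."""
    flat = np.sort(np.abs(w).ravel())
    k = int(sparsity * flat.size)
    thresh = flat[k - 1] if k > 0 else -1.0
    mask = np.abs(w) > thresh                  # keep weights above the cutoff
    levels = 2 ** n_bits
    wmax = np.abs(w[mask]).max()
    step = 2 * wmax / (levels - 1)             # uniform codebook step size
    q = np.clip(np.round(w / step), -(levels // 2), levels // 2 - 1)
    wq = q * step                              # snap weights to codebook values
    return wq * mask, mask

# Usage: compress one layer to 50% sparsity at 4 bits.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
wq, mask = prune_and_quantize(w, sparsity=0.5, n_bits=4)
```

The paper's contribution is choosing these per-layer sparsities and codebooks jointly in one shot rather than hand-picking them as done here.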

Enhancing the Transferability of Adversarial Examples via a Few Queries

no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao

Due to the vulnerability of deep neural networks, the black-box attack has drawn great attention from the community.

On Representation Knowledge Distillation for Graph Neural Networks

no code implementations • 9 Nov 2021 • Chaitanya K. Joshi, Fayao Liu, Xu Xun, Jie Lin, Chuan-Sheng Foo

Knowledge distillation is a learning paradigm for boosting resource-efficient graph neural networks (GNNs) using more expressive yet cumbersome teacher models.

Computer Vision Contrastive Learning +1

Global Magnitude Pruning With Minimum Threshold Is All We Need

1 code implementation • 29 Sep 2021 • Manas Gupta, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Efe Camci, Chuan-Sheng Foo, Jie Lin

We showcase that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures.

Network Pruning
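Global magnitude pruning with a minimum threshold, as named in the title above, can be sketched as one magnitude cutoff computed over all layers jointly, with each layer forced to keep some weights. The `global_magnitude_prune` interface and `min_keep` parameter are assumptions for illustration, not taken from the paper's code:

```python
import numpy as np

def global_magnitude_prune(layers, sparsity, min_keep=1):
    """Sketch of global magnitude pruning (GP): rank ALL weights across
    layers by magnitude and prune below one global cutoff, but force each
    layer to keep at least `min_keep` of its largest-magnitude weights
    (a stand-in for the paper's minimum threshold)."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in layers])
    cutoff = np.quantile(all_mags, sparsity)        # single global threshold
    masks = []
    for w in layers:
        mask = np.abs(w) >= cutoff
        if mask.sum() < min_keep:                   # minimum-threshold rescue
            top = np.argsort(np.abs(w).ravel())[-min_keep:]
            mask = np.zeros(w.size, dtype=bool)
            mask[top] = True
            mask = mask.reshape(w.shape)
        masks.append(mask)
    return masks

# Usage: a large-magnitude layer and a tiny-magnitude layer at 50% sparsity.
layers = [np.linspace(1.0, 2.0, 10), np.linspace(0.01, 0.02, 10)]
masks = global_magnitude_prune(layers, sparsity=0.5, min_keep=2)
```

Without the minimum threshold, the tiny-magnitude layer here would be pruned away entirely by the global cutoff; the rescue step is what keeps every layer functional.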

Delving into Channels: Exploring Hyperparameter Space of Channel Bit Widths with Linear Complexity

no code implementations • 29 Sep 2021 • Zhe Wang, Jie Lin, Xue Geng, Mohamed M. Sabry Aly, Vijay Chandrasekhar

We formulate the quantization of deep neural networks as a rate-distortion optimization problem, and present an ultra-fast algorithm to search the bit allocation of channels.

Quantization
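The rate-distortion view of bit allocation described above can be illustrated with a simple greedy search; the `allocate_bits` helper, the variance-based distortion model `v * 2**(-2b)`, and all parameters are assumptions for illustration, not the paper's linear-complexity algorithm:

```python
import numpy as np

def allocate_bits(channel_vars, avg_bits, b_min=2, b_max=8):
    """Greedy rate-distortion bit allocation across channels (an
    illustrative stand-in for the paper's search). Assumes quantizing a
    channel of variance v at b bits costs roughly v * 2**(-2*b) in
    distortion; each extra bit goes to the channel whose distortion
    drops the most."""
    v = np.asarray(channel_vars, dtype=float)
    bits = np.full(v.size, b_min)
    budget = int(avg_bits * v.size - bits.sum())   # extra bits to hand out
    for _ in range(budget):
        gain = v * (2.0 ** (-2.0 * bits) - 2.0 ** (-2.0 * (bits + 1)))
        gain[bits >= b_max] = -np.inf              # respect per-channel cap
        bits[np.argmax(gain)] += 1                 # spend the bit greedily
    return bits

# Usage: higher-variance channels should end up with more bits.
bits = allocate_bits([4.0, 1.0, 0.25, 0.0625], avg_bits=4)
```

This greedy loop is quadratic in the budget; the paper's point is that the same style of allocation can be searched with linear complexity.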

Few-Shot Segmentation with Global and Local Contrastive Learning

1 code implementation • 11 Aug 2021 • Weide Liu, Zhonghua Wu, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin

To this end, we first propose a prior extractor to learn the query information from the unlabeled images with our proposed global-local contrastive learning.

Contrastive Learning Semantic Segmentation

Point Discriminative Learning for Unsupervised Representation Learning on 3D Point Clouds

no code implementations • 4 Aug 2021 • Fayao Liu, Guosheng Lin, Chuan-Sheng Foo, Chaitanya K. Joshi, Jie Lin

In this work we propose a point discriminative learning method for unsupervised representation learning on 3D point clouds, which is specially designed for point cloud data and can learn local and global shape features.

3D Object Classification 3D Part Segmentation +2

PSRR-MaxpoolNMS: Pyramid Shifted MaxpoolNMS with Relationship Recovery

no code implementations • CVPR 2021 • Tianyi Zhang, Jie Lin, Peng Hu, Bin Zhao, Mohamed M. Sabry Aly

Unlike convolutions which are inherently parallel, the de-facto standard for NMS, namely GreedyNMS, cannot be easily parallelized and thus could be the performance bottleneck in convolutional object detection pipelines.

Object Detection

FFConv: Fast Factorized Convolutional Neural Network Inference on Encrypted Data

no code implementations • 6 Feb 2021 • Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li

Despite the faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.

Privacy Preserving

Dimension Reduction in Quantum Key Distribution for Continuous- and Discrete-Variable Protocols

no code implementations • 14 Jan 2021 • Twesh Upadhyaya, Thomas van Himbeeck, Jie Lin, Norbert Lütkenhaus

We develop a method to connect the infinite-dimensional description of optical continuous-variable quantum key distribution (QKD) protocols to a finite-dimensional formulation.

Dimensionality Reduction Quantum Physics

A*HAR: A New Benchmark towards Semi-supervised learning for Class-imbalanced Human Activity Recognition

1 code implementation • 13 Jan 2021 • Govind Narasimman, Kangkang Lu, Arun Raja, Chuan Sheng Foo, Mohamed Sabry Aly, Jie Lin, Vijay Chandrasekhar

Despite the vast literature on Human Activity Recognition (HAR) with wearable inertial sensor data, it is perhaps surprising that few studies investigate semi-supervised learning for HAR, particularly in the challenging scenario of class imbalance.

Human Activity Recognition

The Tsinghua University-Ma Huateng Telescopes for Survey: Overview and Performance of the System

no code implementations • 21 Dec 2020 • Ji-Cheng Zhang, Xiao-Feng Wang, Jun Mo, Gao-Bo Xi, Jie Lin, Xiao-Jun Jiang, Xiao-Ming Zhang, Wen-Xiong Li, Sheng-Yu Yan, Zhi-Hao Chen, Lei Hu, Xue Li, Wei-Li Lin, Han Lin, Cheng Miao, Li-Ming Rui, Han-Na Sai, Dan-Feng Xiang, Xing-Han Zhang

The TMTS system can cover a FoV of about 9 deg² when monitoring the sky in two bands (i.e., the SDSS g and r filters) at the same time, and a maximum FoV of ~18 deg² when the four telescopes monitor different sky areas in monochromatic-filter mode.

Instrumentation and Methods for Astrophysics

Role-Wise Data Augmentation for Knowledge Distillation

1 code implementation • ICLR 2020 • Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, Chris Pal, Hao Dong

To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.

Data Augmentation Knowledge Distillation
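The teacher-to-student knowledge transfer described above is usually implemented as a softened-distribution matching loss. The sketch below is the generic Hinton-style distillation objective, not this paper's role-wise augmentation scheme; the `kd_loss` name and temperature value are illustrative assumptions:

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Plain knowledge-distillation loss: KL divergence between the
    temperature-softened teacher and student distributions, scaled by
    T**2 so gradient magnitudes stay comparable across temperatures."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_t = softmax(np.asarray(teacher_logits) / T)
    p_s = softmax(np.asarray(student_logits) / T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    return T * T * kl.mean()

# Usage: the loss vanishes when the student matches the teacher exactly.
t = np.array([[2.0, 0.5, -1.0]])
s = np.array([[0.1, 0.2, 0.3]])
loss = kd_loss(s, t)
```

The paper's observation is that this loss only transfers knowledge through the shared input data, which is why augmenting that data role-wise matters.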

Security proof of practical quantum key distribution with detection-efficiency mismatch

no code implementations • 9 Apr 2020 • Yanbao Zhang, Patrick J. Coles, Adam Winick, Jie Lin, Norbert Lütkenhaus

Our method also shows that in the absence of efficiency mismatch in our detector model, the key rate increases if the loss due to detection inefficiency is assumed to be outside of the adversary's control, as compared to the view where for a security proof this loss is attributed to the action of the adversary.

Quantum Physics

Deeply Activated Salient Region for Instance Search

no code implementations • 1 Feb 2020 • Hui-Chu Xiao, Wan-Lei Zhao, Jie Lin, Chong-Wah Ngo

Due to the lack of a proper mechanism for locating instances and deriving feature representations, instance search is generally effective only for retrieving instances of known object categories.

Image Retrieval Instance Search

MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning

no code implementations • ICLR 2020 • Raden Mu'az Mun'im, Jie Lin, Vijay Chandrasekhar, Koichi Shinoda

(4) Fast: the number of training epochs required by MaskConvNet is observed to be close to that of training a baseline without pruning.

Network Pruning

Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression

no code implementations • 25 Sep 2019 • Zhe Wang, Jie Lin, Mohamed M. Sabry Aly, Sean I Young, Vijay Chandrasekhar, Bernd Girod

In this paper, we address an important problem of how to optimize the bit allocation of weights and activations for deep CNNs compression.

Quantization

A*3D Dataset: Towards Autonomous Driving in Challenging Environments

1 code implementation • 17 Sep 2019 • Quang-Hieu Pham, Pierre Sevestre, Ramanpreet Singh Pahwa, Huijing Zhan, Chun Ho Pang, Yuda Chen, Armin Mustafa, Vijay Chandrasekhar, Jie Lin

With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection.

3D Object Detection Autonomous Driving +3

Quantum-enhanced least-square support vector machine: simplified quantum algorithm and sparse solutions

no code implementations • 5 Aug 2019 • Jie Lin, Dan-Bo Zhang, Shuo Zhang, Xiang Wang, Tan Li, Wan-su Bao

We also incorporate kernel methods into the above quantum algorithms, using both the exponentially growing Hilbert space of qubits and the infinite dimensionality of continuous variables for quantum feature maps.

Dataflow-based Joint Quantization of Weights and Activations for Deep Neural Networks

no code implementations • 4 Jan 2019 • Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar

This paper addresses a challenging problem: how to reduce energy consumption without incurring a performance drop when deploying deep neural networks (DNNs) at the inference stage.

Quantization

TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks

no code implementations • 29 Nov 2018 • Lile Cai, Anne-Maelle Barneche, Arthur Herbout, Chuan Sheng Foo, Jie Lin, Vijay Ramaseshan Chandrasekhar, Mohamed M. Sabry

To this end, we introduce TEA-DNN, a NAS algorithm targeting multi-objective optimization of execution time, energy consumption, and classification accuracy of CNN workloads on embedded architectures.

General Classification Image Classification +1

Simple security analysis of phase-matching measurement-device-independent quantum key distribution

no code implementations • 26 Jul 2018 • Jie Lin, Norbert Lütkenhaus

Variations of phase-matching measurement-device-independent quantum key distribution (PM-MDI QKD) protocols have been investigated before, but it was recently discovered that this type of protocol (under the name of twin-field QKD) can beat the linear scaling of the repeaterless bound on the secret key rate.

Quantum Physics

End-to-End Video Classification with Knowledge Graphs

no code implementations • 6 Nov 2017 • Fang Yuan, Zhe Wang, Jie Lin, Luis Fernando D'Haro, Kim Jung Jae, Zeng Zeng, Vijay Chandrasekhar

In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework.

Classification General Classification +3

Pruning Convolutional Neural Networks for Image Instance Retrieval

no code implementations • 18 Jul 2017 • Gaurav Manek, Jie Lin, Vijay Chandrasekhar, Ling-Yu Duan, Sateesh Giduthuri, Xiao-Li Li, Tomaso Poggio

In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN).

Image Instance Retrieval

Compact Descriptors for Video Analysis: the Emerging MPEG Standard

no code implementations • 26 Apr 2017 • Ling-Yu Duan, Vijay Chandrasekhar, Shiqi Wang, Yihang Lou, Jie Lin, Yan Bai, Tiejun Huang, Alex ChiChung Kot, Wen Gao

This paper provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG).

Compression of Deep Neural Networks for Image Instance Retrieval

no code implementations • 18 Jan 2017 • Vijay Chandrasekhar, Jie Lin, Qianli Liao, Olivier Morère, Antoine Veillard, Ling-Yu Duan, Tomaso Poggio

One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or on custom hardware.

Image Instance Retrieval Model Compression +1

Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval

no code implementations • 15 Mar 2016 • Olivier Morère, Jie Lin, Antoine Veillard, Vijay Chandrasekhar, Tomaso Poggio

The first is Nested Invariance Pooling (NIP), a method inspired by i-theory, a mathematical theory for computing group-invariant transformations with feed-forward neural networks.

Image Instance Retrieval Translation

Egocentric Activity Recognition with Multimodal Fisher Vector

no code implementations • 25 Jan 2016 • Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal, Jie Lin

With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently.

Egocentric Activity Recognition

Group Invariant Deep Representations for Image Instance Retrieval

no code implementations • 9 Jan 2016 • Olivier Morère, Antoine Veillard, Jie Lin, Julie Petta, Vijay Chandrasekhar, Tomaso Poggio

Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated.

Dimensionality Reduction Image Classification +2

Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing

no code implementations • 10 Nov 2015 • Jie Lin, Olivier Morère, Julie Petta, Vijay Chandrasekhar, Antoine Veillard

Then triplet networks, a rank-learning scheme based on weight-sharing nets, are used to fine-tune the binary embedding functions to retain as much as possible of the useful metric properties of the original space.

Image Classification Image Retrieval

Co-Regularized Deep Representations for Video Summarization

no code implementations • 30 Jan 2015 • Olivier Morère, Hanlin Goh, Antoine Veillard, Vijay Chandrasekhar, Jie Lin

A comprehensive user study is conducted comparing our proposed method to a variety of schemes, including the summarization currently in use by one of the most popular video sharing websites.

Informativeness Video Summarization

DeepHash: Getting Regularization, Depth and Fine-Tuning Right

no code implementations • 20 Jan 2015 • Jie Lin, Olivier Morère, Vijay Chandrasekhar, Antoine Veillard, Hanlin Goh

This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval.
