Search Results for author: Jie Lin

Found 50 papers, 12 papers with code

Improving Group Connectivity for Generalization of Federated Deep Learning

no code implementations29 Feb 2024 Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Chao Wu

To bridge the gap between LMC and FL, in this paper we leverage fixed anchor models to study, both empirically and theoretically, the transitivity of connectivity from two models (LMC) to a group of models (model fusion in FL).

Federated Learning Linear Mode Connectivity

Training-time Neuron Alignment through Permutation Subspace for Improving Linear Mode Connectivity and Model Fusion

no code implementations2 Feb 2024 Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu

In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in the weight space even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape.

Federated Learning Linear Mode Connectivity

Understanding AI Cognition: A Neural Module for Inference Inspired by Human Memory Mechanisms

no code implementations1 Oct 2023 Xiangyu Zeng, Jie Lin, Piao Hu, Ruizheng Huang, Zhicheng Zhang

How humans and machines make sense of current inputs for relation reasoning and question answering, while putting the perceived information into the context of past memories, has been a challenging conundrum in cognitive science and artificial intelligence.

Image Classification Question Answering

Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks

2 code implementations ICCV 2023 Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, XiaoLi Li, Weisi Lin

On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively.

Combinatorial Optimization

Improving the Transferability of Adversarial Examples via Direction Tuning

2 code implementations27 Mar 2023 Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao

Although considerable effort has been devoted to improving the transferability of adversarial examples generated by transfer-based adversarial attacks, our investigation found that the large deviation between the actual and steepest update directions of current transfer-based adversarial attacks is caused by the large update step length, which prevents the generated adversarial examples from converging well.

Network Pruning

Fuzziness-tuned: Improving the Transferability of Adversarial Examples

no code implementations17 Mar 2023 Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao

In this paper, we first systematically investigated this issue and found that the enormous difference in attack success rates between the surrogate and victim models is caused by a special region (termed the fuzzy domain in our paper) in which adversarial examples are classified wrongly by the surrogate model but correctly by the victim model.

Deep Feature Selection for Anomaly Detection Based on Pretrained Network and Gaussian Discriminative Analysis

1 code implementation IEEE Open Journal of Instrumentation and Measurement (Volume: 1) 2022 Jie Lin, Song Chen, Enping Lin, Yu Yang

Deep neural networks serve as a powerful tool for visual anomaly detection (AD) and fault diagnosis, owing to their strong abstractive interpretation ability in the representation domain.

Anomaly Detection feature selection

Hybrid Multimodal Feature Extraction, Mining and Fusion for Sentiment Analysis

1 code implementation5 Aug 2022 Jia Li, Ziyang Zhang, Junjie Lang, Yueqi Jiang, Liuwei An, Peng Zou, Yangyang Xu, Sheng Gao, Jie Lin, Chunxiao Fan, Xiao Sun, Meng Wang

In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.

Data Augmentation Humor Detection +1

Improving the Robustness and Generalization of Deep Neural Network with Confidence Threshold Reduction

no code implementations2 Jun 2022 Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao

The empirical and theoretical analysis demonstrates that the MDL loss improves the robustness and generalization of the model simultaneously for natural training.

FACM: Intermediate Layer Still Retain Effective Features against Adversarial Examples

no code implementations2 Jun 2022 Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao

To enhance the robustness of the classifier, in our paper a Feature Analysis and Conditional Matching prediction distribution (FACM) model is proposed to utilize the features of intermediate layers to correct the classification.

Long-tailed Recognition by Learning from Latent Categories

no code implementations2 Jun 2022 Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin

Previous long-tailed recognition methods commonly focus on data augmentation or re-balancing strategies that give more attention to tail classes during model training.

Data Augmentation Long-tail Learning

OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization

no code implementations23 May 2022 Peng Hu, Xi Peng, Hongyuan Zhu, Mohamed M. Sabry Aly, Jie Lin

Numerous network compression methods such as pruning and quantization have been proposed to reduce the model size significantly, the key of which is to find a suitable compression allocation (e.g., pruning sparsity and quantization codebook) for each layer.

Quantization

Gradient Aligned Attacks via a Few Queries

no code implementations19 May 2022 Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao

Specifically, we propose a gradient aligned mechanism to ensure that the derivatives of the loss function with respect to the logit vector have the same weight coefficients between the surrogate and victim models.

On Representation Knowledge Distillation for Graph Neural Networks

1 code implementation9 Nov 2021 Chaitanya K. Joshi, Fayao Liu, Xu Xun, Jie Lin, Chuan-Sheng Foo

Past work on distillation for GNNs proposed the Local Structure Preserving loss (LSP), which matches local structural relationships defined over edges across the student and teacher's node embeddings.

Contrastive Learning Knowledge Distillation

Delving into Channels: Exploring Hyperparameter Space of Channel Bit Widths with Linear Complexity

no code implementations29 Sep 2021 Zhe Wang, Jie Lin, Xue Geng, Mohamed M. Sabry Aly, Vijay Chandrasekhar

We formulate the quantization of deep neural networks as a rate-distortion optimization problem, and present an ultra-fast algorithm to search the bit allocation of channels.

Quantization

Global Magnitude Pruning With Minimum Threshold Is All We Need

1 code implementation29 Sep 2021 Manas Gupta, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Efe Camci, Chuan-Sheng Foo, Jie Lin

We show that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures.

Network Pruning
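The linked code is the authors' reference; as a hedged illustration of the general idea only (not their implementation, and with the `min_keep` safeguard named here purely for illustration), global magnitude pruning ranks all weights across layers jointly by absolute value and removes the smallest fraction, while a minimum threshold guarantees each layer retains at least a few weights:

```python
import numpy as np

def global_magnitude_prune(layers, sparsity, min_keep):
    """Zero out the smallest-magnitude weights across all layers jointly.

    layers   : list of numpy weight arrays
    sparsity : global fraction of weights to remove (0..1)
    min_keep : minimum number of weights each layer must retain
               (the "minimum threshold" safeguard; name assumed)
    """
    all_mags = np.concatenate([np.abs(w).ravel() for w in layers])
    k = int(sparsity * all_mags.size)
    if k == 0:
        return layers
    cutoff = np.partition(all_mags, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = []
    for w in layers:
        mask = np.abs(w) > cutoff
        # Safeguard: if a layer would fall below min_keep surviving
        # weights, keep its min_keep largest-magnitude weights instead.
        if mask.sum() < min_keep:
            keep_idx = np.argsort(np.abs(w).ravel())[-min_keep:]
            mask = np.zeros(w.size, dtype=bool)
            mask[keep_idx] = True
            mask = mask.reshape(w.shape)
        pruned.append(w * mask)
    return pruned
```

Because the cutoff is computed over all layers at once, the per-layer sparsity adapts automatically: layers with many small weights are pruned harder than layers with large weights.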

Few-Shot Segmentation with Global and Local Contrastive Learning

1 code implementation11 Aug 2021 Weide Liu, Zhonghua Wu, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin

To this end, we first propose a prior extractor to learn the query information from the unlabeled images with our proposed global-local contrastive learning.

Contrastive Learning Image Segmentation +2

Point Discriminative Learning for Data-efficient 3D Point Cloud Analysis

no code implementations4 Aug 2021 Fayao Liu, Guosheng Lin, Chuan-Sheng Foo, Chaitanya K. Joshi, Jie Lin

In this work we propose PointDisc, a point discriminative learning method to leverage self-supervisions for data-efficient 3D point cloud classification and segmentation.

3D Object Classification 3D Part Segmentation +5

PSRR-MaxpoolNMS: Pyramid Shifted MaxpoolNMS with Relationship Recovery

no code implementations CVPR 2021 Tianyi Zhang, Jie Lin, Peng Hu, Bin Zhao, Mohamed M. Sabry Aly

Unlike convolutions, which are inherently parallel, the de facto standard for NMS, namely GreedyNMS, cannot be easily parallelized and thus can become the performance bottleneck in convolutional object detection pipelines.

Object Detection
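As background on why GreedyNMS resists parallelization (a minimal textbook sketch, not the paper's PSRR-MaxpoolNMS method): each kept box suppresses later overlapping candidates, so every iteration depends on the outcome of the previous one:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Sequential GreedyNMS: repeatedly keep the highest-scoring
    remaining box and suppress every other box overlapping it above
    iou_thresh.  The data dependence between iterations is what makes
    this hard to parallelize, unlike a convolution."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

The `while` loop cannot be split across workers, because which boxes survive round t+1 depends on which box was kept in round t.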

FFConv: Fast Factorized Convolutional Neural Network Inference on Encrypted Data

no code implementations6 Feb 2021 Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li

Despite the faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.

Privacy Preserving

Dimension Reduction in Quantum Key Distribution for Continuous- and Discrete-Variable Protocols

no code implementations14 Jan 2021 Twesh Upadhyaya, Thomas van Himbeeck, Jie Lin, Norbert Lütkenhaus

We develop a method to connect the infinite-dimensional description of optical continuous-variable quantum key distribution (QKD) protocols to a finite-dimensional formulation.

Dimensionality Reduction Quantum Physics

A*HAR: A New Benchmark towards Semi-supervised learning for Class-imbalanced Human Activity Recognition

1 code implementation13 Jan 2021 Govind Narasimman, Kangkang Lu, Arun Raja, Chuan Sheng Foo, Mohamed Sabry Aly, Jie Lin, Vijay Chandrasekhar

Despite the vast literature on Human Activity Recognition (HAR) with wearable inertial sensor data, it is perhaps surprising that there are few studies investigating semi-supervised learning for HAR, particularly in challenging scenarios with class imbalance.

Human Activity Recognition

The Tsinghua University-Ma Huateng Telescopes for Survey: Overview and Performance of the System

no code implementations21 Dec 2020 Ji-Cheng Zhang, Xiao-Feng Wang, Jun Mo, Gao-Bo Xi, Jie Lin, Xiao-Jun Jiang, Xiao-Ming Zhang, Wen-Xiong Li, Sheng-Yu Yan, Zhi-Hao Chen, Lei Hu, Xue Li, Wei-Li Lin, Han Lin, Cheng Miao, Li-Ming Rui, Han-Na Sai, Dan-Feng Xiang, Xing-Han Zhang

The TMTS system can have a FoV of about 9 deg2 when monitoring the sky in two bands (i.e., SDSS g and r filters) at the same time, and a maximum FoV of ~18 deg2 when the four telescopes monitor different sky areas in monochromatic filter mode.

Instrumentation and Methods for Astrophysics

Role-Wise Data Augmentation for Knowledge Distillation

1 code implementation ICLR 2020 Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, Chris Pal, Hao Dong

To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.

Data Augmentation Knowledge Distillation

Security proof of practical quantum key distribution with detection-efficiency mismatch

no code implementations9 Apr 2020 Yanbao Zhang, Patrick J. Coles, Adam Winick, Jie Lin, Norbert Lutkenhaus

Our method also shows that, in the absence of efficiency mismatch in our detector model, the key rate increases if the loss due to detection inefficiency is assumed to be outside the adversary's control, compared with the view in which, for a security proof, this loss is attributed to the action of the adversary.

Quantum Physics

Deeply Activated Salient Region for Instance Search

no code implementations1 Feb 2020 Hui-Chu Xiao, Wan-Lei Zhao, Jie Lin, Chong-Wah Ngo

Due to the lack of a proper mechanism for locating instances and deriving feature representations, instance search is generally effective only for retrieving instances of known object categories.

Image Retrieval Instance Search

MaskConvNet: Training Efficient ConvNets from Scratch via Budget-constrained Filter Pruning

no code implementations ICLR 2020 Raden Mu'az Mun'im, Jie Lin, Vijay Chandrasekhar, Koichi Shinoda

(4) Fast: the number of training epochs required by MaskConvNet is close to that of training a baseline without pruning.

Network Pruning

Towards Effective 2-bit Quantization: Pareto-optimal Bit Allocation for Deep CNNs Compression

no code implementations25 Sep 2019 Zhe Wang, Jie Lin, Mohamed M. Sabry Aly, Sean I Young, Vijay Chandrasekhar, Bernd Girod

In this paper, we address an important problem of how to optimize the bit allocation of weights and activations for deep CNNs compression.

Quantization

A*3D Dataset: Towards Autonomous Driving in Challenging Environments

1 code implementation17 Sep 2019 Quang-Hieu Pham, Pierre Sevestre, Ramanpreet Singh Pahwa, Huijing Zhan, Chun Ho Pang, Yuda Chen, Armin Mustafa, Vijay Chandrasekhar, Jie Lin

With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection.

3D Object Detection Autonomous Driving +4

Quantum-enhanced least-square support vector machine: simplified quantum algorithm and sparse solutions

no code implementations5 Aug 2019 Jie Lin, Dan-Bo Zhang, Shuo Zhang, Xiang Wang, Tan Li, Wan-su Bao

We also incorporate kernel methods into the above quantum algorithms, which use both the exponentially growing Hilbert space of qubits and the infinite dimensionality of continuous variables for quantum feature maps.

BIG-bench Machine Learning

Dataflow-based Joint Quantization of Weights and Activations for Deep Neural Networks

no code implementations4 Jan 2019 Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar

This paper addresses a challenging problem: how to reduce energy consumption without incurring a performance drop when deploying deep neural networks (DNNs) at the inference stage.

Quantization

TEA-DNN: the Quest for Time-Energy-Accuracy Co-optimized Deep Neural Networks

no code implementations29 Nov 2018 Lile Cai, Anne-Maelle Barneche, Arthur Herbout, Chuan Sheng Foo, Jie Lin, Vijay Ramaseshan Chandrasekhar, Mohamed M. Sabry

To this end, we introduce TEA-DNN, a NAS algorithm targeting multi-objective optimization of execution time, energy consumption, and classification accuracy of CNN workloads on embedded architectures.

General Classification Image Classification +1

Simple security analysis of phase-matching measurement-device-independent quantum key distribution

no code implementations26 Jul 2018 Jie Lin, Norbert Lütkenhaus

Variations of phase-matching measurement-device-independent quantum key distribution (PM-MDI QKD) protocols have been investigated before, but it was recently discovered that this type of protocol (under the name of twin-field QKD) can beat the linear scaling of the repeaterless bound on secret key rate capacity.

Quantum Physics

End-to-End Video Classification with Knowledge Graphs

no code implementations6 Nov 2017 Fang Yuan, Zhe Wang, Jie Lin, Luis Fernando D'Haro, Kim Jung Jae, Zeng Zeng, Vijay Chandrasekhar

In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework.

BIG-bench Machine Learning Classification +4

Pruning Convolutional Neural Networks for Image Instance Retrieval

no code implementations18 Jul 2017 Gaurav Manek, Jie Lin, Vijay Chandrasekhar, Ling-Yu Duan, Sateesh Giduthuri, Xiao-Li Li, Tomaso Poggio

In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN).

Image Instance Retrieval Retrieval

Compact Descriptors for Video Analysis: the Emerging MPEG Standard

no code implementations26 Apr 2017 Ling-Yu Duan, Vijay Chandrasekhar, Shiqi Wang, Yihang Lou, Jie Lin, Yan Bai, Tiejun Huang, Alex ChiChung Kot, Wen Gao

This paper provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG).

Compression of Deep Neural Networks for Image Instance Retrieval

no code implementations18 Jan 2017 Vijay Chandrasekhar, Jie Lin, Qianli Liao, Olivier Morère, Antoine Veillard, Ling-Yu Duan, Tomaso Poggio

One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or in custom hardware.

Image Instance Retrieval Model Compression +2

Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval

no code implementations15 Mar 2016 Olivier Morère, Jie Lin, Antoine Veillard, Vijay Chandrasekhar, Tomaso Poggio

The first one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a mathematical theory for computing group-invariant transformations with feed-forward neural networks.

Image Instance Retrieval Retrieval +1

Egocentric Activity Recognition with Multimodal Fisher Vector

no code implementations25 Jan 2016 Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal, Jie Lin

With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently.

Egocentric Activity Recognition

Group Invariant Deep Representations for Image Instance Retrieval

no code implementations9 Jan 2016 Olivier Morère, Antoine Veillard, Jie Lin, Julie Petta, Vijay Chandrasekhar, Tomaso Poggio

Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated.

Dimensionality Reduction Image Classification +3

Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing

no code implementations10 Nov 2015 Jie Lin, Olivier Morère, Julie Petta, Vijay Chandrasekhar, Antoine Veillard

Then, triplet networks, a rank-learning scheme based on weight-sharing nets, are used to fine-tune the binary embedding functions to retain as much as possible of the useful metric properties of the original space.

Image Classification Image Retrieval +1

Co-Regularized Deep Representations for Video Summarization

no code implementations30 Jan 2015 Olivier Morère, Hanlin Goh, Antoine Veillard, Vijay Chandrasekhar, Jie Lin

A comprehensive user study is conducted comparing our proposed method to a variety of schemes, including the summarization currently in use by one of the most popular video sharing websites.

Informativeness Video Summarization

DeepHash: Getting Regularization, Depth and Fine-Tuning Right

no code implementations20 Jan 2015 Jie Lin, Olivier Morere, Vijay Chandrasekhar, Antoine Veillard, Hanlin Goh

This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval.

Retrieval
