no code implementations • 6 Dec 2024 • Jie Lin, I Chiu, Kuan-Chen Wang, Kai-Chun Liu, Hsin-Min Wang, Ping-Cheng Yeh, Yu Tsao
Electrocardiogram (ECG) signals play a crucial role in diagnosing cardiovascular diseases.
no code implementations • 11 Nov 2024 • Xiaowei Long, Jie Lin, Xiangyuan Yang
Particularly, in our paper, the generation of adversarial examples is treated as the perturbation process of a Lyapunov dynamic system, and we propose an example stability mechanism in which a novel control term is added to adversarial example generation to ensure that normal examples can achieve dynamic stability while adversarial examples cannot.
1 code implementation • 2 Jul 2024 • Kaixin Xu, Zhe Wang, Chunyun Chen, Xue Geng, Jie Lin, Xulei Yang, Min Wu, XiaoLi Li, Weisi Lin
Vision transformers have emerged as a promising alternative to convolutional neural networks for various image analysis tasks, offering comparable or superior performance.
no code implementations • 9 May 2024 • Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, Chao Jin, Manas Gupta, Xulei Yang, Zhenghua Chen, Mohamed M. Sabry Aly, Jie Lin, Min Wu, XiaoLi Li
To address these challenges, researchers have developed various model compression techniques such as model quantization and model pruning.
no code implementations • 29 Feb 2024 • Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Chao Wu
Bridging the gap between LMC and FL, in this paper, we leverage fixed anchor models to empirically and theoretically study the transitivity property of connectivity from two models (LMC) to a group of models (model fusion in FL).
no code implementations • 2 Feb 2024 • Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu
In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in the weight space even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape.
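To make the LMC notion concrete, below is a minimal sketch of how a loss barrier between two trained networks is typically measured: interpolate the weights linearly and compare the peak loss along the path to the endpoint average. It assumes two PyTorch models with identical architectures; `evaluate_loss` is a hypothetical helper returning a model's loss on some fixed dataset.

```python
import copy
import torch

def interpolate_weights(model_a, model_b, alpha):
    """Return a copy of model_a whose float parameters are (1 - alpha)*A + alpha*B."""
    model = copy.deepcopy(model_a)
    sa, sb = model_a.state_dict(), model_b.state_dict()
    mixed = {k: (1 - alpha) * sa[k] + alpha * sb[k]
             if sa[k].is_floating_point() else sa[k]  # skip integer buffers
             for k in sa}
    model.load_state_dict(mixed)
    return model

def lmc_barrier(model_a, model_b, evaluate_loss, steps=11):
    """Loss barrier along the linear path: peak loss minus the endpoint average.
    A near-zero barrier means the two solutions are linearly mode connected."""
    losses = [evaluate_loss(interpolate_weights(model_a, model_b, a))
              for a in torch.linspace(0, 1, steps).tolist()]
    return max(losses) - (losses[0] + losses[-1]) / 2
```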
1 code implementation • 1 Oct 2023 • Xiangyu Zeng, Jie Lin, Piao Hu, Ruizheng Huang, Zhicheng Zhang
How humans and machines make sense of current inputs for relation reasoning and question answering, while putting the perceived information into the context of past memories, has been a challenging conundrum in cognitive science and artificial intelligence.
no code implementations • 13 Sep 2023 • Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin
In this work, we tackle the challenging problem of long-tailed image recognition.
2 code implementations • ICCV 2023 • Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, XiaoLi Li, Weisi Lin
On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively.
no code implementations • 18 Aug 2023 • Ruibing Jin, Guosheng Lin, Min Wu, Jie Lin, Zhengguo Li, XiaoLi Li, Zhenghua Chen
To address this issue, we propose an unlimited knowledge distillation (UKD) method in this paper.
2 code implementations • 27 Mar 2023 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
Although considerable effort has been devoted to improving the transferability of adversarial examples generated by transfer-based adversarial attacks, our investigation found that the large deviation between the actual and steepest update directions of current transfer-based attacks is caused by the large update step length, so the generated adversarial examples cannot converge well.
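The sketch below is not the paper's proposed attack; it illustrates the standard iterative FGSM baseline the observation concerns, where splitting the L-inf budget `eps` into several smaller updates keeps each step closer to the locally steepest direction than one large step. `model` and `loss_fn` are hypothetical placeholders.

```python
import torch

def ifgsm(model, loss_fn, x, y, eps=8 / 255, steps=10):
    """Iterative FGSM sketch: the L-inf budget `eps` is split into `steps`
    smaller updates, so each step stays closer to the locally steepest
    direction than a single large FGSM step. Inputs assumed in [0, 1]."""
    x_adv = x.clone().detach()
    alpha = eps / steps  # small step length reduces the per-update deviation
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project into eps-ball
    return x_adv
```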
no code implementations • 17 Mar 2023 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
In this paper, we first systematically investigate this issue and find that the enormous difference in attack success rates between the surrogate and victim models is caused by the existence of a special region (called the fuzzy domain in our paper), in which adversarial examples are classified wrongly by the surrogate model but correctly by the victim model.
1 code implementation • IEEE Open Journal of Instrumentation and Measurement (Volume: 1) 2022 • Jie Lin, Song Chen, Enping Lin, Yu Yang
Deep neural networks serve as a powerful tool for visual anomaly detection (AD) and fault diagnosis, attributed to their strong abstractive interpretation ability in the representation domain.
Ranked #73 on Anomaly Detection on MVTec AD
1 code implementation • 5 Aug 2022 • Jia Li, Ziyang Zhang, Junjie Lang, Yueqi Jiang, Liuwei An, Peng Zou, Yangyang Xu, Sheng Gao, Jie Lin, Chunxiao Fan, Xiao Sun, Meng Wang
In this paper, we present our solutions for the Multimodal Sentiment Analysis Challenge (MuSe) 2022, which includes MuSe-Humor, MuSe-Reaction and MuSe-Stress Sub-challenges.
no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
To enhance the robustness of the classifier, in our paper, a Feature Analysis and Conditional Matching prediction distribution (FACM) model is proposed to utilize the features of intermediate layers to correct the classification.
no code implementations • 2 Jun 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
The empirical and theoretical analysis demonstrates that the MDL loss improves the robustness and generalization of the model simultaneously for natural training.
no code implementations • 2 Jun 2022 • Weide Liu, Zhonghua Wu, Yiming Wang, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin
Previous long-tailed recognition methods commonly focus on data augmentation or re-balancing strategies that give more attention to the tail classes during model training.
Ranked #9 on Long-tail Learning on CIFAR-10-LT (ρ=10)
no code implementations • 23 May 2022 • Peng Hu, Xi Peng, Hongyuan Zhu, Mohamed M. Sabry Aly, Jie Lin
Numerous network compression methods such as pruning and quantization have been proposed to reduce the model size significantly, the key of which is to find a suitable compression allocation (e.g., pruning sparsity and quantization codebook) for each layer.
no code implementations • 19 May 2022 • Xiangyuan Yang, Jie Lin, HANLIN ZHANG, Xinyu Yang, Peng Zhao
Specifically, we propose a gradient aligned mechanism to ensure that the derivatives of the loss function with respect to the logit vector have the same weight coefficients between the surrogate and victim models.
1 code implementation • 9 Nov 2021 • Chaitanya K. Joshi, Fayao Liu, Xu Xun, Jie Lin, Chuan-Sheng Foo
Past work on distillation for GNNs proposed the Local Structure Preserving loss (LSP), which matches local structural relationships defined over edges across the student and teacher's node embeddings.
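As a rough illustration of LSP, here is a sketch under simplifying assumptions: a dense adjacency matrix and an RBF kernel, whereas the original formulation also admits other kernels and operates on sparse edge lists.

```python
import torch

def lsp_loss(student_emb, teacher_emb, adj, eps=1e-10):
    """Local Structure Preserving sketch: each node's local structure is its
    kernel similarity to its neighbours, normalized over the neighbourhood;
    the loss is the KL divergence between teacher and student structures.
    `adj` is a dense {0, 1} adjacency matrix; every node is assumed to have
    at least one neighbour."""
    def local_structure(emb):
        sim = torch.exp(-torch.cdist(emb, emb).pow(2)) * adj  # RBF kernel on edges
        return sim / sim.sum(dim=-1, keepdim=True).clamp_min(eps)

    ls_s, ls_t = local_structure(student_emb), local_structure(teacher_emb)
    kl = (ls_t * ((ls_t + eps).log() - (ls_s + eps).log())).sum(dim=-1)
    return kl.mean()
```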
1 code implementation • 29 Sep 2021 • Manas Gupta, Vishandi Rudy Keneta, Abhishek Vaidyanathan, Ritwik Kanodia, Efe Camci, Chuan-Sheng Foo, Jie Lin
We show that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures.
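A minimal sketch of the global magnitude pruning idea, assuming a PyTorch model: unlike layer-wise pruning, a single threshold is computed over all weights, so layers effectively compete for the sparsity budget.

```python
import torch

def global_magnitude_prune(model, sparsity=0.9):
    """Zero out the `sparsity` fraction of weight entries with the smallest
    magnitude, using one threshold shared by all layers (global, not per-layer)."""
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases/norms
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values  # k-th smallest magnitude overall
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).to(w.dtype))
```

For example, `global_magnitude_prune(model, 0.9)` zeros roughly 90% of the weight entries across the whole network.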
no code implementations • 29 Sep 2021 • Zhe Wang, Jie Lin, Xue Geng, Mohamed M. Sabry Aly, Vijay Chandrasekhar
We formulate the quantization of deep neural networks as a rate-distortion optimization problem, and present an ultra-fast algorithm to search the bit allocation of channels.
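The paper's ultra-fast search is not reproduced here, but the following sketch conveys the underlying rate-distortion view: measure each channel's quantization distortion as a function of bit-width, then greedily spend a total bit budget where it reduces distortion most. `channels` is assumed to be a list of per-channel weight arrays.

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization to 2**bits levels (bits >= 2 assumed)."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(x / scale) * scale

def greedy_bit_allocation(channels, budget_bits, max_bits=8, min_bits=2):
    """Greedy rate-distortion allocation sketch: start every channel at
    `min_bits`, then repeatedly grant one extra bit to the channel whose
    distortion (MSE) drops the most, until the total budget is spent."""
    bits = [min_bits] * len(channels)
    mse = lambda i, b: float(((channels[i] - quantize(channels[i], b)) ** 2).mean())
    spent = sum(bits)
    while spent < budget_bits:
        gains = [mse(i, b) - mse(i, b + 1) if b < max_bits else -np.inf
                 for i, b in enumerate(bits)]
        best = int(np.argmax(gains))
        if not np.isfinite(gains[best]):
            break  # every channel is already at max_bits
        bits[best] += 1
        spent += 1
    return bits
```

Calling `greedy_bit_allocation(channels, budget_bits=4 * len(channels))` targets an average of 4 bits per channel.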
1 code implementation • 11 Aug 2021 • Weide Liu, Zhonghua Wu, Henghui Ding, Fayao Liu, Jie Lin, Guosheng Lin
To this end, we first propose a prior extractor to learn the query information from the unlabeled images with our proposed global-local contrastive learning.
no code implementations • 4 Aug 2021 • Fayao Liu, Guosheng Lin, Chuan-Sheng Foo, Chaitanya K. Joshi, Jie Lin
In this work we propose PointDisc, a point discriminative learning method that leverages self-supervision for data-efficient 3D point cloud classification and segmentation.
1 code implementation • CVPR 2021 • Peng Hu, Xi Peng, Hongyuan Zhu, Liangli Zhen, Jie Lin
Recently, cross-modal retrieval has been emerging with the help of deep multimodal learning.
no code implementations • CVPR 2021 • Tianyi Zhang, Jie Lin, Peng Hu, Bin Zhao, Mohamed M. Sabry Aly
Unlike convolutions which are inherently parallel, the de-facto standard for NMS, namely GreedyNMS, cannot be easily parallelized and thus could be the performance bottleneck in convolutional object detection pipelines.
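For reference, a standard GreedyNMS implementation is sketched below; the loop makes the sequential dependency explicit, since each surviving box depends on all higher-scoring boxes kept before it, which is what resists parallelization.

```python
import numpy as np

def greedy_nms(boxes, scores, iou_threshold=0.5):
    """Classic GreedyNMS over (x1, y1, x2, y2) boxes. The while-loop is
    inherently sequential: whether a box survives depends on every
    higher-scoring box kept before it."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box with all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]  # suppress overlapping boxes
    return keep
```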
no code implementations • 6 Feb 2021 • Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li
Despite enabling faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.
no code implementations • 14 Jan 2021 • Twesh Upadhyaya, Thomas van Himbeeck, Jie Lin, Norbert Lütkenhaus
We develop a method to connect the infinite-dimensional description of optical continuous-variable quantum key distribution (QKD) protocols to a finite-dimensional formulation.
Dimensionality Reduction • Quantum Physics
1 code implementation • 13 Jan 2021 • Govind Narasimman, Kangkang Lu, Arun Raja, Chuan Sheng Foo, Mohamed Sabry Aly, Jie Lin, Vijay Chandrasekhar
Despite the vast literature on Human Activity Recognition (HAR) with wearable inertial sensor data, it is perhaps surprising that there are few studies investigating semi-supervised learning for HAR, particularly in the challenging scenario of class imbalance.
no code implementations • 21 Dec 2020 • Ji-Cheng Zhang, Xiao-Feng Wang, Jun Mo, Gao-Bo Xi, Jie Lin, Xiao-Jun Jiang, Xiao-Ming Zhang, Wen-Xiong Li, Sheng-Yu Yan, Zhi-Hao Chen, Lei Hu, Xue Li, Wei-Li Lin, Han Lin, Cheng Miao, Li-Ming Rui, Han-Na Sai, Dan-Feng Xiang, Xing-Han Zhang
The TMTS system can have a FoV of about 9 deg² when monitoring the sky with two bands (i.e., SDSS g and r filters) at the same time, and a maximum FoV of ~18 deg² when four telescopes monitor different sky areas in monochromatic filter mode.
Instrumentation and Methods for Astrophysics
1 code implementation • ICLR 2020 • Jie Fu, Xue Geng, Zhijian Duan, Bohan Zhuang, Xingdi Yuan, Adam Trischler, Jie Lin, Chris Pal, Hao Dong
To our knowledge, existing methods overlook the fact that although the student absorbs extra knowledge from the teacher, both models share the same input data -- and this data is the only medium by which the teacher's knowledge can be demonstrated.
no code implementations • 9 Apr 2020 • Yanbao Zhang, Patrick J. Coles, Adam Winick, Jie Lin, Norbert Lutkenhaus
Our method also shows that in the absence of efficiency mismatch in our detector model, the key rate increases if the loss due to detection inefficiency is assumed to be outside of the adversary's control, as compared to the view where, for a security proof, this loss is attributed to the action of the adversary.
Quantum Physics
no code implementations • 1 Feb 2020 • Hui-Chu Xiao, Wan-Lei Zhao, Jie Lin, Chong-Wah Ngo
Due to the lack of a proper mechanism for locating instances and deriving feature representations, instance search is generally only effective for retrieving instances of known object categories.
no code implementations • ICLR 2020 • Raden Mu'az Mun'im, Jie Lin, Vijay Chandrasekhar, Koichi Shinoda
(4) Fast: the number of training epochs required by MaskConvNet is close to that of training a baseline without pruning.
no code implementations • 25 Sep 2019 • Zhe Wang, Jie Lin, Mohamed M. Sabry Aly, Sean I Young, Vijay Chandrasekhar, Bernd Girod
In this paper, we address the important problem of how to optimize the bit allocation of weights and activations for deep CNN compression.
1 code implementation • 17 Sep 2019 • Quang-Hieu Pham, Pierre Sevestre, Ramanpreet Singh Pahwa, Huijing Zhan, Chun Ho Pang, Yuda Chen, Armin Mustafa, Vijay Chandrasekhar, Jie Lin
With the increasing global popularity of self-driving cars, there is an immediate need for challenging real-world datasets for benchmarking and training various computer vision tasks such as 3D object detection.
no code implementations • 5 Aug 2019 • Jie Lin, Dan-Bo Zhang, Shuo Zhang, Xiang Wang, Tan Li, Wan-su Bao
We also incorporate kernel methods into the above quantum algorithms, which use both the exponentially growing Hilbert space of qubits and the infinite dimensionality of continuous variables for quantum feature maps.
no code implementations • 4 Jan 2019 • Xue Geng, Jie Fu, Bin Zhao, Jie Lin, Mohamed M. Sabry Aly, Christopher Pal, Vijay Chandrasekhar
This paper addresses a challenging problem: how to reduce energy consumption without incurring a performance drop when deploying deep neural networks (DNNs) at the inference stage.
no code implementations • 29 Nov 2018 • Lile Cai, Anne-Maelle Barneche, Arthur Herbout, Chuan Sheng Foo, Jie Lin, Vijay Ramaseshan Chandrasekhar, Mohamed M. Sabry
To this end, we introduce TEA-DNN, a NAS algorithm targeting multi-objective optimization of execution time, energy consumption, and classification accuracy of CNN workloads on embedded architectures.
no code implementations • 2 Nov 2018 • Ahmad Al Badawi, Jin Chao, Jie Lin, Chan Fook Mun, Jun Jie Sim, Benjamin Hong Meng Tan, Xiao Nan, Khin Mi Mi Aung, Vijay Ramaseshan Chandrasekhar
In this paper, we show how to accelerate the performance of running CNNs on encrypted data with GPUs.
no code implementations • 26 Jul 2018 • Jie Lin, Norbert Lütkenhaus
Variations of phase-matching measurement-device-independent quantum key distribution (PM-MDI QKD) protocols have been investigated before, but it was recently discovered that this type of protocol (under the name of twin-field QKD) can beat the linear scaling of the repeaterless secret-key capacity bound.
Quantum Physics
no code implementations • 6 Nov 2017 • Fang Yuan, Zhe Wang, Jie Lin, Luis Fernando D'Haro, Kim Jung Jae, Zeng Zeng, Vijay Chandrasekhar
In particular, we unify traditional "knowledgeless" machine learning models and knowledge graphs in a novel end-to-end framework.
no code implementations • 18 Jul 2017 • Gaurav Manek, Jie Lin, Vijay Chandrasekhar, Ling-Yu Duan, Sateesh Giduthuri, Xiao-Li Li, Tomaso Poggio
In this work, we focus on the problem of image instance retrieval with deep descriptors extracted from pruned Convolutional Neural Networks (CNN).
1 code implementation • 17 Jun 2017 • Zhe Wang, Kingsley Kuan, Mathieu Ravaut, Gaurav Manek, Sibo Song, Yuan Fang, Seokhwan Kim, Nancy Chen, Luis Fernando D'Haro, Luu Anh Tuan, Hongyuan Zhu, Zeng Zeng, Ngai Man Cheung, Georgios Piliouras, Jie Lin, Vijay Chandrasekhar
Beyond that, we extend the original competition by including text information in the classification, making this a truly multi-modal approach with vision, audio and text.
no code implementations • 26 May 2017 • Kingsley Kuan, Mathieu Ravaut, Gaurav Manek, Huiling Chen, Jie Lin, Babar Nazir, Cen Chen, Tse Chiang Howe, Zeng Zeng, Vijay Chandrasekhar
We present a deep learning framework for computer-aided lung cancer diagnosis.
no code implementations • 26 Apr 2017 • Ling-Yu Duan, Vijay Chandrasekhar, Shiqi Wang, Yihang Lou, Jie Lin, Yan Bai, Tiejun Huang, Alex ChiChung Kot, Wen Gao
This paper provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG).
no code implementations • 18 Jan 2017 • Vijay Chandrasekhar, Jie Lin, Qianli Liao, Olivier Morère, Antoine Veillard, Ling-Yu Duan, Tomaso Poggio
One major drawback of CNN-based global descriptors is that uncompressed deep neural network models require hundreds of megabytes of storage, making them inconvenient to deploy in mobile applications or in custom hardware.
no code implementations • 15 Mar 2016 • Olivier Morère, Jie Lin, Antoine Veillard, Vijay Chandrasekhar, Tomaso Poggio
The first one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a mathematical theory for computing group-invariant transformations with feed-forward neural networks.
no code implementations • 25 Jan 2016 • Sibo Song, Ngai-Man Cheung, Vijay Chandrasekhar, Bappaditya Mandal, Jie Lin
With the increasing availability of wearable devices, research on egocentric activity recognition has received much attention recently.
no code implementations • 9 Jan 2016 • Olivier Morère, Antoine Veillard, Jie Lin, Julie Petta, Vijay Chandrasekhar, Tomaso Poggio
Based on a thorough empirical evaluation using several publicly available datasets, we show that our method is able to significantly and consistently improve retrieval results every time a new type of invariance is incorporated.
no code implementations • 10 Nov 2015 • Jie Lin, Olivier Morère, Julie Petta, Vijay Chandrasekhar, Antoine Veillard
Then, triplet networks, a rank-learning scheme based on weight-sharing nets, are used to fine-tune the binary embedding functions to retain as much as possible of the useful metric properties of the original space.
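A minimal sketch of such a triplet scheme, assuming a weight-shared PyTorch embedding network; the tanh relaxation followed by zero-thresholding at inference is one common way to obtain binary codes and is not necessarily the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def binary_triplet_loss(embed_net, anchor, positive, negative, margin=0.5):
    """Triplet ranking sketch with a weight-shared embedding network: pull the
    anchor towards the positive and push it from the negative. The tanh keeps
    outputs near {-1, +1}, so thresholding at zero yields binary codes at
    inference time (one common relaxation)."""
    za, zp, zn = (torch.tanh(embed_net(x)) for x in (anchor, positive, negative))
    d_pos = F.pairwise_distance(za, zp)
    d_neg = F.pairwise_distance(za, zn)
    return F.relu(d_pos - d_neg + margin).mean()
```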
no code implementations • 11 Aug 2015 • Vijay Chandrasekhar, Jie Lin, Olivier Morère, Hanlin Goh, Antoine Veillard
The second part of the study focuses on the impact of geometrical transformations such as rotations and scale changes.
no code implementations • 30 Jan 2015 • Olivier Morère, Hanlin Goh, Antoine Veillard, Vijay Chandrasekhar, Jie Lin
A comprehensive user study is conducted comparing our proposed method to a variety of schemes, including the summarization currently in use by one of the most popular video sharing websites.
no code implementations • 20 Jan 2015 • Jie Lin, Olivier Morere, Vijay Chandrasekhar, Antoine Veillard, Hanlin Goh
This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval.
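To illustrate why such compact binary hashes are attractive for instance retrieval, here is a sketch of the standard matching procedure, assuming NumPy arrays: 64-1024 bit codes pack into 8-128 bytes, and ranking a database reduces to XOR plus a popcount table lookup.

```python
import numpy as np

# byte-wise popcount lookup table
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def pack_hash(bits):
    """Pack a {0, 1} vector (e.g., 64-1024 entries) into bytes."""
    return np.packbits(bits.astype(np.uint8))

def hamming_rank(query, database):
    """Rank packed database hashes by Hamming distance to a packed query:
    XOR the bytes, then count set bits via the lookup table."""
    xor = np.bitwise_xor(database, query)   # (num_items, num_bytes)
    distances = POPCOUNT[xor].sum(axis=1)
    return np.argsort(distances)
```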