Search Results for author: Tianxiang Li

Found 5 papers, 3 papers with code

Unveiling Single-Bit-Flip Attacks on DNN Executables

no code implementations · 12 Sep 2023 · Yanzuo Chen, Zhibo Liu, Yuanyuan Yuan, Sihang Hu, Tianxiang Li, Shuai Wang

Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit-flip), and transferable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels.
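The "single-bit flip" severity is easy to see at the level of a stored weight: toggling one bit of an IEEE-754 float32 can change its value by many orders of magnitude. The sketch below is only an illustration of that effect, not the paper's attack pipeline, which targets compiled DNN executables.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of `value`."""
    # Pack the float into its 32-bit integer representation, toggle
    # the requested bit, then unpack back to a float.
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

w = 0.5
# Flipping the top exponent bit (bit 30) turns 0.5 into 2.0 ** 127,
# an enormous weight that can saturate downstream activations.
w_attacked = flip_bit(w, 30)
```

A single such flip in a weight on a critical path can already destroy model accuracy or bias the output toward an attacker-chosen label.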

MISSRec: Pre-training and Transferring Multi-modal Interest-aware Sequence Representation for Recommendation

1 code implementation · 22 Aug 2023 · Jinpeng Wang, Ziyun Zeng, Yunxiao Wang, Yuting Wang, Xingyu Lu, Tianxiang Li, Jun Yuan, Rui Zhang, Hai-Tao Zheng, Shu-Tao Xia

We propose MISSRec, a multi-modal pre-training and transfer learning framework for sequential recommendation (SR). On the user side, we design a Transformer-based encoder-decoder model, where the contextual encoder learns to capture sequence-level multi-modal user interests, while a novel interest-aware decoder is developed to grasp item-modality-interest relations for better sequence representation.

Tasks: Contrastive Learning, Sequential Recommendation, +1
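The decoder's core operation, interest queries attending over the encoded item sequence, can be sketched with plain scaled dot-product cross-attention. All names and shapes here are illustrative assumptions, not MISSRec's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: interest queries (decoder side)
    attend over the encoded item sequence (encoder side)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores) @ values

rng = np.random.default_rng(0)
seq = rng.normal(size=(5, 8))        # encoded multi-modal item sequence (assumed shape)
interests = rng.normal(size=(3, 8))  # hypothetical learned interest tokens
out = cross_attention(interests, seq, seq)  # (3, 8) interest-aware summary
```

Each row of `out` is a weighted mix of the sequence representations, which is the sense in which the decoder grasps item-interest relations.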

New Adversarial Image Detection Based on Sentiment Analysis

1 code implementation · 3 May 2023 · Yulong Wang, Tianxiang Li, Shenghong Li, Xin Yuan, Wei Ni

Deep Neural Networks (DNNs) are vulnerable to adversarial examples, while adversarial attack models, e.g., DeepFool, are on the rise and outrunning adversarial example detection techniques.

Tasks: Adversarial Attack, Sentiment Analysis
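For intuition about how attack models craft adversarial examples, here is a minimal gradient-sign perturbation (FGSM-style) against a plain logistic-regression model. DeepFool itself uses an iterative minimal-perturbation scheme; this is only an illustrative sketch under assumed toy inputs.

```python
import numpy as np

def fgsm_linear(x, w, b, y, eps):
    """One FGSM-style step against a logistic-regression model:
    nudge x by eps in the direction that increases the loss."""
    logit = w @ x + b
    p = 1.0 / (1.0 + np.exp(-logit))  # sigmoid probability of class 1
    grad = (p - y) * w                # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([1.0, -2.0, 0.5])
w = np.array([0.8, -0.4, 0.3])
x_adv = fgsm_linear(x, w, 0.1, 1.0, eps=0.1)  # each feature shifts by +/- eps
```

Detection techniques must flag `x_adv` even though it differs from `x` by at most `eps` per feature, which is why attacks like DeepFool, which minimize the perturbation, are hard to catch.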

Controller-Guided Partial Label Consistency Regularization with Unlabeled Data

no code implementations · 20 Oct 2022 · Qian-Wei Wang, Bowen Zhao, Mingyan Zhu, Tianxiang Li, Zimo Liu, Shu-Tao Xia

Partial label learning (PLL) learns from training examples each associated with multiple candidate labels, among which only one is valid.

Tasks: Contrastive Learning, Data Augmentation, +2
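The PLL setting described above can be made concrete with a tiny sketch: each example carries a candidate-label set, and a common baseline heuristic disambiguates by trusting the model's current scores. The data layout and helper below are hypothetical illustrations, not the paper's controller-guided method.

```python
# Each training example carries a set of candidate labels; exactly one
# candidate is the true label, but which one is unknown to the learner.
dataset = [
    {"features": [0.2, 0.7], "candidates": {0, 2}},
    {"features": [0.9, 0.1], "candidates": {1, 2, 3}},
]

def disambiguate(scores, candidates):
    """Baseline PLL heuristic (illustrative): among the candidate labels,
    pick the one the current model scores highest."""
    return max(candidates, key=lambda c: scores[c])

# Given per-class model scores, only candidates compete for the pseudo-label.
pseudo_label = disambiguate([0.1, 0.5, 0.3, 0.2], {1, 2, 3})
```

Training then alternates between fitting the model to these pseudo-labels and re-disambiguating, which is the loop that consistency regularization stabilizes.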

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks

2 code implementations · ICCV 2019 · Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, Junjie Yan

Hardware-friendly network quantization (e.g., binary/uniform quantization) can efficiently accelerate inference and meanwhile reduce the memory consumption of deep neural networks, which is crucial for model deployment on resource-limited devices like mobile phones.

Tasks: Quantization
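For reference, hard uniform quantization, the non-differentiable rounding step that differentiable soft quantization (DSQ) softens, can be sketched as follows. This is an illustrative sketch, not the paper's DSQ formulation.

```python
import numpy as np

def uniform_quantize(x, bits):
    """Hard uniform quantization of values in [-1, 1] to 2**bits levels.
    The round() here has zero gradient almost everywhere, which is the
    training obstacle that a soft, differentiable surrogate addresses."""
    levels = 2 ** bits - 1
    x = np.clip(x, -1.0, 1.0)
    return np.round((x + 1) / 2 * levels) / levels * 2 - 1

w = np.array([-0.73, 0.12, 0.99])
w_q = uniform_quantize(w, bits=2)  # snapped to {-1, -1/3, 1/3, 1}
```

With `bits=1` this degenerates to binary quantization, the other hardware-friendly scheme the abstract mentions.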
