Search Results for author: Lu Yu

Found 61 papers, 17 papers with code

Model Partition and Resource Allocation for Split Learning in Vehicular Edge Networks

no code implementations11 Nov 2024 Lu Yu, Zheng Chang, Yunjian Jia, Geyong Min

The integration of autonomous driving technologies with vehicular networks presents significant challenges in privacy preservation, communication efficiency, and resource allocation.

Autonomous Driving Deep Reinforcement Learning +1

Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models

1 code implementation29 Oct 2024 Lu Yu, Haiyang Zhang, Changsheng Xu

Our goal is to preserve the generalization ability of the CLIP model while enhancing its adversarial robustness: the Attention Refinement module aligns the text-guided attention obtained from the target model on adversarial examples with the text-guided attention obtained from the original model on clean examples.

Adversarial Robustness
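
The snippet above describes aligning two text-guided attention maps. Below is a minimal sketch of what such an alignment objective could look like, assuming (hypothetically) that both maps are flattened spatial attention tensors; the function name and shapes are illustrative, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def attention_alignment_loss(attn_adv: torch.Tensor,
                             attn_clean: torch.Tensor) -> torch.Tensor:
    """Align text-guided attention from the target model on adversarial
    examples with attention from the frozen original model on clean
    examples. Shapes are assumed to be (batch, H*W) attention maps."""
    # Normalize both maps to probability distributions over spatial positions.
    p_adv = F.log_softmax(attn_adv, dim=-1)
    p_clean = F.softmax(attn_clean, dim=-1)
    # KL divergence pulls the adversarial attention toward the clean one.
    return F.kl_div(p_adv, p_clean, reduction="batchmean")
```

A KL divergence is one natural choice of alignment distance here; the paper may use a different one.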

Faithful Interpretation for Graph Neural Networks

no code implementations9 Oct 2024 Lijie Hu, Tianhao Huang, Lu Yu, WanYu Lin, Tianhang Zheng, Di Wang

In this paper, we propose a solution to this problem by introducing a novel notion called Faithful Graph Attention-based Interpretation (FGAI).

Graph Attention

Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning

1 code implementation2 Aug 2024 Lu Yu, Zhe Tao, Hantao Yao, Joost Van de Weijer, Changsheng Xu

The label information of the images offers important semantic knowledge that can be related to previously acquired knowledge of semantic classes.

Continual Learning Knowledge Distillation +4

Order-theoretical fixed point theorems for correspondences and application in game theory

no code implementations26 Jul 2024 Lu Yu

For an ascending correspondence $F:X\to 2^X$ with chain-complete values on a complete lattice $X$, we prove that the set of fixed points is a complete lattice.
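
A schematic LaTeX statement of this result, with notation taken from the abstract (the paper's precise definition of an ascending correspondence may differ):

```latex
\begin{theorem}
Let $X$ be a complete lattice and let $F : X \to 2^X$ be an ascending
correspondence such that $F(x)$ is chain-complete for every $x \in X$.
Then the fixed-point set
\[
  \operatorname{Fix}(F) = \{\, x \in X : x \in F(x) \,\}
\]
is a complete lattice.
\end{theorem}
```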

Generalization of Zhou fixed point theorem

no code implementations25 Jul 2024 Lu Yu

We give two generalizations of the Zhou fixed point theorem.

Nash equilibria of games with generalized complementarities

no code implementations30 Jun 2024 Lu Yu

To generalize complementarities for games, we introduce some conditions weaker than quasisupermodularity and the single crossing property.

Semi-supervised Concept Bottleneck Models

no code implementations27 Jun 2024 Lijie Hu, Tianhao Huang, Huanyi Xie, Chenyang Ren, Zhengyu Hu, Lu Yu, Di Wang

Concept Bottleneck Models (CBMs) have garnered increasing attention due to their ability to provide concept-based explanations for black-box deep learning models while achieving high final prediction accuracy using human-like concepts.

Nash equilibria of quasisupermodular games

no code implementations19 Jun 2024 Lu Yu

We prove three results on the existence and structure of Nash equilibria for quasisupermodular games.

Existence and structure of Nash equilibria for supermodular games

no code implementations13 Jun 2024 Lu Yu

We prove two theorems announced by Topkis concerning the topological description of sublattices.

Log-Concave Sampling on Compact Supports: A Versatile Proximal Framework

no code implementations24 May 2024 Lu Yu

In this paper, we explore sampling from strongly log-concave distributions defined on convex and compact supports.
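
A minimal sketch of a projected Langevin step on a compact convex support, assuming `grad_f` is the gradient of the negative log-density and `proj_K` a Euclidean projection onto the support; this illustrates the sampling setting, not the paper's specific proximal scheme:

```python
import numpy as np

def projected_langevin_step(x, grad_f, proj_K, h, rng):
    """One Euler step of Langevin dynamics followed by projection onto
    the convex compact support K. grad_f is the gradient of the negative
    log-density; proj_K is the Euclidean projection onto K."""
    noise = np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    y = x - h * grad_f(x) + noise
    return proj_K(y)

# Example: standard Gaussian restricted to the unit ball.
rng = np.random.default_rng(0)
proj = lambda y: y / max(1.0, np.linalg.norm(y))
x = np.zeros(2)
for _ in range(1000):
    x = projected_langevin_step(x, lambda z: z, proj, h=1e-2, rng=rng)
```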

SEP: Self-Enhanced Prompt Tuning for Visual-Language Model

1 code implementation24 May 2024 Hantao Yao, Rui Zhang, Lu Yu, Yongdong Zhang, Changsheng Xu

Comprehensive evaluations across various benchmarks and tasks confirm SEP's efficacy in prompt tuning.

Language Modelling

Leveraging Logical Rules in Knowledge Editing: A Cherry on the Top

no code implementations24 May 2024 Keyuan Cheng, Muhammad Asif Ali, Shu Yang, Gang Lin, Yuxuan zhai, Haoyang Fei, Ke Xu, Lu Yu, Lijie Hu, Di Wang

To address these issues, in this paper we propose a novel framework named RULE-KE, i.e., RULE-based Knowledge Editing, which is a cherry on the top for augmenting the performance of all existing MQA methods under KE.

knowledge editing Multi-hop Question Answering +2

Prompt-SAW: Leveraging Relation-Aware Graphs for Textual Prompt Compression

no code implementations30 Mar 2024 Muhammad Asif Ali, ZhengPing Li, Shu Yang, Keyuan Cheng, Yang Cao, Tianhao Huang, Guimin Hu, Weimin Lyu, Lijie Hu, Lu Yu, Di Wang

We also propose GSM8K-aug, i.e., an extended version of the existing GSM8K benchmark for task-agnostic prompts, in order to provide a comprehensive evaluation platform.

GSM8K Relation

Multi-hop Question Answering under Temporal Knowledge Editing

no code implementations30 Mar 2024 Keyuan Cheng, Gang Lin, Haoyang Fei, Yuxuan zhai, Lu Yu, Muhammad Asif Ali, Lijie Hu, Di Wang

Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models.

knowledge editing Multi-hop Question Answering +3

Parallelized Midpoint Randomization for Langevin Monte Carlo

no code implementations22 Feb 2024 Lu Yu, Arnak Dalalyan

We explore the sampling problem within the framework where parallel evaluations of the gradient of the log-density are feasible.
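
A rough sketch of the kind of update this framework enables, where the R gradient evaluations at randomized interior points are independent and could run in parallel; the names and the crude midpoint predictor are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def parallel_midpoint_lmc_step(x, grad_f, h, R, rng):
    """One LMC step that averages gradients at R randomized interior
    points; the R gradient evaluations are independent and could be
    dispatched in parallel. A sketch only, not the paper's exact scheme."""
    u = rng.uniform(size=R)                        # randomized times in (0, 1)
    xi = rng.standard_normal((R,) + x.shape)       # Brownian proxies per point
    # Crude predictor for the trajectory at time u[r] * h.
    y = x - u[:, None] * h * grad_f(x) + np.sqrt(2.0 * u[:, None] * h) * xi
    # Gradients at the R predicted points (vectorized here).
    g = np.stack([grad_f(y[r]) for r in range(R)])
    noise = np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return x - h * g.mean(axis=0) + noise
```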

MONAL: Model Autophagy Analysis for Modeling Human-AI Interactions

no code implementations17 Feb 2024 Shu Yang, Muhammad Asif Ali, Lu Yu, Lijie Hu, Di Wang

The increasing significance of large models and their multi-modal variants in societal information processing has ignited debates on social safety and ethics.

Diversity Ethics

Professional Agents -- Evolving Large Language Models into Autonomous Experts with Human-Level Competencies

no code implementations6 Feb 2024 Zhixuan Chu, Yan Wang, Feng Zhu, Lu Yu, Longfei Li, Jinjie Gu

The advent of large language models (LLMs) such as ChatGPT, PaLM, and GPT-4 has catalyzed remarkable advances in natural language processing, demonstrating human-like language fluency and reasoning capacities.

Position

Hierarchical Prompts for Rehearsal-free Continual Learning

no code implementations21 Jan 2024 Yukun Zuo, Hantao Yao, Lu Yu, Liansheng Zhuang, Changsheng Xu

Nonetheless, these learnable prompts tend to concentrate on the discriminative knowledge of the current task while ignoring knowledge from past tasks, so the learnable prompts still suffer from catastrophic forgetting.

Continual Learning

Edit-DiffNeRF: Editing 3D Neural Radiance Fields using 2D Diffusion Model

no code implementations15 Jun 2023 Lu Yu, Wei Xiang, Kang Han

To address this challenge, we propose the Edit-DiffNeRF framework, which is composed of a frozen diffusion model, a proposed delta module to edit the latent semantic space of the diffusion model, and a NeRF.

3D Generation Text to 3D
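
As an illustration of the delta-module idea (a learned, text-conditioned residual edit applied to latents of a frozen diffusion model), here is a hypothetical minimal sketch; the module name and architecture are assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class DeltaModule(nn.Module):
    """Hypothetical residual editor for the latent space of a frozen
    diffusion model: latents are shifted by a learned, text-conditioned
    offset while the diffusion weights stay untouched."""
    def __init__(self, latent_dim: int, text_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + text_dim, latent_dim),
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, z: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Predict an edit direction and apply it as a residual.
        delta = self.mlp(torch.cat([z, text_emb], dim=-1))
        return z + delta
```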

Langevin Monte Carlo for strongly log-concave distributions: Randomized midpoint revisited

no code implementations14 Jun 2023 Lu Yu, Avetik Karagulyan, Arnak Dalalyan

To provide a more thorough explanation of our method for establishing the computable upper bound, we conduct an analysis of the midpoint discretization for the vanilla Langevin process.
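
For context, one common form of the randomized midpoint discretization of the Langevin diffusion is shown below; the paper's exact scheme and the coupling of the Gaussian terms may differ:

```latex
% One step of the randomized midpoint discretization (step size h,
% U ~ Uniform[0,1]); the Gaussian terms are increments of the same
% Brownian motion W driving the Langevin diffusion
% d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{2}\,dW_t.
\theta_{k+U} = \theta_k - Uh\,\nabla f(\theta_k)
               + \sqrt{2}\,\bigl(W_{kh+Uh} - W_{kh}\bigr),
\qquad
\theta_{k+1} = \theta_k - h\,\nabla f(\theta_{k+U})
               + \sqrt{2}\,\bigl(W_{(k+1)h} - W_{kh}\bigr).
```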

Camera-Incremental Object Re-Identification with Identity Knowledge Evolution

1 code implementation25 May 2023 Hantao Yao, Lu Yu, Jifei Luo, Changsheng Xu

In this paper, we propose a novel Identity Knowledge Evolution (IKE) framework for CIOR, consisting of the Identity Knowledge Association (IKA), Identity Knowledge Distillation (IKD), and Identity Knowledge Update (IKU).

Knowledge Distillation Object

Quality-agnostic Image Captioning to Safely Assist People with Vision Impairment

no code implementations28 Apr 2023 Lu Yu, Malvina Nikandrou, Jiali Jin, Verena Rieser

In this paper, we propose a quality-agnostic framework to improve the performance and robustness of image captioning models for visually impaired people.

Data Augmentation Image Captioning

Knowledge Distillation for Efficient Sequences of Training Runs

no code implementations11 Mar 2023 Xingyu Liu, Alex Leonardi, Lu Yu, Chris Gilmer-Hill, Matthew Leavitt, Jonathan Frankle

We find that augmenting future runs with KD from previous runs dramatically reduces the time necessary to train these models, even taking into account the overhead of KD.

Knowledge Distillation
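
The snippet refers to distilling from previous runs. A standard soft-target distillation objective of the kind typically used is sketched below; whether the paper uses exactly this form is not stated in the snippet:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge-distillation objective: cross-entropy on the
    ground-truth labels plus a temperature-scaled KL term matching the
    student's predictions to a previous run's (teacher's) outputs."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # the usual T^2 gradient rescaling
    return alpha * ce + (1.0 - alpha) * kl
```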

X-Pruner: eXplainable Pruning for Vision Transformers

1 code implementation CVPR 2023 Lu Yu, Wei Xiang

Recent studies have proposed pruning transformers in an unexplainable manner, overlooking the relationship between internal units of the model and the target class and thereby leading to inferior performance.

DCMT: A Direct Entire-Space Causal Multi-Task Framework for Post-Click Conversion Estimation

no code implementations13 Feb 2023 Feng Zhu, Mingjie Zhong, Xinxing Yang, Longfei Li, Lu Yu, Tiehua Zhang, Jun Zhou, Chaochao Chen, Fei Wu, Guanfeng Liu, Yan Wang

In recommendation scenarios, there are two long-standing challenges, i.e., selection bias and data sparsity, which lead to a significant drop in prediction accuracy for both Click-Through Rate (CTR) and post-click Conversion Rate (CVR) tasks.

counterfactual Multi-Task Learning +1

SteerNeRF: Accelerating NeRF Rendering via Smooth Viewpoint Trajectory

no code implementations CVPR 2023 Sicheng Li, Hao Li, Yue Wang, Yiyi Liao, Lu Yu

Neural Radiance Fields (NeRF) have demonstrated superior novel view synthesis performance but are slow at rendering.

Novel View Synthesis

Going for GOAL: A Resource for Grounded Football Commentaries

1 code implementation8 Nov 2022 Alessandro Suglia, José Lopes, Emanuele Bastianelli, Andrea Vanzo, Shubham Agarwal, Malvina Nikandrou, Lu Yu, Ioannis Konstas, Verena Rieser

As the course of a game is unpredictable, so are commentaries, which makes them a unique resource to investigate dynamic language grounding.

Moment Retrieval Retrieval

Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering

1 code implementation30 Sep 2022 Malvina Nikandrou, Lu Yu, Alessandro Suglia, Ioannis Konstas, Verena Rieser

We first propose three plausible task formulations and demonstrate their impact on the performance of continual learning algorithms.

Continual Learning Question Answering +1

eX-ViT: A Novel eXplainable Vision Transformer for Weakly Supervised Semantic Segmentation

no code implementations12 Jul 2022 Lu Yu, Wei Xiang, Juan Fang, Yi-Ping Phoebe Chen, Lianhua Chi

To close these crucial gaps, we propose a novel vision transformer dubbed the eXplainable Vision Transformer (eX-ViT), an intrinsically interpretable transformer model that jointly discovers robust interpretable features and performs the prediction.

Attribute Weakly supervised Semantic Segmentation +1

Adversarial Robustness of Visual Dialog

no code implementations6 Jul 2022 Lu Yu, Verena Rieser

This study is the first to investigate the robustness of visually grounded dialog models towards textual attacks.

Adversarial Robustness Visual Dialog

Q-LIC: Quantizing Learned Image Compression with Channel Splitting

no code implementations28 May 2022 Heming Sun, Lu Yu, Jiro Katto

Learned image compression (LIC) has reached a coding gain comparable to that of traditional hand-crafted methods such as VVC intra.

Image Compression MS-SSIM +2

MTANet: Multitask-Aware Network With Hierarchical Multimodal Fusion for RGB-T Urban Scene Understanding

no code implementations journal 2022 WuJie Zhou, Shaohua Dong, Jingsheng Lei, Lu Yu

To improve the fusion of multimodal features and the segmentation accuracy, we propose a multitask-aware network (MTANet) with hierarchical multimodal fusion (multiscale fusion strategy) for RGB-T urban scene understanding.

Autonomous Vehicles Scene Understanding +2

Continually Learning Self-Supervised Representations with Projected Functional Regularization

1 code implementation30 Dec 2021 Alex Gomez-Villa, Bartlomiej Twardowski, Lu Yu, Andrew D. Bagdanov, Joost Van de Weijer

Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches.

Continual Learning Incremental Learning +1

End-to-End Learned Image Compression with Quantized Weights and Activations

no code implementations17 Nov 2021 Heming Sun, Lu Yu, Jiro Katto

To the best of our knowledge, this is the first work to give a complete analysis of the coding gain and the memory cost for a quantized LIC network, which validates the feasibility of a hardware implementation.

Image Compression MS-SSIM +2

A Simple and Debiased Sampling Method for Personalized Ranking

no code implementations29 Sep 2021 Lu Yu, Shichao Pei, Chuxu Zhang, Xiangliang Zhang

Pairwise ranking models have been widely used to address various problems, such as recommendation.

Distilling GANs with Style-Mixed Triplets for X2I Translation with Limited Data

no code implementations ICLR 2022 Yaxing Wang, Joost Van de Weijer, Lu Yu, Shangling Jui

Therefore, we investigate knowledge distillation to transfer knowledge from a high-quality unconditional generative model (e.g., StyleGAN) to conditional synthetic-image-generation modules in a variety of systems.

Image Generation Knowledge Distillation +2

Fully Neural Network Mode Based Intra Prediction of Variable Block Size

1 code implementation5 Aug 2021 Heming Sun, Lu Yu, Jiro Katto

As far as we know, this is the first work to explore a fully NM-based framework for intra prediction, and we reach a better coding gain with lower complexity compared with previous work.

regression

Subjective evaluation of traditional and learning-based image coding methods

no code implementations28 Jul 2021 Zhigao Fang, JiaQi Zhang, Lu Yu, Yin Zhao

Additionally, we use several typical and frequently used objective quality metrics to evaluate the coding methods in the experiment for comparison.

DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs

1 code implementation NeurIPS 2020 Yaxing Wang, Lu Yu, Joost Van de Weijer

To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs.

Attribute Image-to-Image Translation +2

Compressing Facial Makeup Transfer Networks by Collaborative Distillation and Kernel Decomposition

1 code implementation16 Sep 2020 Bianjiang Yang, Zi Hui, Haoji Hu, Xinyi Hu, Lu Yu

Although the facial makeup transfer network has achieved high-quality performance in generating perceptually pleasing makeup images, its capability is still restricted by the massive computation and storage of the network architecture.

Decoder Facial Makeup Transfer

SAIL: Self-Augmented Graph Contrastive Learning

no code implementations2 Sep 2020 Lu Yu, Shichao Pei, Lizhong Ding, Jun Zhou, Longfei Li, Chuxu Zhang, Xiangliang Zhang

This paper studies learning node representations with graph neural networks (GNNs) in the unsupervised scenario.

Contrastive Learning Knowledge Distillation +1

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias

no code implementations NeurIPS 2021 Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu

Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning.

Semantic Drift Compensation for Class-Incremental Learning

2 code implementations CVPR 2020 Lu Yu, Bartłomiej Twardowski, Xialei Liu, Luis Herranz, Kai Wang, Yongmei Cheng, Shangling Jui, Joost Van de Weijer

The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes.

class-incremental learning Class Incremental Learning +2
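
The snippet's point about augmenting the classification layer for newly added classes can be sketched as follows; `expand_classifier` is a hypothetical helper illustrating the class-incremental setting, not the SDC method itself:

```python
import torch
import torch.nn as nn

def expand_classifier(old_head: nn.Linear, num_new: int) -> nn.Linear:
    """Grow a classification head to make room for newly added classes,
    copying the old weights so previous classes keep their logits."""
    in_f, out_f = old_head.in_features, old_head.out_features
    new_head = nn.Linear(in_f, out_f + num_new)
    with torch.no_grad():
        new_head.weight[:out_f] = old_head.weight
        new_head.bias[:out_f] = old_head.bias
    return new_head
```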

False Discovery Rates in Biological Networks

1 code implementation8 Jul 2019 Lu Yu, Tobias Kaufmann, Johannes Lederer

The increasing availability of data has generated unprecedented prospects for network analyses in many biological fields, such as neuroscience (e.g., brain networks), genomics (e.g., gene-gene interaction networks), and ecology (e.g., species interaction networks).

Methodology Quantitative Methods Applications

Three Dimensional Convolutional Neural Network Pruning with Regularization-Based Method

no code implementations NIPS Workshop CDNNRIA 2018 Yuxin Zhang, Huan Wang, Yang Luo, Lu Yu, Haoji Hu, Hangguan Shan, Tony Q. S. Quek

Despite enjoying extensive applications in video analysis, three-dimensional convolutional neural networks (3D CNNs) are restricted by their massive computation and storage consumption.

Model Compression Network Pruning

Weakly Supervised Domain-Specific Color Naming Based on Attention

1 code implementation11 May 2018 Lu Yu, Yongmei Cheng, Joost Van de Weijer

The attention branch is used to modulate the pixel-wise color naming predictions of the network.

General Classification
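
A minimal sketch of attention-modulated color naming, assuming per-pixel color logits and a single-channel spatial attention map; the shapes and pooling are illustrative assumptions, not the paper's exact design:

```python
import torch

def modulated_color_naming(color_logits: torch.Tensor,
                           attention: torch.Tensor) -> torch.Tensor:
    """Weight pixel-wise color-name predictions by a spatial attention
    map, then pool to an image-level prediction. Assumed shapes:
    color_logits (B, C, H, W), attention (B, 1, H, W)."""
    probs = color_logits.softmax(dim=1)   # per-pixel color-name distribution
    weighted = probs * attention          # modulate predictions by attention
    return weighted.sum(dim=(2, 3)) / attention.sum(dim=(2, 3)).clamp(min=1e-8)
```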

Oracle Inequalities for High-dimensional Prediction

no code implementations1 Aug 2016 Johannes Lederer, Lu Yu, Irina Gaynanova

The abundance of high-dimensional data in the modern sciences has generated tremendous interest in penalized estimators such as the lasso, scaled lasso, square-root lasso, elastic net, and many others.

Vocal Bursts Intensity Prediction
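
For reference, the lasso, the prototypical penalized estimator named above, solves the following program (the paper's normalization of the tuning parameter may differ):

```latex
% The lasso with tuning parameter \lambda > 0:
\hat{\beta}_{\mathrm{lasso}}
  \in \arg\min_{\beta \in \mathbb{R}^p}
     \left\{ \frac{1}{n}\,\lVert y - X\beta \rVert_2^2
             + 2\lambda \lVert \beta \rVert_1 \right\}.
```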
