Search Results for author: Yan Feng

Found 27 papers, 14 papers with code

Distributionally Robust Graph-based Recommendation System

1 code implementation20 Feb 2024 Bohao Wang, Jiawei Chen, Changdong Li, Sheng Zhou, Qihao Shi, Yang Gao, Yan Feng, Chun Chen, Can Wang

DR-GNN addresses two core challenges: 1) To enable DRO to cater to graph data intertwined with GNN, we reinterpret GNN as a graph smoothing regularizer, thereby facilitating the nuanced application of DRO; 2) Given the typically sparse nature of recommendation data, which might impede robust optimization, we introduce slight perturbations in the training distribution to expand its support.

Recommendation Systems

WildfireGPT: Tailored Large Language Model for Wildfire Analysis

no code implementations12 Feb 2024 Yangxinyu Xie, Tanwi Mallick, Joshua David Bergerson, John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su

The recent advancement of large language models (LLMs) represents a transformational capability at the frontier of artificial intelligence (AI) and machine learning (ML).

Language Modelling Large Language Model

Knowledge Translation: A New Pathway for Model Compression

1 code implementation11 Jan 2024 Wujie Sun, Defang Chen, Jiawei Chen, Yan Feng, Chun Chen, Can Wang

Deep learning has witnessed significant advancements in recent years at the cost of increasing training, inference, and model storage overhead.

Data Augmentation Model Compression +1

CDR: Conservative Doubly Robust Learning for Debiased Recommendation

1 code implementation13 Aug 2023 Zijie Song, Jiawei Chen, Sheng Zhou, Qihao Shi, Yan Feng, Chun Chen, Can Wang

In recommendation systems (RS), user behavior data is observational rather than experimental, resulting in widespread bias in the data.

Imputation Recommendation Systems

A data-driven approach to predict decision point choice during normal and evacuation wayfinding in multi-story buildings

no code implementations7 Aug 2023 Yan Feng, Panchamy Krishnakumari

This paper demonstrates the potential of applying a machine learning algorithm to study pedestrian route choice behavior in complex indoor buildings.

OpenGSL: A Comprehensive Benchmark for Graph Structure Learning

1 code implementation NeurIPS 2023 Zhiyao Zhou, Sheng Zhou, Bochao Mao, Xuanyi Zhou, Jiawei Chen, Qiaoyu Tan, Daochen Zha, Yan Feng, Chun Chen, Can Wang

Moreover, we observe that the learned graph structure demonstrates a strong generalization ability across different GNN models, despite the high computational and space consumption.

Graph structure learning Representation Learning

Generalizable Black-Box Adversarial Attack with Meta Learning

1 code implementation1 Jan 2023 Fei Yin, Yong Zhang, Baoyuan Wu, Yan Feng, Jingyi Zhang, Yanbo Fan, Yujiu Yang

In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget.

Adversarial Attack Meta-Learning
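The query-feedback setting described above can be illustrated with a minimal random-search attack loop against a stand-in score function. Everything here (the score function, step size, and budget) is illustrative and is not the paper's meta-learning method:

```python
import numpy as np

def attack_score(x):
    """Stand-in for the query feedback returned by a black-box model:
    higher means the input is closer to being misclassified."""
    return -np.sum((x - 1.0) ** 2)

def random_search_attack(x0, eps=0.5, step=0.05, budget=200, seed=0):
    """Query-limited random-search attack: propose a random perturbation
    inside the eps-ball, keep it only if the queried score improves."""
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x0)
    best = attack_score(x0 + delta)
    for _ in range(budget):  # one model query per iteration
        cand = np.clip(delta + rng.uniform(-step, step, x0.shape), -eps, eps)
        s = attack_score(x0 + cand)
        if s > best:
            delta, best = cand, s
    return x0 + delta, best

x0 = np.zeros(4)
x_adv, score = random_search_attack(x0)
print(score)  # no worse than the starting score, by construction
```

The accept-if-better loop is the simplest way to respect a query budget; meta-learning approaches like the paper's aim to spend those queries more efficiently.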

Robust Sequence Networked Submodular Maximization

no code implementations28 Dec 2022 Qihao Shi, Bingyang Fu, Can Wang, Jiawei Chen, Sheng Zhou, Yan Feng, Chun Chen

The approximation ratio of the algorithm depends both on the number of the removed elements and the network topology.

Link Prediction

Statistical treatment of convolutional neural network super-resolution of inland surface wind for subgrid-scale variability quantification

1 code implementation30 Nov 2022 Daniel Getter, Julie Bessac, Johann Rudi, Yan Feng

For each downscaling factor, we consider three CNN configurations that generate super-resolved predictions of fine-scale wind speed, taking one to three input fields: coarse wind speed, fine-scale topography, and diurnal cycle.


Accelerating Diffusion Sampling with Classifier-based Feature Distillation

1 code implementation22 Nov 2022 Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun Chen

Instead of aligning output images, we distill teacher's sharpened feature distribution into the student with a dataset-independent classifier, making the student focus on those important features to improve performance.

Multi-Scale Architectures Matter: On the Adversarial Robustness of Flow-based Lossless Compression

no code implementations26 Aug 2022 Yi-chong Xia, Bin Chen, Yan Feng, Tian-shuo Ge

As a probabilistic modeling technique, flow-based models have demonstrated remarkable potential in the field of lossless compression (IDF, IDF++, LBB, iVPF, iFlow).

Adversarial Robustness Density Estimation

sqSGD: Locally Private and Communication Efficient Federated Learning

no code implementations21 Jun 2022 Yan Feng, Tao Xiong, Ruofan Wu, LingJuan Lv, Leilei Shi

In addition, at a fixed privacy and communication level, sqSGD significantly outperforms various baseline algorithms.

Federated Learning Privacy Preserving +1
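Gradient quantization is the standard tool for trading accuracy against communication in this setting; it can be sketched as an unbiased stochastic quantizer (the level grid and names below are illustrative, not sqSGD's exact scheme):

```python
import numpy as np

def stochastic_quantize(g, bits=2, seed=0):
    """Unbiased stochastic quantization of a gradient vector onto
    2**bits levels spanning [min(g), max(g)]: each entry rounds up
    with probability equal to its fractional position, so the
    quantized value equals the original in expectation."""
    rng = np.random.default_rng(seed)
    lo, hi = g.min(), g.max()
    if hi == lo:                               # constant vector: nothing to quantize
        return g.copy()
    levels = 2 ** bits - 1
    scaled = (g - lo) / (hi - lo) * levels     # map entries to [0, levels]
    floor = np.floor(scaled)
    prob = scaled - floor                      # probability of rounding up
    q = floor + (rng.random(g.shape) < prob)
    return lo + q / levels * (hi - lo)         # dequantized values

g = np.array([-0.8, -0.1, 0.2, 0.9])
print(stochastic_quantize(g, bits=2))
```

Each coordinate then needs only `bits` bits plus the two endpoints per vector, which is where the communication savings come from.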

Improving Knowledge Graph Embedding via Iterative Self-Semantic Knowledge Distillation

no code implementations7 Jun 2022 Zhehui Zhou, Defang Chen, Can Wang, Yan Feng, Chun Chen

Iteratively incorporating and accumulating semantic information across iterations makes the low-dimensional model more expressive, yielding better link prediction in KGs.

Knowledge Distillation Knowledge Graph Embedding +2

Knowledge Distillation with the Reused Teacher Classifier

1 code implementation CVPR 2022 Defang Chen, Jian-Ping Mei, Hailin Zhang, Can Wang, Yan Feng, Chun Chen

Knowledge distillation aims to compress a powerful yet cumbersome teacher model into a lightweight student model without much sacrifice of performance.

Knowledge Distillation
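The generic objective the abstract refers to, matching a student to temperature-softened teacher outputs, can be sketched in NumPy. This is the standard soft-target KD loss, not the paper's reused-classifier variant:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, T)  # softened teacher targets
    q = softmax(student_logits, T)  # softened student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return T * T * np.mean(kl)

# Toy batch: 2 examples, 3 classes.
teacher = np.array([[5.0, 1.0, -2.0], [0.5, 4.0, 0.0]])
student = np.array([[2.0, 1.5, -1.0], [0.0, 3.0, 0.5]])
print(kd_loss(student, teacher))  # non-negative; zero only if the distributions match
```

In practice this term is combined with the usual cross-entropy on ground-truth labels; the temperature `T` controls how much of the teacher's "dark knowledge" about non-target classes is exposed.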

Online Adversarial Distillation for Graph Neural Networks

no code implementations28 Dec 2021 Can Wang, Zhe Wang, Defang Chen, Sheng Zhou, Yan Feng, Chun Chen

However, its effect on graph neural networks is less than satisfactory, since the graph topology and node attributes are likely to change dynamically, in which case a static teacher model is insufficient to guide student training.

Knowledge Distillation

Spectral Variability Augmented Sparse Unmixing of Hyperspectral Images

no code implementations19 Oct 2021 Ge Zhang, Shaohui Mei, Mingyang Ma, Yan Feng, Qian Du

Spectral unmixing (SU) expresses the mixed pixels in hyperspectral images as products of endmembers and abundances, and has been widely used in hyperspectral imagery analysis.

Spectral Reconstruction

Practical Locally Private Federated Learning with Communication Efficiency

no code implementations1 Jan 2021 Yan Feng, Tao Xiong, Ruofan Wu, Yuan Qi

We also initialize a discussion about the role of quantization and perturbation in FL algorithm design with privacy and communication constraints.

Federated Learning Privacy Preserving +1

Cross-Layer Distillation with Semantic Calibration

2 code implementations6 Dec 2020 Defang Chen, Jian-Ping Mei, Yuan Zhang, Can Wang, Yan Feng, Chun Chen

Knowledge distillation is a technique to enhance the generalization ability of a student model by exploiting outputs from a teacher model.

Knowledge Distillation Transfer Learning

SamWalker++: recommendation with informative sampling strategy

1 code implementation16 Nov 2020 Can Wang, Jiawei Chen, Sheng Zhou, Qihao Shi, Yan Feng, Chun Chen

However, the social network information may not be available in many recommender systems, which hinders application of SamWalker.

Recommendation Systems

CoSam: An Efficient Collaborative Adaptive Sampler for Recommendation

no code implementations16 Nov 2020 Jiawei Chen, Chengquan Jiang, Can Wang, Sheng Zhou, Yan Feng, Chun Chen, Martin Ester, Xiangnan He

To deal with these problems, we propose an efficient and effective collaborative sampling method CoSam, which consists of: (1) a collaborative sampler model that explicitly leverages user-item interaction information in sampling probability and exhibits good properties of normalization, adaption, interaction information awareness, and sampling efficiency; and (2) an integrated sampler-recommender framework, leveraging the sampler model in prediction to offset the bias caused by uneven sampling.

Recommendation Systems

Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution

1 code implementation CVPR 2022 Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia

This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information such as model parameters or the training datasets are unknown.

Adversarial Attack

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separate risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.

Adversarial Defense Adversarial Robustness
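The joint objective can be sketched on a toy linear model, approximating the neighborhood risk by the worst loss over random points in a small ball around the benign example. This is a simplified surrogate; the paper's actual $R_{rob}$ and model differ:

```python
import numpy as np

def model_loss(w, x, y):
    """Logistic loss of a linear classifier on a single example."""
    z = y * (w @ x)
    return np.log1p(np.exp(-z))

def robust_training_loss(w, x, y, eps=0.1, n_samples=8, lam=1.0, seed=0):
    """R_stand on the benign example plus lam * R_rob, where R_rob is
    approximated by the worst loss over random points in an eps-ball."""
    rng = np.random.default_rng(seed)
    r_stand = model_loss(w, x, y)                 # risk on the benign example
    perturbed = [x + rng.uniform(-eps, eps, size=x.shape)
                 for _ in range(n_samples)]       # samples from the neighborhood
    r_rob = max(model_loss(w, xp, y) for xp in perturbed)
    return r_stand + lam * r_rob

w = np.array([1.0, -0.5])
x = np.array([0.3, 0.8])
print(robust_training_loss(w, x, y=1.0))
```

The hyperparameter `lam` trades off standard accuracy against robustness; stronger inner maximization (e.g. PGD instead of random sampling) tightens the neighborhood risk estimate.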

Fast Adaptively Weighted Matrix Factorization for Recommendation with Implicit Feedback

no code implementations4 Mar 2020 Jiawei Chen, Can Wang, Sheng Zhou, Qihao Shi, Jingbang Chen, Yan Feng, Chun Chen

A popular and effective approach for implicit recommendation is to treat unobserved data as negative but downweight their confidence.
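The "negative but downweighted" strategy corresponds to a confidence-weighted matrix-factorization objective, sketched below with classic fixed confidence weights for illustration; the paper's contribution is fast adaptive weighting, which this sketch does not implement:

```python
import numpy as np

def weighted_mf_loss(R, U, V, alpha=40.0, reg=0.01):
    """Confidence-weighted MF loss for implicit feedback: unobserved
    entries (r=0) are treated as negatives with low confidence, while
    observed entries (r=1) get confidence 1 + alpha."""
    C = 1.0 + alpha * R            # per-entry confidence weights
    pred = U @ V.T                 # predicted preference scores
    sq_err = C * (R - pred) ** 2   # confidence-weighted squared error
    return sq_err.sum() + reg * ((U ** 2).sum() + (V ** 2).sum())

rng = np.random.default_rng(0)
R = (rng.random((4, 5)) < 0.3).astype(float)  # binary implicit-feedback matrix
U = rng.normal(scale=0.1, size=(4, 2))        # user latent factors
V = rng.normal(scale=0.1, size=(5, 2))        # item latent factors
print(weighted_mf_loss(R, U, V))
```

Because every user-item pair contributes a term, this loss is typically minimized with alternating least squares rather than sampled SGD.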

Adversarial Attack on Deep Product Quantization Network for Image Retrieval

no code implementations26 Feb 2020 Yan Feng, Bin Chen, Tao Dai, Shu-Tao Xia

Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks due to its efficiency of encoding high-dimensional visual features especially when dealing with large-scale datasets.

Adversarial Attack Image Retrieval +2

An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning

1 code implementation23 Feb 2020 Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu

However, the strong defence ability and high learning accuracy of these schemes cannot be ensured at the same time, which will impede the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and strong privacy guarantee).

Federated Learning

Online Knowledge Distillation with Diverse Peers

2 code implementations1 Dec 2019 Defang Chen, Jian-Ping Mei, Can Wang, Yan Feng, Chun Chen

The second-level distillation is performed to transfer the knowledge in the ensemble of auxiliary peers further to the group leader, i.e., the model used for inference.

Knowledge Distillation Transfer Learning
