Search Results for author: Yun Cheng

Found 18 papers, 13 papers with code

PCDCNet: A Surrogate Model for Air Quality Forecasting with Physical-Chemical Dynamics and Constraints

2 code implementations • 26 May 2025 • Shuo Wang, Yun Cheng, Qingye Meng, Olga Saukh, Jiang Zhang, Jingfang Fan, YuanTing Zhang, Xingyuan Yuan, Lothar Thiele

Air quality forecasting (AQF) is critical for public health and environmental management, yet remains challenging due to the complex interplay of emissions, meteorology, and chemical transformations.

Deep Learning

FoCTTA: Low-Memory Continual Test-Time Adaptation with Focus

no code implementations • 28 Feb 2025 • Youbing Hu, Yun Cheng, Zimu Zhou, Anqi Lu, Zhiqiang Cao, Zhijun Li

Second, updating all BN layers requires storing the activations of all BN layers for backpropagation, exacerbating the memory demand.

Test-time Adaptation
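The snippet above points at the memory cost of BN-based continual test-time adaptation: updating BN layers means storing their input activations for backpropagation. For reference, below is a minimal PyTorch sketch of the standard BN-only adaptation baseline (in the style of Tent) that this critique applies to; the entropy objective and model here are illustrative assumptions, not FoCTTA's method.

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    """Collect only the affine parameters of BatchNorm layers for adaptation.

    Updating these still requires storing every BN layer's input activations
    for backpropagation, which is the memory cost FoCTTA targets.
    """
    params = []
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
            params += [p for p in (module.weight, module.bias) if p is not None]
    return params

def adapt_step(model, x, optimizer):
    """One test-time adaptation step driven by prediction entropy."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()   # needs the activations of every updated BN layer
    optimizer.step()
    return logits

# Usage (illustrative): adapt only the BN affine parameters of a pretrained model.
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").train()
# optimizer = torch.optim.SGD(collect_bn_params(model), lr=1e-3)
# logits = adapt_step(model, batch, optimizer)
```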

FocusDD: Real-World Scene Infusion for Robust Dataset Distillation

no code implementations • 11 Jan 2025 • Youbing Hu, Yun Cheng, Olga Saukh, Firat Ozdemir, Anqi Lu, Zhiqiang Cao, Zhijun Li

To further improve the generalization of the distilled dataset, each synthesized image is augmented with a downsampled view of the original image.

Dataset Distillation • object-detection +1
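The augmentation described in the snippet can be sketched in a few lines: attaching a downsampled view of the original image to the synthesized one. The exact composition FocusDD uses is not specified in the snippet, so the corner-paste layout below is an assumption for illustration.

```python
from PIL import Image

def augment_with_downsampled_view(synth: Image.Image, original: Image.Image,
                                  scale: float = 0.25) -> Image.Image:
    """Augment a synthesized image with a downsampled view of the original.

    Pasting the low-res view into the bottom-right corner is an illustrative
    choice, not necessarily the paper's composition.
    """
    w, h = synth.size
    thumb = original.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                            Image.BILINEAR)
    out = synth.copy()
    out.paste(thumb, (w - thumb.width, h - thumb.height))
    return out
```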

Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?

1 code implementation • 5 Jan 2025 • Simon Park, Abhishek Panigrahi, Yun Cheng, Dingli Yu, Anirudh Goyal, Sanjeev Arora

We seek strategies for training on the SIMPLE version of the tasks that improve performance on the corresponding HARD task, i.e., S2H generalization.

Image Captioning • Image to text +3

ESP-PCT: Enhanced VR Semantic Performance through Efficient Compression of Temporal and Spatial Redundancies in Point Cloud Transformers

1 code implementation • 2 Sep 2024 • Luoyu Mei, Yun Cheng, Ruofeng Liu, Zhimeng Yin, Wenchao Jiang, Shuai Wang, Wei Gong

Notably, ESP-PCT achieves a remarkable accuracy of 93.2% while simultaneously reducing the computational requirements (FLOPs) by 76.9% and memory usage by 78.2% compared to the existing Point Transformer model.

Multimodal Fusion Interactions: A Study of Human and Automatic Quantification

1 code implementation • 7 Jun 2023 • Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency

In order to perform multimodal fusion of heterogeneous signals, we need to understand their interactions: how each modality individually provides information useful for a task and how this information changes in the presence of other modalities.

counterfactual
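One simple way to operationalize the question in the snippet (note the "counterfactual" tag above) is to compare a fusion model's prediction with a modality present versus replaced by an uninformative baseline. The probe below is a generic illustration under that framing, not the paper's estimator; the two-modality model signature is assumed.

```python
import torch

@torch.no_grad()
def modality_contribution(model, x1, x2, baseline2):
    """Counterfactual-style probe: how much does modality 2 change the output?

    `model(x1, x2)` is assumed to be a fusion model over two modalities;
    `baseline2` is an uninformative stand-in (e.g., zeros or a shuffled batch).
    """
    full = model(x1, x2).softmax(dim=-1)
    ablated = model(x1, baseline2).softmax(dim=-1)
    # Mean absolute shift in predicted probabilities when modality 2 is removed.
    return (full - ablated).abs().mean().item()
```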

Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications

1 code implementation • 7 Jun 2023 • Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Ruslan Salakhutdinov

In many machine learning systems that jointly learn from multiple modalities, a core research question is to understand the nature of multimodal interactions: how modalities combine to provide new task-relevant information that was not present in either alone.

Self-Supervised Learning

CamDiff: Camouflage Image Augmentation via Diffusion Model

1 code implementation • 11 Apr 2023 • Xue-Jing Luo, Shuo Wang, Zongwei Wu, Christos Sakaridis, Yun Cheng, Deng-Ping Fan, Luc van Gool

Specifically, we leverage the latent diffusion model to synthesize salient objects in camouflaged scenes, while using the zero-shot image classification ability of the Contrastive Language-Image Pre-training (CLIP) model to prevent synthesis failures and ensure the synthesized object aligns with the input prompt.

Dataset Generation • Image Augmentation +6
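The generate-then-verify loop described in the snippet can be sketched with off-the-shelf components: a latent diffusion pipeline to synthesize an object, and CLIP zero-shot classification to reject outputs that do not match the prompt. The checkpoints, distractor labels, and acceptance threshold below are illustrative assumptions; CamDiff's actual inpainting setup differs.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def synthesize_verified(prompt, distractors, threshold=0.8):
    """Generate an image and keep it only if CLIP agrees it matches the prompt."""
    image = pipe(prompt).images[0]
    labels = [prompt] + distractors
    inputs = proc(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = clip(**inputs).logits_per_image.softmax(dim=1)[0]
    return image if probs[0].item() >= threshold else None  # reject synthesis failures

# e.g. synthesize_verified("a photo of a rabbit",
#                          ["a photo of grass", "a photo of rocks"])
```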

Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework

1 code implementation • NeurIPS 2023 • Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Nicholas Allen, Randy Auerbach, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency

The recent explosion of interest in multimodal applications has resulted in a wide selection of datasets and methods for representing and integrating information from different modalities.

Model Selection

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning

3 code implementations • 15 Jul 2021 • Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency

In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas.

Representation Learning

AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations

1 code implementation • Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI) 2021 • Jie Zhao, Bojie Li, Wang Nie, Zhen Geng, Renwei Zhang, Xiong Gao, Bin Cheng, Chen Wu, Yun Cheng, Zheng Li, Peng Di, Kun Zhang, Xuefeng Jin

Existing tensor compilers have proven their effectiveness in deploying deep neural networks on general-purpose hardware like CPU and GPU, but optimizing for neural processing units (NPUs) is still challenging due to the heterogeneous compute units and complicated memory hierarchy.

Code Generation • Management +1
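The kind of polyhedral loop transformation AKG automates can be illustrated with a hand-tiled loop nest: tiling restructures the iteration order so each block fits in fast local memory, which is what makes the complicated memory hierarchy mentioned in the snippet manageable. Python/NumPy is used below purely for exposition; a polyhedral compiler derives such schedules automatically for the target hardware.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Matrix multiply with loop tiling: each (tile x tile) block is reused
    while resident in fast memory, the schedule a polyhedral compiler finds."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for p0 in range(0, k, tile):
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, p0:p0+tile] @ B[p0:p0+tile, j0:j0+tile]
                )
    return C

# Sanity check against the untiled product:
# A, B = np.random.rand(96, 64), np.random.rand(64, 80)
# assert np.allclose(matmul_tiled(A, B), A @ B)
```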

Optical manipulation of electronic dimensionality in a quantum material

no code implementations • 21 Jan 2021 • Shaofeng Duan, Yun Cheng, Wei Xia, Yuanyuan Yang, Fengfeng Qi, Tianwei Tang, Yanfeng Guo, Dong Qian, Dao Xiang, Jie Zhang, Wentao Zhang

Exotic phenomena can be realized in quantum materials by confining electronic states into two dimensions.

Strongly Correlated Electrons • Materials Science • Superconductivity

Interpretable and Transferable Models to Understand the Impact of Lockdown Measures on Local Air Quality

1 code implementation • 19 Nov 2020 • Johanna Einsiedler, Yun Cheng, Franz Papst, Olga Saukh

In this work, we estimate pollution reduction over the lockdown period by using measurements from ground air pollution monitoring stations, training a long-term prediction model, and comparing its predictions to measured values over the lockdown month. We show that our models achieve state-of-the-art performance on data from air pollution measurement stations in Switzerland and in China, and estimate up to -15.8% / +34.4% change in NO2 / PM10 in Zurich, and -35.3% / -3.5% and -42.4% / -34.7% change in NO2 / PM2.5 in Beijing and Wuhan, respectively.

Transfer Learning
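The estimation procedure in the snippet — train a long-term model on pre-lockdown data, then compare its lockdown-period predictions to what was actually measured — can be sketched as below. The feature names and the regressor are placeholders; the paper's interpretable, transferable models are more involved.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def lockdown_change(df: pd.DataFrame, lockdown_start: str,
                    features: list, target: str = "NO2") -> float:
    """Estimate the relative change (%) in a pollutant during lockdown.

    Fits a model on pre-lockdown data (weather-like features assumed),
    predicts counterfactual 'business as usual' concentrations for the
    lockdown month, and compares them to the measured values.
    """
    pre = df[df.index < lockdown_start]      # training period
    post = df[df.index >= lockdown_start]    # lockdown month
    model = GradientBoostingRegressor().fit(pre[features], pre[target])
    expected = model.predict(post[features]) # counterfactual, no lockdown
    measured = post[target].to_numpy()
    return 100.0 * (measured.mean() - expected.mean()) / expected.mean()

# e.g. lockdown_change(station_df, "2020-03-16", ["temp", "wind", "humidity"])
```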

Adaptive Loss-aware Quantization for Multi-bit Networks

1 code implementation • CVPR 2020 • Zhongnan Qu, Zimu Zhou, Yun Cheng, Lothar Thiele

We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate inference and reduce storage for deployment on low-resource mobile and embedded platforms.

Quantization
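The multi-bit representation in the snippet approximates a weight tensor as a sum of scaled binary bases, w ≈ Σ_i α_i b_i with b_i ∈ {-1, +1}. The greedy residual decomposition below is the classic baseline construction for such bases; ALQ itself learns the quantization in a loss-aware way rather than greedily, so this sketch only illustrates the representation.

```python
import numpy as np

def greedy_multibit(w: np.ndarray, num_bits: int = 3):
    """Greedy decomposition of weights into scaled binary bases:
    w ~= sum_i alpha_i * b_i, with b_i in {-1, +1}.

    Each step fits the current residual: b is its sign, and
    alpha = mean(|residual|) minimizes ||residual - alpha * b||^2.
    """
    residual = w.astype(np.float64).copy()
    alphas, bases = [], []
    for _ in range(num_bits):
        b = np.where(residual >= 0, 1.0, -1.0)
        alpha = np.abs(residual).mean()
        alphas.append(alpha)
        bases.append(b)
        residual -= alpha * b
    return np.array(alphas), np.stack(bases)

# Reconstruct the quantized weights (any tensor shape):
# alphas, bases = greedy_multibit(w, num_bits=3)
# w_hat = np.einsum("i,i...->...", alphas, bases)
```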
