Search Results for author: Dongdong Wang

Found 18 papers, 6 papers with code

Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models

no code implementations23 Apr 2024 Jingyao Xu, Yuetong Lu, Yandong Li, Siyang Lu, Dongdong Wang, Xiang Wei

Diffusion models (DMs) have ushered in a new era of generative modeling and offer more opportunities for efficiently generating high-quality, realistic data samples.

Enhancing Traffic Safety with Parallel Dense Video Captioning for End-to-End Event Analysis

no code implementations12 Apr 2024 Maged Shoman, Dongdong Wang, Armstrong Aboah, Mohamed Abdel-Aty

Our solution mainly focuses on the following points: 1) To solve dense video captioning, we leverage the framework of dense video captioning with parallel decoding (PDVC) to model visual-language sequences and generate dense captions, chapter by chapter, for each video.

Dense Video Captioning Transfer Learning +1

The Causal Impact of Credit Lines on Spending Distributions

1 code implementation16 Dec 2023 Yijun Li, Cheuk Hang Leung, Xiangqian Sun, Chaoqun Wang, Yiyan Huang, Xing Yan, Qi Wu, Dongdong Wang, Zhixiang Huang

Consumer credit services offered by e-commerce platforms provide customers with convenient loan access during shopping and have the potential to stimulate sales.

UTBoost: A Tree-boosting based System for Uplift Modeling

1 code implementation5 Dec 2023 Junjie Gao, Xiangyu Zheng, Dongdong Wang, Zhixiang Huang, Bangqi Zheng, Kai Yang

Uplift modeling refers to the set of machine learning techniques that a manager may use to estimate customer uplift, that is, the net effect of an action on some customer outcome.
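
To make the quantity concrete, here is a minimal two-model (T-learner) uplift sketch; it illustrates what "customer uplift" means, not the UTBoost tree-boosting system itself, and the data and models are hypothetical placeholders.

```python
# Minimal two-model (T-learner) uplift sketch -- illustrates the quantity
# uplift modeling estimates, not the paper's UTBoost system itself.
# Data, features, and models here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # customer features
t = rng.integers(0, 2, size=1000)        # 1 = received the action (e.g., a coupon)
y = (X[:, 0] + 0.5 * t + rng.normal(size=1000) > 0).astype(int)  # outcome

# Fit one outcome model per treatment arm.
m1 = GradientBoostingClassifier().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingClassifier().fit(X[t == 0], y[t == 0])

# Uplift = predicted outcome with the action minus without it.
uplift = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
print("mean estimated uplift:", uplift.mean())
```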

Ensemble Learning

DeLELSTM: Decomposition-based Linear Explainable LSTM to Capture Instantaneous and Long-term Effects in Time Series

no code implementations26 Aug 2023 Chaoqun Wang, Yijun Li, Xiangqian Sun, Qi Wu, Dongdong Wang, Zhixiang Huang

The tensorized LSTM assigns each variable a unique hidden state, making up a matrix $\mathbf{h}_t$, whereas the standard LSTM models all the variables with a shared hidden state $\mathbf{H}_t$.
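
A shape-level sketch of that contrast, using a plain PyTorch LSTM cell; this is not the DeLELSTM implementation, and the sizes (N variables, d hidden units) are assumptions.

```python
# Shape-level sketch of the contrast above (not the DeLELSTM code).
# Assumed sizes: N input variables, d hidden units per state.
import torch
import torch.nn as nn

N, d = 8, 16
x_t = torch.randn(1, N)                                   # one value per variable at time t

# Standard LSTM: one shared hidden state of size d for all variables.
shared_cell = nn.LSTMCell(input_size=N, hidden_size=d)
h_shared, c_shared = torch.zeros(1, d), torch.zeros(1, d)
h_shared, c_shared = shared_cell(x_t, (h_shared, c_shared))   # shape (1, d)

# "Tensorized" variant: each variable keeps its own hidden state, stacked
# into an (N, d) matrix; naively modeled here by running one cell per variable.
per_var_cell = nn.LSTMCell(input_size=1, hidden_size=d)
H, C = torch.zeros(N, d), torch.zeros(N, d)
H, C = per_var_cell(x_t.view(N, 1), (H, C))                   # matrix of shape (N, d)
print(h_shared.shape, H.shape)
```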

Time Series Time Series Forecasting

TrafficSafetyGPT: Tuning a Pre-trained Large Language Model to a Domain-Specific Expert in Transportation Safety

1 code implementation28 Jul 2023 Ou Zheng, Mohamed Abdel-Aty, Dongdong Wang, Chenzhu Wang, Shengxuan Ding

Large Language Models (LLMs) have shown remarkable effectiveness in various general-domain natural language processing (NLP) tasks.

Language Modelling +1

Deep into The Domain Shift: Transfer Learning through Dependence Regularization

1 code implementation31 May 2023 Shumin Ma, Zhiri Yuan, Qi Wu, Yiyan Huang, Xixu Hu, Cheuk Hang Leung, Dongdong Wang, Zhixiang Huang

This paper proposes a new domain adaptation approach in which one can measure the differences in the internal dependence structure separately from those in the marginals.
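
As a toy illustration of "marginals versus dependence structure" (a standard copula-style rank transform, not the paper's regularizer), the snippet below maps each column to its empirical CDF so the dependence can be compared even when the marginals differ; the synthetic data are placeholders.

```python
# Toy illustration of separating marginals from dependence structure
# via empirical-CDF (rank) transforms -- not the paper's regularizer.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
source = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=2000)
target = np.exp(source) + rng.normal(scale=0.1, size=source.shape)  # shifted marginals

def to_copula_scale(X):
    """Map each column to (0, 1) by its empirical CDF, removing marginal effects."""
    return np.column_stack([rankdata(col) / (len(col) + 1) for col in X.T])

u_s, u_t = to_copula_scale(source), to_copula_scale(target)
print("marginal means differ:", source.mean(0), target.mean(0))
print("dependence (corr on copula scale) is similar:",
      np.corrcoef(u_s.T)[0, 1], np.corrcoef(u_t.T)[0, 1])
```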

Domain Adaptation Transfer Learning

ChatGPT for Shaping the Future of Dentistry: The Potential of Multi-Modal Large Language Model

no code implementations23 Mar 2023 Hanyao Huang, Ou Zheng, Dongdong Wang, Jiayi Yin, Zijin Wang, Shengxuan Ding, Heng Yin, Chuan Xu, Renjie Yang, Qian Zheng, Bing Shi

Overall, LLMs have the potential to revolutionize dental diagnosis and treatment, which indicates a promising avenue for clinical application and research in dentistry.

Language Modelling Large Language Model

On Calibrating Semantic Segmentation Models: Analyses and An Algorithm

1 code implementation CVPR 2023 Dongdong Wang, Boqing Gong, Liqiang Wang

Then, we study popular existing calibration methods and compare them with selective scaling on semantic segmentation calibration.
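
For context, a minimal sketch of temperature scaling on per-pixel segmentation logits, one of the standard calibration baselines referred to above; it is not the paper's selective-scaling method, and the shapes and data are hypothetical.

```python
# Minimal temperature-scaling sketch on per-pixel segmentation logits --
# a standard calibration baseline, not the paper's selective scaling.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 19, 64, 64)            # (batch, classes, H, W), hypothetical
labels = torch.randint(0, 19, (4, 64, 64))     # ground-truth class per pixel

T = torch.nn.Parameter(torch.ones(1))          # single temperature, fit on a held-out set
opt = torch.optim.LBFGS([T], lr=0.1, max_iter=50)

def nll():
    opt.zero_grad()
    loss = F.cross_entropy(logits / T, labels)  # rescaled logits -> calibrated probabilities
    loss.backward()
    return loss

opt.step(nll)
print("fitted temperature:", T.item())
```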

Image Classification Segmentation +1

Robust Causal Learning for the Estimation of Average Treatment Effects

no code implementations5 Sep 2022 Yiyan Huang, Cheuk Hang Leung, Xing Yan, Qi Wu, Shumin Ma, Zhiri Yuan, Dongdong Wang, Zhixiang Huang

Theoretically, the RCL estimators i) are as consistent and doubly robust as the DML estimators, and ii) can get rid of the error-compounding issue.
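
For readers unfamiliar with the terminology, below is a textbook doubly-robust (AIPW) ATE sketch, the style of estimator the DML/RCL comparison concerns; it omits cross-fitting, uses placeholder nuisance models, and is not the paper's RCL procedure.

```python
# Textbook doubly-robust (AIPW) ATE sketch on synthetic data -- illustrates
# the kind of estimator being compared; not the paper's RCL procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment depends on covariates
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)       # true ATE = 2.0

e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]      # propensity model
m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)   # outcome model, treated
m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)   # outcome model, control

# AIPW: outcome-model difference plus inverse-propensity-weighted residual corrections.
ate = np.mean(m1 - m0 + T * (Y - m1) / e - (1 - T) * (Y - m0) / (1 - e))
print("doubly-robust ATE estimate:", round(ate, 3))
```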

Decision Making

Moderately-Balanced Representation Learning for Treatment Effects with Orthogonality Information

no code implementations5 Sep 2022 Yiyan Huang, Cheuk Hang Leung, Shumin Ma, Qi Wu, Dongdong Wang, Zhixiang Huang

In this paper, we propose a moderately-balanced representation learning (MBRL) framework based on recent covariates balanced representation learning methods and orthogonal machine learning theory.

Learning Theory Multi-Task Learning +2

An Automatic Detection Method Of Cerebral Aneurysms In Time-Of-Flight Magnetic Resonance Angiography Images Based On Attention 3D U-Net

no code implementations26 Oct 2021 Chen Geng, Meng Chen, Ruoyu Di, Dongdong Wang, Liqin Yang, Wei Xia, Yuxin Li, Daoying Geng

Conclusions: Compared with the results of our previous studies and other studies, the method in this paper achieves a very competitive sensitivity with less training data and maintains a low false positive rate. As the only method currently using 3D U-Net for aneurysm detection, it demonstrates the feasibility and superior performance of this network for aneurysm detection, and it also explores the potential of the channel attention mechanism in this task.

Deep Epidemiological Modeling by Black-box Knowledge Distillation: An Accurate Deep Learning Model for COVID-19

no code implementations20 Jan 2021 Dongdong Wang, Shunpu Zhang, Liqiang Wang

Next, we use simulated observation sequences to query the simulation system to retrieve simulated projection sequences as knowledge.
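
A hedged sketch of that querying step: feed observation sequences to a black-box simulation system and collect its projections as (input, target) pairs for a student model. The simulator here (`simulate_projection`) is a hypothetical stand-in, not the paper's simulation system.

```python
# Sketch of the query step described above: collect (observation, projection)
# pairs from a black-box simulator as knowledge for a student model.
# `simulate_projection` is a hypothetical placeholder simulator.
import numpy as np

def simulate_projection(observed: np.ndarray, horizon: int = 14) -> np.ndarray:
    """Placeholder black-box simulator: returns a projected case sequence."""
    growth = 1.0 + 0.02 * np.tanh(observed[-1] - observed[0])
    return observed[-1] * growth ** np.arange(1, horizon + 1)

rng = np.random.default_rng(0)
dataset = []
for _ in range(100):                               # simulated observation sequences
    obs = np.cumsum(rng.poisson(5, size=30)).astype(float)
    proj = simulate_projection(obs)                # query the simulation system
    dataset.append((obs, proj))                    # knowledge for training the student

print(len(dataset), "observation/projection pairs collected")
```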

Knowledge Distillation

The Causal Learning of Retail Delinquency

no code implementations17 Dec 2020 Yiyan Huang, Cheuk Hang Leung, Xing Yan, Qi Wu, Nanbo Peng, Dongdong Wang, Zhixiang Huang

Classical estimators overlook the confounding effects, and hence the estimation error can be substantial.
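
A tiny synthetic illustration of that point (not the paper's estimator or data): ignoring a confounder inflates the naive estimate, while adjusting for it recovers the true effect.

```python
# Synthetic confounding example: naive vs. confounder-adjusted effect estimate.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20000
risk = rng.normal(size=n)                                  # confounder (e.g., risk score)
credit = (risk + rng.normal(size=n) > 0).astype(float)     # treatment depends on risk
outcome = 1.0 * credit + 2.0 * risk + rng.normal(size=n)   # true effect of credit = 1.0

naive = outcome[credit == 1].mean() - outcome[credit == 0].mean()
adjusted = LinearRegression().fit(np.column_stack([credit, risk]), outcome).coef_[0]
print(f"naive: {naive:.2f}  adjusted: {adjusted:.2f}  (true effect: 1.00)")
```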

Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model

1 code implementation CVPR 2020 Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong

The other is that the number of images used for the knowledge distillation should be small; otherwise, it violates our expectation of reducing the dependence on large-scale datasets.
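
A sketch of the mixup-synthesis idea this constraint motivates: a small pool of real images is expanded into many convex mixtures, so only a few real images are needed. The black-box teacher and the active selection step are stubbed out, and the pool is hypothetical.

```python
# Sketch of mixup synthesis for data-efficient black-box distillation:
# expand a small image pool into many virtual query images.
# The teacher model and active selection step are omitted.
import numpy as np

rng = np.random.default_rng(0)
pool = rng.random((20, 32, 32, 3)).astype(np.float32)    # small hypothetical image pool

def synthesize_mixup(images: np.ndarray, n_new: int) -> np.ndarray:
    """Create n_new virtual images as convex combinations of random pairs."""
    i = rng.integers(0, len(images), size=n_new)
    j = rng.integers(0, len(images), size=n_new)
    lam = rng.uniform(0.0, 1.0, size=(n_new, 1, 1, 1)).astype(np.float32)
    return lam * images[i] + (1.0 - lam) * images[j]

virtual = synthesize_mixup(pool, n_new=500)    # candidates to query the black-box teacher
print(pool.shape, "->", virtual.shape)
```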

Active Learning Knowledge Distillation
