Search Results for author: Jiawei Du

Found 23 papers, 16 papers with code

LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena

1 code implementation · 25 Feb 2025 · Tianmi Ma, Jiawei Du, Wenxin Huang, Wenjie Wang, Liang Xie, Xian Zhong, Joey Tianyi Zhou

Recent advancements in large language models (LLMs) have significantly improved performance in natural language processing tasks.

The Evolution of Dataset Distillation: Toward Scalable and Generalizable Solutions

no code implementations · 8 Feb 2025 · Ping Liu, Jiawei Du

Dataset distillation, which condenses large-scale datasets into compact synthetic representations, has emerged as a critical solution for training modern deep learning models efficiently.

Dataset Distillation · Survey
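The core idea behind dataset distillation, as summarized in the abstract above, can be sketched with a toy example. This is a hedged illustration only — a distribution-matching simplification on synthetic Gaussian data, not the method of any paper listed here: each class is condensed into a single synthetic sample (its feature mean), and a nearest-synthetic-sample classifier is then evaluated on the real data.

```python
import numpy as np

# Toy sketch of dataset distillation (illustrative only): condense each
# class of a 2-class Gaussian dataset into ONE synthetic sample by
# matching the class feature mean, then classify real points by their
# nearest synthetic sample.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=-2.0, size=(500, 10))   # class 0: 500 real samples
X1 = rng.normal(loc=+2.0, size=(500, 10))   # class 1: 500 real samples

# The "distilled dataset": two synthetic samples for 1000 real ones
syn = np.stack([X0.mean(axis=0), X1.mean(axis=0)])

X = np.concatenate([X0, X1])
y = np.array([0] * 500 + [1] * 500)
# Predict the class of the nearest synthetic sample
pred = np.linalg.norm(X[:, None, :] - syn[None], axis=2).argmin(axis=1)
acc = (pred == y).mean()
print(acc)
```

Because the classes are well separated, the two synthetic samples suffice for near-perfect accuracy; the surveyed methods target the much harder regime of natural images and deep models.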

MedCoT: Medical Chain of Thought via Hierarchical Expert

1 code implementation · 18 Dec 2024 · Jiaxiang Liu, YuAn Wang, Jiawei Du, Joey Tianyi Zhou, Zuozhu Liu

Artificial intelligence has advanced in Medical Visual Question Answering (Med-VQA), but prevalent research tends to focus on the accuracy of the answers, often overlooking the reasoning paths and interpretability, which are crucial in clinical settings.

Diagnostic · Medical Visual Question Answering +2

Diversity-Driven Synthesis: Enhancing Dataset Distillation through Directed Weight Adjustment

1 code implementation · 26 Sep 2024 · Jiawei Du, Xin Zhang, Juncheng Hu, Wenxin Huang, Joey Tianyi Zhou

Specifically, we introduce a novel method that employs dynamic and directed weight adjustment techniques to modulate the synthesis process, thereby maximizing the representativeness and diversity of each synthetic instance.

Dataset Distillation · Diversity

Long-horizon Embodied Planning with Implicit Logical Inference and Hallucination Mitigation

no code implementations · 24 Sep 2024 · Siyuan Liu, Jiawei Du, Sicheng Xiang, Zibo Wang, Dingsheng Luo

When constructing the dataset, we incorporated implicit logical relationships, enabling the model to learn them and dispel hallucinations.

Diversity · Hallucination +2

Codec-SUPERB @ SLT 2024: A lightweight benchmark for neural audio codec models

1 code implementation · 21 Sep 2024 · Haibin Wu, Xuanjun Chen, Yi-Cheng Lin, KaiWei Chang, Jiawei Du, Ke-Han Lu, Alexander H. Liu, Ho-Lam Chung, Yuan-Kuei Wu, Dongchao Yang, Songxiang Liu, Yi-Chiao Wu, Xu Tan, James Glass, Shinji Watanabe, Hung-Yi Lee

Neural audio codec models are becoming increasingly important as they serve as tokenizers for audio, enabling efficient transmission or facilitating speech language modeling.

Language Modeling · Language Modelling

ViLReF: An Expert Knowledge Enabled Vision-Language Retinal Foundation Model

1 code implementation · 20 Aug 2024 · Shengzhu Yang, Jiawei Du, Jia Guo, Weihang Zhang, Hanruo Liu, Huiqi Li, Ningli Wang

The experimental results demonstrate the powerful zero-shot and transfer learning capabilities of ViLReF, verifying the effectiveness of our pre-training strategy.

Diagnostic · Transfer Learning

Breaking Class Barriers: Efficient Dataset Distillation via Inter-Class Feature Compensator

no code implementations · 13 Aug 2024 · Xin Zhang, Jiawei Du, Ping Liu, Joey Tianyi Zhou

This leads to inefficient use of the distillation budget and neglect of inter-class feature distributions, which ultimately limits both effectiveness and efficiency, as demonstrated in our analysis.

Dataset Distillation

RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports

1 code implementation · 23 May 2024 · Jiawei Du, Jia Guo, Weihang Zhang, Shengzhu Yang, Hanruo Liu, Huiqi Li, Ningli Wang

Vision-language foundation models are increasingly investigated in computer vision and natural language processing, yet their exploration in ophthalmology and broader medical applications remains limited.

Diagnostic · Multi-Label Classification +1

DD-RobustBench: An Adversarial Robustness Benchmark for Dataset Distillation

1 code implementation · 20 Mar 2024 · Yifan Wu, Jiawei Du, Ping Liu, Yuewei Lin, Wei Xu, Wenqing Cheng

Dataset distillation is an advanced technique aimed at compressing datasets into significantly smaller counterparts while preserving strong training performance.

Adversarial Attack · Adversarial Robustness +1

Deep Reinforcement Learning for Quantitative Trading

no code implementations · 25 Dec 2023 · Maochun Xu, Zixun Lan, Zheng Tao, Jiawei Du, Zongao Ye

By combining deep reinforcement learning (DRL) with imitation learning methods, we improve the proficiency of our model.

Deep Reinforcement Learning · reinforcement-learning

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

3 code implementations · CVPR 2023 · Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li

To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.

Dataset Distillation · Dataset Distillation - 1IPC +1

Sharpness-Aware Training for Free

1 code implementation · 27 May 2022 · Jiawei Du, Daquan Zhou, Jiashi Feng, Vincent Y. F. Tan, Joey Tianyi Zhou

Intuitively, SAF achieves this by avoiding sudden drops in the loss in the sharp local minima throughout the trajectory of the updates of the weights.

Efficient Sharpness-aware Minimization for Improved Training of Neural Networks

1 code implementation · ICLR 2022 · Jiawei Du, Hanshu Yan, Jiashi Feng, Joey Tianyi Zhou, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan

Recently, the relation between the sharpness of the loss landscape and the generalization error was established by Foret et al. (2020), in which the Sharpness Aware Minimizer (SAM) was proposed to mitigate generalization degradation.
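The two-step rule that SAM-style methods use can be sketched as follows. This is a hedged NumPy illustration on a toy quadratic loss, not the authors' implementation: first perturb the weights toward the approximate worst case within an L2 ball of radius rho, then take the descent step using the gradient at the perturbed point.

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)    # toy quadratic loss

def grad(w):
    return w                       # gradient of the toy loss

w = np.array([3.0, -4.0])
rho, lr = 0.05, 0.1
for _ in range(50):
    g = grad(w)
    # Step 1: ascend to the approximate sharpest point in the rho-ball
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient evaluated at w + eps
    w = w - lr * grad(w + eps)
print(loss(w))
```

On this convex toy problem the update simply converges; the extra ascent step matters on the non-convex loss landscapes of deep networks, where it biases training toward flat minima.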

A Research on Cross-sectional Return Dispersion and Volatility of US Stock Market during COVID-19

no code implementations · 6 Jul 2020 · Jiawei Du

We also found that the epidemic had a significant negative impact on the returns of the energy sector, and finally we provide our suggestions to investors.

On Robustness of Neural Ordinary Differential Equations

2 code implementations · ICLR 2020 · Hanshu Yan, Jiawei Du, Vincent Y. F. Tan, Jiashi Feng

We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting.

Adversarial Attack
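The non-intersecting-curves property mentioned in the abstract above can be seen in a toy 1-D example. This is a hedged illustration, not the paper's model: two Euler-integrated curves of dz/dt = -z started from different initial values never cross, so their ordering is preserved for all time.

```python
import numpy as np

def flow(z0, steps=1000, dt=0.01):
    """Integrate dz/dt = -z from z0 with explicit Euler steps."""
    z = z0
    traj = [z]
    for _ in range(steps):
        z = z + dt * (-z)
        traj.append(z)
    return np.array(traj)

t1 = flow(1.0)   # integral curve from z(0) = 1.0
t2 = flow(1.5)   # integral curve from z(0) = 1.5
# Uniqueness of ODE solutions: the curves never intersect,
# so the larger initial value stays larger at every step
ordering_preserved = np.all(t2 > t1)
print(ordering_preserved)
```

Intuitively, this is the property the paper exploits: a small input perturbation cannot make a continuous-time ODE trajectory jump across a clean trajectory, which constrains how far perturbed representations can drift.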

Query-efficient Meta Attack to Deep Neural Networks

1 code implementation · ICLR 2020 · Jiawei Du, Hu Zhang, Joey Tianyi Zhou, Yi Yang, Jiashi Feng

Black-box attack methods aim to infer suitable attack patterns to targeted DNN models by only using output feedback of the models and the corresponding input queries.

Adversarial Attack · Meta-Learning
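The black-box setting described above can be illustrated with a generic zeroth-order gradient estimate. This is a hedged sketch using central finite differences, not the paper's meta-attack: the attacker recovers gradient information about an unknown model using only input queries and output feedback.

```python
import numpy as np

def model(x):
    # "Black box": the attacker can query outputs but not gradients
    return np.sum(np.sin(x))

def est_grad(f, x, h=1e-5):
    """Estimate the gradient of f at x via central finite differences,
    using two queries per coordinate."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.3, 1.1, -0.7])
g_est = est_grad(model, x)
g_true = np.cos(x)   # known here only to check the estimate
print(np.max(np.abs(g_est - g_true)))
```

This naive estimator costs two queries per input dimension, which is exactly the query burden that query-efficient attack methods aim to reduce.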
