no code implementations • 12 Mar 2025 • Yong Li, Menglin Liu, Zhen Cui, Yi Ding, Yuan Zong, Wenming Zheng, Shiguang Shan, Cuntai Guan
To achieve feature decoupling, D$^2$CA is trained to disentangle AU and domain factors by assessing the quality of synthesized faces in cross-domain scenarios when either AU or domain attributes are modified.
no code implementations • 12 Mar 2025 • Yong Li, Yi Ren, Xuesong Niu, Yi Ding, Xiu-Shen Wei, Cuntai Guan
To prevent excessive feature dropout, a progressive training strategy is used, allowing for selective exclusion of sensitive features at any model layer.
1 code implementation • 11 Mar 2025 • Meghna Roy Chowdhury, Wei Xuan, Shreyas Sen, Yixue Zhao, Yi Ding
Mental health issues among college students have reached critical levels, significantly impacting academic performance and overall wellbeing.
no code implementations • 1 Mar 2025 • Xinliang Zhou, Chenyu Liu, Zhisheng Chen, Kun Wang, Yi Ding, Ziyu Jia, Qingsong Wen
Brain foundation models (BFMs) have emerged as a transformative paradigm in computational neuroscience, offering a revolutionary framework for processing diverse neural signals across different brain-related tasks.
no code implementations • 16 Feb 2025 • Yanran Wu, Inez Hua, Yi Ding
Large language models (LLMs) offer powerful capabilities but come with significant environmental costs, particularly in carbon emissions.
no code implementations • 12 Feb 2025 • Wei Xuan, Meghna Roy Chowdhury, Yi Ding, Yixue Zhao
The global mental health crisis is a pressing concern, with college students particularly vulnerable to rising mental health disorders.
1 code implementation • 6 Feb 2025 • Yi Ding, Joon Hei Lee, Shuailei Zhang, Tianze Luo, Cuntai Guan
Learning the spatial topology of electroencephalogram (EEG) channels and their temporal dynamics is crucial for decoding attention states.
no code implementations • 30 Jan 2025 • Yi Ding, Lijun Li, Bing Cao, Jing Shao
Our experiments demonstrate that fine-tuning InternVL2.5-8B with MIS significantly outperforms both powerful open-source models and API-based models in challenging multi-image tasks requiring safety-related visual reasoning.
no code implementations • 25 Jan 2025 • Meiyan Xu, Qingqing Chen, Duo Chen, Yi Ding, Jingyuan Wang, Peipei Gu, Yijie Pan, DeShuang Huang, Xun Zhang, Jiayang Guo
EEG-based fatigue monitoring can effectively reduce the incidence of related traffic accidents.
1 code implementation • 20 Jan 2025 • Chaoqing Tang, Huanze Zhuang, Guiyun Tian, Zhenli Zeng, Yi Ding, Wenzhong Liu, Xiang Bai
Compressed Sensing (CS) is a well-proven theory that drives many recent breakthroughs in these applications.
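As a rough illustration of the sparse-recovery problem at the heart of CS, the sketch below reconstructs a sparse signal from underdetermined random measurements using ISTA (iterative soft-thresholding); the problem sizes, sparsity level, and regularization weight are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                        # measurements, dimension, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                              # compressed measurements
rel_err = np.linalg.norm(ista(A, y) - x_true) / np.linalg.norm(x_true)
print(rel_err)                              # should be small if recovery succeeds
```

With 50 measurements of a 5-sparse, 200-dimensional signal, the l1 relaxation typically recovers the signal up to a small shrinkage bias.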
no code implementations • 7 Jan 2025 • Siyuan Zhao, Chenyu Liu, Yi Ding, Xinliang Zhou
By finetuning the model with selective source data, our SelectiveFinetuning enhances the model's performance on a target domain that exhibits domain shifts compared to the data used for training.
no code implementations • 31 Dec 2024 • Sophia Nguyen, Beihao Zhou, Yi Ding, Sihang Liu
In this work, we study LLMs from a carbon emission perspective, addressing both operational and embodied emissions, and paving the way for sustainable LLM serving.
no code implementations • 22 Nov 2024 • Hong Ding, ZiMing Wang, Yi Ding, Hongjie Lin, SuYang Xi, Chia Chao Kang
To address the challenge of ensuring safety in ever-changing, unpredictable environments, particularly in the rapidly advancing realm of autonomous driving over today's 5G wireless networks, we present Navigation Secure (NavSecure).
1 code implementation • 5 Nov 2024 • Bing Cao, Yinan Xia, Yi Ding, Changqing Zhang, Qinghua Hu
The decomposed components represent the effective information from the source data, thus the gap between them reflects the Relative Dominability (RD) of the uni-source data in constructing the fusion image.
no code implementations • 3 Nov 2024 • Xinliang Zhou, Yuzhe Han, Zhisheng Chen, Chenyu Liu, Yi Ding, Ziyu Jia, Yang Liu
In this paper, we address the challenges in automatic sleep stage classification, particularly the high computational cost, inadequate modeling of bidirectional temporal dependencies, and class imbalance issues faced by Transformer-based models.
1 code implementation • 9 Oct 2024 • Yi Ding, Bolian Li, Ruqi Zhang
Vision Language Models (VLMs) have become essential backbones for multimodal intelligence, yet significant safety challenges limit their real-world application.
no code implementations • 27 Sep 2024 • Su Chen, Yi Ding, Hiroe Miyake, Xiaojun Li
In scientific machine learning, the task of identifying partial differential equations accurately from sparse and noisy data poses a significant challenge.
no code implementations • 12 Sep 2024 • Hua Yan, Heng Tan, Yi Ding, Pengfei Zhou, Vinod Namboodiri, Yu Yang
To address this, we propose LanHAR, a novel system that leverages Large Language Models (LLMs) to generate semantic interpretations of sensor readings and activity labels for cross-dataset HAR.
no code implementations • 12 Aug 2024 • Chenyu Liu, Xinliang Zhou, Yihao Wu, Yi Ding, Liming Zhai, Kun Wang, Ziyu Jia, Yang Liu
In this paper, we present a comprehensive survey of these studies, delivering a systematic review of graph-related methods in this field from a methodological perspective.
no code implementations • 2 Jul 2024 • Amy Li, Sihang Liu, Yi Ding
We identify and analyze two types of uncertainty -- temporal and spatial -- and discuss their system implications.
1 code implementation • 2 Jul 2024 • Jintu Zheng, Yi Ding, Qizhe Liu, Yi Cao, Ying Hu, Zenan Wang
Traditional fluorescence staining is phototoxic to live cells, slow, and expensive; thus, the subcellular structure prediction (SSP) from transmitted light (TL) images is emerging as a label-free, faster, low-cost alternative.
1 code implementation • 26 Jun 2024 • Yi Ding, Chengxuan Tong, Shuailei Zhang, Muyun Jiang, Yong Li, Kevin Lim Jun Liang, Cuntai Guan
Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn the temporal contextual information.
1 code implementation • 7 Jun 2024 • Bing Cao, Yinan Xia, Yi Ding, Changqing Zhang, Qinghua Hu
Accordingly, we further propose a relative calibration strategy to calibrate the predicted Co-Belief for potential uncertainty.
1 code implementation • 25 Apr 2024 • Yi Ding, Yong Li, Hao Sun, Rui Liu, Chengxuan Tong, Chenyu Liu, Xinliang Zhou, Cuntai Guan
Effectively learning the temporal dynamics in electroencephalogram (EEG) signals is challenging yet essential for decoding brain activities using brain-computer interfaces (BCIs).
no code implementations • 1 Nov 2023 • Uthman Jinadu, Yi Ding
Incorporating every annotator's perspective is crucial for unbiased data modeling.
no code implementations • 21 Sep 2023 • Alex Renda, Yi Ding, Michael Carbin
We first characterize the proportion of data to sample from each region of a program's input space (corresponding to different execution paths of the program) based on the complexity of learning a surrogate of the corresponding execution path.
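The allocation step described above can be sketched as a simple proportional split of a sampling budget; the path names and complexity scores below are hypothetical stand-ins for whatever complexity measure the method derives per execution path.

```python
# Hypothetical sketch: split a sampling budget across a program's input
# regions (execution paths) in proportion to an assumed per-path learning
# complexity score. Path names and scores are illustrative only.

def allocate_samples(complexities, budget):
    """Allocate `budget` samples across paths proportionally to complexity."""
    total = sum(complexities.values())
    return {path: int(round(budget * c / total)) for path, c in complexities.items()}

paths = {"fast_path": 1.0, "slow_path": 3.0, "error_path": 6.0}
alloc = allocate_samples(paths, 100)
print(alloc)  # {'fast_path': 10, 'slow_path': 30, 'error_path': 60}
```

Paths that are harder to learn receive proportionally more training samples for the surrogate.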
1 code implementation • 30 Aug 2023 • Yi Ding, Su Zhang, Chuangao Tang, Cuntai Guan
A natural method is to learn the temporal dynamic patterns.
no code implementations • 14 Aug 2023 • Rui Liu, Yuanyuan Chen, Anran Li, Yi Ding, Han Yu, Cuntai Guan
Though numerous research groups and institutes collect a multitude of EEG datasets for the same BCI task, sharing EEG data from multiple sites is still challenging due to the heterogeneity of devices.
1 code implementation • 28 Jun 2023 • Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, Yi Ding, Yu Luo, Hanwen Chang, Qun Gao, Ziheng Wang, Guy Boudoukh, Moshe Wasserblat
We apply our sparse accelerator on widely-used Transformer-based language models including BERT-Mini, DistilBERT, BERT-Base, and BERT-Large.
no code implementations • 6 May 2023 • Helen Shang, Yi Ding, Vidhya Venkateswaran, Kristin Boulier, Nikhita Kathuria-Prakash, Parisa Boodaghi Malidarreh, Jacob M. Luber, Bogdan Pasaniuc
We found that the PRS313 achieved overlapping Areas under the ROC Curve (AUCs) in females of Latinx (AUC, 0.68; 95% CI, 0.65-0.71) and European ancestry (AUC, 0.70; 95% CI, 0.69-0.71) but lower AUCs for the AFR and EAA populations (AFR: AUC, 0.61; 95% CI, 0.56-0.65; EAA: AUC, 0.64; 95% CI, 0.60-0.68).
no code implementations • 10 Dec 2022 • Yi Ding, Aijia Gao, Thibaud Ryden, Kaushik Mitra, Sukumar Kalmanje, Yanai Golany, Michael Carbin, Henry Hoffmann
While it is tempting to use prior machine learning techniques for predicting job duration, we find that the structure of the maintenance job scheduling problem creates a unique challenge.
1 code implementation • 27 Oct 2022 • Haihao Shen, Ofir Zafrir, Bo Dong, Hengyu Meng, Xinyu Ye, Zhe Wang, Yi Ding, Hanwen Chang, Guy Boudoukh, Moshe Wasserblat
In this work, we propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with optimized kernels for sparse and quantized operators.
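One step of such a pipeline, quantization, can be illustrated in a few lines; the max-abs symmetric int8 scheme below is a standard choice shown as an assumption, not a description of the engine's actual kernels.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization with a max-abs scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.array([-0.8, 0.1, 0.5, 1.27])
q, scale = quantize_int8(w)
w_hat = q.astype(np.float64) * scale   # dequantized approximation
print(np.abs(w - w_hat).max())         # rounding error is bounded by scale / 2
```

Storing `q` plus a single scale shrinks the tensor 4x versus float32, which is what makes the sparse/quantized kernels profitable on CPUs.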
no code implementations • 22 Apr 2022 • Hyunji Kim, Ahsan Pervaiz, Henry Hoffmann, Michael Carbin, Yi Ding
Such solutions monitor past system executions to learn the system's behavior under different hardware resource allocations before dynamically tuning resources to optimize the application execution.
no code implementations • 11 Apr 2022 • Yi Ding, Alex Renda, Ahsan Pervaiz, Michael Carbin, Henry Hoffmann
Our evaluation shows that compared to the state-of-the-art SEML approach in computer systems optimization, Cello improves latency by 1.19X for minimizing latency under a power constraint, and improves energy by 1.18X for minimizing energy under a latency constraint.
1 code implementation • 24 Mar 2022 • Su Zhang, Ruyi An, Yi Ding, Cuntai Guan
The visual encoding from the visual block is concatenated with the attention feature to emphasize the visual information.
1 code implementation • 16 Mar 2022 • Yi Ding, Avinash Rao, Hyebin Song, Rebecca Willett, Henry Hoffmann
To predict stragglers accurately and early without labeled positive examples or assumptions on latency distributions, this paper presents NURD, a novel Negative-Unlabeled learning approach with Reweighting and Distribution-compensation that only trains on negative and unlabeled streaming data.
no code implementations • 20 Jan 2022 • Yi Ding, YingYing Li, Rui Song
We show that our proposed Discretization and Regression with generalized fOlded concaVe penalty on Effect discontinuity (DROVE) approach enjoys desirable theoretical properties and allows for statistical inference of the optimal value associated with optimal decision-making.
1 code implementation • 12 Dec 2021 • Alex Renda, Yi Ding, Michael Carbin
With surrogate adaptation, programmers develop a surrogate of a program, then retrain that surrogate on a different task.
no code implementations • 23 Nov 2021 • Yi Ding, Alex Rich, Mason Wang, Noah Stier, Matthew Turk, Pradeep Sen, Tobias Höllerer
Multimodal classification is a core task in human-centric machine learning.
1 code implementation • 2 Jul 2021 • Su Zhang, Yi Ding, Ziquan Wei, Cuntai Guan
We propose an audio-visual spatial-temporal deep neural network with: (1) a visual block containing a pretrained 2D-CNN followed by a temporal convolutional network (TCN); (2) an aural block containing several parallel TCNs; and (3) a leader-follower attentive fusion block combining the audio-visual information.
1 code implementation • 5 May 2021 • Yi Ding, Neethu Robinson, Chengxuan Tong, Qiuhao Zeng, Cuntai Guan
It captures temporal dynamics of EEG which then serves as input to the proposed local and global graph-filtering layers.
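A graph-filtering layer over EEG channels can be sketched in numpy: per-channel features are mixed through a symmetrically normalized adjacency, in the spirit of the local/global filtering described here. The adjacency, sizes, and weights below are illustrative assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def graph_filter(X, A, W):
    """One graph-filtering layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(normalized_adjacency(A) @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_channels, f_in, f_out = 4, 8, 3
A = (rng.random((n_channels, n_channels)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
X = rng.standard_normal((n_channels, f_in))    # per-channel EEG features
W = rng.standard_normal((f_in, f_out))
out = graph_filter(X, A, W)
print(out.shape)  # (4, 3): one filtered feature vector per channel
```

Each channel's output aggregates its neighbors' features, which is how spatial relations among electrodes enter the representation.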
2 code implementations • 7 Apr 2021 • Yi Ding, Neethu Robinson, Su Zhang, Qiuhao Zeng, Cuntai Guan
TSception consists of dynamic temporal, asymmetric spatial, and high-level fusion layers, which learn discriminative representations in the time and channel dimensions simultaneously.
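The multi-scale idea behind the dynamic temporal layer can be sketched with plain 1-D convolutions whose kernel lengths are fractions of the sampling rate; the sampling rate, fractions, and input signal are illustrative assumptions, not TSception's exact configuration.

```python
import numpy as np

def temporal_conv(x, width):
    """Valid-mode moving-average convolution with a kernel of `width` samples."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="valid")

fs = 128                                     # assumed EEG sampling rate (Hz)
x = np.sin(np.linspace(0.0, 8.0 * np.pi, 2 * fs))   # 2 s of synthetic signal
scales = [0.5, 0.25, 0.125]                  # kernel lengths as fractions of fs
features = [temporal_conv(x, int(fs * s)) for s in scales]
print([f.shape[0] for f in features])        # one feature map per time scale
```

Longer kernels respond to slow dynamics and shorter kernels to fast ones, giving the later layers a bank of time-scale-specific features to fuse.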
1 code implementation • CVPR 2021 • Kento Nishi, Yi Ding, Alex Rich, Tobias Höllerer
In this paper, we evaluate different augmentation strategies for algorithms tackling the "learning with noisy labels" problem.
Ranked #9 on Image Classification on Clothing1M (using extra training data)
no code implementations • 22 Dec 2020 • Yi Ding, Qiqi Yang, Guozheng Wu, Jian Zhang, Zhiguang Qin
In this paper, a network called Brachial Plexus Multi-instance Segmentation Network (BPMSegNet) is proposed to identify different tissues (nerves, arteries, veins, muscles) in ultrasound images.
no code implementations • 21 Dec 2020 • Yi Ding, Wei Zheng, Guozheng Wu, Ji Geng, Mingsheng Cao, Zhiguang Qin
Moreover, a multi-view fusion loss, comprising the segmentation loss, the transition loss, and the decision loss, is proposed to facilitate the training of multi-view learning networks, preserving consistency of appearance and space both when fusing segmentation results and when training the learning network.
no code implementations • 21 Dec 2020 • Yi Ding, Fuyuan Tan, Zhen Qin, Mingsheng Cao, Kim-Kwang Raymond Choo, Zhiguang Qin
In this paper, a novel deep learning-based key generation network (DeepKeyGen) is proposed as a stream cipher generator to generate the private key, which can then be used for encrypting and decrypting medical images.
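How a generated keystream is applied as a stream cipher can be shown with XOR, which is its own inverse; here the keystream is simply random for demonstration (DeepKeyGen produces it with a learned network), and this toy sketch is NOT a secure cipher.

```python
import numpy as np

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)     # toy "medical image"
key = rng.integers(0, 256, size=image.shape, dtype=np.uint8)  # stand-in keystream

cipher = image ^ key        # encrypt: XOR plaintext with the keystream
recovered = cipher ^ key    # decrypt: XOR again with the same key
print(np.array_equal(recovered, image))
```

Any party holding the same key recovers the image exactly; without it, the ciphertext pixels are uniformly scrambled.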
no code implementations • 8 Nov 2020 • Mengran Liu, Weiwei Fang, Xiaodong Ma, Wenyuan Xu, Naixue Xiong, Yi Ding
Guided by the scale values generated by SCA for measuring channel importance, we further propose a new channel pruning approach called Channel Pruning guided by Spatial and Channel Attention (CPSCA).
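The pruning step guided by those scale values can be sketched as keeping the top fraction of channels by importance; the weights, scale values, and keep ratio below are synthetic placeholders, since CPSCA derives its scales from spatial and channel attention.

```python
import numpy as np

def prune_channels(weights, scales, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of output channels by attention scale."""
    n_keep = max(1, int(len(scales) * keep_ratio))
    keep = np.sort(np.argsort(scales)[-n_keep:])   # indices of retained channels
    return weights[keep], keep

W = np.arange(24.0).reshape(6, 4)       # 6 output channels, 4 weights each
scales = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.3])
W_pruned, kept = prune_channels(W, scales, keep_ratio=0.5)
print(kept)  # channels with the largest scales survive
```

Channels with small attention scales contribute little to the output, so dropping them shrinks the model with minimal accuracy loss.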
1 code implementation • 11 Aug 2020 • Gang Chen, Yi Ding, Hugo Edwards, Chong Hin Chau, Sai Hou, Grace Johnson, Mohammed Sharukh Syed, Haoyuan Tang, Yue Wu, Ye Yan, Gil Tidhar, Nir Lipovetzky
Planimation is a modular and extensible open source framework to visualise sequential solutions of planning problems specified in PDDL.
1 code implementation • NeurIPS 2020 • Ming Gao, Yi Ding, Bryon Aragam
We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data.
no code implementations • 12 Apr 2020 • Yi Ding, Guozheng Wu, Dajiang Chen, Ning Zhang, Linpeng Gong, Mingsheng Cao, Zhiguang Qin
Specifically, in DeepEDN, the Cycle-Generative Adversarial Network (Cycle-GAN) is employed as the main learning network to transfer the medical image from its original domain into the target domain.
1 code implementation • 2 Apr 2020 • Yi Ding, Neethu Robinson, Qiuhao Zeng, Duo Chen, Aung Aung Phyo Wai, Tih-Shih Lee, Cuntai Guan
TSception consists of temporal and spatial convolutional layers, which learn discriminative representations in the time and channel domains simultaneously.
no code implementations • 15 Jan 2019 • Xuefeng Peng, Yi Ding, David Wihl, Omer Gottesman, Matthieu Komorowski, Li-wei H. Lehman, Andrew Ross, Aldo Faisal, Finale Doshi-Velez
On a large retrospective cohort, this mixture-based approach outperforms physician, kernel-only, and DRL-only experts.
1 code implementation • 27 Aug 2018 • Yi Ding, Panos Toulis
In this setting, we propose to screen out control units that have a weak dynamical relationship to the single treated unit before the model is fit.
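One simple instance of such pre-fit screening is a correlation filter on the pre-treatment series: control units whose trajectories relate weakly to the treated unit are dropped before the model is fit. The threshold and synthetic data below are assumptions for demonstration, not the paper's screening rule.

```python
import numpy as np

def screen_controls(treated, controls, min_abs_corr=0.3):
    """Return indices of control units passing a pre-treatment correlation screen."""
    keep = []
    for j in range(controls.shape[1]):
        r = np.corrcoef(treated, controls[:, j])[0, 1]
        if abs(r) >= min_abs_corr:
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 60)
treated = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(60)
related = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(60)  # strong relation
noise = rng.standard_normal(60)                                  # weak relation
kept_units = screen_controls(treated, np.column_stack([related, noise]))
print(kept_units)  # the dynamically related unit should survive the screen
```

Screening out weakly related controls before fitting reduces the risk of overfitting the counterfactual to spurious donors.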
no code implementations • 31 May 2018 • Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Dong-hun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, Finale Doshi-Velez
Much attention has been devoted recently to the development of machine learning algorithms with the goal of improving treatment policies in healthcare.
no code implementations • 30 May 2018 • Tian Lan, Yuanyuan Li, Jonah Kimani Murugi, Yi Ding, Zhiguang Qin
The early detection and early diagnosis of lung cancer are crucial to improve the survival rate of lung cancer patients.
1 code implementation • NeurIPS 2017 • Yi Ding, Risi Kondor, Jonathan Eskreis-Winkler
Gaussian process regression generally does not scale beyond a few thousand data points without applying some sort of kernel approximation method.
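The scaling bottleneck is visible in a minimal exact-GP sketch: the Cholesky factorization of the n-by-n kernel matrix costs O(n^3), which is exactly what kernel approximation methods avoid. The RBF kernel, hyperparameters, and data here are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, lengthscale=0.2):
    """Squared-exponential kernel matrix between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X, y, X_star, noise=1e-2):
    """Exact GP posterior mean; the Cholesky step is the O(n^3) bottleneck."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf(X_star, X) @ alpha

X = np.linspace(0.0, 1.0, 50)[:, None]   # 50 training inputs
y = np.sin(4.0 * X[:, 0])                # noiseless targets
pred = gp_predict(X, y, np.array([[0.5]]))
print(pred)                              # close to sin(2.0)
```

Doubling n multiplies the factorization cost by eight, which is why exact inference stalls around a few thousand points.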
no code implementations • 1 Feb 2016 • Yi Ding, Peilin Zhao, Steven C. H. Hoi, Yew-Soon Ong
Despite the encouraging results reported, existing online AUC maximization algorithms often adopt simple online gradient descent approaches that fail to exploit the geometrical knowledge of the data observed during the online learning process, and thus can suffer relatively larger regret.
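The simple online-gradient-descent baseline the passage refers to can be sketched as pairwise hinge-loss updates over buffered positive/negative examples, a common formulation for online AUC maximization; the data stream, buffering scheme, and step size are illustrative assumptions.

```python
import numpy as np

def pairwise_hinge_grad(w, x_pos, x_neg):
    """Gradient of the pairwise hinge loss max(0, 1 - w·(x_pos - x_neg))."""
    diff = x_pos - x_neg
    return -diff if w @ diff < 1.0 else np.zeros_like(w)

rng = np.random.default_rng(0)
d, lr = 5, 0.1
w = np.zeros(d)
pos_buf, neg_buf = [], []
for _ in range(500):                         # simulated data stream
    y = int(rng.integers(0, 2))
    x = rng.standard_normal(d) + (1.0 if y else -1.0)   # class-shifted features
    opposite = neg_buf if y else pos_buf
    (pos_buf if y else neg_buf).append(x)
    if opposite:                             # pair with a buffered opposite-class point
        partner = opposite[int(rng.integers(len(opposite)))]
        xp, xn = (x, partner) if y else (partner, x)
        w -= lr * pairwise_hinge_grad(w, xp, xn)
print(w)                                     # should lean toward the positive shift
```

AUC depends only on the ranking of positives above negatives, which is why the update acts on positive/negative pairs rather than individual labels; exploiting second-order (geometrical) information on top of this is what the paper argues plain gradient descent leaves on the table.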