1 code implementation • COLING 2022 • Zhen Huang, Zhilong Lv, Xiaoyun Han, Binyang Li, Menglong Lu, Dongsheng Li
SBAG first pre-trains a multi-layer perceptron network to capture social bot features, and then constructs multiple graph neural networks by embedding the features to model the early propagation of posts, which is further used to detect rumors.
no code implementations • Findings (EMNLP) 2021 • Sen Yang, Qingyu Zhou, Dawei Feng, Yang Liu, Chao Li, Yunbo Cao, Dongsheng Li
Moreover, this task can be used to improve visual question generation and visual question answering.
no code implementations • Findings (NAACL) 2022 • Li Zhenzhen, Yuyang Zhang, Jian-Yun Nie, Dongsheng Li
In this paper, we propose to learn a prototype encoder from relation definition in a way that is useful for relation instance classification.
1 code implementation • 1 Dec 2023 • Lei Guan, Dongsheng Li, Jiye Liang, Wenjian Wang, Xicheng Lu
The key insight of our proposal is that we employ a weight prediction strategy in the forward pass to ensure that each mini-batch uses consistent and staleness-free weights to compute the forward pass.
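The weight-prediction idea above can be sketched with a momentum-style extrapolation: a worker whose weights are a few optimizer steps stale can approximate the up-to-date weights by stepping along the current velocity. This is an illustrative assumption, not the paper's exact prediction rule; the function name and toy numbers are hypothetical.

```python
import numpy as np

def predict_weights(w, velocity, lr, staleness):
    """Extrapolate weights `staleness` momentum-SGD steps into the future:
    w_{t+s} is approximated as w_t - s * lr * v_t (a rough lookahead)."""
    return w - staleness * lr * velocity

# Toy check: starting from zero weights with unit velocity and lr = 0.1,
# a 2-step lookahead moves every coordinate by -0.2.
w = np.zeros(3)
v = np.ones(3)           # current momentum buffer
w_pred = predict_weights(w, v, lr=0.1, staleness=2)
```

In a pipeline-parallel setting, such a prediction lets each mini-batch compute its forward pass against an estimate of the weights it will actually be updated with, rather than stale ones.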
1 code implementation • 30 Nov 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Wenqi Zhang, Kan Ren, Siyu Yuan, Weiming Lu, Dongsheng Li, Yueting Zhuang
To this end, we introduce TaskBench to evaluate the capability of LLMs in task automation.
no code implementations • 24 Nov 2023 • Jie Lian, Xufang Luo, Caihua Shan, Dongqi Han, Varut Vardhanabhuti, Dongsheng Li
However, selecting the appropriate edge feature to define patient similarity and construct the graph is challenging, given that each patient is depicted by high-dimensional features from diverse sources.
no code implementations • 24 Nov 2023 • Xiaoxuan He, Yifan Yang, Xinyang Jiang, Xufang Luo, Haoji Hu, Siyun Zhao, Dongsheng Li, Yuqing Yang, Lili Qiu
To overcome the aforementioned challenges, we propose an Unified Medical Image Pre-training framework, namely UniMedI, which utilizes diagnostic reports as common semantic space to create unified representations for diverse modalities of medical images (especially for 2D and 3D images).
no code implementations • 24 Nov 2023 • Zimian Wei, Hengyue Pan, Lujun Li, Peijie Dong, Zhiliang Tian, Xin Niu, Dongsheng Li
In this paper, for the first time, we investigate how to search in a training-free manner with the help of teacher models and devise an effective Training-free ViT (TVT) search framework.
no code implementations • 22 Nov 2023 • Zefan Qu, Xinyang Jiang, Yifan Yang, Dongsheng Li, Cairong Zhao
To the best of our knowledge, we are the first to exploit the LUT structure to extract temporal information in video tasks.
2 code implementations • Proceedings of the 32nd ACM International Conference on Information and Knowledge Management 2023 • Fangye Wang, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu
It is crucial to effectively model feature interactions to improve the prediction performance of CTR models.
Ranked #1 on Click-Through Rate Prediction on Criteo
1 code implementation • 8 Nov 2023 • Fangye Wang, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Li Shang, Ning Gu
In addition, we present a new architecture of assigning independent FR modules to separate sub-networks for parallel CTR models, as opposed to the conventional method of inserting a shared FR module on top of the embedding layer.
no code implementations • 25 Oct 2023 • Yao Cheng, Caihua Shan, Yifei Shen, Xiang Li, Siqiang Luo, Dongsheng Li
In this paper, we study graph label noise in the context of arbitrary heterophily, with the aim of rectifying noisy labels and assigning labels to previously unlabeled nodes.
no code implementations • 23 Oct 2023 • Tao Sun, Congliang Chen, Peng Qiao, Li Shen, Xinwang Liu, Dongsheng Li
Sign-based stochastic methods have gained attention due to their ability to achieve robust performance despite using only the sign information for parameter updates.
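For reference, the basic sign-based update (signSGD) discards gradient magnitudes and moves each parameter by a fixed step along the gradient's sign. A minimal sketch on a toy quadratic, assuming the standard signSGD rule rather than the paper's specific variant:

```python
import numpy as np

def signsgd_step(w, grad, lr):
    # Parameters move by lr in the direction of sign(grad) only;
    # gradient magnitude information is deliberately ignored.
    return w - lr * np.sign(grad)

# Minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([3.0, -2.0, 0.5])
for _ in range(100):
    w = signsgd_step(w, grad=w, lr=0.01)
final_norm = float(np.linalg.norm(w))
```

Because every coordinate moves at the same fixed rate, the iterates shrink linearly toward zero and then oscillate within one step size of it, which is the robustness/accuracy trade-off these methods study.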
1 code implementation • 13 Oct 2023 • Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li
Large language models (LLMs) have gained enormous attention from both academia and industry, due to their exceptional ability in language generation and extremely powerful generalization.
1 code implementation • 10 Oct 2023 • Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu
Inspired by these findings, we propose LongLLMLingua for prompt compression towards improving LLMs' perception of the key information to simultaneously address the three challenges.
no code implementations • 9 Oct 2023 • Zhihua Wen, Zhiliang Tian, Wei Wu, Yuxin Yang, Yanqi Shi, Zhen Huang, Dongsheng Li
Finally, we select the most fitting chains of evidence from the evidence forest and integrate them into the generated story, thereby enhancing the narrative's complexity and credibility.
no code implementations • 19 Aug 2023 • Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
The rapid evolution of the web has led to an exponential growth in content.
no code implementations • 18 Aug 2023 • Xiaoge Deng, Li Shen, Shengwei Li, Tao Sun, Dongsheng Li, DaCheng Tao
Stochastic gradient descent (SGD) performed in an asynchronous manner plays a crucial role in training large-scale machine learning models.
1 code implementation • 14 Aug 2023 • Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu
Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users.
no code implementations • 5 Aug 2023 • Menglong Lu, Zhen Huang, Yunxiang Zhao, Zhiliang Tian, Yang Liu, Dongsheng Li
To this end, we employ domain adversarial learning as a heuristic neural network initialization method, which can help the meta-learning module converge to a better optimum.
no code implementations • 4 Aug 2023 • Menglong Lu, Zhen Huang, Zhiliang Tian, Yunxiang Zhao, Xuanyu Fei, Dongsheng Li
Theoretically, we prove the convergence of the meta-learning algorithm in MTEM and analyze the effectiveness of MTEM in achieving domain adaptation.
no code implementations • 29 Jul 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Jiongran Wu, Peng Zhang, Li Shang, Ning Gu
We conducted comprehensive experiments to validate the effectiveness of IMCorrect and the results demonstrate that IMCorrect is superior in completeness, utility, and efficiency, and is applicable in many recommendation unlearning scenarios.
no code implementations • 27 Jul 2023 • Yu-Ting Lan, Kan Ren, Yansen Wang, Wei-Long Zheng, Dongsheng Li, Bao-liang Lu, Lili Qiu
Seeing is believing; however, the underlying mechanism of how human visual perception is intertwined with our cognition is still a mystery.
no code implementations • 7 Jul 2023 • Dongsheng Li, Chen Shen
In this paper, a three-machine equivalent method applicable to asymmetrical faults is proposed considering the operating wind speed and fault severity.
no code implementations • 6 Jul 2023 • Yuchen Fang, Zhenggang Tang, Kan Ren, Weiqing Liu, Li Zhao, Jiang Bian, Dongsheng Li, Weinan Zhang, Yong Yu, Tie-Yan Liu
Order execution is a fundamental task in quantitative finance, aiming at completing the acquisition or liquidation of a number of trading orders for specific assets.
no code implementations • 6 Jul 2023 • Yifei Shen, Jiawei Shao, Xinjie Zhang, Zehong Lin, Hao Pan, Dongsheng Li, Jun Zhang, Khaled B. Letaief
The evolution of wireless networks gravitates towards connected intelligence, a concept that envisions seamless interconnectivity among humans, objects, and intelligence in a hyper-connected cyber-physical world.
no code implementations • 2 Jul 2023 • Ruiwen Zhou, Minghuan Liu, Kan Ren, Xufang Luo, Weinan Zhang, Dongsheng Li
Due to the nature of risk management in learning applicable policies, risk-sensitive reinforcement learning (RSRL) has been realized as an important direction.
no code implementations • 2 Jul 2023 • Ziyue Li, Yuchen Fang, You Li, Kan Ren, Yansen Wang, Xufang Luo, Juanyong Duan, Congrui Huang, Dongsheng Li, Lili Qiu
A timely detection of seizures for newborn infants with electroencephalogram (EEG) has been a common yet life-saving practice in the Neonatal Intensive Care Unit (NICU).
no code implementations • 5 Jun 2023 • Yukang Liang, Kaitao Song, Shaoguang Mao, Huiqiang Jiang, Luna Qiu, Yuqing Yang, Dongsheng Li, Linli Xu, Lili Qiu
Pronunciation assessment is a major challenge in the computer-aided pronunciation training system, especially at the word (phoneme)-level.
1 code implementation • 26 May 2023 • Lei Guan, Dongsheng Li, Jian Meng, Yanqi Shi
The predicted future weights are used to update the DNN parameters, enabling the gradient-based optimizer to achieve better convergence and generalization than the original optimizer without weight prediction.
no code implementations • 23 May 2023 • Guangping Zhang, Dongsheng Li, Hansu Gu, Tun Lu, Li Shang, Ning Gu
In this work, we propose SimuLine, a simulation platform to dissect the evolution of news recommendation ecosystems and present a detailed analysis of the evolutionary process and underlying mechanisms.
2 code implementations • 22 May 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
In this paper, we propose DiffusionNER, which formulates the named entity recognition task as a boundary-denoising diffusion process and thus generates named entities from noisy spans.
Ranked #2 on Nested Named Entity Recognition on GENIA
no code implementations • 22 May 2023 • Haoqi Zheng, Qihuang Zhong, Liang Ding, Zhiliang Tian, Xin Niu, Dongsheng Li, DaCheng Tao
However, most mixup methods do not consider the varying degree of learning difficulty at different stages of training and generate new samples with one-hot labels, resulting in model over-confidence.
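For context, the vanilla mixup formulation that this line of work builds on interpolates both inputs and labels, producing soft (mixed) labels rather than hard one-hot ones. A minimal sketch of that baseline (variable names are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    """Vanilla mixup: convex combination of an input pair and its labels."""
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2   # soft label, no longer one-hot
    return x, y

x_a, y_a = np.array([1.0, 0.0]), np.array([1.0, 0.0])  # class 0 example
x_b, y_b = np.array([0.0, 1.0]), np.array([0.0, 1.0])  # class 1 example
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b, lam=0.7)
```

In practice `lam` is drawn from a Beta(alpha, alpha) distribution per batch; the soft label is what counteracts the over-confidence that hard labels induce.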
1 code implementation • The Eleventh International Conference on Learning Representations 2023 • Shuguang Dou, Xinyang Jiang, Cai Rong Zhao, Dongsheng Li
The energy consumption for training deep learning models is increasing at an alarming rate due to the growth of training data and model scale, resulting in a negative impact on carbon neutrality.
1 code implementation • International Conference on Learning Representations 2023 • Ziyue Li, Kan Ren, Xinyang Jiang, Yifei Shen, Haipeng Zhang, Dongsheng Li
Moreover, our method is highly efficient and achieves more than 1000 times training speedup compared to the conventional DG methods with fine-tuning a pretrained model.
Ranked #1 on Domain Generalization on PACS
no code implementations • 28 Apr 2023 • Lei Zhang, Yuge Zhang, Kan Ren, Dongsheng Li, Yuqing Yang
In contrast, though human engineers have the incredible ability to understand tasks and reason about solutions, their experience and knowledge are often sparse and difficult to utilize by quantitative approaches.
no code implementations • 23 Apr 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu
Specifically, TriSIM4Rec consists of 1) a dynamic ideal low-pass graph filter to dynamically mine co-occurrence information in user-item interactions, which is implemented by incremental singular value decomposition (SVD); 2) a parameter-free attention module to capture sequential information of user interactions effectively and efficiently; and 3) an item transition matrix to store the transition probabilities of item pairs.
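The "ideal low-pass graph filter" in component 1) can be pictured with a truncated SVD of the user-item interaction matrix: keeping only the top-k singular directions smooths away high-frequency noise in the interaction signal. This is a schematic of the filtering idea under that assumption, not TriSIM4Rec's incremental-SVD implementation:

```python
import numpy as np

def low_pass_filter(R, k):
    """Project the interaction matrix onto its top-k singular directions."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
# Toy implicit-feedback matrix: 20 users x 15 items, ~30% observed.
R = (rng.random((20, 15)) < 0.3).astype(float)
R_smooth = low_pass_filter(R, k=3)
```

The incremental-SVD variant updates `U`, `s`, `Vt` as new interactions arrive instead of recomputing the full decomposition each time.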
1 code implementation • 11 Apr 2023 • Xinnan Dai, Caihua Shan, Jie Zheng, Xiaoxiao Li, Dongsheng Li
BFReg-NN starts from gene expression data and is capable of merging most existing biological knowledge into the model, including the regulatory relations among genes or proteins (e.g., gene regulatory networks (GRN), protein-protein interaction networks (PPI)) and the hierarchical relations among genes, proteins and pathways (e.g., several genes/proteins are contained in a pathway).
1 code implementation • 11 Apr 2023 • Dongqi Han, Kenji Doya, Dongsheng Li, Jun Tani
Habitual behavior is generated using the prior distribution of intention, which is goal-less, while goal-directed behavior is generated using the posterior distribution of intention, which is conditioned on the goal.
1 code implementation • NeurIPS 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence.
no code implementations • 14 Mar 2023 • Jinchao Li, Kaitao Song, Junan Li, Bo Zheng, Dongsheng Li, Xixin Wu, Xunying Liu, Helen Meng
This paper presents several efficient methods to extract better AD-related cues from high-level acoustic and linguistic features.
1 code implementation • 14 Mar 2023 • Jinchao Li, Xixin Wu, Kaitao Song, Dongsheng Li, Xunying Liu, Helen Meng
Experimental results based on the ACII Challenge 2022 dataset demonstrate the superior performance of the proposed system and the effectiveness of considering multiple relationships using hierarchical regression chain models.
Ranked #1 on Vocal Bursts Intensity Prediction on HUME-VB
no code implementations • 14 Mar 2023 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
In this paper, we consider an offline-to-online setting where the agent is first learned from the offline dataset and then trained online, and propose a framework called Adaptive Policy Learning for effectively taking advantage of offline and online data.
1 code implementation • CVPR 2023 • Hao Yu, Zheng Qin, Ji Hou, Mahdi Saleh, Dongsheng Li, Benjamin Busam, Slobodan Ilic
To this end, we introduce RoITr, a Rotation-Invariant Transformer to cope with the pose variations in the point cloud matching task.
1 code implementation • 1 Mar 2023 • Guanghao Yin, Zefan Qu, Xinyang Jiang, Shan Jiang, Zhenhua Han, Ningxin Zheng, Xiaohong Liu, Huan Yang, Yuqing Yang, Dongsheng Li, Lili Qiu
To facilitate the research on this problem, a new benchmark dataset named LDV-WebRTC is constructed based on a real-world online streaming system.
no code implementations • 27 Feb 2023 • Jiaqi Gao, Xinyang Jiang, Yuqing Yang, Dongsheng Li, Lili Qiu
Correspondingly, we propose a Dual Stream deep model for Stereotypical Behaviours Detection, DS-SBD, based on the temporal trajectory of human poses and the repetition patterns of human actions.
no code implementations • 4 Feb 2023 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Li Shang, Ning Gu
However, the interaction signal may not be sufficient to accurately characterize user interests and the low-pass filters may ignore the useful information contained in the high-frequency component of the observed signals, resulting in suboptimal accuracy.
no code implementations • 29 Jan 2023 • Ziyue Li, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, Dongsheng Li
Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., up to 2048X in large-scale ensemble tasks.
no code implementations • 29 Jan 2023 • Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
In this paper, to comprehensively enhance the performance of generative graph SSL against other GCL models on both unsupervised and supervised learning tasks, we propose the SeeGera model, which is based on the family of self-supervised variational graph auto-encoder (VGAE).
no code implementations • 24 Jan 2023 • Peijie Dong, Xin Niu, Zhiliang Tian, Lujun Li, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li
Practical networks for edge devices adopt shallow depth and small convolutional kernels to save memory and computational cost, which leads to a restricted receptive field.
1 code implementation • 24 Jan 2023 • Peijie Dong, Xin Niu, Lujun Li, Zhiliang Tian, Xiaodong Wang, Zimian Wei, Hengyue Pan, Dongsheng Li
In this paper, we propose Ranking Distillation one-shot NAS (RD-NAS) to enhance ranking consistency, which utilizes zero-cost proxies as the cheap teacher and adopts the margin ranking loss to distill the ranking knowledge.
no code implementations • 28 Dec 2022 • Zimian Wei, Hengyue Pan, Xin Niu, Dongsheng Li
OVO samples sub-nets for both teacher and student networks for better distillation results.
no code implementations • 8 Dec 2022 • Cairong Zhao, Yubin Wang, Xinyang Jiang, Yifei Shen, Kaitao Song, Dongsheng Li, Duoqian Miao
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples.
no code implementations • 7 Dec 2022 • Jiangsu Du, Dongsheng Li, Yingpeng Wen, Jiazhi Jiang, Dan Huang, Xiangke Liao, Yutong Lu
In this paper, we propose a scalable evaluation methodology (SAIH) for analyzing the AI performance trend of HPC systems with scaling the problem sizes of customized AI applications.
1 code implementation • 1 Dec 2022 • Fangye Wang, Yingxu Wang, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Ning Gu
Many Click-Through Rate (CTR) prediction works focused on designing advanced architectures to model complex feature interactions but neglected the importance of feature representation learning, e.g., adopting a plain embedding layer for each feature, which results in sub-optimal feature representations and thus inferior CTR prediction performance.
no code implementations • 23 Nov 2022 • Dongsheng Li, Chen Shen, Ye Liu, Ying Chen, Shaowei Huang
In order to reduce the complexity of simulating power systems that include large-scale wind farms, it is critical to develop dynamic equivalent methods for wind farms that are applicable to expected contingency analysis.
no code implementations • 20 Nov 2022 • Wenli Sun, Xinyang Jiang, Shuguang Dou, Dongsheng Li, Duoqian Miao, Cheng Deng, Cairong Zhao
Instead of learning fixed triggers for the target classes from the training set, DT-IBA can dynamically generate new triggers for any unknown identities.
1 code implementation • 14 Nov 2022 • Yicheng Zou, Kaitao Song, Xu Tan, Zhongkai Fu, Qi Zhang, Dongsheng Li, Tao Gui
By analyzing this dataset, we find that a large improvement in summarization quality can be achieved by providing ground-truth omission labels for the summarization model to recover omission information, which demonstrates the importance of omission detection for omission mitigation in dialogue summarization.
1 code implementation • 15 Oct 2022 • Jiahao Liu, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Ning Gu
Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time.
no code implementations • 12 Oct 2022 • Tairan He, Yuge Zhang, Kan Ren, Minghuan Liu, Che Wang, Weinan Zhang, Yuqing Yang, Dongsheng Li
A good state representation is crucial to solving complicated reinforcement learning (RL) challenges.
no code implementations • 23 Sep 2022 • Zhigang Kan, Linhui Feng, Zhangyue Yin, Linbo Qiao, Xipeng Qiu, Dongsheng Li
In this paper, we propose a novel composable prompt-based generative framework, which could be applied to a wide range of tasks in the field of Information Extraction.
no code implementations • 16 Sep 2022 • Zimian Wei, Hengyue Pan, Lujun Li, Menglong Lu, Xin Niu, Peijie Dong, Dongsheng Li
Vision transformers have shown excellent performance in computer vision tasks.
no code implementations • 10 Aug 2022 • Kaitao Song, Teng Wan, Bixia Wang, Huiqiang Jiang, Luna Qiu, Jiahang Xu, Liping Jiang, Qun Lou, Yuqing Yang, Dongsheng Li, Xudong Wang, Lili Qiu
Specifically, we first pre-train an encoder-decoder framework in an automatic speech recognition (ASR) objective by using speech-to-text dataset, and then fine-tune ASR encoder on the cleft palate dataset for hypernasality estimation.
no code implementations • 4 Aug 2022 • Jun Xiao, Xinyang Jiang, Ningxin Zheng, Huan Yang, Yifan Yang, Yuqing Yang, Dongsheng Li, Kin-Man Lam
Then, our proposed CKBG method enhances this lightweight base model by bypassing the original network with "kernel grafts", which are extra convolutional kernels containing the prior knowledge of external pretrained image SR models.
no code implementations • 15 Jul 2022 • Shuguang Dou, Xinyang Jiang, Qingsong Zhao, Dongsheng Li, Cairong Zhao
In this paper, we aim to develop a technique that can achieve a good trade-off between privacy protection and data usability for person ReID.
no code implementations • 26 Jun 2022 • Yezhen Wang, Tong Che, Bo Li, Kaitao Song, Hengzhi Pei, Yoshua Bengio, Dongsheng Li
Autoregressive generative models are commonly used, especially for those tasks involving sequential data.
no code implementations • CVPR 2022 • Ruoxi Shi, Xinyang Jiang, Caihua Shan, Yansen Wang, Dongsheng Li
Instead of looking at one format, it is a good solution to utilize the formats of VG and RG together to avoid these shortcomings.
no code implementations • 17 Jun 2022 • Kerong Wang, Hanye Zhao, Xufang Luo, Kan Ren, Weinan Zhang, Dongsheng Li
Offline reinforcement learning (RL) aims at learning policies from previously collected static trajectory data without interacting with the real environment.
1 code implementation • 10 Jun 2022 • Zhiquan Lai, Shengwei Li, Xudong Tang, Keshi Ge, Weijie Liu, Yabo Duan, Linbo Qiao, Dongsheng Li
These features make it necessary to apply 3D parallelism, which integrates data parallelism, pipeline model parallelism and tensor model parallelism, to achieve high training efficiency.
1 code implementation • 25 May 2022 • Kaitao Song, Yichong Leng, Xu Tan, Yicheng Zou, Tao Qin, Dongsheng Li
Previous works on sentence scoring mainly adopted either causal language modeling (CLM) like GPT or masked language modeling (MLM) like BERT, which have some limitations: 1) CLM only utilizes unidirectional information for the probability estimation of a sentence without considering bidirectional context, which affects the scoring quality; 2) MLM can only estimate the probability of partial tokens at a time and thus requires multiple forward passes to estimate the probability of the whole sentence, which incurs a large computation and time cost.
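The CLM/MLM trade-off above can be made concrete with toy scoring functions: CLM sums next-token log-probabilities in a single left-to-right pass, while MLM's pseudo-log-likelihood masks each position in turn, so it needs one forward pass per token. The uniform "models" here are stand-ins to keep the sketch self-contained, not the paper's scorers:

```python
import math

def clm_score(tokens, logprob_next):
    # One left-to-right pass: sum of log P(w_i | w_<i).
    return sum(logprob_next(tokens[:i], tokens[i]) for i in range(len(tokens)))

def mlm_score(tokens, logprob_masked):
    # Pseudo-log-likelihood: mask each position in turn,
    # i.e., len(tokens) separate forward passes.
    return sum(logprob_masked(tokens, i) for i in range(len(tokens)))

VOCAB = 10
uniform_next = lambda prefix, tok: math.log(1.0 / VOCAB)
uniform_masked = lambda toks, i: math.log(1.0 / VOCAB)

sent = ["the", "cat", "sat"]
s_clm = clm_score(sent, uniform_next)
s_mlm = mlm_score(sent, uniform_masked)
```

With a uniform model both scores coincide at `len(sent) * log(1/VOCAB)`; with a real bidirectional model the MLM score uses context on both sides of each token, which is exactly what the single CLM pass cannot do.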
no code implementations • 19 May 2022 • Zhengyu Yang, Kan Ren, Xufang Luo, Minghuan Liu, Weiqing Liu, Jiang Bian, Weinan Zhang, Dongsheng Li
Considering the great performance of ensemble methods on both accuracy and generalization in supervised learning (SL), we design a robust and applicable method named Ensemble Proximal Policy Optimization (EPPO), which learns ensemble policies in an end-to-end manner.
1 code implementation • 15 May 2022 • Xiang Li, Renyu Zhu, Yao Cheng, Caihua Shan, Siqiang Luo, Dongsheng Li, Weining Qian
Further, for other homophilous nodes excluded in the neighborhood, they are ignored for information aggregation.
Ranked #1 on Node Classification on pokec
1 code implementation • 19 Apr 2022 • Fangye Wang, Yingxu Wang, Dongsheng Li, Hansu Gu, Tun Lu, Peng Zhang, Ning Gu
However, most methods only learn a fixed representation for each feature without considering the varying importance of each feature under different contexts, resulting in inferior performance.
1 code implementation • 14 Apr 2022 • Hengyue Pan, Yixin Chen, Xin Niu, Wenbo Zhou, Dongsheng Li
The most important motivation of this research is that we can use the straightforward element-wise multiplication operation to replace the image convolution in the frequency domain based on the Cross-Correlation Theorem, which obviously reduces the computation complexity.
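The convolution theorem the sentence invokes is easy to verify numerically: circular convolution in the spatial domain equals element-wise multiplication of spectra in the frequency domain. A small NumPy sketch (illustrative, not the paper's implementation) compares the FFT route against a direct circular convolution:

```python
import numpy as np

def circular_conv2d(image, kernel):
    # Zero-pad the kernel to the image size, multiply spectra pointwise,
    # and transform back: this is the frequency-domain route.
    K = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

def direct_circular_conv2d(image, kernel):
    # Naive O(H*W*h*w) circular convolution for comparison.
    H, W = image.shape
    out = np.zeros_like(image, dtype=float)
    for i in range(H):
        for j in range(W):
            for a in range(kernel.shape[0]):
                for b in range(kernel.shape[1]):
                    out[i, j] += kernel[a, b] * image[(i - a) % H, (j - b) % W]
    return out

rng = np.random.default_rng(1)
img = rng.random((8, 8))
ker = rng.random((3, 3))
```

The two routes agree to floating-point precision, while the FFT version replaces the quadruple loop with element-wise multiplication, which is the computational saving the abstract refers to.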
1 code implementation • IEEE Transactions on Knowledge and Data Engineering 2021 • Chao Chen, Dongsheng Li, Junchi Yan, Xiaokang Yang
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
1 code implementation • 30 Mar 2022 • Yu Tang, Chenyu Wang, Yufan Zhang, Yuliang Liu, Xingcheng Zhang, Linbo Qiao, Zhiquan Lai, Dongsheng Li
To the best of our knowledge, we are the first to make a reasonable dynamic runtime scheduler on the combination of tensor swapping and tensor recomputation without user oversight.
no code implementations • 30 Mar 2022 • Shifu Yan, Caihua Shan, Wenyi Yang, Bixiong Xu, Dongsheng Li, Lili Qiu, Jie Tong, Qi Zhang
To this end, we propose a cross-metric multi-dimensional root cause analysis method, named CMMD, which consists of two key components: 1) relationship modeling, which utilizes graph neural network (GNN) to model the unknown complex calculation among metrics and aggregation function among dimensions from historical data; 2) root cause localization, which adopts the genetic algorithm to efficiently and effectively dive into the raw data and localize the abnormal dimension(s) once the KPI anomalies are detected.
no code implementations • 9 Mar 2022 • Ziyue Li, Kan Ren, Xinyang Jiang, Bo Li, Haipeng Zhang, Dongsheng Li
Fine-tuning pretrained models is a common practice in domain generalization (DG) tasks.
Ranked #6 on Domain Generalization on PACS
no code implementations • 8 Mar 2022 • Zimian Wei, Hengyue Pan, Lujun Li, Menglong Lu, Xin Niu, Peijie Dong, Dongsheng Li
Neural architecture search (NAS) has brought significant progress in recent image recognition tasks.
1 code implementation • 17 Feb 2022 • Che Wang, Xufang Luo, Keith Ross, Dongsheng Li
We propose VRL3, a powerful data-driven framework with a simple design for solving challenging visual deep reinforcement learning (DRL) tasks.
no code implementations • 4 Jan 2022 • Qunxi Zhu, Yifei Shen, Dongsheng Li, Wei Lin
Continuous-depth neural networks, such as the Neural Ordinary Differential Equations (ODEs), have aroused a great deal of interest from the communities of machine learning and data science in recent years, which bridge the connection between deep neural networks and dynamical systems.
no code implementations • NeurIPS 2021 • Caihua Shan, Yifei Shen, Yao Zhang, Xiang Li, Dongsheng Li
To address these issues, we propose a RL-enhanced GNN explainer, RG-Explainer, which consists of three main components: starting point selection, iterative graph generation and stopping criteria learning.
1 code implementation • 16 Nov 2021 • Hengzhi Pei, Kan Ren, Yuqing Yang, Chang Liu, Tao Qin, Dongsheng Li
In this paper, we propose a novel generative framework for RTS data - RTSGAN to tackle the aforementioned challenges.
2 code implementations • NeurIPS 2021 • Xinyang Jiang, Lu Liu, Caihua Shan, Yifei Shen, Xuanyi Dong, Dongsheng Li
In this paper, we consider a different data format for images: vector graphics.
no code implementations • 18 Oct 2021 • Shengwei Li, Zhiquan Lai, Dongsheng Li, Yiming Zhang, Xiangyu Ye, Yabo Duan
EmbRace introduces Sparsity-aware Hybrid Communication, which integrates AlltoAll and model parallelism into data-parallel training, so as to reduce the communication overhead of highly sparse parameters.
no code implementations • 18 Oct 2021 • Tao Sun, Huaming Ling, Zuoqiang Shi, Dongsheng Li, Bao Wang
In this paper, to eliminate the effort for tuning the momentum-related hyperparameter, we propose a new adaptive momentum inspired by the optimal choice of the heavy ball momentum for quadratic optimization.
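The "optimal choice of the heavy ball momentum for quadratic optimization" that inspires this work is the classic Polyak setting: for f(x) = 0.5 xᵀAx with eigenvalues in [mu, L], the optimal momentum is beta = ((√L−√mu)/(√L+√mu))² with step size alpha = 4/(√L+√mu)². A sketch of that baseline (not the paper's adaptive rule):

```python
import numpy as np

# Quadratic with known curvature bounds: mu = 1, L = 10.
A = np.diag([1.0, 10.0])
mu, L = 1.0, 10.0
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

# Heavy-ball iteration: x_{k+1} = x_k - alpha * grad + beta * (x_k - x_{k-1}).
x = np.array([5.0, -3.0])
x_prev = x.copy()
for _ in range(200):
    grad = A @ x
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x
final_err = float(np.linalg.norm(x))
```

The catch motivating the paper is that this optimal beta requires knowing mu and L; the proposed adaptive momentum aims to remove that hyperparameter tuning.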
no code implementations • 5 Oct 2021 • Keshi Ge, Yongquan Fu, Zhiquan Lai, Xiaoge Deng, Dongsheng Li
Distributed stochastic gradient descent (SGD) approach has been widely used in large-scale deep learning, and the gradient collective method is vital to ensure the training scalability of the distributed deep learning system.
no code implementations • ICLR 2022 • Yixuan Chen, Yubin Shi, Dongsheng Li, Yujiang Wang, Mingzhi Dong, Yingying Zhao, Robert Dick, Qin Lv, Fan Yang, Li Shang
The feature space of deep models is inherently compositional.
no code implementations • 29 Sep 2021 • Ziyue Li, Kan Ren, Xinyang Jiang, Mingzhe Han, Haipeng Zhang, Dongsheng Li
Real-world data is often generated by some complex distribution, which can be approximated by a composition of multiple simpler distributions.
no code implementations • 29 Sep 2021 • Zhengyu Yang, Kan Ren, Xufang Luo, Weiqing Liu, Jiang Bian, Weinan Zhang, Dongsheng Li
Ensemble learning, which can consistently improve the prediction performance in supervised learning, has drawn increasing attention in reinforcement learning (RL).
no code implementations • ICLR 2022 • Dongqi Han, Tadashi Kozuno, Xufang Luo, Zhao-Yun Chen, Kenji Doya, Yuqing Yang, Dongsheng Li
How to make intelligent decisions is a central problem in machine learning and cognitive science.
no code implementations • 29 Sep 2021 • Han Zheng, Xufang Luo, Pengfei Wei, Xuan Song, Dongsheng Li, Jing Jiang
Specifically, we explicitly consider the difference between the online and offline data and apply an adaptive update scheme accordingly, i.e., a pessimistic update strategy for the offline dataset and a greedy or no pessimistic update scheme for the online dataset.
no code implementations • 29 Sep 2021 • Tairan He, Yuge Zhang, Kan Ren, Che Wang, Weinan Zhang, Dongsheng Li, Yuqing Yang
A good state representation is crucial to reinforcement learning (RL) while an ideal representation is hard to learn only with signals from the RL objective.
no code implementations • 30 Aug 2021 • Bo Li, Xinyang Jiang, Donglin Bai, Yuge Zhang, Ningxin Zheng, Xuanyi Dong, Lu Liu, Yuqing Yang, Dongsheng Li
The energy consumption of deep learning models is increasing at a breathtaking rate, which raises concerns due to potential negative effects on carbon neutrality in the context of global warming and climate change.
1 code implementation • 17 Aug 2021 • Yifei Shen, Yongji Wu, Yao Zhang, Caihua Shan, Jun Zhang, Khaled B. Letaief, Dongsheng Li
In this paper, we endeavor to obtain a better understanding of GCN-based CF methods via the lens of graph signal processing.
Ranked #5 on Collaborative Filtering on Gowalla
no code implementations • ICCV 2021 • Yezhen Wang, Bo Li, Tong Che, Kaiyang Zhou, Ziwei Liu, Dongsheng Li
Confidence calibration is of great importance to the reliability of decisions made by machine learning systems.
no code implementations • 11 Jun 2021 • Bo Li, Yifei Shen, Yezhen Wang, Wenzhen Zhu, Colorado J. Reed, Jun Zhang, Dongsheng Li, Kurt Keutzer, Han Zhao
IIB significantly outperforms IRM on synthetic datasets where pseudo-invariant features and geometric skews occur, showing the effectiveness of the proposed formulation in overcoming the failure modes of IRM.
no code implementations • 9 Jun 2021 • Baoyun Peng, Min Liu, Heng Yang, Zhaoning Zhang, Dongsheng Li
Based on the proposed quality measurement, we propose a deep Tiny Face Quality network (tinyFQnet) to learn a quality prediction function from data.
1 code implementation • 7 Jun 2021 • Sanshi Yu, Zhuoxuan Jiang, Dong-Dong Chen, Shanshan Feng, Dongsheng Li, Qi Liu, JinFeng Yi
Hence, the key is to make full use of rich interaction information among streamers, users, and products.
1 code implementation • AAAI 2021 • Chao Chen, Dongsheng Li, Junchi Yan, Hanchi Huang, Xiaokang Yang
One-bit matrix completion is an important class of positive-unlabeled (PU) learning problems where the observations consist of only positive examples, e.g., in top-N recommender systems.
1 code implementation • 4 May 2021 • Yunsheng Pang, Yunxiang Zhao, Dongsheng Li
Graph pooling, which summarizes the information in a large graph into a compact form, is essential in hierarchical graph representation learning.
no code implementations • 23 Apr 2021 • Tao Sun, Dongsheng Li, Bao Wang
In FedAvg, clients keep their data locally for privacy protection; a central parameter server is used to communicate between clients.
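The FedAvg round described here can be sketched in a few lines: each client runs local SGD on its own data, and the server averages the resulting models weighted by client dataset size. A schematic illustration on a toy least-squares problem (function names and data are hypothetical):

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A client's local SGD steps on squared loss; data never leaves it."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(w_global, client_data):
    """Server round: average local models, weighted by dataset size."""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    local_models = [local_update(w_global, X, y) for X, y in client_data]
    weights = sizes / sizes.sum()
    return sum(p * w for p, w in zip(weights, local_models))

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (30, 70):                       # two clients, unequal data sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):                      # 50 communication rounds
    w = fedavg(w, clients)
err = float(np.linalg.norm(w - w_true))
```

Only model parameters cross the network; the raw `(X, y)` pairs stay on each client, which is the privacy property the sentence highlights.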
no code implementations • 15 Apr 2021 • Dongsheng Li, Haodong Liu, Chao Chen, Yingying Zhao, Stephen M. Chu, Bo Yang
In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risks averaged over all the observed data.
no code implementations • 13 Apr 2021 • Ning Liu, Songlei Jian, Dongsheng Li, Yiming Zhang, Zhiquan Lai, Hongzuo Xu
Graph neural networks (GNN) have been proven to be mature enough for handling graph-structured data on node-level graph representation learning tasks.
no code implementations • 9 Apr 2021 • Yingying Zhao, Mingzhi Dong, Yujiang Wang, Da Feng, Qin Lv, Robert P. Dick, Dongsheng Li, Tun Lu, Ning Gu, Li Shang
By monitoring the impact of varying resolution on the quality of high-dimensional video analytics features, and hence on the accuracy of video analytics results, the proposed end-to-end optimization framework learns the best non-myopic policy for dynamically controlling the resolution of input video streams to globally optimize energy efficiency.
1 code implementation • 4 Apr 2021 • He Wang, Yifei Shen, Ziyuan Wang, Dongsheng Li, Jun Zhang, Khaled B. Letaief, Jie Lu
In this paper, we investigate the decentralized statistical inference problem, where a network of agents cooperatively recover a (structured) vector from private noisy samples without centralized coordination.
no code implementations • 2 Feb 2021 • Tao Sun, Dongsheng Li, Bao Wang
The stability and generalization of stochastic gradient-based methods provide valuable insights into understanding the algorithmic performance of machine learning models.
no code implementations • 30 Jan 2021 • Linbo Qiao, Tao Sun, Hengyue Pan, Dongsheng Li
In recent years, Deep Learning Alternating Minimization (DLAM), which is alternating minimization applied to the penalty form of deep neural network training, has been developed as an alternative algorithm to overcome several drawbacks of Stochastic Gradient Descent (SGD) algorithms.
1 code implementation • 21 Dec 2020 • Chao Yang, Su Feng, Dongsheng Li, HuaWei Shen, Guoqing Wang, Bin Jiang
Many works concentrate on reducing language bias, which makes models answer questions while ignoring visual content and language context.
no code implementations • 20 Dec 2020 • Chao Yang, Guoqing Wang, Dongsheng Li, HuaWei Shen, Su Feng, Bin Jiang
Reference expression comprehension (REC) aims to find the location in a given image that a phrase refers to.
no code implementations • 26 Oct 2020 • Zhenzhen Li, Jian-Yun Nie, Benyou Wang, Pan Du, Yuhan Zhang, Lixin Zou, Dongsheng Li
Distant supervision provides a means to create a large number of weakly labeled data at low cost for relation classification.
no code implementations • CVPR 2021 • Bo Li, Yezhen Wang, Shanghang Zhang, Dongsheng Li, Trevor Darrell, Kurt Keutzer, Han Zhao
First, we provide a finite sample bound for both classification and regression problems under Semi-DA.
no code implementations • 24 Jul 2020 • Yiqin Yu, Xu Min, Shiwan Zhao, Jing Mei, Fei Wang, Dongsheng Li, Kenney Ng, Shaochun Li
In real world applications like healthcare, it is usually difficult to build a machine learning prediction model that works universally well across different institutions.
1 code implementation • 10 Jun 2020 • Yu Tang, Zhigang Kan, Dequan Sun, Jingjing Xiao, Zhiquan Lai, Linbo Qiao, Dongsheng Li
We also provide novel update rules and theoretical convergence analysis.
no code implementations • 20 Feb 2020 • Tao Sun, Han Shen, Tianyi Chen, Dongsheng Li
Typically, the performance of TD(0) and TD($\lambda$) is very sensitive to the choice of stepsizes.
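The stepsize sensitivity noted here is easy to see on a toy example (an illustration of TD(0) only, not the paper's analysis): on a two-state Markov reward process, TD(0) with a decaying stepsize converges to the true value function.

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.5, 0.5], [0.5, 0.5]])   # transition matrix
r = np.array([1.0, 0.0])                 # reward on leaving each state
gamma = 0.9
# True values solve v = r + gamma * P v
v_true = np.linalg.solve(np.eye(2) - gamma * P, r)

def td0(stepsize, n_steps=20000):
    v = np.zeros(2)
    s = 0
    for t in range(1, n_steps + 1):
        s_next = rng.choice(2, p=P[s])
        delta = r[s] + gamma * v[s_next] - v[s]   # TD error
        v[s] += stepsize(t) * delta               # stochastic approximation step
        s = s_next
    return v

v_decay = td0(lambda t: 1.0 / (1 + 0.01 * t))    # Robbins-Monro decay
```

With a large constant stepsize the iterates keep fluctuating around `v_true`; the decaying schedule trades off early progress against asymptotic noise, which is exactly the tuning burden the entry above alludes to.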
1 code implementation • ICCV 2019 • Ke Yang, Dongsheng Li, Yong Dou
It is challenging for weakly supervised object detection network to precisely predict the positions of the objects, since there are no instance-level category annotations.
no code implementations • 24 Oct 2019 • Lei Guan, Wotao Yin, Dongsheng Li, Xicheng Lu
It allows the overlapping of the pipelines of multiple micro-batches, including those belonging to different mini-batches.
no code implementations • NeurIPS 2019 • Tao Sun, Yuejiao Sun, Dongsheng Li, Qing Liao
In this paper, we propose a general proximal incremental aggregated gradient algorithm, which contains various existing algorithms including the basic incremental aggregated gradient method.
no code implementations • 23 Sep 2019 • Tao Sun, Dongsheng Li
Decentralized stochastic gradient methods have emerged as a promising solution for solving large-scale machine learning problems.
1 code implementation • IJCNLP 2019 • Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li
Rapid progress has been made in the field of reading comprehension and question answering, where several systems have achieved human parity in some simplified settings.
Ranked #8 on Question Answering on DROP Test
no code implementations • 23 Jul 2019 • Tao Sun, Dongsheng Li, Zhe Quan, Hao Jiang, Shengguo Li, Yong Dou
In this paper, we answer a question: can the nonconvex heavy-ball algorithms with random initialization avoid saddle points?
no code implementations • ACL 2019 • Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, Dongsheng Li
Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is often laborious to create and limited in size.
1 code implementation • ACL 2019 • Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li
This paper considers the reading comprehension task in which multiple documents are given as input.
1 code implementation • ACL 2019 • Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, Yiwei Lv
Open-domain targeted sentiment analysis aims to detect opinion targets along with their sentiment polarities from a sentence.
Aspect-Based Sentiment Analysis (ABSA), Aspect Term Extraction and Sentiment Classification
2 code implementations • ICCV 2019 • Baoyun Peng, Xiao Jin, Jiaheng Liu, Shunfeng Zhou, Yi-Chao Wu, Yu Liu, Dongsheng Li, Zhaoning Zhang
Most teacher-student frameworks based on knowledge distillation (KD) depend on a strong congruent constraint on instance level.
no code implementations • 26 Feb 2019 • Ke Yang, Peng Qiao, Dongsheng Li, Yong Dou
Focusing on discriminative spatiotemporal feature learning, we propose the Information Fused Temporal Transformation Network (IF-TTN) for action recognition on top of the popular Temporal Segment Network (TSN) framework.
no code implementations • 14 Feb 2019 • Ke Yang, Xiaolong Shen, Peng Qiao, Shijie Li, Dongsheng Li, Yong Dou
The proposed FSN can make dense predictions at frame-level for a video clip using both spatial and temporal context information.
no code implementations • 9 Feb 2019 • Tao Sun, Dongsheng Li, Hao Jiang, Zhe Quan
In this paper, we consider a class of nonconvex problems with linear constraints appearing frequently in the area of image processing.
no code implementations • 6 Nov 2018 • Dongsheng Li, Chao Chen, Qin Lv, Junchi Yan, Li Shang, Stephen M. Chu
Collaborative filtering (CF) is a popular technique in today's recommender systems, and matrix approximation-based CF methods have achieved great success in both rating prediction and top-N recommendation tasks.
no code implementations • 5 Nov 2018 • Tao Sun, Penghang Yin, Dongsheng Li, Chun Huang, Lei Guan, Hao Jiang
For objective functions satisfying a relaxed strongly convex condition, the linear convergence is established under weaker assumptions on the step size and inertial parameter than made in the existing literature.
no code implementations • 11 Sep 2018 • Lei Guan, Linbo Qiao, Dongsheng Li, Tao Sun, Keshi Ge, Xicheng Lu
Support vector machines (SVMs) with sparsity-inducing nonconvex penalties have received considerable attention for combining automatic classification with variable selection.
no code implementations • EMNLP 2018 • Minghao Hu, Yuxing Peng, Furu Wei, Zhen Huang, Dongsheng Li, Nan Yang, Ming Zhou
Although current reading comprehension systems have achieved significant advances, their promising performance is often obtained at the cost of making an ensemble of numerous models.
no code implementations • 17 Aug 2018 • Minghao Hu, Furu Wei, Yuxing Peng, Zhen Huang, Nan Yang, Dongsheng Li
Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred.
Ranked #11 on Question Answering on SQuAD2.0 dev
no code implementations • 10 Apr 2018 • Hao Yu, Zhaoning Zhang, Zheng Qin, Hao Wu, Dongsheng Li, Jun Zhao, Xicheng Lu
LRM is a general method for real-time detectors, as it utilizes the final feature map which exists in all real-time detectors to mine hard examples.
3 code implementations • 27 Mar 2018 • Zheng Qin, Zhaoning Zhang, Dongsheng Li, Yiming Zhang, Yuxing Peng
Depthwise convolutions provide significant performance benefits owing to the reduction in both parameters and mult-adds.
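The parameter and mult-add savings can be checked with plain arithmetic (a sketch of the standard depthwise-separable accounting, as in MobileNet-style designs; the layer sizes below are illustrative):

```python
# Parameter and mult-add counts: standard conv vs depthwise separable
# conv (depthwise k x k followed by 1x1 pointwise). Stride 1, no bias.
def standard_conv(c_in, c_out, k, h, w):
    params = k * k * c_in * c_out
    mult_adds = params * h * w          # one k x k window per output position
    return params, mult_adds

def depthwise_separable(c_in, c_out, k, h, w):
    dw = k * k * c_in                   # one k x k filter per input channel
    pw = c_in * c_out                   # 1x1 pointwise channel mixing
    return dw + pw, (dw + pw) * h * w

p_std, _ = standard_conv(64, 128, 3, 56, 56)
p_sep, _ = depthwise_separable(64, 128, 3, 56, 56)
ratio = p_std / p_sep                   # about k^2 * c_out / (k^2 + c_out)
```

For a 3x3 layer mapping 64 to 128 channels, this works out to roughly an 8.4x reduction, matching the `1/c_out + 1/k^2` factor from the separable-convolution literature.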
no code implementations • 23 Jan 2018 • Tao Sun, Linbo Qiao, Dongsheng Li
The non-ergodic O(1/k) rate is proved for proximal inertial gradient descent with constant stepsize when the objective function is coercive.
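For context, the standard (iPiano-style) proximal inertial gradient iteration has the following form; this is a sketch assuming the objective splits as $f + g$ with $f$ smooth and $g$ proximable, with stepsize $\gamma$ and inertial parameter $\beta$ (the paper's exact scheme may differ):

```latex
x_{k+1} = \operatorname{prox}_{\gamma g}\!\bigl( x_k - \gamma \nabla f(x_k) + \beta \, (x_k - x_{k-1}) \bigr)
```

"Non-ergodic" means the rate holds for the last iterate $x_k$ itself, rather than for a running average of the iterates.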
1 code implementation • NeurIPS 2017 • Dongsheng Li, Chao Chen, Wei Liu, Tun Lu, Ning Gu, Stephen Chu
However, our studies show that submatrices with different ranks can coexist in the same user-item rating matrix, so approximations with fixed ranks cannot perfectly describe the internal structures of the rating matrix, thus leading to inferior recommendation accuracy.
Ranked #4 on Recommendation Systems on MovieLens 10M
no code implementations • 10 Aug 2017 • Ke Yang, Peng Qiao, Dongsheng Li, Shaohe Lv, Yong Dou
A newly proposed work exploits Convolutional-Deconvolutional-Convolutional (CDC) filters to upsample the predictions of 3D ConvNets, making it possible to perform per-frame action predictions and achieving promising performance in terms of temporal action localization.
Open-Ended Question Answering, Temporal Action Localization
no code implementations • 5 May 2017 • Minne Li, Zhaoning Zhang, Hao Yu, Xinyuan Chen, Dongsheng Li
S-OHEM exploits OHEM with stratified sampling, a widely-adopted sampling technique, to choose the training examples according to this influence during hard example mining, and thus enhance the performance of object detectors.
no code implementations • 16 Jul 2016 • Ke Yang, Dongsheng Li, Yong Dou, Shaohe Lv, Qiang Wang
Object detection is an important task in computer vision. A variety of methods have been proposed, but methods using only weak labels still fall short of satisfactory results. In this paper, we propose a new framework that uses a weakly supervised method's output as pseudo-strong labels to train a strongly supervised model. One weakly supervised method is treated as a black box to generate class-specific bounding boxes on the training dataset. A de-noising method is then applied to the noisy bounding boxes, and the de-noised pseudo-strong labels are used to train a strongly supervised object detection network. The whole framework remains weakly supervised because the entire process uses only image-level labels. Experimental results on PASCAL VOC 2007 demonstrate the validity of our framework: we achieve 43.4% mean average precision, compared to 39.5% for the previous best result and 34.5% for the initial method, respectively. The framework is simple and distinct, and is promising to apply to other methods easily.