no code implementations • ICML 2020 • Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, Wei Wang
Graph representation learning serves as the core of important prediction tasks, ranging from product recommendation to fraud detection.
no code implementations • 23 Aug 2023 • Jiangwei Wang, Lili Su, Songyang Han, Dongjin Song, Fei Miao
Then, through extensive experiments on the SUMO simulator, we show that our proposed algorithm achieves strong detection performance in both highway and urban traffic.
no code implementations • 25 Jul 2023 • Yang Jiao, Kai Yang, Dongjin Song
Distributionally Robust Optimization (DRO), which aims to find an optimal decision that minimizes the worst-case cost over an ambiguity set of probability distributions, has been widely applied in diverse applications, e.g., network behavior analysis and risk management.
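As a sketch, the generic DRO problem described above can be written as follows (notation is generic, not taken from the paper):

```latex
\min_{x \in \mathcal{X}} \;\; \sup_{P \in \mathcal{P}} \;\; \mathbb{E}_{\xi \sim P}\big[\ell(x, \xi)\big]
```

where $\mathcal{P}$ is the ambiguity set of candidate distributions (e.g., a ball around the empirical distribution), and $\ell(x, \xi)$ is the cost of decision $x$ under realization $\xi$.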
1 code implementation • 16 Jun 2023 • Kexin Zhang, Qingsong Wen, Chaoli Zhang, Rongyao Cai, Ming Jin, Yong Liu, James Zhang, Yuxuan Liang, Guansong Pang, Dongjin Song, Shirui Pan
To fill this gap, we review current state-of-the-art SSL methods for time series data in this article.
no code implementations • 8 Mar 2023 • Muzi Peng, Jiangwei Wang, Dongjin Song, Fei Miao, Lili Su
Deep learning is the method of choice for trajectory prediction for autonomous vehicles.
1 code implementation • 20 Dec 2022 • Yang Jiao, Kai Yang, Tiancheng Wu, Dongjin Song, Chengtao Jian
Bilevel optimization plays an essential role in many machine learning tasks, ranging from hyperparameter optimization to meta-learning.
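A generic bilevel problem of the kind referenced above can be sketched as follows (notation is mine, not the paper's):

```latex
\min_{x} \; F\big(x, \, y^{*}(x)\big)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y} \; f(x, y)
```

In hyperparameter optimization, for instance, $x$ is the hyperparameters, $f$ is the training loss minimized by the inner problem, and $F$ is the validation loss evaluated at the inner solution $y^{*}(x)$.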
no code implementations • 14 Oct 2022 • Yang Jiao, Kai Yang, Dongjin Song
Distributionally Robust Optimization (DRO), which aims to find an optimal decision that minimizes the worst-case cost over an ambiguity set of probability distributions, has been widely applied in diverse applications, e.g., network behavior analysis and risk management.
no code implementations • 9 May 2022 • Wei Zhu, Dongjin Song, Yuncong Chen, Wei Cheng, Bo Zong, Takehiko Mizoguchi, Cristian Lumezanu, Haifeng Chen, Jiebo Luo
Specifically, we first design an Exemplar-based Deep Neural network (ExDNN) to learn local time series representations based on their compatibility with an exemplar module, which consists of hidden parameters learned to capture a variety of normal patterns on each edge device.
no code implementations • 24 Jan 2022 • Jurijs Nazarovs, Cristian Lumezanu, Qianying Ren, Yuncong Chen, Takehiko Mizoguchi, Dongjin Song, Haifeng Chen
In this paper, we propose an ordered time series classification framework that is robust against missing classes in the training data, i.e., during testing we can prescribe classes that were missing during training.
no code implementations • 30 Nov 2021 • Xikun Zhang, Dongjin Song, Dacheng Tao
Despite significant advances in graph representation learning, little attention has been paid to the more practical continual learning scenario in which new categories of nodes (e.g., new research areas in citation networks, or new types of products in co-purchasing networks) and their associated edges are continuously emerging, causing catastrophic forgetting on previous categories.
no code implementations • 29 Jul 2021 • Xinyang Feng, Dongjin Song, Yuncong Chen, Zhengzhang Chen, Jingchao Ni, Haifeng Chen
Next, a dual-discriminator adversarial training procedure, which jointly considers an image discriminator that maintains local consistency at the frame level and a video discriminator that enforces the global coherence of temporal dynamics, is employed to enhance future frame prediction.
no code implementations • NeurIPS 2021 • Xikun Zhang, Dongjin Song, Dacheng Tao
The key challenge is to incorporate the feature and topological information of new nodes in a continuous and effective manner such that performance over existing nodes is uninterrupted.
1 code implementation • CVPR 2021 • Liang Tong, Zhengzhang Chen, Jingchao Ni, Wei Cheng, Dongjin Song, Haifeng Chen, Yevgeniy Vorobeychik
Moreover, we observe that open-set face recognition systems are more vulnerable than closed-set systems under different types of attacks.
1 code implementation • 26 Mar 2021 • Dongsheng Luo, Wei Cheng, Jingchao Ni, Wenchao Yu, Xuchao Zhang, Bo Zong, Yanchi Liu, Zhengzhang Chen, Dongjin Song, Haifeng Chen, Xiang Zhang
We present a contrastive learning approach with data augmentation techniques to learn document representations in an unsupervised manner.
1 code implementation • 3 Mar 2021 • Yinjun Wu, Jingchao Ni, Wei Cheng, Bo Zong, Dongjin Song, Zhengzhang Chen, Yanchi Liu, Xuchao Zhang, Haifeng Chen, Susan Davidson
Forecasting on sparse multivariate time series (MTS) aims to model the predictors of future values of time series given their incomplete past, which is important for many emerging applications.
no code implementations • 1 Jan 2021 • Chang Li, Dongjin Song, Dacheng Tao
Derived from a novel discovery that the SMDP option framework has an MDP equivalence, SA hierarchically extracts skills (abstract actions) from primary actions and explicitly encodes this knowledge into skill context vectors (embedding vectors).
no code implementations • 4 Oct 2020 • Yang Jiao, Kai Yang, Shaoyu Dou, Pan Luo, Sijia Liu, Dongjin Song
To this end, we propose an autonomous representation learning approach for multivariate time series (TimeAutoML) with irregular sampling rates and variable lengths.
no code implementations • ICLR 2020 • Lichen Wang, Bo Zong, Qianqian Ma, Wei Cheng, Jingchao Ni, Wenchao Yu, Yanchi Liu, Dongjin Song, Haifeng Chen, Yun Fu
Inductive and unsupervised graph learning is a critical technique for predictive or information retrieval tasks where label information is difficult to obtain.
no code implementations • 18 Dec 2019 • Xin Dong, Jingchao Ni, Wei Cheng, Zhengzhang Chen, Bo Zong, Dongjin Song, Yanchi Liu, Haifeng Chen, Gerard de Melo
In practice, however, these two sets of reviews are notably different: users' reviews reflect a variety of items that they have bought and are hence very heterogeneous in their topics, while an item's reviews pertain only to that single item and are thus topically homogeneous.
5 code implementations • 20 Nov 2018 • Chuxu Zhang, Dongjin Song, Yuncong Chen, Xinyang Feng, Cristian Lumezanu, Wei Cheng, Jingchao Ni, Bo Zong, Haifeng Chen, Nitesh V. Chawla
Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time series) correlations, and an attention-based Convolutional Long Short-Term Memory (ConvLSTM) network is developed to capture the temporal patterns.
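A minimal sketch of the signature-matrix idea mentioned above: each matrix summarizes pairwise correlations between sensors over a time window via scaled inner products. Function and variable names are illustrative, not from the paper's code.

```python
import numpy as np

def signature_matrix(X, t, w):
    """Pairwise inner-product 'signature' of n series over a window of length w.

    X: array of shape (n_series, T). Returns an (n_series, n_series) matrix
    whose (i, j) entry summarizes how series i and j co-vary in the window
    ending at time t.
    """
    window = X[:, t - w:t]           # (n_series, w) segment ending at time t
    return window @ window.T / w     # scaled pairwise inner products

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))    # 5 synthetic series, 100 time steps
M = signature_matrix(X, t=100, w=10)
print(M.shape)                       # (5, 5)
```

Computing such matrices at several window lengths yields the multi-scale input that the convolutional encoder then processes.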
1 code implementation • ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018 • Wenchao Yu, Cheng Zheng, Wei Cheng, Charu C. Aggarwal, Dongjin Song, Bo Zong, Haifeng Chen, Wei Wang
The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of variabilities in the vertex representations can capture the "semantics" of the original network structure.
14 code implementations • 7 Apr 2017 • Yao Qin, Dongjin Song, Haifeng Chen, Wei Cheng, Guofei Jiang, Garrison Cottrell
The nonlinear autoregressive exogenous (NARX) model, which predicts the current value of a time series from its previous values as well as the current and past values of multiple driving (exogenous) series, has been studied for decades.
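The NARX prediction described above can be sketched as follows (generic notation, not the paper's):

```latex
\hat{y}_{t} = f\big(y_{t-1}, \ldots, y_{t-d}, \; \mathbf{x}_{t}, \mathbf{x}_{t-1}, \ldots, \mathbf{x}_{t-d}\big)
```

where $f$ is a nonlinear function, $d$ is the lag order, $y$ is the target series, and $\mathbf{x}_{t}$ collects the exogenous driving series at time $t$.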
no code implementations • 21 Feb 2017 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
Our strategy learns a shallow high-order parametric embedding function and compares training/test data only with learned or precomputed exemplars, resulting in a cost function with linear computational complexity for both training and testing.
no code implementations • 16 Aug 2016 • Martin Renqiang Min, Hongyu Guo, Dongjin Song
These exemplars in combination with the feature mapping learned by HOPE effectively capture essential data variations.
no code implementations • ICCV 2015 • Dongjin Song, Wei Liu, Rongrong Ji, David A. Meyer, John R. Smith
In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information.
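The Hamming-distance ranking that Top-RSBC optimizes over can be illustrated with a small sketch; the function name and toy data below are mine, not from the paper.

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a query binary code.

    query_code: (b,) array of 0/1 bits; db_codes: (n, b) array of 0/1 bits.
    Returns database indices ordered from nearest to farthest.
    """
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0]])
order = hamming_rank(np.array([0, 1, 1, 0]), db)
print(order)  # nearest first: exact match at index 0
```

Top-RSBC learns the binary codes so that items sharing the query's supervision label land at the top of exactly this kind of ranked list.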