no code implementations • WMT (EMNLP) 2020 • Tanfang Chen, Weiwei Wang, Wenyang Wei, Xing Shi, Xiangang Li, Jieping Ye, Kevin Knight
This paper describes the DiDi AI Labs’ submission to the WMT2020 news translation shared task.
no code implementations • 26 Sep 2023 • Zhihao Shi, Jie Wang, Fanghua Lu, Hanzhu Chen, Defu Lian, Zheng Wang, Jieping Ye, Feng Wu
The inverse mapping leads to an objective function that is equivalent to that of joint training, while effectively incorporating GNNs into the training phase of NEs to counteract the learning bias.
no code implementations • 3 Sep 2023 • Haomin Wen, Youfang Lin, Lixia Wu, Xiaowei Mao, Tianyue Cai, Yunfeng Hou, Shengnan Guo, Yuxuan Liang, Guangyin Jin, Yiji Zhao, Roger Zimmermann, Jieping Ye, Huaiyu Wan
An emerging research area within these services is service Route & Time Prediction (RTP), which aims to estimate the future service route as well as the arrival time of a given worker.
no code implementations • ICCV 2023 • Kai Liu, Sheng Jin, Zhihang Fu, Ze Chen, Rongxin Jiang, Jieping Ye
The resulting accurate pseudo-tracklets boost feature-consistency learning.
no code implementations • 15 Jun 2023 • Zhentao Tan, Yue Wu, Qiankun Liu, Qi Chu, Le Lu, Jieping Ye, Nenghai Yu
Inspired by the various successful applications of large-scale pre-trained models (e.g., CLIP), in this paper we explore their potential benefits for this task from both the spatial feature representation learning and the semantic information embedding aspects: 1) for spatial feature representation learning, we design a Spatially-Adaptive Residual (SAR) Encoder to extract degraded areas adaptively.
1 code implementation • CVPR 2023 • Deyi Ji, Feng Zhao, Hongtao Lu, Mingyuan Tao, Jieping Ye
With the increasing interest and rapid development of methods for Ultra-High Resolution (UHR) segmentation, a large-scale benchmark covering a wide range of scenes with full fine-grained dense annotations is urgently needed to facilitate the field.
Ranked #1 on Semantic Segmentation on INRIA Aerial Image Labeling (mIoU metric)
no code implementations • 12 May 2023 • Junjie Liu, Junlong Liu, Rongxin Jiang, Yaowu Chen, Chen Shen, Jieping Ye
SLS-MPC then proposes a novel self-learning probability function, requiring no prior knowledge or hyper-parameters, to learn each view's individual distribution.
1 code implementation • 3 May 2023 • Xiong-Hui Chen, Bowei He, Yang Yu, Qingyang Li, Zhiwei Qin, Wenjie Shang, Jieping Ye, Chen Ma
However, building a user simulator with no reality gap, i.e., one that can predict users' feedback exactly, is unrealistic, because users' reaction patterns are complex and the historical logs for each user are limited, which might mislead the simulator-based recommendation policy.
1 code implementation • CVPR 2023 • Linzhi Huang, Yulong Li, Hongbo Tian, Yue Yang, Xiangang Li, Weihong Deng, Jieping Ye
The previous method ignored two problems: (i) when conducting interactive training between the large model and the lightweight model, the pseudo labels of the lightweight model are used to guide the large model.
no code implementations • 25 Jul 2022 • Shuang Qiu, Xiaohan Wei, Jieping Ye, Zhaoran Wang, Zhuoran Yang
Our algorithms feature a combination of Upper Confidence Bound (UCB)-type optimism and fictitious play under the scope of simultaneous policy optimization in a non-stationary environment.
no code implementations • 16 Jun 2022 • Langzhang Liang, Zenglin Xu, Zixing Song, Irwin King, Jieping Ye
In detail, by studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs, which is termed ResNorm (Reshaping the long-tailed distribution into a normal-like distribution via normalization).
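As a rough illustration of the underlying idea of degree-aware normalization (a minimal sketch with assumed hyper-parameters and layer shapes, not the ResNorm formulation itself):

```python
import torch

def degree_aware_norm(h, deg, alpha=0.5, eps=1e-6):
    """Illustrative degree-aware normalization of GNN node embeddings.

    h:   (N, d) node embeddings; deg: (N,) node degrees.
    Each node's embedding is standardized, then rescaled by a power of its
    degree so that high- and low-degree nodes contribute on a comparable
    scale. This is a generic sketch of the idea only, not ResNorm.
    """
    mu = h.mean(dim=1, keepdim=True)
    sigma = h.std(dim=1, keepdim=True) + eps
    scale = deg.clamp(min=1).float().pow(-alpha).unsqueeze(1)
    return scale * (h - mu) / sigma
```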
1 code implementation • 12 Jun 2022 • Lijie Xu, Shuang Qiu, Binhang Yuan, Jiawei Jiang, Cedric Renggli, Shaoduo Gan, Kaan Kara, Guoliang Li, Ji Liu, Wentao Wu, Jieping Ye, Ce Zhang
In this paper, we first conduct a systematic empirical study on existing data shuffling strategies, which reveals that all existing strategies have room for improvement -- they all suffer in terms of I/O performance or convergence rate.
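One common point in that design space is a bounded-buffer (sliding-window) shuffle, sketched below under assumed buffer semantics; it is given purely for orientation and is not the strategy proposed in the paper:

```python
import random

def buffered_shuffle(stream, buffer_size=10_000, seed=0):
    """Yield items from a sequential stream in buffered-random order.

    Keeps I/O sequential (good for disk-resident data) while injecting
    randomness from a bounded in-memory buffer, trading off against the
    convergence quality of a full shuffle.
    """
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]   # move a random item to the end
            yield buf.pop()
    rng.shuffle(buf)                            # drain the remaining buffer
    yield from buf
```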
no code implementations • 22 Feb 2022 • Shikai Luo, Ying Yang, Chengchun Shi, Fang Yao, Jieping Ye, Hongtu Zhu
The aim of this paper is to establish the causal relationship between a ride-sharing platform's policies and the outcomes of interest under complex temporally and/or spatially dependent experiments.
2 code implementations • 8 Feb 2022 • Zhanqiu Zhang, Jie Wang, Jieping Ye, Feng Wu
Surprisingly, we observe from experiments that the graph structure modeling in GCNs does not have a significant impact on the performance of KGC models, which is in contrast to the common belief.
1 code implementation • NeurIPS 2021 • Xiong-Hui Chen, Yang Yu, Qingyang Li, Fan-Ming Luo, Zhiwei Qin, Wenjie Shang, Jieping Ye
Current offline reinforcement learning methods commonly learn in the policy space constrained to in-support regions by the offline dataset, in order to ensure the robustness of the outcome policies.
no code implementations • 19 Oct 2021 • Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang
Then, given any extrinsic reward, the agent computes the policy via a planning algorithm with offline data collected in the exploration phase.
no code implementations • 8 Jun 2021 • Xiaocheng Tang, Zhiwei Qin, Fan Zhang, Zhaodong Wang, Zhe Xu, Yintai Ma, Hongtu Zhu, Jieping Ye
In this work, we propose a deep reinforcement learning based solution for order dispatching and we conduct large scale online A/B tests on DiDi's ride-dispatching platform to show that the proposed method achieves significant improvement on both total driver income and user experience related metrics.
no code implementations • 18 May 2021 • Xiaocheng Tang, Fan Zhang, Zhiwei Qin, Yansheng Wang, Dingyuan Shi, Bingchen Song, Yongxin Tong, Hongtu Zhu, Jieping Ye
In this paper we propose a unified value-based dynamic learning framework (V1D3) for tackling both tasks.
no code implementations • 3 May 2021 • Zhiwei Qin, Hongtu Zhu, Jieping Ye
In this paper, we present a comprehensive, in-depth survey of the literature on reinforcement learning approaches to decision optimization problems in a typical ridesharing system.
no code implementations • 8 Mar 2021 • Yan Jiao, Xiaocheng Tang, Zhiwei Qin, Shuaiji Li, Fan Zhang, Hongtu Zhu, Jieping Ye
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing (a type of mobility-on-demand, MoD) platforms.
1 code implementation • 11 Feb 2021 • Fan Zhou, Shikai Luo, XiaoHu Qie, Jieping Ye, Hongtu Zhu
How to dynamically measure the local-to-global spatio-temporal coherence between demand and supply networks is a fundamental task for ride-sourcing platforms, such as DiDi.
Optimization and Control, Applications
no code implementations • 1 Jan 2021 • Chengchun Shi, Xiaoyu Wang, Shikai Luo, Rui Song, Hongtu Zhu, Jieping Ye
A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in the pharmaceutical, technological, and traditional industries.
no code implementations • 1 Jan 2021 • Xiong-Hui Chen, Yang Yu, Qingyang Li, Zhiwei Tony Qin, Wenjie Shang, Yiping Meng, Jieping Ye
Instead of increasing the fidelity of models for policy learning, we handle the distortion issue via learning to adapt to diverse simulators generated by the offline dataset.
no code implementations • 7 Dec 2020 • Bingyu Liu, Yuhong Guo, Jieping Ye, Weihong Deng
Inspired by the effectiveness of pseudo-labels in domain adaptation, we propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
no code implementations • 3 Dec 2020 • Zhenpeng Li, Jianan Jiang, Yuhong Guo, Tiantian Tang, Chengxiang Zhuo, Jieping Ye
In the proposed model, we design a data imputation module to fill the missing feature values based on the partial observations in the target domain, while aligning the two domains via deep adversarial adaptation.
no code implementations • 14 Nov 2020 • Zhen Zhao, Yuhong Guo, Jieping Ye
Recently the problem of cross-domain object detection has started drawing attention in the computer vision community.
no code implementations • 11 Nov 2020 • Jintao Ke, Siyuan Feng, Zheng Zhu, Hai Yang, Jieping Ye
To address this issue, we propose a deep multi-task multi-graph learning approach, which combines two components: (1) multiple multi-graph convolutional (MGC) networks for predicting demands for different service modes, and (2) multi-task learning modules that enable knowledge sharing across multiple MGC networks.
no code implementations • 5 Nov 2020 • Xuanzhao Wang, Zhengping Che, Bo Jiang, Ning Xiao, Ke Yang, Jian Tang, Jieping Ye, Jingyu Wang, Qi Qi
In this paper, we propose a novel and robust unsupervised video anomaly detection method based on frame prediction, with a design that is more in line with the characteristics of surveillance videos.
no code implementations • 16 Oct 2020 • Tanfang Chen, Weiwei Wang, Wenyang Wei, Xing Shi, Xiangang Li, Jieping Ye, Kevin Knight
This paper describes DiDi AI Labs' submission to the WMT2020 news translation shared task.
1 code implementation • NeurIPS 2020 • Zhiyuan Xu, Kun Wu, Zhengping Che, Jian Tang, Jieping Ye
While Deep Reinforcement Learning (DRL) has emerged as a promising approach to many complex tasks, it remains challenging to train a single DRL agent that is capable of undertaking multiple different continuous control tasks.
no code implementations • 9 Oct 2020 • Yucheng Lin, Huiting Hong, Xiaoqing Yang, Xiaodi Yang, Pinghua Gong, Jieping Ye
Graph neural networks have become an important tool for modeling structured data.
no code implementations • 6 Sep 2020 • Jiang Lu, Pinghua Gong, Jieping Ye, Jianwei Zhang, ChangShui Zhang
The ability to learn and generalize successfully from very few samples is a noticeable demarcation separating artificial intelligence from human intelligence: humans can readily build their cognition of novel concepts from just a single example or a handful of examples, whereas machine learning algorithms typically require hundreds or thousands of supervised samples to guarantee generalization.
no code implementations • 23 Aug 2020 • Shuang Qiu, Zhuoran Yang, Xiaohan Wei, Jieping Ye, Zhaoran Wang
Existing approaches for this problem are based on two-timescale or double-loop stochastic gradient algorithms, which may also require sampling large-batch data.
no code implementations • 7 Aug 2020 • Teng Ye, Wei Ai, Lingyu Zhang, Ning Luo, Lulu Zhang, Jieping Ye, Qiaozhu Mei
Through interpreting the best-performing models, we discover many novel and actionable insights regarding how to optimize the design and the execution of team competitions on ride-sharing platforms.
no code implementations • CVPR 2020 • Xiehe Huang, Weihong Deng, Haifeng Shen, Xiubao Zhang, Jieping Ye
Deep learning techniques have dramatically boosted the performance of face alignment algorithms.
Ranked #3 on Face Alignment on WFLW
no code implementations • 24 Jun 2020 • Yiwen Sun, Kun fu, Zheng Wang, Chang-Shui Zhang, Jieping Ye
To address the data sparsity problem, we propose the Road Network Metric Learning framework for ETA (RNML-ETA).
no code implementations • 8 Jun 2020 • Zhen Zhao, Bingyu Liu, Yuhong Guo, Jieping Ye
In this paper, we present our proposed ensemble model with batch spectral regularization and data blending mechanisms for the Track 2 problem of the cross-domain few-shot learning (CD-FSL) challenge.
1 code implementation • 8 Jun 2020 • Jianan Jiang, Zhenpeng Li, Yuhong Guo, Jieping Ye
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method [2] by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
no code implementations • 7 Jun 2020 • Yiwen Sun, Yulu Wang, Kun fu, Zheng Wang, Chang-Shui Zhang, Jieping Ye
Furthermore, in order to evaluate Fusion RNN's sequence feature extraction capability, we choose a representative data mining task for sequence data, estimated time of arrival (ETA) and present a novel model based on Fusion RNN.
no code implementations • 7 Jun 2020 • Yiwen Sun, Yulu Wang, Kun fu, Zheng Wang, Ziang Yan, Chang-Shui Zhang, Jieping Ye
Estimated time of arrival (ETA) is one of the most important services in intelligent transportation systems and has become a challenging spatial-temporal (ST) data mining task in recent years.
no code implementations • 18 May 2020 • Bingyu Liu, Zhen Zhao, Zhenpeng Li, Jianan Jiang, Yuhong Guo, Jieping Ye
In this paper, we propose a feature transformation ensemble model with batch spectral regularization for the Cross-domain few-shot learning (CD-FSL) challenge.
no code implementations • 16 May 2020 • Chao Xiong, Che Liu, Zijun Xu, Junfeng Jiang, Jieping Ye
In this work, we propose a matching network, called sequential sentence matching network (S2M), to use the sentence-level semantic information to address the problem.
no code implementations • 23 Apr 2020 • Yiwen Sun, Yulu Wang, Kun fu, Zheng Wang, Chang-Shui Zhang, Jieping Ye
Recently, deep learning based methods have achieved promising results by adopting graph convolutional network (GCN) to extract the spatial correlations and recurrent neural network (RNN) to capture the temporal dependencies.
1 code implementation • 2 Apr 2020 • Mengyue Yang, Qingyang Li, Zhiwei Qin, Jieping Ye
In this paper, we propose a hierarchical adaptive contextual bandit method (HATCH) to conduct the policy learning of contextual bandits with a budget constraint.
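A much simpler baseline in the same spirit is a flat LinUCB policy gated by a hard budget, sketched below with assumed class and parameter names; HATCH's hierarchical allocation across resource levels is not reproduced here:

```python
import numpy as np

class BudgetedLinUCB:
    """Generic LinUCB with a hard budget check -- an illustrative baseline,
    not the hierarchical adaptive allocation used by HATCH."""

    def __init__(self, n_arms, dim, alpha=1.0, budget=1000.0, cost_per_pull=1.0):
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums
        self.alpha, self.budget, self.cost = alpha, budget, cost_per_pull

    def select(self, x):
        """Return the arm with the highest upper confidence bound, or None
        if the remaining budget cannot cover another pull."""
        if self.budget < self.cost:
            return None
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.budget -= self.cost
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```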
1 code implementation • 2020 IEEE 36th International Conference on Data Engineering (ICDE) 2020 • Hongzhi Shi, Quanming Yao, Qi Guo, Yaguang Li, Lingyu Zhang, Jieping Ye, Yong Li, Yan Liu
Predicting Origin-Destination (OD) flow is a crucial problem for intelligent transportation.
no code implementations • 29 Mar 2020 • Zhenpeng Li, Zhen Zhao, Yuhong Guo, Haifeng Shen, Jieping Ye
However, in practice the labeled data can come from multiple source domains with different distributions.
no code implementations • ECCV 2020 • Zhen Zhao, Yuhong Guo, Haifeng Shen, Jieping Ye
In this paper, we propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection by exploiting multi-label object recognition as a dual auxiliary task.
no code implementations • 17 Mar 2020 • Luanxuan Hou, Jie Cao, Yuan Zhao, Haifeng Shen, Yiping Meng, Ran He, Jieping Ye
Finally, we propose a differentiable automatic data augmentation method to further improve estimation accuracy.
no code implementations • NeurIPS 2020 • Shuang Qiu, Xiaohan Wei, Zhuoran Yang, Jieping Ye, Zhaoran Wang
In particular, we prove that the proposed algorithm achieves $\widetilde{\mathcal{O}}(L|\mathcal{S}|\sqrt{|\mathcal{A}|T})$ upper bounds of both the regret and the constraint violation, where $L$ is the length of each episode.
1 code implementation • 5 Feb 2020 • Chengchun Shi, Xiaoyu Wang, Shikai Luo, Hongtu Zhu, Jieping Ye, Rui Song
A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in the pharmaceutical, technological, and traditional industries.
no code implementations • 20 Jan 2020 • Jie Gui, Zhenan Sun, Yonggang Wen, DaCheng Tao, Jieping Ye
Generative adversarial networks (GANs) have recently become a hot research topic.
1 code implementation • 19 Dec 2019 • Huiting Hong, Hantao Guo, Yu-Cheng Lin, Xiaoqing Yang, Zang Li, Jieping Ye
In this paper, we focus on graph representation learning of heterogeneous information network (HIN), in which various types of vertices are connected by various types of relations.
no code implementations • 25 Nov 2019 • John Holler, Risto Vuorio, Zhiwei Qin, Xiaocheng Tang, Yan Jiao, Tiancheng Jin, Satinder Singh, Chenxi Wang, Jieping Ye
Order dispatching and driver repositioning (also known as fleet management) in the face of spatially and temporally varying supply and demand are central to a ride-sharing platform marketplace.
1 code implementation • 11 Nov 2019 • Yang Liu, Fanyou Wu, Baosheng Yu, Zhiyuan Liu, Jieping Ye
How to build an effective large-scale traffic state prediction system is a challenging but highly valuable problem.
1 code implementation • 17 Oct 2019 • Jintao Ke, Xiaoran Qin, Hai Yang, Zhengfei Zheng, Zheng Zhu, Jieping Ye
To overcome this challenge, we propose the Spatio-Temporal Encoder-Decoder Residual Multi-Graph Convolutional network (ST-ED-RMGC), a novel deep learning model for predicting ride-sourcing demand of various OD pairs.
no code implementations • 7 Oct 2019 • Ming Zhou, Jiarui Jin, Wei-Nan Zhang, Zhiwei Qin, Yan Jiao, Chenxi Wang, Guobin Wu, Yong Yu, Jieping Ye
Improving the efficiency of dispatching orders to vehicles is a research hotspot in online ride-hailing systems.
Multi-agent Reinforcement Learning, Reinforcement Learning
no code implementations • 26 Sep 2019 • Huapeng Wu, Zhengxia Zou, Jie Gui, Wen-Jun Zeng, Jieping Ye, Jun Zhang, Hongyi Liu, Zhihui Wei
In this paper, we make a thorough investigation of the attention mechanisms in an SR model and shed light on how simple and effective improvements on these ideas advance the state of the art.
no code implementations • 25 Sep 2019 • Xu Geng, Lingyu Zhang, Shulin Li, Yuanbo Zhang, Lulu Zhang, Leye Wang, Qiang Yang, Hongtu Zhu, Jieping Ye
Deep learning based approaches have been widely used in various urban spatio-temporal forecasting problems, but most of them fail to account for the unsmoothness issue of urban data in their architecture design, which significantly deteriorates their prediction performance.
no code implementations • 25 Sep 2019 • Shupeng Gui, Xiangliang Zhang, Pan Zhong, Shuang Qiu, Mingrui Wu, Jieping Ye, Zhengdao Wang, Ji Liu
The key problem in graph node embedding lies in how to define the dependence to neighbors.
no code implementations • 20 Aug 2019 • Yu-Cheng Lin, Xiaoqing Yang, Zang Li, Jieping Ye
In this paper, we propose two novel algorithms, GHINE (General Heterogeneous Information Network Embedding) and AHINE (Adaptive Heterogeneous Information Network Embedding), to compute distributed representations for elements in heterogeneous networks.
1 code implementation • AAAI 2019 • Xu Geng, Yaguang Li, Leye Wang, Lingyu Zhang, Qiang Yang, Jieping Ye, Yan Liu
This task is challenging due to the complicated spatiotemporal dependencies among regions.
no code implementations • 12 Jul 2019 • Wenjie Shang, Yang Yu, Qingyang Li, Zhiwei Qin, Yiping Meng, Jieping Ye
DEMER also derives a recommendation policy with a significantly improved performance in the test phase of the real application.
no code implementations • 11 Jul 2019 • Guojun Wu, Yanhua Li, Zhenming Liu, Jie Bao, Yu Zheng, Jieping Ye, Jun Luo
In this paper, we define and investigate a general reward transformation problem (namely, reward advancement): recovering the range of additional reward functions that transform the agent's policy from the original policy to a predefined target policy under the MCE principle.
no code implementations • 6 Jul 2019 • Ning Liu, Xiaolong Ma, Zhiyuan Xu, Yanzhi Wang, Jian Tang, Jieping Ye
This work proposes AutoCompress, an automatic structured pruning framework with the following key performance improvements: (i) effectively incorporate the combination of structured pruning schemes in the automatic process; (ii) adopt the state-of-the-art ADMM-based structured weight pruning as the core algorithm, and propose an innovative additional purification step for further weight reduction without accuracy loss; and (iii) develop an effective heuristic search method enhanced by experience-based guided search, replacing the prior deep reinforcement learning technique, which has an underlying incompatibility with the target pruning problem.
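For a concrete, if much simplified, picture of structured pruning, the sketch below performs magnitude-based filter pruning in PyTorch (assumed shapes and keep ratio); it illustrates filter-level sparsity only and is not the ADMM-based algorithm or the experience-guided search of AutoCompress:

```python
import torch

def prune_conv_filters(conv_weight, keep_ratio=0.5):
    """Return a 0/1 mask over output filters, keeping the largest-norm ones.

    conv_weight: tensor of shape (out_channels, in_channels, kH, kW).
    Zeroing whole filters (rather than individual weights) preserves a
    dense, hardware-friendly layout, which is the point of structured pruning.
    """
    norms = conv_weight.flatten(1).norm(p=2, dim=1)  # one L2 norm per output filter
    k = max(1, int(keep_ratio * norms.numel()))
    keep = torch.topk(norms, k).indices
    mask = torch.zeros_like(norms)
    mask[keep] = 1.0
    return mask.view(-1, 1, 1, 1)                    # broadcastable over each filter
```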
no code implementations • 27 May 2019 • Jiarui Jin, Ming Zhou, Wei-Nan Zhang, Minne Li, Zilong Guo, Zhiwei Qin, Yan Jiao, Xiaocheng Tang, Chenxi Wang, Jun Wang, Guobin Wu, Jieping Ye
How to optimally dispatch orders to vehicles and how to trade off between immediate and future returns are fundamental questions for a typical ride-hailing platform.
Multiagent Systems
no code implementations • 27 May 2019 • Xu Geng, Xiyu Wu, Lingyu Zhang, Qiang Yang, Yan Liu, Jieping Ye
To incorporate multiple relationships into spatial feature extraction, we define the problem as a multi-modal machine learning problem on multi-graph convolution networks.
no code implementations • 23 May 2019 • Zhenyu Shou, Xuan Di, Jieping Ye, Hongtu Zhu, Hua Zhang, Robert Hampshire
The passenger-seeking process of vacant taxi drivers in a road network generates additional vehicle miles traveled, adding congestion and pollution to the road network and the environment.
1 code implementation • 13 May 2019 • Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, Jieping Ye
Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years.
no code implementations • 3 Apr 2019 • Zhengping Che, Guangyu Li, Tracy Li, Bo Jiang, Xuefeng Shi, Xinsheng Zhang, Ying Lu, Guobin Wu, Yan Liu, Jieping Ye
Driving datasets accelerate the development of intelligent driving and related computer vision technologies, while substantial and detailed annotations serve as fuel to boost the efficacy of such datasets for improving learning-based models.
no code implementations • 18 Mar 2019 • Ji Zhao, Meiyu Yu, Huan Chen, Boning Li, Lingyu Zhang, Qi Song, Li Ma, Hua Chai, Jieping Ye
An accurate similarity calculation is challenging since the mismatch between a query and a retrieval text may exist in the case of a mistyped query or an alias inquiry.
no code implementations • 30 Jan 2019 • Ming Lin, Shuang Qiu, Jieping Ye, Xiaomin Song, Qi Qian, Liang Sun, Shenghuo Zhu, Rong Jin
This bound is suboptimal compared to the information-theoretic lower bound $\mathcal{O}(kd)$.
no code implementations • 11 Nov 2018 • Ishan Jindal, Zhiwei Qin, Xue-wen Chen, Matthew Nokleby, Jieping Ye
In this paper, we develop a reinforcement learning (RL) based system to learn an effective policy for carpooling that maximizes transportation efficiency so that fewer cars are required to fulfill the given amount of trip demand.
no code implementations • 27 Sep 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Our method can 1) learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, 2) automatically decide the significance of neighbors at different distances, and 3) be applicable to both homogeneous and heterogeneous graph embedding, which may contain multiple types of nodes.
no code implementations • 28 May 2018 • Shupeng Gui, Xiangliang Zhang, Shuang Qiu, Mingrui Wu, Jieping Ye, Ji Liu
Graph embedding is a central problem in social network analysis and many other applications, aiming to learn the vector representation for each node.
1 code implementation • 23 Feb 2018 • Huaxiu Yao, Fei Wu, Jintao Ke, Xianfeng Tang, Yitian Jia, Siyu Lu, Pinghua Gong, Jieping Ye, Zhenhui Li
Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations.
no code implementations • 12 Oct 2017 • Ishan Jindal, Tony Qin, Xue-wen Chen, Matthew Nokleby, Jieping Ye
In building intelligent transportation systems such as taxi or rideshare services, accurate prediction of travel time and distance is crucial for customer experience and resource management.
no code implementations • 12 Sep 2017 • Tao Yang, Paul Thompson, Sihai Zhao, Jieping Ye
As a regression model, it is competitive with state-of-the-art sparse models; as a variable selection method, SGLGG is promising for identifying Alzheimer's disease-related risk SNPs.
no code implementations • 31 Aug 2017 • Jie Zhang, Qingyang Li, Richard J. Caselli, Jieping Ye, Yalin Wang
First, we pre-train a CNN on the ImageNet dataset and transfer the knowledge from the pre-trained model to the medical imaging progression representation, generating the features for different tasks.
no code implementations • 19 Jul 2017 • Xiang Li, Aoxiao Zhong, Ming Lin, Ning Guo, Mu Sun, Arkadiusz Sitek, Jieping Ye, James Thrall, Quanzheng Li
However, the development of a robust and reliable deep learning model for computer-aided diagnosis is still highly challenging due to the combination of the high heterogeneity in the medical images and the relative lack of training samples.
no code implementations • 23 Jun 2017 • Hemanth Venkateswara, Prasanth Lade, Binbin Lin, Jieping Ye, Sethuraman Panchanathan
Estimating the MI for a subset of features is often intractable.
no code implementations • 22 Jun 2017 • Hemanth Venkateswara, Prasanth Lade, Jieping Ye, Sethuraman Panchanathan
Popular domain adaptation (DA) techniques learn a classifier for the target domain by sampling relevant data points from the source and combining it with the target data.
no code implementations • 27 Apr 2017 • Qingyang Li, Dajiang Zhu, Jie Zhang, Derrek Paul Hibar, Neda Jahanshad, Yalin Wang, Jieping Ye, Paul M. Thompson, Jie Wang
Then we select the relevant group features by performing the group Lasso feature selection process in a sequence of parameters.
no code implementations • 17 Mar 2017 • Shuang Qiu, Tingjin Luo, Jieping Ye, Ming Lin
We study an extreme scenario in multi-label learning where each training instance is endowed with a single one-bit label out of multiple labels.
no code implementations • 2 Mar 2017 • Ming Lin, Shuang Qiu, Bin Hong, Jieping Ye
We show that the conventional gradient descent heuristic is biased by the skewness of the distribution and is therefore no longer the best practice for learning the SLM.
no code implementations • NeurIPS 2016 • Ming Lin, Jieping Ye
We develop an efficient alternating framework for learning a generalized version of the Factorization Machine (gFM) on streaming data with provable guarantees.
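For context, the standard second-order Factorization Machine that gFM generalizes scores an input $x \in \mathbb{R}^d$ as (background only; the generalized interaction structure in the paper may differ) \begin{equation*} \hat{y}(x) = w_0 + \sum_{i=1}^{d} w_i x_i + \sum_{i=1}^{d}\sum_{j=i+1}^{d} \langle v_i, v_j \rangle\, x_i x_j, \end{equation*} where $w_0 \in \mathbb{R}$, $w \in \mathbb{R}^d$, and the latent factors $v_i \in \mathbb{R}^k$ parameterize the pairwise feature interactions.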
no code implementations • 19 Aug 2016 • Qingyang Li, Tao Yang, Liang Zhan, Derrek Paul Hibar, Neda Jahanshad, Yalin Wang, Jieping Ye, Paul M. Thompson, Jie Wang
To the best of our knowledge, this is the first successful run of the computationally intensive model selection procedure to learn a consistent model across different institutions without compromising their privacy while ranking the SNPs that may collectively affect AD.
no code implementations • 25 Jul 2016 • Kai Zhang, Chuanren Liu, Jie Zhang, Hui Xiong, Eric Xing, Jieping Ye
Given a matrix $A$ of size $m \times n$, state-of-the-art randomized algorithms take $\mathcal{O}(mn)$ time and space to obtain its low-rank decomposition.
1 code implementation • ICML 2017 • Weizhong Zhang, Bin Hong, Wei Liu, Jieping Ye, Deng Cai, Xiaofei He, Jie Wang
By noting that sparse SVMs induce sparsities in both feature and sample spaces, we propose a novel approach, which is based on accurate estimations of the primal and dual optima of sparse SVMs, to simultaneously identify the inactive features and samples that are guaranteed to be irrelevant to the outputs.
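For orientation, one common sparse SVM formulation (assumed here; the paper's exact model may differ) couples a hinge loss with an $\ell_1$ penalty, \begin{equation*} \min_{w}\; \frac{1}{n}\sum_{i=1}^{n} \max\left(0,\, 1 - y_i\, w^{\top} x_i\right) + \lambda \|w\|_1, \end{equation*} so that both features (zero entries of $w$) and samples (points correctly classified beyond the margin) can be irrelevant to the optimum, which is exactly what simultaneous screening exploits.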
no code implementations • NeurIPS 2015 • Jie Wang, Jieping Ye
By a novel hierarchical projection algorithm, MLFre is able to test the nodes independently from any of their ancestor nodes.
no code implementations • NeurIPS 2015 • Pinghua Gong, Jieping Ye
(2) We establish a rigorous convergence analysis for HONOR, which shows that convergence is guaranteed even for non-convex problems, while it is typically challenging to analyze the convergence for non-convex problems.
no code implementations • 15 May 2015 • Jie Wang, Jieping Ye
One of the appealing features of DPC is that it is safe, in the sense that the detected inactive features are guaranteed to have zero coefficients in the solution vectors across all tasks.
no code implementations • NeurIPS 2014 • Jie Wang, Jieping Ye
Sparse-Group Lasso (SGL) has been shown to be a powerful regression technique for simultaneously discovering group and within-group sparse patterns by using a combination of the $\ell_1$ and $\ell_2$ norms.
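In its standard form (stated here as a reminder; notation may differ slightly from the paper), SGL solves \begin{equation*} \min_{\beta}\; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \sum_{g=1}^{G} \|\beta_{g}\|_2, \end{equation*} where $\beta_g$ denotes the coefficients of group $g$: the $\ell_1$ term induces within-group sparsity while the group-wise $\ell_2$ terms zero out entire groups.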
no code implementations • 31 Aug 2014 • Moo K. Chung, Jamie L. Hanson, Jieping Ye, Richard J. Davidson, Seth D. Pollak
Sparse systems are usually parameterized by a tuning parameter that determines the sparsity of the system.
no code implementations • 30 Jul 2014 • Binbin Lin, Qingyang Li, Qian Sun, Ming-Jun Lai, Ian Davidson, Wei Fan, Jieping Ye
The effectiveness of gene expression pattern annotation relies on the quality of feature representation.
no code implementations • 4 Jun 2014 • Pinghua Gong, Jieping Ye
Under the strongly convex condition, these variance-reduced stochastic gradient algorithms achieve a linear convergence rate.
no code implementations • 1 May 2014 • Binbin Lin, Ji Yang, Xiaofei He, Jieping Ye
Based on our theoretical analysis, we propose to first learn the gradient field of the distance function and then learn the distance function itself.
1 code implementation • 4 Apr 2014 • Zheng Wang, Ming-Jun Lai, Zhaosong Lu, Wei Fan, Hasan Davulcu, Jieping Ye
Numerical results show that our proposed algorithm is more efficient than competing algorithms while achieving similar or better prediction performance.
no code implementations • 2 Jan 2014 • Chao Zhang, Lei Zhang, Wei Fan, Jieping Ye
Finally, we analyze the asymptotic convergence and the rate of convergence of the learning process for representative domain adaptation.
no code implementations • 31 Dec 2013 • Ji Liu, Ryohei Fujimaki, Jieping Ye
Our new bounds are consistent with the bounds of a special case (least squares) and fill a previously existing theoretical gap for general convex smooth functions; 3) We show that the restricted strong convexity condition is satisfied if the number of independent samples is more than $\bar{k}\log d$ where $\bar{k}$ is the sparsity number and $d$ is the dimension of the variable; 4) We apply FoBa-gdt (with the conditional random field objective) to the sensor selection problem for human indoor activity recognition and our results show that FoBa-gdt outperforms other methods (including the ones based on forward greedy selection and $\ell_1$-regularization).
no code implementations • 25 Oct 2013 • Jie Wang, Peter Wonka, Jieping Ye
Some appealing features of our screening method are: (1) DVI is safe in the sense that the vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data set needs to be scanned only once to run the screening, whose computational cost is negligible compared to that of solving the SVM problem; (3) DVI is independent of the solvers and can be integrated with any existing efficient solvers.
no code implementations • 29 Jul 2013 • Jun Liu, Zheng Zhao, Jie Wang, Jieping Ye
Safe screening is gaining increasing attention since 1) solving sparse learning formulations usually has a high computational cost especially when the number of features is large and 2) one needs to try several regularization parameters to select a suitable model.
no code implementations • 16 Jul 2013 • Jie Wang, Jun Liu, Jieping Ye
One key building block of the proposed algorithm is the $\ell_{1,q}$-regularized Euclidean projection (EP$_{1q}$).
no code implementations • NeurIPS 2014 • Jie Wang, Jiayu Zhou, Jun Liu, Peter Wonka, Jieping Ye
The $\ell_1$-regularized logistic regression (or sparse logistic regression) is a widely used method for simultaneous classification and feature selection.
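For reference, the standard $\ell_1$-regularized logistic regression objective (intercept omitted for brevity) is \begin{equation*} \min_{\beta}\; \sum_{i=1}^{n} \log\left(1 + \exp\left(-y_i\, x_i^{\top}\beta\right)\right) + \lambda \|\beta\|_1, \end{equation*} with labels $y_i \in \{-1, +1\}$; the $\ell_1$ penalty drives many coefficients exactly to zero, which is what makes feature screening worthwhile.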
no code implementations • 30 Apr 2013 • Ji Liu, Lei Yuan, Jieping Ye
Specifically, we show 1) in the noiseless case, if the condition number of $D$ is bounded and the measurement number $n\geq \Omega(s\log(p))$ where $s$ is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the measurement increases faster than $s\log(p)$, that is, $s\log(p)=o(n)$, the estimate error converges to zero with probability 1 when $p$ and $s$ go to infinity.
no code implementations • NeurIPS 2012 • Chao Zhang, Lei Zhang, Jieping Ye
Afterwards, we analyze the asymptotic convergence and the rate of convergence of the learning process for such kind of domain adaptation.
4 code implementations • 18 Mar 2013 • Pinghua Gong, Chang-Shui Zhang, Zhaosong Lu, Jianhua Huang, Jieping Ye
A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems.
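One standard instantiation of such a multi-stage relaxation (shown for illustration; this is the prior approach the snippet describes, not necessarily the algorithm proposed in the paper) linearizes a concave penalty $r(\cdot)$ at the current iterate, so each stage reduces to a reweighted $\ell_1$ problem: \begin{equation*} \beta^{(t+1)} = \arg\min_{\beta}\; \ell(\beta) + \sum_{j=1}^{d} r'\left(\big|\beta_j^{(t)}\big|\right) |\beta_j|, \end{equation*} where $\ell$ is a smooth convex loss and $r$ is the non-convex regularizer, e.g., capped-$\ell_1$ or SCAD.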
no code implementations • NeurIPS 2012 • Binbin Lin, Sen yang, Chiyuan Zhang, Jieping Ye, Xiaofei He
MTVFL has the following key properties: (1) the vector fields we learned are close to the gradient fields of the prediction functions; (2) within each task, the vector field is required to be as parallel as possible which is expected to span a low dimensional subspace; (3) the vector fields from all tasks share a low dimensional subspace.
no code implementations • NeurIPS 2012 • Pinghua Gong, Jieping Ye, Chang-Shui Zhang
In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel regularizer.
no code implementations • NeurIPS 2013 • Jie Wang, Peter Wonka, Jieping Ye
To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the SAFE rules, which are able to quickly identify the inactive predictors, i.e., predictors that have zero components in the solution vector.
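As a reminder of the setting (notation assumed here rather than taken verbatim from the original papers), for the Lasso $\min_{\beta} \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_1$ the basic SAFE test discards predictor $x_j$ whenever \begin{equation*} \big|x_j^{\top} y\big| \;<\; \lambda - \|x_j\|_2 \|y\|_2\, \frac{\lambda_{\max} - \lambda}{\lambda_{\max}}, \qquad \lambda_{\max} = \max_j \big|x_j^{\top} y\big|, \end{equation*} and any predictor passing the test is guaranteed to have a zero coefficient at that $\lambda$.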
no code implementations • 10 Sep 2012 • Sen Yang, Zhaosong Lu, Xiaotong Shen, Peter Wonka, Jieping Ye
We expect the two brain networks for NC and MCI to share common structures but not to be identical to each other; similarly for the two brain networks for MCI and AD.
no code implementations • NeurIPS 2011 • Jiayu Zhou, Jianhui Chen, Jieping Ye
We further establish the equivalence relationship between the proposed convex relaxation of CMTL and an existing convex relaxation of ASO, and show that the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data.
no code implementations • NeurIPS 2011 • Lei Yuan, Jun Liu, Jieping Ye
There have been several recent attempts to study a more general formulation, where groups of features are given, potentially with overlaps between the groups.
no code implementations • NeurIPS 2011 • Jun Liu, Liang Sun, Jieping Ye
In this paper, we show that such Euclidean projection problem admits an analytical solution and we develop a top-down algorithm where the key operation is to find the so-called \emph{maximal root-tree} of the subtree rooted at each node.
no code implementations • NeurIPS 2011 • Qian Sun, Rita Chattopadhyay, Sethuraman Panchanathan, Jieping Ye
In this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage), with the target domain data.
no code implementations • NeurIPS 2011 • Shuai Huang, Jing Li, Jieping Ye, Teresa Wu, Kewei Chen, Adam Fleisher, Eric Reiman
This is especially true for early AD, at which stage the disease-related regions are most likely to be weak-effect regions that are difficult to be detected from a single modality alone.
no code implementations • NeurIPS 2010 • Ji Liu, Peter Wonka, Jieping Ye
We show that if $X$ obeys a certain condition, then with a large probability the difference between the solution $\hat\beta$ estimated by the proposed method and the true solution $\beta^*$ measured in terms of the $\ell_p$ norm ($p\geq 1$) is bounded as \begin{equation*} \|\hat\beta-\beta^*\|_p\leq \left(C(s-N)^{1/p}\sqrt{\log m}+\Delta\right)\sigma, \end{equation*} where $C$ is a constant, $s$ is the number of nonzero entries in $\beta^*$, $\Delta$ is independent of $m$ and is much smaller than the first term, and $N$ is the number of entries of $\beta^*$ larger than a certain value in the order of $\mathcal{O}(\sigma\sqrt{\log m})$.
no code implementations • NeurIPS 2010 • Jun Liu, Jieping Ye
The structured regularization with a pre-defined tree structure is based on a group-Lasso penalty, where one group is defined for each node in the tree.
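Concretely (with assumed indexing and weights; the paper's notation may differ), the tree-structured penalty takes the form \begin{equation*} \Omega(\beta) = \lambda \sum_{i}\sum_{j} w_j^{i}\, \big\|\beta_{G_j^{i}}\big\|_2, \end{equation*} where $G_j^{i}$ indexes the features under the $j$-th node at depth $i$ of the tree and $w_j^{i} \ge 0$ is the node weight; because child groups are nested inside their parents, zeroing a node's group zeros out its entire subtree.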
no code implementations • NeurIPS 2009 • Shuai Huang, Jing Li, Liang Sun, Jun Liu, Teresa Wu, Kewei Chen, Adam Fleisher, Eric Reiman, Jieping Ye
Recent advances in neuroimaging techniques provide great potentials for effective diagnosis of Alzheimer’s disease (AD), the most common form of dementia.
no code implementations • NeurIPS 2009 • Liang Sun, Jun Liu, Jianhui Chen, Jieping Ye
MMV is an extension of the single measurement vector (SMV) model employed in standard compressive sensing (CS).
no code implementations • NeurIPS 2008 • Shuiwang Ji, Liang Sun, Rong Jin, Jieping Ye
We present a multi-label multiple kernel learning (MKL) formulation, in which the data are embedded into a low-dimensional space directed by the instance-label correlations encoded into a hypergraph.
no code implementations • NeurIPS 2007 • Jieping Ye, Zheng Zhao, Mingrui Wu
The connection between DisKmeans and several other clustering algorithms is also analyzed.