# Towards simple time-to-event modeling: optimizing neural networks via rank regression

29 Sep 2021

Time-to-event analysis, also known as survival analysis, aims to predict the time of the first occurring event, conditional on a set of features. However, the presence of censoring complicates learning due to data incompleteness. Hazard-based models (e.g., Cox's proportional hazards) and accelerated failure time (AFT) models are two popular tools in time-to-event modeling, requiring the proportional hazards and linearity assumptions, respectively. In addition, AFT models in most cases require a pre-specified parametric distributional assumption. To relax these strict assumptions and improve predictive performance, many deep learning approaches to hazard-based models have been proposed in recent years. In contrast, AFT-based representation learning has received limited attention in the neural network literature, despite its simplicity and interpretability. In this work, we introduce the Deep AFT Rank-regression for Time-to-event prediction model (DART), a deep learning-based semiparametric AFT model, and propose an $l_1$-type rank loss function that is better suited to optimizing neural networks. Unlike existing neural network-based AFT models, the proposed model is semiparametric in that no distributional assumption is imposed on the survival time, and it requires neither additional hyperparameters nor complicated model architectures. We verify the usefulness of DART through quantitative analysis on various benchmark datasets. The results show that our method has considerable potential for modeling high-throughput censored time-to-event data.
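The abstract does not spell out the loss, but $l_1$-type rank losses for semiparametric AFT models are commonly of the Gehan form: with residuals $e_i = \log T_i - f(x_i)$, each ordered pair $(i, j)$ where subject $i$ is uncensored contributes $\max(e_j - e_i, 0)$. The sketch below is an illustrative NumPy implementation of that standard Gehan-type loss, not necessarily the exact objective used in the paper; the function name and signature are our own.

```python
import numpy as np

def gehan_rank_loss(log_time, pred, event):
    """Gehan-type l1 rank loss for a semiparametric AFT model (sketch).

    log_time : observed log event/censoring times, shape (n,)
    pred     : network outputs f(x_i), shape (n,)
    event    : 1.0 if the event was observed (uncensored), 0.0 if censored
    """
    e = log_time - pred                       # residuals e_i = log T_i - f(x_i)
    diff = e[None, :] - e[:, None]            # diff[i, j] = e_j - e_i
    # Only pairs with an uncensored subject i contribute; censored subjects
    # are still used as comparators j.
    pairwise = event[:, None] * np.maximum(diff, 0.0)
    return pairwise.mean()                    # average over all n^2 ordered pairs
```

Because the loss depends on residuals only through pairwise differences, it is invariant to a shift of the intercept, which is what makes the approach distribution-free: no parametric form for the error term is needed.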


## Code

No code implementations yet.

