
Quality at the Tail of Machine Learning Inference

Machine learning inference must meet stringent inference-time constraints while maintaining high inference quality, especially in safety-critical (e.g., autonomous driving) and mission-critical (e.g., emotion recognition) contexts. Neglecting either aspect can lead to severe consequences, such as loss of life and property damage. Many studies lack a comprehensive consideration of both metrics, leading to incomplete or misleading evaluations. This study unveils a counterintuitive finding: deep learning inference quality fluctuates with inference time. To characterize this phenomenon, the authors coin the term "tail quality," which enables a more comprehensive evaluation and overcomes the limitations of conventional metrics. Moreover, the research proposes an initial evaluation framework for analyzing the factors behind quality fluctuations and predicting the potential distribution of inference quality. The framework's effectiveness is validated through experiments on deep learning models for three different tasks across four systems.
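The abstract does not give the paper's formal definition of "tail quality," but the idea of quality degrading under a latency budget can be illustrated with a minimal sketch. Here, `tail_quality` is a hypothetical scoring function (not the paper's metric) that treats any inference run exceeding a time budget as a failure contributing zero quality, so the same model scores differently under different latency constraints:

```python
import random

def tail_quality(runs, time_budget):
    """Effective average quality when runs that miss the time budget score zero.

    `runs` is a list of (inference_time, quality) pairs. This is an
    illustrative sketch; the paper's exact definition may differ.
    """
    scores = [q if t <= time_budget else 0.0 for t, q in runs]
    return sum(scores) / len(scores)

# Hypothetical measurements: per-run latency varies (long-tailed),
# while the model's raw quality per run is a constant 0.9.
random.seed(0)
runs = [(random.lognormvariate(0.0, 0.5), 0.9) for _ in range(1000)]

# A looser budget tolerates the latency tail; a tighter one exposes it.
print(tail_quality(runs, time_budget=2.0))
print(tail_quality(runs, time_budget=1.0))
```

Under this toy definition, a conventional accuracy number (0.9 here) overstates delivered quality whenever the latency distribution has a heavy tail, which is the fluctuation phenomenon the paper names.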
