On Estimating the Training Cost of Conversational Recommendation Systems

Conversational recommendation systems have recently gained a lot of attention, as users can continuously interact with the system over multiple conversational turns. However, conversational recommendation systems are based on complex neural architectures, making the training cost of such models high. To shed light on the high computational training time of state-of-the-art conversational models, we examine five representative strategies and demonstrate this issue. Furthermore, we discuss possible ways to cope with the high training cost via knowledge distillation strategies, and we detail the key challenges in reducing the online inference time caused by the large number of model parameters in conversational recommendation systems.
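The abstract does not specify which distillation variant is used, so as a rough illustration of the general idea only, below is a minimal sketch (in PyTorch, not taken from the paper) of the classic response-based distillation loss of Hinton et al. (2015): a compact student model is trained to match the temperature-softened output distribution of a large teacher over the item catalogue, alongside the usual hard-label objective. The function name and hyperparameter values here are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Response-based knowledge distillation loss: a weighted sum of
    the hard-label cross entropy and the KL divergence between
    temperature-softened teacher and student distributions."""
    # Soft targets: teacher and student distributions at temperature T.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 so its gradients keep a magnitude
    # comparable to the cross-entropy term.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2
    # Hard-label term: ordinary cross entropy on the ground-truth items.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

In this setup the expensive teacher is only needed offline during training, while the smaller student serves online requests, which is one generic way the high parameter count's impact on inference time can be mitigated.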
