30 Nov 2021 • Shashank Kedia, Aditya Mantha, Sneha Gupta, Stephen Guo, Kannan Achan
We propose eBERT, a sequence-to-sequence approach in which BERT is further pre-trained on an e-commerce product description corpus, and the resulting model is then fine-tuned to generate short, natural, spoken-language titles from input web titles.
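A minimal sketch of the kind of BERT-based encoder-decoder setup such a seq2seq fine-tuning stage implies, using Hugging Face `transformers` with tiny randomly initialized configs (this is an illustrative assumption, not the authors' actual eBERT model or training corpus):

```python
import torch
from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel

# Tiny BERT configs so the example runs without downloading pre-trained weights.
# In the described approach, the encoder would instead start from BERT weights
# further pre-trained on e-commerce product descriptions (an assumption here).
enc_cfg = BertConfig(vocab_size=128, hidden_size=32, num_hidden_layers=2,
                     num_attention_heads=2, intermediate_size=64)
dec_cfg = BertConfig(vocab_size=128, hidden_size=32, num_hidden_layers=2,
                     num_attention_heads=2, intermediate_size=64,
                     is_decoder=True, add_cross_attention=True)

cfg = EncoderDecoderConfig.from_encoder_decoder_configs(enc_cfg, dec_cfg)
model = EncoderDecoderModel(config=cfg)
model.config.decoder_start_token_id = 0
model.config.pad_token_id = 0

# Stand-ins for a tokenized long web title (input) and short spoken title (target).
web_title = torch.randint(1, 128, (1, 16))
short_title = torch.randint(1, 128, (1, 6))

out = model(input_ids=web_title, decoder_input_ids=short_title, labels=short_title)
print(out.logits.shape)  # per-position vocabulary logits for the short title
```

Fine-tuning would minimize `out.loss` (cross-entropy over the target title tokens) on pairs of web titles and human-written spoken titles.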