Search Results for author: Pallavi Kalapatapu

Found 1 paper, 0 papers with code

Privacy-Aware Semantic Cache for Large Language Models

no code implementations • 5 Mar 2024 • Waris Gill, Mohamed Elidrisi, Pallavi Kalapatapu, Ali Anwar, Muhammad Ali Gulzar

Caching is a natural solution for reducing LLM inference costs on repeated queries, which constitute about 31% of all queries.

Federated Learning
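
To illustrate the semantic-caching idea the abstract describes, here is a minimal sketch: a cache that stores responses keyed by query embeddings and serves a stored response when a new query is similar enough, so the LLM is not re-invoked for near-duplicate queries. This is an illustration of the general technique only, not the paper's specific privacy-aware design; the names SemanticCache, embed, and threshold are illustrative, and the hash-based embedding is a toy stand-in for a real sentence-embedding model.

```python
import hashlib
import math
from typing import Optional

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy bag-of-words embedding via feature hashing; a real system
    # would use a learned sentence-embedding model instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        token = token.strip("?.,!:;")
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is
    # the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    """Stores (query embedding, response) pairs; a lookup returns the
    cached response whose query is most similar to the new one, if that
    similarity clears the threshold, avoiding a repeat inference call."""

    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []

    def get(self, query: str) -> Optional[str]:
        q = embed(query)
        best, best_sim = None, self.threshold
        for emb, response in self.entries:
            sim = cosine(q, emb)
            if sim >= best_sim:
                best, best_sim = response, sim
        return best  # None on a cache miss

    def put(self, query: str, response: str) -> None:
        self.entries.append((embed(query), response))

cache = SemanticCache(threshold=0.8)
cache.put("What is the capital of France?", "Paris")
print(cache.get("what is the capital of france"))  # hit  -> "Paris"
print(cache.get("Explain quantum entanglement"))   # miss -> None
```

A linear scan over cached embeddings keeps the sketch short; at scale, a vector index would replace it, and the similarity threshold trades hit rate against the risk of serving a stale or mismatched response.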
