Challenges and Opportunities of DNN Model Execution Caching
DIDL '19: Proceedings of the Workshop on Distributed Infrastructures for Deep Learning (2019)
  • Guin R. Gilman, Worcester Polytechnic Institute
  • Samuel S. Ogden, Worcester Polytechnic Institute
  • Robert J. Walls, Worcester Polytechnic Institute
  • Tian Guo, Worcester Polytechnic Institute
Abstract
We explore the opportunities and challenges of model execution caching, a nascent research area that promises to improve the performance of cloud-based deep inference serving. Broadly, model execution caching relies on servers that are geographically close to the end-device to service inference requests, resembling a traditional content delivery network (CDN). However, unlike a CDN, such schemes cache execution rather than static objects. We identify the key challenges inherent to this problem domain and describe the similarities and differences with existing caching techniques. We further introduce several emergent concepts unique to this domain, such as memory-adaptive models and multi-model hosting, which allow us to make dynamic adjustments to the memory requirements of model execution. 
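The abstract's ideas of caching model *execution* (rather than static objects) and of memory-adaptive models can be illustrated with a small sketch. Everything below — the class name, the LRU policy, and the idea of falling back to a reduced-footprint model variant — is a hypothetical illustration under assumed semantics, not the paper's implementation.

```python
from collections import OrderedDict

class ModelExecutionCache:
    """Illustrative sketch of a model execution cache on an edge server.

    Unlike a CDN object cache, each entry is a model resident in memory
    and ready to serve inference; eviction frees that memory. Names and
    policies here are assumptions for illustration only.
    """

    def __init__(self, memory_budget_mb):
        self.memory_budget_mb = memory_budget_mb
        self.used_mb = 0
        self._models = OrderedDict()  # model_id -> resident size (MB), LRU order

    def request(self, model_id, full_mb, reduced_mb=None):
        """Serve an inference request; return 'hit', 'miss', or 'reject'."""
        if model_id in self._models:
            self._models.move_to_end(model_id)  # refresh LRU position
            return "hit"
        # Memory-adaptive models: if the full model cannot be made to fit,
        # try a smaller-footprint variant (e.g., a quantized version).
        for size in (full_mb, reduced_mb):
            if size is None:
                continue
            self._evict_until(size)
            if self.used_mb + size <= self.memory_budget_mb:
                self._models[model_id] = size
                self.used_mb += size
                return "miss"  # loaded on demand (cold start)
        return "reject"

    def _evict_until(self, needed_mb):
        # Evict least-recently-used models until the new model would fit.
        while self._models and self.used_mb + needed_mb > self.memory_budget_mb:
            _, freed_mb = self._models.popitem(last=False)
            self.used_mb -= freed_mb
```

For example, with a 100 MB budget, a first request for a 60 MB model is a cold-start miss, a repeat request is a hit, and an 80 MB model triggers LRU eviction before loading (or falls back to its reduced variant if even eviction cannot free enough memory).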
Keywords
  • deep learning
  • edge server
  • caching algorithms
Publication Date
2019
DOI
10.1145/3366622.3368147
Citation Information
Guin R. Gilman, Samuel S. Ogden, Robert J. Walls, and Tian Guo. "Challenges and Opportunities of DNN Model Execution Caching." DIDL '19: Proceedings of the Workshop on Distributed Infrastructures for Deep Learning (2019), pp. 7-12.
Available at: http://works.bepress.com/sam-ogden/5/