Paper
EdgeServe: efficient deep learning model caching at the edge
Abstract
In this work, we look at how to effectively manage and utilize deep learning models at each edge location to provide performance guarantees for inference requests. We identify the challenges of using deep learning models at resource-constrained edge locations and propose adapting existing cache algorithms to manage these models effectively.
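To make the idea of adapting a classic cache algorithm to whole models concrete, here is a minimal, hypothetical sketch (not the paper's actual EdgeServe design): an LRU cache keyed by model name that evicts least-recently-used models when the node's memory budget is exceeded. The names `ModelCache`, `loader`, and the megabyte budget are illustrative assumptions.

```python
from collections import OrderedDict

class ModelCache:
    """Hypothetical LRU-style cache for deep learning models at an edge node.

    Evicts least-recently-used models when the total memory footprint
    exceeds a fixed budget -- one simple way a classic cache algorithm
    might be adapted to whole-model granularity.
    """
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self.models = OrderedDict()  # name -> (model, size_mb)

    def get(self, name, loader, size_mb):
        if name in self.models:
            self.models.move_to_end(name)  # mark as recently used
            return self.models[name][0]
        # Evict LRU models until the new one fits in the budget.
        while self.models and self.used_mb + size_mb > self.capacity_mb:
            _, (_, evicted_mb) = self.models.popitem(last=False)
            self.used_mb -= evicted_mb
        model = loader()  # e.g. load weights from disk or a remote store
        self.models[name] = (model, size_mb)
        self.used_mb += size_mb
        return model
```

For example, with a 100 MB budget, loading a 60 MB model, then a 30 MB model, then a 50 MB model evicts the first (least recently used) model to make room for the third.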
Guo2019 PDF
BibTeX
@InProceedings{Guo2019,
  author    = {Guo, Tian and Walls, Robert J. and Ogden, Samuel S.},
  booktitle = {Proceedings of the 4th ACM/IEEE Symposium on Edge Computing},
  title     = {EdgeServe: efficient deep learning model caching at the edge},
  year      = {2019},
  pages     = {313--315},
  abstract  = {In this work, we look at how to effectively manage and utilize deep learning models at each edge location, to provide performance guarantees to inference requests. We identify challenges to use these deep learning models at resource-constrained edge locations, and propose to adapt existing cache algorithms to effectively manage these deep learning models.},
}