Deep neural networks achieve state-of-the-art performance on many tasks, but require increasingly complex architectures and costly training procedures. Engineers can reduce costs by reusing a pre-trained model (PTM) and fine-tuning it for their own tasks. To facilitate software reuse, engineers collaborate around model hubs, collections of PTMs and datasets organized by problem domain. Although model hubs are now comparable in popularity and size to other software ecosystems, the associated PTM supply chain has not yet been examined from a software engineering perspective.
We present an empirical study of artifacts and security features in 8 model hubs. We identify potential threat models and show that existing defenses are insufficient to ensure the security of PTMs. We compare the PTM supply chain with traditional software supply chains, and propose directions for further measurements and tools to increase the reliability of the PTM supply chain.
Author Posting © Association for Computing Machinery, 2022. This article is posted here by permission of the Association for Computing Machinery for personal use, not for redistribution. The article was published in SCORED'22: Proceedings of the 2022 ACM Workshop on Software Supply Chain Offensive Research and Ecosystem Defenses, Pages 105-114, November 2022. https://www.doi.org/10.1145/3560835.3564547