
Guidelines for Secure Development and Deployment of AI Systems
New digital technologies bring new cybersecurity risks and attack vectors, so companies must ensure that the AI systems they integrate are protected from these threats. Security in the development of AI systems has consequently moved to the forefront of various regulatory initiatives.
Despite this regulatory progress, important gaps remain between general frameworks and their practical implementation at a technical level. In this paper, we examine the basic cybersecurity requirements that should be considered when implementing AI systems. These requirements apply broadly to companies that rely on third-party AI components to build their own solutions.
To implement AI safely, organizations need technical guidance on developing and deploying AI within their infrastructure; implementing AI without such guidance poses significant risks. This document provides guidelines for developers and administrators of AI systems, MLOps engineers, and AI DevOps teams who leverage existing foundation models to build generalized AI solutions, with a particular emphasis on cloud-based AI systems. The paper addresses key aspects of developing, deploying, and operating AI systems, including design, security best practices, and integration; it does not cover foundation model development itself.
