Microsoft recommends approaching AI security with the same fundamentals applied to critical software, plus additional controls over autonomy, access, and validation. The central recommendation is to treat an AI system like a new employee: give it specific objectives, clear limits, and checkpoints it must pass before being allowed to advance.
This approach requires defining which actions the system can execute, what evidence it must present to demonstrate progress, and which decisions should remain deterministic, resolved outside the model itself. The guide also emphasizes the principle of least privilege, so that a useful capability does not become an unnecessary exposure vector.
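The combination of an action allow-list, deterministic decisions outside the model, and an escalation checkpoint can be sketched in code. This is a minimal illustration, not anything from the guide itself: every name here (`ToolRequest`, `POLICY`, the role and tool labels) is a hypothetical example of how least privilege might be enforced around an agent's tool calls.

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    tool: str    # action the agent wants to perform
    target: str  # resource it wants to act on

# Least privilege: each agent role may only invoke a fixed set of tools.
POLICY = {
    "support-agent": {"read_ticket", "draft_reply"},
    "ops-agent": {"read_ticket", "restart_service"},
}

# High-impact actions always require a human checkpoint.
SENSITIVE_TOOLS = {"restart_service"}

def authorize(role: str, request: ToolRequest) -> str:
    """Deterministic gate, resolved in plain code outside the model."""
    allowed = POLICY.get(role, set())
    if request.tool not in allowed:
        return "deny"       # not in the role's allow-list
    if request.tool in SENSITIVE_TOOLS:
        return "escalate"   # checkpoint: human sign-off before acting
    return "allow"

print(authorize("support-agent", ToolRequest("draft_reply", "ticket-42")))  # allow
print(authorize("support-agent", ToolRequest("restart_service", "db-1")))   # deny
print(authorize("ops-agent", ToolRequest("restart_service", "db-1")))       # escalate
```

The key design point is that the model never decides its own permissions: the gate is ordinary, auditable code, which also gives a natural place to log every request for traceability.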
More than a metaphor, the parallel with a new employee translates AI security into concrete operational decisions: permissions, verification, traceability, and supervision.