Cloud Security Alliance: Best Practices for Securing AI Systems
Part of the best practices and considerations for autonomous agents reads:
Knowledge bases should have access controls to prevent unauthorized access and ensure responses are generated based on permitted sources.
- External knowledge bases often have unique authorization controls, roles, enforcement, and other quirks. Abstracting all interactions with a given knowledge base into modules specific to that knowledge base compartmentalizes this complexity.
- Validate task planning with capabilities the orchestrator has knowledge of to prevent hallucinations of invalid steps.
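The two points above can be sketched in code: a module that compartmentalizes one knowledge base's authorization quirks behind a single interface, and a plan validator that rejects steps the orchestrator does not actually support. This is a minimal illustrative sketch, not from the report; all names (`WikiKnowledgeBase`, `KNOWN_CAPABILITIES`, the role table) are hypothetical.

```python
class AccessDenied(Exception):
    pass

class WikiKnowledgeBase:
    """Compartmentalizes one KB's authorization rules in one module."""

    # This hypothetical KB uses role-based permissions per collection.
    PERMITTED_ROLES = {"hr-docs": {"hr", "admin"}, "eng-docs": {"eng", "admin"}}

    def search(self, collection, query, user_roles):
        # Enforce this KB's access controls before any content reaches the LLM.
        allowed = self.PERMITTED_ROLES.get(collection, set())
        if not (set(user_roles) & allowed):
            raise AccessDenied(f"roles {user_roles} may not read {collection!r}")
        return self._raw_search(collection, query)

    def _raw_search(self, collection, query):
        # A real implementation would call the KB's API; stubbed here.
        return [f"result for {query!r} in {collection}"]

# Validate a task plan against capabilities the orchestrator knows it has,
# so hallucinated steps fail fast instead of being executed.
KNOWN_CAPABILITIES = {"search_kb", "summarize", "send_email"}

def validate_plan(steps):
    unknown = [s for s in steps if s not in KNOWN_CAPABILITIES]
    if unknown:
        raise ValueError(f"plan contains unknown steps: {unknown}")
```

A plan like `["search_kb", "summarize"]` passes, while `["search_kb", "wire_funds"]` is rejected before anything runs.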
Sandbox orchestrator plugins as much as possible to adhere to the principle of least privilege:
- Do plugins need full network access?
- What specific files do they need to interact with?
- Do they need read and write permissions, or is read access sufficient?
- For actions involving sensitive data, prompt the user for permission before performing the action.
- If the system being interacted with only permits coarse-grained permissions, is it possible to integrate mitigating controls into the plugin?
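The questions above can be made concrete as a per-plugin permission manifest that the orchestrator checks before a plugin touches the network or filesystem. This is a hypothetical sketch; the manifest fields and helper are illustrative, not from the report.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass(frozen=True)
class PluginManifest:
    """Answers the least-privilege questions for one plugin, defaulting to deny."""
    name: str
    network_access: bool = False            # Does it need network access? Default: no.
    readable_paths: frozenset = frozenset() # Which files may it read?
    writable_paths: frozenset = frozenset() # Is write access actually needed?

def check_file_access(manifest, path, write=False):
    """Grant access only if the path falls under an explicitly listed root."""
    allowed = manifest.writable_paths if write else (
        manifest.readable_paths | manifest.writable_paths)
    p = Path(path).resolve()
    return any(p.is_relative_to(Path(root).resolve()) for root in allowed)
```

For example, a read-only CSV plugin would declare `readable_paths=frozenset({"/data/reports"})` and leave `writable_paths` empty, so write attempts are denied even inside its readable directory.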
A couple of agent pitfalls and anti-patterns include:
- Even within the trust boundary, agents should not fully trust other agents as each could be interacting with data from beyond the boundary.
- Consider reviewing and potentially enhancing the logging practices across the system, including external components. While it would be nearly impossible to log everything, implementing more comprehensive request tracing throughout the ecosystem that agents interact with could be valuable. Focus on key interactions and critical paths to improve visibility and troubleshooting capabilities without overwhelming the system or violating privacy concerns.
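The tracing suggestion above can be sketched as a trace ID that is minted once and propagated through agent-to-agent calls, so key interactions on the critical path can be correlated without logging everything. The function and field names here are illustrative assumptions.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-trace")

def call_agent(agent_name, payload, trace_id=None):
    """Invoke an agent, logging only the key interaction with a shared trace ID."""
    trace_id = trace_id or uuid.uuid4().hex  # mint once at the entry point
    log.info("trace=%s agent=%s payload_len=%d", trace_id, agent_name, len(payload))
    # Downstream calls reuse the same trace_id so the whole request chain
    # can be reconstructed later from logs across components.
    response = f"{agent_name} handled request"  # stub for the real agent call
    log.info("trace=%s agent=%s status=ok", trace_id, agent_name)
    return response, trace_id
```

Logging payload length rather than content is one way to keep visibility without raising the privacy concerns the report notes.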
"Key principles emphasize the necessity of excluding LLMs from authorization decision-making and policy enforcement," concluded the report, prepared by CSA's AI Technology and Risk Working Group. "Continuous verification of identities and permissions, coupled with designing systems that limit the potential impact of issues, is crucial. Implementing a default-deny access strategy and minimizing system complexity are effective measures to reduce errors .... Additionally, rigorous validation of all inputs and outputs is essential to protect against malicious content."
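Two of the quoted principles, keeping the LLM out of authorization decisions and defaulting to deny, reduce to a small piece of deterministic code that the model can call but never override. The role/action table below is a hypothetical illustration.

```python
# Authorization lives in plain code, outside the LLM. The model may request
# an action, but this function alone decides, and anything not explicitly
# allowed is denied (default deny).
POLICY = {
    ("analyst", "read_report"): True,
    ("admin", "read_report"): True,
    ("admin", "delete_report"): True,
}

def is_authorized(role, action):
    return POLICY.get((role, action), False)
```

Because the table enumerates only allowed pairs, unknown roles, unknown actions, and prompt-injected requests all fall through to `False`.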
The report also emphasized the importance of human oversight over any LLM. "Incorporating human in the loop oversight for critical access control decisions is recommended. This approach can enhance security, reduce the risk of automated errors, and ensure appropriate judgment in complex or high-stakes situations," the report said.
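The human-in-the-loop recommendation can be sketched as a gate that lets routine actions run directly but holds critical ones until a person approves. The action names and callables here are hypothetical.

```python
# Actions considered high-stakes enough to require human approval.
CRITICAL_ACTIONS = {"grant_access", "delete_data", "send_external_email"}

def execute(action, perform, request_approval):
    """Run `perform` directly for routine actions; route critical ones
    through `request_approval`, a callable that asks a human reviewer."""
    if action in CRITICAL_ACTIONS and not request_approval(action):
        return "rejected by human reviewer"
    return perform()
```

In practice `request_approval` might post to a review queue and block on the reviewer's decision; here it is just a callable so the control flow is visible.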
About the Author
David Ramel is an editor and writer at Converge 360.