
Navigating the Security Landscape of Generative AI and Agentic AI
As the landscape of artificial intelligence expands, it's clear that security practices must adapt. Research from companies like Anthropic and OpenAI has shown how AI can be weaponized in cyberattacks with minimal human assistance.
Traditionally, the focus was on generative models and large language models (LLMs). However, as OWASP has recognized, much of our attention must now shift toward agentic systems—AI that can operate autonomously. These systems introduce new challenges because they interact with their environments in ways that are difficult for humans to predict.
Without robust visibility and observability, security efforts are like shooting arrows in the dark. The nature of these interactions makes it difficult to pinpoint when an attack is occurring, or whether one has happened at all.
Adapting security protocols to this new reality means moving away from static permissions. Least privilege must be a continuous practice; the permissions that are appropriate at deployment can quickly become excessive as an agent's workflow evolves.
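One way to treat least privilege as a continuous practice is to periodically compare the permissions an agent holds against the permissions its recent activity actually exercised. The sketch below illustrates the idea; all names and permission strings are hypothetical, not tied to any particular platform.

```python
# Minimal sketch: flag permissions an agent holds but has not recently used.
# The permission strings and variable names are illustrative assumptions.

def find_excess_permissions(granted: set[str], observed_usage: set[str]) -> set[str]:
    """Return permissions granted to an agent that no recent activity exercised."""
    return granted - observed_usage

granted = {"s3:read", "s3:write", "db:query", "email:send"}
used_last_30_days = {"s3:read", "db:query"}

# Candidates for revocation in the next access review.
excess = find_excess_permissions(granted, used_last_30_days)
```

Running this comparison on a schedule, rather than once at deployment, is what keeps permissions aligned with an evolving workflow.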
Automated access reviews and anomaly detection around agent behavior are crucial to maintaining effective least privilege over time. These systems must be rigorously tested because, while necessary for security, they can also introduce complexity and false positives.
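Anomaly detection around agent behavior can start very simply, for example by flagging activity that deviates sharply from a historical baseline. The sketch below uses a z-score over daily API-call counts; the numbers and threshold are illustrative assumptions, and a real deployment would need tuning to manage the false positives noted above.

```python
import statistics

def is_anomalous(history: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag current activity if it deviates more than `threshold` standard
    deviations from the historical mean (a simple z-score check)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# Daily API-call counts for an agent over two weeks (illustrative numbers).
daily_calls = [102, 98, 110, 95, 105, 99, 101, 97, 103, 100, 96, 104, 98, 102]

is_anomalous(daily_calls, 101)   # a typical day is not flagged
is_anomalous(daily_calls, 480)   # a sudden spike is flagged
```

The threshold is the lever for the complexity/false-positive trade-off the paragraph above describes: lower values catch more attacks but flag more benign variation.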
Furthermore, mapping and governing inter-system dependencies is essential. Documenting API calls, data flows, and cross-platform integrations tied to AI agents ensures that these systems are secured consistently across every environment they touch.
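A dependency map like the one described can be as simple as a structured inventory that records, per agent, its API calls, data flows, and the environments it runs in. The sketch below shows one possible shape; every field name and system name is a hypothetical convention, not a standard schema.

```python
# Minimal sketch of a dependency inventory for AI agents.
# Field names and system names are illustrative assumptions.

agent_dependencies = {
    "invoice-agent": {
        "api_calls": ["billing-api:/v2/invoices", "crm-api:/contacts"],
        "data_flows": [("crm-api", "billing-api")],  # (source, destination)
        "environments": ["prod", "staging"],
    },
}

def systems_touched(inventory: dict) -> set[str]:
    """List every external system any agent touches, so each one can be
    reviewed for consistent security controls."""
    touched = set()
    for deps in inventory.values():
        for call in deps["api_calls"]:
            touched.add(call.split(":", 1)[0])
        for src, dst in deps["data_flows"]:
            touched.update({src, dst})
    return touched
```

Deriving the list of touched systems from the inventory, rather than maintaining it by hand, helps ensure no environment an agent reaches goes unreviewed.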
This discipline is challenging but necessary, because permission drift can lead to significant security risks. While automation improves visibility, it also demands meticulous management and continuous monitoring.