For the initial three years of the Large Language Model (LLM) era, the AI gateway primarily addressed a developer challenge. With numerous model providers, each offering distinct SDKs and authentication schemes, developers sought a unified interface. AI gateways emerged as solutions to consolidate this fragmentation.
Products like Portkey, LiteLLM, Kong AI Gateway, and Cloudflare AI Gateway each tackled this problem, letting developers pick one, point their code at a single OpenAI-compatible endpoint, and get on with their work. Crucially, the security team was often absent from these early considerations.
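The consolidation these gateways provide boils down to a single routing layer: one client-facing endpoint that maps a requested model name to the right upstream provider and credentials. The routing table, URLs, and key names below are purely illustrative, not any vendor's actual configuration.

```python
# Minimal sketch of what an AI gateway consolidates: one endpoint, many
# providers. Routes, URLs, and env-var names are illustrative only.

PROVIDER_ROUTES = {
    "gpt":    {"base_url": "https://api.openai.example/v1",    "auth_env": "OPENAI_KEY"},
    "claude": {"base_url": "https://api.anthropic.example/v1", "auth_env": "ANTHROPIC_KEY"},
    "gemini": {"base_url": "https://api.google.example/v1",    "auth_env": "GOOGLE_KEY"},
}

def route(model: str) -> dict:
    """Pick the upstream provider for a requested model name."""
    for prefix, upstream in PROVIDER_ROUTES.items():
        if model.startswith(prefix):
            return upstream
    raise ValueError(f"no provider registered for model {model!r}")
```

From the developer's side, every call looks the same regardless of provider; the gateway resolves `route("claude-3-opus")` or `route("gpt-4o")` behind one URL, which is exactly why this layer sees all the traffic.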
Now, cybersecurity giant Palo Alto Networks has officially entered this space.
Last week, Palo Alto Networks announced its intent to acquire Portkey and integrate it into Prisma AIRS. This integration aims to establish Portkey as a unified control plane for securing every AI transaction across an enterprise. While the deal has not yet closed, the strategic implication is clear: the layer positioned between an AI agent and every model it invokes is no longer merely infrastructural "plumbing"; it has evolved into a critical security "checkpoint."
Consider the unparalleled visibility an AI gateway offers. It observes every prompt an agent sends, every model response received, every tool call, every memory read, and every interaction with MCP servers. Within an enterprise AI stack, no other layer provides a more comprehensive overview of an agent's activities than the gateway. The security industry recognized this critical insight earlier than most developers.
Prior to Palo Alto's acquisition move, Portkey was already processing trillions of tokens monthly for Fortune 500 customers. Its ease of implementation, requiring just three lines of code, and its support for 3,000 LLMs, MCP servers, and agents underscored its robust developer-centric appeal.
Palo Alto Networks will augment Portkey's existing capabilities with advanced features such as identity management, authentication, artifact scanning, automated red teaming, and runtime security. These security measures will be enforced precisely at the point where every agent call passes through the gateway. This transforms the gateway into the definitive source for understanding what AI agents are actually doing, rather than what they were intended to do.
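In-line enforcement of this kind can be pictured as a policy check that runs on every request before it is forwarded upstream. The rules below are toy examples for illustration, not Prisma AIRS's or Portkey's actual policy engine.

```python
import re

# Illustrative runtime check a gateway might apply to every outbound
# prompt. The patterns are examples only, not any vendor's real rules.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # API-key-shaped strings
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]

def enforce_policy(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason); block prompts that appear to leak secrets."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked: matched {pattern.pattern}"
    return True, "allowed"
```

Because the check sits in the request path rather than in each application, one policy change takes effect for every agent and every model at once, which is the structural advantage the gateway position confers.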
This is not the first instance where a major security player has redefined rules in a developer-owned infrastructure category. Web application firewalls (WAFs), for example, began as a network team's concern. Developers subsequently routed all HTTP requests through them, and Cloudflare evolved WAFs into a comprehensive platform. The pattern remains consistent: initial developer convenience, followed by enhanced visibility, then control, and ultimately, strategic acquisition.
What makes this particular moment distinctive is the advent of AI agents. A single agentic workflow can initiate dozens of LLM calls per task, with each call traversing the gateway. At this volume, the gateway transcends being a mere proxy; it becomes a comprehensive log detailing every decision made by an autonomous system and its rationale. For heavily regulated industries—including financial services, healthcare, and government—such a log is not optional; it constitutes the essential audit trail.
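The audit-trail role described above amounts to the gateway recording every model call as a structured event before returning the response. A minimal sketch, with illustrative field names rather than any product's actual log schema:

```python
import json
import time

# Sketch of a gateway that doubles as an audit trail: every model call
# is recorded as a structured event. Field names are illustrative only.
class AuditingGateway:
    def __init__(self, upstream):
        self.upstream = upstream       # callable: (model, prompt) -> response
        self.log: list[dict] = []      # in-memory stand-in for durable storage

    def call(self, agent_id: str, model: str, prompt: str) -> str:
        response = self.upstream(model, prompt)
        self.log.append({
            "ts": time.time(),
            "agent": agent_id,
            "model": model,
            "prompt": prompt,
            "response": response,
        })
        return response

    def export(self) -> str:
        """Serialize the trail for auditors, one JSON object per line."""
        return "\n".join(json.dumps(event) for event in self.log)
```

An agentic workflow that makes dozens of calls per task leaves dozens of such records, which is what turns the proxy into the audit trail regulators expect.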
Notably, Kong is also actively advancing agent gateway capabilities and Agent-to-Agent (A2A) traffic governance.