When users build a web application with Kimi K2.6’s Agent mode, they no longer receive just Python snippets; they get a live URL backed by a frontend, a backend, and an independent database. Kimi has effectively taken over the entire lifecycle, from development to hosting and database operations and maintenance (O&M). This creates a massive engineering hurdle, however: supporting one million users means the backend must run one million independent, production-grade databases simultaneously, a workload traditional database architectures cannot sustain.
AI app-building scenarios impose three strict constraints on databases. First, the granularity must be "one database per end-user"; under traditional cloud pricing, hosting millions of instances is financially non-viable. Second, the schema is generated on the fly by an LLM and evolves frequently as users iterate on their prompts, so the system must be highly robust to schema changes. Third, the workload is polarized: most databases sit idle, but a single viral app can trigger a 100x spike in concurrency. The system must therefore combine extreme elasticity with strict tenant isolation.
Kimi addressed these constraints by adopting TiDB Cloud, a choice grounded in three technical decisions. Decision 1: achieving ultra-low cost via Serverless Cluster multi-tenancy. TiDB Cloud exposes a "virtual database interface": long-tail tenants consume no physical resources until a request arrives, which makes the unit economics of millions of users work. Unlike platforms such as Supabase that provision a real PostgreSQL instance per user, TiDB's architecture scales more efficiently for high-density multi-tenancy.
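The scale-to-zero idea behind this decision can be sketched in a few lines: a tenant exists only as catalog metadata until its first query arrives, and compute attaches lazily at that point. This is a minimal illustration of the pattern, not TiDB Cloud's actual implementation; the class and method names (`VirtualTenantCatalog`, `handle_query`) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of scale-to-zero multi-tenancy: registering a tenant
# is a metadata-only operation, and physical resources attach lazily on the
# first request. Names are illustrative, not TiDB Cloud's API.

@dataclass
class Tenant:
    tenant_id: str
    attached: bool = False  # no physical resources until first request

class VirtualTenantCatalog:
    def __init__(self) -> None:
        self._tenants: dict[str, Tenant] = {}
        self.attached_count = 0  # only active tenants consume resources

    def register(self, tenant_id: str) -> None:
        # Registration is just a catalog row: no compute, no storage warm-up.
        self._tenants[tenant_id] = Tenant(tenant_id)

    def handle_query(self, tenant_id: str, sql: str) -> str:
        tenant = self._tenants[tenant_id]
        if not tenant.attached:
            # First request for this tenant: lazily attach compute,
            # the only step that actually costs resources.
            tenant.attached = True
            self.attached_count += 1
        return f"[{tenant_id}] executed: {sql}"

catalog = VirtualTenantCatalog()
for i in range(1_000):              # a thousand registered "databases"...
    catalog.register(f"user-{i}")
catalog.handle_query("user-7", "SELECT 1")
print(catalog.attached_count)       # ...but only one consumes resources
```

This is why long-tail tenants are nearly free: the marginal cost of an idle database is one catalog entry, not a running instance.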
Decision 2: simplifying LLM code generation with a unified tech stack (Vector + SQL + JSON). In Kimi’s Agent mode, a typical SQL query might perform relational filtering, JSON field selection, and vector similarity ranking in a single statement. With TiDB, the LLM does not need to coordinate multiple clients or manually merge transactions, which drastically reduces the error rate of AI-generated code. Decision 3: using warm pools and scale-to-zero to provision instances in under a second. TiDB Cloud maintains pre-warmed instances that bypass lengthy provisioning, so the Agent receives a fully prepared database instantly.
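A query of the kind Decision 2 describes might look like the sketch below, which combines all three capabilities in one statement. The table and column names (`app_items`, `metadata`, `embedding`) are hypothetical, and `VEC_COSINE_DISTANCE` / `JSON_EXTRACT` are shown as illustrations of TiDB's MySQL-compatible vector and JSON syntax rather than code verified against a live cluster.

```python
# Sketch of the "filter + JSON + vector" pattern in a single SQL statement.
# Table/column names are hypothetical. For brevity this builds a literal
# string; real code should use parameterized queries to avoid injection.

def build_unified_query(query_vec: list[float], status: str, k: int) -> str:
    vec_literal = "[" + ",".join(str(x) for x in query_vec) + "]"
    return (
        "SELECT id, "
        # JSON field selection from a flexible, LLM-evolved schema:
        "JSON_EXTRACT(metadata, '$.title') AS title "
        "FROM app_items "
        # Ordinary relational filtering:
        f"WHERE status = '{status}' "
        # Vector similarity ranking in the same statement:
        f"ORDER BY VEC_COSINE_DISTANCE(embedding, '{vec_literal}') "
        f"LIMIT {k}"
    )

sql = build_unified_query([0.1, 0.2, 0.3], "published", 5)
print(sql)
```

Because the whole operation is one statement against one endpoint, the LLM emits a single string instead of orchestrating a vector store, a document store, and a relational database in separate client calls.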
Kimi's choice reflects a broader industry shift. Currently, over 90% of new clusters on TiDB Cloud are created directly by AI Agents rather than human engineers. This marks the beginning of an era where databases are treated as fluid, programmable resources rather than static infrastructure.