DeepSeek has released its latest AI models, the V4 Pro and Flash versions, a little over a year after its application gained significant traction and became the top-rated free app on Apple's App Store in the US.
In its announcement, DeepSeek stated, “Welcome to the era of cost-effective 1 million context length.” Context length refers to the maximum number of tokens an AI model can process and remember at once; a larger context window lets a model stay coherent and consistent across extended conversations. For comparison, OpenAI’s recently announced GPT-5.5 offers a context window ranging from 400,000 to 1 million tokens.
The new models remain open-source, allowing users to download and modify them. DeepSeek asserts that V4 Pro features enhanced agentic capabilities and claims its reasoning rivals that of top closed-source models, adding that only Gemini-3.1-Pro surpasses it in breadth of world knowledge. V4 Pro has 1.6 trillion total parameters, of which 49 billion are active, meaning only a subset of the model is engaged when processing any given token.
The V4 Flash model, while not as powerful as the Pro version, offers significantly faster response times. DeepSeek indicates that V4 Flash's reasoning abilities closely approach those of V4 Pro, and that it performs on par with the Pro version on simpler AI agent tasks. V4 Flash has 284 billion total parameters and 13 billion active parameters.
It's worth noting that shortly after DeepSeek topped the App Store charts, several US federal agencies banned its use on government-owned devices, citing national security concerns; the app's rapid rise had also rattled US AI stocks. South Korea likewise temporarily paused downloads of the app over privacy concerns.