DeepSeek Enhances R1-0528 Model, Intensifying AI Competition with OpenAI and Google

DeepSeek’s R1-0528 delivers substantial gains in reasoning, coding, and hallucination reduction, placing the open-source Chinese model in direct competition with OpenAI’s o3 and Google’s Gemini 2.5 Pro. While celebrated for performance and flexibility, it also raises global concerns over increased censorship and regulatory compliance in politically sensitive domains.

Developer at a laptop using a vibe coding interface (DeepSeek Enhances R1-0528 Model) - Credit ChatGPT, The AI Track

DeepSeek Enhances R1-0528 Model – Key Points

  • Release Details:

    On May 29, 2025, DeepSeek released its R1-0528 model via Hugging Face without a formal launch or whitepaper. It is available under the MIT License, allowing commercial use, private server deployment, and full customization.

  • Performance Enhancements:

    R1-0528 improved significantly across coding, math, and logic benchmarks. On the AIME 2025 math test, accuracy jumped from 70% to 87.5%, driven by expanded reasoning and higher token counts per question (23,000 tokens vs. 12,000 in the prior version). On LiveCodeBench, coding accuracy rose from 63.5% to 73.3%. Performance on the “Humanity’s Last Exam” more than doubled, from 8.5% to 17.7%.

  • Reduction in Hallucinations:

    Hallucination rates were cut by 45–50% in summarization and rewriting tasks. R1-0528 also offers improved consistency, reliability, and cleaner outputs in code and text generation.

  • Creative and Technical Capabilities:

    The model now supports JSON output, system prompts, function calling, and front-end development tasks. Vibe coding (natural-language-based programming) and role-playing experiences have also been enhanced.
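DeepSeek exposes these features through an OpenAI-compatible chat-completions API, so JSON output and function calling are requested with the standard `response_format` and `tools` fields. A minimal sketch of such a request payload (the `deepseek-reasoner` model name and the weather tool are illustrative assumptions, not taken from the release notes):

```python
import json

def build_request(prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions payload that asks for
    JSON output and advertises one callable tool to the model."""
    return {
        "model": "deepseek-reasoner",  # assumed API name for R1-0528
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
        # Structured output: ask the model to reply with valid JSON.
        "response_format": {"type": "json_object"},
        # Function calling: one hypothetical tool the model may invoke.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_request("What is the weather in Paris? Reply in JSON.")
print(json.dumps(payload, indent=2)[:80])
```

The same payload works whether the model is reached through DeepSeek’s hosted API or a self-hosted OpenAI-compatible server, which is part of what makes the MIT-licensed release attractive for integration.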

  • Benchmarking Results:

    According to LiveCodeBench and internal evaluations, R1-0528 trails just behind OpenAI’s o4-mini and o3, outperforming models like xAI’s Grok 3 Mini and Alibaba’s Qwen 3. Developers on X noted R1-0528’s ability to generate clean code and pass functional tests on the first try—an achievement previously matched only by o3.

  • Distillation and Lightweight Models:

    A distilled version—DeepSeek-R1-0528-Qwen3-8B—achieved state-of-the-art open-source performance, surpassing Qwen3-8B by 10% and matching Qwen3-235B-Thinking. It can be deployed with GPUs as low as 12–16 GB VRAM, making it suitable for smaller organizations.
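The 12–16 GB figure is consistent with a back-of-envelope estimate of weight memory alone: roughly parameter count times bytes per parameter, before activation and KV-cache overhead. A rough sketch (precisions chosen for illustration):

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate GPU memory needed for model weights alone, in GB."""
    return n_params * bytes_per_param / 1e9

# An 8B-parameter model such as DeepSeek-R1-0528-Qwen3-8B, weights only:
fp16 = weight_memory_gb(8e9, 2)    # 16 GB -> a 16 GB-class GPU
int8 = weight_memory_gb(8e9, 1)    # 8 GB, leaving headroom for KV cache
int4 = weight_memory_gb(8e9, 0.5)  # 4 GB, fits consumer cards

print(fp16, int8, int4)
```

Actual requirements run somewhat higher once the KV cache for long reasoning traces is included, which is why quantized deployment is the practical route on 12 GB cards.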

  • Enterprise and Open-Source Flexibility:

    R1-0528 is free to use, can be hosted on private servers (including AWS and Azure), and keeps user data off Chinese infrastructure when self-hosted rather than accessed through DeepSeek’s API. API pricing is $0.14 per million input tokens and $2.19 per million output tokens, with discounted off-peak hours.
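At those listed rates, per-request cost scales linearly with token counts. A quick sketch using the standard-hour prices quoted above (cache and off-peak discounts ignored):

```python
INPUT_PER_M = 0.14   # USD per million input tokens (listed rate)
OUTPUT_PER_M = 2.19  # USD per million output tokens (listed rate)

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed standard rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1e6

# A reasoning-heavy AIME-style question emitting ~23,000 output tokens:
print(round(api_cost(1_000, 23_000), 4))  # -> 0.0505
```

The asymmetry matters for a reasoning model: because R1-0528 spends far more tokens on output (its chain of thought) than on input, the $2.19 output rate dominates the bill.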

  • Impact on AI Industry:

    The original R1 model disrupted the industry in January 2025 by delivering elite performance with minimal infrastructure. This prompted OpenAI and Google to launch leaner offerings like o3 Mini and altered AI investment strategies worldwide.

  • Geopolitical Implications:

    Despite U.S. chip export controls, DeepSeek has thrived. Nvidia CEO Jensen Huang praised DeepSeek and Alibaba’s Qwen as two of the most effective open-source Chinese AI models. He confirmed global traction across Europe and the U.S. Meanwhile, DeepSeek’s founder Liang Wenfeng was invited to a summit with President Xi Jinping, signaling state endorsement and national tech prestige.

  • Censorship & Information Control:

    According to testing by developer “xlr8harder” on SpeechMap, R1-0528 shows increased censorship on topics sensitive to the Chinese government (e.g., Xinjiang). While it may reference human rights issues abstractly, it defaults to official narratives—aligned with China’s 2023 rules banning generative AI content that threatens national unity or harmony.


Why This Matters:

DeepSeek’s R1-0528 proves that competitive AI systems can be built rapidly, cheaply, and under intense geopolitical pressure. The model’s high performance, open-source availability, and cross-platform deployment signal a major shift in the AI race. Yet, its increasing alignment with government censorship policies underscores the mounting tension between open innovation and authoritarian control. The R1-0528 release offers a glimpse into the future of AI development—both technically and politically.

