DeepSeek RELEASED ITS UPDATED AI MODEL, DeepSeek V3-0324: WHAT’S NEW? (24.03.2025)

Authored by Mr. Abhishek (a student of Symbiosis Law School, Noida)

New Model Released, Again Open-Sourced: In a surprise move that has excited open-source AI enthusiasts, DeepSeek has released an updated version of its large language model, DeepSeek V3-0324. The checkpoint, made available on March 24, 2025, marks another step in DeepSeek’s aggressive innovation strategy, following the widely discussed release of DeepSeek V3 in December 2024. The company has again chosen to fully open-source the model, hosting it publicly on HuggingFace, a move welcomed by developers and researchers eager to explore its expanded capabilities.

What Is DeepSeek V3-0324?

According to initial Reddit discussions and community tests, DeepSeek V3-0324 is an updated checkpoint of DeepSeek V3, the model released in December 2024. While DeepSeek had not released official technical documentation or performance benchmarks at the time of publication, community reports suggest several notable upgrades. Early posts describe it as a 685-billion-parameter mixture-of-experts (MoE) model. The updated checkpoint also supports a context window of 131k tokens, letting it process significantly longer texts than previous versions; users have noted that this opens the door to handling more complex documents and conversations. One of the standout improvements appears to be speed: early testers report output of roughly 20 tokens per second, which one described as “blazing fast.”
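To put these community-reported figures in perspective, a bit of back-of-the-envelope arithmetic helps (a minimal sketch; the 131k-token window and 20 tokens/sec are unverified community numbers, and the 0.75 words-per-token ratio is only a common rule of thumb for English text):

```python
# Rough estimates from community-reported numbers for DeepSeek V3-0324.
# None of these figures are official specifications.

CONTEXT_WINDOW_TOKENS = 131_000   # reported context window
OUTPUT_TOKENS_PER_SEC = 20        # reported generation speed
WORDS_PER_TOKEN = 0.75            # rule-of-thumb ratio for English text

def approx_words(tokens: int) -> int:
    """Very rough English word capacity for a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

def generation_seconds(tokens: int, tok_per_sec: float = OUTPUT_TOKENS_PER_SEC) -> float:
    """Wall-clock time to stream `tokens` at the reported output speed."""
    return tokens / tok_per_sec

print(f"~{approx_words(CONTEXT_WINDOW_TOKENS):,} words fit in the context window")
print(f"a 2,000-token reply streams in ~{generation_seconds(2_000):.0f} s")
```

By this estimate the window holds on the order of a hundred thousand English words, i.e. a full-length book, and even long replies stream in under two minutes at the reported speed.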

Improvements in Coding Capabilities

In addition to the broader context window and faster output, developers are already highlighting improvements in programming assistance. “The coding abilities are looking better compared to previous DeepSeek models,” one Reddit user posted, sparking early interest in using the model for software development and debugging. Although no official performance statistics or benchmarks have been released, community members have begun informal testing. One user reported that the model passed complex reasoning tasks, including simulations like “bouncing ball in rotating shape.”

V3-0324: Foundation for the Upcoming R2 Model?

Speculation is mounting about whether DeepSeek V3-0324 is a precursor to the highly anticipated DeepSeek R2, a reasoning-focused model rumored to launch in April or May 2025. According to one Reddit post, “Many speculate that this updated V3–0324 will serve as the foundation for DeepSeek-R2.” While DeepSeek has yet to confirm this, the model’s reported reasoning and performance upgrades suggest a strategic ramp-up toward R2’s release.

Personality Shift: More Power, Less Personality?

Despite the performance upgrades, some users have noted a shift in the model’s tone and conversational style. “A few users felt the new version sounded ‘more robotic’ compared to the original V3, which had a more human-like, conversational tone,” stated one report. Another user added, “Some mentioned it now feels ‘too intellectual’ and less engaging for casual chat.” This change may be the result of fine-tuning decisions aimed at improving logic and coherence, potentially at the cost of relatability.

DeepSeek V3 vs DeepSeek V3-0324: What’s the Difference?

While both models share the same architecture, the latest version appears to be a polished, faster, and more capable evolution of its predecessor. The key differentiators reported so far include a higher parameter count, a longer context window, improved reasoning, and much faster output. With the suffix “0324” indicating its release date (March 24), this checkpoint provides a timely refresh for users who rely on DeepSeek models in creative, coding, and analytical domains.

Benchmarks Still Missing

Despite growing interest, DeepSeek has yet to publish official benchmarks for V3-0324. “No official benchmarks have been released yet, but independent tests are expected soon,” one user pointed out. AI communities and model evaluators across Reddit and HuggingFace forums have already begun crowd-sourced testing efforts, focusing particularly on reasoning and code generation.

Free Access on DeepSeek’s Website

For those eager to test the new model, it’s already available online for free. “As notified on Reddit, the model is updated on https://chat.deepseek.com/,” reads one community alert. This accessibility reflects DeepSeek’s continuing commitment to democratizing access to high-performance AI — a philosophy increasingly in contrast to more commercialized offerings from U.S.-based competitors.
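Beyond the free web chat, developers who want programmatic access would typically assemble an OpenAI-style chat request. The sketch below only builds the JSON body; the endpoint URL and the `deepseek-chat` model name are assumptions based on DeepSeek’s public platform documentation rather than anything stated in the Reddit thread, and `DEEPSEEK_API_KEY` is a placeholder:

```python
import json

# Sketch of a chat request to DeepSeek's OpenAI-compatible API.
# The endpoint and model name below are assumptions; consult
# DeepSeek's official API docs before relying on them.
API_URL = "https://api.deepseek.com/chat/completions"  # assumed endpoint

def build_chat_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

body = build_chat_request("Summarize DeepSeek V3-0324's reported upgrades.")
print(json.dumps(body, indent=2))
# Send with any HTTP client, e.g.:
#   curl -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
#        -H "Content-Type: application/json" \
#        -d @body.json "$API_URL"
```

Because the API follows the widely used chat-completions shape, existing OpenAI-compatible clients can generally be pointed at it by changing only the base URL and model name.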

Early Community Reception: Cautiously Optimistic

The AI community’s early reactions have been a mix of excitement and critical analysis. While many applaud the model’s speed and coding prowess, others are urging caution until more thorough benchmark testing becomes available. The absence of official documentation is also leaving many questions unanswered about the scope of changes under the hood. Still, the decision to open-source the model gives researchers and engineers a powerful tool for experimentation, free from licensing barriers.

What’s Next?

With DeepSeek-R2 anticipated in the coming months, the release of V3–0324 may be just a stepping stone. The company has not commented on whether this version will directly inform R2’s training or tuning, but the speculation persists. Users interested in keeping up with updates should monitor DeepSeek’s community channels and GitHub repositories for future checkpoint announcements, benchmarks, and technical notes. The original Reddit thread ends with a teaser: “Will be updating the blog once more info is available. By the time, try the new model by DeepSeek!”

REFERENCES