China-Based DeepSeek’s Open-Source Generative AI Tool Coder Poses Competition vs. OpenAI’s ChatGPT, Other AI Platforms

By Jose Resurreccion

Jun 18, 2024 12:00 AM EDT

People visit AI product displays during the World Artificial Intelligence Conference (WAIC) in Shanghai on July 7, 2023.
(Photo : WANG ZHAO/AFP via Getty Images)

Chinese artificial intelligence (AI) startup DeepSeek announced Monday (June 17) the release of Coder V2, its open-source generative AI code language model.

VentureBeat reported that the latest Coder version was built on DeepSeek-V2, the mixture-of-experts (MoE) model introduced last month, and could excel at coding and math tasks. It supports over 300 programming languages, has a 128K context window, and even outperforms state-of-the-art closed-source models such as GPT-4 Turbo, Claude 3 Opus, and Gemini on coding and math benchmarks.

DeepSeek claimed it was the first time an open-source model achieved such a feat.

The launch of Coder V2 could be seen as an escalation not only within the generative AI industry currently dominated by US big tech firms, but also in the wider geopolitical and commercial rivalry between the United States and China.

Over the weekend, Pope Francis warned G7 leaders about the risks AI poses to the world, and separately, the leaders discussed what they described as China's unfair commercial practices.

READ NEXT: G7 Summit Tackles China's 'Unfair' Business Practices, Pope Francis Joins to Talk About AI

DeepSeek's Edge vs. Generative AI Competitors

Founded last year to compete primarily with OpenAI and to "unravel the mystery" of artificial general intelligence (AGI), DeepSeek previously made headlines after reportedly training its models on two trillion English and Chinese tokens. The large language model (LLM) developer has since rapidly expanded its Coder tool.

The original Coder version performed decently on benchmarks, with capabilities such as project-level code completion and infilling, but it supported only 86 programming languages and a 16K context window.

In a tweet, the company shared a chart comparing Coder V2's benchmark scores with those of other LLMs, showing that, for the most part, DeepSeek's generative AI outperforms all the others.

The only model outperforming Coder overall was GPT-4o, which obtained marginally higher scores in several tests.

Meanwhile, DeepSeek Coder V2 is offered under an MIT license, allowing both research and unrestricted commercial use.
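
Because of that permissive license, the model weights can in principle be downloaded and run locally with standard open-source tooling. The following is a minimal sketch, assuming the checkpoints are published on Hugging Face under a repository such as deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct (the repository name and the use of the Transformers chat template are assumptions rather than details from this article):

# Minimal sketch: loading an open-weights code model with Hugging Face Transformers.
# Assumption: the weights live at "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct";
# substitute whatever repository DeepSeek actually publishes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to keep memory use manageable
    device_map="auto",            # spread layers across available GPUs
    trust_remote_code=True,       # MoE architectures often ship custom modeling code
)

# Ask the instruct model for a small coding task via its chat template.
messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))

The full mixture-of-experts checkpoint is far too large for a single consumer GPU, so a smaller variant, if one is published, would be the practical choice for local experimentation.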

READ MORE: McDonald's Removes AI Drive-Thru Voice Ordering System After It Went Viral for Wrong Reasons

