From Rossiiskaya gazeta, Feb. 3, 2025, p. 5. Complete:
Chinese AI startup DeepSeek has sent shock waves through the US stock market, tanking shares of tech companies that have raked in billions from the field. They are going to rake in another $500 billion that Donald Trump has promised for the development of AI. But it turns out that the new open-source R1 problem-solving model uses far less computing power and far fewer chips (and therefore less money) to achieve at least the same results. Investors in American corporations are asking themselves: What were we overpaying for? High bonuses for top management?
Much of this stock market panic is hype. However, the fact is that competition in the AI market is intensifying, there is a fight to make the product cheaper, and betting on one country (the US) having a monopoly was delusional. The main thing is that a universal, “global” AI is not yet in sight. Too much depends on who trains the AI model, including what “censorship restrictions” are included in it. For example, China’s DeepSeek answers “politically sensitive” questions in a completely different way from OpenAI’s ChatGPT chatbot. It is thus naïve to expect that humanity will unite in the name of creating a “global AI” for the benefit of the entire Earth. It cannot be ruled out that in the future, competition between different “national AI models” will even result in an AI war.
The US Navy was the first to ban using the Chinese model, then the Pentagon as a whole did the same. Italy has blocked Chinese AI. The discrediting of the competitor has begun. For instance, Reuters reported that DeepSeek’s accuracy does not exceed 17%. OpenAI, and then Microsoft, accused the Chinese of stealing their data. They were referring to a technique called “distillation” that developers use to train their models by simply transferring knowledge from a large model to a small one. This allows them to achieve similar results at a much lower cost. But even if the Chinese used ChatGPT to train their model, there is nothing illegal about that. In doing so, DeepSeek was able to make do with around 50,000 Nvidia graphics processing units (GPUs), compared to around 500,000 used by its American counterpart OpenAI. Now there will definitely be a race to make AI models cheaper – and for mass commercialization – which is good for consumers.
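For readers wondering what “distillation” looks like in practice, here is a minimal, purely illustrative sketch (in PyTorch) of the classic technique: a small “student” model is trained to match the softened output distribution of a large, frozen “teacher.” The model sizes, data, and hyperparameters below are hypothetical placeholders, not anything disclosed by DeepSeek or OpenAI.

```python
# Minimal knowledge-distillation sketch (illustrative only; models and data are toy placeholders).
# The student learns by matching the teacher's softened output distribution ("soft targets")
# in addition to the usual cross-entropy loss on hard labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "teacher" (large) and "student" (small) classifiers over 10 classes.
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0      # temperature: softens the teacher's distribution
alpha = 0.5  # weight between distillation loss and hard-label loss

for step in range(100):
    x = torch.randn(32, 128)            # random stand-in for real inputs
    y = torch.randint(0, 10, (32,))     # random stand-in for real labels

    with torch.no_grad():               # the teacher is frozen; it only provides targets
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions, scaled by T^2 (Hinton et al., 2015).
    distill_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, y)
    loss = alpha * distill_loss + (1 - alpha) * hard_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the technique is exactly what the accusation implies: the student never needs the teacher’s training data or compute budget, only its outputs, which is why it achieves similar results at a much lower cost.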
Some, however, doubt that DeepSeek only cost $6 million to train, a hundred times less than GPT-4. Most likely, only the final stage of training this model cost $6 million, without taking into account previous experiments and versions. However, the reduction in price is still evident. And China has the scientific groundwork for such developments. In 2023, China registered more patents for inventions than the rest of the world combined. Chinese universities produce more than 6,000 PhDs in the natural sciences per month, more than double the US rate. So there are people there to “train” the AI.
DeepSeek has both advantages and disadvantages. Moreover, Alibaba recently presented its AI model to the market (Qwen 2.5 Max), which is supposedly more powerful and advanced. And OpenAI quickly rolled out an even more powerful and advanced model, o3-mini, which is several times cheaper to operate than the outdated GPT-4o model and comparable to DeepSeek. For now, DeepSeek will only be able to partially replace ChatGPT by performing simple information retrieval tasks, according to industry experts. Unlike ChatGPT, DeepSeek is free, but this may also be its weakness – “cheap is not always good.” When comparing DeepSeek and ChatGPT, some say that DeepSeek gives shallower, less detailed answers, while others say the Chinese AI is better at creative tasks and shows greater flexibility. Either way, the product was a success, giving hope that hundreds of billions of dollars in investment are not needed to create a full-fledged AI model. DeepSeek shows that almost any country, and even medium-sized IT companies, can afford to implement successful AI projects.
[Russian] AI experts consider DeepSeek to be just one of the mid-level neural networks. For those who are not aware, Russia already has dozens of neural networks of this level. Those who tested DeepSeek in Russian believe that “the answers are superficial” and “you need to poke around in the ‘Deep-Think’ mode to get an in-depth result. But then the time it takes to process requests also increases.” As one Russian AI developer noted, “the Chinese periodically fall back to English when asked in Russian,” which reveals borrowings from ChatGPT. Finally, DeepSeek does not have a voice query feature, a stage long since passed by domestic AI developments such as Olga Uskova’s Cognitive Technologies and the Sber and YandexGPT neural networks. As one of the testers noted, DeepSeek sometimes even pretends to be YandexGPT. Yandex itself commented on this to the effect that many neural networks are trained on data available on the Internet, which can include both original texts and materials generated by other AI systems.
Russia is ready to join the AI race. In terms of total computing power, our country is among the top 10 worldwide, and according to the Economic Development Ministry, the overall level of AI implementation in priority areas of the economy is 31.5%. In accordance with the national AI development strategy, its implementation in the economy will increase GDP by 11.2 trillion rubles by 2030, at which point 95% of priority industries will actively implement AI. So we are definitely “no dumber than others.”