From Nezavisimaya gazeta, Aug. 5, 2024, p. 1. Complete text:

The European Union Artificial Intelligence Act entered into force on Aug. 1, 2024. The law, which took two years to develop, sets a strict legal framework for the development and application of AI technologies in the EU. The deadline for implementing the act is Feb. 2, 2025. [The act] prohibits technology companies from using applications that pose a threat to civil rights, such as biometric categorization systems that infer sexual orientation or religious beliefs. It also bans the untargeted scraping of facial images from the Internet or security camera footage.

Around the same time the EU adopted this law (May 2024), the [Russian] Federation Council’s commission on information policy and media relations held a round table on the abuse of AI and ways to counter it. The Russian senators said that, in the hands of financial fraudsters, neural networks could pose a threat not just to the wallets of ordinary citizens but also to national security, since they are being developed by Russia’s enemies.

In general, our country has recently started paying a great deal of attention to the legal and ethical aspects of the development and use of AI systems. For example, Constitutional Court chairman Valery Zorkin said at the St. Petersburg International Legal Forum that it would be categorically unacceptable in a constitutional and legal context to recognize AI as a subject of rights based on the human model, but that robots require a separate legal regime.

The National Research University Higher School of Economics was the first university in Russia to approve a declaration of ethical principles for creating and using AI systems.

This code of ethics was ceremonially signed as part of the 28th Russian Internet Forum “RIF in the City 2024.”

The development of ethical norms for using and regulating AI systems is, without a doubt, wonderful, important and necessary. But if ethical norms for the use of AI are introduced, an AI environment would need to be created at the same time, and that would clearly cost a lot of money. According to the Academy of Labor and Social Relations, the market for AI solutions has reached a value of 650 billion rubles. And, according to the national strategy for the development of AI to 2030, the amount organizations spend on incorporating AI [into their systems] is to increase to 850 billion rubles.

For comparison, over the next six years Microsoft and OpenAI intend to build a data processing center that will house the Stargate AI supercomputer. The project is estimated to cost $100 billion.

As of March 2024, China was still the world’s largest market for industrial robots, accounting for 52% of all such robots installed in 2022 (there were 3.9 million in operation worldwide). China intends to move to the production of humanoid robots by 2025. About 10,000 industrial robots with AI systems have now been installed in Russia.

Perhaps it’s even a good thing that we’re running a little ahead of the AI steam engine by trying to check all the ethical boxes in an area that is still largely at the startup stage. But aren’t we getting into a situation where we create an ideal model that we supposedly begin to control, while often not really understanding what that model is actually based on? For example, according to Sergei Markov, a leading Russian specialist in neural networks and the developer of the GigaChat service, “Attempts to restrict or prohibit the development of AI technologies can pose significantly greater risks than the emergence of a superintelligence that is dangerous to humanity.”

Simply put, the subject of these regulations is still quite amorphous. Crushing it in an ethical press at the development stage adds the further risk of falling behind in technological development. Our country still has people who, back in the day, took part in the state’s battles against the Weismannists, Morganists and other geneticists and cyberneticists as part of the fight against philosophical idealism in the late 1940s.

Ethics as a science is an even more subjective basis for prohibiting revolutionary scientific trends, because ethics by definition rests on what moralists deemed “appropriate” centuries and millennia ago. We cannot allow Russia to be thrown off the track of the most advanced scientific developments by people who are reckless about the country’s future. We usually call such people timeservers.