Permanent Mission of the Russian Federation to the United Nations

Statement by Chargé d'Affaires of the Russian Federation Dmitry Polyanskiy at UNSC briefing on artificial intelligence

Mme. President,

We welcome to this meeting the Secretary-General, whose address we followed very carefully. We also thank the briefers for their insights.

The Russian Federation attaches great importance to the development of advanced technologies that should serve mankind and contribute to socio-economic progress. Artificial Intelligence (AI) ranks among the latest cutting-edge technologies and is attracting considerable scientific interest. However, its practical significance and potential applications, not to mention its hypothetical projection onto political processes, have yet to be assessed.

Taking into account the expert assessments made today, it appears that AI is still at an early stage of its development as a technology. Too much about it remains unknown.

Therefore, the choice of the Security Council Chamber as a platform for this discussion was not obvious to us. Even the concept note for this meeting acknowledges that there are no concrete arguments to back the idea of a natural nexus linking AI to issues of international peace and security, at least not yet. The concept note merely lists a set of unrelated technological trends which, if we so wished, we could identify in and apply to any area.

Mme. President,

When preparing for this meeting, I decided to put the question of whether AI creates risks for international peace and security to the first-hand source, that is, artificial intelligence itself. Here is what it answered.

“Artificial intelligence can pose a certain threat to international peace and security, but it can also have a positive impact. Here are a few aspects to consider:

1. Cybersecurity. The development of AI may cause new vulnerabilities and threats in cyberspace. AI can be used to create powerful cyberweapons, automate cyberattacks and bypass existing defenses.

2. Autonomous weapons systems. The development of AI-based autonomous weapons systems could pose security risks, as such systems could decide to use force without direct human involvement. This could lead to unpredictable consequences and exacerbate armed conflicts.

3. Spreading disinformation. AI can be used to create and spread disinformation and fake news, which can undermine the credibility of information and contribute to social and political tensions.”

We therefore consider attempts to establish a generic linkage between artificial intelligence and threats to peace and security to be somewhat far-fetched. As you can see, AI agrees with us on that.

To begin with, the international community has to determine the nature and character of the possible risks and threats, assess their scale, outline tentative response measures, and so on. This requires a professional discussion based on scientific expertise, which may take more than one year. Such a discussion is already underway at specialized platforms.

Certain military aspects of AI that may have a projection on global and regional security are also discussed in specialized formats. In the case of lethal autonomous weapons systems, this is the Group of Governmental Experts of the States Parties to the Convention on Certain Conventional Weapons. Security issues in the use of information and communications technologies (ICTs), for their part, are discussed in their entirety in the dedicated United Nations Open-Ended Working Group (OEWG) under the auspices of the General Assembly. We believe it is counterproductive to duplicate these efforts.

Mme. President,

Like any advanced technology, AI can be beneficial to humanity or destructive, depending on who uses it and for what purposes. Today, unfortunately, we are witnessing how the West, led by the United States, is undermining trust in its own technological solutions and in the IT companies that implement them. American intelligence services interfere in the activities of the industry's largest corporations, manipulate content moderation algorithms, and carry out surveillance of users, including through manufacturers' backdoors in hardware and software. Such facts come to light on a regular basis.

At the same time, the West sees no ethical problem in allowing AI to let hate speech through on social media platforms when that proves politically convenient, as in the case of the extremist META corporation and its "tolerance" of calls to annihilate Russians. Meanwhile, the algorithms are set to propagate fakes and disinformation and to block anything that appears "wrong" to the owners of social networks and their handlers in the intelligence services, that is, the truth that hurts the eye. In the spirit of the notorious "cancel culture," AI is made to edit bulk digital data, thereby fabricating a false history.

Summing up, the main source of threats and challenges in this area is not AI itself, but the ethically compromised pioneers of this technology from among the "advanced" democracies. This aspect is no less important than the problems raised by the British Presidency as the reason for this meeting.

Mme. President,

The idea that artificial intelligence will open up new markets and sources of wealth is very popular today. However, the issue of the uneven distribution of such benefits is slyly avoided. The Secretary-General addressed these aspects in detail in his recent report on digital cooperation.

Digital inequality has grown to the point where some 89 per cent of the population in Europe has access to the Internet, while in low-income countries this share stands at only 25 per cent. Digital services now account for nearly two thirds of global trade in services, yet the cost of a smartphone in South Asia and sub-Saharan Africa exceeds 40 per cent of an average monthly income, and African users pay more than three times the global average for mobile data. Finally, the acquisition of digital skills by citizens is supported by governments in fewer than half of the world's countries.

The reason is that the wealth created by innovation is distributed unevenly and is dominated by a handful of large platforms and states. Digital technologies have led to significant gains in added value and productivity, but these benefits do not translate into shared prosperity. UNCTAD's latest report, the “Technology and Innovation Report 2023”, warns that developed countries will enjoy most of the benefits of digital technologies, including artificial intelligence. Digital technologies are accelerating the concentration of economic power in the hands of a narrow group of elites and companies: the combined wealth of tech billionaires totaled $2.1 trillion in 2022.

Behind this lies a massive gap in governance, particularly across borders, and in public investment. Historically, digital technologies have been developed privately, and governments have consistently lagged behind in regulating them in the public interest. This trend needs to be reversed. States should play a pivotal role in developing regulatory mechanisms for AI. Any self-regulatory tools for the industry must comply with the national laws of the country where the company operates. We oppose the formation of supranational oversight bodies in the field of AI. We also consider the extraterritorial application of any norms in this area inadmissible. Universal agreements in this area can be reached only on the basis of an equal, mutually respectful dialogue among sovereign members of the international community, with due consideration given to all the legitimate interests and concerns of the participants in the negotiation process.

Russia is already contributing to this process. In our country, major IT companies have developed a national Code of Ethics in the field of artificial intelligence, which sets guidelines for the safe and ethical development and use of AI systems. It does not establish any legal obligations and is open for accession by foreign specialized organizations, private companies, and academic and public institutions. The Code has been formalized as a national contribution to the implementation of the UNESCO Recommendation on the Ethics of Artificial Intelligence.

Mme. President,

In conclusion, I would like to emphasize that no AI system should compromise the moral and intellectual autonomy of a human being. Developers should regularly assess the risks associated with the use of AI and take steps to minimize them.

Thank you.

 
