On Artificial Intelligence Day, March 13, Russian AI developers, universities, and industry organizations (Sber, Yandex, MTS AI, Skoltech, MIPT, ITMO University, Innopolis University, the V.P. Ivannikov Institute for System Programming of the Russian Academy of Sciences, HSE, and Lobachevsky State University of Nizhny Novgorod) signed a declaration on the responsible development and use of services based on generative artificial intelligence (GenAI).
The document was signed via the Gosklyuch service. The declaration develops and details the Code of Ethics in the field of artificial intelligence as applied to generative AI.
The declaration includes recommendations for both developers and users. Developers are advised, for example, to assemble a team of experts from different industries to systematically check content created by generative AI services for compliance with moral and ethical standards, and to check the services themselves for resistance to hacking and for information security.
In addition, the declaration recommends that AI developers inform users that generative AI is being used in a service wherever this is feasible and its use is not obvious.
The declaration reminds users that the law establishes liability for disseminating unlawful information, even if that information was created using GenAI.
"If you have created incorrect information using services based on generative artificial intelligence, please report it to the developer. This will help make the service better and safer," the authors of the declaration recommend.
Stanislav Korop, Acting Director of the Financial Technologies Department of the Bank of Russia, spoke about the risk-oriented approach to AI regulation that the Central Bank of the Russian Federation follows. In his view, this approach would also be optimal for regulating generative AI, should such regulation become necessary in the near future.
"The task of the regulator in terms of risks from AI is to find a balance between improving the conditions for the development of technology and at the same time for high-quality risk management. There are three types of risks associated with the use of AI: those that do not require regulatory intervention; those that require regulatory intervention; those that necessitate voluntary ethical regulation," explained Stanislav Korop.
According to him, ethical codes are widely used around the world, including in the European Union, Great Britain, Canada, and Korea. Ethical standards make it possible to flexibly and adaptively point market participants to risk zones and to recommend how those risks should be managed.
"Generative AI in the financial industry is used mainly in relations with clients: organizations embed it in chatbots, as well as in services that are not directly related to the provision of financial services - to generate texts and images. The potential damage and the emergence of risks here are low. But since the industry is developing rapidly, we cannot say with certainty that such risks will not appear," said Stanislav Korop.
According to him, decisions on the need for regulation should and will be made taking into account a comprehensive analysis of the emerging problem and an assessment of the potential consequences of risks.