RESPONSIBLE ARTIFICIAL INTELLIGENCE: A RESPONSIBILITY OF EVERYONE


The rapid advance of Artificial Intelligence (AI) and its growing application in various areas of our daily lives deserves deep reflection by the community and a call for attention to our legal environment. AI, with its extraordinary learning capacity and influence on decision-making, forces us to examine the responsibility we share with this technology.

Are the machines and systems that use this technology autonomous enough to discern between good and evil, between what is legal and illegal, between what is ethical and what is immoral? Can they be held legally responsible for their actions and consequences? Where does the real responsibility lie?

Statements like those of Sam Altman, who after launching ChatGPT to millions of users admits to being frightened by the risks this technology can pose, or the moratorium requested by Elon Musk and Steve Wozniak to pause large-scale AI experiments until regulation exists, are acts of conscience and responsibility. What once seemed like science fiction on the big screen is now a palpable reality.

The speed of AI's development and adoption contrasts with the absence of strong regulation and adequate oversight, which could lead to unintended consequences. Europe already has a proposal for an AI Regulation (the Artificial Intelligence Act); however, the negotiating phase between the Member States to approve the final text is only now beginning, and it is not expected to apply before 2026. This Regulation does not grant legal personality to AI, because no matter how autonomous AI may become, behind this technology there is always a manufacturer, a supplier, a client and a user.

The real responsibility lies with the legal entities and individuals behind or using this technology, not with the machines.

For this reason, and as long as there is no clear and applicable regulation, organizations and citizens have a civic and moral duty to supervise and control the development and use of AI. This duty goes beyond whatever interpretations may be given of the theory of damage and of the causal link between the actors and the harm caused.

How the algorithm is trained, and what it becomes capable of, depend on us; this is a responsibility shared by all.

Companies, institutions and entities in general can establish an organizational model that guarantees that their own developments, and any technologies acquired from third parties that use AI, are subject to supervision. This means ensuring that they are safe, transparent, traceable, non-discriminatory and respectful of human rights and the environment. Such a model would rest on three key components:

Fundamental Principles of Responsible AI: These principles, in line with the organization's values, should be the basis of all projects involving AI technology. This includes respect for human autonomy, prevention of harm, equity and transparency.

Supervision Methodology: A specific methodology should be implemented to supervise the entire life cycle of an AI project, from conception to execution, with control and monitoring mechanisms.

Responsible AI Governing Body: The creation of a Responsible AI Committee is recommended, made up of representatives from areas such as Legal, Compliance, Privacy, Information Technology, Information Security, Human Resources and Purchasing. This committee would ensure compliance with the principles and methodology, in addition to maintaining a record of all AI-related projects.
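As a purely illustrative sketch (all names here are hypothetical, not part of any regulation or standard), the committee's record of AI-related projects described above could be as simple as a registry that tracks which fundamental principles each project has been reviewed against:

```python
# Hypothetical sketch: a minimal registry a Responsible AI Committee
# might keep, recording which principles each project has been
# reviewed against. Names and principle labels are illustrative only.
from dataclasses import dataclass, field

PRINCIPLES = {"human_autonomy", "harm_prevention", "equity", "transparency"}

@dataclass
class AIProject:
    name: str
    supplier: str                          # internal team or third-party vendor
    reviewed_principles: set = field(default_factory=set)

    def is_compliant(self) -> bool:
        # A project passes only when every principle has been reviewed.
        return self.reviewed_principles >= PRINCIPLES

@dataclass
class AIRegistry:
    projects: list = field(default_factory=list)

    def register(self, project: AIProject) -> None:
        self.projects.append(project)

    def pending_review(self) -> list:
        # Projects the committee still needs to supervise.
        return [p.name for p in self.projects if not p.is_compliant()]

registry = AIRegistry()
registry.register(AIProject("support-chatbot", "vendor-x", {"transparency"}))
registry.register(AIProject("credit-scoring", "internal", set(PRINCIPLES)))
print(registry.pending_review())  # only the partially reviewed project remains
```

In practice such a record would live in the organization's compliance tooling; the point is simply that the principles, the methodology and the committee's oversight can be made explicit and auditable rather than informal.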

Furthermore, as citizens and users of AI, we can contribute to the control of this technology in two fundamental ways:

Protect Our Data: Since AI systems are data-driven, we must ensure that privacy policies comply with data protection regulations when we share our information with this technology.

Use AI Responsibly: When using AI, we must provide appropriate and ethical prompts, ensuring that the tool is trained and operated in accordance with ethical, moral and legal principles, respecting human dignity.

There is no doubt that AI has the potential to improve organizations and people's quality of life. However, we all have a responsibility to ensure that it does not become a real threat to humanity. Responsible AI is a shared goal that requires active commitment from all parties involved.

Written by

Diolimar García
02-04-2024 23:44:05