AI: Responsibilities of its creators and users

Last week, in this column, I addressed some of the main challenges that the regulation of artificial intelligence in Mexico will entail. The problem stems from the very legal definition of the concept and its implications for the regulation of related matters, but it is not limited to these aspects. Another challenge posed by regulation in this area is the delineation of responsibilities between those who design AI systems and those who use them.
Artificial intelligence has entered our lives so abruptly that society sometimes perceives it as if it were autonomous, an anonymous entity that constructs and develops itself, as if acting of its own volition, without the umbilical cord connecting it to its creators, and without the interference of the user who gives it instructions and makes requests. The problem with this conception is that, if it were replicated in the legal realm, there would be no way to define those responsible for any rights violations committed through the use of AI, since algorithms are not identifiable legal entities that can assume liability.
For this reason, when regulating this matter, the scope of the responsibilities of both creators and users must be carefully analyzed to preserve legal certainty and the rights and freedoms of both. Just as law developed the theory of legal fiction to allow the creation of legal entities, with their respective legal spheres, it is now necessary to adapt legislation to allow and incentivize the development and expansion of artificial intelligence systems, without leaving the rights of all involved unprotected.
As for AI developers, it is clear that their responsibility includes the process of identifying and selecting the databases and information they will feed into their systems or platforms. A clear example is the disputes that have arisen in several countries over the potential violation of intellectual property rights when AI system developers incorporate databases or content owned by third parties without obtaining their authorization or paying for their commercial exploitation. As for biased algorithms, any restrictions on their implementation would have to be carefully evaluated to ensure respect for freedom of expression.
As for users, many of them have developed an almost blind trust in the information they obtain from AI systems. Some perceive them as a kind of modern-day oracle, responsible for solving their problems, to the point that they make decisions with a profound impact on their lives based on the recommendations generated by an AI platform.
Some people ask AI for psychological support or personalized advice to resolve a specific situation, and they accept the results of the consultation even over their own judgment. According to Dr. Mara Dierssen, president of the Spanish Brain Council, when excessive tasks are delegated to AI, neurological effort is reduced, which also diminishes the ability to think critically and solve problems independently (Vademecum, Royal National Academy of Medicine of Spain). There are also cases in which users employ AI tools to generate or distribute illegal content.
In these cases, AI creators should not have to assume responsibility for how each person uses their platforms or for the decisions users make based on them. Perhaps they should simply be required to include warning labels, rather than being obliged to restrict or exhaustively verify the content their systems generate, as that would drastically diminish the benefits of AI.
El Economista