This week the president of Microsoft joined the club of business leaders who have released major artificial intelligence tools to the general public and then issued dire warnings about them. In Microsoft's case, what is most grating is that the company fired its entire AI ethics team while finalizing its big AI launches this year (which even earned it a public rebuke from United States authorities).
Many of the tech giants that create AI tools talk about regulating them, but they insist it should be governments that do the regulating, washing their own hands of the matter.
At the same time, there is no sign that these companies share with national or supranational authorities what they are working on (which makes sense from their point of view: they keep their plans under wraps to avoid being copied by the competition and to stay ahead of the curve).
Taken together, this means we have powerful AI tools in our hands (with more expected to come), but without any regulation.
The need to regulate AI according to its creators
Mira Murati, one of the great minds behind ChatGPT and probably the most powerful woman in artificial intelligence, leads a very low-key life, leaving few public records of her opinions and ideas. She did, however, appear on the American program The Daily Show with Trevor Noah, where they touched on topics such as the regulation of artificial intelligence and the ethics of these technologies.
In an interview with Time, she had already said that AI can be misused, or used by bad actors, which raises questions about how the use of this technology is controlled globally. She believes that, more than companies, it should be governments and regulatory bodies that address these issues.
The Center for AI Safety is an NGO that aims to "reduce societal-scale risks from AI," and a few months ago it published the "Statement on AI Risk" on its website. The statement, signed by AI creators, academics, journalists and regulators, among others, aims to warn of the dangers of developing this technology irresponsibly. It reads, in full:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Companies that have come together for their own regulation
Against this backdrop, some large companies have announced alliances and measures for self-regulation, though for now it is unclear whether this is effective action or marketing. In July, for example, the creators of ChatGPT joined Microsoft and Google to regulate the development of artificial intelligence.
Four of the main AI players announced an alliance to form a new industry body designed to ensure the "safe and responsible development" of artificial intelligence.
Thus OpenAI, the developer of ChatGPT, together with Microsoft, Google and Anthropic, announced the Frontier Model Forum, a coalition that draws on the expertise of its member companies to develop technical assessments and benchmarks and to promote best practices and standards.
Initiatives to regulate AI
In the European Union, the Commission announced in July that, as part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to guarantee better conditions for the development and use of this innovative technology. AI can bring many benefits, including better healthcare, safer and cleaner transportation, more efficient manufacturing, and cheaper, more sustainable energy.
In April 2021, the Commission proposed the first EU regulatory framework for AI. It proposes that AI systems usable across different applications be analyzed and classified according to the risk they pose to users; the different risk levels would entail more or less regulation. Once approved, these will be the world's first AI rules of their kind.
Italy even banned ChatGPT in March "with immediate effect" over serious doubts about its data protection policy. It was the public authority, the Guarantor for the Protection of Personal Data (the equivalent of Spain's Data Protection Agency), that ordered the limitation of Italian data processing by OpenAI "with immediate effect," opening an investigation in parallel.
Italy later allowed the service again, although certain changes were demanded of OpenAI's flagship tool. At the time, OpenAI was given 20 days to prove it complied with the law, which in practice meant taking concrete action. If it missed that deadline, the ban would go from provisional to permanent, and economic sanctions were on the table.
In Spain there has been talk of regulation on various occasions, though mostly in the past, and there have been no decisive moves in 2023 even as this technology advances unstoppably. In 2020 the Government announced an investment of 600 million euros in a new employment plan supported by artificial intelligence, and a public agency was also proposed to supervise the use of algorithms and AI in Spain.
Image | Photo by Lenin Estrada on Unsplash
In Genbeta | https://www.genbeta.com/a-fondo/amenaza-inteligencia-artificial-real-estos-20-empleos-riesgo-ser-sustituidos-ia