The letters and manifestos from Silicon Valley figures taking positions on the dangers of artificial intelligence, and on the measures to adopt around it, are a fascinating subject. As soon as one group drafts a letter, another group of CEOs and experts comes together to dismiss it, waits a few weeks… and then publishes its own public statement… not so different from the previous one.
Just a week ago, in fact, the non-profit Center for AI Safety published its “AI Risk Statement”, a mere 22 words long:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Among the signatories of this brief message were the CEOs of several of the leading AI companies, including Sam Altman himself, CEO of OpenAI, whom many considered the target of the criticism in the letter (similar in aim to this one, though far more extensive) published weeks earlier by Musk.
But the organization behind it (which defines its mission as “reducing societal-scale risks associated with AI”) seems to have decided that conciseness need not prevent it from digging a little deeper into its proposals.
Thus, they have just presented a new, more extensive document of “existing policy proposals targeting present and future harms”: “The goal of this one-page document is to outline three proposals that we believe advance AI safety. This list is not exhaustive.”
There is, however, one big difference with respect to ‘Musk’s letter’: no mention is made of the need for moratoriums on the development of advanced AIs.
I. Legal liability for damage caused by AI
The first theme addressed in the text is the need to establish stronger legal liability frameworks for harm caused by AI systems. The AI Now Institute is quoted as arguing that allowing General Purpose AI (GPAI) developers to evade liability through standard legal exemptions would be the wrong approach…

…because it would create a dangerous loophole favoring large, well-resourced companies, placing all the responsibility on ‘downstream actors’ who lack the resources, access, and capacity to mitigate every risk.
II. Increased regulatory scrutiny
The second prominent theme focuses on the need for greater regulatory scrutiny of AI system development, scrutiny that spans the entire product life cycle, not just the application layer: the companies that develop these models should be held accountable for the data and design decisions they make.
According to the document, transparency requirements and regulations targeting training data can help combat algorithmic bias and prevent companies from profiting from copyrighted materials without compensating their creators.
III. Human supervision of automated systems
The third theme highlighted in the paper is the importance of human oversight in the deployment of high-risk AI systems. Human supervision can help mitigate concerns about both bias and the spread of misinformation, and it allows for the detection (and timely deactivation) of “dangerous” AI systems.
The document makes express reference to the European Union’s regulatory proposal, praising the emphasis it places on ensuring that humans can intervene and overturn a decision or recommendation that could cause harm.
In summary…
This is yet another attempt by Altman & Co. to get ahead of the game and ‘regulate before being regulated’, telling lawmakers how best to tie the industry’s hands. We already discussed, in our write-up of Altman’s appearance before the US Senate, the problems with this kind of regulation written to the industry’s dictation. That said, ‘the devil is in the details’, and the brevity of the document does not let us appreciate them for now.