Nowadays, all the artificial intelligence technology we have, even the newest and most surprising examples, falls into the category of ‘weak artificial intelligence’: forms of AI hyper-specialized in a certain task, at which they can even surpass humans, but which are not capable of going any further.
Yes, ChatGPT can answer questions on many topics, but it only answers questions: it cannot perform medical diagnoses or drive autonomously, to name just two cases.
However, on paper, AI has more advanced levels, such as the so-called AGI (artificial general intelligence): a system theoretically capable of solving any intellectual task a human being can solve, a multitasking AI able to do thousands of different things well. We are not talking about a mere aggregation of weak AIs, but about a single AI capable of undertaking all these tasks (and of self-training to keep learning).
It’s still not the ‘Skynet’ from Terminator (the detail of being endowed with self-awareness is missing), but as a concept it commands respect, which is just another way of saying that it is disturbing. And there are already people making plans for when we get to that stage. We are talking, of course, about OpenAI, the developers of DALL-E 2 and ChatGPT, who this weekend published an article on their website titled “Planning for AGI and beyond”.
“Beyond”, no less. Remember that just a few days ago, Sam Altman, CEO of OpenAI, said that “we are not that far from witnessing some AI that could be scary.”
How nice to talk about ‘governance’
On its website, OpenAI explains that it considers its mission “to ensure that artificial general intelligence benefits all of humanity.” In the article, the company states: “If AGI is successfully created, it could help us uplift humanity by increasing abundance, boosting the global economy, and aiding in the discovery of new scientific knowledge.”
“On the other hand, AGI would also carry a serious risk of misuse, drastic accidents, and social disruption. Because AGI’s advantage is so great, we don’t think it’s possible or desirable for society to halt its development forever; instead AGI, society and developers have to figure out how to get it right.”
“[Once the first AGI is created] progress is likely to continue from there, possibly maintaining the speed of this past decade for a long period of time. If this is true, the world could become vastly different from how it is today, and the risks could be extraordinary. A superintelligent AGI misaligned with the general interest could cause serious damage to the world; an autocratic regime with a decisive superintelligence advantage could also do that.”
“As our systems move closer to AGI, we’re becoming increasingly cautious about building and deploying our models.”
In its document, OpenAI makes generic references to the need to ensure that “the benefits, access and governance of AGI are shared widely and fairly.” But it does not say how to guarantee that, nor does previous experience with other ‘disruptive’ technologies suggest that this is the likely outcome: even today, what keeps the world safe from nuclear war is not ‘global governance’, but the principle of Mutually Assured Destruction between the great powers.
The CEO of OpenAI does have an emergency plan, but it will be of little use to us
The OpenAI CEO, however, does seem to have his own plans for this eventuality, and they sound a lot like the plans many people had during the Cold War for dealing with nuclear apocalypse. In 2016, when Altman was known primarily for chairing the startup incubator Y Combinator, the New Yorker published a profile of him that described his status as a ‘prepper’, someone who prepares for potential global disasters.
He did not speak of nuclear apocalypse, but gave as examples pandemics caused by a “super contagious virus” (remember, this is from 2016) and “an AI that attacks us.” Bear in mind that, at the time, only a few months had passed since he had founded OpenAI together with Elon Musk to promote, precisely, the development of AI. Applying the bandage before the wound?
“I try not to think about it too much. But I have weapons, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli army and a large piece of land in Big Sur (a sparsely populated region of California south of Silicon Valley) that I can fly to.”
Yes, indeed: the creator of what is today the main driving force behind AI development, an organization that began as a non-profit but now earns money hand over fist from it, has plans to put himself (and only himself) safely inside his perfectly equipped bunker if AI development gets out of hand.
Soothing.
Via | Futurism & OpenAI Blog