Artificial intelligence is spreading into every corner of life, growing at an exponential and practically unstoppable pace. That is why prominent experts, philosophers, entrepreneurs such as Elon Musk, and scientists are joining forces to call for a temporary pause in its development.
The letter, issued by the non-profit Future of Life Institute, had been signed by more than 1,100 people as of Wednesday morning, including Max Tegmark (MIT Center for Artificial Intelligence and Fundamental Interactions) and tech entrepreneurs such as Elon Musk (SpaceX, Tesla, Twitter) and Steve Wozniak (co-founder of Apple), among many others.
“Powerful artificial intelligence systems should be developed only once we are sure that their effects will be positive and their risks are manageable,” the letter reads.
In pointing out some of the risks of AI, the letter denounces the “out of control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” as well as the lack of “planning and management” appropriate to such a potentially dangerous technology.
‘Pausing Giant AI Experiments: An Open Letter’
The reality is that the development of these systems — such as OpenAI's new GPT-4 model — has accelerated to a pace that safety measures cannot match, leaving even close observers far from understanding everything that is happening.
Institutions such as Europol have also raised the alarm, warning about the possible misuse of these systems for phishing attempts, disinformation, and cybercrime.
![Elon Musk and other experts concerned about the uncontrolled development of AI 1 Artificial intelligence](https://cdn.computerhoy.com/sites/navi.axelspringer.es/public/media/image/2023/03/inteligencia-artificial-2995870.jpg?tf=3840x)
Not surprisingly, Sam Altman, CEO of OpenAI, has not signed the letter and prefers to keep his distance from this kind of protest. As the head of the largest artificial intelligence company in the world, his stance is quite logical.
The letter is available to read in full. It is worth emphasizing again that it was published by the Future of Life Institute, which supports “the development of institutions and visions necessary to manage the technologies that drive the world and enable a positive future” and aims to “reduce large-scale damage, catastrophes, and existential risks resulting from accidents or intentional misuse of transformative technologies.”