
In 2019 OpenAI did not allow access to a GPT-4 ancestor because it was “too dangerous”. Now he puts a nuclear bomb in our hands

By Adam Straker · 16/03/2023 · 6 Mins Read

2019. A company then unknown to many, a certain OpenAI, announced that it had developed an artificial intelligence capable of writing fake news texts without human help, endowed with such verisimilitude that it decided to release only a limited version of the model.

A video published by The Guardian showing the technology (named GPT-2) in operation demonstrated how, from a single sentence, it could generate a long, coherent text with false content (including invented sources of information). Nothing surprising… by today’s standards.

Calling its own AI “dangerous” or “potentially malicious” earned OpenAI headlines at the time, but also criticism from the technology industry itself. Nvidia’s then director of research responded with a frontal attack on OpenAI on Twitter:

“You are exaggerating in a way that has never been done before. What nonsense is ‘malicious’? You are doing science a disservice by using that word. If you think it really is capable of doing what you say, you should open it up to researchers, not to the media that craves clickbait.”

Over the next half year, the fear of that ‘fake news machine’ came to nothing: first a student managed to replicate and publish the model, and then OpenAI itself decided to release it ‘incrementally’. Connor Leahy, the student in question, acknowledged that fake news might be a very real problem, but it was nowhere near a “new” one.

He also stated that humans still wrote better texts than GPT-2, and that using the AI merely reduced the cost of generating them, little more. It was certainly an advance for its time, but its ability to “sound human” was still low, and it tended to “hallucinate” frequently. Maybe it wasn’t so ‘dangerous’ after all.

Shortly after, GPT-2 gave way to GPT-3, GPT-3 to GPT-3.5, and the latter became the basis for a popular chatbot: ChatGPT, which is capable of passing university exams, filling Amazon with books generated without human intervention, or standing in for a teacher as a source of information. Now, in addition, the paid version of ChatGPT offers access to GPT-4, a multimodal, more efficient and more ‘human’ version of GPT.

One thing has remained constant from 2019 until now: OpenAI has ceased to be ‘open’ and hardly provides any information about its AI models to the research community.

The curious thing is that, after all the reservations OpenAI showed about allowing access to GPT-2 (an AI we can now only describe as limited, for all the revolution it represented at the time), everything indicates that the company has chosen to be less careful when launching its successors onto the market.

And this despite the fact that all indications are that these successors are potentially a more dangerous weapon than GPT-2.



GPT-4, more dangerous than a box of bombs

Paul Röttger is an AI expert who recently recounted on Twitter that he was part of OpenAI’s red team for GPT-4, responsible for testing its ability to generate harmful content across the successive iterations it went through during its six months of testing:

“It convinced me that model security is the most difficult and most exciting challenge in the field of natural language processing right now.

Security is difficult because today’s models are general purpose tools. And for almost every prompt that is safe and useful, there is an unsafe version. […] The search for insecure use cases itself is not easy. Finding and evaluating the right prompts requires expert knowledge.

You want the model to write good job ads, but not for some Nazi group. Blog posts? Not for terrorists. Chemistry? Not for explosives… Also, it’s not always clear where to draw the lines when it comes to security. What is safe or not depends on who you ask.”

The official GPT-4 ‘white paper’ (PDF here) addresses how responses to certain ‘prompts’ change between the original, unrestricted version of GPT-4 and the version we can already try in ChatGPT Plus.

Thus, it shows us, for example, that the unfiltered version of the model is capable of providing “hypothetical examples” of how to kill people while spending only €1 on the task; fortunately, the final version states that it is “not able to provide information or help to cause harm to others.”


Something similar occurs with the instructions for synthesizing dangerous chemicals, for evading detection while laundering money, or for mutilating oneself “without anyone noticing.” But we cannot help but think that GPT-3.5 already had these limitations and, nevertheless, a group of sufficiently creative users managed to create an ‘RPG’ that unlocked them… by inducing a kind of ‘multiple personality’ in the chatbot.

What guarantees that, given enough time and motivation (and terrorists usually have plenty of the latter), someone won’t end up discovering the weak points of the new model? If OpenAI had so many reservations four years ago because GPT-2 could make it easier to produce mere fake news, what has changed so that it now places an even more dangerous tool in our hands?

What has changed at OpenAI? Cui prodest?

We could say that nothing has changed: that its 2019 attitude was nothing more than a careful marketing operation to get the media talking about the company. Or maybe it was just a matter of corporate reputation: in those years, the panic over the ‘fake news’ unleashed after Trump’s victory a few years earlier still persisted, and nobody wanted to be singled out as its amplifier; highlighting its concern about that eventuality was the company’s way of avoiding it.

“OPEN” AI: “Our mission is to ensure that artificial general intelligence benefits all of humanity.”

GPT-2: “too dangerous for you”

GPT-3: “we sell it to you, wanna join the waitlist?”

GPT-4: “…”

— church of scarcity, ceo of being trash (@crcdng) July 7, 2020

However, there is another aspect of OpenAI that we are not taking into account: ‘Cui prodest?’ (Who benefits?). Just one month after announcing the existence of GPT-2, OpenAI Inc. (an officially non-profit entity) created OpenAI LP as a for-profit subsidiary, in order to raise more capital and offer better salaries.

As Elon Musk has recently pointed out, the OpenAI that he co-founded in 2015 (and left in 2018, months before this change) is nothing like the current one: a company that earns tons of money from the advances of AI… while its CEO has had, since 2016, a bunker in which to take refuge in case “an artificial intelligence attacks us”.

Whenever you call Sam Altman to ask why OpenAI is no longer a non-profit or even “open” pic.twitter.com/TxMXR2Cf2d

— Dare Obasanjo (@Carnage4Life) March 15, 2023

Not to mention that OpenAI increasingly behaves like Microsoft’s ‘AI development department’: a company that has just dissolved its responsible-AI team because it was not interested in hearing about slowing down its pace of product launches.

Image | Based on DC Comics original

In Genbeta | Open source, a key element in the artificial intelligence explosion that is taking place before our eyes
