OpenAI has released a powerful new AI model that understands both images and text, GPT-4, which the company calls “the latest milestone in its effort in scaling up deep learning”.
After months of rumors and speculation, OpenAI finally announced GPT-4: the latest in its line of AI language models that power apps like ChatGPT and the new Bing.
According to the OpenAI product page, GPT-4 is “more creative and collaborative than ever” and “can generate, edit, and iterate with users on creative and technical writing tasks, such as songwriting, screenwriting, or learning a user’s writing style”.
GPT-4 finally understands images submitted by users
One of the main differences between GPT-4 and GPT-3.5 (the model that powers ChatGPT) is that the new version can accept images in addition to text. For example, if you show GPT-4 a picture of eggs, flour, and milk and ask what you can do with those ingredients, the language model will interpret the image and list several possible recipes.
You can now scan marketing and sales reports with all their graphs and figures, textbooks, user manuals, and even screenshots, and ChatGPT will summarize everything in clear, appropriate language. An OpenAI employee even demonstrated that it was possible to create a complete website from a few rather vague instructions written by hand on a sheet of paper.
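For readers curious what such a multimodal request looks like in code, here is a minimal sketch using OpenAI’s Python library and the content-parts message format the API uses for images. Note that image input was not publicly available at launch; the model name, image URL, and prompt below are illustrative assumptions, not part of OpenAI’s announcement.

```python
# Minimal sketch of a text + image request, assuming image input
# is enabled for your account. Model name and URL are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: a GPT-4-family model with vision enabled
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with these ingredients?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/eggs-flour-milk.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```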
GPT-4 is smarter than ever
Unsurprisingly, GPT-4 is also much smarter than previous models. According to OpenAI’s research paper, GPT-4 passed a simulated bar exam with a score in the top 10% of test takers; GPT-3.5, by comparison, scored around the bottom 10%. The multimodal language model also scored in the 99th percentile on the Biology Olympiad, compared to the 31st percentile for GPT-3.5.

GPT-4 also outperforms ChatGPT thanks to its more advanced reasoning capabilities and can produce more concise and precise answers.
GPT-4 is not yet perfect
Despite this significant technological advance, not everything is perfect yet. OpenAI CEO Sam Altman warned, ahead of the GPT-4 reveal, that “people are begging to be disappointed and they will be”. Altman also tweeted that the model is “still flawed, still limited” and that it “still seems more impressive on first use than it does after you spend more time with it”.

OpenAI also warns that the system retains many of the same issues as earlier language models, including a tendency to make up information and the ability to generate violent and harmful text. However, according to OpenAI, it is also harder to get GPT-4 to break the safeguards that constrain its behavior.
How to try GPT-4?
OpenAI says GPT-4 will soon be available both in ChatGPT and through the API. You will need a ChatGPT Plus subscription to access it, and be aware that there will also be a usage cap for playing with the new model. API access for the new model is managed through a waitlist.
The API will allow developers to integrate the AI into their own applications. OpenAI will charge approximately 3 cents per 1,000 prompt tokens (roughly 750 words) and 6 cents per 1,000 completion tokens. The good news is that you may be able to try GPT-4 for free at some point. “Depending on the traffic patterns we see, we may introduce a new subscription level for higher-volume GPT-4 usage; we also hope at some point to offer some amount of free GPT-4 queries so those without a subscription can try it too,” explains the company.
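As a rough sketch of how a developer might call the new model through the API and estimate the cost of a single request, here is a short Python example. The per-token prices are derived from the launch figures quoted above (3 and 6 cents per ~750 words, i.e. per 1,000 tokens) and may change; the prompt is illustrative.

```python
# Sketch: calling GPT-4 via the Chat Completions API and estimating
# the cost of one request from the token usage the API reports back.
from openai import OpenAI

PROMPT_PRICE = 0.03 / 1000      # dollars per prompt token (launch pricing)
COMPLETION_PRICE = 0.06 / 1000  # dollars per completion token (launch pricing)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize GPT-4's new features in one paragraph."}
    ],
)

usage = response.usage
cost = usage.prompt_tokens * PROMPT_PRICE + usage.completion_tokens * COMPLETION_PRICE
print(response.choices[0].message.content)
print(f"Estimated cost: ${cost:.4f}")
```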