Every year, the Sohn Conference Foundation organizes an investment ideas contest, whose winning idea is presented at a conference held annually in the US. But that is not the only thing on the event's agenda. At the 2023 conference, held on May 9, Stripe CEO Patrick Collison sat down for a conversation with his OpenAI counterpart, Sam Altman, in which they talked at length about the future of AI.
They started by talking about economics (and about Worldcoin, Altman's 'crypto' project). Altman said he was convinced that AI could trigger a massive increase in real economic growth by identifying the most productive investments and designs for companies.
But in his view, AI can have far more relevant and profound economic impacts. As an example, he noted that an AI capable of effectively allocating funds to cancer research could save billions of dollars while accelerating progress toward a cure.
About ChatGPT (and other OpenAI projects)
What about the impact of ChatGPT and all that OpenAI is doing on the web search industry? Altman sums it up like this:
“Language models are going to greatly change Internet search, but they’re not a threat to search. And they’d be a threat to Google only if Google didn’t do anything about it… and obviously Google isn’t going to do nothing.”
Remember that this video was recorded the day before Bard's global unveiling at the Google I/O event, a launch that, as we have already analyzed here, will have a decisive impact on search as we have understood it until now.
Altman also offered two other relevant data points:
What do users most often use ChatGPT for? The most common use case, he said, is summarizing text “so you can keep up with email and Slack messages.”
In addition, Altman announced that OpenAI is developing a program that allows natural conversation with ‘AI agents’ (programs like AutoGPT, designed to use language models to carry out automated tasks without relying on constant human feedback), which could have important consequences once voice communication is possible. A rough sketch of how such an agent loop works is shown below.
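For readers unfamiliar with these 'agents', the following Python snippet is a minimal, illustrative sketch of the loop they implement: the model is repeatedly asked for the next action toward a goal, the action is executed, and the result is fed back in, with no human input between steps. `call_llm` and `execute` are hypothetical placeholders, not any real API from AutoGPT or OpenAI.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real agent would query an actual model here."""
    return "DONE: (stub) no real model attached"

def execute(action: str) -> str:
    """Hypothetical tool executor (web search, file I/O, etc.)."""
    return f"result of {action!r}"

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []  # results of previous steps, fed back to the model each turn
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Previous results: {history}\n"
            "Reply with the next action, or 'DONE: <answer>' if finished."
        )
        action = call_llm(prompt)
        if action.startswith("DONE:"):       # the model decides when to stop
            return action.removeprefix("DONE:").strip()
        history.append(execute(action))      # act autonomously, no human in the loop
    return "stopped: step limit reached"

if __name__ == "__main__":
    print(run_agent("summarize today's unread Slack messages"))
```

The point of the sketch is the absence of a human between iterations; voice input would simply replace the initial `goal` string.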
On the debates surrounding AI safety
Altman believes that relying solely on reinforcement learning is not an adequate long-term solution for AI, and that the key to achieving ‘alignment’ (the technical term for matching AI and human interests) is understanding the inner workings of the models:
“You know, being able to say exactly ‘here is the circuit’, or the set of artificial neurons, where something is happening, and tune it in a way that produces a robust change in model performance.”
“I think that nuclear materials and AI supercomputers have some similarities, and that this is the field from which we can draw more parallels and inspiration than any other… but I would also warn people against jumping to conclusions from this. I am thinking specifically of proposing something like an IAEA for artificial intelligence.”
He believes that while the temptation to draw too much inspiration from previous technologies should be resisted, establishing a global regulatory agency for powerful AI training systems is crucial for safety:
“But if it existed tomorrow, what would be the first thing it should do?” The answer, in his opinion, is to demand that systems above a certain capability threshold undergo safety audits and assessments before their release.
In Altman’s view, ensuring the safety of AI requires more technical work, rather than mere philosophical discussions on Twitter:
“Most of the people who say they’re really concerned about AI safety seem to just spend the day on Twitter saying they’re very concerned about AI safety. There are people who are very concerned and doing a lot of technical work on it, but we need a lot more people like that. What the world needs is not more people writing long philosophical rants on Twitter.”
Altman and Collison also addressed the theory, held by many in Silicon Valley, that Chinese researchers publish a large number of papers but their impact per paper is relatively low, raising the suspicion that they may not be publishing their most important work. “Do you think that’s likely to be true?” Collison asks. “I don’t trust my intuitions on this topic… I just feel confused,” Altman replies.
The open source push
Both agree in highlighting the significant and increasingly rapid improvement of large ‘open source’ language models, although Altman still reserves for proprietary ‘superlarge’ models the task of generating new advances in the field of AI.
“One of the big surprises this year, for me, has been the progress in open source models, and this frenetic pace in the last 60 days or so. How good do you think open source models will be a year from now?” Collison asks.
“But I think we’re going to be in a world where there are very capable open source models and people are using them for all sorts of things, and the creative power of the whole community is going to impress us all.”
Asked whether an open source model might be sufficient for many ‘everyday use cases’, Altman says he believes that “for many economically valuable things, yes, a small open source model will suffice.”
He adds, though, that open models will always remain a couple of years behind proprietary ones.
Altman also had words for Facebook, since the release of the code for its LLaMA model flipped the switch on the current boom in open source language models.
According to the CEO of OpenAI, Facebook’s AI strategy “has been unclear”, but he is confident that, given the competent team behind its AI projects, it will end up adopting “a more coherent strategy.”
Image | Based on original by TechCrunch on Flickr
In Genbeta | “We are not here to masturbate to the number of parameters.” The CEO of OpenAI downplays what was most praised about GPT-4