Safe and Secure Innovation for Frontier Artificial Intelligence Models Act intensifies the debate on AI regulation!
The ethical dilemmas posed by the use of artificial intelligence would be endless without a legal framework to regulate them. California has taken a first step in that direction, becoming the first US state to pass a bill regulating the training of large-scale AI models.
SB 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, establishes a series of requirements for companies that develop and train these models. Unsurprisingly, the measure has already generated debate among the companies it would regulate.
AI does not escape the law
According to the senators backing the bill, it seeks to act as a “safeguard to protect society” from the misuse of AI.
Some of the requirements proposed by the new law include:
- Ensuring that an AI model can be quickly and completely disabled if it is deemed unsafe.
- Protecting language models against “unsafe post-training modifications” or anything that could cause “critical harm.”
Several experts have already spoken out against it, fearing that the new regulations could stifle innovation and make the work of independent developers more difficult. Small developers and companies like OpenAI have expressed concern about the difficulties they might face in meeting all the requirements established by the law, and about the penalties that noncompliance would entail.
For his part, Professor Geoffrey Hinton, former director of artificial intelligence at Google, praised the bill, saying the risks of powerful artificial intelligence systems are “very real and should be taken extremely seriously,” according to 9to5mac.
Naturally, Apple is taking a keen interest in these regulations, given that the company is actively developing Apple Intelligence features, scheduled for public release this fall with the iOS 18.1 and macOS Sequoia 15.1 updates, although the iOS 18.1 beta already includes some of this functionality.
However, the decision on whether to approve or veto the bill, which rests with California's governor, will have to wait until the end of September.
Setting precedents
With this law, California positions itself as a leader in AI regulation. This could influence legislation at the federal level and help combat the risks generated by the misuse of AI, such as the spread of misinformation or the creation of distorted images and videos (deepfakes). The latter became a major concern underscoring the lack of regulation when Google launched its Pixel Studio tool, which caused a stir by allowing the generation of violent images.
At the federal level, the Biden administration has unveiled a set of voluntary guidelines for tech companies.
Meanwhile, in March of this year, the European Parliament approved the Artificial Intelligence Act, which guarantees security and respect for fundamental rights while driving innovation.