Both users and companies are increasingly adopting technologies based on artificial intelligence. These have arrived to make our lives easier, especially in the most tedious tasks. This is the case of generative AIs such as ChatGPT, which thanks to its powerful GPT-4 language model is capable of help us in all kinds of situations while responding and processing us in natural language.
However, it is also important to know the risks involved in the use of this type of technology. It is not the first time that certain companies and senior officials have expressed concern about the direction of artificial intelligence. In the case of Google, despite having launched Bard, it also wants to tie up all the possible ends with respect to security. For this reason, it has launched the Secure AI Framework (SAIF), a kind of roadmap in order to shed some light on specific security mechanisms for combat the dangers of misuse of artificial intelligence.
Basic ideas for companies to fortify their security
Although it is currently a framework still in its infancy, the company wanted to explain the concept of this idea and Google’s future plans to collaborate in a community that use artificial intelligence safely. For this they have published an entry explaining all this on their blog.
Google’s intention with SAIF is basically to help other companies and organizations to apply basic security and control mechanisms over their artificial intelligence-based systems and protect themselves from the new wave of threats that take advantage of these technologies.
This is important because it could help companies protect themselves from hacker attacks that take advantage of their artificial intelligence systems to manipulate their models or steal the data with which said models were trained. Google has boiled down its security framework into six key elements:
- Expand protection to infrastructure and the AI ecosystem.
- Extend the detection and response to the threats that arise.
- Integrate AI in defense against other AI-based threats.
- Harmonize control across the platform to ensure consistent security across the organization.
- Tailor controls to adjust damage mitigation measures and quickly generate feedback loops for AI deployment.
- Contextualize the risks of AI-based systems in other company processes.
Although it is still a very premature plan, it seems that Google wants for the moment that organizations and companies have a basic basic idea about security in artificial intelligence. “Even though people are looking for more advanced approaches, you have to remember that you also have to consider the basics,” says Phil Venables, head of security at Google Cloud.
In recent months, all kinds of threats related to the misuse of artificial intelligence have appeared. An example is the injection of promptswhich consists of tricking the AI with malicious commands and ideas “hidden” in blocks of text to try to change the behavior of the AI and obtain sensitive information.
Among other threats we can find theft of models that have trained the AI or ‘data poisoning’, which consists of inject corrupt data to the model to change its behavior and thus have control. Although Google’s idea of adopting stronger measures against threats based on artificial intelligence is something that companies must standardize, it is clear that there is still not much effort dedicated to SAIF, so we have to wait to see what it all leads to. .
Image | Hitesh Choudhary
In Genbeta | What AI Was Missing: WordPress Is Now Able To Write You Full Text For Blogs (And You Can Try It For Free)