In this publication, Apple explains how it carries out this process, and its approach is, to say the least, ingenious.
How does Apple train its artificial intelligence?
The challenge is complex: improving features such as smart writing, automatic summaries, or Genmoji generation without accessing sensitive user information. To achieve this, Apple relies on techniques such as differential privacy and the use of synthetic data, an approach that makes it possible to study usage patterns without identifying anyone individually. In other words, yes, Apple needs data to train its AI, but it does not collect it directly from your emails or your conversations.
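Apple has not published the exact mechanism it uses, but the core idea behind differential privacy can be sketched with the classic "randomized response" technique: each individual report is deliberately noised so it proves nothing about one person, yet aggregate statistics remain recoverable. Everything below (function names, the 0.75 probability, the 30% rate) is illustrative, not Apple's actual implementation.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise flip a coin.
    Any single report is deniable, but aggregate frequencies stay estimable."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise: observed_rate = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
true_rate = 0.30  # hypothetical fraction of devices where some property holds
reports = [randomized_response(random.random() < true_rate) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 2))  # estimate close to the true 0.30
```

No single `True`/`False` report reveals anything about the device that sent it, but with enough reports the server can still recover the overall rate to within a fraction of a percent.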
The system works in a way that is as sophisticated as it is careful. Apple generates fictitious emails on common themes, such as an invitation to play tennis at 3:00 p.m., and converts them into "embeddings": numerical representations of the content that capture aspects such as the topic, the language used, or the length. These embeddings are sent to a small group of users who have enabled the Device Analytics option. Each of those users' iPhones compares the synthetic emails with real samples stored locally and determines which synthetic email they most closely resemble.
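The "compare embeddings on device" step can be sketched as follows. The toy `embed` function here is just a bag-of-words over a tiny fixed vocabulary; Apple's real embeddings come from a language model, and the sample emails, vocabulary, and similarity metric are all assumptions for illustration.

```python
import math

VOCAB = ["tennis", "3pm", "play", "report", "review", "dinner", "reservation", "friday"]

def embed(text: str) -> list[float]:
    """Toy stand-in for a real embedding: count vocabulary terms, then
    normalize to unit length so cosine similarity is a dot product."""
    words = [w.strip("?.,!").lower() for w in text.split()]
    vec = [float(words.count(term)) for term in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Synthetic emails generated server-side (illustrative themes).
synthetic = [
    "Want to play tennis at 3pm tomorrow?",
    "Quarterly report attached for review",
    "Dinner reservation confirmed for Friday",
]
synthetic_vecs = [embed(s) for s in synthetic]

# On-device step: find the synthetic email closest to a real local message.
# Only the winning index would ever leave the device (and even then, noised).
local_message = "Are you free for tennis at 3pm on Saturday?"
local_vec = embed(local_message)
best = max(range(len(synthetic)), key=lambda i: cosine(local_vec, synthetic_vecs[i]))
print(best)  # 0 -> the tennis-themed synthetic email matches best
```

The point of the design is visible in the last lines: the real message is read and compared locally, and the only candidate output is an index into the synthetic list, never the message itself.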
The key is that the selection of the most similar email happens on the device itself, without Apple ever seeing the real content. From there, through differential privacy, a technique that introduces random noise to protect personal data, Apple can learn which synthetic emails have been selected most often across users, which allows it to deduce how people typically express themselves without having read a single one of their messages.
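Putting the two pieces together, the aggregation might look like this: each device reports the index of its best-matching synthetic email through a noisy channel, and the server only ever sees the resulting histogram. The probabilities, counts, and ground-truth distribution below are invented for the sketch; Apple's published materials do not specify these parameters.

```python
import random
from collections import Counter

def noisy_report(true_index: int, num_options: int, p_truth: float = 0.8) -> int:
    """A device reports its best-matching synthetic email index, but with
    probability 1 - p_truth it sends a uniformly random index instead, so no
    single report reveals what was actually on that device."""
    if random.random() < p_truth:
        return true_index
    return random.randrange(num_options)

random.seed(1)
NUM_SYNTHETIC = 3
# Hypothetical ground truth: 60% of devices match email 0, 30% email 1, 10% email 2.
true_choices = random.choices([0, 1, 2], weights=[0.6, 0.3, 0.1], k=50_000)

reports = [noisy_report(c, NUM_SYNTHETIC) for c in true_choices]
histogram = Counter(reports)
# The server only sees the noisy histogram, yet the most popular
# synthetic theme still stands out clearly.
most_popular = histogram.most_common(1)[0][0]
print(most_popular)  # 0
```

With 80% truthful reports, synthetic email 0 is expected to receive about 55% of all reports versus roughly 31% for email 1, so the ranking survives the noise while individual reports stay deniable.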
Apple puts the user first
This information is then used to train models that improve features such as email summaries, automatic writing, or smart replies. And all of this happens without compromising user privacy, since the data collected is not linked to a device, an account, or even an IP address. The same goes for Genmoji: Apple only analyzes patterns that have been used by hundreds of people and ensures that no request is unique or identifiable.
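The "used by hundreds of people" guarantee amounts to a minimum-count threshold: patterns below it are dropped before anyone ever analyzes them. A minimal sketch, assuming a hypothetical `min_users` cutoff and made-up prompt data (neither is Apple's actual parameter or dataset):

```python
from collections import Counter

def frequent_patterns(reports: list[str], min_users: int = 100) -> list[str]:
    """Keep only patterns seen in at least `min_users` reports; rare
    (potentially identifying) requests are discarded entirely."""
    counts = Counter(reports)
    return sorted(p for p, n in counts.items() if n >= min_users)

reports = (["dragon with sunglasses"] * 250
           + ["cowboy frog"] * 180
           + ["my unique secret emoji idea"] * 3)
print(frequent_patterns(reports))  # ['cowboy frog', 'dragon with sunglasses']
```

A prompt only three people ever typed never clears the threshold, so it simply does not exist as far as the training pipeline is concerned.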
This process not only shows how seriously Apple takes its privacy promise, but also reveals a level of technical sophistication that could set the course for developing artificial intelligence ethically.
What's more, this method won't stop there. Apple has already confirmed that these techniques will extend to more Apple Intelligence tools, such as Image Playground, Memories creation, Writing Tools, and Visual Intelligence.
Of course, for these processes to work, the user needs to have Device Analytics sharing enabled. If the idea doesn't convince you, you can disable this option in the privacy settings. But if you decide to participate, you will be helping to train a more useful and powerful AI... without giving up your privacy.