The AI features included in Apple Intelligence will mostly run locally on your iPhone. Some of them, however, will need external servers hosted by Apple, which the company calls "Private Cloud Compute." Craig Federighi, Apple's senior vice president of Software Engineering, discussed details of these next-generation servers in an interesting interview with Wired, revealing information that was not previously known.
Behind the scenes of Private Cloud Compute servers
Starting this year, iPhone users in the United States will be able to take advantage of Apple Intelligence, a wave of new AI features coming to Macs, iPhones, and iPads. For some features, Apple will have no choice but to use its own servers, but it intends to do so in a privacy-friendly manner. In the recently published interview, Craig Federighi shared some information about the processes being implemented internally.
First of all, he explains that PCC servers are intended to be very basic: they are not the latest-generation machines, but servers similar to those used for other Apple services. The difference is that they will all be equipped with M2 Ultra and M4 chips, information first revealed in May 2024 by journalist Mark Gurman.
Another interesting detail is Apple's decision not to save any data from your interactions with Apple Intelligence's AI features. Apple's goal is for you to use its servers remotely while they learn nothing about you: they cannot identify you, nor retain any record of what you have done from your Apple device.
PCC servers are as stripped-down as they come. For example, they include no persistent storage, meaning there is no drive that can retain processed data long-term. They integrate Apple's dedicated hardware key manager, the Secure Enclave, and randomize each file system's encryption key at every boot. Once a PCC server is rebooted, no data is retained and, as an added precaution, the entire system volume becomes cryptographically irrecoverable. At that point, all the server can do is start over with a new encryption key.
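The effect of a per-boot, RAM-only encryption key can be sketched in Python. This is a toy model, not Apple's implementation: the class name, the XOR "cipher," and the `reboot` method are all illustrative stand-ins for the Secure Enclave and real cryptography.

```python
import os

class EphemeralVolume:
    """Toy model of a data volume whose encryption key lives only in RAM.

    Illustrative only -- PCC uses the Secure Enclave and real ciphers,
    not this XOR toy.
    """

    def __init__(self):
        # A fresh random key is generated at every "boot" and is never
        # written to disk, so it cannot survive a reboot.
        self._key = os.urandom(32)
        self._storage = {}  # encrypted blobs, standing in for the disk

    def _xor(self, data: bytes) -> bytes:
        key = self._key
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def write(self, name: str, plaintext: bytes) -> None:
        self._storage[name] = self._xor(plaintext)

    def read(self, name: str) -> bytes:
        return self._xor(self._storage[name])

    def reboot(self) -> None:
        # The old key is discarded and replaced. Everything previously
        # written is now cryptographically irrecoverable.
        self._key = os.urandom(32)


vol = EphemeralVolume()
vol.write("request", b"user prompt")
assert vol.read("request") == b"user prompt"  # readable while running
vol.reboot()
assert vol.read("request") != b"user prompt"  # old data is gone
```

The point of the sketch is the `reboot` method: the ciphertext is still sitting in `_storage`, but without the discarded key it is just noise.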
But who proves that Apple is telling the truth?
With companies like Meta still doing "big things" with our personal data without saying so openly (or only admitting it belatedly), it is hard to take Craig Federighi's assurances on faith.
According to Apple's software chief, any user, whether in the United States or in France, can easily check that the system works as the company describes in the interview.
Apple makes every production PCC server build publicly available for inspection, so that people not affiliated with the company can verify that PCC does (and does not do) what Apple claims, and that everything is implemented correctly.
All PCC server images are recorded in a cryptographic attestation log, essentially an indelible record of signed claims, and each entry includes a URL for downloading that particular build. PCC is designed so that Apple cannot put a server into production without registering it.
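An "indelible record of signed claims" can be approximated with a hash-chained, append-only log. The sketch below is an assumption-laden toy: it uses a shared HMAC key where the real system would use asymmetric signatures and hardware attestation, and every name in it is invented for illustration.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-signing-key"  # stand-in for a real signing key

class AttestationLog:
    """Toy append-only log: each entry is signed and chained to the
    previous entry's digest, so past records cannot be altered unnoticed."""

    def __init__(self):
        self.entries = []

    def register_build(self, image_hash: str, download_url: str) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = {"image": image_hash, "url": download_url, "prev": prev}
        payload = json.dumps(body, sort_keys=True).encode()
        self.entries.append({
            "body": body,
            "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
            "digest": hashlib.sha256(payload).hexdigest(),
        })

    def verify(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            if entry["body"]["prev"] != prev:
                return False  # chain broken
            payload = json.dumps(entry["body"], sort_keys=True).encode()
            expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(entry["signature"], expected):
                return False  # entry tampered with
            prev = hashlib.sha256(payload).hexdigest()
        return True


log = AttestationLog()
log.register_build("sha256-of-image-1", "https://example.com/pcc-1")
log.register_build("sha256-of-image-2", "https://example.com/pcc-2")
assert log.verify()
log.entries[0]["body"]["image"] = "tampered"
assert not log.verify()  # any edit to history is detected
```

Because each entry commits to the digest of the one before it, rewriting an old record invalidates every later link, which is what makes the log "indelible" in practice.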
Beyond transparency, the system also serves as a crucial enforcement mechanism that prevents bad actors from setting up malicious PCC nodes and hijacking traffic: if a server build has not been registered, iPhones will not send it Apple Intelligence queries or data.
Apple sees artificial intelligence as a wonderful innovation it cannot afford to miss, but the company also understands users' concerns about their privacy. Craig Federighi explains that the whole effort is an initiative on Apple's part to "create a model of trust."