On its machine learning blog, Apple has announced strong support for the StableDiffusion project. This includes updates in the just-released macOS 13.1 beta 4 and iOS 16.2 beta 4 that improve the performance of these models on Apple Silicon chips.
Apple leads by example with Core ML
Apple hasn’t stopped at updating its operating systems: the company has also published a detailed document and code examples (see GitHub below) showing how to convert the source StableDiffusion models into a native Core ML format.
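As a rough sketch of what that conversion step looks like in practice, the snippet below assembles an invocation of the `torch2coreml` script from Apple's ml-stable-diffusion repository. The module path and flag names are assumptions based on that repository's conventions and should be checked against its README before use.

```python
import subprocess


def build_conversion_command(output_dir: str) -> list[str]:
    """Assemble a torch2coreml invocation.

    Module and flag names are assumed from Apple's ml-stable-diffusion
    repository; verify them against the repo's README.
    """
    return [
        "python", "-m", "python_coreml_stable_diffusion.torch2coreml",
        "--convert-unet",          # the diffusion U-Net
        "--convert-text-encoder",  # text encoder that embeds the prompt
        "--convert-vae-decoder",   # decodes latents into the final image
        "-o", output_dir,          # where the converted models are written
    ]


# To actually run the conversion (requires torch, coremltools and the
# repo's package installed), something like:
# subprocess.run(build_conversion_command("./coreml-models"), check=True)
```

The conversion only needs to be done once per model; the resulting Core ML packages can then be bundled into a macOS or iOS app.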
This announcement is the biggest official endorsement Apple has given to the recent emergence of machine learning-based image generators.
As a reminder, machine learning-based image generation rose to prominence thanks to the surprising results of the DALL-E model. These AI image generators accept a text string as a prompt and attempt to create an image of what you requested. A variant called StableDiffusion launched in August 2022 and quickly gained community interest. Recall that Meta launched a similar concept for videos with Make-A-Video.
With new hardware and software optimizations, StableDiffusion’s Core ML models take full advantage of the Neural Engine and GPU architectures found in Apple’s M-series chips. This leads to impressively fast generation. According to Apple, a base MacBook Air M2 can generate an image using a 50-iteration StableDiffusion model in less than 18 seconds (down from 23 seconds). Even an iPad Pro M1 can perform the same task in under 30 seconds (down from 38 seconds), according to Apple’s GitHub figures.
Apple takes the opportunity to encourage developers to integrate StableDiffusion 2 into their applications so that it runs on the client device, rather than depending on online cloud services. Running on the device is “free” and preserves privacy.
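To illustrate the fully local workflow Apple is advocating, the sketch below assembles a text-to-image invocation of the `pipeline` module that accompanies the conversion script in Apple's ml-stable-diffusion repository. Again, the module path and flag names are assumptions drawn from that repository and should be verified against its README; no network service is involved, since the converted models and the generation both live on the device.

```python
def build_generation_command(prompt: str, model_dir: str,
                             out_dir: str, seed: int = 42) -> list[str]:
    """Assemble a local text-to-image invocation.

    Module and flag names are assumed from Apple's ml-stable-diffusion
    repository; check them against the repo before running. Everything
    here executes on the local machine, not in the cloud.
    """
    return [
        "python", "-m", "python_coreml_stable_diffusion.pipeline",
        "--prompt", prompt,       # the text description to render
        "-i", model_dir,          # directory of converted Core ML models
        "-o", out_dir,            # where the generated image is saved
        "--seed", str(seed),      # fixed seed for reproducible output
    ]


# Example (requires the repo's package and converted models):
# subprocess.run(
#     build_generation_command("an astronaut riding a horse",
#                              "./coreml-models", "./output"),
#     check=True,
# )
```

Because the prompt never leaves the machine, this pattern gives exactly the privacy benefit the article mentions: no per-image service fees and no user data sent to a remote API.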