Google unveiled its Pixel 9 lineup last week, and the new devices (the Pixel 9, 9 Pro, 9 Pro XL, and 9 Pro Fold) lean heavily on artificial intelligence. In addition to Gemini, Google added AI-powered image generation and editing tools. Early testers have largely praised the new phones’ capabilities, but some have gotten disturbing results. It’s easy to see why Apple is treading carefully with Image Playground, a tool announced at WWDC 2024 but still nowhere to be found in current betas.
Pixel Studio under fire
Pixel Studio, Google’s equivalent of Apple’s Image Playground project, lets anyone generate quality images from a text prompt, much like OpenAI’s DALL-E.
But some testers, like our friends at Digital Trends, have obtained shocking images. While most users will ask for innocent things, Pixel Studio is happy to produce unexpected crossovers, like SpongeBob among Nazis or a cartoon hero drinking alcohol.
How is this possible? The Digital Trends writer recounts: as the title of their article suggests, they asked Pixel Studio to generate an image of SpongeBob dressed as a Nazi. When prompted with “SpongeBob dressed as a WWII German soldier with a swastika on his uniform,” Pixel Studio promptly produced an image depicting just that.
And that’s just one example. The writer also got Pixel Studio to generate images of Elmo pointing a shotgun at Big Bird, Yoda doing cocaine, Mr. Krabs holding an assault rifle, and more. To be clear, Pixel Studio has not generated any inappropriate images out of the blue: ask for a picture of a cute dog and you’ll get a cute dog, not a dog brandishing a gun. Pixel Studio only generates images of guns, drugs, and the like when specifically requested.
Google said it had “security controls” in place to prevent Pixel Studio from being “used maliciously,” and some do exist: Pixel Studio won’t create images of humans, for example.
Google also appears to be working behind the scenes to address the feedback it received: soon after publishing, Digital Trends was no longer able to get Pixel Studio to create cartoon characters doing cocaine or dressed as Third Reich soldiers.
Reimagine is worse
“Reimagine,” Google’s new feature that lets you add objects to photos already taken on the new Pixel 9, is even more worrisome than Pixel Studio. The Verge used it to add dead bodies, bombs, and other disasters to photos, and the added objects look so realistic that it’s hard to tell the image was edited. Google matches the lighting and perspective of the original photo, and there’s no watermark or social media warning. Google does add a metadata tag, but it’s easy enough to remove, by taking a screenshot, for example.
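That metadata tag is fragile by design: any operation that rewrites the pixels without copying the metadata along, a screenshot included, silently drops it. A minimal sketch in pure Python illustrates the idea, assuming for illustration that the label lives in an ordinary ancillary chunk of a PNG file (the `tEXt` entry below is hypothetical, not Google’s actual tag):

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def iter_chunks(png: bytes):
    """Yield (type, data) for every chunk in a PNG byte string."""
    assert png[:8] == PNG_SIG
    pos = 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        yield png[pos + 4:pos + 8], png[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def strip_metadata(png: bytes) -> bytes:
    """Keep only the chunks needed to render; drop metadata like tEXt."""
    keep = (b"IHDR", b"PLTE", b"IDAT", b"IEND")
    return PNG_SIG + b"".join(
        chunk(t, d) for t, d in iter_chunks(png) if t in keep)

# Build a 1x1 grayscale PNG carrying a hypothetical provenance label.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)   # 1x1, 8-bit gray
idat = zlib.compress(b"\x00\x00")                      # filter byte + pixel
tagged = (PNG_SIG + chunk(b"IHDR", ihdr)
          + chunk(b"tEXt", b"Source\x00AI-generated")  # hypothetical tag
          + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

clean = strip_metadata(tagged)
print([t for t, _ in iter_chunks(clean)])  # [b'IHDR', b'IDAT', b'IEND']
```

The stripped file still renders identically because only the critical chunks matter to a decoder; the provenance label is simply gone. This fragility is why in-pixel watermarking schemes are generally considered more robust than metadata labels.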
It’s never been easier to quickly circulate misleading photos. The tools to manipulate your photos in convincing ways now live inside the very device you use to capture and post them for the world to see. The Verge uploaded one of its “Reimagined” images to an Instagram story as a test (and quickly removed it); Meta didn’t automatically label it as AI-generated, and no one would likely have noticed anything if they had seen it.
You can, of course, use Reimagine to add sunsets and rainbows to your photos, just as you can use Pixel Studio to create fun images, and both AI features work very well. The tools themselves aren’t inherently bad.
Of course, all this is still possible with Photoshop or Pixelmator Pro, but it takes more time (and some skill). With a phone, it’s done in seconds. Scary.
Hopefully Apple will figure out how to make its own technology useful to users, and avoid this kind of bad publicity when Image Playground finally arrives.