Adobe yesterday released a new Photoshop feature in beta called “Generative Fill,” and it is absolutely astonishing. It is part of Firefly, Adobe’s AI image-generation engine, now integrated directly into the interface of its popular graphic design app – which makes AI-powered image editing available to virtually anyone.
For now it is only available as a beta within the download channels of Adobe’s application suite, Creative Cloud, but what it can do is amazing.
What to do if Generative Fill doesn’t work for you
Note that if you have not set your date of birth on Behance, you will not be able to use this new feature.
When I first tried the Photoshop beta, the new floating bar where you enter the prompts for the images you want to generate did not appear. If this happens to you, try updating the beta again: sometimes the correct version is not downloaded the first time.
If you download it and the “Generative Fill” button is not active, there are two possible reasons. The less common one is that you lack internet access (a requirement for this feature, since nothing is processed locally). The more common one is that you need to have your date of birth in your Adobe profile. The tricky part is finding where to add it, since it is not obvious.
I contacted Adobe support about this problem, and the solution is somewhat hidden: go to the “Links to resources” section within Creative Cloud and then click on Behance. This opens a browser where you can enter your date of birth, which will be saved to your account – and the option will be activated.
The first experiments are promising

All tests were done on a MacBook Pro with M2 Max, although the machine hardly matters: processing time is really determined by the upload and download speed of your internet connection, since the part of the image to be processed is sent and received online. It usually takes a matter of seconds (about 10–12) on a 300 Mbps connection like mine, depending on the area to be treated and the complexity of the prompt.

Don’t fall in love, but this relative’s puppy is called Romeo
We start the tests with a photo of a relative’s puppy that I took with my iPhone, in vertical format. First, I widen the canvas to give myself some space. My idea is to turn this photo, intended for an Instagram story, into a horizontal one. I select the enlarged white areas I want to fill, but I do not tell it what to put in them: the AI will extend the boundaries of the image automatically.
As you can see, the result is spectacular. It always offers three versions, which we can pick from using the selector on the right. Now I’m going to add some color: grass in one corner, to see if it covers the area well and matches the lighting and the outline of the tiles. At the prompt, I simply type “Grass” (for now, the beta only understands prompts in English):

As you can see, the composition is perfect, and in the Layers panel each AI-generated element gets its own layer, so we can hide or delete it at any time. Now let’s add some dog toys.
From the three options, I choose the least “weird” one, and as you can see it is perfect in size and perspective, at the top right. I don’t touch anything manually: everything you see is generated instantly by the AI. Let’s move on to a tricky area: the sunbeams on the right. And let’s ask for something more complex: dog food on a plate, in the sunlit zone:

The dog toy at the top left is also AI-generated.
Notice how well the plate and the food are lit: they match the lateral sunbeams, and the perspective is perfect. Mind you, this is amazing: without being told anything about the image, the AI interprets the perspective, positions the element correctly, and creates it taking into account physical parameters such as light, inclination, and the size of the other objects in the scene. Finally, something even harder: add a blanket between the puppy and the food plate, with the sunbeams falling on it:

The result, as you can see, is absolutely incredible. It took me about two minutes on the clock to generate that scene from a completely ordinary photo from my phone – with photorealistic results. Let’s see another example: a plate of food, taken in Portrait mode with the iPhone, so it has out-of-focus areas, along with light, perspective…

We are going to ask it to add a piece of bread, without telling it the size, the style, or the placement. I select the side of the plate and simply type “bread” at the prompt, nothing more:

The result, in terms of the quality of the added element, the lighting, its integration with the rest of the photo… is resounding. I should mention that Photoshop also presented me with two other options with different breads: one whole-grain and another closer to a bread roll. That happens because I didn’t specify what kind of bread I wanted.

Now let’s go to a tricky area, on the left: an intersection of light and shadow, perspective, lens blur… We’re going to ask it to add a generic remote control:

As you can see, it nails it. And without regenerating even once: this is the version the AI offered me by default, although there were two more variants. With this technology we can also remove things from the image: without much fine-tuning, I simply select the fork and knife and type “delete fork and knife” at the prompt:

The ice cream was also generated by AI.
A second later, here is the result. We can now add elements on top, keep deleting, change the color of the tablecloth, or expand the boundaries of the image, as we saw with the puppy test.
A technology that will be mainstream

Original photo.
What Photoshop offers with this new Generative Fill integration is a set of basic image-manipulation tools that anyone can use: no model to train, no need for a powerful Mac. You just tell it what you want, and it does it. Always within limits: in my tests, the generation of living beings leaves much to be desired, and some more complex generations need extra work at the prompt level, or later retouching within the program.
As a last example, let’s again extend the boundaries of a vertical image: simply enlarging the canvas and letting Photoshop run wild turns it not only into a believable image but one consistent with the scene – in seconds.

Photo with extended boundaries, automatically generated by Photoshop’s AI in 12 seconds.
While we wait for the final version of Photoshop with the integrated Firefly capabilities, we can already say that AI image editing is no longer reserved for a few: one of the most famous editing programs in the world now integrates it, and it is going to start reaching everyone. Let’s see what Adobe offers in the future and what alternatives its competitors – without a doubt – will propose in the coming months. If I hadn’t tried it myself, I wouldn’t believe it.
In Applesfera | Microsoft already has its revolution, now it’s Apple’s turn: I want an AI like Copilot on the Mac, and I think it’s close