
Adobe is bringing its updated Firefly 3 model into Photoshop with a variety of new features that cater to professionals and amateurs alike.

[GIF: Adobe]

By Jesus Diaz | 7 minute read

At last, after two years of lagging behind other generative AI platforms, Adobe has caught up to the competition with its new Firefly 3. Not only does the new model’s photorealistic output appear radically improved but, in combination with the latest public Photoshop beta available today, it provides new and useful tools that could improve the life of pixel pushers everywhere, from amateurs to professionals.

The Photoshop beta—available in its final form sometime later this year—introduces features that deeply integrate with the new Firefly 3 generative AI engine, while maintaining the AI-enabled user experience introduced in Photoshop’s previous version. The experience centers on the contextual task bar, the box of buttons that pops up whenever you make a selection and offers a text field to enter an AI prompt. Adobe has made this task bar even more central to the Photoshop experience by presenting users with a prompt to generate a full image whenever they open a blank canvas.

This UX change seems to go against Adobe’s previous stance on generative AI. Instead of framing AI as an add-on utility that makes mundane tasks easier, Firefly is now a core feature right from the start. The new prompt box aims to alleviate the fear of a blank canvas. It’s a smart new direction that will appeal in particular to amateurs, or the would-be Canva users who want to graduate from the online platform into something bigger and more powerful.

[Photo: Adobe]

Now we’re getting somewhere

Photoshop’s new AI features will make everyone happy, even the hardest of Photoshop’s hardcore users. I’ve been critical of Adobe’s generative AI efforts, but from what I’ve seen in the latest demo (and that’s a caveat, because I still have to try these features myself), it feels like the company is actually fast-forwarding its classic program into the generative AI future.

Firefly 3’s ability to more accurately interpret your prompts underpins five new AI-powered features that turn Photoshop into a truly powerful tool for image editors. Perhaps the most important is its Generative Fill with Reference Image tool. Imagine this as a way to “deep fake” anything in your image using an image as a real-time guide. It’s like training the AI on the spot to generate what you want, explains Alexandru Costin, vice president of generative AI at Adobe. “When you use a particular mask to retouch a photo or inject an object into a photo, you can point to a reference image and we’ll do our best to understand what you’re trying to blend in and do your job for you of repositioning, relighting, deeply integrating your object into a scene.” 

To understand this better, John Metzger—who works on Adobe’s product management team for Photoshop—showed me how you could easily swap an acoustic guitar held by a wild bear with any electric guitar image you can upload right in the AI prompt window. Using the reference images to inform the fill process, this allows you to integrate that electric guitar with ease, matching the perspective, the lighting of the scene, even reflections . . . an impressive demo indeed.

Before, this would have been extremely hard to do, even as a Photoshop expert. It would have required patiently erasing the acoustic guitar, pasting a new electric guitar, masking it, and working for hours using regular tools like the airbrush, distortion, and multiple compositing layers to integrate the new guitar into the scene.

On the other hand, if you are a generative AI wiz, you could have trained your own model or LoRA (think of it as an AI mini-model specialized to create certain things or styles) to “in-paint” the original acoustic guitar with an electric guitar, but that would have required hours of work, expertise, and enough computing power. Now, with the new Photoshop beta and Firefly 3, a simple selection and the upload of an image seems to get you exactly what you want.

Swapping an acoustic guitar for an electric guitar with this precision may seem banal, but if it works as demonstrated it will save countless hours—heck, even days—for anyone doing serious Photoshop work. And because it’s so easy, it puts the same capability in the hands of any user.

[Photo: Adobe]

Photoshop’s other new powers

The new Photoshop includes other tools that allow you to fill a blank canvas in a snap. With Generate Image in Photoshop, users can type in words and add a layer to an existing document. “We’re giving them text-to-image,” Costin says.

The new Generative Background automatically generates backgrounds that complement the main elements of an image, enhancing overall image harmony. Costin says a tool like this will make it easier to blend the background with a product shot in a realistic way.

With Generate Similar, users can right-click an image and see variations of it. And with the Enhanced Detail feature, users can fine-tune images to improve sharpness and clarity, helping them get the pixel-perfect output expected from a professional camera or illustrator.

Firefly 3 makes all the difference

All these generative AI integrations would have been useless if powered by the old Firefly, which was so bad—on the level of Midjourney version one—that it felt unusable for any real work. While Firefly 2 was lacking compared to the current state-of-the-art models in Stable Diffusion and Midjourney, Firefly 3 really excels, matching the competition (again, my only measuring stick is the demo here).


According to Costin, Firefly 3 has been engineered to overcome these deficiencies, focusing on higher image quality and more sophisticated understanding of user input. “We’ve significantly improved how the model handles fine details and textures, which are crucial for photorealistic outputs,” Costin says. These advancements are apparent in how the new model processes images, with a noticeable improvement in how details are rendered, particularly in complex compositions where previous models struggled.

Take the example of the bear with the electric guitar. With the previous model, an AI-generated guitar might look deformed and almost caricature-like, with a mess of strings that barely resemble the neck of a guitar; the new Firefly 3 guitar shows a perfect fretboard with six parallel strings.

Costin tells me that the new model is designed not only to produce more photorealistic images but also to better interpret the creative intent behind user inputs. “Our goal was to make an AI that not only ‘sees’ but ‘understands’ artistic directives, allowing it to generate content that closely aligns with the user’s vision,” he says. Adobe has achieved this by enhancing the model’s language-processing capabilities, ensuring that user inputs translate better into desired outcomes.

[Photo: Adobe]

Walking a very fine (and inevitable) line

The new Photoshop abilities look useful and will no doubt boost productivity, but they put the company in a weird position. Adobe is the standard toolbox for creative professionals—the very people who might feel like this advance in features could risk their livelihood.

Photoshop—and the rest of the Adobe Cloud suite—is clearly accelerating toward a future in which less and less manual work, ability, and talent will be required to do the baseline work of creatives. And the AI features will inevitably keep coming. The competition has no qualms about coming up with features that change the way people create, transforming everyone with an ounce of creativity into creators (or, better said, curators) without spending hours doing the actual work.

Then again, professionals won’t be able to deny that Photoshop’s new abilities will save them an incredible amount of time. Their internal struggle and any ethical controversy, I believe, will slowly fade away as we all transition to a new way of working. The proof is in the pudding. Adobe claims that its subscribers are using Firefly so much it has powered a “30% year-over-year increase in gross new Photoshop subscriptions.” That was with the crappy Firefly. With the new one, it seems impossible to me that any professional will ignore the ability to swap an acoustic guitar for an electric one in just two clicks.

[Photo: Adobe]

Training a better Firefly model

Earlier this month, news broke that Adobe trained Firefly, in part, with Midjourney images—a move that has clearly benefited the company’s hungry new model, which is much bigger than the one before (Adobe didn’t provide the exact number of images used, though). Costin acknowledges the move, but says the Midjourney images that Adobe used were vetted just like the rest of Adobe Stock’s images. “We allow the submission of generative AI images to Adobe Stock from 2022,” he says.

Costin says Adobe Stock checks more than 400 rules when it accepts assets, to make sure they respect the company’s guarantee of commercial safety. This includes ensuring the images don’t contain trademarks or copyrighted IP and don’t duplicate something that already exists.

And yet, regular Photoshop users—those who use generative AI to expand an image in the current Photoshop—don’t seem to mind that, so long as the end result is improved.

Time will tell how Firefly 3 performs in the field. But it’s clear that this is Adobe’s first truly serious attempt at launching a generative AI model that can compete with other big names. Combined with a smart UX that millions of creatives already understand, Adobe might actually have an edge yet.



ABOUT THE AUTHOR

Jesus Diaz is a screenwriter and producer whose latest work includes the mini-documentary series Control Z: The Future to Undo, the futurist daily Novaceno, and the book The Secrets of Lego House.