
Adobe’s message from the MAX 2025 keynote in Los Angeles was loud and clear: AI is no longer just a generative tool; it’s your new creative assistant. The company unveiled a sweeping set of updates across its Creative Cloud suite, centred on 'agentic AI' you can actually talk to (or at least text), a major new Firefly model, and deep integrations with third-party AI – including technology from Google and Topaz Labs.

Another step towards text prompt computing?

Consider the humble computer mouse. Invented by Douglas Engelbart in 1963, it took over 20 years to hit the mainstream in the early 1980s. It was an interface revolution that changed how we interact with computers, making them accessible to the masses – even if it still required learning some complicated software.

Fast forward to 2025. AI may have gone mainstream in 2023, but its adoption has been blindingly fast compared to the two-decade crawl of the mouse. The mouse isn't dead, of course, but now we can literally tell our software what we want it to do.

Text prompting has been integrated at so many levels – and it will only get better over the next few years. We can applaud this direction, as it cuts down our post-production time.

Forest Chaput de Saintonge hosts a Lightroom class at Adobe MAX 2025. Photo: Tim Levy

It's not hard to imagine a near-future scenario where we plug our CFexpress card into the computer and simply tell 'it' to: open Lightroom, import, rename to XYZ, cull to the top 10 images, colour balance, add XYZ filter and export to 1920px JPGs on the longest edge.

But for photographers now, the Adobe MAX keynote's 'wow' headline was the introduction of AI-assisted culling to Lightroom and Lightroom Classic. This has been on photographers' wishlists for a while, and it’s great to see it finally go live.

Literally a Lightroom wishlist

Speaking of wishlists, one of the most interesting things about the Adobe Lightroom stand in the 'Creative Park' (the massive expo hall where attendees connect with teams and sponsors) is its physical 'wishlist' board, which attendees can add to. Interestingly, the Lightroom team actually implemented several of last year's top wishes, and it will be fascinating to see what users have requested by day two of this year's event.

Terry White demonstrated Assisted Culling at Adobe MAX 2025. Photo: Tim Levy

AI-assisted culling

AI 'assisted culling' works like this: Lightroom analyses your shoot – say, 3000 images – and then, using simple checkboxes and a slider, lets you boil them down by automatically rejecting misfires, blinking shots, and unfocussed or badly exposed images. But what if all your images are technically good? That’s where the 'stacking' function comes into play. The AI identifies similar images – like a burst from a sporting play or a series of portraits in the same spot – chooses the 'best' one, and stacks the rest underneath it. This could be a godsend for sports or wedding photographers who shoot tonnes of frames and need to deliver hero shots in a rush. It's also just a much nicer way to get an overview of a shoot without staring at a daunting grid of 3000 thumbnails!
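
Adobe hasn't said how Assisted Culling scores frames under the hood, but the first-pass rejection step is conceptually similar to classic image-quality heuristics. Here is a minimal Python sketch of that idea using OpenCV – the thresholds and folder name are placeholder assumptions, not Adobe's values:

    import cv2
    import numpy as np
    from pathlib import Path

    BLUR_THRESHOLD = 100.0  # assumed starting point; tune per camera and lens

    def sharpness(gray: np.ndarray) -> float:
        # Variance of the Laplacian: low values suggest a soft or blurred frame.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def exposure_ok(gray: np.ndarray, clip_limit: float = 0.25) -> bool:
        # Reject frames where over 25% of pixels are crushed blacks or blown highlights.
        hist = np.bincount(gray.ravel(), minlength=256) / gray.size
        return hist[:13].sum() < clip_limit and hist[243:].sum() < clip_limit

    keep, reject = [], []
    for path in sorted(Path("shoot").glob("*.jpg")):  # hypothetical shoot folder
        gray = cv2.cvtColor(cv2.imread(str(path)), cv2.COLOR_BGR2GRAY)
        if sharpness(gray) < BLUR_THRESHOLD or not exposure_ok(gray):
            reject.append(path)  # candidate misfire
        else:
            keep.append(path)    # survives the first pass

    print(f"{len(keep)} kept, {len(reject)} auto-rejected")

The stacking step would then presumably group the survivors by visual similarity (perceptual hashes or embeddings, say) and pick a 'best' frame per group.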

No need for reshoots? Rotate / recreate your image!

Want that shot facing forward? No need for a reshoot! Photo: Tim Levy

Another 'wow' moment initially appeared during the Adobe Illustrator segment – a feature stemming from the 'Project Turntable' sneak peek we saw previously. It lets you rotate a 2D vector image by having AI work out what the object looks like from different angles – effectively turning it into what looks like a 3D object. This technology is now being applied to photography.

Just turn the sitter's face front-on using prompts. Photo: Tim Levy

With other AI models now integrated into the Adobe suite, photographer Terry White demonstrated a portrait where the sitter was facing slightly away from the camera. After sending the image to Photoshop, he selected his AI model (in this case, Google's Gemini 2.5) and prompted it to 'turn him forward and maintain his disposition'. The sitter was altered to face forward while looking exactly the same. This raises the question: is this still a real photographic portrait?

Colour Variance slider

We also saw the addition of a 'Colour Variance' slider. If you have a portrait with uneven skin tone – like razor rash (red blotches), uneven white balance, or mottled light – this slider can quickly adjust for more uniform colour across the subject's skin.
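
Adobe hasn't detailed the maths behind the slider, but one plausible approach is to smooth only the chroma channels while leaving luminance – and therefore detail – untouched. A hedged Python/OpenCV sketch, where the blur sigma and blend logic are illustrative assumptions:

    import cv2
    import numpy as np

    def colour_variance(img_bgr: np.ndarray, amount: float = 0.5) -> np.ndarray:
        # amount: 0.0 = no change, 1.0 = maximum smoothing of colour variation.
        lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        L, a, b = cv2.split(lab)
        # Blur only the chroma channels; luminance detail is preserved.
        a_smooth = cv2.GaussianBlur(a, (0, 0), sigmaX=15)
        b_smooth = cv2.GaussianBlur(b, (0, 0), sigmaX=15)
        a = a * (1 - amount) + a_smooth * amount
        b = b * (1 - amount) + b_smooth * amount
        lab = np.clip(cv2.merge([L, a, b]), 0, 255).astype(np.uint8)
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

In practice a tool like this would be applied inside a skin mask rather than across the whole frame.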

4K Slideshows

On a personal note, my big complaint / suggestion last year at Adobe MAX was that slideshows were still only 1080p after more than 10 years. Now you can create incredible slideshow videos to watch on a huge 4K TV. If you haven't tried this – you should give it a go. 

Other notable updates include Leica tethering, auto dust spot removal (great if you're still shooting on a DSLR with a dirty sensor), more filters for Library and Smart Collections, and a number of performance enhancements.

There are more than 250 sessions / classes at Adobe MAX 2025. Photo: Tim Levy

The other big Adobe MAX announcements 

The New AI Assistants

The standout feature of the keynote was the introduction of new AI Assistants in beta for Photoshop and Adobe Express. This moves Adobe's AI from simple text-prompt generation into a conversational, 'agentic' experience.

Instead of hunting through menus, users can now instruct the assistant in plain language to perform complex and repetitive tasks. During the demo, a user simply typed, 'Rename all my layers based on their content', and the AI organised the file.

Other commands, like 'make the background look like a sunset and harmonise the lighting', were executed in seconds. This new assistant can also provide personalised recommendations and tutorials to help complete a project.
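
What makes this 'agentic' rather than a plain text prompt is that the model chooses and executes a sequence of editor operations on your behalf. Adobe hasn't published the assistant's internals, but the pattern resembles this toy Python sketch, where the tool registry and keyword matching are hypothetical stand-ins for a real LLM tool-calling loop:

    from typing import Callable

    # Hypothetical registry of editor operations the assistant can invoke.
    TOOLS: dict[str, Callable[[dict], None]] = {}

    def tool(name: str):
        def register(fn):
            TOOLS[name] = fn
            return fn
        return register

    @tool("rename_layers")
    def rename_layers(doc: dict) -> None:
        # Stand-in: name each layer after its (pretend) detected content.
        for i, layer in enumerate(doc["layers"]):
            layer["name"] = layer.get("content", f"Layer {i}")

    def run_assistant(request: str, doc: dict) -> None:
        # A real assistant would let an LLM pick the tool; we keyword-match instead.
        if "rename" in request.lower():
            TOOLS["rename_layers"](doc)

    doc = {"layers": [{"content": "sky"}, {"content": "portrait"}, {}]}
    run_assistant("Rename all my layers based on their content", doc)
    print([layer["name"] for layer in doc["layers"]])  # ['sky', 'portrait', 'Layer 2']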

Firefly Evolves: Image 5 and Multimedia

Adobe's core generative AI, Firefly, received its most significant update yet, expanding beyond still images into a full multimedia creation studio.

Firefly Image Model 5: Now in public beta, this new model is a major leap in quality. It produces more photorealistic images at a native 4-megapixel resolution, offering far greater detail. It also powers a new 'Prompt to Edit' feature, allowing users to make precise edits to existing images using text.

Generate Soundtrack: This new tool (in public beta) creates custom, studio-quality, royalty-free music. Users can tailor the soundtrack to match the mood and, crucially, the precise length of their video clips.

Generate Speech: In partnership with ElevenLabs, this feature (public beta) is a powerful text-to-speech generator for creating realistic, multilingual voiceovers directly within Firefly.

Firefly Video Editor: A new web-based, timeline video editor (in private beta) was announced, allowing creators to generate, organise, and sequence video clips in one place.

Photoshop 

Generative Upscale with Topaz: In a major third-party integration, Photoshop's Generative Upscale feature is now powered by technology from Topaz Labs. This allows users to upscale low-resolution images to 4K with realistic detail, a huge benefit for restoring old photos or working with small source files.

Harmonise is Here: The popular Harmonise feature, which automatically matches the colour, lighting, and shadows of a composited object to its new background, is now out of beta and in the full version of Photoshop.

Partner Models in Generative Fill: Users are no longer limited to Firefly. The Generative Fill tool now allows users to choose from a dropdown of different AI models, including Google's Gemini 2.5 Flash (also referred to as Nano Banana) and Black Forest Labs' FLUX.1.

Premiere Pro

AI Object Mask: This new (public beta) feature is a massive time-saver for video editors. It uses AI to automatically identify and isolate people or objects in a video, creating a trackable mask. This eliminates hours of manual rotoscoping needed for colour grading, blurring, or applying effects to specific parts of a shot.

Premiere on iPhone & YouTube Shorts: Adobe launched a new, free, and watermark-free video editing app, Premiere on iPhone. Alongside this, a new partnership with YouTube creates a 'Create for YouTube Shorts' space, allowing mobile creators to edit with pro-level tools and publish directly to the Shorts platform.

Adobe MAX 2025. Photo: Tim Levy

An Open Ecosystem and a Look at the Future

Underpinning all these announcements was Adobe's new, more open strategy. By integrating partner models from Google, Topaz Labs, ElevenLabs, OpenAI, and Runway, Adobe is positioning Creative Cloud as a central hub for all the best AI tools, not just its own.

The company also 'snuck' in previews of future tech, including Project Moonlight, an AI assistant designed to work across all Adobe apps and learn from a user's assets, and Project Graph, a node-based tool for building and automating complex creative workflows.

More to come


Stay tuned as we report back tomorrow from Adobe MAX – we're especially looking forward to 'Sneaks', where we get a sneak peek at what Adobe is working on behind the scenes.