An interview with Tom Hogarty, Senior Director of Product Management, Adobe

Adobe’s annual creativity conference, Adobe MAX, has just wrapped up in Los Angeles, California, with the tech giant unveiling yet another round of updates to its well-established Creative Cloud products as well as a couple of newcomers.

Tom Hogarty, Senior Director of Product Management for all things photography at Adobe.

Most pertinent to photographers were some handy new improvements to Photoshop as well as the highly anticipated unveiling of Photoshop for iPad. But perhaps the most dazzling aspect of this year’s MAX was the clear trajectory Adobe has set with “Sensei” - its collection of AI algorithms that are slowly starting to permeate many corners of the Creative Cloud suite.

If you’ve pressed the “Auto” button in Adobe Lightroom CC anytime in the past six months, you might be surprised to learn that you have already started using AI in your post-processing workflow. But the technology is certainly not limited to making sure your white balance is on point. As became evident at MAX, Adobe is already exploring how machine learning and AI will drastically reshape photography in the years ahead.

We sat down with Tom Hogarty, Senior Director of Product Management for “all things photography” at Adobe, to find out where we are with AI in photography and where we’re going.

Touching on an obvious theme of the conference, the intersection of creativity and machine learning is a vast, unexplored realm within almost all creative disciplines. But as Hogarty explains, Adobe is taking a relatively practical approach to harnessing this still-unwieldy power.

We’ve seen some innovative updates to Lightroom recently that incorporate AI. Can you tell us about how those came about? 

It’s interesting that you should preface this with how we are delivering innovation. Instead of waiting for one big release, we are dropping updates every two or three months.

We did the “auto” update back in the middle of last year. We had profile and preset syncing in the spring and early summer, so we have been trying to speed up the delivery of value to customers. Those are both examples of “Sensei”-based learning.

An example of Adobe's intelligent machine learning—the people view in Lightroom Classic CC. The feature scans your image catalog to find potential faces for review and confirmation.

The “auto” feature uses machine learning to improve image quality, and the People view is really about helping people organize through machine learning and Sensei. So those are two concrete paths going forward: helping people improve image quality and helping them organize more effortlessly. That’s the macro trend - the AI and the ML that we put under the umbrella of Sensei - and it is really just getting started. We’re really excited about that.

What’s the difference between AI and “machine learning” in this context? 

I tend to use the term machine learning a little more often because it refers to taking a data set and training an algorithm. We do this a lot with images - we did it with the People view when we trained it on a library of tens of millions of images. AI for me is more of an umbrella term. We tend to stick with the “Sensei” branding because we don’t want customers to have to worry about the nuances of what these things are - we just want them to know that the results are derived from it.

Can you touch on the duality of where machine learning crosses paths with creativity? How are machines being trained in this context of creativity? 

I definitely agree with most of the comments made this week that humans cannot be replaced in the creative process. This goes right down to everyone’s different perspective and different artistic intent. What Sensei can hopefully do is be a good assistant, getting you to a better starting point from which to exercise your creativity. And that starting point could be based on what the top photographers in the world expect a good starting point to look like, or on what we have been able to glean from your own style and habits over time.

Long-awaited, Adobe's latest update has finally brought Photoshop CC to mobile devices like the iPad.

We’re definitely at the point where we’re helping people with a better starting point, but I would also really like to get to the point where we are continually learning and adapting. I think of an actual photography assistant who knows the style of the photographer they’re working with - who knows where they are going to want to set up the lights, what kind of lens they are probably going to want to use, what the colour temperature should be, the time of day. It’s an assistant that helps you get better by removing the tedious stuff that you don’t want to do every time.

Your colleague, Josh Haftel, recently said that he’s looking forward to the day where we don’t have to worry about depth of field and other considerations when shooting. Isn’t this a big part of being a good photographer?

That was the holy grail of the Lytro mission initially, and of the Light L16 - to capture the moment but then give the photographer the ability to adjust those decisions after the fact, because those are the two things that are currently baked into a still image at the time of capture. There is a strong desire not to have to bake them in, and to have flexibility later. You know you’ve got the moment, and having control over how you express that moment gives you more ability to express yourself. A lot of people are taking a swing at that one and it’s an area we track closely.

The releases of Photoshop for iPad and Adobe Premiere Rush seem to be further democratizing photography and video by making these products accessible to everyone. Is this a conscious effort at Adobe?

Yes. Photography used to be expensive because of the cost of film, and then DSLRs made that go away. Now, if you have an eye and you want to start playing around, you probably already have a camera in your pocket, so you can start to test and iterate in photography at effectively zero cost.

No, it's not a film about bike couriers in New York. Premiere Rush is Adobe's new 'user-friendly' program for video creators. A modern all-in-one video editing solution, it allows for quick edits of video and easy publishing on platforms like YouTube and other social networks.

Film (i.e. video) is always a couple of steps behind because the equipment is more expensive, but again, I think the democratization is present in people's ability to start playing with video on their phones. The people I see running around with iPhones on gimbals, getting quick little shots, are now going to be able to edit with Premiere Rush on the phone. I was cutting together a few sequences earlier today and having a blast. So I think it’s great for everyone, because you’re going to have more people who are able to engage.

And the timing is perfect, because yes, we want more people to have access to things like Photoshop, and this has been a huge shift since we migrated to a subscription model - instead of having to shell out $700 to get your first copy of Photoshop, it’s now $10 a month. So I imagine there are a huge number of parents whose kids want to get into this, but you know, the $700 was not going to happen!

Looking at the future of AI, is there already much of an idea of how this will be implemented at other stages of the photographic process - in camera, for example?

It’s early days, but we have explored some concepts around what happens when you have the camera open: what can we do to help guide you to a better capture? Can we do scene analysis? For example, based on the rule of thirds, you should probably do this; or, based on a specific setting of the camera, if you are able to change shutter speed or aperture. So yes, we’ve definitely talked about it extending all the way to the capture experience. It’s pretty exciting.

That sounds great! Surely that would be of huge benefit to a lot of iPhone users out there - to help guide them.

Yeah, or even to challenge them! Take the example of kids. For the longest time, I was shooting them from my height, and the camera should have been telling me, ‘Get down on the ground!’ Things like that.

With the trajectory that hardware is on and the continued emphasis on ever-increasing resolution, how does software have to both respond to and pre-empt these sorts of trends? 

Well, we’re right in the middle of that equation. You’ve got hardware capabilities increasing on one side and resolution increasing on the other, and we are the software right in the middle that has to balance and optimize both.

The thing I love about the imaging team at Adobe is that we have guys like Thomas Knoll, who created the first version of Photoshop and made the imaging experience sane on computers that were a millionth as powerful as my phone. So we are used to getting the most out of the latest hardware so that you can do more and have a better experience.

The trajectory I see is that desktop gains will probably be moderate over time, while gains on mobile are coming at a quicker pace.

How will that difference be reflected in the updates to desktop and mobile versions of Adobe software in the future? 

Well, in terms of Lightroom, we have it on many different surfaces - iOS, Android, the web, etc. - and a lot of this has to do with what is available on those surfaces. But a lot of it also has to do with the maturity of the code.

So if I look at Lightroom, that code base is over thirteen years old, so the pace at which you can adapt it to new hardware is different from that of Lightroom CC, which is one year old. There are a lot of variables that go into it: code maturity, the velocity of the hardware, the devices themselves and, as you mentioned, resolution.

The new Content-Aware Fill in Photoshop CC.

It certainly seems as though, in the future, almost every visual creator could work at a professional level using just an iPhone. Is Adobe striving for this? And what will the implications be?

Yeah, it is certainly possible! And photography is already seeing it. We’ve had Lightroom on iOS and Android for a few years, but it is really only in the last 18 months that I have seen professionals getting rid of the desktop altogether, traveling the world with only an iPad and cranking out professional images.

Photography is always at the bleeding edge, and photojournalists are at the bleeding edge of that. They had to do stills, and then they were asked to do video as each of those technological advances came along, so they will probably also be the first to drop the traditional desktop as a platform and just use a phone and/or a tablet. So I always keep my eye on photojournalists, because they are always at the tip of these advancements.

HDR panoramas can now be merged inside the latest version of Lightroom Classic CC.

So, does that emphasis on mobile in the future mean an emphasis on the adoption of cloud storage?

I do think so. Given the limited storage on mobile devices and the expectation of having access to your creative files wherever you are, that is the inevitable trend, and Dropbox and similar platforms have helped solidify it. It also increases the importance of bandwidth management, as well as privacy and security.

Sam Edmonds travelled to AdobeMAX courtesy of Adobe.