Sneaks at Adobe MAX has historically been the most interesting stream of the conference because it showcases what the company is working on behind the scenes. Some of these technology previews eventually make it into a real Adobe product in one way or another, but some also feel like flights of fantasy from the mind of a developer trying to create the next big piece of software magic.
Now, bear in mind that the term “Photoshop magic” has long been used to describe any digital image manipulation that looked like it took a professional graphic designer hours to achieve…usually in Adobe Photoshop, because that’s what all serious graphic designers used.
Today, however, our smartphones can perform Photoshop magic with a tap of a finger, in real time, and even on videos. I’m talking about beautification filters, virtual backgrounds, augmented reality animations, transformations, lighting adjustments…and the list goes on. These aren’t even confined to dedicated photo editing apps; every communication and social media app has a host of such 3D/AR/filter tools at its disposal. So it’s a real feat that Adobe continuously manages to blow my mind at Adobe MAX, and this year it’s not just Sneaks that piqued my interest; it’s just about everything.
And the one thing that really struck me was how hard Adobe has leaned on AI improvements since it announced the Sensei AI framework back in 2016. I bring this up because this year’s Adobe MAX shows how Sensei integration across the Adobe universe is the perfect example of maturing AI technologies and of how AI fits into our everyday lives.
Fundamentally, for me, it answers the question of whether AI is going to make all our jobs obsolete. And the answer that I’m sticking with—at least for now—is no.
The sky’s the limit
Let’s start with good old Photoshop magic. Two new Sensei-powered features coming to Photoshop are Neural Filters and Sky Replacement. Now, both of these tools feel like something most phone apps can do, but taken to a whole new level. Neural Filters, for example, doesn’t just take a photo and apply skin smoothing or beauty touch-ups; it can scan a 2D image, identify objects, and do really crazy things like turning your head…in a still photo!
Sky Replacement also feels pedestrian at first. It’s just a masking and overlay filter to replace the sky in a photo? How mundane, or so I thought. Of course, I expected improvements to things like content-aware masking and edge refinement with every new generation, but Sky Replacement doesn’t just replace the sky more accurately; it can even take your new sky and apply the correct colourisation to the rest of your picture. So if you’re replacing a morning sky with an evening sky, for example, your entire picture changes accordingly so it actually looks like it was shot in the evening.