Adobe, one of the world’s largest and most powerful software companies, is trying something new: It’s applying machine learning and image recognition to graphic and web design. In an unnamed project, the company has created tools that automate designers’ tasks, like cropping photos and designing web pages. Should designers be worried?
The new project, which uses Adobe’s AI and machine learning platform Sensei and integrates into the Adobe Experience Manager CMS, will debut at the company’s Sneaks competition later in March. While Adobe hasn’t committed to integrating it into any of its products, it’s one of the most ambitious attempts yet to marry machine learning and graphic design. There have been earlier efforts to use AI in the design world, such as Wix’s Advanced Design Intelligence and automated projects like Mark Maker, but Adobe’s is notable because of the company’s sheer reach among designers. Although it’s just a prototype, it’s one to watch closely.
The as-yet-unnamed product is designed, first and foremost, to help large-enterprise customers customize websites for their users. When I viewed a demo, for instance, machine learning and AI techniques were applied to editing the Food Network’s web pages.
Instead of a designer deciding on layout, colors, photos, and photo sizes, the software platform automatically analyzes all the input and recommends design elements to the user. Using image recognition techniques, basic photo editing like cropping is automated, and an AI makes design recommendations for the pages. Using photos already in the client’s database (and the metadata attached to those photos), the AI–which, again, is layered into Adobe’s CMS–makes recommendations on elements to include and customizations for the designer to make.
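Adobe hasn’t published how any of this works under the hood, but the two steps described above (image-recognition-driven cropping and metadata-driven asset recommendation) can be sketched in miniature. Everything below is an illustrative assumption: the function names, the saliency-map representation, and the tag-overlap ranking are stand-ins, not Adobe’s implementation.

```python
# Hypothetical sketch of the pipeline described above. Adobe has not
# detailed its approach, so the names and logic here are illustrative.

def auto_crop(saliency, target_w, target_h):
    """Pick the target_w x target_h window with the highest total
    saliency score -- a toy stand-in for recognition-driven cropping."""
    rows, cols = len(saliency), len(saliency[0])
    best, best_origin = -1, (0, 0)
    for top in range(rows - target_h + 1):
        for left in range(cols - target_w + 1):
            score = sum(saliency[top + r][left + c]
                        for r in range(target_h)
                        for c in range(target_w))
            if score > best:
                best, best_origin = score, (top, left)
    return best_origin  # (row, col) of the recommended crop window

def recommend_assets(photos, page_tags):
    """Rank the client's photos by overlap between their metadata tags
    and the page's tags -- the metadata-driven recommendation step."""
    ranked = sorted(photos,
                    key=lambda p: len(set(p["tags"]) & set(page_tags)),
                    reverse=True)
    return [p["id"] for p in ranked]

# Toy saliency map: the "subject" sits in the lower-right quadrant.
saliency = [[0, 0, 0, 0],
            [0, 0, 1, 2],
            [0, 1, 3, 4],
            [0, 2, 4, 5]]
print(auto_crop(saliency, 2, 2))  # -> (2, 2)

photos = [{"id": "soup.jpg",  "tags": ["soup", "winter"]},
          {"id": "salad.jpg", "tags": ["salad", "summer", "light"]}]
print(recommend_assets(photos, ["summer", "light", "salad"]))
# -> ['salad.jpg', 'soup.jpg']
```

The point of the sketch is the division of labor: image analysis handles the pixel-level edit, while plain metadata matching drives which assets get surfaced, and a human designer can override either result.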
According to Cedric Huesler, a product management director for Adobe Marketing Cloud who worked on the project, the idea is to offer what he calls “human-augmented” design: the AI makes recommendations, which designers can manually override.
“The problem, obviously, is personalization at scale,” Huesler tells Co.Design. “We can repeat the same process just by providing different inputs”–once implemented, the machine learning tool is designed to let large-enterprise users quickly generate customized content. In the case of large-enterprise customers like the Food Network, Adobe says, partial automation lets them create customized web and mobile experiences for customers more quickly and more affordably than they could otherwise.
The AI is meant to make design easier for large projects. It includes both image recognition components that automatically crop or otherwise edit photos, and more conventional components that rely on text metadata for design decisions.
Huesler points out that, in the Food Network example, content could be instantly customized for users. For example, users whose activity indicates they are lactose intolerant or gluten intolerant will see different recipes and images highlighted. The machine learning product won’t actually handle the heavy lifting of reimagining an interface and making complicated UI or UX decisions. But it can, for instance, help quickly determine what photos and text content are ported onto pages designed for very small user segments.
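The segment-level personalization Huesler describes amounts to running the same selection process with different inputs per audience. Here is a minimal sketch of that idea, assuming a made-up data model (the recipe list, the `contains` sets, and the function name are all hypothetical, not anything Adobe has described):

```python
# Illustrative only: the article describes surfacing different recipes
# per user segment; this data model is an assumption, not Adobe's API.

RECIPES = [
    {"title": "Mac and Cheese", "contains": {"dairy", "gluten"}},
    {"title": "Grilled Salmon", "contains": set()},
    {"title": "Rice Bowl",      "contains": set()},
    {"title": "Wheat Pancakes", "contains": {"gluten"}},
    {"title": "Yogurt Parfait", "contains": {"dairy"}},
]

def highlight_for_segment(recipes, avoid):
    """Choose which recipes to feature for a user segment, dropping any
    whose ingredients intersect what the segment avoids."""
    return [r["title"] for r in recipes if not (r["contains"] & avoid)]

# The same process, different inputs: each segment gets its own feed.
print(highlight_for_segment(RECIPES, {"dairy"}))
# -> ['Grilled Salmon', 'Rice Bowl', 'Wheat Pancakes']
print(highlight_for_segment(RECIPES, {"gluten"}))
# -> ['Grilled Salmon', 'Rice Bowl', 'Yogurt Parfait']
```

Note that nothing here redesigns the interface; the layout stays fixed and only the content ported into it changes, which matches the article’s point about the tool skipping complicated UI and UX decisions.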
“Every brand wants to do personalization,” says Steve Hammond, senior director of product for Adobe Marketing Cloud. “They want to make content relevant to individuals and audiences, but as you expand the audiences you’re creating content for, you face two bottlenecks: How do you create variations of content, and how do you create imagery for them?”
These bottlenecks, he says, are something that machine learning can help tackle.
The project picks up an old theme in the world of machine learning: in web and graphic design, as elsewhere, it automates tedious, repetitive tasks. The vast majority of graphic designers don’t have to worry about algorithms stealing their jobs anytime soon, because while machine learning is great at digesting large data sets and making recommendations, it’s awful at judging subjective qualities such as taste. Still, it’s worth noting that “tedious and boring” describes much of the entry-level work done in the design world. It’s a safe bet that in the coming years, machine learning tools will take over many of the tasks entry-level workers do now.
Approximately 60% of the projects exhibited at Sneaks make their way into Adobe products as functionality improvements or new features. Whether or not Huesler’s feature, or something like it, is eventually integrated into Experience Manager or another Adobe product, one thing is clear: the company is investing heavily in AI and machine learning.
The functionality will debut at the Sneaks competition as part of a wider display of experimental projects Adobe’s employees have been working on. Sneaks is a major event inside Adobe: the company brings in celebrity co-hosts each year (past hosts have included Carrie Brownstein and Thomas Middleditch; this year’s is Kate McKinnon).
Huesler’s proof-of-concept uses Adobe’s Sensei platform, which offers machine learning and AI frameworks for the company’s products. Sensei is already slowly making its way into Adobe’s popular Creative Cloud platform and other products; in Creative Cloud, for instance, it assists with image recognition and with editing facial expressions.
Adobe has poured a considerable amount of resources into Sensei, which was publicly unveiled in late 2016.
In our conversation, Hammond was excited about the potential Sensei holds for solving vexing design problems. He noted that one of the big challenges facing Adobe’s corporate clients is offering customers a uniform experience across platforms: desktop websites, mobile websites, smart-home devices, advertising, call centers, and touchscreen kiosks. Automating the minutiae of design, he added, makes that easier.
But at what cost? For now, artificial intelligence isn’t stealing any designer’s job–existing efforts are good at cropping photos and making minor visual modifications, and that’s it. But Adobe’s project is one of the first in a very new field. Expect more in the future.