Adobe unveils ‘Sensei’ to improve digital experiences

Adobe Sensei, unveiled at the Adobe Max 2016 creativity conference, includes a unified AI/machine learning framework.

By: IANS | San Diego | Published: November 3, 2016 2:52 pm

Leveraging its machine learning, artificial intelligence (AI) and deep learning capabilities, software major Adobe Systems has unveiled Adobe Sensei — a new framework and set of intelligent services that improve the design and delivery of digital experiences — at its ongoing annual creativity conference here. Adobe “Sensei”, which means “master” or “teacher” in Japanese, tackles complex experience challenges, including image matching across millions of images, understanding the meaning and sentiment of documents and finely targeting important audience segments. “Adobe Sensei is uniquely focused on solving today’s complex experience challenges in the design, document and marketing fields, where only Adobe has decades of expertise and market leadership,” Shantanu Narayen, President and CEO of Adobe, said on Wednesday.

“Leveraging our machine learning and AI capabilities, as well as trillions of content and data assets, Adobe Sensei will be one of our biggest strategic investments. We’re excited to open it up to our broader ecosystem of partners, ISVs and developers to enable even more innovation,” he added.

Adobe Sensei, unveiled at the Adobe Max 2016 creativity conference, includes a unified AI/machine learning framework that powers Creative Cloud, Adobe Document Cloud and Adobe Marketing Cloud, automates mundane tasks, drives predictive and personalisation capabilities and boosts productivity.

For example, when a user searches for images, Adobe Sensei uses deep learning to search and tag images automatically and makes recommendations. It can find faces in an image and uses “landmarks” such as eyebrows, lips and eyes to understand their position and change the facial expression without ruining the image.
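The landmark-based editing described above can be pictured as moving a few key points and leaving the rest of the face alone. The sketch below is purely illustrative — the landmark names, data layout and `adjust_expression` function are hypothetical and are not Adobe's implementation:

```python
# Illustrative sketch (hypothetical names, not Adobe's code): facial
# landmarks are (x, y) points; "changing the expression" reduces to
# nudging the relevant points, e.g. raising the mouth corners.

def adjust_expression(landmarks, smile_strength=3):
    """Return a copy of the landmark dict with mouth corners raised.

    landmarks: dict mapping feature names to (x, y) pixel tuples.
    In image coordinates, a smaller y means higher in the frame.
    """
    adjusted = dict(landmarks)
    for key in ("left_mouth_corner", "right_mouth_corner"):
        if key in adjusted:
            x, y = adjusted[key]
            adjusted[key] = (x, y - smile_strength)  # move corner up
    return adjusted

face = {
    "left_eye": (35, 40),
    "right_eye": (65, 40),
    "left_mouth_corner": (40, 80),
    "right_mouth_corner": (60, 80),
}
print(adjust_expression(face)["left_mouth_corner"])  # (40, 77)
```

A real system would detect the landmarks with a trained model and warp the surrounding pixels smoothly; the point here is only that the edit is driven by a handful of anchor points.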

It also does “Semantic Segmentation” and shows each image region labeled with its type — whether it is a building or the sky, for example.

This labeling allows easy selection and manipulation of objects using simple commands.
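Once every pixel carries a region label, "select the sky" becomes a simple lookup over the label map. The sketch below is a toy illustration under that assumption — the label table and `select_region` helper are hypothetical, not Adobe's API:

```python
# Illustrative sketch (hypothetical names): a semantic-segmentation
# model outputs a per-pixel label map; selecting a region by name is
# then just building a boolean mask over that map.

LABELS = {0: "building", 1: "sky", 2: "road"}

def select_region(label_map, region_name):
    """Return a boolean mask marking every pixel labeled region_name."""
    target = {name: code for code, name in LABELS.items()}[region_name]
    return [[pixel == target for pixel in row] for row in label_map]

label_map = [
    [1, 1, 1, 1],   # sky along the top
    [1, 0, 0, 1],   # building in the middle
    [2, 2, 2, 2],   # road along the bottom
]
sky_mask = select_region(label_map, "sky")
```

The mask can then drive any edit — recoloring, replacement, deletion — without the user tracing the region by hand.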

Apart from giving marketers and analysts new visibility into which segments are most important to their businesses, it also algorithmically determines the impact of different marketing touch points on consumers’ decisions to engage with a brand.
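One simple way to picture touchpoint-impact scoring is an attribution model that splits each conversion's credit across the channels a customer touched. The linear model below is a generic sketch with made-up data — the source does not say which model Adobe Sensei uses:

```python
# Illustrative sketch (generic linear attribution, hypothetical data):
# each converting customer journey contributes one unit of credit,
# divided evenly among the channels touched along the way.
from collections import defaultdict

def linear_attribution(journeys):
    """Return {channel: credit} from converting journeys.

    journeys: list of channel-name lists, each ending in a conversion.
    """
    credit = defaultdict(float)
    for path in journeys:
        share = 1.0 / len(path)  # split the conversion evenly
        for channel in path:
            credit[channel] += share
    return dict(credit)

journeys = [
    ["email", "search", "display"],
    ["search", "display"],
]
print(linear_attribution(journeys))
```

Comparing such per-channel credit totals is one way an analyst could see which segments and touchpoints matter most to the business.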

In addition to its availability in Adobe’s cloud offerings, Adobe Sensei will be available to partners and developers as APIs via Adobe.io — Adobe’s developer platform.