This is an archive article published on March 15, 2018

Google just open-sourced the AI-driven tech powering Portrait mode on Pixel 2

Google has made the Artificial Intelligence-driven technology behind the Portrait mode on the Pixel 2 available as an open-source tool.


The Google Pixel 2 and Pixel 2 XL might not have a dual rear camera, but both phones come with a Portrait mode on the front and rear cameras. In the Pixel 2’s case, the Portrait mode is driven by AI and software, and the camera has proved to be one of the best features of the smartphone series. Now, Google has made the Artificial Intelligence-driven technology that made this Portrait mode possible available as an open-source tool.

Google’s research team put out a blog post announcing the open-source release of its “semantic image segmentation model”, called “DeepLab-v3+” and implemented in TensorFlow. According to the blog post, “semantic image segmentation” stands for “assigning a semantic label, such as ‘road’, ‘sky’, ‘person’, ‘dog’, to every pixel in an image.” Google says this capability is helping power various new applications, including the synthetic shallow depth-of-field effect seen in the Portrait mode of the Pixel 2 and Pixel 2 XL smartphones.
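To make the idea of per-pixel labelling concrete, here is a minimal sketch of what a semantic label map is. The label set and the helper function below are hypothetical illustrations, not part of the actual DeepLab-v3+ release, which defines its own class lists per dataset.

```python
import numpy as np

# Hypothetical label set in the spirit of the blog post's examples
# ('road', 'sky', 'person', 'dog'); the real DeepLab-v3+ models use
# their own dataset-specific class definitions.
LABELS = {0: "background", 1: "person", 2: "dog", 3: "sky", 4: "road"}

def describe_segmentation(label_map):
    """Summarise which semantic classes appear in a per-pixel label map."""
    classes, counts = np.unique(label_map, return_counts=True)
    return {LABELS[c]: int(n) for c, n in zip(classes, counts)}

# A toy 4x4 "image" after segmentation: every pixel carries one label.
label_map = np.array([
    [3, 3, 3, 3],
    [3, 1, 1, 3],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

print(describe_segmentation(label_map))
# {'background': 6, 'person': 4, 'sky': 6}
```

A real segmentation model outputs such a label map at the image's full resolution, which is what makes per-object effects possible downstream.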

The post explains that when each pixel in the image is assigned one of these labels, it also helps in figuring out the outlines of the objects concerned, which is crucial in Portrait mode. In Portrait mode, the subject, be it a flower, a dog or a person, is in sharp focus while the rest of the background is blurred, creating a shallow depth of field. While most smartphone makers like Apple and Samsung rely on dual sensors to create this effect, Google relies on software in the Pixel 2 and Pixel 2 XL.
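The mechanism described above can be sketched in a few lines: given a subject mask derived from a segmentation, keep the masked pixels sharp and replace everything else with a blurred copy. This is an illustrative stand-in using a simple box filter on a grayscale image, not Google's actual Portrait mode pipeline.

```python
import numpy as np

def portrait_blur(image, mask, kernel=3):
    """Blur pixels outside the subject mask with a simple box filter.

    image  : 2D float array (grayscale, for simplicity)
    mask   : 2D boolean array, True where the subject is
    kernel : side length of the (hypothetical) box blur window
    """
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            # Average the kernel x kernel neighbourhood around each pixel.
            blurred[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    # Keep the subject sharp; swap in the blurred copy for the background.
    return np.where(mask, image, blurred)

image = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # the "subject" occupies the centre
result = portrait_blur(image, mask)
```

The real effect additionally varies blur strength with estimated depth, but the core idea, a per-pixel mask deciding what stays sharp, is the same.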

Google’s blog post says the DeepLab-v3+ open-source release also includes “models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results…” The post also points out that these image segmentation systems have improved drastically over the last couple of years with advances in methods, hardware and datasets. The post adds that by sharing this system, Google hopes academia and industry will put it to wider use and build new applications on top of it.
