The Apple iPhone 7 was officially launched last week, and sales begin this Friday, September 16, in the US and other select countries. The iPhone 7 brings modest improvements over the previous two generations of the device, with the dual camera array on the larger 5.5-inch Plus variant being one of the biggest talking points. But Apple hasn’t just gone with a dual-lens setup in the iPhone 7 Plus’ camera; it has also added a machine learning element to the image signal processor to help keep images sharp in any lighting conditions.
When it comes to machine learning and AI, Apple has never really said much on these subjects. In fact, a recent article by BackChannel gave one of the first detailed inside views of how Apple approaches AI and machine learning, which have come to occupy such an important space in today’s technology discourse. From Google to Facebook to Microsoft, no company wants to fall behind in this race. For the new iPhone 7 Plus, Apple says it has introduced machine learning to the Image Signal Processor (ISP), which can perform 100 billion operations in 25 milliseconds.
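To put Apple’s figure in perspective, 100 billion operations in 25 milliseconds works out to roughly 4 trillion operations per second; the arithmetic is straightforward:

```python
# Apple's quoted ISP figure: 100 billion operations in 25 milliseconds.
operations = 100e9   # 100 billion operations
window_s = 25e-3     # 25 milliseconds, expressed in seconds

throughput = operations / window_s
print(f"{throughput:.0e} operations per second")  # roughly 4 trillion ops/s
```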
In Apple’s case, machine learning will be used to determine the context of an image and help deliver a better picture on the iPhone 7 Plus. So unlike Facebook or Google, which are teaching their software to understand what an image depicts, Apple plans to use AI a bit differently. In the new iPhone 7 Plus, Apple is promising a Portrait mode with a shallow depth of field, a bokeh-style effect in which the user will be able to blur the background.
In Portrait mode, it is the machine learning that comes into play, as the iPhone 7 Plus’ camera tries to understand the content of the picture, differentiate between the background and the foreground, and then fix settings such as exposure, white balance, and focus automatically.
The challenge for the AI is to identify the subject of the photo a user is trying to take, and to work out where the subject ends and the background begins. Bokeh is an effect usually achieved by photographers with DSLRs, allowing them to give an artistic blur to the background of an image. It is the same effect Apple used on the invite for the September 7 launch event. It has so far been pretty much impossible to achieve with a smartphone camera, something Apple plans to change with the new iPhone 7 Plus.
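Apple has not published how its ISP performs this separation, but once the camera has a foreground mask, the final bokeh compositing step can be sketched in a few lines. The naive box blur and the hard binary mask below are illustrative assumptions, not Apple’s actual pipeline:

```python
import numpy as np

def box_blur(img, radius=3):
    """Naive box blur: average each pixel over a (2*radius+1)^2 window.
    A real camera pipeline would use a lens-shaped, depth-dependent kernel."""
    h, w = img.shape[:2]
    padded = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    n = (2 * radius + 1) ** 2
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / n

def portrait_composite(img, fg_mask, radius=3):
    """Keep masked foreground pixels sharp; fill the rest with a blurred copy.
    fg_mask: (h, w) array, 1.0 for subject pixels, 0.0 for background."""
    blurred = box_blur(img, radius)
    mask = fg_mask[..., None]  # broadcast the mask over the colour channels
    return mask * img + (1.0 - mask) * blurred
```

In the real pipeline, the mask would come from the segmentation and depth-estimation stage, and the blur strength would vary with estimated depth; the hard mask and uniform blur here stand in for both.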
In Apple’s case, the camera can differentiate between a human face and a wall, as Phil Schiller showed in the demo, keeping the subject in focus while the background is blurred for the portrait effect. On the larger iPhone 7 Plus, you can see the background blur in real time thanks to the dual rear camera array. The only catch is that Apple is not shipping the iPhone 7 Plus with Portrait mode on board; the feature will be rolled out as a free software update later in October. This is a first for Apple, which usually ships complete products. Still, for most iPhone 7 Plus users, Portrait mode will be a much-awaited update, and many will be keen to test what it can actually do.
Apple is widening its use of machine learning across iPhones and iPads with the latest iOS 10 update, which adds many more of these capabilities. For instance, Photos now has a Memories feature that creates albums from a vacation, trip, or occasion. Apple says Memories relies on machine learning, object recognition, and related techniques to put these albums together.
But Apple also insists it does not plan to use its new machine learning tools to collect data on users, and says it will protect user privacy across the board.