In a document released Friday, Apple details how it has used advances in machine learning to significantly improve person recognition in iOS 15, even in situations where a face is not clearly visible.
The company lists “enhanced recognition for individuals” as a new feature on the iOS 15 version of the Photos app, though the web page is sparse on details. However, a new blog post on Apple’s machine learning site reveals that the Photos app can identify people in a variety of scenarios, even if their faces aren’t clear to the camera.
One of the methods Apple uses to achieve this is matching the faces and upper bodies of specific people across images.
“The faces are usually occluded or simply not visible if the subject is not looking at the camera. To solve these cases, we also consider the upper bodies of the people in the image, since they usually show constant characteristics, such as clothing, within a specific context … These constant characteristics can provide strong clues to identify the person across images captured within minutes of each other,” writes Apple.
The company takes a full image as input and detects the faces and upper bodies it contains. It then matches each face to its corresponding upper body, improving recognition in situations where traditional facial recognition alone would be impossible.
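Apple has not published the code behind this feature, but the idea of falling back on upper-body appearance when a face is not recognizable can be sketched roughly as follows. Everything here is illustrative: the function name `identify_by_upper_body`, the detection fields, the cosine-similarity comparison, and the time window are all assumptions, not Apple's actual implementation.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def identify_by_upper_body(detections, sim_threshold=0.9, max_gap_s=600):
    """Assign a person label to each detection.

    Detections already identified by face keep their label. For the
    rest, compare the upper-body feature vector against detections
    that were identified by face and captured within max_gap_s
    seconds (clothing is only a reliable clue over short spans).
    """
    labeled = [d for d in detections if d["person"] is not None]
    results = []
    for d in detections:
        if d["person"] is not None:
            results.append(d["person"])
            continue
        best, best_sim = None, sim_threshold
        for ref in labeled:
            if abs(ref["timestamp"] - d["timestamp"]) > max_gap_s:
                continue  # too far apart in time; clothing may have changed
            sim = cosine_similarity(ref["body_feat"], d["body_feat"])
            if sim > best_sim:
                best, best_sim = ref["person"], sim
        results.append(best)
    return results

# Toy example: the second detection has no recognizable face, but its
# upper-body features closely match Alice's from two minutes earlier.
detections = [
    {"person": "Alice", "body_feat": [0.9, 0.1, 0.0], "timestamp": 0},
    {"person": None, "body_feat": [0.88, 0.12, 0.01], "timestamp": 120},
    {"person": None, "body_feat": [0.1, 0.9, 0.2], "timestamp": 60},
]
print(identify_by_upper_body(detections))  # ['Alice', 'Alice', None]
```

The third detection stays unlabeled because its upper-body features do not resemble any identified person closely enough, which mirrors the conservative behavior you would want from a photo library rather than guessing.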
As always, the mechanism uses machine learning on the device to ensure privacy. Apple has also taken steps to ensure that the process minimizes memory and power consumption.
“This latest advancement, available in Photos with iOS 15, significantly improves the recognition of people. As shown in Figure 8, by using private machine learning on the device, we can correctly recognize people with extreme poses, accessories, or even occluded faces, and use the face and upper body combination to match people whose faces are not visible at all,” writes Apple.
The blog post contains much more detail on how the machine learning model was trained, for anyone interested.