During yesterday afternoon's downtime of WhatsApp, Facebook and Instagram, many images on the latter two platforms were replaced by tags that reveal how artificial intelligence processes pictures and how machine vision technology works. Scrolling through the photos uploaded to the social networks, users saw not the actual images but summary descriptions such as "Image may contain: people smiling, people dancing, wedding, indoor or outdoor."
The malfunction thus revealed how artificial intelligence "interprets" photos and the real-life scenes they depict. The tags are not limited to describing the context in which people find themselves: they also convey information about the subjects involved, using facial recognition algorithms to identify individuals.
Facebook has used machine learning to "read" pictures since 2016. These tags are already used to describe photos and videos to visually impaired users. What is not clear is whether this data is also used by Facebook to target advertising. In theory, a wealth of valuable profiling information could be extracted from these pictures: artificial intelligence could infer whether people own pets, what their hobbies are, where they like to go on holiday, or which products they love to buy.
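To make the profiling concern concrete, here is a minimal sketch of how per-photo tags could, in principle, be aggregated into an advertising interest profile. The tag-to-interest mapping and the helper names are entirely hypothetical; nothing about Facebook's internal systems is public.

```python
from collections import Counter

# Hypothetical mapping from image tags to advertising interest
# categories; any real taxonomy used by a platform is not public.
TAG_TO_INTEREST = {
    "dog": "pet supplies",
    "cat": "pet supplies",
    "beach": "travel",
    "mountain": "travel",
    "guitar": "music",
    "bicycle": "sports",
}

def profile_from_tags(tag_lists):
    """Aggregate per-photo tags into a ranked interest profile."""
    counts = Counter()
    for tags in tag_lists:
        for tag in tags:
            interest = TAG_TO_INTEREST.get(tag)
            if interest:
                counts[interest] += 1
    return counts.most_common()

# Example: tags extracted from three of one user's photos.
photos = [["dog", "beach"], ["dog", "guitar"], ["beach"]]
print(profile_from_tags(photos))
# → [('pet supplies', 2), ('travel', 2), ('music', 1)]
```

Even this toy version shows why the tags are commercially interesting: a handful of photos already yields a ranked list of plausible consumer interests.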
Regardless of how this information is used, what emerged is an intriguing picture of how one of the world's biggest data collection operations works, because it demonstrates how far reality can be interpreted through computer vision. The deep learning advances of recent years have truly revolutionized the field, and the most modern algorithms can now transform visual content into text that is easy to archive and mine.
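The final step of such a pipeline can be sketched as follows: a vision model outputs per-concept confidence scores, and the concepts above a threshold are rendered as the kind of text description users saw during the outage. The scores below are invented for illustration; in a real system they would come from a deep neural network, and the exact format of Facebook's descriptions may differ.

```python
def describe(scores, threshold=0.5):
    """Turn {concept: confidence} scores into an alt-text style string.

    Concepts are listed in descending order of confidence; anything
    below the threshold is dropped.
    """
    kept = [concept
            for concept, score in sorted(scores.items(),
                                         key=lambda kv: -kv[1])
            if score >= threshold]
    if not kept:
        return "No description available"
    return "Image may contain: " + ", ".join(kept)

# Invented confidence scores standing in for a real model's output.
scores = {"people smiling": 0.92, "wedding": 0.81,
          "outdoor": 0.67, "dog": 0.12}
print(describe(scores))
# → Image may contain: people smiling, wedding, outdoor
```

Once images are reduced to strings like this, they can be indexed, searched, and cross-referenced like any other text, which is precisely what makes the data so easy to exploit at scale.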
In short, a tremendous mine of precious data. If Facebook did not exploit it commercially, from its point of view it would be like letting a treasure go to waste.