Every day, humanity uploads billions of photos to Facebook, and the social network automatically tags them, which helps visually impaired people appreciate the content generated by everyone on the network. Are you confused? The previous sentence is correct; it simply wasn't referring to tagging your friends. When you upload a photo to Facebook, the platform analyzes it and assigns it an "alt text" description, which people can then edit to make it more accurate. This is done precisely so that, when a user has reported having a partial or complete visual impairment, Facebook can verbally describe what the photos posted by their friends on the social network show[110].
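Facebook has never published the exact model behind this feature, but the underlying technique, automatically describing an image in words, is well known. A minimal sketch of the idea, using an open-source captioning model (BLIP, chosen here purely as an illustration and not Facebook's actual system), might look like this:

```python
# Minimal sketch of automatic alt-text generation with an open-source
# image-captioning model (BLIP). This is NOT Facebook's pipeline; it only
# illustrates the general technique of describing a photo in words.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def generate_alt_text(path: str) -> str:
    """Return a one-sentence description of the image stored at `path`."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)

# Hypothetical file name, for illustration only.
print(generate_alt_text("birthday_party.jpg"))
# e.g. "a group of people standing around a table with a cake"
```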
Just as we constantly feed this monster, Google also knows everything about us: from our culinary preferences and preferred political candidates to the people we surround ourselves with. It learns this not only from the photos we upload and the people we tag in them, but also by cross-referencing information such as the Wi-Fi networks we connect to, how long our devices remain near each other according to their coordinates, and whether we are already "virtual friends" with someone else on the networks. If we could access and read all this data, we could even make fairly accurate predictions about whether someone spent the night with another person they had just met through their group of friends.
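To give a sense of how mechanically simple part of that cross-referencing is, here is a toy sketch, with invented data formats and thresholds, of how one could flag two phones as having been in the same place at the same time:

```python
# Toy sketch: flag two devices as "together" when their reported coordinates
# fall within 50 metres of each other at roughly the same time.
# The trace format, distances and time windows are assumptions for illustration.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def were_together(trace_a, trace_b, max_m=50, max_s=300):
    """trace_* = list of (unix_timestamp, lat, lon) samples. Returns True if
    the devices were ever within `max_m` metres within `max_s` seconds."""
    return any(
        abs(ta - tb) <= max_s and haversine_m(la, loa, lb, lob) <= max_m
        for ta, la, loa in trace_a
        for tb, lb, lob in trace_b
    )
```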
The logic would be quite simple, so let's illustrate it with a heteronormative example for easy understanding. Suppose Alicia travels to Buenos Aires on a Friday to visit her friend Belén, who is celebrating her birthday the next day, and stays at her apartment. Belén is friends on social networks with both Alicia and Carlos. The algorithm knows well that Alicia does not spend much time with Belén these days because they live in different cities, but from their online activity it can infer that they have a close friendship, evident in the exchange of Likes, the mutual replies to each other's Stories, and their frequent chat conversations. Through other variables, such as frequent use of the same home Wi-Fi network and overlapping coordinates, the algorithm knows that Belén and Carlos are good friends.

On Saturday, Belén celebrates her birthday and invites all her friends to a particular nightclub, which is easy to detect from a cluster of coordinates; the algorithm already knows that today is Belén's birthday. At a certain point, Alicia's and Carlos' phones move away from the rest of the group, heading toward what the algorithm recognizes as Carlos' house, or toward a hotel. At the same time, it detects that Belén returned home. Or perhaps Belén decided to spend the night elsewhere, in which case her home would be empty and available for Alicia to sleep there peacefully, just as we had already detected she did the night before; still, the coordinates of our characters show that this was not the case. Can we guarantee that something happened between Alicia and Carlos? Not necessarily, but, assuming mutual consent, maybe the smartwatches monitoring their heart rates can help us solve this mystery. We could even check at what time Alicia's phone moved away from Carlos' house, that night or the next day, and then analyze their subsequent interactions on social networks. None of this requires anything more exotic than the signals described above; a rough sketch of the rule of thumb follows this paragraph.

Does this sound too invasive? Maybe it is. Maybe it is already happening. So, what is the limit? Does it exist? Some will say that privacy is the price to pay in exchange for the services provided by the websites and apps we use the most, which offer us personalized experiences. That is the current paradigm; it is real, but it does not have to be the standard for tomorrow.
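No company documents such a pipeline publicly, so take the following only as a sketch of the rule of thumb the story describes. Everything in it, the names, the labelled coordinates and the thresholds, is invented for illustration:

```python
# Rough sketch of the night-time inference from the story: given a phone's
# late-night coordinates, guess which labelled place its owner slept at.
# All names, coordinates and thresholds here are hypothetical.
from math import radians, sin, cos, asin, sqrt

KNOWN_PLACES = {                       # hypothetical labelled coordinates
    "Belen_apartment": (-34.6037, -58.3816),
    "Carlos_house":    (-34.5890, -58.4200),
    "nightclub":       (-34.6090, -58.3700),
}

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    (lat1, lon1), (lat2, lon2) = (tuple(map(radians, t)) for t in (p, q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def slept_at(night_trace, radius_m=150):
    """night_trace = [(hour, (lat, lon)), ...] sampled through the night.
    Returns the labelled place the phone stayed near after 2 a.m., or None."""
    late_positions = [pos for hour, pos in night_trace if hour >= 2]
    if not late_positions:
        return None
    for name, place in KNOWN_PLACES.items():
        if all(haversine_m(pos, place) <= radius_m for pos in late_positions):
            return name
    return None

# If Alicia's phone reports Carlos' house all night instead of Belén's
# (supposedly empty) apartment, the algorithm reaches exactly the
# conclusion the story hints at.
alicia_night = [(3, (-34.5891, -58.4199)), (7, (-34.5889, -58.4201))]
print(slept_at(alicia_night))   # -> "Carlos_house"
```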
Various studies have shown that data as simple as the Likes we leave on Facebook, or the time we spend looking at a photo on a social network, can be used to infer a person's sexual orientation, drug use, whether they are of African descent, and their political views[111]. This statistical approach is not 100% accurate, since a person can also change their views on certain topics throughout their life, but these predictions can be precise enough to give these companies the power to profile you in depth and then use that information for various purposes, from selling you a shirt to nudging you toward voting for a particular political party. Thus, in the near future it is likely that a beverage company will show you a commercial featuring people who look like you or appeal to you, and show me another ad with the same rhetoric but different people. Will they actually be real people, or mere representations of humans created digitally and artificially? Do you think you could spot a fake? Then I invite you to visit the website www.this-person-does-not-exist.com. Every time you open that page, or hit the reload button within it, you will get a new image of a person who does not really exist. Surprise!
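The study cited in [111] builds its predictions from a large user-by-Like matrix. As a toy illustration of the mechanism, and nothing more (the data below is randomly generated, not taken from any real platform), a simple logistic regression already "discovers" a hidden trait that leaks through a handful of telltale pages:

```python
# Toy sketch of the mechanism behind [111]: predicting a personal trait from
# which pages a user has Liked. The data is synthetic and the model tiny;
# real studies use millions of users and a sparse user-by-Like matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2_000, 300

likes = rng.integers(0, 2, size=(n_users, n_pages))       # 1 = user liked page
# Pretend the hidden trait leaks through a handful of "telltale" pages.
telltale = rng.choice(n_pages, size=10, replace=False)
trait = (likes[:, telltale].sum(axis=1) + rng.normal(0, 1, n_users) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on held-out users: {model.score(X_test, y_test):.2f}")
```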
[110] "Using Artificial Intelligence to Help Blind People 'See' Facebook." Facebook Newsroom, 2016. Viewed July 18, 2021, at https://about.fb.com/news/2016/04/using-artificial-intelligence-to-help-blind-people-see-facebook.
[111] Chen, Daizhuo; Fraiberger, Samuel P.; Moakler, Robert; and Provost, Foster. "Enhancing Transparency and Control When Drawing Data-Driven Inferences About Individuals." Big Data 5(3), 2017, 197-212. Viewed April 29, 2022, at https://www.liebertpub.com/doi/full/10.1089/big.2017.0074.