Have Amazon, Facebook and Google learned anything from cases of data misuse?
Chris Sutcliffe, 8th November 2018
After the Cambridge Analytica scandal earlier in the year, you’d expect platforms like Facebook to be rather more careful about how they exploit user data. Research has shown that consumer concern over the misuse of user data is one of the primary reasons people choose to use ad blockers, and since user data is effectively the commodity on which the platforms operate, any further scandals are likely to have a cooling effect on their business models. Earlier this month a study from Pivotal Research Group found that people were spending less time across Facebook’s platforms (though it couldn’t say whether that was specifically due to the association with Cambridge Analytica).
Unfortunately, Facebook has now been implicated in yet another potential misuse of user data: it has registered a patent that uses geolocation to provide friend recommendations to people who are physically close to you. Or, as Lisa Vaas put it for Naked Security, “Facebook wants to reveal your name to the weirdo standing next to you.”
Vaas’ article is well worth reading. It contains some examples of when Facebook’s constant drive to connect people based on user data has massively backfired, from recommending people to one another whose only connection was attending an anonymous meeting for parents of suicidal teens, to misuse by the police. Further examples aren’t hard to find, either: the Reply All podcast covers another couple of scary ones in a special episode.
Amazon, meanwhile, has filed a patent for targeting ads based on what its voice assistant hears:

“If the Amazon voice assistant determines that you have a sore throat, the system would ‘communicate with the audio content server(s)’ to select the appropriate ad. ‘For example, certain content, such as content related to cough drops or flu medicine, may be targeted towards users who have sore throats,’ the patent says.”
The patent also explicitly mentions (strap yourselves in): “embodiments of the disclosure may use physical and/or emotional characteristics of a user in combination with behavioral targeting criteria (e.g., browse history, number of clicks, purchase history, etc.) and/or contextual targeting criteria (e.g., keywords, page types, placement metadata, etc.) to determine and/or select content that may be relevant for presentation to a user”. To be clear, this means selling products based in part on emotional state. That is making some people extremely uneasy, since it sounds like the worst parts of Blade Runner made real – and Google is doing the same.
It doesn’t help either that Amazon has been especially opaque about what happens with its user data. Writing for Quartz, Youyou Zhou highlights yet another example of user data misuse, this time via Alexa, and also points out that users have essentially acquiesced to allowing the government to search their communications whenever it wants:
“The US government does not need a search warrant in most cases to get personal information that’s already shared voluntarily with somebody else, like a bank or internet provider or utility, according to reporting by the Marshall Project.”
Complicating matters is the fact that users are apparently happy to share some of that data in very similar circumstances. Nobody batted an eyelid when Netflix’s (genuinely smart) data-based commissioning system was revealed, and there is a dating app explicitly based on cross-referencing user data.
The difference seems to be around user consent and visibility. The horror stories that are so easily shared, and that are peppered throughout this section, are horror stories precisely because they involve a perceived betrayal of the user – even though we all explicitly grant the platforms the ability to do exactly that when we accept their EULAs.
And while there’s no absolute guarantee that the platforms will deploy the tech they’ve patented, accurate user data has been table stakes for the advertising platforms for some time. Amazon’s climb into the upper echelons of the ad giants seems unstoppable precisely because of its huge reams of user data on purchasing habits. However, as the issue becomes more high-profile, it remains to be seen whether that’s a gamble that will pay off, or whether such instances will force increasing amounts of regulation onto these notoriously opaque companies.
Effectively, then, the platforms are betting that providing ease of use around purchasing to the vast majority of us trumps those high-profile instances of data misuse. They’re making the mother of all omelettes – and they aren’t afraid of breaking one or two eggs.
Martin Tripp Associates is a London-based executive search consultancy. While we are best-known for our work across the media, information, technology, communications and entertainment sectors, we have also worked with some of the world’s biggest brands on challenging senior positions. Feel free to contact us to discuss any of the issues raised in this blog.