Personal data & the threat of social media
“The Great Hack” & the work of Carole Cadwalladr & Paul-Olivier Dehaye expose the profoundly troubling reality that democracy is under assault – throughout the Western world – via the instruments of social media managed by tech giants, especially Facebook. I watched the documentary at MyData 2019 a couple of weeks ago.
I’m sharing these thoughts now, all the more cogent in light of Mark Zuckerberg’s disingenuous defense of “free speech”, signalling Facebook’s continued failure to face this threat to democracy.
“Is personal data the real problem?”
The documentary emphasizes Facebook’s access to and use of personal data, but is that the real problem?
After the screening, Carole and Paul were joined by Jessikka Aro for a panel discussion. I asked the panel: what if the problem isn’t personal data? Google hoards personal data too, but is it the same threat to democracy?
I think the argument for “privacy” is a distraction. I don’t mean that personal data isn’t part of the problem, but rather: it is vitally important that we solve the correct problem. This is destroying our democracy – in some ways it already has.
I wrote my thoughts up shortly afterward, and I’m sharing them here, because I think it’s important we correctly understand the problem.
A bit of reflection.
My reaction also comes from my experience in tech, where I’ve witnessed the two-edged nature of dramatic advances – the moment when we realize we have created a monster.
I used to work in biotech, in a lead research lab. I was in a room when Kevin Esvelt, who develops DNA-editing technology, told us, “This is private, and I need your help to think about it.” He then showed us how his technology could eradicate malaria – and how it could also accidentally destroy an entire species.
This movie left me with the same, terrible sense of duality I had that day. A major innovation can be a two-edged sword that cuts in both directions – its results can be godlike and demonic. I think “social media” is one such invention.
What if…
Let’s consider a best case scenario: I’ll pose two potential “problems” and we’ll consider which one really needs solving.
Problem 1: “Facebook’s aggregation of personal data is profoundly invasive, outside our control, and this is the problem: it’s used against us by disinformation campaigns that destroy democracy.”
Problem 2: “Facebook’s role as social media, as communication infrastructure – with mechanisms for like/share, virality, and monopoly control of social relationships – this is the problem: we’re trapped in a system used by disinformation campaigns to destroy democracy.”
Now put on a black hat: you’re the disinformation campaigner. Imagine a world where the problem is solved. Can you still destroy democracy?
Scenario 1: Facebook respects personal data
Imagine a world that has solved “problem 1”: Facebook retains minimal personal data – and none that I get access to.
Good news for me! I’m the black hat and I don’t have to respect personal data. I don’t have to follow GDPR. I’m Russian, I’m Chinese – whoever I am, I’m not you, I don’t care about your laws. I am here to destroy your civil society.
More good news for me! I realize that the key information collected by Cambridge Analytica wasn’t even collected by Facebook – they just used the platform to get people to answer surveys.
I don’t need Facebook’s cooperation. I just need to collect data about people’s behavior – demographics are enough for me, honestly. Microtargeting is great but I don’t give a damn about the individuals themselves.
Maybe I’ll run A/B testing with meme deployments on demographically-matched geographic regions and track virality. Maybe I’ll lure people into offsite surveys with a game, sniff their IPs. Maybe I’ll plant evercookies anywhere I can. I’ll do it all at once.
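To make that black-hat workflow concrete, here’s a minimal sketch of the A/B-testing step. Everything here is hypothetical – the variant names, the share counts, and the idea of averaging shares across demographically matched regions are invented for illustration, not taken from any real campaign:

```python
# Hypothetical numbers: share counts observed after deploying each meme
# variant to demographically matched regions (region -> shares).
observations = {
    "meme_a": {"region_1": 120, "region_2": 95, "region_3": 143},
    "meme_b": {"region_1": 310, "region_2": 280, "region_3": 265},
}

def average_shares(variant_results):
    """Mean shares per region for one meme variant."""
    return sum(variant_results.values()) / len(variant_results)

def pick_winner(observations):
    """Return the variant with the highest average virality."""
    return max(observations, key=lambda v: average_shares(observations[v]))

print(pick_winner(observations))  # meme_b spreads faster, so it gets deployed widely
```

The point of the sketch is how little it requires: no personal data from Facebook at all, just aggregate spread metrics the attacker can observe from the outside.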
I don’t need any data from Facebook – hah! I just need Facebook to distribute my content. I have a small army of people who will create fake accounts, deepfaking as needed; I’ll use humans to act like bots – MTurk-style – because people are cheap and you can’t CAPTCHA your way out of this.
We spread disinformation, propaganda. We create memes. We push the boundaries. We post divisive sort-of-true content that you have trouble dismissing as fake news. You’re always catching up, five steps behind us while we destroy elections and sow hatred.
Scenario 2: Social media doesn’t exist
I’m not sure what to do now. I guess I could … email people? Can text messages go viral?
Oh, I’ve got an idea! “Forward this email to ten people or you’ll die in seven days.”
Honestly I think I’m kind of screwed. I may as well be dropping pamphlets from airplanes.
But if it’s not personal data…
What’s the problem?
As I suggested above, I think it’s “social media”. I think the experiences of the third panelist, Jessikka Aro, showed Twitter is also a threat. (Not Google, Apple, Amazon, Netflix, or Microsoft.)
…and what do we do?
First: it’s important just to get this far – to consider that the real problem might not be “privacy” (not in any way we can effectively legislate, since illegal entities will ignore the law) but something else.
Social media has weaponized communications. It’s destroying us, and I don’t know exactly how to fix that without dismantling social media itself. But I know where I’d start.
I’d push for studying the problem, proactively. I’d start collecting ideas. (Could platforms impose technical limits that slow how quickly content spreads, and would that help? Could we get better at detecting bad actors in the system with AI/ML?)
Michelle Meyer made an important point when she advocated for companies to do experiments, to do research. The deployment of these technologies without studying their effects on people is worse – reckless – it has the potential for much greater harm.
From here I don’t know the best tactic to make that happen. Can we expect Facebook to do this itself? Would the new Oversight Board be doing this – or will solutions be limited to “crisis response”? Do we hope to use law or public relations to push Facebook to work on it with people we trust (government? academics?)?
The internet is a commons.
My closing thought was to wonder: do we need to regulate social media as a public utility?
Is that crazy? This is how people communicate with each other – like the telephone or the radio, there’s a lot of precedent for society regulating such things. (If you think those are similar.)
There’s an increasingly weak claim that Facebook is vulnerable to competition – that it could fall, just as MySpace did. I think that gets more and more dated as the years pass. MySpace didn’t lock in. It didn’t reach critical mass. Facebook did.
There are alternative attacks – maybe this is a monopoly problem; maybe we can break it into pieces. But (a) that might not be possible (is social media inherently monolithic?) and (b) it might make things worse – the same bad actors would now propagate virality across a collection of connected platforms, with less “centrality” to use in fighting their behavior through governance and technical barriers.
It’s important to recognize that people can’t just leave. They won’t. You have to pay people $1000 to stop using Facebook for a year. That’s an order of magnitude more than people are worth to Facebook – Facebook is literally worth far more to us than we are to it.
We can’t keep relying on atomistic agency.
We can’t keep relying on atomistic agency. Because information is cheap, we can all have copies of it – and bandwidth is nearly limitless for text – it is tempting to consider the internet a public good that benefits from minimal regulation. That is decreasingly the case. Which information we’re able to consume is inherently limited by our own capacities. The internet isn’t infinite, because we are not infinite beings.
We can’t keep relying on atomistic agency! Because our personal data in Facebook is entangled with our interactions with others. We can ask for portability for ourselves, but we cannot export our friends and family. We might consider our content in the network to be “personal data”, but it is not atomistic in nature. We cannot export the social network.
Social media isn’t a benign public good; it’s a commons, a shared communications resource that can be co-opted, poisoned, and misused. Which means, in turn, that it should – per Elinor Ostrom’s observation – have rules that are responsive to those affected by the rules. In Facebook’s case, that is over 2 billion users as of 2019. The population affected by the internet is increasingly “all of humanity”. Social media is a technology that has given us new ways to communicate and connect, but it is also a technology that can destroy us.