(12:54) Pro-regulation argument
Reform advocates like Tristan Harris say that of course individuals have power, but when we focus the problem on specific CONTENT, or frame it as a few executives in Silicon Valley who control everything... we’re still missing the bigger problem:
“We often frame this issue as a few bad apples: we’ve got these bad deep fakes, bad content, bad bots, dark patterns. What I want to argue is that we have dark infrastructure. This is now the infrastructure by which 2.7 billion people, a population bigger than Christianity, make sense of the world. It’s the information environment. If private companies went along and built nuclear power plants all across the United States and they started melting down, and they said, ‘Well, it’s your responsibility to have hazmat suits and build a radiation kit,’ that’s essentially what we’re experiencing now. The responsibility is being put on consumers, when in fact, if it’s the infrastructure, the responsibility should be put on the people building that infrastructure.”
Protection of children and teenagers has become a primary focus of those calling for regulation. Tristan points out that policies designed to safeguard the content kids are exposed to are not a new idea at all:
"We used to have Saturday morning cartoons. We protected children from certain kinds of advertising, time place manner restrictions. When YouTube gobbles up that part of the attention economy, we lose all those protections. So why not bring back the protections of Saturday morning?" – Tristan Harris
There is a LOT of nuance to this debate, and a lot more to unpack in terms of how we solve the issue of technology and misinformation. To help us do exactly that, we’re thrilled to welcome Dr. Sam Woolley. Sam is a writer, researcher, and professor with a focus on emerging media technologies and propaganda. His work looks at how automation, algorithms, and AI are leveraged for both freedom and control. His recent book, The Reality Game: How the Next Wave of Technology Will Break the Truth, explores the future of digital disinformation across virtual reality, video, and other media tools, and includes a pragmatic roadmap for how society can respond.
(14:24) Interview With Sam Woolley
SCOTT:
Tell us a little bit about your background. What are you working on, and how did you get interested in it?
SAM:
I'm primarily a writer and researcher. I work at the University of Texas, where I oversee something called the Propaganda Research Lab within the Center for Media Engagement. My team is mostly focused on doing analysis of emerging media and trends like disinformation and manipulation of public opinion. So, we study all these sorts of emerging things on platforms, from Parler to Facebook to Twitter, and the ways in which these platforms are leveraged for various forms of manipulation or hate or harassment. Simultaneous to that, we also focus on solutions, so we look at the ways in which we can solve these problems.
SCOTT:
Do you feel like the truth is already broken in some ways, and if not, what will it mean to eventually break it?
SAM:
In some ways, the truth has always been a broken concept, right? Hannah Arendt wrote back in the 60s about the connections between truth and politics and basically made the assertion, very correctly, that politicians have always had a very flexible relationship with the truth. In fact, lying is oftentimes seen as a necessary part of politics, not just by demagogues or authoritarians but also by statesmen and regular politicians in Congress in the United States.
That's not to say, though, that facts are broken, because I think truth and facts are two different concepts that require a little bit of parsing. In today's day and age, you have people like Kellyanne Conway sitting before the news media saying things like, "We're presenting our own alternative facts," in reference to Sean Spicer's comments as press secretary. But alternative facts are not facts, right? A fact refers to science and empirical knowledge: being able to verify something, being part of the scientific process.
I would say that while the truth is under attack, and is in some ways always already broken, what we have to do is work our way back to a shared reality.
A decade ago, when I first started studying this stuff, no one really wanted to talk about the fact that social media and new media technologies were being used for manipulation, and that this was causing a real challenge to the truth.
SCOTT:
What is computational propaganda?
SAM:
Computational propaganda is a term that Phil Howard, the director of the Oxford Internet Institute, and I coined together back when we were both at the University of Washington. We were thinking about the ways in which social media was being used during the Arab Spring and during the Occupy Wall Street protests, particularly looking at automated fake profiles: profiles on Twitter with software behind them to automate their tasks and spread particular political perspectives.
Basically these bots, political bots, were being used to manipulate public opinion by amplifying particular content. The goal was either to get people to re-share that content, because the amplification gave it the illusion of popularity, or to get algorithms to pick it up.
We landed on this term, computational propaganda, because really what we saw was the ways in which propaganda, which is something that's been around for a very long time, was being enhanced by things like automation and anonymity. It was bots but it was also the algorithmic processes that I was just talking about. You had social media companies out there saying things like, "We're not the arbiters of the truth. We just present information," and trying to dress themselves up as a modern-day AT&T or some such.
SCOTT:
How automated are the bots, actually? Or is it still a fairly manual process to run them?
SAM:
Social media bots, or social bots, and more specifically what my colleagues have called political bots, are forward-facing bots that have a presence on social media. A bot is an automated piece of software that gets built to automate a social media profile, so it can automate the posts, it can automate likes, or, in the case of Twitter, retweets. It can post automated comments on, say, a news site below the line. So bots are fairly automated things. They usually get built through the application programming interface on a given site, the API.
There's also sock puppets, which you mentioned. Sock puppets are usually run by people. They're accounts that have no clear identity on a site like Facebook, Twitter, YouTube, whatever, that are manually run.
Increasingly, we're seeing a merger of these two things; there used to be more separation. Most of the political bots that were out there on Twitter back in 2013, or even during the 2016 US election, were very heavy-handed. They were used just to massively boost particular political candidates, or to spam comment sections with repetitive comments over and over and over again. That was all they needed to do, because propagandists and people attempting to spread disinformation are very pragmatic. They oftentimes use the cheapest and simplest tools they need to get the job done.
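Sam's description of these heavy-handed bots, accounts that boost the same message over and over on a fixed cadence, maps onto simple signals that researchers and platforms can look for. Below is a minimal, illustrative sketch in Python; the thresholds, field names, and the function itself are assumptions made for illustration, not any published detection method.

```python
# Illustrative sketch only: a toy heuristic for the kind of "heavy-handed"
# automation described above (very high posting cadence, repetitive content).
# Thresholds and data shapes are assumptions, not a real detection system.
from collections import Counter
from datetime import datetime

def looks_heavy_handed(posts, max_per_hour=30, max_duplicate_ratio=0.5):
    """posts: list of dicts with 'text' (str) and 'time' (datetime)."""
    if len(posts) < 2:
        return False
    # Cadence: average posts per hour over the observed window.
    times = sorted(p["time"] for p in posts)
    hours = max((times[-1] - times[0]).total_seconds() / 3600, 1e-9)
    per_hour = len(posts) / hours
    # Repetition: share of posts that are exact duplicates of another post.
    counts = Counter(p["text"].strip().lower() for p in posts)
    duplicates = sum(c for c in counts.values() if c > 1)
    duplicate_ratio = duplicates / len(posts)
    return per_hour > max_per_hour or duplicate_ratio > max_duplicate_ratio

if __name__ == "__main__":
    sample = [
        {"text": "Vote for candidate X!", "time": datetime(2020, 1, 1, 12, m)}
        for m in range(60)  # one identical post every minute
    ]
    print(looks_heavy_handed(sample))  # True: fast and repetitive
```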
SCOTT:
In your book, you argue that fighting fake news AI bots with more AI is a mistake. Why do you think that?
SAM:
Every time I go to a conference, there's always someone who says, "Well, if Russia wants to use bots or AI, then we should use AI in the US," or the UK, or wherever I am. So there's this very adversarial idea that we should fight fire with fire.
I disagree with this. I think that responding to bots with more bots, or responding to AI with more AI systems, oftentimes can lead to unexpected and problematic consequences. The number-one thing that happens when you fight bots with bots is a tremendous amount of noise. Bots already mess up our information ecosystem. They've already created a tremendous amount of noise on Twitter, and they've created a tremendous amount of distrust in social media, because people just don't know what to believe. They don't know whether or not random profiles that they're interacting with might be aimed at manipulation. And that distrust is something we don't want to throw fuel on, to extend the fire metaphor.
Let's think about ways to build policy that helps to limit the effectiveness of political bots, say, for instance, stopping them from posting every minute, which Facebook and Twitter, and other companies as well, have picked up on in the last several years. We've seen this be quite successful.
On the other hand, however, we also need to redesign some of the social media technology in a way that doesn't allow cyborg accounts, combinations of bots and people, to function quite so well.
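The throttling Sam credits with blunting the crudest bots, stopping an account from posting every minute, is essentially rate limiting. Here is a minimal sketch of the idea; the class, caps, and window size are assumptions, with no claim about how any particular platform actually implements it.

```python
# A minimal sketch, not any platform's actual implementation: a sliding-window
# rate limiter that caps how often a single account can post. Limits are made up.
import time
from collections import defaultdict, deque

class PostRateLimiter:
    def __init__(self, max_posts=5, window_seconds=300):
        self.max_posts = max_posts
        self.window = window_seconds
        self.history = defaultdict(deque)  # account_id -> recent post times

    def allow(self, account_id, now=None):
        now = time.time() if now is None else now
        recent = self.history[account_id]
        # Drop timestamps that have fallen outside the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_posts:
            return False  # over the cap: reject or queue for review
        recent.append(now)
        return True

limiter = PostRateLimiter(max_posts=5, window_seconds=300)
# A script posting every second gets cut off after five posts in the window.
print([limiter.allow("bot_account", now=t) for t in range(10)])
```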
SCOTT:
So fighting AI bots with more AI is not a good use of technology. Are there policies we can get these platform providers to agree to? Is there any sort of technology that might help in this fight?
SAM:
I think, to add nuance to the point on bots and AI, there are beneficial ways we can use both of these things in concert to create smart bots that, say, for instance, help to verify information, or help to educate, or help to connect particular communities that need to be connected, say, educational communities that otherwise wouldn't be speaking. So maybe we can use AI bots as the stitches in the patchwork quilt to connect diverse communities in order to create more equality and better democracy.
As we build these AI systems we have to think very carefully about what we're putting in, because what we put in matters on the backend.
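As a concrete, and entirely hypothetical, illustration of the kind of beneficial bot Sam describes, here is a sketch of a bot that answers a query by pointing to a small, human-curated list of fact-checks instead of amplifying content. The claims, URLs, and matching logic are invented for the example.

```python
# Hypothetical sketch of a "beneficial bot": it replies to a message with a
# link drawn from a small, human-curated fact-check list. Data is illustrative.
FACT_CHECKS = {
    "vaccines cause autism": "https://example.org/fact-checks/vaccines-autism",
    "climate change is a hoax": "https://example.org/fact-checks/climate-hoax",
}

def reply_with_fact_check(message: str) -> str:
    text = message.lower()
    for claim, url in FACT_CHECKS.items():
        if claim in text:
            return f"This claim has been reviewed. See: {url}"
    return "No matching fact-check found; consider checking a trusted source."

print(reply_with_fact_check("I heard that climate change is a hoax?"))
```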
SCOTT:
Do you feel like there's a role for government, or for policy, in enforcing some of these ideas?
SAM:
I'm in favor of regulation. I think regulation has to happen. I think that the social media companies know, in the United States in particular, that regulation has to happen. The US government has got to stop divesting from the regulation of the digital sphere just because they don't understand it, because there are lots of people here in the United States who do understand it and can help them build sensible regulation. It's just a question of getting organized and doing it.
If we're going to regulate the space, and I think we are, we have to involve technologists, public interest technologists and researchers in this process, so that the policy that gets created is actually sensible and is able to be mapped onto the internet and digital ecosystems, rather than just trying to retrofit old policies that were created pre-internet.
SCOTT:
How do you recognize that there's a bot in play or that someone's attempting to perhaps influence you?
SAM:
People have gotten more savvy to the fact that there are people out there trying to manipulate them online, and that these different types of technologies and political strategies are part of social media. So that awareness is good. I would say that on an individual level, there are lots of different tools out there.
On our website, you can find a lot of different tools for tracking whether or not a profile is a bot profile, or whether a story has been fact-checked or might be disinformation.
There are also organizations like the Poynter Institute, which is generally aimed at helping journalists become better reporters, but which has also done a lot of work to put out information on fact-checking and disinformation in the digital sphere.
We also need to think before we post something that's very political or inflammatory.
SCOTT:
That would be a great design principle to build into a platform where you don't want disinformation to spread rapidly.
SAM:
We have to ask ourselves: what does it look like to redesign social media and other communication tools in the digital realm with the fact in mind that people will be discussing politics, elections, voting, science, climate change, these sorts of things?
SCOTT:
Anything else you want to leave us with Sam?
SAM:
The last thing I would say is that we need to be thinking about what's next for social media. The move to encryption and to encrypted, private sites is worrying to me, because I think what it means is that we will become more ignorant of the fact that there is still a lot of disinformation and extremism flowing. That might allow us to pretend it doesn't exist, but it still very much exists. So pushing people to dark corners of the internet is not necessarily a good thing, and it's not a solution.
As this continues to happen, and I think it will, we need to think very carefully, as companies, as policymakers, and as researchers, about the ways in which we can maintain encrypted spaces, because they're really important for all sorts of human rights-oriented communication, while also preventing a lot of the horrible misuse of these platforms that we see and know about, like terrorism, but also disinformation, child pornography, these sorts of things that are really problematic.
I think we already have mechanisms for preventing the sharing of that kind of content, by allowing people to tag it and share it with the company and say, "Hey, this is bad." But let's not, as companies, just move towards encryption and think that that's the solution. It's not.