Former Twitter exec: People skipping their 'vegetables' with social media news
- Social media has changed how news is delivered
- Determining fact from fiction could become as important as learning reading, writing and arithmetic
- An emphasis on algorithmic transparency can clash with users’ behaviors on social media
Discussions about "fake news" have dominated headlines and led companies such as Facebook Inc., Twitter Inc. and Alphabet Inc. to unveil a variety of approaches to moderating content on their platforms.
S&P Global Market Intelligence talked with Adam Sharp — who served as Twitter’s head of news, government and elections until December 2016 and is currently a consultant and speaker — about how news is distributed on social media. Below is an edited transcript of the conversation.
S&P Global Market Intelligence: Fake news on social media is a huge issue right now. Why?
Adam Sharp: I think two big things have changed in the social media age. The first is that news was almost always delivered in a bundled fashion: you had 'if it bleeds, it leads' in the newscast, but you might also have hard news in the same newscast. The delivery mechanism always sort of ensured that vegetables were on the plate even if you were buying the sundae. In the social media context, it's a lot easier for a piece of content to be delivered individually, so you discover one quote-unquote fact, one piece of information, perhaps not even in the context of a full news story.
The other difference is obviously the speed and scale of distribution. This is a double-edged sword because all of the reasons why Twitter and Facebook are excellent platforms for delivering real news are what make them effective platforms for delivering fake news and spreading it more widely. If your entire business model is based on getting someone to 'like' that piece of content, essentially being the Pavlov's dog of getting them to hit the lever for another biscuit, that model is not necessarily going to deliver the vegetables.
What role do Twitter and Facebook have to play in handling misleading information?
There's a platformwide interest in reputation preservation and in not being seen as a place devoid of credible information. I think Facebook has been a little bit more susceptible to these challenges than Twitter has. While Twitter has made some iterations to its product in recent years to algorithmically surface content that you might be more interested in, once you scroll past that handful of tweets, you are in a chronological timeline where the tweets you see are not necessarily driven by what you'll like the most. I think that inherently makes the Twitter timeline more diverse than the Facebook experience, where you're only seeing a subset of what your friends have shared, and it has traditionally been optimized for that 'like' engagement.
Given that Facebook has the larger credibility threat to contend with there, it has also been commendably more active in trying to correct for that. Just last week I saw that they've started testing prompts in the timeline educating users about fake news.
But could this type of moderation become censorship?
I think that's an area the companies would rather not engage in. The moment you start adding that subjective hand, particularly on a platform like Twitter, which is known as a platform for freedom of speech, you start getting into gray areas where you may be turning off as many users as you're helping. You may be inadvertently silencing particular perspectives: if you're blocking an account for sharing one bit of false information, you're also blocking the opinions it shares. The companies also have a challenge in scalability. Twitter is half a billion tweets a day, and Facebook is more content than that. The moment you start taking responsibility for manually moderating some of that, because it's quite difficult to do algorithmically, you open yourself to criticism for the things you didn't moderate. I think Twitter has seen this in the reaction to how it has handled abuse in the past.
Is having this discussion about fake news productive?
I think the conversation is most likely a good thing that leads to results, but an oversimplification or clouding of the issue risks having the opposite effect.
Another question it raises is how we value information literacy in our country. I went to journalism school, and in the first class you're taught about source evaluation, double-checking the facts and knowing when you have the story right. We're now in a society where everyone is essentially a publisher. The moment you hit tweet or retweet or 'like,' you are potentially distributing a piece of information as much as any professional journalist might. Are some of these elements of self-discipline and information awareness and literacy things that should be coming out of the quote-unquote journalism curriculum and becoming as commonplace as the three R's? I think that's an important discussion to have as people become more aware of where they get their facts.
Would more transparency about how algorithms work help the conversation around fake news?
On some level, [companies] tackle this every day and always have. Just recently, Twitter changed how replies are displayed in a way meant to make the platform more approachable for new users. But a lot of longer-term, more experienced Twitter users complained and said, "No, we like it the old way," and you saw this back and forth. Every single product decision that Twitter makes, that Facebook makes, that [Snap Inc.'s] Snapchat makes winds up being a choice between giving the user more control or the product more control over the experience. It is one case where user behavior and user statements are not always aligned. When you ask the question, people generally articulate a desire to have flexibility and visibility and ultimate domain over their experience, and yet in their behavior, users of these platforms demonstrate a desire to just tap an icon and have it be what they want.