Twitter finally seems to be coming to grips with the extent of its problems. It’s now publicly admitting to them, and it says it’s trying to fix them.
The unhealthy platform has let its issues fester for years. Its feeds have long been filled with trolls, misinformation, performative outrage, and abuse. And recent Congressional scrutiny has exposed how woefully unprepared it is to mitigate state-sponsored manipulation of its platform.
On Thursday afternoon, Twitter CEO Jack Dorsey went live on Periscope to talk about this new focus, explaining that Twitter is trying to work to increase its platform’s “health,” an umbrella term under which it’s currently lumping its plan to fix all these problems. On the broadcast, Dorsey was joined by the company’s legal, policy, and trust and safety lead Vijaya Gadde, its head of Trust & Safety Del Harvey, and its health product manager David Gasca. The quartet did their best to explain what “health” means to Twitter, essentially admitting that the company is starting at square one.
Twitter, Dorsey said, is trying to define what health means and how to measure it, and eventually it would like to give its users the option to choose a more healthy experience. Twitter, he said, “can do a much better job at giving people tools to choose more health, for however we end up measuring that and defining that, which is still being worked on.”
If that sounds vague to you, well, it is. Which is why we’re left with plenty of questions about this effort that’s purportedly poised to change the way Twitter works. Here are five to start:
1. What product changes will come out of this effort?
Twitter recently released a request for proposals asking the public to help it “define what health means for Twitter and how we should approach measuring it.” Sounds good, but what exactly does Twitter plan to do with the data? On the Periscope, Dorsey gave few clues. “We’ve had conversations about more moderation by community owners,” he said. “But ultimately we don’t have any particular answer right now.” Dorsey said this effort is Twitter’s top priority. But that’s all we know now. Where this ship is heading is anyone’s guess — even Twitter’s, it appears.
2. Will this effort make the public empathize with Twitter? And is that part of the goal?
Content moderation decisions are often incredibly complex, regularly presenting those making them with a lot of bad options. Twitter seems to bungle even the easy decisions, and it makes its choices with little transparency, often angering its users who feel that some people have been unfairly silenced, while others run amok. Throughout the broadcast, Dorsey repeated the words “transparency” and “trust.” If the public gets a look into the content moderation decisions Twitter is facing, perhaps they will trust it more. Or at least, they’ll empathize with some of the unwinnable decisions. And maybe that’s part of the goal here.
3. Will Twitter ever be able to fix verification?
Dorsey, on the broadcast, didn’t mince words about the state of verification on Twitter. “Verification, as many of you know, is something that we believe is very broken on our platform and something that we need to fix,” he said. The company, he said, is reworking and rethinking the blue checkmark, a necessary move after Twitter verified a handful of white nationalists (before eventually taking their verification badges away). The company long held that the blue checkmark was not an endorsement, but it recently backed off this stance after it became clear that no matter how many times it said “verification is not an endorsement,” people still see it that way. Twitter, Gasca said, is thinking about “the profile on the platform, and how can we increase context so you know when you see someone, how to evaluate what they’re saying. How you should interpret their message based on who they are and what their history is.” Hearing this, it seems like Twitter is considering new verification options that are even more complex than the hard-to-decipher system that exists today. Verification should be simple: It should simply indicate that you are who you say you are. But that’s a tough system to put in place for more than 300 million users. It remains to be seen how this effort will turn out, but it seems it will take Twitter time to figure verification out.
4. Will right-wing users ever get on board with “healthy” changes?
Throughout the broadcast, Periscope viewers commented about the company’s perceived bias against conservatives. “Twitter hates conservatives. Not nice to us,” wrote one user. “Stop crushing conservatives,” wrote another. Gadde addressed these accusations, telling viewers that Twitter’s employees go through anti-bias training, and that if they’re found to make biased decisions, they’re disciplined. Still, should Twitter implement major changes to emphasize “health,” the comments indicated it will likely face pushback from segments of conservatives, some of whom seem ready to seize any opportunity to claim the San Francisco-based company is trying to silence their voices.
5. Is Twitter finally going to scrap its policy against commenting on individual accounts?
The bulk of Dorsey’s comments was vague, but he was crystal clear on one thing: Twitter wants to be more transparent. “Often times we have taken action on tweets and accounts and not explained why. We’ve had a bunch of policies in the past that we are now revisiting around how we communicate and to who we communicate,” he said. “In some cases we weren’t communicating to the reporters, in some cases we weren’t communicating to the violator of the terms of services, in some cases we weren’t communicating to the world. We see opportunities around all those dimensions to add more clarity around our actions.” Is Twitter’s “we don’t comment on individual accounts” policy — which it has used as a shield when asked to explain tough judgment calls — on its way to the ash heap of history? Sure sounds like it.
Twitter did not immediately respond to these questions. We’ll update the story if and when it does.