Since Elon Musk walked into Twitter’s headquarters, sink in hand, one of the biggest questions surrounding his ownership of the company has been content moderation. Musk announced that a number of previously banned accounts would be reinstated, one of several factors that have led to speculation about whether Twitter will be allowed to remain in the Apple and Google app stores. Now, the New Zealand government has brought to light an incident that raises serious concerns about the company’s current ability to handle moderation.
As The Guardian reports, someone uploaded one of the worst things imaginable to Twitter: footage of the 2019 terror attack on two mosques in Christchurch, New Zealand.
To state the obvious, if you run a widely used social network in 2022, you need a system in place to flag content like this as soon as possible and delete it. As The Guardian reveals, Twitter didn’t do so until the government of New Zealand alerted it to the video’s presence.
The Christchurch Call, an international initiative led by New Zealand and France that launched after the attack, has a number of goals, including a commitment to prevent “the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements.”
As The Guardian notes, the Twitter team that had been working with the Christchurch Call appears to have vanished. Given that Musk laid off a significant number of the content moderators working on the site, it’s unclear what Twitter will do to prevent something like this from happening again. Let’s hope the answer isn’t “nothing.”