While the mainstream media focuses on the role social media giants played in spreading the live-streamed video of the Christchurch terror attack, it could also be facing tighter regulation over its own coverage of the massacre.

The Australian has led its coverage with Scott Morrison’s calls for tighter controls on live streaming, a push he will take to the G20 meeting in June.

The gunman streamed 17 minutes of his attack over the internet, and while Facebook said it was working on removing the video shortly afterwards, different versions and edits were appearing on its platforms and on Twitter and YouTube for hours. Footage continues to be available on some platforms.

Facebook has pointed to its removal of 1.5 million videos of the attack, including edited versions, as proof of its commitment to stopping the video’s spread. But it won’t say how many times those videos were viewed, or explain how such a huge number of copies could be posted to its platform in the first place. YouTube (owned by Google) has also been briefing journalists that thousands of videos have been removed, but it will leave up videos that use the footage in a context that has “news value”.

As with previous mass shootings broadcast on digital platforms, the tech companies have been left scrambling to remove the content and scrub the graphic footage from their sites.

But it wasn’t just the tech giants spreading the footage and manifesto over the weekend, something legacy media coverage has largely failed to reflect on. Part of the manifesto was read out on the ABC’s live coverage (which soon-to-be editorial director Craig McMurtrie has since said was a mistake). Sky News Australia repeatedly ran parts of the footage until Saturday morning, and the Nine, Ten and Seven networks have all aired edited versions of it in their news coverage.

News.com.au, the most-visited news website in Australia and owned by News Corp, had an edit of the video set to auto-play at the top of its main story on Friday afternoon, next to another story dedicated to a detailed rundown of everything shown in the video, also with the video embedded. Daily Mail Australia also ran clips from the video, detailed descriptions, a page full of screen grabs, and links to the manifesto.

Seven’s Sunday Night showed clips from the video on Sunday, with reporter Alex Cullen showing it to the episode’s interview subject — the gunman’s cousin — until she said she couldn’t watch any more.

Even the ABC showed still images from the video (which McMurtrie explained in this piece for the ABC’s website).

Internationally, UK newspapers including The Sun, The Mirror and Mail Online have come under fire for publishing at least parts of the video online before removing it.

The question of whether Australian regulation of the traditional media sector is sufficient will be raised by ACMA, the broadcast regulator, in its investigation into coverage of the massacre. Announcing the investigation yesterday, ACMA flagged that it would be talking to the Australian Press Council and broadcasters’ representative bodies about whether the current rules are adequate.

Currently, ACMA regulates broadcasters while the Australian Press Council, a voluntary membership body, handles complaints about newspapers and news websites that are its members. Press council members include Nine, News Corp, Daily Mail Australia and Crikey.

News Corp has responded to the inquiry in typical fashion, using it as another opportunity to call for tighter regulation of the tech platforms. In a statement, a spokeswoman said:

Organisations that allowed video of the killings to be streamed live and then failed to remove it from their platforms for many hours are not subject to the same scrutiny and have no formal agreement to take responsibility for their actions. This goes to the fundamental problem of there being one set of rules for responsible media organisations and no rules at all for digital platforms.

In submissions to the ACCC’s current inquiry into online platforms, publishers were already pushing for greater regulation of the tech giants while trying to avoid greater scrutiny of themselves.

In that inquiry, the tech platforms have argued that self-regulation is sufficient, although harmful content has been outside the inquiry’s remit. On YouTube, 70% of removed content was first flagged by a computer rather than a person, but automatically identifying harmful content is harder in a live stream than in an uploaded video. At Facebook, moderators are trained on what to look out for on live streams, including gunfire and begging, before taking videos offline.

On top of that is the sheer volume of content uploaded to these sites — their business models are built on scale and giving people the ability to say what they want, when they want.

While they say they want to remove harmful content, Facebook CEO Mark Zuckerberg himself has said he doesn’t want Facebook to monitor what people say before they say it, making it seem unlikely Facebook will be checking content before it’s uploaded.

Social media’s role in the spread of hate speech has also been of concern since long before Friday. Neo-Nazi Blair Cottrell was removed from Twitter just last week, prior to the attacks, and in the US, broadcaster and conspiracy theory pusher Alex Jones has been removed from most major platforms after sustained campaigns.

Germany’s Network Enforcement Act (NetzDG), which came into force a year ago, requires social media platforms to remove hate speech quickly, but it has been criticised as an ineffective “paper tiger”.

How should media companies and social media giants be regulated after the Christchurch terror attack? Write to boss@crikey.com.au with your thoughts.