A new legal decision could have significant implications for how news content is distributed online, and could curb sensationalism, especially in Facebook posts designed to generate as much response as possible.
Last week, the Australian High Court upheld a ruling that, under certain circumstances, Australian media outlets can be held legally responsible for user comments posted on their Facebook pages.
The finding has raised new concerns about possible restrictions on journalistic freedom of expression and reporting capacity. But the case is more complex than the headlines suggest. Yes, the High Court ruling opens more room for media outlets to be held legally responsible for comments made on their social media pages, but the nuance of the ruling is specifically aimed at ensuring that incendiary posts are not shared with the clear purpose of baiting comments and shares.
The case stems from a 2016 investigation which found that prisoners at the Darwin juvenile detention centre had been mistreated, even tortured. In subsequent media coverage of the incident, some outlets sought to provide more context on the victims of this treatment, and a handful of publications highlighted those victims' criminal records as an alternative account of the incident.
One of the former prisoners, Dylan Voller, alleged that subsequent media descriptions of him were both erroneous and derogatory, leading him to seek legal redress over the published claims. Voller himself had become the focus of several articles, including one in The Australian headlined "Dylan Voller's list of prison events exceeded 200", which highlighted the many offences that had led to his imprisonment.
The issue, particularly as it relates to Facebook comments, arose when these reports were republished on those outlets' Facebook pages. The core of Voller's argument is that the framing of these articles, especially in the Facebook posts, generated negative comments from users of the platform, and that this framing, his legal team claimed, was designed to attract more comments and engagement on the posts and thereby gain more reach within Facebook's algorithm.
As such, the crux of the case comes down to a critical point: it's not that publications can now be sued simply for hosting people's comments on their Facebook posts, but how the content is framed in those posts, whether that framing attracted derogatory comments, and whether a definitive connection can be drawn between the post, the resulting community perception, and harm to the individual (it's not clear that the same rules would extend more broadly than this).
Indeed, in the original filings, Voller's legal team argued that these publications:
"should have known that there was a 'significant risk of defamation' after posting, partly due to the nature of the articles."
Thus, the complexity extends far beyond the top-level observation that publishers can now be sued over comments posted to their Facebook pages. The real impetus here is that anyone posting content to Facebook on behalf of a media publisher needs to be more cautious about the actual wording of those posts, because if subsequent defamatory comments can be linked back to the post itself, and the publisher is found to have instigated such a response, legal action can be taken.
In other words, publishers can distribute what they want, so long as they stick to the facts and don't deliberately craft incendiary social media posts around such an event.
An example of this is another article published by The Australian on the Dylan Voller case which, as you can imagine, also attracted a long list of critical and negative comments.
But the post itself is not insulting; it's simply a statement of fact, quoting an MP, and there's no direct evidence that the publisher tried to entice Facebook users to comment on the shared article.
That's the real question here: the decision gives publishers more responsibility to consider how they frame their Facebook posts to attract comments. If a publisher is seen to have incited negative comments, they can be held responsible for them, but there must be conclusive evidence of both the harm done to the individual and the intent behind the social media post, particularly the linked article, for a prosecution to proceed.
Which may actually be a better way to go. Over the past decade, online algorithms have changed media incentives so significantly that publishers clearly benefit from sharing hateful, emotionally loaded headlines that invite comments and shares, ensuring maximum reach.
This also applies to misinterpretations, half-truths and outright lies designed to provoke a user response. In the United States, a similar ruling would likely be viewed as a more severe restriction on press freedom.
This decision, again, applies specifically to Facebook posts whose wording is designed to provoke an emotional reaction in order to attract engagement. Demonstrating a definitive connection between a Facebook update and personal harm will remain difficult, as with all defamation claims. But perhaps this will make media Facebook page administrators more considered in their updates, as opposed to baiting comments to trigger algorithmic reach.
While this exposes the media to increased liability, it could in fact be the way forward, encouraging more responsible reporting and requiring publishers to take responsibility when their framing instigates online pile-ons.
Because this is clearly happening: the best way to attract comments and shares on Facebook is to trigger an emotional reaction that prompts people to comment, share, and so on.
If a Facebook post is found to have clearly encouraged such a response, and it damages someone's reputation, holding the publisher to account seems like a positive step, although it inevitably involves increased risk for social media managers.