So, at the end of the day, legislation to curb fake news looks like a sure thing. (I am not going to say it will happen, because I must give Parliament the dignity of being able to say no.) Reason: Current legislation isn’t good enough, especially at curbing the quick spread of disinformation. Also, we currently have no laws to compel technology companies like Twitter and Facebook to do anything. These social media platforms have to be accountable too.
That’s my one-cent worth of summary of the tome produced by the Parliamentary Select Committee on deliberate online falsehoods. It’s really no surprise to me that some kind of law would be in the pipeline, going by the way the committee handled the public hearings. Almost every representor was pinned down on the necessity for some sort of action to curb fake news.
Those who thought that current laws were good enough were wrong, and they had no evidence to prove that people here can discern fact from fiction, the panel said in its report. As for the definition of fake news, which had some people worried that it might be used to curb legitimate dissent, the panel said that falsehoods can be independently verified and the courts here have historically done so. So there!
It’s a massive, albeit repetitive, report. If anyone needs a primer on fake news, with examples to go with it, it’s recommended reading. The pity is that the examples are from the world over; the only significant Singapore example is the defunct TheRealSingapore, which had the Sedition Act thrown at it for spreading disinformation and creating dissent.
A lot of attention was paid to foreign State and non-State actors out to break social cohesion here by employing such “non-kinetic” warfare tactics. The pity is that not much light was shed on this, for national security reasons, so we are left with assertions that cyber warfare is already happening here – and will continue to happen. Again, worldwide examples peppered the report, and there was a brief reference to the SingHealth hack (which, by the way, was about theft, not fake news). A significant portion was dedicated to big, bad Russia’s shenanigans.
Did the committee’s report allay fears that any new laws or regulations would be abused by the powers-that-be? Face it: that’s really the objection to fake news laws, isn’t it? That the G would somehow use them to tamp down any opposition – and throw you in jail for it too.
The might of the elephant in the room was mentioned here and there, for example, in the discussion on who should decide on what is fake:
Representors raised concerns about whether Executive action would be credible. There was concern that Executive action could feed fears over the abuse of power. It was also pointed out that Executive directions would not be able to deal with falsehoods spread by the Executive itself. That said, both Law Dean Associate Professor Goh Yihan and law academic Associate Professor Eugene Tan explained that judicial oversight of Executive action would serve a crucial balancing role in ensuring the propriety of the Executive’s exercise of discretion.
Civil society activists had tried to pull the elephant’s tail by advocating a Freedom of Information Act and the introduction of an Ombudsman to investigate public complaints of executive excess. The supposition, methinks, is that more information is needed to counter fake information, and a check on the G is needed should it attempt any kind of “cover up”. The committee countered that these issues were too big and multi-faceted for it to take on. The ball was, instead, lobbed to the G.
As there are countries which have such legislation and institutions, the Committee suggests that the Government study the experience of these countries, and whether having a Freedom of Information Act and an ombudsman would help in dealing with deliberate online falsehoods.
As for allegations that such laws would produce a “chilling effect”, the committee referred to the testimony of Mothership, the online news site.
Mothership testified that it did not experience a drop in traffic, nor a drop in contributions, comments and engagement on its platform, as a result of being covered by the Broadcasting Act licensing regime. This suggested the need for circumspection in assessing the extent of any potential “chilling effect”. The prospect of a “chilling effect” should be dealt with through calibration of the powers deployed; the answer cannot be to do nothing at all.
The committee quoted liberally from Professor Thio Li-Ann’s representation to make the point that free speech isn’t being circumscribed.
She had said: “There is no human right to disseminate information that is not true. No public interest is served by publishing or communicating misinformation. The working of a democratic society depends on the members of that society … being informed not misinformed. Misleading people and … purveying as facts statements which are not true is destructive of the democratic society and should form no part of such a society. There is no duty to publish what is not true: there is no interest in being misinformed.”
Her argument is that fake news laws would protect “the marketplace of ideas” by driving out what is false, so that people can come to conclusions based on facts. In other words, such laws promote the democratic process.
Okay. So, if legislation is more or less to be expected, what did the committee propose to ensure that the net isn’t cast so wide that it catches every single lie, piece of satire and prank, rather than just deliberate falsehoods designed to bring down the country?
Methinks we should pay attention to two critical phrases used in the report: “prescribed threshold for intervention” and “criminal culpability”. In other words, how bad should things get before the law kicks in? And how heavily should the law come down on purveyors of lies? Should it be different strokes for different folks?
Representors said the threshold for intervention has to be based on a combination of factors, such as the magnitude and nature of the impact, the type of content, and the intent and identity of the perpetrator. Also, there are different degrees of disinformation and different sorts of lies.
On this, the panel recommended:
Criminal sanctions should be imposed on perpetrators of deliberate online falsehoods. These deterrent measures should be applied only in circumstances that meet certain criteria. There should be the requisite degree of criminal culpability (i.e. intent or knowledge), in accordance with established criminal justice principles. There should be a threshold of serious harm such as election interference, public disorder, and the erosion of trust in public institutions.
So no, the committee didn’t pin down weightages for each factor or draw up a matrix. In any case, we shouldn’t expect it to hammer in the nuts and bolts. What’s good is that it has taken into account the different facets of, and motivations for, spreading falsehoods, and advises a calibrated approach. It remains for someone somewhere to draw up the “criteria” and translate them into legal language.
So, dear reader, I have been leading you to this point: the committee produced a pretty general report, replete with examples and backed up by experts both local and foreign. The G can now say that it has embarked on an extensive public consultation exercise. But the real bite of fake news laws will be in what is drafted by the executive. And that should mean a second round of scrutiny of “threshold” and “criminal culpability”, as well as the penalties imposed.
I am hoping that a Select Committee to scrutinize the Bill will be formed. But I am not betting on it.