While Elon Musk promises a Twitter that does not suppress speech, former President Barack Obama recently asserted that we need more content moderation, not less. Obama seems to misunderstand what moderation really means. Intentionally or mistakenly, he is calling for greater censorship.
Genuine "moderation" means protecting people from information that might do them harm, typically misinformation or disinformation that leads them toward unwise or dangerous actions or decisions. Moderation should not "protect" people from viewpoints or dissent; that would be censorship.
The boundary between legitimate moderation and censorship is blurry and easily crossed. On one hand, nobody should favor the suppression of facts or control over access to them. What do we mean by "facts"? Physical, logical, and mathematical facts are indisputable. It would be absurd, for example, to simplify a math problem by "protecting" people from the fact that pi, the ratio of a circle's circumference to its diameter, is an "irrational" number, meaning that its decimal form neither ends (like 1/4 = 0.25) nor becomes repetitive (like 1/6 = 0.166666...). You can't build a bridge or a house with pi equal to, say, 3.1. At the other extreme of speech, few would object to suppressing direct incitement to commit harm, such as social media "dares" to commit robberies and post videos of them. We would place in that category some of the conspiracy theories about COVID vaccines, such as that they implant surveillance chips or cause vaccinees to "shed virus."
It is the gray area between fact and incitement that illustrates the problem of moderation. When it comes to opinions, viewpoints, or dissent, the biases and mindset of those who would moderate their expression inevitably make an objective assessment of potential harm impossible. One person's opinion is another's attack. How often in the recent past have we heard the woke among us declare that "speech is violence"? Justice Louis Brandeis addressed the problem in his iconic concurring opinion in Whitney v. California (1927) when he wrote: "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence." In the vernacular, the remedy for bad speech is more speech.
In addition to opinions, other major forms of speech in the gray area are evidence-based data (e.g., studies, surveys, and polls) and analysis. But can any entity that arrogates to itself the role of moderator possibly be adequately versed in the quality and breadth of the available data? After all, data points can be cherry-picked, the inclusion of data may be shaped by confirmation bias, conflicting conclusions from the same data are possible, some studies or analyses can be missed, and there is a common tendency to mistake correlation for causation. The result should be extreme caution in suppressing speech when "misinformation" may lie solely in the underinformed eye of the moderator.
Musk rightly observes that such caution is rarely exercised, especially at Twitter, where unqualified moderators freely apply their own preferences and beliefs to an imperfect understanding of whether the speech at hand endangers its consumers. This is what can turn their moderation into censorship.
With Musk owning and running Twitter, let us hope that he embraces legitimate "moderation" and cracks down on what is, in fact, censorship.
Andrew I. Fillat spent his career in technology venture capital and information technology companies. He is also the co-inventor of relational databases. Henry I. Miller is a physician and molecular biologist. They were undergraduates together at MIT.