"Unpacking Silicon Valley's Incompatibility with Content Moderation"

In the mid-1990s, two US lawmakers, Ron Wyden and Chris Cox, inserted 26 words into the Communications Decency Act, itself part of the Telecommunications Act of 1996; those words became Section 230. They shield any provider or user of an interactive computer service from being treated as the publisher of content provided by another information content provider, and hence from liability for it. This led to an explosive growth of user-generated content on the internet, but also to the rise of vile, defamatory, and downright horrible content.

The hosting sites, although not legally liable, began moderating anyway, but this presents two problems: it is very expensive, given the sheer scale of user-generated content, and the dirty work is often outsourced to workers in poor countries, who are traumatized by watching unspeakable cruelty for a pittance.

Recently, platforms have been experimenting with AI moderation, which presents its own set of problems. First, there's H. L. Mencken's observation that "for every complex problem, there is an answer that is clear, simple, and wrong." Second, there's the cybernetic perspective, specifically W. Ross Ashby's law of requisite variety: for a system to be stable, the number of states its control mechanism can attain must be greater than or equal to the number of states in the system being controlled.
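
To see why requisite variety bites, here is a toy sketch (mine, not the article's; every number in it is made up for illustration). Posts fall into some number of distinct trouble categories, and a controller that can distinguish fewer categories than the stream contains leaves an irreducible fraction unregulated, however cleverly it assigns its responses:

```python
import random

# Toy model of Ashby's law of requisite variety. SYSTEM_VARIETY and
# CONTROLLER_VARIETY are illustrative assumptions, not real figures.
SYSTEM_VARIETY = 1000      # distinct kinds of problematic content
CONTROLLER_VARIETY = 600   # distinct responses the moderator commands
N_POSTS = 100_000          # simulated posts

random.seed(0)
unhandled = 0
for _ in range(N_POSTS):
    category = random.randrange(SYSTEM_VARIETY)
    # Best case: the controller covers CONTROLLER_VARIETY categories
    # perfectly. Everything outside its repertoire goes unregulated.
    if category >= CONTROLLER_VARIETY:
        unhandled += 1

print(f"unregulated fraction: {unhandled / N_POSTS:.1%}")
# Prints roughly 40% -- the gap between 1000 and 600 is a floor that
# no cleverness inside the controller can remove.
```

The only moves that change the outcome are shrinking the system's variety or growing the controller's, which is exactly the choice described next.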

If a platform like Meta has billions of users throwing stuff at its servers every millisecond, it has what Ashby would have called a variety-management problem. There are really only two ways to deal with it: attenuate the incoming variety by choking off the supply, or amplify internal capacity to cope with the torrent. The former undermines the business model; the latter is challenging even with half a million human moderators, as a bit of arithmetic below suggests.
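
A back-of-envelope sketch makes the point (all figures below are my assumptions, chosen as round numbers; none comes from the article or from Meta):

```python
# Back-of-envelope capacity check. Every figure is an assumed round
# number for illustration only.
posts_per_day = 3_000_000_000   # assumed daily user uploads
seconds_per_review = 30         # assumed time for one human review
workday_seconds = 8 * 3600      # one moderator's working day

reviews_per_moderator = workday_seconds / seconds_per_review   # 960
moderators_needed = posts_per_day / reviews_per_moderator

print(f"moderators needed for full review: {moderators_needed:,.0f}")
# Prints about 3,125,000 -- several times the half a million human
# moderators mentioned above, before accounting for appeals, context,
# languages, or the toll the work takes on the people doing it.
```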

The law of the land doesn't apply to Meta, thanks to Section 230; Ashby's law, however, does, and it may prove a tougher one for AI to beat.

Source: <https://www.theguardian.com/commentisfree/2024/apr/27/silicon-valleys-business-model-is-incompatible-with-the-moderation-of-online-horror-and-hatred>
