For regulating artificial idiocy, against regulating cryptography

Margaret Heffernan correctly argues for regulating what its promoters call artificial intelligence and what I call artificial idiocy.[1] But I need to refine her argument. She writes:

Even when apostate pioneers in artificial intelligence warn vociferously against its dangers, an equal and opposite cohort (sometimes even including the same voices) argues that the technology is too young to be constrained, that there isn’t yet sufficient evidence of harm and that business can be trusted to do the right thing.

But no other industry gets such a free pass. Electrical appliances are tested to ensure they don’t explode or catch fire. Cars aren’t allowed on the road if they don’t meet safety standards. Pharmaceutical businesses must prove their products are safe before they can go on sale. If harms emerge in any of these, regulation piles on top of regulation. Developing and enforcing standards is a cornerstone of the social contract: citizens expect their governments to strive to keep them safe.

So why is technology the exception? When Facebook was found to have experimented on users without their consent, when social media has been shown to harm young and vulnerable users, calls for regulation resound and then go quiet. In the argument that new technology is too precious an economic opportunity, AI is but the latest target. Its evangelists argue that, so far, it hasn’t shown any signs of harm, and the engineering is too abstruse for legislators to understand. The first point is debatable, the second is often correct. I have had many conversations with MPs and chief executives who privately acknowledge feeling out of their depth when it comes to tech. It’s less embarrassing to avoid the gnarly problems, meaning they find common cause with the companies who also benefit from ignoring them.[2]

Some of Heffernan’s argument reminded me of the argument over cryptography, which stands accused of enabling all manner of pedophilia, terrorism, and other criminality, but which we also rely upon to secure, for example, our credit information when we make online purchases. White supremacist gangs regularly insist that they absolutely must have a back door to decrypt criminal communications. Cryptographers insist, often seemingly to deaf ears, that any compromise on cryptography compromises, among other things, our banking information, because any back door can be exploited by bad actors, including foreign governments who would invoke sovereignty to demand access to such back doors in their pursuit of dissidents.
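
The cryptographers’ point lends itself to a toy demonstration. Here is a minimal sketch in Python, using the Fernet recipe from the widely available cryptography package; the key-escrow arrangement and every name in it are hypothetical, invented purely for illustration. The lesson: a back door is simply another key, and whoever ends up holding that key, legitimately or otherwise, reads the traffic.

```python
# Hypothetical key-escrow sketch, for illustration only.
from cryptography.fernet import Fernet

# A message encrypted under a fresh session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"card number 4111 1111 1111 1111")

# The "back door": the session key is also wrapped under an escrow key
# that some agency insists on holding.
escrow_key = Fernet.generate_key()
wrapped_key = Fernet(escrow_key).encrypt(session_key)

# Anyone who obtains escrow_key -- the agency, a rogue insider, a thief,
# or a foreign government -- unwraps the session key and reads the message.
recovered_key = Fernet(escrow_key).decrypt(wrapped_key)
print(Fernet(recovered_key).decrypt(ciphertext))
```

The weakness is not in the mathematics; it is that the escrow key exists at all, and keys can be copied, compelled, or stolen.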

In rough form only, the cryptographers’ argument resembles that of artificial idiocy advocates. The white supremacist gangsters and other intrusive agencies imagine that cryptographers could accommodate their demands if they only wanted to. Cryptographers insist that the gangsters and other agencies are demanding a contradiction and refuse.

But cryptography is available to everyone, including criminals and terrorists. No one can claim that cryptography has shown no sign of harm; the potential for harm is obvious. And the mathematics that goes into cryptography excludes the vast majority of people, politicians included, from understanding the technology. It would appear that, on Heffernan’s argument, cryptographers too should be compelled to comply (to all our detriment).

My argument against artificial idiocy relies on statisticians’ warnings, repeated until well past the point of blue faces, against inferring relationships from mere correlations. Artificial idiocy, I promise, will generate a catastrophically wrong answer that will cause great harm.[3] It will do so because it depends on that very fallacy, treating correlations as if they were relationships, in order to function at all. (Hint: self-driving cars rely on artificial idiocy.)
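
The statistical point is easy to demonstrate. Below is a minimal sketch, assuming numpy is available; the ice cream and drowning figures are invented purely for illustration. The two series never influence one another, they merely share a seasonal driver, yet their sample correlation comes out close to one. A system that infers a relationship from that number has inferred something false.

```python
# Spurious correlation through a shared driver: neither series affects the other.
import numpy as np

rng = np.random.default_rng(0)
season = np.sin(np.linspace(0, 4 * np.pi, 365))  # two years of seasonal swing

# Both quantities respond to the season, and only to the season.
ice_cream_sales = 100 + 40 * season + rng.normal(0, 5, size=365)
drownings = 10 + 4 * season + rng.normal(0, 0.5, size=365)

# The sample correlation is nonetheless very strong.
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation between two causally unrelated series: {r:.2f}")
```

Reading a relationship off that number, which is all a correlation-driven model can do, is precisely the fallacy the statisticians warn about.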

That’s what separates this from the cryptography argument. The cryptographers (high technologists) are arguing against implementing a harmful contradiction and in favor of protecting privacy. The fans of artificial idiocy (also high technologists) and, by implication, of self-driving cars, are arguing for deploying a harmful, even life-threatening[4] fallacy to, among other things, put Uber drivers out of work.

  1. Margaret Heffernan, “The tech sector’s free pass must be cancelled,” Financial Times, June 28, 2023, https://www.ft.com/content/c668bf20-40ea-4eb9-8910-ab63d213a63b
  2. Margaret Heffernan, “The tech sector’s free pass must be cancelled,” Financial Times, February 28, 2023, https://www.ft.com/content/c668bf20-40ea-4eb9-8910-ab63d213a63b
  3. David Benfell, “Our new Satan: artificial idiocy and big data mining,” Not Housebroken, April 5, 2021, https://disunitedstates.org/2020/01/13/our-new-satan-artificial-idiocy-and-big-data-mining/
  4. Jacob Comer, “State Police: Tesla on autopilot crashes into stopped truck on Pa. Turnpike,” Pittsburgh Post-Gazette, June 26, 2023, https://www.post-gazette.com/news/transportation/2023/06/26/state-police-tesla-autopilot-crash-truck-interstate-freightliner/stories/202306260060; Russ Mitchell, “Huge Tesla data leak reportedly reveals thousands of safety complaints. 4 things to know,” Los Angeles Times, May 26, 2023, https://www.latimes.com/business/story/2023-05-26/tesla-autopilot-alleged-data-breach-leak; Russ Mitchell, “San Francisco’s fire chief is fed up with robotaxis that mess with her firetrucks. And L.A. is next,” Los Angeles Times, June 22, 2023, https://www.latimes.com/business/story/2023-06-22/san-francisco-robotaxis-interfere-with-firetrucks-los-angeles-is-next; Faiz Siddiqui and Jeremy B. Merrill, “17 fatalities, 736 crashes: The shocking toll of Tesla’s Autopilot,” Washington Post, June 10, 2023, https://www.washingtonpost.com/technology/2023/06/10/tesla-autopilot-crashes-elon-musk/; Kevin Truong, “San Francisco Officials Make Last-Ditch Effort To Block Robotaxi Deployment,” San Francisco Standard, June 2, 2023, https://sfstandard.com/transportation/san-francisco-officials-make-last-ditch-effort-to-block-robotaxi-deployment/; Kevin Truong, “San Francisco Transit Chief Blasts Cruise, Waymo Robotaxis as Unsafe: ‘Race to the Bottom,’” San Francisco Standard, June 7, 2023, https://sfstandard.com/transportation/san-francisco-transit-chief-blasts-cruise-waymo-robotaxis-unsafe/
