The US Legal System is Being Hacked
Once you understand algorithms, the US legal system starts to make more sense. Or maybe nonsense. In the neutral algorithmic terms of information flow and security, the US court system is being “hacked.” In a wide algorithmic sense, “hacking” is what happens when a functioning system is made to malfunction by inputs arriving where they don’t belong.
So, when I say that the US court system is being hacked, I mean that it is making rulings which are legally true, yet against the intent of the law. Too little common sense and too much technicality — legalism run amok — is tying the Law in knots. Informational concepts like data compression (reducing the bits needed to represent data) and legibility (or “legal cognizability,” the court’s authority to try a case without prior approval) explain how such strategies work and how they can be stopped.
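Compression, in the shrink-the-bits sense, is easy to see in action. Here is a minimal Python sketch (the clause text is invented for illustration) showing how redundant boilerplate collapses to a fraction of its size, much as a contract collapses a living relationship into a few bytes:

```python
import zlib

# A hypothetical boilerplate clause, standing in for any legal text.
clause = ("The party of the first part agrees to indemnify and hold "
          "harmless the party of the second part. " * 20)

original = clause.encode("utf-8")
compressed = zlib.compress(original)

print(len(original))    # size of the full text in bytes
print(len(compressed))  # far fewer bytes: the redundancy is squeezed out
```

The compressed bytes are perfectly faithful to the text, yet unreadable without the machinery that decodes them; a contract is similarly faithful to an agreement, yet unreadable without human context.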
On January 5, I sat in a US federal courtroom, hearing arguments in three cases which will determine the future of topics ranging from state violence to digital damage to children. These appeals were heard in the US Court of Appeals for the Ninth Circuit, one of the highest courts in the country.
All three cases demonstrate how the Law can be hacked. The culprits are compressed, non-human representations of human activities, such as contracts or disclosures, which assume for themselves the power of human judgement. But laws need context and live interpretation, which comes from humans and not just other laws. The Founding Fathers had good reason to insist that only human beings be the judges and jury.
Dickens’s great novel Bleak House described a particular legal arena in Victorian England called “Chancery,” a kind of probate court gone rogue. A chummy network of lawyers and chancellors (judges) would pay themselves out of the estates they were supposed to administer, often draining the money entirely. Chancery was a travesty of justice, a perfect example of self-funded administration run amok.
Through a narrow historical lens, Chancery began around the year 1000 as a royal document-issuing office. As with most administrative overgrowth, by 1400 Chancery had expanded beyond mere document issuing to providing “fair relief” to petitioners in court. Over a few hundred years the office became so parasitic that it was abolished in 1872, twenty years after Dickens wrote. Through a wide algorithmic lens, Chancery illustrates the mathematical process of “leading indicator dependency,” a concept that explains the tendency of any learning system to chase quick rewards and ignore long-term costs.
By blurring historical detail, this wide algorithmic lens covers a lot. On the one hand, leading indicator dependency can explain how creatures as simple as bacteria can be tricked into self-destructive behavior. In a strict sense, the motor systems of those creatures have been hacked. On the other hand, leading indicator dependency can also explain how systems as complex as Chancery evolve to exploit and defend their resource streams.
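Leading indicator dependency is easy to simulate. Below is a toy sketch, not a model of any real court, with all payoffs invented for illustration: a greedy learner scores two options by the rewards it can see. When the delayed cost is invisible to it, the learner locks onto the quick payoff and loses overall.

```python
import random

# Invented payoffs: a "quick" option that pays now but costs later,
# and a "patient" option that pays off only in the long run.
IMMEDIATE = {"quick": 1.0, "patient": 0.0}
DELAYED = {"quick": -3.0, "patient": 2.0}

def run(sees_delayed_cost: bool, steps: int = 1000) -> float:
    """A greedy learner that scores options by the rewards it can see."""
    value = {"quick": 0.0, "patient": 0.0}
    total = 0.0
    for _ in range(steps):
        if random.random() < 0.1:                # occasional exploration
            choice = random.choice(list(value))
        else:                                    # otherwise pick the current favorite
            choice = max(value, key=value.get)
        visible = IMMEDIATE[choice]
        if sees_delayed_cost:
            visible += DELAYED[choice]
        value[choice] += 0.1 * (visible - value[choice])  # learn from what it sees
        total += IMMEDIATE[choice] + DELAYED[choice]      # the world charges everything
    return total

random.seed(0)
myopic = run(sees_delayed_cost=False)     # hooked on the leading indicator
farsighted = run(sees_delayed_cost=True)  # accounts for the delayed cost
print(myopic, farsighted)
```

The myopic learner ends deeply in the red while the farsighted one profits, even though both faced exactly the same world; the only difference is which rewards reached the learning loop.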
Chancery gained traction by creating and enforcing ever more specific contracts and technicalities which overrode common sense. That is, by creating and enforcing various minute Letters of the Law, Chancery collectively overwrote the Intent of the Law.
Now the same thing is happening in the US.
We can understand this new hacking in terms of old hacking. Hacking the Law and hacking computers are similar, because computers and laws have similar structures, rules and loopholes. For example, software is organized in hierarchies — minutiae atop foundational meta-categories, subclasses atop superclasses — while the law similarly stacks local jurisdictions atop county, state and federal law, all on top of English Common Law. To decide what information to pay attention to, computers use protocols, handshakes, private keys and so forth, while the law uses standing, jurisdiction, appellate process and such. To keep information compartmentalized, computers use address space, kernel space, sandboxes and user space, while the law tracks decisions, reasoning and precedents.
In both cases, once a bad decision becomes a precedent, it can spawn similar decisions, perpetuating itself. We know that in computers, security holes allow viruses, worms, malware, kernel hacks, data breaches and countless other named and un-named ways to make the computer do what it shouldn’t. We should expect the Law to be similarly hackable.
Most importantly, software and the law share a common weakness: they’re both built on discrete categories, not the flowing real numbers of Nature. Nature has no sharp-edged borders anywhere. Made-up borders give categories, symbols and even logic an artificial certainty which doesn’t hold up in real life. For example, in a computer, a single-bit error can crash the kernel; in politics, a constitutional ambiguity can incite revolution. Nervous systems aren’t so brittle, being continuous in space and time to match the world they live in.
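That brittleness can be demonstrated in a few lines. This Python sketch flips a single bit in the machine encoding of an ordinary number (the “speed limit” value is just a placeholder) and the result lands hundreds of orders of magnitude away:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 encoding of a double."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    (as_float,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
    return as_float

speed_limit = 65.0
corrupted = flip_bit(speed_limit, 62)  # flip the top bit of the exponent
print(speed_limit, corrupted)  # 65.0 versus a number smaller than 1e-300
```

One bit out of sixty-four changes everything, because the encoding is discrete and hierarchical; a sensor reading 65.0 that drifts by one part in a million would barely register.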
But even with the lubrication of natural bandwidth, Nature has hacking too. Hundreds of millions of years ago there were flying insects whose eyes saw specific colors, as well as plants whose pollen needed transport. To lure and reward the insects for transporting pollen, plants evolved special appendages and colors to tickle insects’ visual systems. We call those attention-grabbing innovations “flowers.”
In Nature, every kind of lure or camouflage — there are so many! — counts as an example of hacking. Humans are particularly vulnerable to lures, especially when you consider how we hack ourselves with the things we make and the things we like.
The three appeals heard by federal judges (and overheard by me and friends) each recapitulate these features of hacking. Here they are:
In a more chilling example, the attorney objected to the court’s requirement that crowd-control officers give audible warning to people before firing on them with weapons. The court wanted people to be able to avoid harm, but the attorney said that determining audibility was subjective, being “up to the whim of the crowd.”
This was based on the rationale that upon receiving the email, the customer should have investigated and quit using the product. Upon hearing this argument, Judge Nguyen looked astonished and said, “I get thousands of emails a day, I could never read them all!” Exactly: the law contradicts itself. On the one hand, people are legally obligated to read every email. On the other hand, it is impossible to do so.
That smarmy background is necessary to appreciate the arrogance and cluelessness of the company’s following legal claims. Because a kid used the software at school, she could have read its legal Terms and Conditions. Because the parent did not pull the kid out of school, they implicitly accepted those terms. Because those terms ban lawsuits (again in favor of arbitration), this lawsuit alleging the product causes harm cannot be heard in US court. Now the parent and kid have no rights to rectify the harm, or even acknowledge it exists. The contract is so powerful, the instant your eyes behold its pixels, your rights evaporate.
Among the earliest written laws in human history is Hammurabi’s Code, a collection of Babylonian laws. It outlined economic, family, criminal and civil law — in other words, how humans interact. Contracts began in much the same way: as written records of distinct human relationships. That is, as compressed representations. But although the contract was on paper (or clay), real live humans had to witness, write and interpret it. The paper contract marked a live handshake or promise.
When the Founding Fathers wrote the US Constitution, they had human hardware in mind: in-person votes, public speech on soapboxes, printing presses and trials in which the accused faces the accuser before twelve attentive jurors. All of those in-person interactions, micro-expressions and nano-gestures provide the high-bandwidth validation of reality which any nervous system needs. Informationally, there is literally a million-fold difference between the bandwidth of a contract (a few thousand bytes of fixed text) and a sensory system processing real life (megabytes per second).
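That million-fold figure is a back-of-the-envelope estimate, and it checks out under assumed round numbers: a contract of a few thousand bytes against a sensory stream of a few megabytes per second.

```python
# Assumed figures for a rough estimate, not measurements.
contract_bytes = 5_000                # ~ a few pages of fixed text
sensory_bytes_per_second = 5_000_000  # ~ 5 MB/s of raw sensory input

seconds_in_hearing = 1_000            # roughly seventeen minutes in court
total_sensory = sensory_bytes_per_second * seconds_in_hearing

print(total_sensory // contract_bytes)  # → 1000000
```

By this rough accounting, a juror takes in a contract’s worth of information every millisecond, and a million contracts’ worth in a seventeen-minute hearing.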
At first a contract couldn’t stand on its own, apart from the person who signed it. In case of disagreement, the contract’s counterparties could meet in person in court, and real people could decide whose interpretation was right. All that is changing fast.
One tipping point was the invention of the “corporation,” a fictional entity which has the same rights as a person, but is really just a set of contracts absent of heart or feeling. Once a non-human thing could have the (human) power to own and enforce a contract, it was only a matter of time before those fake-human entities also found ways to make The Law bend their way. Corporations began following written contracts more and following social contracts less. At the time of the Founding Fathers all business entities were actual people with families and opinions. Correspondingly, the main enforcement pressures were human: social contracts, social shame and threats of prison. Nowadays most businesses are abstract clouds of text with few identifiable owners and little human sense.
Governments bear equal blame for the accretion of nonsense. Once, governments merely collected taxes. Now, like administrations everywhere, governments create clouds of requirements as they try to exert more control over humans while spending less human effort of their own. The result is too many rules: each separately followable in principle, but collectively overwhelming. Paperwork is a huge help in making rules, because paperwork stays put and can be validated. A test can stand in for understanding, a certificate can stand in for competence, a waiver or disclosure can stand in for permission.
Unfortunately paperwork isn’t paper any more. Compared to paper, electronic records are cheaper to broadcast in bulk, easier to lose, easier to fake and easier to use against you. And unlike paper and ink, electronic bits have no physical, testable trace of truth, and thus no trustworthiness. With paper, one often used “certified mail” or “process servers” to prove a message arrived and was seen. Now it can be enough to merely claim an email was sent based on a database entry, absent any other evidence it was seen or even arrived. But electronic bits tend to win for the simple reason that administrators receive the savings while humans bear the costs.
To be sure, electronic technology is technically neutral, at least until weaponized for gain. But that’s happening. The formerly neutral field of “user interface,” or human-computer interaction, has the active sub-field “adversarial design.” Adversarial design produces adversarial interfaces, which use persuasive technology (pixels, colors) to hack a user’s decisions against the user’s interests. The law recognizes the user’s decisions as binding, but does not notice the active deceptions which spurred them.
The worst innovation is automatic consequences. Now that machines can both record and execute, automated punishments (like red-light cameras) will be more prevalent, and will serve as precedents for even harsher auto-punishments.
Idiotic and/or dangerous rules and mandates are common in my home state of California, where attorneys craft so many laws. Each example of stupid legalism is ultimately a case of “technicality beats reality.”
Speaking of driving, Californians across the state’s cities are seeing more and more self-driving cars. Self-driving cars are allowed to operate if a human signs paperwork taking responsibility for anything that might go wrong. But no human nervous system can move fast enough to take over driving the instant an autopilot abdicates. The waiver is merely a way to shift blame away from the car company and toward the driver, nothing more.
I am not an attorney, so I don’t know if the following ideas make legal sense. And I am not a politician, so I don’t know if they are politically feasible either. But as a lifelong engineer and scientist I know they would have the right effect on law anywhere in the world.
“Duty of care” (i.e., do not harm) should be expected of all products of all kinds, especially online products. Already, online products kill thousands of people — social media is known to have a negative effect on mental health. Such products hide out in the legal blind-spot between the immunity of so-called “publishers” and society’s blindness regarding informational toxins.
Humanity’s grand tragedy is twofold. On the one hand the laws of modern society are ever more mismatched to actual human function, and create ever more human dysfunction and misery. On the other hand the Law has always been and could only ever be driven by actual humans, its sharp edges smoothed by native human bandwidth. The Law hurts us, yet it still needs us.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.