Carlos DeLuna died on December 7, 1989. Last meal: declined to eat. Last words: “I want to say I hold no grudges. I hate no one. I love my family. Tell everyone on death row to keep the faith and don’t give up.” He’d been convicted of killing Wanda Lopez in a gas station robbery. An eyewitness flagged DeLuna in a photo lineup at the Corpus Christi Police Department.
Lineups were standard at the time. Still are, some places. Sometimes they’re in person, sometimes with a book of mugshots. Works the same either way. Eyewitness looks at the lineup. “Which of these faces is the one you saw at the scene of the crime?” Eyewitness picks the one. We’ve got the guy.
Well, we’ve got a guy. A guy is often the guy. That guy. But sometimes it’s just some guy.
Today courts are starting to understand this. Human memory is imperfect, and malleable. But in DeLuna’s time, the idea that a sincere eyewitness could be wrong about their own memory was absurd.
Nobody wanted to attack DeLuna for no reason, to take away his freedom and ultimately his life. But that’s what happened, and Columbia University researchers later discovered that he was almost certainly wrongly convicted. DeLuna had been nearby, and closely resembled the perp. It was no attack, no conspiracy against DeLuna. There was no bad guy, no big plan. Just one big mistake.
A few weeks ago, a man got a letter from the California Department of Justice. Complying with a new law that went into effect in 2018, he’d registered a gun with the state and sent in photos of the gun as he’d been asked to do. Here’s what happened next, as described by a store owner who knows the man (lightly copy edited):
Six DOJ agents from the San Diego office went to his home and asked to inspect the AR pistol. He was not home, so his father took the business card and gave it to his son. He called, and the DOJ asked if he would bring the firearm down so they could inspect it.
The next day he went to the San Diego DOJ office with his fixed-mag AR pistol. He presented it with the upper and lower unattached just to be safe. The DOJ agent informed him the gun was in violation. Giving him a paper with assault weapon regs [with the following violations highlighted]:
First: Rifle under 30″ (not true since it was registered as a pistol)
Second: Handguard and flash hider on pistol (Now the agent is recognizing it as a pistol, but this doesn’t apply since it’s fixed-mag.)
Third: Fixed magazine capable of holding more than 10 rounds. (Okay, so now he recognizes it as a fixed magazine but this would not apply because it is a CompMag which only holds 10 rounds.)
Lastly the DOJ agent says it is an unsafe handgun. (Well, he built it as an exempt bolt action and later converted it — DOJ would have to prove he did not.)
When none of the above works for the DOJ agent, he tells my customer he is in violation because there was not a DOJ-issued serial number engraved on the gun prior to January 1, 2015.
2015? At this point my customer is worried because he has never heard anything about 2015. That’s because no one has.
The San Diego DOJ agent tells him he has a choice: surrender the firearm or go to jail.
He of course is concerned; he has no idea if he has violated this mystery 2015 law and surrenders the firearm. Upper, lower, they take everything.
We often frame gun rights as a reduction in personal attack surface. Access to firearms helps people defend themselves against hostile actors (robbers, a jealous ex, an oppressive police force, etc.), reduces the weak’s vulnerability to the strong, and so on. This goes in the same bucket as getting a big dog or a house in a low-crime neighborhood — to reduce your attack surface is to eliminate footholds for attack. So people recognize the corollary: to reduce gun rights is inevitably to open footholds for attack, i.e., to increase one’s attack surface. This is a straightforward mental model, so it’s what attracts most of the attention.
Mistake surface is a second-order danger, so it goes unnoticed. People worry what “the government” or “the police” will do. But in real life, those are groups of individual humans. “The police” don’t pull you over or knock on your door. Pete does. Pete has a few years on the job, but he just moved to town and is still learning the department’s computer system. He spilled some marinara on his shirt at lunch. It’s not “the police” that decide to arrest you. Pete decides. And he has about 5 minutes to make the call, because he’s supposed to pick his kid up from soccer practice by 6pm.
The attack surface model is that “the government” will consciously and maliciously find that you’ve broken the law, and they’ll come take your guns. The mistake surface model incorporates the actually much likelier event that any individual whose radar you happen to cross will think you broke the law. Compared to the attack surface model, that multiplies the number of false positives by the total number of individuals who have to make a decision about you. In other words, 1-2 orders of magnitude more false positives.
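The multiplication is easy to see with a little arithmetic. Here's a minimal sketch — the per-touchpoint error rate is a number I've assumed for illustration, not one from any study — showing how the odds of being wrongly flagged grow with the number of individuals who each get to make a call about you:

```python
# Hypothetical illustration (the 1% figure is an assumption, not data):
# expected false positives scale with the number of individuals who make
# a decision about you ("touchpoints"), even when each one is usually right.
P_MISTAKE = 0.01  # assumed: 1% chance any one person misreads the law or your gun

def p_wrongly_flagged(touchpoints: int) -> float:
    """Probability that at least one of `touchpoints` individuals flags you in error."""
    return 1 - (1 - P_MISTAKE) ** touchpoints

# Attack model: one deliberate decision. Mistake model: many independent ones.
for n in (1, 10, 100):
    print(f"{n:>3} touchpoints -> {p_wrongly_flagged(n):.1%} chance of a false flag")
```

With these assumed numbers, one touchpoint gives a 1% chance of a false flag, ten give about 9.6%, and a hundred give about 63% — the one-to-two orders of magnitude described above.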
And your mistake surface creates one more danger multiplier: it doesn’t matter how wrong Pete is about the law. That’s the key difference between your attack surface and your mistake surface. An attack requires an attacker, but a mistake is self-fueling. What Pete does for free (and actually at negative cost if he arrests you and goes into overtime) on a hunch, it will take you thousands of dollars and multiple court dates to undo.
Game theory 101: if one party can make unlimited small mistakes for free and the counterparty has to pay to fix them, who’s going to win the game?
You have a lot of exposed mistake surface in court, too. The way courts work is that Pete makes a decision, you get run through the entire trial process, and then they decide if Pete is right. There is no escape valve labeled, “But Pete was wrong about the technical details of the law and/or my gun.” That comes at the end. You’re going through the process and paying the bills no matter what. The only variable is whether it ends with you going to prison.
What happens to Pete? His boss will say, “Hey Pete, remember that guy you arrested with that machine gun (Narrator: It was not a machine gun.) last year? We sent an email about those guns, they’ve got some weird-ass new thing in the checklist now when you’re seeing if it’s legal.” and Pete will say, “Ok, thanks, I’m gonna read that.” End of what happens to Pete.
So what’s the forcing function that emergently corrects this system over time? The amount of time before Pete’s kid needs to get picked up from soccer.
People worry that registration increases the attack surface that gun rights expose to attackers (states, police, robbers, or whatever form of attack a person thinks about). And that’s fair. But statistically, mistake surface — the number of touchpoints where a mistake by a single individual can ruin your life — can be expected to be the more frequent source of problems.
This entire class of problems goes away if Pete doesn’t know about your gun in the first place. Reducing touchpoints to zero reduces mistake surface to zero. (Some raise a counterargument here: “Why reduce touchpoints to zero? Just fix it by hiring better Petes.” Two problems: 1) It’s a systemic problem due, as described above, to the lack of a forcing function that selects for good outcomes and against bad outcomes. So the system by its nature creates Petes. 2) With scale, any problem with false positives only gets worse. And the whole point of a registration system is to operate at massive scale.)
None of this is to dismiss the idea that jackboots are a danger. The point I’m underlining, though, is that it’s important not to focus on that threat so intently that we forget the damage that mistakes can do. Statistically, mistakes will be the more common of the two cases. So to ignore mistake-damage is to understate the dangers of registration quite dramatically.
Mistake surface flies under the radar for the same reason that Andrew Ross Sorkin’s credit card surveillance idea committed an embarrassingly innumerate base rate fallacy — the danger is subtle! But no less tangible.
In the past two years I’ve been asked for more and more recommendations about matters of self defense and personal security, especially from urban, educated professionals who aren’t necessarily comfortable with firearms and who are definitely not used to thinking about their fellow citizens as potential threats to themselves and their families. For a variety of reasons, both valid and vastly overhyped, ordinary people are increasingly concerned for their physical safety.
There’s a standard talk that I give my friends and family on this topic, and I want to share it here, because it’s critically important that folks who are new to this topic begin in the right place — not with a list of what things they should buy or skills they should acquire, but with a big-picture sense of how they should approach this entire topic of personal security.
Given the aforementioned demographic of people who ask me about personal security, there’s an analogy they’re usually already familiar with that’s a near-perfect fit for the problem of keeping yourself safe: personal finance.
Both self-defense and finance share the peculiar quality of information asymmetry: they’re complex fields where experts and insiders know a ton that normal people simply can’t grasp without becoming experts themselves. On a practical level, that manifests itself in five important similarities that are worth pondering as you approach this topic for the first time.
Most professional people who take some care with portfolio advice are aware that while there are general rules of thumb that work for most people, the particulars of their own situation — age, income, retirement expectations, geography, and health needs, to name a few — matter a great deal for long-term financial planning.
It is exactly this way with personal security. Simple searches online will quickly net you some very general advice that’s worth heeding for most people, and I’d encourage you to evaluate such advice using the caveats and cautions laid out in the rest of this article. But detailed advice on equipment and training always has to be evaluated carefully with an eye to your specific situation.
Take the topic of guns — they are absolutely not for everyone. Some people live in states or countries where the legal risks and burdens of gun ownership are too high; others have home situations that guns simply do not belong in; and others will just never be comfortable with firearms. For these people, some mix of martial arts training, less-lethal weapons like mace or tasers, and finely honed situational awareness might be the best option. Or, maybe the best they can do is get a large dog from a reputable breeder, and install some sturdy locks and ample outdoor lighting on their property.
If you’ve decided that guns are part of your security picture, then you have a whole host of specialized considerations to contend with. Will you be carrying concealed? How will you secure a firearm in your home? What guns are legal in your locality? Under what precise conditions can you legally use deadly force so as to minimize civil and criminal liability (this varies greatly by state in the US)? How often can you train with the weapon? And so on.
You can see how quickly the process of identifying your needs and then figuring out how to meet them gets complicated, which brings me to my next recommendation.
If you can afford it or have access to the right person in your network, get professional help for as many aspects of your personal security as you can, just like you would if you were doing estate planning, buying life insurance, or investing for retirement.
The need for quality help is especially acute if you buy a firearm — you must seek out qualified instruction from an instructor who’s credentialed by a widely regarded organization, whether that’s an NRA certification or something else. But even absent any gun-related considerations, consider engaging a local private security professional, of the type you can find in most cities, to walk you through the basics of keeping yourself safe in your area without exceeding the financial and physical resources you can reasonably dedicate to this. Private investigators are also a good resource here if you can’t find a local private security professional.
Note: Beware advice from cops and former military, because not all of these people have quality training and directly relevant security experience. Just because someone had some weapons training for their day job doesn’t make them an expert on keeping you safe. Just as you would if you were interviewing a financial planner, check credentials and certifications, and ask tough questions about what this person actually did in their previous role.
For every Warren Buffett there’s a Bernie Madoff out there looking to scam the unwary out of their hard-earned money. And as rife with con artists as the world of money men is, the “tactical” and martial arts worlds are even worse.
You can’t swing a dead cat on the internet without hitting a former super-elite special forces operator who did three tours and was personally responsible for killing 15 of the last 20 guys we’ve claimed are “Al Qaeda’s number 3 operative.” Or so they’ll tell you.
If you go to a Krav Maga gym or BJJ school, ask about the instructor’s qualifications and lineage. Ask to see the framed copies of certifications that should be on the wall somewhere. And if those copies are in a foreign language, take cellphone photos and run them by someone who can read them. (I once heard of a Chinese restaurant menu being passed off as a certificate in a Kendo dojo — it’s not even the right language.)
If you’re training with guns, then you have to be extra careful. Don’t just find out if the instructor is credentialed — check up on them. Call the place where they said they trained and see if there’s a record of them training there. I regularly see YouTube videos of “trainers’” dangerous, showy stunts being circulated for criticism in some Facebook groups I’m in. Don’t get taken in.
Did you know that an AR-15 bullet was designed for wounding the enemy, and not killing them? You probably did know this, because everyone knows it, and it’s wrong.
If you want to learn about the actual science that went into the design of the AR’s 5.56 NATO round, then there are books that summarize and even reprint some of the McNamara-era Department of Defense studies that gave rise to the gun, but most folks (including too many who should know better) are content to pass along tactical folklore about wounding Viet Cong and piercing Soviet-era metal helmets.
Similarly, I often see claims about bullet calibers and “stopping power” and “rules” about attack distances that are couched in scientific terms but haven’t necessarily been scientifically evaluated. In short, as is the case with some “technical analysis” and stats and charts that personal finance gurus publish for viral clicks, just because it sounds like science doesn’t mean it’s science. As always, do your own homework.
Back in 2013, then-Vice President Joe Biden got into hot water for advising people to buy a double-barreled shotgun and fire off two rounds into the air to scare off intruders without harming them. The problem: that terrible but alarmingly common advice could get you either killed or arrested.
I can’t count the number of times I’ve heard someone offer up a Louisville Slugger stashed under a car seat as a superior alternative to a firearm for home and auto defense. This blustery bit of vigilante fan fiction is typical of the advice you find online. A bat is certainly a valid option if you’re gun-shy, or even a possible backup option if you aren’t, but superior to a firearm? No. When you, untrained civilian, whack a 250-pound attacker wearing a heavy coat in the wrong place with your mini-slugger, and he grabs it from you and goes to work on you with it, you’ll spend your final moments wishing you had a gun.
Just like you don’t jump on random hot stock tips from people on the internet or friends and family (at least, I hope you don’t), avoid these simple-sounding self defense tips, where somebody confidently lays out the One Cool Trick They Don’t Want You to Know for keeping yourself safe.
For the past couple of months, we’ve maintained a private spreadsheet of important active cases at the circuit court or Supreme Court level. We just realized it could be a useful community resource, because no similar snapshot seems to exist anywhere else. So it’s now public at the link below. Anyone logged into Notion can comment — let us know if there are any cases or details we’ve missed.
For the familiar points of disagreement, most people (on all sides) resort to first-order thinking and low-effort culture war agitprop. That provides short-term validation, but it doesn’t spread knowledge and it’s not persuasive to those who disagree.
To persuade and to get smarter ourselves, it’s important to focus on systems-level thinking. Here’s a list of highly shareable, high-quality answers to the issues that frequently come up around gun rights.
A Guardian study showing that murders in the US are extremely concentrated in specific areas and demographics, which suffer rates of violence an order of magnitude higher than the median
Alan Korwin answering the question “Why do you need an AR-15?”
Korwin also level-headedly addresses the subtle inherent flaws with the idea of gun registration (he doesn’t use the term “base rate neglect”, but nicely encapsulates the concept):
The Feed, “Gays & Guns”:
BuzzFeed, “Grandmas Shoot Guns for the First Time”:
Keanu Reeves at the range:
The New York Times’ Andrew Ross Sorkin published an article on Christmas Eve arguing that credit card companies should build models that take spending activity as input and return “probability that this customer is planning a mass shooting” as output.
An excerpt from the crux of it:
A New York Times examination of mass shootings since the Virginia Tech attack in 2007 reveals how credit cards have become a crucial part of the planning of these massacres. There have been 13 shootings that killed 10 or more people in the last decade, and in at least eight of them, the killers financed their attacks using credit cards. Some used credit to acquire firearms they could not otherwise have afforded.
Those eight shootings killed 217 people. The investigations undertaken in their aftermath uncovered a rich trove of information about the killers’ spending. There were plenty of red flags, if only someone were able to look for them, law enforcement experts say.
Sorkin is well-known for having used his NYT column in the weeks after the Parkland massacre to successfully lobby Citigroup and Bank of America to fire their business customers who sell standard-capacity magazines and other common touchstones. So people on all sides reacted predictably to his new article.
Some people loved the idea:
This is a tremendously important story by @andrewrsorkin . Banks already have to report suspicious transactions; it would be easy to also report aberrant gun purchases by a possible mass shooter. This could save lives. Cc: @KLemkau https://t.co/LK1A8avdUC
— Joe Nocera (@opinion_joe) December 24, 2018
If you know even a little about how companies use personal data to make uncannily precise guesses about our future behavior, you know it would be child’s play to flag potential mass killers.
They do it to sell soap flakes. They can do it to save lives. https://t.co/tlq9Ty1fyN
— Nick Confessore (@nickconfessore) December 24, 2018
Civil liberties advocates and gun rights advocates were, as a rule, less sanguine:
Wanting the millions of Americans who purchase firearms on credit to be reported to the government as potential terrorists is disturbingly authoritarian. It’s sad to see such an illiberal take in the NYT. pic.twitter.com/t4mFznwhn0
— Stephen Gutowski (@StephenGutowski) December 24, 2018
The philosophical disagreements are well-known to anyone who has studied them, and I won’t rehash them here beyond saying that Jonathan Haidt’s moral foundations theory captures it well (particularly the theory’s care/harm, authority/subversion, and liberty/oppression axes).
That can be a fun discussion to have, but it’s not one that’ll teach us anything new. Instead, we’re going to examine something that almost everybody missed: the straightforward innumeracy of Sorkin’s article.
Base rate neglect is a simple and counterintuitive idea: the chance that a positive test result is actually false is determined both by the test’s accuracy and by the prevalence of the condition that the test is looking for.
Wikipedia explains, using a hypothetical terrorist detector machine (lightly edited here for brevity):
In a city of 1 million inhabitants let there be 100 terrorists and 999,900 non-terrorists. Thus, the base rate probability of a randomly selected inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance camera and automatic facial recognition software.
The software has two failure rates of 1%:
- The false negative rate: If the camera scans a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
- The false positive rate: If the camera scans a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.
Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words, what is P(T | B), the probability that a terrorist has been detected given the ringing of the bell? Someone committing the base rate fallacy would infer that there is a 99% chance that the detected person is a terrorist.
The fallacy arises from confusing the natures of two different failure rates. The “number of non-bells per 100 terrorists” and the “number of non-terrorists per 100 bells” are unrelated quantities. One does not necessarily equal the other, and they don’t even have to be almost equal. To show this, consider what happens if an identical alarm system were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100 non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore, 100% of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The “number of non-terrorists per 100 bells” in that city is 100, yet P(T | B) = 0%. There is zero chance that a terrorist has been detected given the ringing of the bell.
Imagine that the first city’s entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm—and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So, the probability that a person triggering the alarm actually is a terrorist, is only about 99 in 10,098, which is less than 1%, and very, very far below our initial guess of 99%.
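The arithmetic in that paragraph is short enough to check directly. A sketch of the Wikipedia example in code:

```python
# The Wikipedia terrorist-detector example, worked in code.
population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists        # 999,900

tpr = 0.99  # bell rings for a terrorist 99% of the time
fpr = 0.01  # bell rings for a non-terrorist 1% of the time

true_alarms = terrorists * tpr                  # ~99 real detections
false_alarms = non_terrorists * fpr             # ~9,999 innocent people flagged
p_terrorist_given_bell = true_alarms / (true_alarms + false_alarms)

print(f"P(T|B) = {p_terrorist_given_bell:.4f}")  # ~0.0098: under 1%, not 99%
```

Ninety-nine real detections drown in roughly ten thousand total alarms, which is the whole fallacy in one line of division.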
We can apply the same math to estimate a false-positive rate for Sorkin’s mass shooter detector. We’ll make a few assumptions about inputs, in each case being maximally generous:
Running these inputs through the math above, we find:
Remember also that our input numbers are unrealistically generous. False positive and false negative rates around 1% are achievable only in tightly defined circumstances: physics models, narrow subsets of medical imaging, and the like.
In predicting human behavior, 1% error rates are unheard-of. Companies like Facebook get paid billions of dollars to build these systems, and they hire armies of elite PhDs with unlimited resources to do it — and they’re regularly on the front page of Sorkin’s own newspaper for messing it up. In the real world, double-digit error rates abound.
An automatic response might be, “62,499 — or 10x that many, or 100x — false positives will be worth it if the system catches one true positive.” But that elides the core problem that we set out to solve: you don’t know ahead of time which one of the 62,500 (or 10x that many, or 100x that many) is the real mass murderer. And there is no system that can sort through that haystack without running out of resources (having first consumed all the resources that the system was built to save in the first place). Human reviewers can analyze the ML model’s flags, but at the cost of increasing bias and false positives — worsening the very problem they’re trying to solve.
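To make the scale concrete, here is a hedged sketch of the detector arithmetic. Every input below is an assumption of mine, chosen so that a 1% false positive rate lands near the 62,500-flag figure discussed above; they are not claimed to be the article’s exact inputs.

```python
# Hedged sketch of the mass-shooter-detector arithmetic. All inputs are
# illustrative assumptions (picked to be consistent with the ~62,500 flags
# discussed in the text), not figures from the article itself.
screened = 6_250_000   # assumed: people whose card activity gets screened
shooters = 1           # assumed: one actual plotter in the screened pool
fpr = 0.01             # assumed 1% false positive rate (very generous)
fnr = 0.01             # assumed 1% false negative rate (very generous)

false_flags = (screened - shooters) * fpr    # ~62,500 innocent people flagged
true_flags = shooters * (1 - fnr)
precision = true_flags / (true_flags + false_flags)

print(f"innocent people flagged: {false_flags:,.0f}")
print(f"P(shooter | flagged) = {precision:.2e}")
```

Even with these optimistic error rates, a flagged person has on the order of a one-in-sixty-thousand chance of being the real threat — and that is the haystack the reviewers would have to sort by hand.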
Base rate neglect is counterintuitive, but famous enough in the ML community that well-informed people expect a prominent reporter to know about it before writing a NYT feature.
This would be like writing an article criticizing Amazon for burning jetfuel instead of teleporting packages — interesting to debate, but surreal to do so without ever mentioning that the technology for teleportation doesn’t exist.