I think everyone can relate to the heavily promoted idea that removing things makes you lighter, such as “shedding a few pounds”. And maybe a lot of people can relate to reducing exposure, such as “keeping your head down”.
That’s why I find it curious to read in Behavioral Scientist a claim that people “neglect” subtraction.
The problem is that we neglect subtraction. Compared to changes that add, those that subtract are harder to think of. Even when we do manage to think of it, subtracting can be harder to implement.
The basis of this article is a cute story about parenting.
An epiphany in my thinking about less came when my son Ezra and I were building a bridge out of Legos. Because the support towers were different heights, we couldn’t span them, so I reached behind me to grab a block to add to the shorter tower. As I turned back toward the soon-to-be bridge, three-year-old Ezra was removing a block from the taller tower. My impulse had been to add to the short support, and in that moment, I realized it was wrong: taking away from the tall support was a faster and more efficient way to create a level bridge.
This says to me right away that people are naturally inclined to subtract. It’s perhaps second nature. Don’t want to get your hand wet? Subtract it from exposure to rain.
However, instead of following that line of thinking, the author set about using a contrived Lego set to confirm that their mistake was some sort of grand mental conspiracy rather than just them being wrong: a confirmation bias experiment, if you will.
Since I had become a professor, I had been trying to convert my interest in less into something I could study instead of just ponder. […] I began carrying around a replica of Ezra’s bridge. I tried it out on unsuspecting students who came to meet with me, checking whether they would subtract, like Ezra, or add, like I had. All the students added. I also brought the Lego bridge to meetings with professors…
All their students and fellow professors thought much like they did, instead of like a three-year-old? Color me shocked.
At the heart of this experiment is the fundamental fact that a LOT of blocks had to be added to construct a bridge out of Legos in the first place. Then, at one crucial point, a single add-or-remove decision is measured, conveniently ignoring the rather significant fact that blocks had been added the entire time before that.
If I add 15 blocks and then remove one, how subtractive have I really been?
And if I draw on a collection of 100 blocks to build a 10-block structure, rather than building a 15-block one from a much smaller collection, how subtractive have I really been?
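To make that accounting concrete, here is a minimal sketch of the arithmetic. It is a hypothetical illustration of my point, not anything from the study; the move log and the subtraction_ratio helper are invented. It simply counts every move in the build rather than only the final one.

```python
# Hypothetical illustration: tally every move in a Lego build, not just the last one.
# The "moves" log below is invented to match the 15-add, 1-remove example above.

def subtraction_ratio(moves):
    """Return the fraction of moves that removed a block."""
    removals = sum(1 for m in moves if m == "remove")
    return removals / len(moves) if moves else 0.0

# 15 blocks added to raise the towers, then one block removed to level the bridge.
moves = ["add"] * 15 + ["remove"]

print(f"{subtraction_ratio(moves):.0%} of moves were subtractive")
# -> 6% of moves were subtractive
```

Counted that way, the celebrated removal is roughly 6% of the activity; the other 94% was addition.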
I’m not calling the study nonsense, as it does highlight what we already know about the need to subtract things (e.g. surface area is also targeted area; fewer features mean fewer potential vulnerabilities), but in all honesty… it reads to me as though the author is so insecure about their intelligence that they had to start a huge campaign to explain why they didn’t think of something before their toddler did.
The analysis gets really wonky and I find it underwhelming. Let’s look again at that paragraph.
The problem is that we neglect subtraction. Compared to changes that add, those that subtract are harder to think of. Even when we do manage to think of it, subtracting can be harder to implement.
Perhaps they should have subtracted a lot of words? The New York Times had a better way to describe this that probably looks familiar to everyone.
Overwriting is a bigger problem than underwriting.
But seriously, subtraction is very easy when there’s an incentive, and it’s even quite common, in some cases overused. Overwriting is a problem because it’s easier to write long form and harder to write short form. However, when you’re in a rush because of an imminent threat, or you have a much smaller vocabulary, short form gets a LOT easier. Any guesses how long a PhD thesis written by a toddler would be?
Speaking of incentives to subtract, a fair number of sites on the Internet track layoffs by companies all too eager to regularly reduce staff.
I’m disappointed these incentive and risk angles weren’t explored more by behavioral scientists, but I suspect that’s because once the author was satisfied that they weren’t the only one who had made the mistake, they settled into the comfort of published articles and collaborations proving what they believed from the start (protecting their sense of being intelligent, as opposed to understanding their own bias).