The electric carmaker posted total revenue of 22.5 billion dollars, down twelve per cent year-on-year and falling short of Wall Street’s 22.7 billion dollar estimate. Operating income plunged by forty-two per cent to 900 million dollars, marking Tesla’s second consecutive quarterly decline.
You would think the stock would be worthless by now, given it’s a car company with seriously flawed designs, flogged by a Nazi whom nobody likes.
…Tesla is “a toxic brand that is inseparable from its leader.” Quarterly profits … fell to $1.17 billion, or 33 cents a share, from $1.4 billion, or 40 cents a share. That was the third quarter in a row that profit dropped. […] Tesla shares were little changed in after-hours trading…
…we’ll have Robotaxi in half the population of the US by the end of the year. […] Investor questions begin with an inquiry about Tesla Robotaxis. Tesla noted that it expects to 10X its current operation in the coming months. The Bay Area is next, and Tesla is looking to expedite the service’s approval. As for technical and regulatory hurdles for Unsupervised FSD, Elon Musk stated that he believes the feature should be available in a number of cities by the end of the year. Tesla, however, is being extremely paranoid about safety, so Unsupervised FSD’s rollout will be very, very cautious.
What a pile of absolute bullshit.
Promising investors revolutionary scale at revolutionary speed while emphasizing safety is a combination that defies technical and regulatory reality.
It’s amazing that bald-faced lying is still a thing to prop up stock prices.
Let’s count the problems:

1. A false dichotomy between aggressive expansion and safety (claiming both “extremely paranoid about safety” and serving half the US population by year-end).
2. An appeal to extremes in promising impossible scaling (10X growth in months to reach 165+ million Americans).
3. Hasty generalization from limited current operations to nationwide deployment.
4. Post hoc reasoning that implies regulatory approval will automatically follow their timeline rather than determining it.
5. Equivocation through vague terms like “coming months” and “a number of cities” that obscure the lack of concrete planning.
6. Contradiction between needing Bay Area approval while claiming imminent national rollout.
7. Survivorship bias in focusing only on potential success while ignoring the massive infrastructure, regulatory, and technical hurdles.
8. Wishful thinking disguised as business projections, where desired outcomes are presented as inevitable results despite the fundamental impossibility of achieving such scale in the stated timeframe while maintaining the claimed safety standards.
Eight (yes, eight) flaws from the CEO of 88 who’s always late, and full of hate, the Texas fraud of no cattle and all hat.
Tesla dealer showroom after the CEO gave Hitler salutes at a political rally
Security professionals are intimately familiar with the tension between formalization and practice.
We can document every protocol, codify every procedure, and automate every response, yet still observe that the art of security requires something more. Things made Easy, Routine and Minimal judgement (ERM) depend on a reliable source of Identification, Storage, Evaluation and Adaptation (ISEA).
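To make the ERM-on-ISEA dependency concrete, here is a minimal sketch in Python. Every name in it is hypothetical and invented only for illustration; it is not a real framework, just the shape of the loop: routine, minimal-judgement handling is only trustworthy when an identification, storage, evaluation, and adaptation cycle keeps feeding it.

```python
# Hypothetical sketch: ERM operations resting on an ISEA loop.
# None of these names come from a real library; they only
# illustrate the dependency described above.

from dataclasses import dataclass, field


@dataclass
class ISEA:
    """Identification, Storage, Evaluation, Adaptation."""
    store: list = field(default_factory=list)

    def identify(self, event: dict) -> dict:
        # Identification: attach provenance so nothing is anonymous.
        return {**event, "source": event.get("source", "unknown")}

    def evaluate(self, event: dict) -> bool:
        # Evaluation: judge the event against what has been stored.
        return event not in self.store

    def adapt(self, event: dict) -> None:
        # Storage + Adaptation: remember, so future judgement improves.
        self.store.append(event)


def erm_handle(isea: ISEA, event: dict) -> str:
    """Easy, Routine, Minimal-judgement handling is only safe
    because the ISEA loop underneath it keeps learning."""
    event = isea.identify(event)
    novel = isea.evaluate(event)
    isea.adapt(event)
    return "escalate" if novel else "routine"


loop = ISEA()
print(erm_handle(loop, {"source": "login", "user": "alice"}))  # escalate
print(erm_handle(loop, {"source": "login", "user": "alice"}))  # routine
```

The point of the toy: the first sighting of an event demands escalation and judgement; only after the loop has identified, evaluated, and stored it does handling become safely routine.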
A recent essay by astrophysicist Adam Frank in Noema Magazine explores a similar tension in consciousness studies, one that has profound implications for how we think about all intelligence, both human and artificial.
The tension here is far from new. Jeremy Bentham’s ambitious attempt to create a mathematical model of ethics—his utilitarian calculus—ultimately failed because it tried to reduce the irreducible complexity of moral experience to quantitative formulas. No amount of hedonic arithmetic could capture the lived reality of ethical decision-making. His codified concept of “propinquity” was never made practical, foreshadowing the massive deadly failures of driverless AI hundreds of years later.
In sharp contrast, Ludwig Wittgenstein succeeded in understanding language precisely because he abandoned the quest for mathematical foundations while being one of the best mathematicians in history (yet not a very good WWI soldier). His practical and revolutionary language games emerged from what he called “forms of life”—embodied, contextual practices that resist formal reduction. We depend on them heavily today as foundational to daily understanding.
Frank’s central argument is that modern science has developed what he calls a “blind spot” regarding consciousness and experience. The idiocy of efficiency, a rush to reduce everything to computational models and mathematical abstractions, means something fundamental to success has been forgotten:
Experience is intimate — a continuous, ongoing background for all that happens. It is the fundamental starting point below all thoughts, concepts, ideas and feelings.
The blindness of the efficiency addict (e.g. DOGE) isn’t accidental. It’s built into the very foundation of dangerously lowering the safety bar for how we practice science. As Frank explains, early architects of the scientific method deliberately set aside subjective elements to focus on what Michel Bitbol calls the “structural invariants of experience”—the patterns that remain consistent across different observers. That baseline, a reductive approach, may drop far too low to protect against harms.
The problem emerges when abstractions are allowed to substitute for reality itself, without acknowledging fraud risks. Frank describes this as a “surreptitious substitution” where mathematical models are labeled as more real than the lived experience they’re meant to describe.
Think of how temperature readings replaced the embodied experience of feeling hot or cold, to the point that thermodynamic equations became regarded as more fundamental than the sensations they originally measured.
Meta is Fraud, For Real
This leads to what Frank identifies as the dominant paradigm in consciousness studies: the machine metaphor (meta). From this perspective, organisms are “nothing more than complicated machines composed of biomolecules” and consciousness is simply computation running on biological hardware.
And of course there’s a fundamental difference between machines and living systems. Machines are engineered for specific purposes, while organisms exhibit something far more substantive in what philosophers call “autopoiesis”—they are self-creating and self-maintaining. Meta is extractive, reductive, a road to death without a host it can feed on. As Frank notes:
A cell’s essence is not its specific atoms. Instead, how a cell is organized defines its true nature.
This organizational closure—the way living systems form sustainable unified wholes that cannot be reduced to their parts—suggests a different approach to understanding consciousness. Rather than asking how matter creates experience, we might ask how experience and matter co-evolve through embodied, symbiotic, healthy interaction with the world.
You Can’t Eat a Recipe
To understand this distinction, consider consciousness through the act of cooking a meal rather than just computation. The recipe captures the structural patterns and relationships—the “how” and “what” that can be systematized and shared.
Actual cooking involves embodied skill, responsiveness to the moment, intuitive adjustments based on how things look, smell, and feel. There’s a tacit knowledge that emerges through the doing itself.
A skilled chef can follow the same recipe as an unskilled one and produce something entirely different. Ratatouille, the animated film, wasn’t about a rat so much as about lived experience; the kind of environmental analysis that I like to call in my AI security work “compost in, cuisine out” (proving that “garbage in, garbage out” is a false and dangerously misleading narrative).
A lightning strike enlightens this animated film’s protagonist, like a Frankenstein turned chef
The consciousness-as-cooking metaphor isn’t just about following instructions—it’s about lived engagement with materials, real-time adjustments, the way experience shapes perception which shapes action in an ongoing loop. OODA, PDCA… we know these loop models of audit and assessment as fundamental to winning wars.
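As a rough sketch of what such a loop looks like in code (the names are hypothetical, not from any real OODA library, and the toy world is just a list of alerts):

```python
# Hypothetical sketch of an OODA-style loop (Observe, Orient,
# Decide, Act). Function bodies are placeholders; the point is
# that each pass feeds lived feedback into the next.

def observe(world: dict) -> dict:
    # Observe: gather raw signals from the environment.
    return {"alerts": world.get("alerts", [])}

def orient(observation: dict, experience: list) -> dict:
    # Orient: interpret signals in light of accumulated experience.
    known = [a for a in observation["alerts"] if a in experience]
    novel = [a for a in observation["alerts"] if a not in experience]
    return {"known": known, "novel": novel}

def decide(orientation: dict) -> str:
    # Decide: novelty demands attention before routine handling.
    return "investigate" if orientation["novel"] else "monitor"

def act(decision: str, orientation: dict, experience: list) -> None:
    # Act: acting changes the world *and* the actor; experience grows.
    experience.extend(orientation["novel"])
    print(decision, orientation)

experience: list = []
for world in ({"alerts": ["odd-login"]}, {"alerts": ["odd-login"]}):
    o = observe(world)
    r = orient(o, experience)
    d = decide(r)
    act(d, r, experience)
# First pass: investigate; second pass: monitor, because the
# loop has adapted.
```

Each pass through the loop changes the actor as well as the world, which is exactly what a recipe alone cannot capture.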
Frank’s emphasis on “autopoiesis” fits here perfectly. Like cooking, consciousness might be fundamentally about self-creating and self-maintaining processes that can’t be fully captured from the outside. You can describe the biochemical reactions in bread rising, but the seasoned baker’s sense of when a proper bagel is ready involves a different kind of knowing altogether.
AI Security is Misunderstood
This perspective has serious implications for how we think about artificial intelligence and its role in information security. When we treat intelligence as “mere computation,” we risk building systems that can process information but lack the embodied understanding that comes from being embedded in the world.
Everyone using a chatbot these days knows this intimately: ask about the best apple and the machine spits back the fruit when you want the computer, or vice versa.
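A toy sketch makes the failure mode plain (the knowledge table and query here are invented for illustration): a context-free lookup has no way to know which sense you mean.

```python
# Toy illustration: context-free matching cannot tell which
# "apple" a user means. Purely pattern-based lookup has no
# embodied context to draw on.

KNOWLEDGE = {
    "apple": ["a sweet pomaceous fruit", "a consumer electronics company"],
}

def answer(query: str) -> str:
    for term, senses in KNOWLEDGE.items():
        if term in query.lower():
            # No lived context: the machine just picks the first sense.
            return senses[0]
    return "no match"

print(answer("What is the best apple?"))  # always the fruit,
# even when the person asking is shopping for a laptop.
```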
Frank warns that the deceptive reductionist approach “poses real dangers as these technologies are deployed across society.” When we mistake computational capability for intelligence, we risk creating a world where:
…our deepest connections and feelings of aliveness are flattened and devalued; pain and love are reduced to mere computational mechanisms viewable from an illusory and dead third-person perspective.
In security contexts, this might mean deploying AI systems that can detect patterns but lack the critical contextual understanding that comes from embodied experience. They might follow the recipe perfectly while missing the subtle cues that experienced practitioners would notice.
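As a hedged illustration (the signature, rule name, and log lines below are all invented for this sketch), a pattern matcher follows its recipe exactly and sees nothing else:

```python
# Hypothetical sketch: a signature matcher "follows the recipe"
# but has no context. The rule and log lines are invented for
# illustration only.

import re

SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select", re.IGNORECASE),
}

def detect(log_line: str) -> list[str]:
    # Pattern detection: flag anything matching a known signature.
    return [name for name, rx in SIGNATURES.items() if rx.search(log_line)]

# The pattern fires on a harmless internal QA scan...
print(detect("GET /search?q=union select 1 from=qa-scanner"))
# ...and stays silent on a slow, novel exfiltration that matches
# no stored recipe, the kind of subtle cue a practitioner with
# lived context would notice.
print(detect("GET /export?rows=500&window=03:00"))
```

The regex fires reliably on the string it was given, benign or not, and misses anything novel; the contextual judgment has to come from somewhere outside the pattern.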
Palantir is maybe the most egregious example of death and destruction from fraud. They literally tried to kill an innocent man, with zero accountability, while generating the very terrorists they had begged for millions of dollars to help find. I call them the “self-licking ISIS-cream cone” because Palantir is perhaps the worst intelligence scam in history.
Correct Approach: Embedded Experience
Rather than trying to embed consciousness in physics, Frank suggests we need to “embed physics into our experience.” This doesn’t mean abandoning mathematical models, but recognizing them as powerful tools that emerge from and serve embodied understanding.
From this perspective, the goal isn’t to explain consciousness away through formal systems, but to understand how mathematical abstractions manifest within lived experience. We don’t seek explanations that eliminate experience in favor of abstractions, but ones that account for the power of abstractions within the structures of experience.
Cooking School Beats Every Recipe Database
This might be why the “hard problem” of consciousness feels so intractable when approached mathematically—it’s like trying to capture the essence of cooking by studying only the recipe. The formalization is useful, even essential, but it necessarily abstracts away from the very thing we’re most interested in: the lived experience of the cooking itself.
Perhaps consciousness studies—and by extension, our approach to AI and security—needs more public “cooking schools” and fewer Palantir “recipe databases.” More emphasis on cultivating the capacity for analysis and the curiosity for lived inquiry, rather than just dumping money into white supremacist billionaires building racist theoretical machine models.
This is the opposite of abandoning rigor or precision. It means recognizing that some forms of knowledge are irreducibly embodied and contextual. The recipe and the cooking are both essential—but they operate in different domains and serve different purposes.
For those of us working in security, our most sophisticated tools and protocols will always depend on practitioners who can read the subtle signs, make contextual judgments, and respond creatively to novel situations. The poetry of information security written here since 1995 lies not just in developing algorithms, but in the lived practice of protecting systems and people from harm in an ever-changing world.
The question isn’t whether we can build machines that think like humans, but whether we can create technologies that enhance rather than replace the irreducible art of human judgment and response. Like Bentham’s failed calculus, purely computational approaches to intelligence miss the embodied nature of understanding. But like Wittgenstein’s language games, consciousness might be best understood not as a problem to be solved, but as a form of life to be lived.
Perhaps the poet Wallace Stevens captured this tension best in “The Idea of Order at Key West,” where he writes of the sea and the singer who shapes our perception of it:
She sang beyond the genius of the sea.
The water never formed to mind or voice,
Like a body wholly body, fluttering
Its empty sleeves; and yet its mimic motion
Made constant cry, caused constantly a cry,
That was not ours although we understood,
Inhuman, of the veritable ocean.
The sea was not a mask. No more was she.
The song and water were not medleyed sound
Even if what she sang was what she heard,
Since what she sang was uttered word by word.
It may be that in all her phrases stirred
The grinding water and the gasping wind;
But it was she and not the sea we heard.
Consciousness, like the singer by the sea, is neither reducible to its material substrate nor separate from it. It emerges in the dynamic interaction between embodied beings and their world—not as computation, but as the lived poetry of existence itself.
When efficiency becomes the supreme value, it crowds out everything that actually makes systems robust and humane.
It’s weird intellectual laziness disguised as sophistication. Like, “we’ve solved it, just optimize for the single metric!” But real systems – whether they’re societies, ecosystems, or self-driving cars – are irreducibly complex.
The obsession with efficiency creates blindness to interdependencies, to edge cases, to the messy realities that don’t fit the clean model. It’s willful blindness, a refusal to see things that are obvious, like a toddler in a tantrum.
It’s why engineered systems fail catastrophically rather than degrading gracefully. It’s why societies that optimize for pure economic efficiency end up weakened, brittle and cruel.
And there’s something almost masturbatory about efficiency worship – this self-congratulatory feeling of having cut through all the “unnecessary” complexity to find the One True Way. But complexity isn’t a bug to be eliminated; it’s often where resilience and adaptation live.
The horrible, deadly Tesla failures, for example, have always been by design: idiocy dressed up as visionary. Musk gets to feel like an emperor for rejecting a “complex” multi-sensor approach, while his cars are literally stopping in intersections and speeding through school zones, mowing down children.
The efficiency ideology revealed him as dumb, unable to process the basic feedback that redundancy and “inefficient” backup systems are actually what safety requires.
Historians recognize this. It’s really just the old white supremacist authoritarian impulse: reality must conform to their elegant racist theory of total control, rather than the theory of power adapting to actual human reality.
It’s about a worldview that sees nuance, interdependence, and adaptive complexity as weaknesses rather than strengths.
When your system is built around the assumption that you’ve already found the One True Way, then any evidence that contradicts that becomes noise to be filtered out rather than signal to be heeded for true innovations.
Tesla’s dumb, deadly Robotaxis stopping in intersections, driving “scary as hell” on the wrong side of the road, and committing crimes… these aren’t sensor bugs, because within a deeply racist ideological edifice, causing harm to society is classified as merely an inconvenient fact.
A single police officer in 1994 killed South African “efficiency experts” (AWB) who had been “gaming” Black neighborhoods by shooting at women and children. It was headline news at the time, because the AWB had promised a race war to forcibly remove all “waste” from government, and instead ended up wasted on the side of a road.
DOGE is simply a racist DOG whistle.
A South African Afrikaner Weerstandsbeweging (AWB) member in 2010 (left) and a South African-born member of MAGA in the U.S. on 20 January 2025 (right). Source: The Guardian. Photograph: AFP via Getty Images, Reuters
What a colossal waste of taxpayer money. DOGE has been cutting funds for public safety while Tesla increasingly drains public safety resources, as in this report by The Autopian:
Per reports from the San Miguel County Sheriff’s Office, emergency crews attended the fire in the Coventry Hill area just after 1 PM on Sunday afternoon. Along with deputies from the San Miguel and Montrose Sheriff’s offices, Norwood Fire, Naturita Fire, and Paradox Fire crews also reported to the scene where a Cybertruck was on fire…. Thick black smoke could be spotted in the air as 30 firefighters engaged to suppress the flames. US Fish and Wildlife soon joined the fight, supplying additional crews and apparatus to help contain the blaze. The efforts of emergency responders saw the fire 90% contained just two hours after crews arrived on scene. However, that wasn’t fast enough to save the Cybertruck caught in the blaze. The EV pickup was burned to the ground, leaving little more than a bare metal shell sitting in the dust.
Thirty firefighters, and nothing was saved!?
The “extreme survival” design by the flamboyant Elon Musk is impossible to stop from turning into “little more than a bare metal shell sitting in the dust”. Let that sink in.