In our quaint digital age, where algorithms dance like blood-red autumn leaves in a Connecticut defense contractor parking lot, Wall Street’s hawks circle with their familiar hunger. The same men who once counted bombs and the resulting shelters as sound investments now spy big profits in the artificial minds that dream of checkpoints and dusty hellfire clouds.
The latest dispatches tell us, with that peculiar modern detachment, how these digital oracles, Palantir-blessed cybernetic gods and prophets, default to visions of violence as predictably as teenage boys returning to thoughts of glory: the worst possible bet.
All these AIs are supported by Palantir… [with] demonstrated tendencies to invest in military strength and to unpredictably escalate the risk of conflict – even in the simulation’s neutral scenario.
How perfectly American, how very Stanford, the marriage of silicon and savagery. Palantir, a strange tail that wags a militant dog, darling of the defense establishment. It carries its baggage like a suburban housewife’s guilt, firing off extra-judicial mistargeted killings like errant golf balls into ponds, privacy violations as casual as country club gossip, political extremism worn like a Brooks Brothers suit. They peddle their digital Trabants to a market drunk on its own mythology, these latter-day merchants of chaos whose every misstep is rebranded as innovation.
The Wall Street wisdom here recalls nothing so much as those bright young men of the 1960s who saw profit in stuffing suitcases of blood-stained decolonization dollars into napalm futures. They chase Palantir’s promise like suburban developers pursuing the perfect sundown town cul-de-sac, even as the company pours resources into the digital void with the abandon of a lottery addict. The irony sits heavy as New England humidity: in our quest to predict a fictionalized, cartoonish future, we’ve invested in unpredictability itself, a paradox that would be amusing if it weren’t so damnably dangerous and deadly.
As a preface, please read “A critical history of ImageNet”. It sets out very important and common-sense warnings about the foundations behind the latest generation of AI research:
We find that assumptions around ImageNet and other large computer vision datasets more generally rely on three themes: the aggregation and accumulation of more data, the computational construction of meaning, and making certain types of data labor invisible.
You may read that as: Fei-Fei Li used her Stanford position after 2009 to drive Big Tech’s moral decline through highly aggressive promotion of three dubious research priorities:
surveillance for profit
undemocratic representation for power consolidation
mass oppression and devaluation of human work
There wasn’t even a driving or pressing need. This is not a case of “we’re at war, if we don’t win we’re dead”; it’s just completely empty “SOMEBODY NOTICE ME” and unregulated “BIGGER, MOAR, FASTER” nonsense.
And now here is the explanation from Fei-Fei Li herself, in an interview where she describes her “desperate” attempts to generate attention and money by immorally acquiring others’ data for use without their consent.
She was told at Princeton very clearly not to engage in such acts, so much so that she left for Stanford to become overtly and aggressively anti-humanitarian. That simple origin fact should say it all (transcript by me).
Geoffrey Hinton: I’d like to start by asking Fei Fei whether there were any problems in putting together that dataset.
Fei-Fei Li: Yes, the dataset you’re mentioning is called ImageNet and I began building it in 2007 and spent the next three years with my graduate students building it. You asked me if there were problems building it. Where do I begin?
Even at the conception of this project I was told that it really was a bad idea. I was a young assistant professor, I remember it was my first year at Princeton. For example, a very respected mentor of mine in the field… actually told me, really out of their good heart, “please don’t do this” after I told them what the plan was in 2007. The advice was you might have trouble getting tenure if you do this. And then I also tried to invite other collaborators and nobody in ML or AI wanted to even go close to this project. And of course no funding.
[I joined Stanford in 2009 instead because]…I recognized that we really need to hit a reset and rethink about ML from a data-driven point of view so I wanted to go crazy and make a dataset that no one has ever seen in terms of its quantity, diversity and everything. So ImageNet after three years was a curated dataset of Internet images that totaled 15 million images across 22,000 object category concepts [pressing 49 thousand low-wage “surveillance” workers in 167 countries into devalued, unrecognized, immoral labor at Stanford].
So we made the dataset in 2009. We barely made it into a poster in an academic conference and no one paid attention. So I was a little desperate at that time and I believed this was the way to go and we open-sourced it. But even with open-source it wasn’t really picking up. So my students and I thought well let’s drive up the competition…
I wanted to go to Princeton because all I knew was Einstein went there… The one thing I learned in physics… was really the audacity to ask the craziest questions. … By the time I was graduating I wanted to ask the most audacious question as a scientist and to me the absolute most fascinating audacious question of my generation, that was 2000, was intelligence.
Not once, not ever, in any of her presentations about her time from 2000 to 2009, does Fei-Fei Li mention ethics as a factor in her very calculated decision to engage in surveillance for profit, undemocratic representation for power consolidation, and the mass oppression and devaluation of human work.
Those are not accidental decisions. She even mentions that Einstein drew her to Princeton, yet she seems to have completely failed to understand Einstein’s most potent revelations (Essays in Humanism, 1950, p. 24).
Penetrating research and keen scientific work have often had tragic implications for mankind, producing, on the one hand, inventions which liberated man from exhausting physical labor … but on the other hand, … creating the means for his own mass destruction …. This, indeed, is a tragedy of overwhelming poignancy!
The irony: Fei-Fei Li telling the world that Einstein inspired her life, her career, her love of science… and yet trying to convince us that she never once thought about the very clear words from Einstein warning her to NOT become a tragedy of overwhelming poignancy!
Her reasoning, as stated, seems to be this: Einstein was famous, I want to be famous. And that’s it.
Somehow Fei-Fei Li ignored everything written about the dangerous impact of technology on humanity, such that in 2009 all she could think about was penetrating research and keen scientific work that would race her as quickly as possible towards tragic implications. She talks about physics in terms of questions about the universe, as if a lab experiment observing an atom were morally equivalent to stealing photos out of someone’s wallet.
To put it another way, when I listen to Fei-Fei Li she repeatedly states things about history that are absolutely NOT true, and that make her sound devoid of empathy.
Ethical violations in physics are just as prevalent now as they were 20 years ago, finds a survey of early-career physicists and graduate students.
Why was she so unaware that centuries of opposition to scraping together a single massive dataset have always been rooted in obvious cases of crimes against humanity, the sort of dangerous power-accumulation risk that any and every scientist should be forced to study?
To take just one example, can you guess what her take would be on the well-documented “standing army of soldiers, a kneeling army of worshippers, and a crawling army of informants” period in Austria, sometimes referred to as a tragic “centralization with a vengeance”?
Reform was stymied, censorship oppressive, and freedoms restricted.
Three very good reasons not to build ImageNet.
The neo-absolutist state secret service of Austria kept an espionage card index with surveillance of every Vienna resident from 1849–1868. Photo by me.
Oh, but the data! Oh, but the intelligence! Think of the funding!
She very willfully removed all moral fiber from her work for over a decade while campaigning for as much money as quickly as possible. She clearly gravitated to Stanford because of its long history of dubious ethical and moral failures, despite the dangers of AI having been a serious topic for AI practitioners for over forty years already (including stark warnings from the field’s founders).
And now, after having done so much inexcusably ignorant damage to the world, after pretending humanism wasn’t something anyone had ever thought about before she discovered it at Google in 2017, Fei-Fei Li seems more than happy to lecture from a blood-stained Stanford pulpit about being someone who should be trusted to fix the mess she created.
OK, OK, I know it’s ironic, but there are two important things here.
A San Francisco police station was burglarized on Thursday morning, according to a police spokesperson [Rueca].
[…]
Rueca said police saw a man in a “secure area” of the building, located on the 1100 block of Fillmore Street, around 3:20 a.m. After a “brief struggle” when police attempted to detain him, the suspect, a 35-year-old San Francisco man, was arrested on suspicion of burglary and resisting arrest, according to Rueca.
First, the burglary was around 3am. As someone who investigated SF crime for years and even testified in court… criminals love their 3–4am slot. It’s like they know the inside baseball on when the city lets its guard down. One officer joked to me that it’s their shift-change time.
Second, they say beefing up security just makes the crooks move on to easier targets, like your next-door neighbor’s garage. But when criminal groups operating out of Texas are bold enough to hit a police station in California, it’s like they’re saying, “no biggie, even with all the cash splashed on security, we decide who to target, not you.” Let’s be real, cops lurking in the shadows waiting for action? Not exactly winning any awards for “presence felt,” let alone intimidation tactics.
Despite extensive promotion by a certain car company of its “sensor” and “vision” technology as a boon to transit, Tesla consistently fails to live up to these claims, emerging as a notable outlier on roads worldwide.
News from Edinburgh: Tesla blocking a bus. Tesla blocking a road. Tesla forcing unnecessary waste and expense.
Notice the huge distance to the curb on the right. Source: Edinburgh Live
Edinburgh locals have branded a Tesla driver ‘inconsiderate’ after the motorist’s parking reportedly resulted in a road closure on Sunday morning.
Bavelaw Road was forced to close at around 11.30am due to the route being blocked by parked vehicles. One Tesla driver in particular caused a frenzy after their parking blocked a service 44 bus from passing through.
The Tesla brand is characterized by assertiveness, self-centeredness, and negative influence, primarily attracting consumers who envision rapid social advancement through a flashy product that often catches fire. Think of someone smoking premium cigarettes on a bus, disregarding others’ comfort and obstructing the doorway.
Bus passengers asked rather than told not to smoke – archive, 1964
1964 seems like forever ago. And yet we’re still at the early point of asking people not to drive a Tesla like a jerk, instead of telling them not to drive one at all.
Remember how controversial it has been in the past to challenge selfish, harmful behavior?
‘If I can’t smoke on the bus, I’ll walk’ – how smoking was banned on Dublin Bus 30 years ago.
Then walk. Get some fresh air and some exercise. Stop blowing cancer onto other people. Everybody wins.
Unlike a single individual causing a minor disturbance by smoking on a bus, Tesla’s encouragement of disengagement from reality lets the most disruptive individuals wield far greater influence, with severe consequences ranging from road blockages to a concerning escalation in accidents and fatalities, as documented on Tesladeaths.com.
The more Tesla, the more tragic death. Without fraud there would be no Tesla. Source: Tesladeaths.com
Get the Tesla away from the buses. Get the Tesla away from the roads. Aggressive transit fraud tactics, especially wielded with such predictably anti-social consequences, deserve a ban. Without fraud there would be no Tesla.