As a preface, please read “A critical history of ImageNet”. It sets out very important, common-sense warnings about the foundations behind the latest generation of AI research:
We find that assumptions around ImageNet and other large computer vision datasets more generally rely on three themes: the aggregation and accumulation of more data, the computational construction of meaning, and making certain types of data labor invisible.
You may read that as saying Fei-Fei Li used her Stanford position after 2009 to drive Big Tech’s moral decline through highly aggressive promotion of three dubious research priorities:
- surveillance for profit
- undemocratic representation for power consolidation
- mass oppression and devaluation of human work
There wasn’t even a driving or pressing need. This is not “we’re at war, and if we don’t win we’re dead”; it’s just completely empty “SOMEBODY NOTICE ME” and unregulated “BIGGER, MOAR, FASTER” nonsense.
And now here is the explanation from Fei-Fei Li herself, an interview where she describes her “desperate” attempts to generate attention and money by immorally acquiring others’ data for use without their consent.
The fact that she was told at Princeton very clearly not to engage in such acts, such that she left for Stanford to get more aggressively anti-humanitarian, should say it all (the transcript below is mine).
Geoffrey Hinton: I’d like to start by asking Fei Fei whether there were any problems in putting together that dataset.
Fei-Fei Li: Yes, the dataset you’re mentioning is called ImageNet and I began building it in 2007 and spent the next three years with my graduate students building it. You asked me if there were problems building it. Where do I begin?
Even at the conception of this project I was told that it really was a bad idea. I was a young assistant professor, I remember it was my first year at Princeton. For example, a very respected mentor of mine in the field… actually told me, really out of their good heart, “please don’t do this” after I told them what the plan was in 2007. The advice was you might have trouble getting tenure if you do this. And then I also tried to invite other collaborators and nobody in ML or AI wanted to even go close to this project. And of course no funding.
[I joined Stanford in 2009 instead because]…I recognized that we really need to hit a reset and rethink about ML from a data-driven point of view so I wanted to go crazy and make a dataset that no one has ever seen in terms of its quantity, diversity and everything. So ImageNet after three years was a curated dataset of Internet images that totaled 15 million images across 22,000 object category concepts [pressing 49,000 low-wage “surveillance” workers in 167 countries into devalued, unrecognized, immoral labor at Stanford].
So we made the dataset in 2009. We barely made it into a poster in an academic conference and no one paid attention. So I was a little desperate at that time and I believed this was the way to go and we open-sourced it. But even with open-source it wasn’t really picking up. So my students and I thought well let’s drive up the competition…
I wanted to go to Princeton because all I knew was Einstein went there… The one thing I learned in physics… was really the audacity to ask the craziest questions. … By the time I was graduating I wanted to ask the most audacious question as a scientist and to me the absolute most fascinating audacious question of my generation, that was 2000, was intelligence.
Not once, never in any of her presentations about her time from 2000 to 2009, does Fei-Fei Li mention ethics as a factor in her very calculated decision to engage in surveillance for profit, undemocratic representation for power consolidation, and the mass oppression and devaluation of human work.
Those are not accidental decisions. She even mentions that Einstein drew her to Princeton, yet she seems to have completely failed to understand Einstein’s most potent revelations (Essays in Humanism, 1950, p. 24).
Penetrating research and keen scientific work have often had tragic implications for mankind, producing, on the one hand, inventions which liberated man from exhausting physical labor … but on the other hand, … creating the means for his own mass destruction …. This, indeed, is a tragedy of overwhelming poignancy!
The irony: Fei-Fei Li tells the world that Einstein inspired her life, her career, her love of science… and yet she tries to convince us that she never once thought about Einstein’s very clear words warning her NOT to become a tragedy of overwhelming poignancy!
Somehow Fei-Fei Li ignored everything written about the dangerous impact of technology on humanity, such that in 2009 all she could think about was penetrating research and keen scientific work that would race her as quickly as possible towards tragic implications. She talks about physics in terms of questions about the universe, as if a lab experiment observing an atom has moral equivalence to stealing photos out of someone’s wallet.
To put it another way, when I listen to Fei-Fei Li I hear her repeatedly state things about history that are absolutely NOT true, in a way that sounds devoid of empathy. She seems egregiously ignorant about what matters in the world (pun not intended).
Ethical violations in physics are just as prevalent now as they were 20 years ago, finds a survey of early-career physicists and graduate students.
Why was she so unaware that centuries of opposition to scraping everything together into a single massive dataset have always been tied to obvious crimes against humanity, the sort of dangerous power-accumulation risk that any and every scientist should be forced to study?
To take just one example, can you guess what her take would be on the well-documented “standing army of soldiers, a kneeling army of worshippers, and a crawling army of informants” period in Austria, sometimes referred to as tragic “centralization with a vengeance”?
Oh, but the data! Oh, but the intelligence! Think of the funding!
She very willfully removed all moral fiber from her work for over a decade while campaigning for as much money as quickly as possible. She clearly gravitated to Stanford because of its long history of dubious ethical and moral failures, despite the dangers of AI having been a serious topic for AI practitioners for over forty years already (including stark warnings from founders).
When Princeton said no, she pivoted to the place that even to this day very openly promotes genocide.
And now, after having done so much inexcusably ignorant damage to the world, after pretending humanism wasn’t something anyone had ever thought about before she discovered it at Google in 2017, Fei-Fei Li seems more than happy to lecture from a blood-stained Stanford pulpit about being someone who should be trusted to fix the mess she created.
Burn toast, scrape faster.
The audacity of AI researcher immorality.