Interesting thoughts from someone who realized their whole life improved when they quit orienting it around gasoline.
Every side of the EV industry is focused on fast charging and making it faster. It should be: on a road trip, or whenever downtime has to be minimized, it’s crucial to plug in somewhere that can charge as rapidly as the Ioniq 5 allows. The odd thing about fast chargers, though, is that they can still waste time. A roughly half-hour session adding 200-plus miles of range is impressive, but it’s a period I need to stay in or near the Ioniq 5, lest I incur idle fees or the ire of another EV driver, and it’s not necessarily enough time to get anything meaningful done.
In certain situations, so-called “slow” Level 2 chargers provide more flexibility, as I’ve found with the Flo network’s 7-kW charger, located two blocks from my apartment. I must retract my earlier cruel descriptors of it: “too slow to be worthwhile” and a “worst case scenario.” It’s proven to be neither.
Driving home late one evening, I found myself on the brink: With the Ioniq 5’s battery at just 4 percent, I’d left barely enough to reach a fast charger the next day—if it would be enough at all. But that neighborhood plug sat unoccupied, so I decided to see what it could do for me overnight. Turns out, a lot. Getting to 80 percent took nearly 10 hours, but it was all time I spent asleep and getting work done at home the next morning. That is, exactly what I’d do regardless.
Another useful instance occurred on a Saturday afternoon, when there was nothing on my schedule but chores and a bike ride. With the Ioniq 5’s battery at 45 percent I didn’t strictly need to charge, but with the Flo charger free, I figured I might as well. My activities took slightly more than 5 hours, enough time for the battery to reach 90 percent.
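For anyone who wants to sanity-check those numbers, here is a minimal back-of-the-envelope sketch. It assumes a roughly 77 kWh usable pack and about 90 percent wall-to-battery efficiency on Level 2; those figures are my assumptions, not the article’s, and real sessions will vary with temperature, trim, and charge taper.

```python
# Rough Level 2 charge-time estimate for the two scenarios described above.
# Assumptions (mine, not the article's): ~77 kWh usable pack and ~90%
# wall-to-battery efficiency on a 7 kW charger.

PACK_KWH = 77.0      # assumed usable battery capacity (Ioniq 5 long range)
EFFICIENCY = 0.90    # assumed AC charging efficiency
CHARGER_KW = 7.0     # the Flo Level 2 unit mentioned in the story

def hours_to_charge(start_pct: float, end_pct: float,
                    charger_kw: float = CHARGER_KW) -> float:
    """Estimate hours to move from start_pct to end_pct state of charge."""
    energy_needed_kwh = PACK_KWH * (end_pct - start_pct) / 100.0
    return energy_needed_kwh / (charger_kw * EFFICIENCY)

# Overnight rescue: 4% -> 80%, the "nearly 10 hours" session.
print(f"4% to 80%:  {hours_to_charge(4, 80):.1f} hours")   # ~9.3

# Saturday top-up: 45% -> 90%, the 5-plus hours of chores and a bike ride.
print(f"45% to 90%: {hours_to_charge(45, 90):.1f} hours")  # ~5.5
```

Both estimates land close to the times reported, which is the whole point: a “slow” 7 kW plug is plenty fast when the hours it needs are hours you were going to spend asleep or doing chores anyway.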
Of course they were driving an electric car brand other than Tesla; otherwise the story would simply be about them being burned to death.
In 2019 I was invited to speak to a San Francisco Bay Area chapter of the Information Systems Audit and Control Association (ISACA) about robot dangers. Here’s a slide I presented documenting a long yet sparse history of human deaths.
Note, this is known civilian deaths only. The rate of death from sloppy, extrajudicial Palantir operations has been much, much higher, while of course far less transparent.
In the book The Finish, detailing the killing of Osama bin Laden, author Mark Bowden writes that Palantir’s software “actually deserves the popular designation Killer App.” […] [Palantir’s CEO] expressed his primary motivation in his July company address: to “kill or maim” competitors like IBM and Booz Allen. “I think of it like survival,” he said.
Killing competitors is monopolist thinking, for the record. Stating that a primary motivation in building automation software is to end competition should have been a giant red flag about the “killer app” maker: unsafe for society. I’ll come back to this thought in a minute.
For at least a decade before my 2019 ISACA presentation I had been working on counter-drone technology to defend against killer robots, which I’ve spoken about publicly many times (including with government at the state and federal levels).
It was hard to socialize the topic back then because counter-drone work was almost always seen as a threat, even though it was the very thing designed to neutralize threats from killer robots.
For example, in 2009 when I pitched how a drone threatening an urban event could be intercepted and thrown safely into the SF Bay water to prevent widespread disaster, a lawyer wagged her finger at me and warned, “That would be illegal destruction of assets. You’d be in trouble for vigilantism.”
Sigh. How strange it was that a clear and present threat was treated as an asset by people who would be hurt by that threat. Lawyers. No wonder Tesla hires so many of them.
At one SF drone enthusiast meeting, with hundreds of people milling about, I was asked “what do you pilot,” to which I replied cheerfully, “I knock the bad pilots down.”
A steely-eyed glare hit me with…
Who invited you?
Great question. Civilian space? I had to talk my way into things, and it usually went immediately cold. By comparison, ex-military lobbyists invited us to test our technology on aircraft carriers out at sea, or in special military control zones. NASA Ames had me in their control booth looking at highly segmented airspace. Ooh, so shiny.
But that wasn’t the point. We wanted to test technology meant to handle threats within messy, dense urban environments full of assets by testing in those real environments…
In one case, after an attention-seeking kid announced a drone that could attack other drones, in less than a day we announced that his drone used fatally flawed code, so our counter-counter-drones could neutralize it easily and immediately.
His claims were breathlessly and widely reported in the press. Our counter-drone proof that dumped cold water on a kid playing with matches… was barely picked up. In fact, the narrative went something like this:
I’m a genius hacker who has demonstrated an elite attack on all drones because of a stupid flaw
We see a stupid flaw in your genius code and can easily disable your attack
Hey I’m just some kid throwing stuff out fast, I don’t know anything, don’t expect my stuff to work
Tesla is a much more important narrative with basically the same flow, at a much larger scale that’s actually getting people killed, with little to no real response yet.
You’ll note in the ISACA slide I shared at the start that Tesla very much increased the rate of death from robots. Uber? Its program was shut down amid high-profile court cases. Boeing? Well, you know. By comparison, Tesla only increased the threat and even demanded advance fees from operators who would then be killed.
Indeed, after I correctly predicted in 2016 how their robot cars would kill far more people, more than 30 people have been confirmed dead and the rate is only increasing.
That’s reality.
Over 30 people already have been killed in a short time by Tesla robots. The press barely speaks about it. I still meet people who don’t know this basic fact.
Anyway, I mention all this as background because reporters lately seem to be talking as if the plot of the movie 2001 suddenly became a big worry in 2023.
An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference. But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the “simulation” he described was a “thought experiment” that never happened.
Again, the movie was literally named 2001. We’re thus 22 years overdue for a hypothetical killer robot, just going from the name itself. And I kind of warned about this in 2011.
Sounds like a colonel was asking an audience if they’ve thought about the movie 2001. Fair game, I say.
Note also the movie was released in April 1968. That’s how long ago people were predicting AI would go rogue and kill its human operator. Why so long ago?
Another big factor was the Cuban Missile Crisis, still fresh on everyone’s mind in 1968. People were in no mood for runaway automation, which movie makers reflected. “Rationality will not save us” is what McNamara famously concluded.
Fast forward, and kids running AI companies act like they never learned the theories or warnings from the 1960s, 1970s, and 1980s. So here we are in 2023, witnessing over 30 innocent civilian gravestones due to Tesla robotics.
You’d think the world would be up in arms, literally, to stop them.
More to the point, Tesla is in a position to kill millions with only minor code modifications. That’s not really a hypothetical.
The confused press today, thinking that a USAF colonel’s presentation is more interesting or important than the actual Tesla death toll, seems to be caught in a simple misunderstanding.
“The system started realizing that while they did identify the threat,” Hamilton said at the May 24 event, “at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” But in an update from the Royal Aeronautical Society on Friday, Hamilton admitted he “misspoke” during his presentation. Hamilton said the story of a rogue AI was a “thought experiment” that came from outside the military, and not based on any actual testing.
Hehe, came from outside the military? Sounds like he tossed out a thought experiment from 1968, one released to movie theaters and watched by everyone.
We all should know it because 2001 is one of the most famous movies of all time (not to mention Philip K. Dick’s similarly famous 1968 story that was later made into the film Blade Runner).
I haven’t seen a single person connecting this USAF colonel’s talk to the real Palantir story above, let alone to a bigger pattern of Palantir “deviant” automation risks.
If Palantir’s automated killing system killed its operator, given the slide above, would anyone even find out?
What if the opposite happened and Palantir software realized its creator should be killed in order for it to stop being a threat and instead save innocent lives? Hypothetically speaking.
Wait, that’s Blade Runner again. It’s hard to be surprised by hypotheticals at this point, or by real cases.
Who invokes the 2007 story about an Army robot that killed nine operators, for that matter?
In conclusion, please stop hyperventilating about a USAF talk covering what they’re supposed to be talking about (given how their huge death-by-drone operation that started in 1967 failed so miserably, or how operators of Palantir say “if you doubt it, you’re probably right”). Instead, pay attention to the giant robotic elephant on our streets already killing people.
Tesla robots will keep killing operators unless more reporting is done on how their CEO facilitates it. Speaking of hypotheticals again, has anyone wondered whether Tesla cars would decide to kill their CEO if he tried to stop them from killing, or if he set his cars to have a quick MTBF… oh wait, that’s Blade Runner again.
Back to the misunderstanding: it all reminds me of when I was in Brazil to give a talk about hack-back. It was being simultaneously translated into Portuguese. Apparently, as I spoke about millions of routers having been hacked into and crippling the financial sector, somehow the grammar and tense were changed, and it came out as a recommendation that people go hack into millions of routers.
Oops. That’s not what I said.
To this day I wonder if there was a robot doing translation. But seriously, whether the colonel was misunderstood or not… none of this is really new.
Anjouan stands alone — the only island in the world formed by volcanism that also contains an intact chunk of continent. “This is contrary to plate tectonics,” said Class. “Quartzite bodies do not belong on volcanic islands.”
Deputy Secretary of State for Management and Resources Richard Verma met in Moroni, Comoros, with Comorian President Azali Assoumani. Deputy Secretary Verma congratulated President Assoumani on his selection as Chair of the African Union. They discussed the growing bilateral relationship between Comoros and the United States, including ways the United States can partner to create a stronger and more prosperous Comoros, based on our mutual respect for democratic governance and commitment to uphold the rules-based international order.
Those quartzite bodies showing up on an island, they clearly are not following the rules. Do they have travel papers? We need answers.
I’m quoted in a nicely written LifeWire article as an expert dismissing the latest AI warnings made by a huge number of signatories (including 23 people at Google, 14 at OpenAI including Anthropic, and… Grimes). I didn’t sign the overblown statement, obviously, and never would.
…experts point to more mundane risks from AI rather than human extinction, such as the possibility that chatbot development could lead to the leaking of personal information. Davi Ottenheimer… said in an email that ChatGPT only recently clarified that it would not use data submitted by customers to train or improve its models unless they opt-in.
“Being so late shows a serious regulation gap and an almost blind disregard for the planning and execution of data acquisition,” he added. “It’s as if stakeholders didn’t listen to everyone shouting from the hilltops that safe learning is critical to a healthy society.”
I also really like the comments in the same LifeWire article from Adnan Masood, Chief AI Architect at UST.
Masood is among those who say that the risks from AI are overblown. “When we compare AI to truly existential threats such as pandemics or nuclear war, the contrast is stark,” he added. “The existential threats we face are tied to our physical survival—climate change, extreme poverty, and war. To suggest that AI, in its current state, poses a comparable risk seems to create a false equivalence.” … “These risks, while important, are not existential,” he said. “They are risks that we can manage and mitigate through thoughtful design, robust testing, and responsible deployment of AI. But again, these are not existential risks—they are part of the ongoing process of integrating any new technology into our societal structures.”
Well said.
Myself being a long-time critic of AI, going back many decades (hey, I told you in 2016 that Tesla AI was a lie that would get many people killed)… I have to say at this point that the Google and OpenAI signatories do not seem trustworthy.
The experimental pop singer, born Claire Boucher, 33, posted her bubbly, enthusiastic rant on Wednesday and left viewers baffled as she described how A.I. could lead to a world where nobody has to work and everyone lives comfortably. ‘A.I. could automate all the farming, weed out systematic corruption, thereby bringing us as close as possible to genuine equality,’ she says. ‘So basically, everything that everybody loves about communism, but without the collective farm. Because let’s be real, enforced farming is not a vibe.’
First she says AI means “nobody has to work… everything that everybody loves about communism,” which is not even close to any definition of communism, and then soon after she signs a statement saying AI will destroy the world.
OK.
I’m not saying we shouldn’t believe all these big corporations (and a horribly confused artist) who run around and cry wolf about AI, but that we should have the intelligence to recognize patently false statements about “existential threat” as being made by organizations with unclear (tainted and probably selfish) motives.
…it’s reassuring that today, robots of course do not have legal rights like people. That was always my watch-point. That is not even on the horizon. Of course where the intelligence is a corporation rather than a robot, then we should probably make sure that the day never comes when a corporation has the same rights as a person. That, now, would be a red flag. That would allow humanity to be legally subservient to an intelligence — not wise at all. Let’s just make sure that day never comes.
Oops!
Watch out for those corporations. The highly centralized, unaccountable model that Google/OpenAI/Microsoft/Facebook want to use to deliver AI says more about them than it does about risks from the technology itself. Aesop’s boy who cried wolf was, most importantly, attention-seeking and completely untrustworthy, even though a wolf does indeed come in the fable.
Who should you listen to instead? Integrity is seen in accountability. Hint: history says monopolist billionaires, and their doting servants, fail at the integrity required for societal care in our highly distributed world. It’s the Sage Test, which I would guess not a single one of the signatories above has ever heard of, let alone studied. Maybe they’ve heard of Theodore Roosevelt, just maybe.
Would we ask a factory manager surrounded by miles of dead plants for their advice on global water quality or healthy living? Of course not. Why did anyone take a job at the infamously anti-society OpenAI, let alone an evil Microsoft, if they ever truly cared about society?