“In the United States there are no Peugeot or Renault cars!”

Here’s how a Peace Corps veteran tries to illustrate the presence and effects of French colonialism.

I shall never forget a Comorian friend’s reaction to his first trip to the United States. Arriving back in Moroni, rather than enthusiastically describing skyscrapers, fast food, and cable TV, his singular observation was that in the United States there are no Peugeot or Renault cars! This piece of technology, essential to Comorian life, had always been French, and this Comorian was shocked to learn that there were alternatives.

Why would anyone expect someone with daily access to delicious fresh fruit and fish to enthusiastically describe… fast food?

Yuck!

Skyscrapers?

Wat.

The French passed draconian laws, and did worse, to require colonies (especially former ones) to buy only French exports. I get it, yup I do. So Comorians lived under an artificial monopoly, and only knew French brands. Kind of like how the typical American who visits France says “I need a coffee, where’s the Starbucks?” Or “I need to talk with my family and friends, where’s the Facebook?”

Surely being forced by frogs into their dilapidated cars, however, still rates quite far above entering the health disaster of American fast food. A Comorian losing access to delicious, locally made, slow, high-nutrition cuisine is nightmare stuff.

But seriously…

“This piece of technology, essential to Comorian life” is a straight up Peace Corps lie about cars.

Everyone (especially the bumbling French DGSE) knows a complicated expensive cage on four wheels is unessential to island life, inefficient, and only recently introduced. Quality of life improves inversely to the number of cars on a single lane mountain road.

Motorbikes? That’s another story entirely, as an actual “unexpected” power differential, which the Israelis, Afghans, Chinese and lately Ukrainians very clearly know far too well (chasing British and Japanese lessons).

Go home, Peace Corps guy, your boring big car ride to a lifeless big skyscraper box filled with tasteless Big Macs is waiting. Comorians deserve better. American interventionists should try to improve conditions locally and appropriately, not drive former colonies so far backwards they start missing French cars.

Deception From a BlackHat Conference Keynote About Regulating AI

One note from this year’s typical out-of-tune “BlackHat” festivities in Las Vegas struck a particularly dissonant chord — a statement that regulators habitually trail behind the currents of innovation.

A main stage speaker expressed a strange sentiment that stood in contrast to the essence of good regulation itself, which strives to act as an anticipatory framework for an evolving future (much as a car’s own regulators, its brakes and suspension, are designed for the next turn and not just the last one).

At the conference’s forefront, the keynote presented what the speaker wanted people to believe was a historical pattern: governmental entities mired in reactionary postures, unable to get “ahead” of emerging technologies.

Unlike in the time of the software boom when the internet first became public, Moss said, regulators are now moving quickly to make structured rules for AI.

“We’ve never really seen governments get ahead of things,” he said. “And so this means, unlike the previous era, we have a chance to participate in the rule-making.”

Uh, NO. As much as it’s clear the keynote was attempting to generate buzz and excitement from opportunity, its premise is entirely false. This is NOT unlike any previous era. In fact, AI is as old as, if not older than, the Internet.

If what Moss said were even remotely true about regulators moving quickly to get ahead of AI, unlike other technology, then the present calendar year should read 1973 NOT 2023.

Regulations in fact were way ahead of the Internet and are often credited for its evolutionary phases, for better or worse, making another rather obvious counter-point.

  • Communications Act of 1934 reduced anti-government political propaganda (e.g. Hearst’s overt promotion of Hitler). It created the FCC to break through corporate-monopolistic grip over markets from certain extreme-right large broadcast platforms and promoted federated small markets of (anti-fascist) publishers/broadcasters… to help with the anticipated emerging conflict with Nazi Germany.
  • Advanced Research Projects Agency Network, funded by the U.S. Department of Defense, in 1969 created the “first workable prototype of the Internet” realizing a 1940s vision of future decentralized peering through node-to-node communications.
  • Telecommunications Act of 1996 inverted 1934 regulatory sentiment by removing content safety prohibiting extreme-right propaganda (predictably restarting targeted ad methods to manipulate sentiment, even among children), and encouraged monopolization through convergence of telephones, television and Internet.

I blame Newt “Ideas” Gingrich for the extremely heavy dose of crazy that went into that 1996 Act.

You hear Gingrich’s staff has these five file cabinets, four big ones and one little tiny one. No. 1 is ‘Newt’s Ideas.’ No. 2, ‘Newt’s Ideas.’ No. 3, No. 4, ‘Newt’s Ideas.’ The little one is ‘Newt’s Good Ideas.’

And I did say better AND worse, as some regulators warned about in 1944.

…inventions created by modern science can be used either to subjugate or liberate. The choice is up to us. […] It was Hitler’s claim that he eliminated all unemployment in Germany. Neither is there unemployment in a prison camp.

But perhaps even more fundamentally, Moss overlooks the essential nature of regulation — an instrument that historically has projected forward, preemptively calibrating guidelines to steer innovation, getting ahead of things by navigating toward responsible or moral avenues to measure progress.

Moss’s viewpoint fits a broader history of “BlackHat” disinformation tactics: it ironically calls for proactive strides in outlining rules for the burgeoning realm of artificial intelligence because regulators are supposedly behind the times, while simultaneously decrying regulators who do too much too soon by not leaving unaccountable play space for the most creative exploits (e.g. hacking).

Say anything, charge for admission.

A more comprehensive and stable view emerges against the inconsistencies in their lobbying by scrutinizing the purpose of regulatory frameworks. They are, by design, instruments of foresight, an architecture meant to de-risk the uncharted waters of the future. Analogous to traffic laws, such as the highly controversial seat belt mandates that dramatically reduced future automobile-related fatalities, regulations generate the playing field for genuine innovations by anticipating potential dangers and trying to stop harms. Airbags, also highly controversial, were then developed — an innovation driven by regulation — after the risk reduction of seat belts petered out. Ahead or behind?

In essence, regulatory structures are nothing if not future-focused. They are about drawing blueprints, modeling threats, calculating potential scenarios and ensuring responsible engineering for building within those parameters. Regulatory frameworks, contrary to being bound just to strict historical precedent, are vehicles of anticipation, fortified to embrace the future’s challenges with proactive insight.

While Moss’s sensational main stage performance pretends to allude to past practices, it unfortunately spreads false and deceptive ideas about how regulators work and why. The giant tapestry of regulation around the world is woven with threads of anticipatory vigilance, made by people who spend their careers working out how to establish a secure trajectory forward.

Speaking of getting ahead of things, it’s been decades already since regulators openly explained that “BlackHat” has racist connotations, and three years since major changes in the industry by leaders such as the NCSC.

…there’s an issue with the terminology. It only makes sense if you equate white with ‘good, permitted, safe’ and black with ‘bad, dangerous, forbidden’. There are some obvious problems with this…

Obvious problems, and yet the poorly-named BlackHat apparently hasn’t tried to get ahead of these problems. If they can’t self-regulate, perhaps they’re begging for someone with more foresight to step in and regulate them.


Now, don’t get me started on the history of cowboy hats and why disinformation about them in some of the most racist movies ever made (e.g. “between the 1920s and the 1940s”) does not in any way justify a tolerance for racism.

In this golden age of Westerns, good and evil weren’t color coded: there was plenty of room for moral ambiguity.

I say don’t get me started because as soon as someone says “but Hollywood said good guy hats are white” I have to point them at the absolutely horrible propaganda that was actually created by Hollywood to form that association of a “White Hat” being “good”.

…Hollywood took charge. In 1915, director D. W. Griffith adapted The Clansman as The Birth of a Nation, one of the very first feature-length films and the first to screen in the White House [and used by President Woodrow Wilson to restart the KKK]. Its most famous scene, the ride of the Klan, required 25,000 yards of white muslin to realize the Keller/Dixon costume ideas. Among the variety of Klansman costumes in the film, there appeared a new one: the one-piece, full-face-masking, pointed white hood with eyeholes, which would come to represent the modern Klan.

That’s how “room for moral ambiguity” in Westerns was codified by some wanting white to be “good”. BlackHat thus framing itself as a “bad” guy conference peddles racism straight out of 1915 Hollywood, and never in any good way. They really need to get ahead of things.

I mean practically any historian should be able to tell BlackHat that in America the white hats typically were regarded as bad guys (very unlike the propaganda of Hollywood promoting the KKK), and that the inverse to the evil lawlessness of American white hate groups was BLUE.

…lawless in its inception, [white hat wearing group] was soon dominated by the lawless element against which it was formed. The good citizens, instead of restoring law and order, became the servants and tools of disorder and mob violence. After two years of white-capism, another organization was formed to “down” the “White-caps,” called “Blue Bills.”

The Blue Bills, started by a doctor, in fact were all about law and order, rejecting the “anonymous hacker” methods of the white hats.

The Blue Bills did not consider themselves above the law; they didn’t consider themselves vigilantes, or heroes. They simply wanted to bring a stop to the terror that the White Caps had brought about…

But I digress… the BlackHat conference name is and always has been on the wrong side of history.

Suspicious Errors and Omissions in Inflection’s Generative PI.AI Privacy Policy and TOS

I was asked to have a look at the generative AI site of PI.AI and right away noticed something didn’t flow right.

The most prominent thing on that page to my eye is this rather aggressive phrase:

By messaging Pi, you are agreeing to our Terms of Service and Privacy Policy.

It’s like meeting a new person on the street who wears a big yellow cowboy hat that says if you speak with them you automatically have agreed to their terms.

Friendly? Nope. Kind? Definitely NOT.

How’s your day going? If you reply you have agreed to my terms and policies.

Insert New Yorker cartoon above, if you know what I mean.

Who is PI.AI? Why do they seem so aggro? There’s really nothing representing them, showing any meaningful presence on the site at all. Surely they are making a purposeful omission.

Why trust such opacity? Have you ever walked into a market and noticed one seller who looks like they simultaneously don’t care and yet seem desperate to steal your wallet, your keys and your family?

I clicked through the links being pushed at me by the strangely hued PI to see what they are automatically taking away from me. It felt about as exciting as dipping a meat-stick into a pool of alligators who were trained at Stanford to inhabit digital moats and chew up GDPR advocates.

Things quickly went from bad to worse.

If you click on the TOS link, given how it’s presented as first and foremost, your browser is sent to an anchor reference at the end of the Privacy Policy. Scroll up to the top of the Privacy Policy (erase the anchor) and you’ll find a choice phrase used exactly three times:

  • Privacy Rights and Choices: Delete your account. You may request that we delete your account by contacting us as provided in the “How to Contact Us” section below.
  • To make a request, please email us or write to us as provided in the “How to Contact Us” section below.
  • If you are not satisfied with how we address your request, you may submit a complaint by contacting us as provided in the “How to Contact Us” section below.

Of course I searched below for that very specific and repetitive “How to Contact Us” section but nothing was found. NOTHING. NADA.

Not good. There’s no mistaking that the language, the code if you will, is repeatedly saying that “How to Contact Us” is being provided, while it’s definitely NOT being provided.

Oh well, I’m sure it’s just lawyers making some silly errors. But it brings to mind how the AI running PI probably should be expected to be broken, unsafe and full of errors and omissions or worse.

It stands to reason that if the PI policy very clearly and specifically calls out a rather important section title over and over again (perhaps one of the most important sections of all, relative to rights and protections), then accuracy would be helpful to those concerned about safety including privacy protections. Or a link could be a really good thing here, as they clearly use anchors elsewhere; this is a webpage we’re talking about.

Journalists provide some more insight into why PI.AI coding/scripting might be so strangely sparse, sloppy and seemingly unsafe.

Before Suleyman was at Greylock, he led DeepMind’s efforts to build Streams, a mobile app designed to be an “assistant for nurses and doctors everywhere,” alerting them if someone was at risk of developing kidney disease. But the project was controversial as it effectively obtained 1.6 million UK patient records without those folks’ explicit consent, drawing criticism from a privacy watchdog. Streams was later taken over by Google Health, which shut down the app and deleted the data.

A controversial founder seems to be known primarily for having taken unethical actions without consent. Interesting and unfortunately predictable twist that someone can jump from criticism by privacy watchdogs into even more unregulated territory.

With all the money the PI.AI guys keep telling the press about, their poorly worded privacy policy with its vague language, not to mention its errors and omissions, seems kind of rich and privileged.

Less than two months after the launch of their first chatbot Pi, artificial intelligence startup Inflection AI and CEO Mustafa Suleyman have raised $1.3 billion in new funding. […] “It’s totally nuts,” he admitted. Facing a potentially historic growth opportunity, Suleyman added, Inflection’s best bet is to “blitz-scale” and raise funding voraciously to grow as fast as possible, risks be damned.

Ok, ok, hold on a minute, we have a problem, Houston. The terminology is supposed to alert us: a blitz where risks be damned? Amassing funds to grow uncontrollably for a BLITZ!?

That talk sounds truly awful, eh? Is someone out there saying, you know what we need is to throw giant robots together as fast as possible, risks be damned, for a Blitz?

Call me a historian, but it’s so 1940s hot-headed General Rommel, right before he unwisely stretched his forces into being outclassed and outmaneuvered by the counter-intelligence of Operation Bertram.

Source: “Images of War: The Armour of Rommel’s Afrika Korps” by Ian Baxter. Rommel’s men give him a look of disgust; “grow as fast as possible” orders fall apart due to deep integrity flaws.

I can’t even.

Back to digging around the sprawling Inflection registered domain names to find something, anything resembling the contact information meant to be in the TOS and Privacy Policy, I found that “heypi.com” hosted an almost identical looking page to PI.AI.

The Inflection color scheme or page hue of tropical gastrointestinal discomfort, for lack of a better “taupe” description, is quite unmistakable. Searching more broadly by poking around in that unattractive heypi.com page, wading through every detail, finally solved the case of the obscure or omitted section for contact details.

Here’s the section without running scripts:

See what I mean by the color? And here it is when you run the scripts:

All that, just for them to say their only contact info is privacy@pi.ai? Ridiculous.

Some bugs are very disappointing, even though they’re legit errors that should be fixed. Now I just feel overly pedantic. Sigh. Meanwhile, there are plenty more problems with the PI Privacy Policy and TOS that need attention.

Security Architect’s Guide to ISO/SAE 21434: Vehicle Safety

As any regular reader of this blog surely must know, vehicles have become increasingly connected and reliant on software systems. Rather than harp yet again on the many basic engineering safety failures of Tesla, in this post I will dig into a joint International Organization for Standardization (ISO) and Society of Automotive Engineers (SAE) standard for duty and care in security, which can easily help raise the bar on safety.

The 2021 ISO/SAE 21434 outlines 15 clauses to guide architecture of a cybersecurity program throughout the entire lifecycle of vehicle manufacturing. Let’s delve into the key aspects of each clause, with a special focus on Clause 15 – the Threat Analysis and Risk Assessment (TARA) process.

Clause 1. Scope: ISO/SAE 21434 sets the stage by defining the scope. All phases of a vehicle’s lifecycle are meant to be covered, from concept and design to production, operation, maintenance, and decommissioning.

Clause 2. Normative References: This clause lists the external standards and documents referenced in ISO/SAE 21434, providing a foundation for implementation.

Clause 3. Terms, Definitions, and Abbreviations: Here, the standard provides clear definitions for key terms, ensuring a shared understanding for special or specific terminology used throughout the document.

Clause 4. Cybersecurity Management System (CSMS): ISO/SAE 21434 emphasizes the establishment of a cybersecurity management system within organizations. This system, like the venerable ISO 27001 Information Security Management System (ISMS), drives leadership commitment, accountability, and ongoing improvement.

Clause 5. Organization Security Requirements: This section underlines the importance of developing security within an organization’s risk assessment processes. It also highlights the need for cross-functional collaboration so security gets a seat at the business table.

Clause 6. Product Security Requirements: The standard guides the development of specific cybersecurity requirements for vehicle components, systems, and interfaces. This ensures that security is a fundamental consideration in product development.

Clause 7. Cybersecurity Requirements Engineering Process: This clause details the steps for integrating security requirements into the design and development processes, ensuring engineering management is held accountable to thoroughness and traceability.

Clause 8. Cybersecurity Design Process: The standard focuses on embedding security into the design phase of vehicle components and systems. Secure architecture, threat modeling, and coding practices take center stage here.

Clause 9. Cybersecurity Verification Process: This clause outlines the process for checking that implemented security measures meet the Clause 7 requirements. Testing, reviews, and audits are key components.

Clause 10. Cybersecurity Validation Process: ISO/SAE 21434 stresses validation that the security applied preserves the entire vehicle system’s safety. Real-world testing ensures the system aligns with intended security objectives.

Clause 11. Cybersecurity Configuration Management Process: Managing security throughout the vehicle’s lifecycle is crucial. This clause covers the usual suspects in software change management, including version control, dependencies and secure updates.

Clause 12. Cybersecurity Risk Assessment Process for Production: Addressing risks introduced during the production phase is vital. The standard tackles potential manufacturing and assembly defects as they relate to systems security.

Clause 13. Incident Detection and Response Planning Process: This section covers post-operation incident detection and response planning. It includes monitoring, reporting, and preparation for incidents.

Clause 14. Cybersecurity Aspects of Decommissioning Process: The secure decommissioning of vehicles is highlighted, ensuring sensitive data removal and minimizing residual security risks. And on that point, I can’t resist mentioning some recent news from CNBC.

A Tesla Model X totaled in the U.S. late last year suddenly came back online and started sending notifications to the phone of its former owner, CNBC Executive Editor Jay Yarow, months later. The car or its computer was suddenly online in a southern region of war-torn Ukraine, he found by opening up his Tesla app and using a geolocation feature. The new owners in Ukraine were tapping into his still-connected Spotify app to listen to Drake radio playlists, he also discovered.

Don’t ask me why it was news to CNBC Executive Editor Jay Yarow that Tesla regularly fails at basic safety engineering. And that brings us to the best Clause of all…

Clause 15. Threat Analysis and Risk Assessment (TARA) Process stands out as a critical step in the standard and one that deserves considerable attention. Running the TARA process involves identifying assets, threats, vulnerabilities, assessing impact and likelihood, calculating risk, identifying existing controls, determining residual risk, setting target risk levels, planning countermeasures, implementation, reassessing risk, documentation, verification, communication, and periodic repetition.

It’s a security architect’s dream, if you ask me. Here’s a simple Step-by-Step Process example for running TARA:

Step 1: Identify Assets and Scope

Identify the assets within the vehicle system, including hardware, software, data, and communication networks. Clearly define the scope of the analysis, specifying which parts of the system will be covered.

Step 2: Identify Threats

Enumerate potential threats that could exploit vulnerabilities in the vehicle system. Consider a wide range of threats, including unauthorized access, malware attacks, physical attacks, and social engineering.

Step 3: Identify Vulnerabilities

Identify vulnerabilities in the vehicle system that could be exploited by the identified threats. These vulnerabilities could be related to software, hardware, communication protocols, or human interaction.

Step 4: Assess Impact and Likelihood

Evaluate the potential impact of each identified threat exploiting a vulnerability. Consider consequences like loss of control, privacy breaches, financial losses, etc. Assess the likelihood of each threat-vulnerability pair occurring based on factors such as the threat’s motivation and capabilities.

Step 5: Calculate Initial Risk

Calculate the initial risk for each threat-vulnerability pair using a predefined risk assessment formula. This typically involves multiplying the impact and likelihood scores.
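As a sketch of that multiplicative convention (ISO/SAE 21434 does not mandate one formula, so the 1–5 ordinal scales here are illustrative assumptions, not the standard’s):

```python
def initial_risk(impact: int, likelihood: int) -> int:
    """Initial risk for one threat-vulnerability pair.

    Assumes simple 1-5 ordinal scales for both factors; the standard
    leaves the exact scoring scheme to the organization.
    """
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * likelihood
```

A high-impact (5), medium-likelihood (3) pair would score 15 out of a possible 25 under this assumed scheme.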

Step 6: Identify Existing Controls

Identify any existing cybersecurity controls or countermeasures that mitigate the identified risks. Evaluate the effectiveness of these controls in reducing the risk level.

Step 7: Determine Residual Risk

Calculate the residual risk after considering the effects of existing controls. This provides an understanding of the remaining risk that needs to be addressed.
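One common (though not standard-mandated) way to model this step is to discount the initial score by an estimated control effectiveness; the 0.0–1.0 effectiveness fraction below is an illustrative assumption:

```python
def residual_risk(initial: float, control_effectiveness: float) -> float:
    """Residual risk after existing controls are considered.

    control_effectiveness is an assumed 0.0-1.0 fraction of risk the
    control removes; e.g. a 0.6-effective control leaves 40% of the
    initial risk as residual.
    """
    if not 0.0 <= control_effectiveness <= 1.0:
        raise ValueError("control_effectiveness must be between 0.0 and 1.0")
    return initial * (1.0 - control_effectiveness)
```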

Step 8: Determine Target Risk Level

Define the desired target risk level based on organizational risk tolerance and regulatory requirements. This step helps in setting a clear goal for risk reduction.

Step 9: Plan Countermeasures

Develop a plan for implementing additional or enhanced cybersecurity measures to reduce the risk level to the target. Consider a combination of technical, procedural, and organizational measures.

Step 10: Implement Countermeasures

Put the planned countermeasures into action according to the defined plan. This could involve software updates, hardware enhancements, process changes, etc.

Step 11: Reassess Risk

Reassess the risks after implementing the countermeasures. Determine if the risk level has been effectively reduced to meet the target risk level.

Step 12: Document the TARA Process

Document all the steps taken during the TARA process, including the identified threats, vulnerabilities, risk assessments, countermeasures, and results.

Step 13: Review and Verification

Review the entire TARA process and its documentation for accuracy and completeness. Verify that the chosen countermeasures are appropriate and effective.

Step 14: Communicate Results

Communicate the TARA results and findings to relevant stakeholders within the organization. This ensures that everyone is aware of the identified risks and the measures being taken to mitigate them.

Step 15: Repeat Periodically

Perform the TARA process periodically or whenever significant changes occur in the vehicle system. New threats, vulnerabilities, and technologies may emerge, requiring a reevaluation of the cybersecurity landscape.

Finally, here’s a step-by-step example for the TARA process for a vehicle’s connected infotainment system:

Step 1: Identify Assets and Scope

Asset: Connected infotainment system
Scope: Analysis covers software, communication interfaces, and data flows.

Step 2: Identify Threats

Unauthorized remote access, malware injection, data interception, physical access

Step 3: Identify Vulnerabilities

Unpatched software, weak authentication, insecure data transmission

Step 4: Assess Impact and Likelihood

Unauthorized access could lead to data breaches, loss of control. Likelihood varies based on attacker profiles (e.g. FBI MICE) and system exposures.

Step 5: Calculate Initial Risk

Initial risk score calculated for each threat-vulnerability pair.

Step 6: Identify Existing Controls

Firewall blocks generic service port attempts, authentication and encryption are in place, as well as least privilege principle for role-based access.

Step 7: Determine Residual Risk

Residual risk is calculated considering the effectiveness of existing controls.

Step 8: Determine Target Risk Level

Target risk level set to a certain value based on risk tolerance.

Step 9: Plan Countermeasures

Plan includes implementing stronger authentication, regular software updates, integrity checking and monitoring, intrusion detection with alerting.

Step 10: Implement Countermeasures

Countermeasures are integrated into the infotainment system.

Step 11: Reassess Risk

Risks are reassessed post countermeasure implementation.

Step 12: Document the TARA Process

All steps, assessments, and decisions are documented.

Step 13: Review and Verification

TARA process and documentation are reviewed by experts and stakeholders.

Step 14: Communicate Results

Results and actions are communicated to relevant departments.

Step 15: Repeat Periodically

The whole TARA process is scheduled for regular intervals or when system changes occur, much like how any threat modeling process should be built into software engineering cultures.

Keep in mind that each organization’s TARA process may vary based on their specific context, system complexity, and risk appetite. It’s crucial to involve experts with operational and engineering security knowledge and adapt the process to suit the unique requirements of your organization and its vehicle systems.
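To make that adaptation point concrete, here is a minimal sketch of a TARA risk register in Python, loosely following the infotainment example above; the scores, effectiveness fractions, and target level are invented for illustration and would come from your organization’s own scales:

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    """One row of a hypothetical TARA risk register (scores illustrative)."""
    threat: str
    vulnerability: str
    impact: int                         # assumed 1-5 ordinal scale
    likelihood: int                     # assumed 1-5 ordinal scale
    control_effectiveness: float = 0.0  # assumed 0.0-1.0 fraction mitigated

    @property
    def initial_risk(self) -> int:
        # Step 5: multiplicative scoring, one common convention
        return self.impact * self.likelihood

    @property
    def residual_risk(self) -> float:
        # Step 7: discount by existing control effectiveness
        return self.initial_risk * (1.0 - self.control_effectiveness)

def needs_treatment(register, target):
    """Steps 8-9: flag pairs whose residual risk still exceeds the target."""
    return [e for e in register if e.residual_risk > target]

# Infotainment example from the post, with invented numbers:
register = [
    ThreatEntry("unauthorized remote access", "weak authentication", 5, 3, 0.5),
    ThreatEntry("malware injection", "unpatched software", 4, 4, 0.25),
    ThreatEntry("data interception", "insecure data transmission", 3, 2, 0.8),
]
flagged = needs_treatment(register, target=6.0)
```

Under these invented scores, the first two pairs exceed the target and would feed the countermeasure plan (Step 9), while data interception is already mitigated below it.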

By embracing the ISO/SAE 21434 standard, significant strides can be taken in bolstering the safety of vehicles. Meticulous attention to the clauses will cultivate a more robust security posture that not only safeguards vehicles but also builds trust with consumers and industry stakeholders alike. As technology continues to evolve into more complex interconnected systems, ISO/SAE 21434 provides a roadmap for the automotive industry to navigate the security landscape with a measure of quality from threat modeling.

Now back to explaining why Tesla is unfit to be on any road…

Related: Hundreds of Brand New Teslas Piling Up in Junk Yards