DOGE Cuts to FAA Make Aviation Grossly Negligent Again

Breach of Federal Agency Degrades Aviation Safety to Deadly Pre-Regulation Era

Anyone who knows anything about safety rules can see the problem: when you gut federal oversight, bad things happen fast. The DOGEans swept into transportation safety agencies like a newly formed Khmer Rouge wrecking ball, and the results are playing out exactly how safety experts have always warned they would. First Tesla, then SpaceX, now aviation – it’s like watching the stability of democratic transit systems crash and burn, deregulated trip by deregulated trip.

[Chart: Tesla Deaths Per Year. Source: TeslaDeaths.com]

Tesla’s safety numbers are jaw-dropping – the data shows serious incidents climbing faster than their production line can churn out new cars. Remember when lawn darts were banned in the ’80s for killing three kids, or the Ford Pinto was shut down in the ’70s for killing 27? Tesla’s battery fires and autopilot crashes are making the notorious engineering tragedies of history look like rounding errors.

The graph shows Tesla’s safety record hitting a nasty turning point in 2021. While their fleet grew steadily (blue line), crashes and deaths (orange and pink) shot up way faster – we’re talking a 5x spike in serious crashes by 2024. When your accident curve is climbing faster than your production line, something’s seriously wrong. Data pulled straight from NHTSA reports and tracked on TeslaDeaths.com.

So is it any wonder that the DOGEan effect on aviation has been an immediate shift to tragedy after tragedy?

Do people understand just how deadly Tesla and SpaceX are? I often think their saccharine, Zizian cult-like propaganda has made Americans blind to the reality of the actual dangers.

Do you see the problem? DOGE is failing upward so fast it is going to be even less successful than the public-funding sinkhole tragedy of SpaceX.

Here’s the reality check: NASA spacecraft have been reaching Mars since 1964, sticking landings since Viking 1 in 1976 – we’re talking eight successful touchdowns including groundbreaking rovers like Sojourner, Spirit, Opportunity, Curiosity, and of course Perseverance. I even worked at NASA myself on Mars terrestrial technology (security related, of course). Meanwhile, SpaceX blasted big fat lies out year after year, talking up their Mars game like a new Rhodesia colonized by 2022, yet they’re still stuck in Earth orbit blowing up rockets faster than ever before in history. That’s like promising to win the Super Bowl when you haven’t even made it past Pop Warner football without breaking your leg every weekend. A decade of bigger and bigger Mars promises, zero Mars missions, just a bunch of toxic debris.

Let’s connect the obvious dots: a guy who promised Mars colonies by 2022, while his cars’ crash rates outpace production, is now influencing national aviation safety. This isn’t about the magic fairy dust tech promises of Zizian fantasy anymore, which sank an old abandoned tugboat nobody cared about. DOGEan threats are international, about real public planes in real skies that already put hundreds of lives or more at risk.

There have been four major deadly U.S. aviation disasters so far this year. They happened within the span of two weeks in Washington D.C., Philadelphia, Alaska and Arizona. …the Washington D.C. crash on Jan. 29 that killed 67 people [blamed on FAA staffing changes] is the first fatal U.S. commercial aviation crash in the past 15 years.

The headlines tracking FAA changes read like a countdown clock nobody wanted to see ticking. And based on what we have seen from Tesla and SpaceX’s atrocious safety track records, that’s not the kind of countdown to deaths anyone should ignore.

Related:

  • https://www.theguardian.com/us-news/2025/jan/30/dc-plane-crash-faa-investigation
  • https://www.propublica.org/article/elon-musk-spacex-doge-faa-ast-regulation-spaceflight-trump
  • https://thehill.com/homenews/administration/5149006-elon-musk-spacex-faa/amp/
  • https://www.yahoo.com/news/musk-spacex-team-unleashed-faa-193638179.html
  • https://www.reuters.com/world/us/trump-administration-ordered-fully-comply-with-order-lifting-funding-freeze-2025-02-10/
  • https://www.newsweek.com/donald-trump-freeze-hiring-air-traffic-controllers-washington-crash-2023348
  • https://www.usnews.com/news/best-states/virginia/articles/2025-01-31/air-traffic-controllers-were-initially-offered-buyouts-and-told-to-consider-leaving-government
  • https://www.theguardian.com/us-news/2025/feb/17/trump-administration-faa-worker-firings

Regulators Push DeepSeek Towards Privacy-Preserving AI Apps: South Korea Joins Italy’s Innovation Ruling

Recent regulatory actions by South Korea and Italy regarding DeepSeek’s mobile app highlight an exciting opportunity for developers and organizations looking to leverage cutting-edge AI technology while innovating towards baseline data privacy standards.

Innovation Seeds From Flowering Regulation Headlines

While headlines suggest a wholesale ban on any technology with a flaw, the reality on the ground, to technology experts, is far more nuanced and promising.

Both South Korea’s Personal Information Protection Commission (PIPC) and Italy’s data protection authority have specifically targeted mobile app implementations that fail to respect privacy concerns. What they don’t emphasize enough for the common reader, and so I will explain here, is that the underlying AI technology is not their complaint.

This distinction is crucial because DeepSeek’s models remain open source and available for use with better user applications. These regulatory actions are essentially defining a better world – pushing the ecosystem toward proper implementation practices, particularly regarding data handling and privacy protection.

Local-First AI Applications Make Sense

This innovation push, thanks to the rules of engagement that create a rational market, is the perfect opportunity for developers to build privacy-preserving local applications that leverage DeepSeek’s powerful AI models while ensuring complete compliance with regional data protection laws.

Here’s why this DeepSeek news matters so much in the current landscape of AI services all around the world violating basic privacy rights:

  1. Data Sovereignty: By implementing local-first applications, organizations and individuals they serve will maintain complete control over their data, ensuring it never leaves their jurisdiction or infrastructure. Data should be centered around the owners, not pulled from them as an illegally acquired “twin” for secretive exploitation and harms.
  2. Regulatory Compliance: Purpose-built local applications can be designed from the ground up to comply with the basics of regional privacy requirements, from GDPR in Europe to PIPC guidelines in South Korea. Even Americans may find some protection in state or municipal privacy requirements to shield them from national-scale threats.
  3. Enhanced Security: Local deployment allows additional security layers and custom privacy controls unique to individual risks, above and beyond the baseline regulations, which might not be possible with third-party hosted solutions trying to serve everyone on a common basis.
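The data-sovereignty point above can be made concrete with a small sketch. Here is one way (a hypothetical scheme of my own, not anything DeepSeek ships) to pseudonymize identifiers before a prompt ever reaches a local model, so raw personal data is never stored or transmitted:

```python
import hashlib
import re

# Minimal sketch: strip email addresses out of prompts before local
# inference, replacing each with a salted hash token. The salt never
# leaves the machine, so the raw identifier is recoverable by no one
# while records stay locally correlatable. The regex and token format
# are illustrative assumptions, not a complete PII taxonomy.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def pseudonymize(text: str, salt: str = "local-only-salt") -> str:
    def _token(match: re.Match) -> str:
        digest = hashlib.sha256((salt + match.group(0)).encode()).hexdigest()
        return f"<user:{digest[:12]}>"
    return EMAIL_RE.sub(_token, text)

prompt = "Summarize the complaint filed by alice@example.com yesterday."
safe_prompt = pseudonymize(prompt)
print(safe_prompt)  # raw address replaced by a stable local token
```

Because the token is deterministic per salt, the same person maps to the same token across a local session, which preserves analytic utility without exposing the identifier.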

Technical Implementation Considerations

Organizations, or even nation-states, looking to build privacy-preserving applications with DeepSeek models should immediately shift their focus to:

  • Local model deployment and inference
  • Proper data anonymization and encryption
  • Configurable data retention policies
  • Transparent logging and auditing capabilities
  • User consent management
  • Clear data handling documentation
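A minimal sketch of how a few of the bullets above (consent management, transparent logging, a retention policy) might compose in practice. The class and the stubbed-out model call are hypothetical names of my own; in a real deployment the stub would be a call to a locally hosted inference server:

```python
import time

def run_local_model(prompt: str) -> str:
    # Stand-in for a locally hosted model call (e.g., a llama.cpp or
    # Ollama server on localhost); a canned reply keeps this sketch
    # self-contained and runnable.
    return f"[local reply to {len(prompt)} chars]"

class AuditedLocalClient:
    """Hypothetical wrapper adding consent checks, metadata-only audit
    logging, and a configurable retention policy around local inference."""

    def __init__(self, retention_days: int = 30):
        self.retention_days = retention_days
        self.audit_log = []
        self._consented = set()

    def grant_consent(self, user: str) -> None:
        self._consented.add(user)

    def ask(self, user: str, prompt: str) -> str:
        if user not in self._consented:
            raise PermissionError(f"no recorded consent for {user!r}")
        reply = run_local_model(prompt)
        self.audit_log.append({
            "ts": time.time(),
            "user": user,
            "prompt_chars": len(prompt),   # log metadata, never content
            "expires_days": self.retention_days,
        })
        return reply
```

Calling ask() before grant_consent() raises, and every answered request leaves an auditable, content-free log entry with its own expiry, which is exactly the kind of design the regulators are asking for.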

The push toward local deployments by South Korean and Italian regulators appears even more prescient in light of recent security research demonstrating potential backdoor vulnerabilities in large language models, vulnerabilities that open-source release at least makes possible to audit.

While the regulatory focus has been on privacy preservation, local deployments offer another crucial advantage: the ability to implement robust security measures, validation processes, and monitoring systems. Organizations running their own implementations can not only ensure data privacy but also establish appropriate safeguards against potential embedded threats, making the regulatory “restrictions” look more like forward-thinking guidance for responsible AI deployment.
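One way those "appropriate safeguards" could look in code: a simple output gate that flags suspicious patterns before a model reply is acted on. The pattern list is an illustrative assumption of mine, not a vetted detection ruleset:

```python
import re

# Illustrative deny-list of patterns a compromised or backdoored model
# might emit; a production gate would be far more comprehensive.
SUSPECT_PATTERNS = [
    re.compile(r"https?://", re.I),           # unexpected outbound links
    re.compile(r"\b(curl|wget)\b", re.I),     # shell download commands
    re.compile(r"base64,", re.I),             # inline encoded payloads
]

def validate_output(text: str):
    """Return (ok, hits): ok is False if any suspect pattern matched."""
    hits = [p.pattern for p in SUSPECT_PATTERNS if p.search(text)]
    return (not hits, hits)

ok, hits = validate_output("The capital of France is Paris.")
print(ok, hits)  # True []
```

Running your own stack means a gate like this can sit between the model and everything downstream, something a third-party hosted API rarely lets you interpose.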

The Implications of DeepSeek Inside

This trend signals a historically consistent pattern of technology evolution hitting the AI industry: away from centralized extractive practices and towards individual rights-conscious implementations. Like the Magna Carta of so many centuries ago, privacy regulations continue to serve as catalysts for innovation in deployment strategies, whether in data storage (Personal Computers), transmission (Internetworking) or processing (AI).

The actions by South Korean and Italian regulators are out front, pushing the whole world toward better practices in AI implementation. This creates opportunities everywhere for local technology companies to develop compliant AI solutions. Owners are emboldened to maintain control over their sensitive data, while developers can create innovative privacy-preserving applications to serve real needs. The open-source AI community thrives by being the most respectful of privacy concerns.

As more and more people follow the decades-long trend from shared compute to mobile personal devices (connected using open standards to shared compute), localized privacy regulations serve to challenge centralized, unaccountable surveillance. We can expect growing demand for privacy-preserving local AI applications, which presents a massive opportunity for developers and organizations to build privacy-first AI applications that leverage powerful open-source models locally. Competitive advantages come clearly through better privacy practices, because they foster sustainable trust with users through transparent data handling.

The future of AI that rises before us goes far beyond model capability towards responsible implementation (all engineering demands a code of ethics). The current regulatory environment is pushing us toward that future because markets fail and fall into criminal monopolization without common sense fairness enforcements (authorization based on inherited rights) that manifest as regulations. The sensible actions in South Korea and Italy to protect privacy in apps are guideposts toward proper AI implementation practices. By focusing on privacy-preserving local architectures, developers can continue to innovate with DeepSeek’s technology while ensuring human-centered outcomes that every state should and can now achieve.


Are you a developer interested in building privacy-preserving AI applications? Check out the Solid Project open standard of data wallet storage infrastructure.

DOGE Breach Expands to Social Security, Eliminating Staff Who Defend Data

Chilling words from the federal government as DOGEan troops expand their breach into even more sensitive data.

Nancy Altman, the president of the advocacy group Social Security Works, told CBS News they heard from SSA employees that officials from the Department of Government Efficiency, or DOGE, had been trying to get access to the Enterprise Data Warehouse — a centralized database that serves as the main hub for personal, sensitive information related to social security benefits such as beneficiary records and earnings data. Altman was told King had been resistant to giving DOGE officials access to the database.

“She was standing in the way and they moved her out of the way. They put someone in who presumably they thought would cooperate with them and give them the keys to all our personal data,” Altman said.

She was standing in the way? It’s literally her job to defend the Constitution. That’s not in the way, that is the way.

Washington Post Goes Dark: Refuses to Explain White House Censorship

Paid content submitted to the Washington Post was abruptly rejected without explanation.

[Asking about] anything they could do to alter the wrap to make it more suitable, they were simply told that the Post could not run it.

“When we asked questions, they said they couldn’t tell us…

Virginia Kase Solomón, Common Cause’s president and chief executive, told CNN the Post’s decision was “concerning,” saying the paper — which uses the slogan “Democracy dies in the darkness” — “seems to have forgotten that democracy also dies when a free press operates from a place of fear or compliance.”

[…]

The White House’s grievance with the AP… has also led to the publisher being indefinitely banned from the Oval Office and Air Force One, hindering its coverage.

When the group was instructed on how to submit new content, they said an ad supporting Trump was the suggestion.

“They gave us some sample art to show us what it would look like,” she said. “It was a thank-you Donald Trump piece of art.”

Clearly the Washington Post has positioned itself into a noticeable stance enabling Trump to kill democracy. Therefore, from a military intelligence history perspective, let me suggest this messaging campaign demonstrated some standard civilian influence operation principles: clear identification, an appeal to authority, and actionable solutions. Its effectiveness would vary significantly, which raises the question of why the Washington Post was so scared to print such basic ad material. Who did they really expect to be so affected by this that it needed to be stopped?

The content that the Washington Post abruptly refused to run fits its earlier editorial decision to block election opposition to Trump.

Look, we’ve got a textbook example here of defensive democracy messaging that deserves immediate deconstruction. The visual security stack is straight out of the propaganda playbook – blood-red emergency signaling combined with documentary-style monochrome. Classic appeal to authority with the White House imagery.

But here’s the real vulnerability assessment:

The psychological attack surface is multi-layered. They’re running parallel operations with emotional triggers + constitutional legitimacy claims + crisis framing. Smart move embedding that QR code – bridges legacy trust signals to digital activation paths. Basic NIST authentication principles applied to mass communication.

A critical security flaw though, maybe? They’re treating this like a typical partisan buffer overflow when it’s actually a privileged access management problem. We’re dealing with unauthorized escalation attempts against federal systems by both domestic and foreign threat actors. The messaging fails to address the core exploit: ethno-nationalist groups coordinating with external nation-state actors to compromise democratic institutions.

The platform censorship without transparency is a control plane failure that creates an exploitable trust gap. When WaPo goes dark on defending democracy, they’re essentially running an unpatched system during active attacks.

Basic incident response principles tell us that silence during critical security events automatically amplifies adversarial messaging.

Think Iran 1953 – when you leave security vulnerabilities in democratic systems unaddressed, you’re inviting exploitation. This isn’t about partisan messaging effectiveness anymore. This is about fundamental controls to protect constitutional processes from compromise.

Short version: They’re running outdated defensive patterns against evolving hybrid threats. Fix the trust architecture first, then worry about the messaging stack.