European Court of Human Rights: Telegram Does Not Have to Decrypt Messages for Russian Government

In the case of Anton Valeryevich Podchasov v. Russia, the European Court of Human Rights has published the following conclusion, upholding privacy and protecting encryption.

80. The Court concludes from the foregoing that the contested legislation providing for the retention of all Internet communications of all users, the security services’ direct access to the data stored without adequate safeguards against abuse and the requirement to decrypt encrypted communications, as applied to end-to-end encrypted communications, cannot be regarded as necessary in a democratic society. In so far as this legislation permits the public authorities to have access, on a generalised basis and without sufficient safeguards, to the content of electronic communications, it impairs the very essence of the right to respect for private life under Article 8 of the Convention. The respondent State has therefore overstepped any acceptable margin of appreciation in this regard.

81. There has accordingly been a violation of Article 8 of the Convention.

The conclusion clearly states that Russia violated Article 8: the right to respect for private and family life, home and correspondence.

The case stemmed from Russian legislation mandating that telecommunication service providers retain both the actual content of communications and associated metadata for defined durations, indiscriminately across all content and all users. Furthermore, a 2017 directive from the Russian Federal Security Service demanded that Telegram provide the technical details necessary for decrypting communications.

Tesla CEO Again Fraudulently Promotes Dangerous FSD as Safe for Drunk Driving

The heartbreaking Washington Post account of the death of a Tesla employee underscores a significant detail: the owner was under the influence of alcohol at the time of his fatal crash, caused by the Full Self-Driving (FSD) system.

Notably, The Washington Post’s coverage meticulously substantiated Tesla’s role in the fatality: it killed an owner who believed he could trust his CEO’s outlandish statements about FSD safety.

From this harrowing low point, the response unfolds predictably.

Tesla’s CEO has attempted to discredit the narrative by perpetuating the same perilous fallacy that operating a vehicle under the influence of alcohol with FSD engaged would ensure safety.

The facts remain stark: the Tesla employee who was killed was absolutely convinced he was using FSD at the time of his crash, a detail unequivocally confirmed by The Washington Post.

More to the point, the consequences of the CEO’s engineering culture that created FSD are tragically evident: his intoxicated employee was burned to death for believing FSD would protect him.

Consider this: if the Tesla employee who died was NOT using FSD, despite thinking he had it running all the time, it implies a systemic overestimation within Tesla’s ranks regarding whether FSD was even installed, a deception even more egregious.

Tesla’s CEO unnecessarily compounds all the issues by obnoxiously perpetuating a false notion that driving under the influence with FSD is safe. He encourages further perilous behavior (while also proving the Washington Post reporting accurate). But he also undermines the very notion that FSD has any integrity at all when installed or during operation.

This situation evokes another urgent warning: Refrain from engaging with Tesla vehicles, and advocate against their usage among friends and family.

Tesla’s response is as if the CEO of Boeing had suggested, after a plane’s cabin door plug unexpectedly detached mid-flight, that passengers could prevent such occurrences by relying even more on the known defective engineering, continuing to fly as usual despite news of a serious failure.

Boeing’s planes were GROUNDED without any fatalities. Why are Teslas still allowed to operate on public roads, with hundreds already dead and the toll rising?

Nearly 80% of X Twitter Traffic Now is Fake: Explosion of Bots

Some absolutely scathing quotes from researchers exposing the total fraud now running X Twitter.

“I’ve never seen anything even remotely close to 50 percent, not to mention 76 percent,” CHEQ founder and CEO Guy Tytunovich told Mashable regarding X’s fake traffic data. “I’m amazed…I’ve never, ever, ever, ever seen anything even remotely close.”

[…]

During the comparable weekend in February 2023, fake traffic from the platform then-known as Twitter only accounted for 2.81 percent out of 159,000 visits. That’s around 72 percent less than this year…

…based on this traffic data, advertisers could potentially be paying Musk and company for visits from an audience consisting mostly of bots.

X Twitter went from 3% bots (hovering around a general industry average for bad traffic) to nearly 80% bots in just one year.
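To put the swing in perspective, here is a minimal sketch of the arithmetic, assuming the figures quoted above (2.81 percent fake out of 159,000 visits in February 2023, and roughly 76 percent fake a year later; since no 2024 visit total was reported, the comparison is between shares, not absolute counts):

```python
# Fake-traffic shares as quoted from the CHEQ data above.
share_2023 = 2.81 / 100   # Feb 2023: 2.81% of 159,000 measured visits
share_2024 = 76.0 / 100   # Feb 2024: roughly 76% per CHEQ

visits_2023 = 159_000
fake_visits_2023 = round(visits_2023 * share_2023)
print(fake_visits_2023)   # ~4,468 fake visits in the 2023 sample

# The jump is about 73 percentage points, meaning fake traffic's
# share of all measured visits grew roughly 27-fold in one year.
point_change = round((share_2024 - share_2023) * 100)
fold_change = round(share_2024 / share_2023, 1)
print(point_change, fold_change)
```

Note the distinction: the share rose by about 73 percentage points, which is a roughly 27-fold increase relative to the 2023 baseline, not a mere 72 percent change.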

Paying for visits from X Twitter is now as bad as, or worse than, paying for a Tesla.

Fraud.

Chinese Abruptly Remove Their Driverless Cars From California Roads

Some are speculating that U.S. national security concerns have spooked the Chinese, causing an abrupt halt to many foreign robots being tested on California roads.

Didi is not the only Chinese company that appears to be scaling back autonomous vehicle testing in California, or pulling out entirely.

A DMV spokesperson said that five Chinese-based companies — Baidu Apollo, Pony.ai, WeRide, Didi, and AutoX — drove around 130,000 miles on public roads in California between December 2022 and November 2023.

That’s a significant decline from the previous year, when Chinese autonomous vehicle companies conducted over 450,000 miles of testing. Didi’s vehicles only drove 4,000 miles in 2023, per BI’s calculations.

At least three other Chinese firms — Deeproute.AI, QCraft, and Pegasus Technology — which previously had licenses to test in the state, are no longer listed on the California DMV’s site.

A Deeproute.AI spokesperson told NBC that the company stopped testing in California in 2022.

Chinese-owned firms Nio, Black Sesame, and Xmotors.AI do have permits to test autonomous vehicles in California but did not record any testing activity in the last two years.

As I’ve said widely and repeatedly since at least 2016, China doesn’t need to fire ICBMs at American cities if it can just issue a simple “destroy” command to tens of thousands of road robots (driverless cars).

But on the flip side, the Chinese may worry their most advanced deployments might be embarrassingly unable to win a typical SF street brawl.

Waymo apparently was completely surprised this week when SF’s notoriously colorful Chinatown crowds destroyed its robot.

On that note, in 2016 I personally rode in a Tesla on SF roads with three other engineers, who together 1) disconnected it from the Tesla servers and instead operated the car on a rogue service, 2) injected hostile map/navigation data to throw the car off course, and 3) confirmed trivial and predictable vision sensor flaws (e.g. projected lines that force a “veered” crash into a target).

It was painfully obvious then, as a direct witness to the many security vulnerabilities, that Tesla had produced a remotely controlled explosive death trap that no country should allow on its roads. Yet for all the talks I gave and the ink I spilled, I don’t think it made enough of an impression, because Tesla kept sending more and more people to their early graves.

We’ve come a long way now, with the news from SF that a Waymo robot was just subjected to a very public safety test that it almost immediately failed, spectacularly. A sense of national security may finally be forming, insofar as the Chinese no longer see California roads as ripe for trivial remote control and exploitation.