Category Archives: Security

US Navy Sea Mammal Training

A curious-looking sea lion approached a boat I was sailing the other day. I had a good laugh with the crew on board about how it must represent the latest Naval surveillance technology… “look out, a seagull drone is also watching”. It turns out the joke was really on us, according to a report this week by CNET.

At Pier 48 in San Francisco, the city’s police and fire departments, along with its Emergency Operations Center, conducted a drill demonstrating the ability of dolphins and California Sea Lions to help protect coastal areas from maritime attacks.

No word on seagulls, but they fit nicely into this picture. Could an octopus be trained? It would be able to operate without a mechanical clasp like the one required for a sea lion.

I have to wonder how mammals are evaluated for this job. It is not very clear from the story.

Using highly trained dolphins and sea lions selected for their quickness, intelligence, detection capability, and mobility, officials demonstrated the unique ability of these animals to identify and neutralize threats in cooperation with human teammates.

Selected? Obviously they do not enlist. Does this rule out monkeys? What about chimpanzees, pigs or birds? Are dogs the only other animal that has been drafted for US military training? How does the military account for the cost and time of training a dolphin or sea lion? An artificial shark robot seems like a more humane, and maybe more cost-effective, approach to this kind of underwater explosive detection and removal operation.

Remember Roboshark2? I have not heard anything since the big splash in 2003.

History at LSE ranked #1

I was just informed that my Alma Mater, the International History department at LSE, has been ranked #1 in the 2011 Complete University Guide.

It was given an overall score of 100 out of 100 possible points. Congrats LSE. Go Beavers!

Oxford was second with a score of 99.8. It is hard to understand how Durham ended up third with higher graduate prospects and student satisfaction than Oxford, but perhaps research assessment and entry standards carry more weight?

LSE was an excellent experience for me, as I studied international security during the Cold War in Asia, Africa and Europe. My thesis was on defense ethics strategy, (dis)information warfare, and the long-term global security impact of military occupation of the Horn of Africa:

Anglo-Ethiopian Relations 1940-1943: British military intervention and the return to power of Emperor Haile Selassie

When asked about my transition from a history background to information security, I highlight two key points:

  1. Taxonomy of Authority: At its core, security is about tracking and analyzing events – who did what, where, and when. This mirrors the historical method of studying and interpreting past events. As a historian, I analyzed written accounts to construct coherent narratives. In security, I apply the same analytical skills to computer logs and digital data. Both fields require critical thinking to assess risks based on past vulnerabilities and threats. It’s no coincidence that many security professionals, especially in the military, have a keen interest in history.
  2. Case Study: Britain’s 1940 invasion and occupation of Ethiopia offers valuable lessons for modern complex security challenges. This mission aimed to establish stability while respecting Ethiopia’s sovereignty — a delicate balance given Britain’s imperial past and substantially weakened future. The outcomes of this intervention provide insights relevant to recent Western operations in countries like Afghanistan and Iraq. The post-WWII Western policy in the Horn of Africa ultimately failed to ensure regional security. Instead, it precipitated revolution, invited territorial war (with Somalia) and fueled the rise to power of an anti-American military party (the Derg). The resulting instability and reduced Western influence continue to create security challenges today, such as piracy and terrorist safe havens. This historical case study demonstrates how understanding past events can inform current security strategies and risk assessments. It illustrates the transferable skills between historical analysis and information security: the ability to analyze complex situations, identify patterns, and draw actionable insights from past events.

In essence, my background in international history at LSE honed my skills in event analysis and reporting — capabilities fundamental to information security and risk management, which form the bedrock of computer security.

XLlpX&submit_button=Search

This seems to be a popular search:

XLlpX&submit_button=Search

Sometimes it is just this:

XLlpX

Could this be meant to be XLSX, as in the flaw in Microsoft’s decompression of XLSX files?

The vulnerabilities could allow remote code execution if a user opens a specially crafted Excel file. An attacker who successfully exploited any of these vulnerabilities could gain the same user rights as the local user.

The problem stemmed from a lack of validation of the ZIP header when the XML was decompressed. This allowed memory to be corrupted and then remote code to be executed. Seven vulnerabilities were reported in July of 2009, and Microsoft released a fix in March 2010 with MS10-017.
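For illustration, here is a minimal sketch in Python of the kind of check that was missing. This is not Microsoft’s parser; it simply validates the fixed 30-byte local file header of a ZIP container (which is what an XLSX file really is) and bounds-checks the declared sizes before anything is decompressed. The file name and the size cap are assumptions for the example.

```python
import struct

# ZIP local file header: 30 fixed bytes before the variable-length
# filename and extra field (per the PKWARE APPNOTE layout).
LOCAL_HEADER = struct.Struct("<4sHHHHHIIIHH")
LOCAL_MAGIC = b"PK\x03\x04"

MAX_UNCOMPRESSED = 100 * 1024 * 1024  # arbitrary sanity cap for this sketch

def check_local_header(data: bytes, offset: int = 0) -> dict:
    """Validate one ZIP local file header before decompressing anything.

    This is the class of validation MS10-017 addressed: never trust
    size fields in the header without bounds-checking them first.
    """
    header = data[offset:offset + LOCAL_HEADER.size]
    if len(header) < LOCAL_HEADER.size:
        raise ValueError("truncated local file header")

    (magic, version, flags, method, mtime, mdate,
     crc32, csize, usize, name_len, extra_len) = LOCAL_HEADER.unpack(header)

    if magic != LOCAL_MAGIC:
        raise ValueError("bad local file header signature")
    if method not in (0, 8):          # stored or deflate only
        raise ValueError("unsupported compression method")
    if usize > MAX_UNCOMPRESSED:      # reject absurd declared sizes
        raise ValueError("declared uncompressed size too large")
    if offset + LOCAL_HEADER.size + name_len + extra_len + csize > len(data):
        raise ValueError("compressed data runs past end of file")

    return {"method": method, "csize": csize, "usize": usize}

if __name__ == "__main__":
    # "workbook.xlsx" is a placeholder file name for this sketch.
    with open("workbook.xlsx", "rb") as f:
        print(check_local_header(f.read()))
```

The point is simply that every field in the header is attacker-controlled input and has to be treated that way before memory is allocated against it.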

Not XLlpX, but similar.

Kim Cameron on Google Wifi Surveillance

A storm is brewing between Kim Cameron and Ben Adida. I do not mean to step into the middle of their dispute over the Google Wifi surveillance fiasco. Both have good arguments to make.

I noticed, on the other hand, that Cameron makes a huge error in today’s blog post, “Misuse of network identifiers was done on purpose”.

SSID and MAC addresses are the identifiers of your devices. They are transmitted as part of the WiFi traffic just like the payload data is. And they are not “publically broadcast” any more than the payload data is.

Actually, the SSID and MAC are broadcast more publicly, in my view, since they are part of the handshake required both to negotiate a connection and to avoid collisions. It is like the front door of a house that has a unique number on it for everyone to see, so they know whether it is the right door or not.

There is an argument to be made that doors should be able to be hidden, but the implementation of WiFi thus far does not offer any real way to hide them. So I say they are more public than data, which lives behind the door.
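Anyone who doubts how public these identifiers are can simply watch them go by. Here is a minimal sketch using scapy, assuming a wireless card already in monitor mode (the interface name wlan0mon is a placeholder), that prints the MAC (BSSID) and SSID from the beacon frames every access point broadcasts to anyone in range:

```python
# Requires scapy and a wireless card in monitor mode; run as root.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

seen = set()

def show_beacon(pkt):
    """Print the MAC (BSSID) and SSID advertised in each beacon frame."""
    if not pkt.haslayer(Dot11Beacon):
        return
    bssid = pkt[Dot11].addr2                 # transmitter MAC of the AP
    ssid = pkt[Dot11Elt].info.decode(errors="replace") or "<hidden>"
    if bssid not in seen:
        seen.add(bssid)
        print(f"{bssid}  {ssid}")

# "wlan0mon" is a placeholder interface name for this sketch.
sniff(iface="wlan0mon", prn=show_beacon, store=False)
```

No connection, no authentication, no “capture” of anyone’s payload: the access point volunteers these identifiers to every radio within earshot.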

The identifiers are persistent and last for the lifetime of the devices. Their collection, cataloging and use is, in my view, more dangerous than the payload data that was collected.

That simply is not accurate.

The MAC and SSID can be easily changed. I just changed them on my network. I just changed them again. Did you notice? No, of course not. Anyone can change their MAC to whatever they want, and the SSID is even easier to change. I do not recommend fiddling with the SSID if you have a good one, since it can be seen anyway and doesn’t offer much information to an attacker. My only recommendation is to think about what you are advertising with it. Changing the MAC is nice because you can hide the true identity of your device. Attackers would think it’s a Cisco, for example, when it really is a D-Link or SMC.
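To illustrate how little effort that takes, here is a hedged sketch for Linux with iproute2. The Cisco OUI 00:00:0C is a real vendor prefix, but the interface name is a placeholder and this is an example, not an endorsement of any particular tool:

```python
# Sketch: spoof a MAC behind a Cisco OUI so scanners report "Cisco".
# Assumes Linux with iproute2; run as root. Interface name is a placeholder.
import random
import subprocess

CISCO_OUI = "00:00:0c"  # a well-known Cisco vendor prefix

def spoofed_mac(oui: str = CISCO_OUI) -> str:
    """Return a MAC address with the given vendor OUI and random NIC bytes."""
    nic = ":".join(f"{random.randint(0, 255):02x}" for _ in range(3))
    return f"{oui}:{nic}"

def set_mac(iface: str, mac: str) -> None:
    """Apply the MAC with iproute2 (interface must be down to change it)."""
    subprocess.run(["ip", "link", "set", "dev", iface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", iface, "address", mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", iface, "up"], check=True)

if __name__ == "__main__":
    mac = spoofed_mac()
    print("applying", mac)
    set_mac("wlan0", mac)  # "wlan0" is a placeholder interface name
```

Any passive scanner that maps the OUI to a vendor will now log a Cisco device, which is exactly why treating these identifiers as persistent is a mistake.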

I am no fan of Google’s response on this issue. A defense centered around “our engineers did not know what they were doing” or “we didn’t realize that Wifi scanning would collect all this data” is hard to believe. Who in the world writes surveillance software without factoring in the risk to privacy and security? That seems rather obvious, which might be why Congress is getting involved.

“As we have said before, this was a mistake. Google did nothing illegal and we look forward to answering questions from these congressional leaders,” a Google spokesperson said in an email.

A mistake? The first thing anyone should do when turning on WiFi capture tools is verify they will not be in violation of ethics and law. Their defense reads as though they still do not think the surveillance violated any law. This could have a major impact on the security industry and raises the question of ethical data capture.

Google really needs to explain themselves in terms of the basics. They are being asked “why did you just drive to all the homes in town and take photos through the windows?” If they answer “we had no idea we actually were taking pictures of anything” you are right to be suspicious. That’s what cameras do…

Over the years Google has told me their organization is flat and all the engineers are just expected to do the right thing. They throw their hands up at the idea of someone telling anyone else what is or is not allowed. They balk at the idea of anything organized or top down. They are open to passive education and seminars but nothing strict like what is needed for compliance — thou shalt not wiretap without authorization, thou shalt disable SSLv2. Compliance means agreeing with someone else on what Google can and can not do; that is a hard pill to swallow for an engineering culture that prides itself on being better than everyone else.

Although it is easy to see why they are trying to emulate the academic model they grew out of (all schools tend to struggle with the issue of how to secure an open learning environment), they are clearly not a learning institution. Aside from modeling the “academic peer review” process into a search algorithm, they have completely left the innocence of a real campus behind. Their motives are not anywhere close to the same.

Collecting personal data for academic research in a not-for-profit organization could make sense somewhere to someone as a legal activity. Taking profits from collecting personal data to fund surveillance software that runs around the world to collect personal data…uh, hello?

A naive political structure around security management and a lax attitude toward consumer privacy are what will really come into focus now at Google. They need a security executive: someone to explain and translate right and wrong (compliance) from the outside world into Google-ese, so their masses of engineers can make informed decisions about the laws they run up against, instead of violating them first and only then trying to figure out whether they should care.

New regulations and lawsuits may help. I am especially curious to see whether this will alter surveillance laws. Google of course may plan just to dismiss the results as more outsider/non-Google concepts. The big banks are known to have a capital offset fund to pay fines rather than comply. Large utilities are known to have a “designated felon” who goes to jail on their behalf rather than comply. It is definitely worth noting that executives at BP said the $87 million OSHA fine for safety violations last year did nothing for them. It did not register as enough pain to change their culture of risk and save employee lives — it certainly did not prevent the Gulf spill after the loss of a $500 million rig. The spill itself is said so far to have cost BP $930 million and the government $87 million. The damage to the economy and environment is yet to be determined. Will BP change?

Although Cameron is wrong on technical points, he is right in principle. The Google Wifi surveillance issue has exposed what appears to be a systemic failure by Google management to instill privacy and security due diligence in its engineering practices.