OS X Yosemite, iOS-ification of Macs, Retina Monitors

The WWDC banners have gone up, and The Verge has posted a nice gallery of them. The one that grabbed my attention was the OS X banner.

My interpretations of the banner: (1) the name is going to be "OS X Yosemite", (2) the look of the 'X' suggests OS X is going to get the iOS 7 redesign treatment, and (3) Mac displays going forward are going to be retina.

My interpretation that Mac monitors are being retina-ified comes from the thin font of the 'X'. When the banners went up for last year's WWDC, many had the very thin '7', telegraphing the new iOS 7 design. There was a lot of discussion that such a font (and overall design) made sense because all new iOS devices had retina screens.

Right now only the MacBook Pro has a retina display. There is no retina MacBook Air or iMac, and no Apple retina displays for the Mac Pro or Mac mini.

OS X 10.9.3 supports a few 4K monitors (effectively retina displays), and from what I've heard, OS X looks gorgeous on them.

If the iOS 7 font and overall design made sense because iOS devices were retina, and if OS X 10.10 is going to follow a similar design and font strategy, I expect Apple to be pushing retina (or 4K) displays for the rest of the Mac lines over the next year.

 

Analyzing Linux Audit Data

I thought I’d give folks a quick heads-up on what I am doing: bringing Linux and Windows audit data in as first-class citizens in my monitoring tools.

If you saw some of my old Network Radar presentations, I talked about building a flexible, object-oriented library of network monitoring objects, called the Network Monitoring Framework (NMF), whose objects can be combined or extended in different ways to build network monitoring applications that meet different needs.

Several years ago I extracted part of the NMF and made that the NetSQ core framework.

And then I added the Audit Monitoring Framework (AMF), which, like the Network Monitoring Framework, lets me combine and extend objects to create custom audit trail monitoring applications.

Monitoring frameworks and applications

My shipping software has focused on Mac’s BSM audit data, but I’ve also added Windows EVTX audit data to the library (Analyzing Windows EVTX Logs and Exfiltration of the Swift). This last month I started working with Linux audit data.

Each audit system has its own idiosyncrasies, so coming up with a common tool for an analyst can be somewhat challenging.
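
To give a flavor of the Linux side, here is a minimal sketch using the stock auditd userland tools (the watched path and key name are purely illustrative):

# watch a file for writes and attribute changes, tagging matching records
$ sudo auditctl -w /etc/passwd -p wa -k passwd-watch
# retrieve the matching records, with numeric IDs translated to names
$ sudo ausearch -k passwd-watch -i

Even this simple example surfaces one idiosyncrasy: a single logical event arrives as several coupled records (SYSCALL, CWD, PATH, ...) that have to be stitched back together by their shared event ID.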

I’ve started with Audit Viewer, and here is a sneak peek of the Linux audit analysis.

It’s wartime all the time

A NY Times story from last month, Hackers Lurking in Vents and Soda Machines, provides another data point on the changing nature of protecting your network from attackers.

“When you know you’re the target and you don’t know when, where or how an attack will take place, it’s wartime all the time,” Ms. Hallawell said. “And most organizations aren’t prepared for wartime.”

And if you have something valuable – money, credit card data, intellectual property, access to customer/partner networks – you are or will be a target.

It is a pretty good article pointing out that attackers use many vectors to get into your systems.

Cyber Warrior Evolution: From Sergeant Bag of Donuts to Cyber Fighter Pilot

When I was working in the early 1990s on the Air Force-sponsored Distributed Intrusion Detection System (DIDS), which would eventually morph into the Air Force’s ASIM global intrusion detection sensor grid, our Air Force program manager described the system’s expected user as Sergeant Bag of Donuts.

The expected user would have little to no cyber security training. We had to build a system for mere mortals.

Indeed, over the next decade, government cyber security R&D largely had the same goal: take the human out of the loop. The system had to detect all future attacks and know how to automatically respond.

Oh, and the system had to be inexpensive too.

If we were building planes, we would be building Cessna-class airplanes. The plane had to be inexpensive, simple to use, and very forgiving because the pilot/user would have minimal training.

Should we be building Cyber Cessnas or Cyber Fighter Jets?

Reality check!

Nearly 25 years later it has become clear that that model has failed.

In a Wall Street Journal article this week, Symantec Develops New Attack on Cyberhacking, Symantec essentially announced it is throwing in the towel on its current strategy.

Antivirus "is dead," says Brian Dye, Symantec's senior vice president for information security. "We don't think of antivirus as a moneymaker in any way.”

So Mr. Dye is leading a reinvention effort at Symantec that reflects a broader shift in the $70 billion a year cybersecurity industry.

One new model is to develop a unit of professional cyber defenders who can go from subtle electronic indicators to confirmed breach and then develop and execute a response plan.

FireEye recently paid $1 billion for Mandiant, a small firm led by former Air Force investigators who act like cyber-Ghostbusters after a data breach.

Symantec seeks to join the fray this week. It is creating its own response team to help hacked businesses.

This is a fundamental shift in cyber security. Cyberspace is now contested ground. The adversaries are professionals. Brian Krebs estimates the Target hackers made about $54 million. The US military recognized this shift several years ago when it created US Cyber Command. Virtually every major military has a similar organization.

In other words, no more Sergeant Bag of Donuts. No more Cessnas. We need to build cyber security tools for the equivalent of highly trained fighter pilots.

There are lots of issues here, not the least of which is how the new business model will work. As the WSJ points out:

Specialized cybersecurity services for businesses account for less than one-fifth of revenue and generate smaller profit margins. It would be impractical, if not impossible, to sell such services to individual consumers.

Still, we have crossed the Rubicon. There is no going back. We cannot just build the equivalent of Cessnas for weekend pilots. Cyberspace is now a world of professional warriors.

Help on OS X Help

Motivated by a recent discussion on developing help documentation for OS X applications (a HelpBook), I decided to pull together some of my personal notes over the years and post them. Hopefully my struggles will help someone.

The HelpBook structure and .plist files

The first challenge is getting the HelpBook directory tree structure right. I usually start by grabbing an existing HelpBook of mine and changing the files under the pgs directory. I store all HTML files except the top HTML file in this directory. Of course there are a few other things to change like the icon and names in the .plist file.

The second challenge is getting all the variables in the application's .plist and the HelpBook's .plist files set and properly coordinated with each other and with the file and directory names you use (this is part of the reason I re-use the same structure over and over). Below is a diagram of my directory structure (starting at my SVN trunk) and application and HelpBook directory trees.
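
In text form, the HelpBook side of that tree looks roughly like this (the names are from my projects and purely illustrative):

trunk/
    MyApp/                      (the Xcode project)
    MyAppHelp/
        MyHelp.help/
            Contents/
                Info.plist      (the HelpBook's .plist)
                Resources/
                    English.lproj/
                        MyHelp.html        (the top HTML file)
                        MyHelp.helpindex
                        pgs/               (all the other HTML files)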

Adding an index to the HelpBook

The third challenge is adding the index to the HelpBook. Apple's help system will use this when the user searches for words. You get the added benefit of a syntax check on your XHTML structure, because one minor mistake makes hiutil barf and stop processing. Actually, by default it barfs silently, so you won't realize there is a problem. That is why you should use the "-v" option and watch the output.

I do all this from the command line. My current working directory is the HelpBook's Resources directory. Here are the commands I use:

# build the index from the pages in English.lproj (-v lets you see hiutil's complaints)
$ hiutil -Cavf /tmp/MyHelp.helpindex English.lproj
# sanity check: list the files recorded in the index
$ hiutil -Fvf /tmp/MyHelp.helpindex
# install the index into the HelpBook
$ cd English.lproj
$ cp /tmp/MyHelp.helpindex .

(Note: don't forget the '.' at the end of the last command)

Adding the HelpBook to the project

The fourth challenge is adding the HelpBook to the application. I keep my HelpBook files outside my application's development folder (but inside my SVN trunk). When adding the HelpBook (through Xcode's "Add Files"), I need to make sure “Create folder references for any added folders” is selected instead of “Create Groups”.

In my older notes I had this listed the other way around. But now when “Create groups” is selected, the MyHelp.help icon in the Xcode file list shows a folder. This is the clue that I chose poorly.

When “Create folder references” is selected, the MyHelp.help icon appears as a little HelpBook bundle icon. This is the way it should look.

XHTML entities

The fifth challenge is dealing with XHTML idiosyncrasies. I'm old-fashioned and craft my HTML by hand, which leads to a lot of potential syntax errors (hence the importance of the "-v" option to the hiutil command above). One biggie is that Apple's Help is not HTML but rather XHTML. For the most part they are the same, but there are places, in particular XHTML entities (e.g., for apostrophes and quotes), that bite me.
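
The usual gotcha: XML predefines only five named entities (&amp;, &lt;, &gt;, &quot;, and &apos;), so HTML favorites like &rsquo; may not exist as far as the indexer is concerned. Numeric character references are the safe bet:

&#8217;            instead of &rsquo; (right single quote / apostrophe)
&#8220; &#8221;    instead of &ldquo; &rdquo; (double quotes)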

Help is cached

The sixth challenge is testing Help from within the application. When testing I've found one annoying problem – Apple caches the HelpBook so changes aren't reflected when I launch Help from my application on subsequent runs. I'm sure there is a graceful way to resolve this, but I just log out and log back in. This seems to flush the cache.
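
If you'd rather not log out, one workaround is to kill the help daemon and clear its caches. I haven't verified the exact cache location, so treat the path below as an assumption:

# assumption: helpd keeps its caches under ~/Library/Caches
$ killall helpd
$ rm -rf ~/Library/Caches/com.apple.helpd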

Final comments

Apple's online documentation for Help (at least the last time I checked) is outdated and sometimes contradictory. Also, loading Help the first time is surprisingly slow, especially for my computers with spinning disks.

Personally, I'm very tempted to keep all documentation online in the future. Then when the user selects Help, I'll just open the web page in the user's web browser. This also makes it easier to update the documentation without needing to re-submit my app to the Mac App Store.

A Universal Instrumentation for the Network

Looking for some old stuff, I ran across a 2-page position paper (and slides) I wrote for an EU-US Cyber Trust Summit in 2006. The gist of the paper is summed up by this question:

If the Saltzer and Schroeder principles have been known for over three decades, and if operating systems and network infrastructure have supported mechanisms to enforce much of Saltzer and Schroeder throughout the network, why aren’t we taking advantage of them to build more resistant and robust networks?

Part of my recommendation:

Instrument all control surfaces to collect appropriate audit information so that each observable activity (e.g., packet observed on a wire or a write to a file) can be mapped to (1) the user that instigated the activity, (2) the person(s) who installed the relevant software, and (3) the person(s) who wrote the relevant software.

I've still been working on (1) and (2) since then. I guess I believed in my position.

How rm -rf * almost destroyed Pixar and a pregnancy saved it

I'm reading "Creativity, Inc." by Ed Catmull & Amy Wallace. There are many wonderful little stories in this book, but I want to highlight just one here, a story many of us running UNIX systems can appreciate.

When Pixar was still relatively young and working on Toy Story 2, someone entered the following command on one of their computers with all the Toy Story 2 shots:

/bin/rm -r -f *

Any UNIX user knows the horror this command can cause, and indeed, before the execution could be stopped, two years of work – 90% of the film – had been deleted.

In a meeting an hour later:

"Don't worry," we all reassured each other. "We'll restore the data from the backup system tonight. We'll only lose half a day of work."

Then horror #2 hit: the backup system wasn't working!!!

Pixar still wasn't the powerhouse we think of it as today. It was vulnerable. This could be a real disaster for the company.

But then salvation came from a mom. Galyn Susman had recently had her second child, so she was working at home more. To support this, she set up a system at home that copied the film database once a week.

Within a minute of her epiphany, Galyn and Oren were in her Volvo, speeding to her home in San Anselmo. They got her computer, wrapped it in blankets, and placed it carefully in the backseat. Then they drove in the slow lane all the way back to the office, where the machine was, as Oren describes it, “carried into Pixar like an Egyptian pharaoh.”

While Toy Story 2 would go through many more tribulations (you'll have to read the book), this "accidental offsite backup" rescued a still-vulnerable Pixar from a near catastrophe.

I cannot help but think of the rhyme "For Want of a Nail", but in this case, it begins with a woman getting pregnant, which eventually saves Pixar's butt.


Don't Forget What NSA's Mission Is

By and large, I think the discussions coming out of the Snowden leaks have been good. And while the Heartbleed vulnerability isn't related to Snowden, I don't think the media's attention to what the NSA knew and when they knew it would be a topic today without the Snowden leaks.

Having said that, I wish journalists would, at the beginning of their articles, acknowledge that the NSA is a spy agency focusing on signals intelligence (i.e., spying on computer and telephone traffic). That is their mission. From their mission statement:

The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both Signals Intelligence (SIGINT) and Information Assurance (IA) products and services, and enables Computer Network Operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances.

Furthermore, NSA's page on Signals Intelligence says:

The National Security Agency is responsible for providing foreign Signals Intelligence (SIGINT) to our nation's policy-makers and military forces. SIGINT plays a vital role in our national security by providing America's leaders with critical information they need to defend our country, save lives, and advance U.S. goals and alliances globally.

Developing intelligence based on intercepting and analyzing electronic communications is what the NSA does. Discussing limits on what they can do (e.g., rules of engagement) is perfectly fine. Being surprised or offended that they do these things is nuts.

I am reminded that when I was first developing tools and performing network-based intrusion detection in the early 1990s, I was told that I might be doing something illegal.

How the NSA (and Snowden) Make the Internet Less Secure

First, I don't want to blame the NSA. I blame policy makers. The NSA was given marching orders by policy makers (and decision makers probably gladly use the data the NSA provides), and if half of what has been revealed is true, the NSA may be the most productive government organization we have.

However, I see at least three ways the NSA's activities (and Snowden's) may have made the Internet less secure.

1. Active subversion

There is evidence that the NSA weakened cryptography on the Internet by pushing a backdoor algorithm to be a national standard and then paying a major software supplier to make that backdoor the default algorithm used in many, many products. I wouldn't be surprised if there were a number of other examples of the NSA actively trying to weaken systems to make their primary mission of spying easier.

But even if you completely trust the NSA, such active subversion efforts can weaken your security. First, other attackers can find them. Second, security agencies have always had spy problems – Edward Snowden, Robert Hanssen, Aldrich Ames, Ana Belén Montes... We should not expect secret vulnerabilities to stay secret.

2. Passive subversion

The most recent accusation is that the NSA knew about and exploited the Heartbleed vulnerability for years. Even if this example isn't true, there is plenty of other evidence that the NSA knows about many vulnerabilities and has not told vendors about them. Again, they use this information to carry out their mission – spying.

Thus, the NSA knows we are vulnerable, but doesn't tell us. They let us remain vulnerable.

This might be one of the major reasons the NSA has pushed to be in charge of protecting the national infrastructure. They know vulnerabilities in the infrastructure that cannot be fixed because vendors don't know about them.

3. Inducement

I bet all governments looking at the Snowden revelations are jealous of the power the NSA gives the US Government. They may condemn the US, but they secretly want similar power for themselves.

Most major governments already had cyber operations. I bet most will be getting boosts in their budgets.

The NSA's activities might have removed any restraint, and Snowden lit the fire. I suspect the Snowden revelations have accelerated cyber spying activities world wide.

Bugs Can Reveal Interesting Things

Sometimes a bug can produce more interesting results than what you originally intended.

On all my machines I run a program I wrote called Data Fence ("In Review" at Apple for 5 weeks now!). It lets you write regular expressions to define a set of data to watch. Any access to a matching data file by a program you have not approved generates an alert, with an optional audio alarm.

My regular expression and access types for Keynote version 6 documents are:

The reason I have a trailing "/?" and have the access marked as a directory is that Keynote version 6 documents are actually special directories called bundles. I wanted to catch any malware (well, anything I haven't approved) looking into the contents of a Keynote bundle.

The regular expression can catch someone specifying the path named "foo.key" as well as "foo.key/".
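
To make the behavior concrete, picture an unanchored pattern like the following (a hypothetical reconstruction for illustration, not my exact rule, which is in the screenshot above):

\.key/?

The optional trailing slash is why both "foo.key" and "foo.key/" match, but nothing pins the match to the end of the path; as you'll see below, an end anchor such as \.key(/|$) is the easy fix.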

The problem with the regular expression is that it also matches directories with names like "foo.keybler" or ".keychain_reauthorize". In fact, Google triggers that last one. Here is an example from today:

So these are false positives. But they really piqued my interest because of the dramatic OpenSSL bug being reported today. Web sites need to revoke their old certificates and create new ones. This is probably one of the biggest vulnerabilities to hit the Internet in a while, and I am wondering whether this ".keychain_reauthorize" directory has anything to do with Google addressing it in Chrome.

I can fix this regular expression fairly easily, but I've left it there for now because it is an interesting way to watch what Google is doing to my system behind my back. Interesting, very interesting.

The Fifth Protocol

Startup.boy posted The Fifth Protocol on cryptocurrencies that I hope lights a fire under both computer science and economics students.

Suppose we had a QuickCoin, which cleared transactions nearly instantly, anonymously, and for infinitesimal mining fees. It could use the Bitcoin blockchain for security or for easy trading in and out. SMTP would demand QuickCoin to weed out spam. Routers would exchange QuickCoin to shut down DDoS attacks. Tor Gateways would demand Quickcoin to anonymously route traffic. Machines would bypass centralized DNS and OAuth servers, using Coins to establish ownership.

Real-space versions of these ideas have been emerging too. FasTrak, the electronic toll collection system, could easily allow rates to fluctuate in order to ease congestion. Want to drive into the Bay Area between 7 am and 9 am? Expect to pay twice the rate you would at 6 am or 10 am. Utility smart meters can allow energy rates to vary minute by minute, and an app on your phone could tell you exactly how much you are spending minute by minute. Suddenly washing clothes at a different time makes sense.

Cryptocurrencies, in both cyberspace and real-space, lower transaction costs, allowing value to be assigned to most activities on a dynamic basis. Combined with instant feedback to users, this creates incentives that can change behaviors and opens the door for new business models.

I believe, as The Fifth Protocol proposes, cryptocurrencies are going to be a fundamental force in cyberspace (and in real-space IMHO).

Mesh Networking and Protecting Your Network

There is something that should scare the bejesus out of those who protect networks: mesh networking.

The general rule of thumb is that any device with two network interfaces is a router.

And guess what? Every phone in everyone's pocket is a router. Virtually all phones have cellular, bluetooth, and WiFi radios. Generally your phones aren't acting like routers, but with wearables on their way, they soon will be. And you can always explicitly turn your phone into a router by turning on the WiFi hotspot feature. Boom! You are a full-blown router.

Now there is a new element to worry about: mesh networks.

Wireless mesh networks have been around forever, but something has changed to turn mesh networking from an interesting concept to a potential game changer: it is now built into every iOS device!

This gives mesh networking the density of devices needed to be useful. (Android devices may soon have it too.) FireChat is one of the first apps to take advantage of this feature. I expect to see a lot more apps using this iOS capability. (See Mike Elgan's article for more.)

But here is the kicker for security. This creates a network into your organization's premises that completely bypasses the network controlled by the site's network/security administrators. Bypass a firewall or DLP device. Hop an air gap. With a store-and-forward capability, even tunnel through Faraday cages.

I'm waiting to see command & control networks running over mesh networks. There have already been botnets that use peer-to-peer networking, which makes shutting them down very difficult. But these P2P botnets still use traditional networks and routers to move data around, and this at least gives you a fighting chance. Mesh networking will mean the C&C will literally travel from pocket to pocket of employees, visitors, and random people walking by.

Maybe mesh networking will just be a fad, making a high profile splash that quickly fades like the infamous Color app (which may be where Apple's technology came from).

But mesh networking is something all cyber security folks should keep an eye on.

Free Audit Aggregation System (FAAS), Waiting for Love

In the fall of 2012 I started working on Yet Another Log Aggregator (YALA) called the Free Audit Aggregation System (FAAS) (see concept paper). I didn't feel there was a good solution for efficiently collecting large (multi-gigabyte) log files that Apple's BSM system could generate.

Collecting detailed audit data like BSM data is important because it can reveal activity that is invisible to many other security sensors. Not only can BSM data be used for novel detection approaches, it can also provide context to alerts generated by other sensors. For example, if a network sensor detects a suspicious network connection, the audit data can tell you what process created that connection, how that process got started, what program image it was running, how the program executable got onto your system, and what files the process read from or wrote to.

By chaining these steps together, you can build control flows (e.g., detect and watch intruders penetrate a network and then move laterally; see DIDS slides) and data flows (e.g., how did that file leave my system; see Windows audit video).
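
If you want to poke at raw BSM data yourself, the stock command-line tools hint at what the frameworks build on (a minimal sketch; the trail path and event selection are illustrative):

# select network connect events from the audit trails and print one record per line
$ sudo auditreduce -m AUE_CONNECT /var/audit/* | praudit -l

Each record carries the subject (audit ID, user, process), the return status, and the socket address – the raw material for tying a connection back to the process and user behind it.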

Unfortunately, FAAS got sidelined when Mavericks broke a few pieces of it and I spent time developing Data Fence (currently in review), an update to Audit Viewer (approved (yay!) and will be released this week), and PS Logger (originally part of FAAS).

Data Fence has a couple of advantages over FAAS, including distributed analysis, live analysis, and leveraging your existing SIEM infrastructure (assuming your SIEM reads syslog data and can parse XML). However, collecting the detailed audit data via something like FAAS and doing deeper back-end analysis is still extremely valuable, so I hope to thaw it out and start working on it again.

In the meantime, I thought I'd resurrect a few old videos to give you an idea of the FAAS vision. If you have some suggestions (like "Use Amazon Web Services instead of Apple Server."), contact me (a tweet works well, at least for starters). The three videos cover

  1. Introduction to FAAS
  2. Using Log Browser to find a log of interest
  3. Applying server-side analysis to the logs

Introduction to FAAS

Using Log Browser

Example of applying server-side analysis

Are You Prepared for the NSA?

IMHO, the greatest value of the Snowden leaks is that they illustrate the threats you face. Even if the NSA and related organizations were dismantled today, you would still face the same threats and attack strategies as described in these documents.

The threats may come from other countries' state operations (almost all countries have them now), professional cyber criminal organizations, "Ocean's 11" style hacker teams, sophisticated individuals, and fully automated, multi-vector mal-systems operating perhaps independently of their creators (25 years ago Robert T. Morris accidentally released such a beast). 

The latest revelations from the Snowden leaks are a series of documents including one titled "I Hunt Sys Admins".

Of course sys admins are going to be targets!

So will your organizational partners, your suppliers and contractors, your supply chain, and your employees. I remember hearing a story where a VMS system administrator received a tape in the mail with a new OS release. It was a Trojaned operating system. VMS? Software distribution by tapes? Yes, these techniques are that old.

So while you can wring your hands and decry the NSA's activities, there is one thing you should absolutely ask yourself:

Am I prepared to deal with these types of threats?

Windows EVTX Log Format

About four years ago I added Windows' EVTX audit log analysis to my Audit Monitoring Framework (AMF) code base. AMF is the foundation library to a number of my software tools, including Audit Explorer, Data Fence, and Audit Viewer.

Unfortunately, at the time there seemed to be very little detailed information about using Windows auditing (configuring auditing and analyzing the data) and virtually nothing about the underlying binary data format of the log files in case you wanted to write your own tools. That led to a large number of experiments and reverse engineering of the data. The result of that work was two documents:

Windows 7 Auditing: An Introduction

Windows 7’s auditing system can provide a rich source of information to detect and analyze a wide range of threats against computer systems. Unfortunately, few people know this auditing system exists, much less how to turn it on and configure it. This paper provides step-by-step instructions to configure a simple audit policy useful for understanding how data was exfiltrated from the computer.

Windows 7 Security Event Log Format

Windows security event log provides a rich source of information to detect and analyze a wide range of threats against computer systems. Unfortunately Windows' tool to view these logs, Event Viewer, is extremely limited in its functionality. Furthermore, there are very few third-party analysis tools to fill the gap between what the Event Viewer provides and the potential information that can be leveraged from the security event logs. One potential reason for this gap is that the format of these event logs is poorly documented making it very difficult for third-party developers to write tools to analyze the data. This paper documents the event log format, thus providing a blueprint for developers to create native tools to analyze Windows 7 event logs. We begin by providing an overview of the format and key concepts that are needed to understand the details. Then we dive into a detailed description of the syntax of the event log format.
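
For a taste of what the paper covers: an EVTX file announces itself with the ASCII signature "ElfFile", and each 64 KB chunk inside it begins with "ElfChnk". You can see the file signature with a quick hex dump (the file name here is just illustrative):

# the first eight bytes of any EVTX file are the signature "ElfFile\0"
$ xxd -l 8 Security.evtx
00000000: 456c 6646 696c 6500                      ElfFile.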

I added the Windows 7 EVTX parsing and analysis capability into AMF, and built a number of internal tools to analyze Windows 7 audit data. I posted a write up and screenshots of an internal version of Audit Explorer analyzing the data: Analyzing Windows EVTX Logs. I also posted a video showing additional analysis tools using data flow analysis to track insiders collaborating to exfiltrate classified information (this was in 2010, well before Edward Snowden): Windows 7 Audit Trails: Exfiltration of the Swift (reproduced below).

I tried to get the government to fund additional R&D on this, but they were never interested. Maybe they didn't think insiders were a problem (cough, cough). Still, the latest version of Data Fence and Audit Viewer have the Windows audit analysis code embedded in the executables. It just isn't exposed. If there is enough interest (ping me on twitter), I'll expose the Windows analysis code.

Audit Viewer: Getting Started

Sometimes software can take an unexpectedly winding path to release.

I basically finished Audit Viewer version 1.1 a year ago, but I delayed it to wait until FAAS was released. Audit Viewer version 1.1 takes advantage of process snapshots to enhance the BSM data, and FAAS generated these snapshots. But then Mavericks broke a few pieces of FAAS, including its crypto (which is funny because Apple's crypto changes also created a huge vulnerability for Apple), and I became focused on Data Fence (currently under review at the Mac App Store).

In the meantime, I created PS Logger, so users can take advantage of process snapshots without installing all of FAAS's infrastructure. With PS Logger released, I finally figured it was time to release version 1.1 of Audit Viewer (and it too is under review at Apple right now, sigh...).

Here is a video showing how to get started with Audit Viewer. It also gives you a glimpse at how you can zoom into different levels of audit analysis. 

Is NSA's TURBINE just a high-end botnet?

The Intercept's "How the NSA Plans to Infect ‘Millions’ of Computers with Malware" by Gallagher and Greenwald describes more Snowden documents including an NSA system called TURBINE. While I encourage everyone to read the article, I kept asking myself, "Is there anything new here?"

I think the answer is "No." Most of the techniques described have been done before to one degree or another by various hacker groups. If you review the HBGary Federal documents released by Anonymous several years ago, they also described many of the same goals and techniques HBGary Federal proposed for clients. Even my 1996 paper (has it been 18 years?!) "ATTACK CLASS: ADDRESS SPOOFING" describes various spoofing strategies, including rerouting packet flows and session hijacking.

The Intercept's article is just another example of the increasing professionalization of cyberspace conflict. You can think of TURBINE and related components as a high-class botnet.

Cyberspace is a contested & valuable space. Virtually every government, criminal organization, and patriotic hacker group is developing tools, techniques, and talent to do similar things. You and your site may or may not be targets of the NSA programs, but there is a very good chance you *will* be the target of another one of these groups using similar techniques.

Patent Trial and Appeal Board

I ran across a fascinating article today in the Wall Street Journal: "A New Weapon in Intellectual Property Wars". It is about the Patent Trial and Appeal Board (PTAB). This did not exist when I served my time as an expert witness in a couple of patent lawsuits. (note: I read the physical paper version of the article and not the online version)

One of the big challenges facing an expert witness is trying to explain your argument for validity or invalidity of a patent (as well as infringement) to a jury who has no experience in the technical field. For example, for a patent to be valid, it had to be non-obvious to a "person having ordinary skill in the art" (abbreviated PHOSITA) at the time of the patent. An example of a PHOSITA might be someone with 6 or more years of training and experience in the field, perhaps someone who went through 4 years of college to get a computer science degree and then spent 2 years working in that specific field (e.g., computer security).

How do you explain to a member of the jury who has 0 years of experience in the technical field what a person who has 6 or more years of experience in the field might think? And chances are, everyone in the jury will have no experience in the technical field. For example, I was told about a computer patent case where in the initial jury pool of 40 people, only 1 person had even heard of the word "Linux".

And of course, the "other side" will have their own expert who will say everything you said is wrong.

Here is a quote from the WSJ article describing the PTAB:

"It's fast and has a whole fleet of expert judges that understand the science and know the technology."

One patent lawyer described appearing before the PTAB judges as

"getting CAT-scanned, MRI-ed, and X-rayed, all within a three-hour period"

That sounds like a completely different situation than a jury case. Fascinating. Very fascinating.

Is My Bug Report Related to "goto fail" Vulnerability?

Apple recently suffered an embarrassing security vulnerability known as the "goto fail" bug, in which a duplicated goto statement caused an SSL signature check to be skipped.
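
For reference, here is the heart of the bug, abridged from Apple's published SSLVerifySignedServerKeyExchange(). The second, unguarded goto fail is always taken while err is still 0, so the function jumps straight to cleanup and returns success without ever verifying the signature:

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;   /* the duplicate: always taken, with err still 0 */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;   /* never reached, so the hash is never finalized... */

    err = sslRawVerify(...);   /* ...and the signature is never verified */

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;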

This reminded me of a bug report I filed earlier about some changes Apple made to SSL that broke my Free Audit Aggregation System (FAAS), and I have to wonder whether the problem I was having and this "goto fail" bug intersected at some point. At a minimum, it shows Apple was breaking people's crypto code (e.g., the curl command-line program and PHP), which would have made it harder to spot the original source of the problem.

Speaking for myself, when something that should work according to the documentation doesn't, I start trying lots of things hoping to find something that does. We call these "workarounds". Perhaps the extra "goto fail" line in Apple's code was a workaround to make something else pass a test?

Below is my bug report that I posted on 25 June 2013 followed by an update added later that same day:


Original bug report 25-June-2013 11:56 AM

Summary:

When using curl (either the command-line tool or embedded in a PHP script) to connect to a web server over HTTPS that uses a self-signed certificate, passing the certificate to curl doesn't help. The connection fails.


Steps to Reproduce:

(1) Create a web server that uses a self-signed certificate. I'm using Mountain Lion with the Server App.

(2) Get a local copy of the server's certificate. I use

$ echo -n | openssl s_client -connect bigmac.lab.netsq.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > server.crt

(3) Use that certificate to connect to the web server over HTTPS via curl

$ curl --cacert server.crt https://bigmac.lab.netsq.com/


Expected Results:

The HTML for the page.


Actual Results:

curl: (60) SSL certificate problem: Invalid certificate chain


Regression:

This works properly on Lion and Mountain Lion, but it fails on 10.9 DP1 and DP2


Notes:

The workaround is to turn off checking of the server's certificate. For the curl command line, this is the -k option

$ curl -k https://your-secure-server/

For curl embedded in PHP use the following line

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);


Update 25-June-2013 01:31 PM

I added the certificate to my keychain, and now I can use curl (both command line and inside PHP).

You might want to add a developer note that passing a certificate file – via --cacert on the curl command line, or via

    curl_setopt($ch, CURLOPT_CAINFO, $certificate_file);

inside PHP – is essentially a no-op on OS X 10.9, and the certificates should be added to the keychain instead.