Echoes of the Morris Worm

The SecurityWeek article "SMS Worm Hits Chinese Users Hard, Installs Android Backdoor" reports on a worm affecting a million Android phones in China (beware of sideloading). But what caught my attention was the second-to-last paragraph:

The 19-year-old college student admitted creating the malware, but claimed that he only did it for fun and to show off his skills. He didn't realize that it would spread so quickly, he told police. Li was detained in the city of Shenzhen while visiting his parents.

This sounds very similar to Robert T. Morris and the release of the first large-scale Internet worm in 1988. Be careful playing with security. Like playing with fire, it can quickly get out of control.

SEO will drive encrypted network traffic

Ars Technica's article "In major shift, Google boosts search rankings of HTTPS-protected sites" begins:

In a shift aimed at fostering wider use of encryption on the Web, Google is tweaking its search engine to favor sites that use HTTPS to protect end users' privacy and security.

and concludes:

Companies devote huge amounts of resources to search engine optimization. Those that so far have ignored calls to implement HTTPS may finally heed them if they believe it will help their pages rise above those of their competitors in the all-important Google search rankings.

SEO is a big business, so I suspect there will be no bigger carrot offered to encrypt web traffic than Google's action to favor it.

I wonder if this is a major discussion point around the water cooler at government spy agencies today? I wonder to what extent this will affect security monitoring tools and services in the next 1-2 years?

Security contractor breached

The Washington Post article "Security contractor says hit by computer breach" reports that U.S. Investigations Services (USIS), the largest contractor that carries out security checks, was breached.

An [Office of Personnel Management] OPM spokeswoman said that the agency was temporarily halting all of USIS’s background check fieldwork “out of an abundance of caution.” The spokeswoman, Jackie Koszczuk, said the hiatus will allow USIS to take “necessary steps” to protect its systems.

I wonder what "necessary steps" USIS will do to protect its systems that it wasn't already doing? And why wasn't it doing these things before?

Since USIS must collect very personal and sensitive information on people who will be given jobs with access to valuable and sensitive information, it would be an obvious target of attackers interested in financial crime and espionage.

Furthermore, I wonder if the attackers manipulated any of the data USIS collected. For example, if there was potentially damaging information about a potential future insider, could the attackers have removed that data to help the future insider get his (or her) security clearance? Could USIS determine if data it had collected was modified or deleted?

Building value takes time

In the Business Insider article "It's Pretty Clear That Apple Is Winning The War With Samsung", Jay Yarow writes:

While there are people pushing Apple to lower prices on the iPhone, it seems like Apple is doing the exact right thing by keeping it's phones priced at a premium. It has expensive, new high-end phones that generate healthy sales and profits. They also establish Apple as a premium phone maker.

Yarow misses the point. Apple has built value with its iOS operating system, hardware components like the A-series chips and Touch ID, and their seamless integration. People are willing to pay for that value, so Apple can charge premium prices.

Building unique and strong value takes time and lots of effort. As an example, Google has invested a great deal in its mapping service, and Apple learned how hard it was to duplicate that value when it chose to roll out its own Apple Maps.

As I wrote in "Samsung is screwed", I don't think any Android handset maker can expect to have healthy margins on their phones. And I can't imagine a new operating system like Tizen being able to duplicate the development efforts and ecosystems that Apple and Google have created over many years of intense efforts.

Apple can make premium phones and profit on them because of the years of investments they have made.

 

Samsung is screwed

Benedict Evans writes in the post "Unbundling innovation: Samsung, PCs and China":

It seems pretty clear now that the Android OEM world is starting to play out pretty much like the PC world. The industry has become unbundled vertically between components, devices, operating system and application software & services. The components are commoditised and OEMs cannot differentiate on software, so they are entering a race to the bottom of cheaper and cheaper and more and more commoditised products, much like the PC industry.

Years ago I read (in "The Great Game of Business"?) that for a business to succeed, it must:

  • be the low-cost provider, or
  • have something unique to offer that customers are willing to pay for

During the early, rapid expansion phase of a new product/business category, this isn't apparent. Everyone seems to be able to grow. During the late 1980s and early 1990s this was true in the PC world. In the late 2000s and early 2010s, this was true in the smartphone world.

In the expansion phase, most companies seem to make the choice that eventually puts them in a commodity business. In the short term these choices are the fastest path to market and profit. Buy your CPU, operating system, and components off the shelf; assemble into a product; market; sell; and enjoy the profits.

But it's a trap.

Once growth slows, if you aren't the low-cost provider and don't have something unique to offer, you are screwed. And if you are in a commodity business (selling PCs with Windows or handsets with Android), the only choice is to be the low-cost provider.

Samsung offers very little innovation that customers want to pay for, and with Google placing greater restrictions on changes to Android, Samsung can't even make many of those changes anymore.

The early expansion phase of smartphones is ending. If a company has nothing unique to offer, it is in a commodity business where, at best, margins are going to be very thin.

OpenGL and crawling ants

I'm having a flashback to 1985 when I took my first graphics programming class (we used this newish language called "C" and worked on a VMS system).

One of the issues we needed to address in our project was the so-called "crawling ants" problem. Thin lines slightly off the horizontal were drawn for a stretch, then not drawn, then drawn again. If you animated the scene (e.g., by changing the camera's position), those on-and-off patterns would move, looking like a line of ants crawling along a surface.

This last week I decided to take up programming on iOS, and I thought an OpenGL program would be fun.

I'm using GLKView with GLKBaseEffect and GLKTextureInfo to avoid writing my own shaders (between Apple's SceneKit and Metal, I feel a little strange spending time on OpenGL, but hey, you gotta start somewhere).

But guess what?

Crawling ants!

In theory the GLKit texture should be using linear sampling, which should blend those edges into a grayish color rather than flipping between black and white. I'm not seeing that, though.
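If the culprit is minification aliasing, the textbook fix is mipmapping with trilinear filtering. A minimal sketch of what I mean in Swift (the file name and setup are placeholders, not my actual code):

import GLKit

// Sketch only: ask GLKTextureLoader to generate mipmaps, then sample with
// trilinear filtering so distant, nearly horizontal lines average toward
// gray instead of flipping between drawn and not drawn.
let path = Bundle.main.path(forResource: "lines", ofType: "png")!  // placeholder texture
let options: [String: NSNumber] = [GLKTextureLoaderGenerateMipmaps: true]
let texture = try! GLKTextureLoader.texture(withContentsOfFile: path, options: options)
glBindTexture(GLenum(GL_TEXTURE_2D), texture.name)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR_MIPMAP_LINEAR)

GLKTextureLoader is supposed to set the minification filter itself when it generates mipmaps, so the final glTexParameteri call may be redundant, but it makes the intent explicit.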

I wonder if I am doing something wrong?

The little dashed lines move as I move through the scene, creating a crawling-ants feeling.

OS X Yosemite, iOS-ification of Macs, Retina Monitors

The WWDC banners have gone up, and The Verge has posted a nice gallery of them. The one that grabbed my attention was the OS X banner.

My interpretations of the banner: (1) the name is going to be "OS X Yosemite", (2) the styling of the 'X' suggests OS X is going to get the iOS 7 redesign treatment, and (3) Mac displays going forward are going to be retina.

My interpretation of the retina-ification of Mac monitors comes from the thin font of the 'X'. When the banners went up last year, many had the very thin '7' telegraphing the new iOS 7 design. There was a lot of discussion that such a font (and overall design) made sense because all new iOS devices had retina screens.

Right now only the MacBook Pro has a retina display. There is no retina MacBook Air or iMac, and no Apple retina displays for the Mac Pro or Mac mini.

OS X 10.9.3 supports a few 4K monitors (effectively retina displays), and from what I've heard, OS X looks gorgeous on them.

If the iOS 7 font and overall design made sense because iOS devices were retina, and if OS X 10.10 is going to follow a similar design and font strategy, I expect Apple to be pushing retina (or 4K) displays for the rest of the Mac lines over the next year.

 

Analyzing Linux Audit Data

I thought I’d give folks a quick heads-up on what I am doing: bringing Linux and Windows audit data to my monitoring tools as first-class citizens.

If you saw some of my old Network Radar presentations, I talked about building a flexible, object-oriented library of network monitoring objects, called the Network Monitoring Framework (NMF), whose pieces can be combined or extended in different ways to build network monitoring applications that meet different needs.

Several years ago I extracted part of the NMF and made that the NetSQ core framework.

Then I added the Audit Monitoring Framework (AMF), which, like the NMF, lets me combine and extend objects to create custom audit trail monitoring applications.

Monitoring frameworks and applications

My shipping software has focused on Mac’s BSM audit data, but I’ve also added Windows EVTX audit data to the library (Analyzing Windows EVTX Logs and Exfiltration of the Swift). This last month I started working with Linux audit data.

Each audit system has its own idiosyncrasies, so coming up with a common tool for an analyst can be somewhat challenging.
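For example, a single "event" in Linux audit data is scattered across multiple records (SYSCALL, CWD, PATH, ...) tied together only by the timestamp:serial pair inside msg=audit(...). Roughly, an open() of a file looks something like this (the values here are illustrative):

type=SYSCALL msg=audit(1400000000.123:4567): arch=c000003e syscall=2 success=yes exit=3 ... comm="cat" exe="/bin/cat"
type=CWD msg=audit(1400000000.123:4567): cwd="/home/alice"
type=PATH msg=audit(1400000000.123:4567): item=0 name="/etc/passwd" ...

Reassembling those records into a single event object is exactly the kind of idiosyncrasy a common tool has to hide from the analyst.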

I’ve started with Audit Viewer, and here is a sneak peek of the Linux audit analysis.

It’s wartime all the time

A NY Times story from last month, Hackers Lurking in Vents and Soda Machines, provides another data point on the changing nature of protecting your network from attackers.

“When you know you’re the target and you don’t know when, where or how an attack will take place, it’s wartime all the time,” Ms. Hallawell said. “And most organizations aren’t prepared for wartime.”

And if you have something valuable – money, credit card data, intellectual property, access to customer/partner networks – you are or will be a target.

It is a pretty good article pointing out that attackers use many vectors to get into your systems.

Cyber Warrior Evolution: From Sergeant Bag of Donuts to Cyber Fighter Pilot

When we were working in the early 1990s on the Air Force-sponsored Distributed Intrusion Detection System (DIDS), which would eventually morph into the Air Force’s ASIM global intrusion detection sensor grid, our Air Force program manager described the system’s expected user as Sergeant Bag of Donuts.

The expected user would have little to no cyber security training. We had to build a system for mere mortals.

Indeed, over the next decade, government cyber security R&D largely had the same goal: take the human out of the loop. The system had to detect all future attacks and know how to automatically respond.

Oh, and the system had to be inexpensive too.

If we were building planes, we would be building Cessna-class airplanes. The plane had to be inexpensive, simple to use, and very forgiving because the pilot/user would have minimal training.

Should we be building Cyber Cessnas or Cyber Fighter Jets?

Reality check!

Nearly 25 years later, it has become clear that this model has failed.

In a Wall Street Journal article this week, "Symantec Develops New Attack on Cyberhacking," Symantec essentially announced it is throwing in the towel on its current strategy.

Antivirus "is dead," says Brian Dye, Symantec's senior vice president for information security. "We don't think of antivirus as a moneymaker in any way.”

So Mr. Dye is leading a reinvention effort at Symantec that reflects a broader shift in the $70 billion a year cybersecurity industry.

One new model is to develop a unit of professional cyber defenders who can go from subtle electronic indicators to confirmed breach and then develop and execute a response plan.

FireEye recently paid $1 billion for Mandiant, a small firm led by former Air Force investigators who act like cyber-Ghostbusters after a data breach.

Symantec seeks to join the fray this week. It is creating its own response team to help hacked businesses.

This is a fundamental shift in cyber security. Cyberspace is now contested ground. The adversaries are professionals. Brian Krebs estimates the Target hackers made about $54 million. The US military recognized this shift several years ago when it created US Cyber Command. Virtually every major military has a similar organization.

In other words, no more Sergeant Bag of Donuts. No more Cessnas. We need to build cyber security tools for the equivalent of highly trained fighter pilots.

There are lots of issues here, not the least of which is how the new business model will work. As the WSJ points out:

Specialized cybersecurity services for businesses account for less than one-fifth of revenue and generate smaller profit margins. It would be impractical, if not impossible, to sell such services to individual consumers.

Still, we have crossed the Rubicon. There is no going back. We cannot just build the equivalent of Cessnas for weekend pilots. Cyberspace is now a world of professional warriors.

Help on OS X Help

Motivated by a recent discussion on developing help documentation for OS X applications (a HelpBook), I decided to pull together some of my personal notes over the years and post them. Hopefully my struggles will help someone.

The HelpBook structure and .plist files

The first challenge is getting the HelpBook directory tree structure right. I usually start by grabbing an existing HelpBook of mine and changing the files under the pgs directory. I store all HTML files except the top HTML file in this directory. Of course there are a few other things to change like the icon and names in the .plist file.

The second challenge is getting all the variables in the application's .plist and the HelpBook's .plist files set and properly coordinated with each other and with the file and directory names you use (this is part of the reason I re-use the same structure over and over). Below is a diagram of my directory structure (starting at my SVN trunk) and application and HelpBook directory trees.
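As a rough sketch of that coordination (the names below are placeholders, and this is from my notes rather than gospel, so verify against Apple's documentation): the application's Info.plist points at the HelpBook, and the HelpBook's own Info.plist must agree with it.

In the application's Info.plist:

<key>CFBundleHelpBookFolder</key>
<string>MyHelp.help</string>
<key>CFBundleHelpBookName</key>
<string>com.example.MyApp.help</string>

In MyHelp.help/Contents/Info.plist:

<key>CFBundleIdentifier</key>
<string>com.example.MyApp.help</string>
<key>HPDBookTitle</key>
<string>MyApp Help</string>
<key>HPDBookAccessPath</key>
<string>MyHelp.html</string>
<key>HPDBookIndexPath</key>
<string>MyHelp.helpindex</string>

The key constraint is that CFBundleHelpBookName in the application matches the HelpBook's CFBundleIdentifier, and that the folder, page, and index names match the actual files in the bundle.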

Adding an index to the HelpBook

The third challenge is adding the index to the HelpBook. Apple's help system uses this index when the user searches for words. You get the added benefit of a syntax check on your XHTML structure: one minor mistake and hiutil barfs and stops processing. Actually, by default it barfs silently, so you won't realize there is a problem. That is why you should use the "-v" option and watch the output.

I do all this from the command line. My current working directory is the HelpBook's Resources directory. Here are the commands I use:

$ hiutil -Cavf /tmp/MyHelp.helpindex English.lproj   # create the index (-C) from the pages in English.lproj
$ hiutil -Fvf /tmp/MyHelp.helpindex                  # list the contents of the new index as a sanity check
$ cd English.lproj
$ cp /tmp/MyHelp.helpindex .                         # install the index alongside the HTML pages

(Note: don't forget the '.' at the end of the last command)

Adding the HelpBook to the project

The fourth challenge is adding the HelpBook to the application. I keep my HelpBook files outside my application's development folder (but inside my SVN trunk). When adding the HelpBook (through Xcode's "Add Files"), I need to make sure “Create folder references for any added folders” is selected instead of “Create Groups”.

In my older notes I had this listed the other way around. But now, when “Create groups” is selected, the MyHelp.help icon in the Xcode file list shows as a plain folder. That is the clue that I chose poorly.

When “Create folder references” is selected, the MyHelp.help icon appears as a little HelpBook bundle icon. This is the way it should look.

XHTML entities

The fifth challenge is dealing with XHTML idiosyncrasies. I'm old-fashioned and craft my HTML by hand, which leads to a lot of potential syntax errors (hence the importance of the "-v" option to the hiutil command above). One biggie is that Apple's Help is not HTML but rather XHTML. For the most part they are the same, but there are places, in particular XHTML entities (e.g., for apostrophes and quotes), that bite me.
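The root of the problem is that XML predefines only five named entities (&amp;, &lt;, &gt;, &quot;, &apos;); named HTML entities like &rsquo; are not guaranteed to be understood, while numeric entities always are. For example:

<p>Here&#8217;s a &#8220;safe&#8221; curly-quoted sentence.</p>

Here &#8217; is the right single quote (apostrophe), and &#8220; and &#8221; are the curly double quotes.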

Help is cached

The sixth challenge is testing Help from within the application. When testing I've found one annoying problem – Apple caches the HelpBook, so changes aren't reflected when I launch Help from my application on subsequent runs. I'm sure there is a graceful way to resolve this, but I just log out and log back in. This seems to flush the cache.
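There may be a more graceful fix. I believe the cache lives under helpd's directory in ~/Library/Caches, so something like this should flush it, though I haven't verified it:

$ rm -rf ~/Library/Caches/com.apple.helpd
$ killall helpd

Logging out remains my fallback.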

Final comments

Apple's online documentation for Help (at least the last time I checked) is outdated and sometimes contradictory. Also, loading Help the first time is surprisingly slow, especially on my computers with spinning disks.

Personally, I'm very tempted to keep all documentation online in the future. Then when the user selects Help, I'll just open the web page in the user's web browser. This also makes it easier to update the documentation without needing to re-submit my app to the Mac App Store.

A Universal Instrumentation for the Network

Looking for some old stuff, I ran across a 2-page position paper (and slides) I wrote for an EU-US Cyber Trust Summit in 2006. The gist of the paper is summed up by this question:

If the Saltzer and Schroeder principles have been known for over three decades, and if operating systems and network infrastructure have supported mechanisms to enforce much of Saltzer and Schroeder throughout the network, why aren’t we taking advantage of them to build more resistant and robust networks?

Part of my recommendation:

Instrument all control surfaces to collect appropriate audit information so that each observable activity (e.g., packet observed on a wire or a write to a file) can be mapped to (1) the user that instigated the activity, (2) the person(s) who installed the relevant software, and (3) the person(s) who wrote the relevant software.

I've been working on (1) and (2) ever since. I guess I believed in my position.

How rm -rf * almost destroyed Pixar and a pregnancy saved it

I'm reading "Creativity, Inc." by Ed Catmull & Amy Wallace. There are many wonderful little stories in this book, but I want to highlight just one here, a story many of us running UNIX systems can appreciate.

When Pixar was still relatively young and working on Toy Story 2, someone entered the following command on one of the computers holding all the Toy Story 2 shots:

/bin/rm -r -f *

Any UNIX user knows the horror this command can cause, and indeed, before the execution could be stopped, two years of work – 90% of the film – had been deleted.

In a meeting an hour later:

"Don't worry," we all reassured each other. "We'll restore the data from the backup system tonight. We'll only lose half a day of work."

Then horror #2 hit: the backup system wasn't working!!!

Pixar still wasn't the powerhouse we think of it as today. It was vulnerable. This could be a real disaster for the company.

But then salvation came from a mom. Galyn Susman had recently had her second child, so she was working at home more, and to support that she had set up a system at home that copied the film database once a week.

Within a minute of her epiphany, Galyn and Oren were in her Volvo, speeding to her home in San Anselmo. They got her computer, wrapped it in blankets, and placed it carefully in the backseat. Then they drove in the slow lane all the way back to the office, where the machine was, as Oren describes it, “carried into Pixar like an Egyptian pharaoh.”

While Toy Story 2 would go through many more tribulations (you'll have to read the book), this "accidental offsite backup" defused a near-panic situation for a still-vulnerable Pixar.

I cannot help but think of the rhyme "For Want of a Nail", but in this case, it begins with a woman getting pregnant, which eventually saves Pixar's butt.


Don't Forget What NSA's Mission Is

By and large, I think the discussions coming out of the Snowden leaks have been good. And while the Heartbleed vulnerability isn't related to Snowden, I don't think the media's attention to what the NSA knew and when it knew it would be a topic today without the Snowden leaks.

Having said that, I wish journalists would, at the beginning of their articles, acknowledge that the NSA is a spy agency focusing on signals intelligence (i.e., spying on computer and telephone traffic). That is their mission. From their mission statement:

The National Security Agency/Central Security Service (NSA/CSS) leads the U.S. Government in cryptology that encompasses both Signals Intelligence (SIGINT) and Information Assurance (IA) products and services, and enables Computer Network Operations (CNO) in order to gain a decision advantage for the Nation and our allies under all circumstances.

Furthermore, NSA's page on Signals Intelligence says:

The National Security Agency is responsible for providing foreign Signals Intelligence (SIGINT) to our nation's policy-makers and military forces. SIGINT plays a vital role in our national security by providing America's leaders with critical information they need to defend our country, save lives, and advance U.S. goals and alliances globally.

Developing intelligence based on intercepting and analyzing electronic communications is what the NSA does. Discussing limits on what they can do (e.g., rules of engagement) is perfectly fine. Being surprised or offended that they do these things is nuts.

I am reminded that when I was first developing tools and performing network-based intrusion detection in the early 1990s, I was told that I might be doing something illegal.

How the NSA (and Snowden) Make the Internet Less Secure

First, I don't want to blame the NSA. I blame policy makers. The NSA was given marching orders by policy makers (and decision makers probably gladly use the data the NSA provides), and if half of what has been revealed is true, the NSA may be the most productive government organization we have.

However, I see at least three ways the NSA's activities (and Snowden's) may have made the Internet less secure.

1. Active subversion

There is evidence that the NSA weakened cryptography on the Internet by pushing a backdoored algorithm (reportedly the Dual_EC_DRBG random number generator) to become a national standard and then paying a major software supplier to make that backdoor the default algorithm used in many, many products. I wouldn't be surprised if there were a number of other examples of the NSA actively trying to weaken systems to make their primary mission of spying easier.

But even if you completely trust the NSA, such active subversion efforts can weaken your security. First, other attackers can find them. Second, security agencies have always had spy problems – Edward Snowden, Robert Hanssen, Aldrich Ames, Ana Belén Montes... We should not expect secret vulnerabilities to stay secret.

2. Passive subversion

The most recent accusation is that the NSA knew about and exploited the Heartbleed vulnerability for years. Even if this example isn't true, there is plenty of other evidence that the NSA knows about many vulnerabilities and has not told vendors about them. Again, they use this information to carry out their mission – spying.

Thus, the NSA knows we are vulnerable, but doesn't tell us. They let us remain vulnerable.

This might be one of the major reasons the NSA has pushed to be in charge of protecting the national infrastructure. They know vulnerabilities in the infrastructure that cannot be fixed because vendors don't know about them.

3. Inducement

I bet all governments looking at the Snowden revelations are jealous of the power the NSA gives the US Government. They may condemn the US, but they secretly want similar power for themselves.

Most major governments already had cyber operations. I bet most will be getting boosts in their budgets.

The NSA's activities might have removed any restraint, and Snowden lit the fire. I suspect the Snowden revelations have accelerated cyber spying activities worldwide.

Bugs Can Reveal Interesting Things

Sometimes a bug can produce more interesting results than what you originally intended.

On all my machines I run a program I wrote called Data Fence ("In Review" at Apple for 5 weeks now!). It lets you write regular expressions to define a set of data to watch. When a program you have not approved accesses a data file matching one of those regular expressions, Data Fence generates an alert with an optional audio alarm.

My regular expression and access types for Keynote version 6 documents are:
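Roughly, the pattern has this shape:

.*\.key/?

That is, any path containing ".key", optionally followed by a trailing slash.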

The reason I have a trailing "/?" and have the access marked as a directory is that Keynote version 6 documents are actually special directories called bundles. I wanted to catch any malware (well, anything I haven't approved) looking into the contents of a Keynote bundle.

The regular expression can catch someone specifying the path named "foo.key" as well as "foo.key/".

The problem with the regular expression is that it also matches directories with names like "foo.keybler" or ".keychain_reauthorize". In fact, Google triggers that last one. Here is an example from today:

So these are false positives. But they really piqued my interest because of the dramatic OpenSSL bug, Heartbleed, being reported today. Web sites need to revoke their old certificates and create new ones. This is probably one of the biggest vulnerabilities to hit the Internet in a while, and I am wondering if this ".keychain_reauthorize" directory has anything to do with Google addressing it in Chrome.

I can fix this regular expression fairly easily, but I've left it there for now because it is an interesting way to watch what Google is doing to my system behind my back. Interesting, very interesting.

The Fifth Protocol

Startup.boy posted "The Fifth Protocol," a piece on cryptocurrencies that I hope lights a fire under both computer science and economics students.

Suppose we had a QuickCoin, which cleared transactions nearly instantly, anonymously, and for infinitesimal mining fees. It could use the Bitcoin blockchain for security or for easy trading in and out. SMTP would demand QuickCoin to weed out spam. Routers would exchange QuickCoin to shut down DDoS attacks. Tor Gateways would demand Quickcoin to anonymously route traffic. Machines would bypass centralized DNS and OAuth servers, using Coins to establish ownership.

There are also real-space versions of these ideas emerging. FasTrak, the electronic toll collection system, could easily let rates fluctuate to ease congestion: want to drive into the Bay Area between 7 and 9 am? Expect to pay twice the rate you would at 6 am or 10 am. Utility smart meters can let energy rates vary minute by minute, and an app on your phone could tell you exactly how much you are spending at any moment. Suddenly washing clothes at a different time makes sense.

Cryptocurrencies, in both cyberspace and real-space, lower transaction costs, allowing value to be assigned dynamically to most activities. Combined with instant feedback to users, this creates incentives that can change behaviors and opens the door to new business models.

I believe, as The Fifth Protocol proposes, cryptocurrencies are going to be a fundamental force in cyberspace (and in real-space, IMHO).

Mesh Networking and Protecting Your Network

There is something that should scare the bejesus out of those who protect networks: mesh networking.

The general rule of thumb is that any device with two network interfaces is a router.

And guess what? Every phone in everyone's pocket is a router. Virtually all phones have cellular, Bluetooth, and WiFi radios. Generally your phone isn't acting like a router, but with wearables on the way, it soon will be. And you can always explicitly turn your phone into a router by turning on the WiFi hotspot feature. Boom! You are a full-blown router.

Now there is a new element to worry about: mesh networks.

Wireless mesh networks have been around forever, but something has changed to turn mesh networking from an interesting concept to a potential game changer: it is now built into every iOS device!

This gives mesh networking the density of devices needed to be useful. (Android devices may soon have it too.) FireChat is one of the first apps to take advantage of this feature, and I expect to see a lot more apps using this iOS capability. (See Mike Elgan's article for more.)
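The capability in question is presumably the Multipeer Connectivity framework Apple added in iOS 7, which FireChat-style apps build on. A minimal sketch of how little it takes for nearby devices to find each other (the service name is a placeholder, and the delegates that handle invitations and data are omitted):

import UIKit
import MultipeerConnectivity

// Advertise ourselves and browse for peers over Bluetooth and
// peer-to-peer WiFi. Note what's missing: no access point, no cell
// tower, no router anyone administers.
let peer = MCPeerID(displayName: UIDevice.current.name)
let session = MCSession(peer: peer)
let advertiser = MCNearbyServiceAdvertiser(peer: peer, discoveryInfo: nil,
                                           serviceType: "mesh-chat")  // placeholder service type
let browser = MCNearbyServiceBrowser(peer: peer, serviceType: "mesh-chat")
advertiser.startAdvertisingPeer()
browser.startBrowsingForPeers()
// Once peers are invited into the session (delegate code omitted),
// session.send(_:toPeers:with:) moves data directly from pocket to pocket.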

But here is the kicker for security: this creates a network into your organization's premises that completely bypasses the network controlled by the site's network/security administrators. Bypass a firewall or DLP device. Hop an air gap. With a store-and-forward capability, even tunnel through Faraday cages.

I'm waiting to see command & control (C&C) networks running over mesh networks. There have already been botnets that use peer-to-peer networking, which makes shutting them down very difficult. But those P2P botnets still use traditional networks and routers to move data around, and that at least gives you a fighting chance. Mesh networking means the C&C traffic will literally travel from pocket to pocket of employees, visitors, and random people walking by.

Maybe mesh networking will just be a fad, making a high-profile splash that quickly fades like the infamous Color app (which may be where Apple's technology came from).

But mesh networking is something all cyber security folks should keep an eye on.

Free Audit Aggregation System (FAAS), Waiting for Love

In the fall of 2012 I started working on Yet Another Log Aggregator (YALA) called the Free Audit Aggregation System (FAAS) (see concept paper). I didn't feel there was a good solution for efficiently collecting the large (multi-gigabyte) log files that Apple's BSM system can generate.

Collecting detailed audit data like BSM data is important because it can reveal activity that is invisible to many other security sensors. Not only can BSM data be used for novel detection approaches, it can also provide context to alerts generated by other sensors. For example, if a network sensor detects a suspicious network connection, the audit data can tell you what process created that connection, how that process got started, what program image it was running, how the program executable got onto your system, and what files the process read from or wrote to.

By chaining these steps together, you can build control flows (e.g., detect and watch intruders penetrate a network and then move laterally; see DIDS slides) and data flows (e.g., how did that file leave my system; see Windows audit video).

Unfortunately, FAAS got sidelined when Mavericks broke a few pieces of it and I spent time developing Data Fence (currently in review), an update to Audit Viewer (approved (yay!) and to be released this week), and PS Logger (originally part of FAAS).

Data Fence has a couple of advantages over FAAS, including distributed analysis, live analysis, and leveraging your existing SIEM infrastructure (assuming your SIEM reads syslog data and can parse XML). However, collecting the detailed audit data via something like FAAS and doing deeper back-end analysis is still extremely valuable, so I hope to thaw it out and start working on it again.

In the meantime, I thought I'd resurrect a few old videos to give you an idea of the FAAS vision. If you have some suggestions (like "Use Amazon Web Services instead of Apple Server."), contact me (a tweet works well, at least for starters). The three videos cover:

  1. Introduction to FAAS
  2. Using Log Browser to find a log of interest
  3. Applying server-side analysis to the logs

Introduction to FAAS

Using Log Browser

Example of applying server-side analysis