Cyber Pearl Harbor: Did you miss it?

Google "cyber Pearl Harbor", and Google identifies tens of thousands of documents with that phrase. I've been hearing the phrase for almost as long as I have been in cyber security. It is almost always used with the sense that the cyber equivalent of the Japanese attack on Pearl Harbor is just around the corner.

But I claim that it has already happened. Or more precisely, the Information Operation Pearl Harbor has already happened, and cyber attacks played a significant role.

17 US intelligence agencies agreed that Russia actively interfered with the US elections [NPR], and, according to former Director of National Intelligence James Clapper, Russia did so to help Donald Trump get elected [CNN].

To understand Russia's Information Operations, I strongly urge everyone to watch Laura Galante's (@LauraLGalante) excellent TED Talk "How (and why) Russia hacked the US election".

And to gain a greater understanding of how you (and your neighbors) can be targeted by a skillfully executed information operation, read this article about the targeted information being collected about you in order to shape how you think and perceive the world.

A Republican contractor’s database of nearly every voter was left exposed on the Internet for 12 days, researcher says

The lapse in security was striking for putting at risk the identities, voting histories and views of voters across the political spectrum, with data drawn from a wide range of sources including social media, public government records and proprietary polling by political groups.
Chris Vickery, a risk analyst at cybersecurity firm UpGuard, said he found a spreadsheet of nearly 200 million Americans on a server run by Amazon's cloud hosting business that was left without a password or any other protection. Anyone with Internet access who found the server could also have downloaded the entire file.
...
In all, the leaked files amount to more than 1,000 gigabytes of data — more than four times the size of any previous breach of this type, according to Vickery. The exposed data also contained records of voters' views on specific issues including gun control, abortion and environmental issues, he said. Overall, Vickery said, there were billions of data points and 170 GB of social media posts scraped from Reddit alone.
...
"They're using this information to create political dossiers on individuals that are now available for anyone," said Jeffrey Chester, executive director of the Center for Digital Democracy. "These political data firms might as well be working for the Russians."

Democracies are very vulnerable to information operations, and Putin has figured this out. Why should an enemy drop bombs, as the Japanese did at Pearl Harbor, when it can achieve its political goals through information operations?

Strange Certificate Warning

This morning I received the following alert on my iPhone (and yes, I do have 708 unread emails).

While the alert complained about the identity of "prc.apple.com", when I clicked on "Details", it referred to "instant.arubanetworks.com". Was the prc.apple.com certificate signed by arubanetworks?

I clicked on "More Details", and the following Subject Name and Issuer Name information popped up.

Scrolling down, I see the details of the signing algorithm: SHA-1.

Given that SHA-1 has been broken in practice (see Google's "Announcing the first SHA1 collision"), an untrusted certificate alert involving a SHA-1 certificate had me concerned.

In the end, as is too often the case in security, I was left with lots of questions.

  • Was there a verification problem with the "prc.apple.com" certificate or "instant.arubanetworks.com" certificate?
  • Was the prc.apple.com certificate faked and signed with a bad instant.arubanetworks.com certificate?
  • Was there a man-in-the-middle attack?
  • Was Apple's software just buggy and in reality everything was fine?
  • Was there a problem, but Apple's alert identified the wrong certificate that was having problems?
  • What the heck, Aruba? You are all about the IoT, and you use SHA-1? Don't we already have enough troubles with IoT?
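For anyone wanting to poke at this themselves, here is a minimal sketch for pulling a server's certificate and reporting which hash algorithm signed it. It assumes Python with the third-party cryptography package, and the target host is purely illustrative.

```python
# Sketch: fetch a server's certificate and report its signing hash algorithm.
# Assumes the third-party "cryptography" package; the host is illustrative.
import ssl
from cryptography import x509

host = "instant.arubanetworks.com"  # hypothetical target for illustration
pem = ssl.get_server_certificate((host, 443))  # retrieves even untrusted certs
cert = x509.load_pem_x509_certificate(pem.encode())

print(cert.subject.rfc4514_string())       # Subject Name
print(cert.issuer.rfc4514_string())        # Issuer Name
print(cert.signature_hash_algorithm.name)  # e.g., "sha1" or "sha256"
```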


xfinity gets stuck, a lot

(Updated with additional videos on 2017-03-27)

I have had a lot of problems with my xfinity television service lately, where the video portion gets stuck on an image even though the audio keeps playing. I can use the controller to select different channels or select a pre-recorded video to watch, and while the audio changes, the video never changes.

I am encountering this about once a day, often when trying to watch Bloomberg. I suspect the Bloomberg channel isn't a plain video signal but an application (with an embedded video), and that the application is buggy. Whatever the case, I hope xfinity fixes this soon.

Rebooting my cable box every day (sometimes multiple times per day) is not an acceptable fix.

The following video shows the problem (watch with audio on).

Comcast freezes 2017-02-25. This is getting tiring.
Comcast cable box frozen again. 2017-03-26


Smart AI hackers - why is it taking so long?

In 1988 I watched the Morris worm work its way through our University computers and through the computers of most of the people I interacted with on a professional basis. The Morris worm used numerous vectors (see "The Internet Worm Program: An Analysis") and a few evasion techniques to work its way through roughly 10% of the Internet (incidentally, "decimate" refers to the killing of one-tenth of a group, so technically the Morris worm decimated the Internet).

About two years later I started playing with Robert Baldwin's SU-Kuang, bundled with Dan Farmer's COPS security checker system. SU-Kuang, named after the Kuang Grade Mark Eleven penetration program in William Gibson's Neuromancer, showed how an attacker with an initial set of capabilities could, through knowledge of the UNIX security model, acquire more and more capabilities until achieving some particular goal, such as becoming root or having the right privileges to access or modify a certain file. I learned a lot about the UNIX security model (and how one could hack UNIX systems) from Kuang.
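The core idea is simple enough to sketch. Below is a toy, hypothetical version of the Kuang-style search; the rules are invented stand-ins for the UNIX security model, not Baldwin's actual rule base. It keeps applying "if you have these capabilities, you can gain this one" rules until the goal is reached or nothing new can be added.

```python
# Toy Kuang-style capability search; the rules are invented for illustration.
rules = [
    ({"write:~victim/.login"}, "exec-as:victim"),
    ({"exec-as:victim", "victim-in-group:wheel"}, "gid:wheel"),
    ({"gid:wheel", "group-writable:/etc/passwd"}, "uid:root"),
]

def can_reach(start, goal):
    caps = set(start)
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for pre, gained in rules:
            if pre <= caps and gained not in caps:
                caps.add(gained)            # acquire a new capability
                changed = True
    return goal in caps

print(can_reach({"write:~victim/.login", "victim-in-group:wheel",
                 "group-writable:/etc/passwd"}, "uid:root"))   # True
```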

In 1996 a colleague, Dan Zerkle, extended the Kuang approach to an entire network (see "NetKuang - A Multi-Host Configuration Vulnerability Checker"). It was disturbingly effective. And there were a number of other systems designed to automatically find their way through systems (see our 2004 report "A Taxonomy for Comparing Attack-Graph Approaches").

In 2004 I gave a talk at the Naval Postgraduate School where I discussed how (free) computer chess games could beat the pants off most human players. Given that the multi-vector Morris worm and Kuang systems, as well as the original fuzz tool, were developed 16 years earlier (1988 was an amazing year), I suspected that "GNU-Chess of MalCode" programs would become the new adversaries and that they would, like their computer chess counterparts, be more effective than almost all humans playing the game.

GNU-Chess of MalCode

Well, it has been another 12 years since I predicted the "GNU-Chess of MalCode" (it is easier to predict the future than to predict when the future will arrive), and with DARPA's Cyber Grand Challenge we are starting to see the beginnings of such systems. The EFF was very concerned, and Engadget laughed at the EFF.

I believe the threat is real.

Interestingly, also in 2004 DARPA held its first self-driving car Grand Challenge. Not a single car finished. The best car only made it about 7 miles. Today cars by Google and Tesla have driven themselves millions of miles on America's roads and highways.

I have to wonder how sophisticated automated malware will be 12 years from now.

Removing the headphone jack from iPhone - it's all about the watch

The most persistent rumor about the iPhone 7 has been the removal of the headphone jack. There has been a lot of hostility and ink spilled on this topic, like Nilay Patel's "Taking the headphone jack off phones is user-hostile and stupid". And there has been lots of speculation as to why Apple is making this move, including an effort to make the phone thinner or more waterproof, or just because this is how Apple rolls (floppy disk, CD, etc.).

However, I think one rarely mentioned reason for this rumored move is to drive the adoption of wireless headphones that will work with the Apple Watch.

Whereas the strongest rumor about the phone has been the removal of the headphone jack, the strongest rumor about the next Apple Watch has been the inclusion of GPS. Secondary rumors include a barometer, superior waterproofing, and a higher-capacity battery (see MacRumors article). This is all about making the Apple Watch more of a fitness device. Greater ubiquity of wireless headphones will increase the appeal of the Apple Watch as a stand-alone fitness device. You will no longer need to run, ride, or work out with a giant phone strapped to your arm.

I honestly believe that Apple sees widespread adoption of the Apple Watch as one of the most powerful ways Apple can improve lives. You see this in companies it has acquired, like Gliimpse (The Verge article), and in the frameworks Apple provides developers, like HealthKit, ResearchKit, and CareKit. In 2015, when Jim Cramer asked Tim Cook about next frontiers, Cook said health was "the biggest one of all" (MacRumors article).

A thinner phone, a more waterproof phone, edge-to-edge OLED screens - these are tweaks to an already incredible device. They won't fundamentally change your life.

It is improved health, with Apple Watch spearheading that effort, that is a driving passion at Apple, or at least with Tim Cook. And I believe that the push for wireless headphones that work with Apple Watch is just one small but important part of Apple's larger health strategy.

EINSTEIN and the Movie “Zero Days”

I recently watched the movie “Zero Days” via the iTunes Store, and the movie included a couple of shots inside the Security Operations Center (SOC) of the Department of Homeland Security (DHS). In these shots we see a glimpse of DHS’s network monitoring system called EINSTEIN (Wikipedia, DHS). The segment covering the DHS SOC starts at about 1 hour and 26 minutes into the movie.

EINSTEIN is on its third major release. EINSTEIN 1 appears to be a basic NetFlow-class analyzer with anomaly detection, perhaps done in batch mode. EINSTEIN 2 added signature capabilities via threat intelligence feeds. EINSTEIN 3 adds blocking capability plus the ability to move the sensor farther upstream (e.g., running on ISP or backbone traffic).
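To make my reading of that evolution concrete, here is a purely illustrative sketch of how the three generations might stack up on a single flow record. All names, thresholds, and indicators here are mine, not DHS's.

```python
# Illustrative only: my reading of the EINSTEIN generations, not DHS code.
SIGNATURES = {"198.51.100.7"}        # hypothetical threat-intel indicator

def einstein1_anomalous(flow, mean_bpp, std_bpp):
    bpp = flow["bytes"] / flow["packets"]
    return abs(bpp - mean_bpp) > 2 * std_bpp   # E1: flow anomaly detection

def einstein2_signature_hit(flow):
    return flow["dst"] in SIGNATURES           # E2: intel-driven signatures

def einstein3_action(flow, mean_bpp, std_bpp):
    if einstein2_signature_hit(flow):
        return "block"                         # E3: inline blocking
    if einstein1_anomalous(flow, mean_bpp, std_bpp):
        return "alert"
    return "allow"

flow = {"dst": "198.51.100.7", "bytes": 5400, "packets": 100}
print(einstein3_action(flow, 600.0, 120.0))    # -> "block"
```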

Below are my analyses (guesses) of some of the EINSTEIN information shown at the SOC.

Table View

Figure 1 (from "Zero Days") shows a summary of sensor information coming into DHS. Each row represents a single sensor. The following is my interpretation of the fields displayed.

Figure 1

  • B/Pkt. This first visible column appears to be an average of bytes per packet. An empty packet (one with no payload) typically runs about 54 bytes (plus or minus, depending on a number of factors), and the maximum packet size is about 1500 bytes (depending on the physical networking technology). An average of about 600 bytes per packet seems about right.
  • B/Pkt ZScore. The second column appears to show how anomalous the current bytes-per-packet rate is. If DHS is assuming their data is Gaussian distributed, a ZScore is the number of standard deviations the current value is from the mean (a quick sketch of the computation follows this list). The bytes per packet for the first and third rows appear fairly normal, but the second row appears to be on the low side (2 standard deviations below the average). A low B/Pkt ZScore can be caused by a high number of empty packets, as you would see if your site were being scanned or were the focus of several types of DDoS attacks (e.g., a SYN flood). It could also be caused by random fluctuations: a ZScore of -2 (or lower) should be seen about 2.28% of the time.
  • Current Bytes. This column is harder to interpret on its own, but given that the cell in the top row has the same value as the cell two columns over, my guess is Current Bytes is simply Bytes per Hour. We don't know the actual value since the ellipses suggest there are more digits we are not seeing.
  • Average Bytes. This column is also hard to interpret. As with the previous column and the next, the actual number is not known because some digits are missing from the display.
  • B/Hr. This column appears to be Bytes per Hour, an overall traffic rate. It is unclear what technique they are using. For example, are they using an exponentially decaying average? And if so, what half-life?
  • P/Hr. This column appears to be Packets per Hour, another measure of traffic rate.
  • Location. The far right column is named "Location", and while we cannot see any values here, another shot in the movie shows values for this column including "Springfield, …", "Washington, …", "Reston, VA", and "Plano, TX". Yet another shot shows a column named "Agency", with values like "CBP", "USDA", and "VA", and a neighboring column named "Collector" with values like "CBP3", "USDA7", and "VA1". From these values, each row appears to represent a single sensor at a government site, and "Location" is the city where that site is located.

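As promised above, here is a quick sketch of the z-score computation I believe the "B/Pkt ZScore" column reflects. All of the numbers are made up for illustration.

```python
# Sketch of the presumed B/Pkt ZScore computation; all values are made up.
from statistics import mean, stdev

history = [612.0, 598.4, 605.1, 590.7, 611.3, 602.9]  # past B/Pkt samples
current = 585.0                                # an hour with smaller packets

zscore = (current - mean(history)) / stdev(history)
print(f"B/Pkt ZScore: {zscore:+.2f}")          # -> -2.28 with these numbers
```
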
From this table, it appears the only value whose anomaly we can judge is Bytes per Packet. For example, if traffic volume increased or decreased dramatically, it would not be obvious from this view. A "B/Hr ZScore" or "P/Hr ZScore" column could flag those conditions. However, the next figure can help visually spot anomalous traffic rates.

Traffic Rates View

Figure 2 shows traffic volume (i.e., bytes per hour or packets per hour) displayed on a protocol by protocol basis.

Figure 2

Trying to read the fuzzy image, the left column appears to be displaying traffic rates for

  • Total (all traffic)
  • ICMP - Internet Control Message Protocol
  • IGRP - Interior Gateway Routing Protocol
  • RSVP - Resource Reservation Protocol
  • Other - all other traffic that is not part of the named protocols

And the right column appears to be displaying traffic rates for

  • HOPOPT - IPv6 Hop-by-Hop Option
  • TCP - Transmission Control Protocol
  • UDP - User Datagram Protocol
  • ESP - Encapsulating Security Payload

My guess is that the red color represents outbound traffic rates and the blue color represents inbound traffic rates (or vice versa).

What is interesting is the five large peaks for TCP and UDP, equally separated in time and of equal volumes. These peaks are reflected in the "Total" chart as well, indicating, not surprisingly, that TCP and UDP dominate the total amount of traffic. These large, rhythmic spikes may indicate a large amount of traffic generated by an automated process that spanned both UDP and TCP. Perhaps it was some type of network scan or a multi-vector DDoS attack.

Summary

Admittedly, this is a lot of guesswork trying to interpret two still frames from the movie "Zero Days", but this is the first time I've seen any images from DHS's EINSTEIN monitoring system. Furthermore, the video segment appears to be part of a public or press tour of the DHS SOC, so any truly revealing images of EINSTEIN would not have been displayed.

In particular, there is a scene showing a large blinking red light on the ceiling. I think the director was trying to convey "Red Alert! Something bad is happening!" However, as someone without a security clearance, I know that lights like these are turned on when people like me are in the building, telling everyone to cover up sensitive information and avoid discussing sensitive topics. In other words, the blinking red light was on because the film crew was in the building.

a16z podcast provides a great history of AI and machine learning

I hate the term "artificial intelligence" because "artificial" implies that it is fake. Unfortunately, if you look at the history of this field, much of the early work was indeed artificial. Researchers would manually program their systems to achieve certain tasks, but the systems did not absorb feedback from their experience to adjust how they would behave in a future situation. They were static systems.

In my mind, "machine learning" is about the "learning". There is nothing artificial about it. The system may not necessarily be learning how humans learn, but it is learning. However, the terms AI and machine learning are still largely used interchangeably.

The field has gone through many twists and turns and several AI winters, but the early promises appear to be finally coming true. I believe three factors were necessary to make this happen: true learning systems that use feedback, significant computational horsepower, and lots and lots of data. A short 2.5-page paper in 1986 on back-propagation of errors helped address the first factor, but it wasn't until the recent emergence of the cloud, along with graphics processing units (GPUs), that the last two factors could be addressed. It wasn't until about 2012 that these factors started to come together.
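To make the "learning from feedback" point concrete, here is a toy sketch of gradient descent on a single weight, the core idea that back-propagation generalizes to whole networks of weights.

```python
# Toy example of learning from feedback: fit w in y = w*x by gradient descent.
w = 0.0
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # targets follow y = 2x

for epoch in range(200):
    for x, y in samples:
        error = w * x - y        # feedback: how wrong the model currently is
        w -= 0.05 * error * x    # adjust the weight against the error
print(round(w, 3))               # converges toward 2.0
```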

The a16z podcast AI, Deep Learning, and Machine Learning: A Primer provides a great 45-minute history of and introduction to the field, including a nice demo of Google's TensorFlow at about the 25:45 mark in the video. The book The Master Algorithm covers similar ground in more depth.

All Disruption and No Nurturing Leads to Half-baked Solutions

In Tony Fadell's announcement that he is leaving Nest, he said (emphasis added):

Although this news may feel sudden to some, this transition has been in progress since late last year and while I won’t be present day to day at Nest, I’ll remain involved in my new capacity as an advisor to Alphabet and Larry Page. This will give me the time and flexibility to pursue new opportunities to create and disrupt other industries – and to support others who want to do the same – just as we’ve done at Nest. We should all be disrupters!

The trouble with focusing on disruption, and indirectly dismissing everything else as a lesser cause, is that you end up with a bunch of crappy, half-baked solutions. Transforming the world requires Crossing the Chasm that separates the small populations of innovators and early adopters from the far larger populations of the early and late majorities. It is only once you reach these populations that transformation can truly occur. Crossing that chasm takes patient nurturing of the technology, and that may take years and seem boring to a disruptor like Fadell.

Ars Technica's article Nest's time at Alphabet: A "virtually unlimited budget" with no results put it this way:

Two-and-a-half years under Google/Alphabet, a quadrupling of the employee headcount, and half-a-billion dollars in acquisitions yielded minor yearly updates and a rebranded device. That's all.

Well, actually it yielded more: a major safety recall of its Protect smoke alarm and the disabling of one of that product's major features, the shutdown of the Revolv servers that effectively bricked customers' $300 devices, and what felt like a never-ending stream of bad press tarnishing the Google/Alphabet brand.

Tony Fadell and the tech industry in general need to stop worshipping disruption and also focus on nurturing the new technologies until they can cross the chasm and truly transform the world.


Earth's micro Rama or Kamin's flute

The New York Times article A Visionary Project Aims for Alpha Centauri, a Star 4.37 Light-Years Away describes an audacious project called Breakthrough Starshot. Its goal is to get a spacecraft to another solar system in 40 years. There are two important caveats:

  • The spacecraft will just zip through the solar system without stopping.
  • The spacecraft would be very, very small, only about a gram (about the weight of the guts of an iPhone).

You could think of it as a teeny tiny Rama (Rendezvous with Rama by Arthur C. Clarke) or maybe Earth's version of Kamin's flute (The Inner Light from Star Trek: The Next Generation) carrying knowledge about who we are to stars throughout the galaxy.

In 45 years we could be seeing photos of other worlds!

The cost is relatively small, estimated at between $5 billion and $10 billion spread over about 20 years. NASA's current budget is $19.3 billion per year, so even the high estimate works out to roughly 2.5% of NASA's spending over those 20 years. Or would it be better if this were privately funded, like the interstellar project in Allen Steele's book Arkwright? (As of 2016-04-12, Mark Zuckerberg was estimated to be worth $48.2 billion; Bill Gates, $77.2 billion.)
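The back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope: Starshot cost as a share of 20 years of NASA budgets.
cost_low, cost_high = 5e9, 10e9   # estimated project cost over ~20 years
nasa_total = 19.3e9 * 20          # NASA's annual budget times 20 years
print(f"{cost_low / nasa_total:.1%} to {cost_high / nasa_total:.1%}")
# -> 1.3% to 2.6%
```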

And since most of the money will be spent developing the technologies and the giant laser array, all of which can be reused, the marginal cost of sending a fleet of spacecraft to additional solar systems should be very low.

In two decades we could take the initial steps to becoming an interstellar civilization.

SimpleSniffer and Apple's Easter Egg in the PCAP Data

Last year when analyzing raw PCAP-NG data from Yosemite, I noticed Apple embedded application name data in the PCAP data. The PCAP-NG format lets vendors create their own extensions to the data, and I thought Apple created a clever and handy extension. Last month I finally sat down to start learning Swift, so I did what I usually do when learning a new language - I wrote a network analyzer.
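For the curious, here is a minimal sketch of the generic mechanism such vendor extensions ride on: PCAP-NG options are (code, length, value) triples with values padded to 32 bits. This walks an option list without assuming anything about Apple's particular option codes; the little-endian assumption really depends on the section header's byte-order magic.

```python
# Sketch: walk a pcapng option list (16-bit code, 16-bit length, padded value).
# Endianness is assumed little-endian here; real code should check the
# section header's byte-order magic.
import struct

def walk_options(buf: bytes):
    off = 0
    while off + 4 <= len(buf):
        code, length = struct.unpack_from("<HH", buf, off)
        if code == 0:                        # opt_endofopt terminates the list
            break
        yield code, buf[off + 4 : off + 4 + length]
        off += 4 + ((length + 3) & ~3)       # values are padded to 32 bits
```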

Source: http://www.toddheberlein.com/long/simplesn...

NSM Source Code from 1995

In 1988 I started working on a project called the "Network Security Monitor", or simply NSM. It formed the core of my Master's thesis work, which I published in 1991. A description of the NSM at that time can be found in the appendix to my thesis here, or you can read the full thesis here.

The Air Force took a copy of NSM and developed it into their ASIM system. The Defense Information Systems Agency (DISA) took a copy and developed it into their JIDS system. And Lawrence Livermore National Laboratory (LLNL) took a copy and developed it into their NID system.

In 1993, for a variety of reasons, I largely stopped working on NSM, but I did at least one final version of it in 1995, which included the ability to analyze RPC, NFS, and SNMP traffic.

Here is the last README.overview from the developer's distribution in 1995.

README.overview

I recently asked my boss at the time, Prof. Karl Levitt, if I could post the source code, and he said that it was fine.

So if you want to download some old source code, mostly dating from about 1991-1995 (and in K&R C no less), click here.

An early Managed Security Service Provider (MSSP)

In an earlier blog post, Before there was Mandiant there was WheelGroup, I mentioned WheelGroup wanted to be the ADT of the Internet. Part of that effort included licensing WheelGroup intrusion detection technology to what was hoped to be a growing Managed Security Services Provider (MSSP) market, and I think their first partner was NetSolve (which, like WheelGroup, would be acquired by Cisco).

NetSolve's marketing material for their ProWatch Secure service in 1996 promoted a 24x7 service centered around

  • Prevention of problems before they occur
  • Detection of attempted break-ins in real time
  • Response to unauthorized or suspicious activities

Prevent-Detect-Response is commonplace now, but as a cyber security business, this was still the very early days. Here is one of NetSolve's slick sheets from 1996 promoting their ProWatch Secure MSS. Take a look at it to see what MSS was like 19 years ago.

Before there was Mandiant there was WheelGroup

Today Mandiant is one of the best-known companies (well, now a "brand" inside FireEye, I guess) in cyber security and even the larger business world (when you get hacked badly, "Who you gonna call?"). Part of Mandiant's early cachet came from the fact that so many key personnel cut their cyber teeth in the Air Force. They had real-world experience battling attackers aggressively going after military targets.

But 20 years ago in 1995 another group of Air Force expats formed one of the earliest commercial intrusion detection companies - WheelGroup.

WheelGroup, as co-founder Lee Sutterfield would pitch to me, wanted to be the ADT of the Internet. A few years later WheelGroup was acquired by Cisco. Interestingly, in 2014 Cisco launched Cisco Managed Threat Defense Service that is very similar to WheelGroup's original vision.

Here is one of WheelGroup's slick sheets promoting their product/service. I encourage you to take a look at it to get a feel for what the intrusion detection space was like 18 years ago.

Connection log format from Jan 10, 1991

Going through my old notebooks and other material to get ready for a talk at the Security Onion Conference on Sep 11, I found an early description of the connection record format I was using in my NSM software. I would tweak it a little bit by the time I finished my Master's Thesis 6 months later, but this page pretty much captured most of it.

If you look at my notes (click on the image for a larger version, or save it to local disk), you will see that it captures pretty much the same fields as session logs from many of today's NetFlow-like systems: the connection channel (addresses and ports), a protocol identifier, start and stop times, and the number of packets and bytes in each direction.
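Rendered as a modern data structure, the record would look something like this. The field names are mine, but the contents mirror the notebook page.

```python
# The 1991 NSM connection record, sketched as a dataclass; names are mine.
from dataclasses import dataclass

@dataclass
class ConnectionRecord:
    src_addr: str       # connection channel: addresses and ports
    src_port: int
    dst_addr: str
    dst_port: int
    protocol: int       # protocol identifier (e.g., 6 = TCP)
    start_time: float   # start and stop times
    stop_time: float
    pkts_to_dst: int    # packets and bytes in each direction
    pkts_to_src: int
    bytes_to_dst: int
    bytes_to_src: int
```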

On one hand, it feels kind of nice that my log format is still essentially being used today. On the other hand, it saddens me that so many of us (including me) are still analyzing log data designed almost a quarter century ago.

Connection Log Format for NSM, early 1991

What percent of breaches by outsiders are not detected by organizations?

Business Insider shows two of Mary Meeker's slides on cyber security. Two data points have me wondering how many breaches by outsiders are actually detected by companies themselves.

In the first slide Meeker states that more than 20% of breaches come from insiders, so less than 80% of breaches are from outsiders.

In the second slide Meeker states that companies didn't detect 69% of breaches themselves but were notified by outsiders.

How do we put those two numbers together? I'm guessing that most of the 69% of breach notifications from outside organizations are not about breaches by the companies' own insiders. So if there were 100 breaches, of which 80 were by outsiders, and 69 of the 100 were detected not by the company but by outside organizations, does that mean roughly 86% (69 of 80) of breaches by outsiders go undetected by the companies themselves?
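The arithmetic behind that 86%, under my stated assumption:

```python
# 69 of 100 breaches were flagged by outsiders; 80 of 100 were outsider breaches.
total_breaches = 100
outsider_breaches = total_breaches - 20   # Meeker: >20% are insider breaches
flagged_by_outsiders = 69                 # Meeker: 69% not detected in-house
print(f"{flagged_by_outsiders / outsider_breaches:.0%}")   # -> 86%
```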

And what about unknown breaches? Meeker's statistics assume the total number of breaches is known. The statistic that 69% of breach notifications came from outside organizations really says that 69% of known breaches were not detected by the companies. We have no idea how many breaches were detected by neither the company nor outsiders.

It is quite possible that organizations are not able to detect 90% or more of breaches by outsiders. 69% is at best a lower bound.

25 Years Ago: A Network Security Monitor

25 years ago today, 7 May 1990, we published at the IEEE Oakland conference the paper "A Network Security Monitor", which described our design, our "prototype LAN security monitor - hereafter referred to as our Network Security Monitor (NSM)", and results from using this system to monitor our own networks. The next day we published at the DOE Computer Security Group Conference the paper "Network Attacks and an Ethernet-based Network Security Monitor".

From the IEEE paper's "Introduction" section:

Specifically, our goal is to develop monitoring techniques that will enable us to maintain information of normal network activity (including those of the network's individual nodes, their users, their offered services, etc.). The monitor will be capable of observing current network activity, which, when compared with historical behavior, will enable it to detect in real-time possible security violations on the network

That kind of sums up a lot of the work being done today (including by myself again) under names like "cyber analytics" or "security data analytics". Deploying the 1990 version of the NSM we also learned some valuable lessons that shaped future work.

From the "Performance of the N.S.M." section:

The biggest concern was the detection of unusual activity which was not obviously an attack. Often we did not have someone to monitor the actual connection, and we often did not have any supporting evidence to prove or disprove that an attack had occurred. One possible solution would be to save the actual data crossing the connection, so that an exact recording of what had happened would exist. A second solution would be to examine audit trails generated by one of the hosts concerned. Both approaches are currently being examined.

Over the next year or so we added these capabilities. Full packet capture enabled development of the transcript tool (we also added string matching on the data portion of the packets), which proved invaluable for operations (and was inspired by Cliff Stoll's work). And we integrated with a host-based monitor in the DIDS system. A year later we started distributing the NSM.
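The string matching idea is trivially simple but was operationally invaluable. A toy sketch follows; the keywords are illustrative, not our actual string set.

```python
# Toy sketch of payload string matching; the keywords are illustrative only.
keywords = [b"/etc/passwd", b"rlogin", b"+ +"]

def match_payload(payload: bytes):
    return [k for k in keywords if k in payload]

print(match_payload(b"GET /etc/passwd HTTP/1.0"))   # -> [b'/etc/passwd']
```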


DARPA's Strategic Cyber Security Vision

My former DARPA program manager, Sami Saydjari, gave me permission to post a DARPA video from 2001 titled:

STRATEGIC CYBER DEFENSE
Defending the Future in the Digital Domain
A DARPA Vision

A number of other DARPA Principal Investigators and I were brought down to help SPAWAR work on the story. I recently pulled out my old DVD and was pleasantly surprised at how well the concepts have held up after 14 years and how predictive some elements of the story were. A link to the movie on YouTube is at the end (the movie is also embedded in this blog), but here are a few things to look for at different time codes in the movie, along with links to recent stories or products:

0:27 - insider plants Trojan horse. Insiders are always a big threat, and Trojan horse software lying dormant for years inside critical infrastructure is a huge concern http://abcnews.go.com/US/trojan-horse-bug-lurking-vital-us-computers-2011/story?id=26737476
2:27 - coalition bad guys http://www.cnn.com/videos/business/2015/02/16/erin-dnt-segall-major-bank-hacking-heist.cnn
3:01 - using information posted online to craft targeted threats against soldiers http://www.cnn.com/2015/03/22/politics/online-threat-against-troops/
3:43 - graceful degradation of services when under attack, anomaly detection (think data analytics), automatic isolation of the threat
4:00 - touch screens everywhere, and a Siri-like interface later (about 6 years before the iPhone popularized them).
4:08 - military dependent on commercial communication infrastructure
4:35 - automatic signature generation
5:01 - spinning disks represent defense in depth (one of Sami’s favorite visuals)
5:30 - wrappers. See Invincea http://www.invincea.com
6:20 - attacks hit power grids and ATMs http://www.nytimes.com/2013/05/10/nyregion/eight-charged-in-45-million-global-cyber-bank-thefts.html?_r=0
8:20 - attack prediction
8:29 - traceback (a number of techniques are possible)
8:47 - fishbowl to simulate a site and watch attackers. "Next generation" firewalls, which detonate suspected malware inside some type of container to watch its behavior, are essentially a simplified version of this
8:53 - correlation across victim networks, looking for commonalities to identify potential pathway into the network
10:22 - physical damage to electrical generators http://www.toddheberlein.com/blog/2014/3/4/america-the-vulnerable-and-todays-wsj-article
12:02 - reflexive response capability (autonomic response)
13:40 - correlating multiple information feeds (skills demonstrated in attacks, intelligence on watched threats and their interests, financial information, etc.); this issue is returned to several times in the movie
14:11 - activating probes in foreign networks (Hmmm...)
14:42 - coalition issues, a perennial concern for military operations (I was recently told that the US hasn’t gone into a major conflict without coalition partners since the Spanish-American War)
15:38 - modeling potential adversaries to predict actions they will take https://www.schneier.com/blog/archives/2012/01/applying_game_t.html
15:58 - military logistics computers penetrated, screwing up deliveries of material http://www.computerweekly.com/news/2240230885/US-military-logistics-arm-breached-by-China-linked-hackers
16:19 - deploying additional security even though it may slow down the system
17:50 - automatic voice translation. See apps like http://itranslatevoice.com
19:55 - 50 deaths blamed on the cyber attack (we are hoping to stay away from this)
20:24 - cyber attack back
21:09 - serious attack back

Movie on YouTube

I think I was talking about DevOps

I was going through some old files and ran across a report I wrote in 2001 titled "Before Applying New Technologies". Looking back over it, I think I was describing DevOps before DevOps really took off as an official concept.

While I am a DARPA contractor who appreciates the funding to solve these and other problems, I believe we are frequently putting too much emphasis on the technology and not enough on the overall process of cyber defense. Certainly DARPA is a technology provider and not a general purpose solutions provider, so re-architecting network configurations, processes, procedures, and policies largely lie outside the scope of DARPA’s mission. However, I believe DARPA must consider these issues for at least two reasons.
First, DARPA’s funding to at least some degree depends on satisfied customers. ...
Second, as creators of new technologies, we would like to see our technology deployed in an environment that shows it in the best possible light. ...
...
As described to me by users of JIDS and ASIM, many aspects of the intrusion detection sensor and operations structure, from system creators to users to operations, appear to operate as an open-loop system. Closing some of the loops, that is, creating appropriate feedback systems, can potentially remove much of the data loads analysts must contend with.

I have not worked with DARPA in many years. I wonder if the process has changed.