Last year when analyzing raw PCAP-NG data from Yosemite, I noticed Apple had embedded application name information in the capture files. The PCAP-NG format lets vendors create their own extensions, and I thought Apple's was clever and handy. Last month I finally sat down to start learning Swift, so I did what I usually do when learning a new language - I wrote a network analyzer.
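The top-level block layout below follows the published PCAP-NG spec (each block starts with a 4-byte type and a 4-byte total length, and ends with a copy of that length); Apple's application-name data rides in vendor-specific options inside these blocks, whose option codes I won't guess at here. A minimal Python sketch (not the Swift analyzer from the post) that walks the top-level blocks:

```python
import struct

def walk_blocks(buf):
    """Yield (block_type, block_body) for each PCAP-NG block in buf.
    Assumes a little-endian capture (byte-order magic 0x1A2B3C4D)."""
    off = 0
    while off + 8 <= len(buf):
        btype, blen = struct.unpack_from("<II", buf, off)
        if blen < 12 or off + blen > len(buf):
            break  # truncated or corrupt block
        # Body sits between the 8-byte header and the trailing length copy.
        body = buf[off + 8 : off + blen - 4]
        yield btype, body
        off += blen
```

A Section Header Block (type `0x0A0D0D0A`) always comes first in a capture; vendor options, including Apple's, would be parsed out of the option lists inside individual block bodies.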
I had the pleasure of presenting at the Security Onion Conference in Augusta, GA today. I also had the challenge of following Richard Bejtlich's presentation. Richard is an excellent speaker, and I strongly encourage you to hear him talk live if you get a chance.
Here are the slides from my talk.
Many thanks to Doug Burks for giving me a chance to present.
In 1988 I started working on a project called the "Network Security Monitor", or simply NSM. It formed the core of my Master's thesis work, which I published in 1991. A description of the NSM at that time can be found in the appendix to my thesis here, or you can read the full thesis here.
The Air Force took a copy of NSM and developed it into their ASIM system. The Defense Information Systems Agency (DISA) took a copy and developed it into their JIDS system. And Lawrence Livermore National Laboratory (LLNL) took a copy and developed it into their NID system.
In 1993, for a variety of reasons, I largely stopped working on NSM, but I did release a final version of it in 1995 that included the ability to analyze RPC, NFS, and SNMP traffic.
Here is the last README.overview from the developer's distribution in 1995.
I recently asked my boss at the time, Prof. Karl Levitt, if I could post the source code, and he said that it was fine.
So if you want to download some old source code, mostly dating from about 1991-1995 (and in K&R C no less), click here.
In an earlier blog post, Before there was Mandiant there was WheelGroup, I mentioned WheelGroup wanted to be the ADT of the Internet. Part of that effort included licensing WheelGroup intrusion detection technology to what was hoped to be a growing Managed Security Services Provider (MSSP) market, and I think their first partner was NetSolve (which, like WheelGroup, would be acquired by Cisco).
NetSolve's marketing material for their ProWatch Secure service in 1996 promoted a 24x7 service centered on:
- Prevention of problems before they occur
- Detection of attempted break-ins in real time
- Response to unauthorized or suspicious activities
Prevention-Detection-Response is commonplace now, but in 1996 the cyber security business was still in its very early days. Here is one of NetSolve's slick sheets from 1996 promoting their ProWatch Secure MSS. Take a look at it to see what MSS was like 19 years ago.
Today Mandiant is one of the best-known companies (well, now a "brand" inside FireEye, I guess) in cyber security and even the larger business world (when you get hacked badly, "Who you gonna call?"). Part of Mandiant's early cachet came from the fact that so many key personnel cut their cyber teeth in the Air Force. They had real-world experience battling attackers aggressively going after military targets.
But 20 years ago in 1995 another group of Air Force expats formed one of the earliest commercial intrusion detection companies - WheelGroup.
WheelGroup, as co-founder Lee Sutterfield would pitch to me, wanted to be the ADT of the Internet. A few years later WheelGroup was acquired by Cisco. Interestingly, in 2014 Cisco launched Cisco Managed Threat Defense Service that is very similar to WheelGroup's original vision.
Here is one of WheelGroup's slick sheets promoting their product/service. I encourage you to take a look at it to get a feel for what the intrusion detection space was like 18 years ago.
Going through my old notebooks and other material to get ready for a talk at the Security Onion Conference on Sep 11, I found an early description of the connection record format I was using in my NSM software. I would tweak it a little bit by the time I finished my Master's Thesis 6 months later, but this page pretty much captured most of it.
If you look at my notes (click on the image to get a larger version, or save to local disk), you will see that it is pretty much the same as session logs from many of today's NetFlow-like systems: connection channel (address and ports), protocol identifier, start & stop times, number of packets and bytes in each direction.
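The fields in that notebook page map directly onto the session records many tools still emit. A sketch of the record as a data structure (field names are mine for illustration, not the original NSM layout):

```python
from dataclasses import dataclass

@dataclass
class ConnectionRecord:
    """An NSM-style connection record: the same fields most
    NetFlow-like session logs still carry today."""
    src_addr: str          # connection channel: addresses...
    dst_addr: str
    src_port: int          # ...and ports
    dst_port: int
    protocol: int          # protocol identifier, e.g. 6 = TCP, 17 = UDP
    start_time: float      # start & stop times (epoch seconds)
    stop_time: float
    packets_to_dst: int    # packets and bytes in each direction
    packets_to_src: int
    bytes_to_dst: int
    bytes_to_src: int
```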
On one hand, it feels kind of nice that my log format is still essentially being used today. On the other hand, it saddens me that so many of us (including me) are still analyzing log data designed almost a quarter century ago.
Business Insider shows two of Mary Meeker's slides on cyber security. Two data points have me wondering how many breaches by outsiders are actually detected by companies themselves.
In the first slide Meeker states that more than 20% of breaches come from insiders, so less than 80% of breaches are from outsiders.
In the second slide Meeker states that companies didn't detect 69% of breaches themselves but were notified by outsiders.
How do we put those two numbers together? I'm guessing that most of the 69% of breach notifications by outside organizations are not notifying the companies about breaches by the companies' insiders. So if there were 100 breaches, of which 80 were by outsiders, and 69 of those breaches were not detected by the company but by outside organizations, does that mean 86% of breaches by outsiders go undetected by companies?
And what about unknown breaches? Meeker's statistics assume that the total number of breaches is known. The "69% of breach notifications are by outside organizations" statistic is really saying 69% of known breaches were not detected by the companies. We have no idea how many breaches go undetected by both the company and outsiders.
It is quite possible that organizations fail to detect 90% or more of breaches by outsiders. The 69% figure is at best a lower bound.
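The back-of-the-envelope arithmetic behind the 86% figure, using a hypothetical population of 100 known breaches and the generous assumption that every externally reported breach involved an outside attacker:

```python
total = 100            # hypothetical population of known breaches
outsider = 80          # Meeker: <80% from outsiders (80 used as the upper bound)
externally_found = 69  # Meeker: 69% not detected by the company itself

# If all externally reported breaches were outsider breaches, the
# fraction of outsider breaches the victims themselves missed is:
missed_fraction = externally_found / outsider
print(round(missed_fraction * 100))  # ≈ 86
```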
25 years ago today, 7 May 1990, we published in the IEEE Oakland conference the paper "A Network Security Monitor", which described our design, our "prototype LAN security monitor - hereafter referred to as our Network Security Monitor (NSM)", and results from using this system to monitor our own networks. The next day we published in the DOE Computer Security Group Conference the paper "Network Attacks and an Ethernet-based Network Security Monitor".
From the IEEE paper's "Introduction" section:
Specifically, our goal is to develop monitoring techniques that will enable us to maintain information of normal network activity (including those of the network's individual nodes, their users, their offered services, etc.). The monitor will be capable of observing current network activity, which, when compared with historical behavior, will enable it to detect in real-time possible security violations on the network.
That kind of sums up a lot of the work being done today (including by myself again) under names like "cyber analytics" or "security data analytics". Deploying the 1990 version of the NSM we also learned some valuable lessons that shaped future work.
From the "Performance of the N.S.M." section:
The biggest concern was the detection of unusual activity which was not obviously an attack. Often we did not have someone to monitor the actual connection, and we often did not have any supporting evidence to prove or disprove that an attack had occurred. One possible solution would be to save the actual data crossing the connection, so that an exact recording of what had happened would exist. A second solution would be to examine audit trails generated by one of the hosts concerned. Both approaches are currently being examined.
Over the next year or so we added these capabilities. Full packet capture enabled development of the transcript tool (we also added string matching in the data portion of the packets), which proved invaluable for operations (and were inspired by Cliff Stoll's work). And we integrated with a host-based monitor in the DIDS system. A year later we started distributing the NSM.
My former DARPA program manager, Sami Saydjari, gave me permission to post a DARPA video from 2001 titled:
STRATEGIC CYBER DEFENSE
Defending the Future in the Digital Domain
A DARPA Vision
A number of other DARPA Principal Investigators and I were brought down to help SPAWAR work on the story. I recently pulled out my old DVD and was pleasantly surprised at how well the concepts have held up after 14 years and how predictive some elements of the story were. At the end is a link to the movie on YouTube (it is also embedded in this blog), but here are a few things to look for at different time codes, along with links to recent stories or products:
0:27 - insider plants Trojan horse. Insiders are always a big threat, and Trojan horse software lying dormant for years inside critical infrastructure is a huge concern http://abcnews.go.com/US/trojan-horse-bug-lurking-vital-us-computers-2011/story?id=26737476
2:27 - coalition bad guys http://www.cnn.com/videos/business/2015/02/16/erin-dnt-segall-major-bank-hacking-heist.cnn
3:01 - using information posted online to craft targeted threats against soldiers
3:43 - graceful degradation of services when under attack, anomaly detection (think data analytics), automatic isolation of the threat
4:00 - touch screens everywhere, and a Siri-like interface later (about 6 years before the iPhone popularized such interfaces)
4:08 - military dependent on commercial communication infrastructure
4:35 - automatic signature generation
5:01 - spinning disks represent defense in depth (one of Sami’s favorite visuals)
5:30 - wrappers. See Invincea http://www.invincea.com
6:20 - attacks hit power grids and ATMs http://www.nytimes.com/2013/05/10/nyregion/eight-charged-in-45-million-global-cyber-bank-thefts.html?_r=0
8:20 - attack prediction
8:29 - traceback (a number of techniques are possible)
8:47 - fishbowl to simulate a site and watch attackers. “Next generation” firewalls, which detonate suspected malware inside some type of container to watch its behavior, are essentially a simplified version of this
8:53 - correlation across victim networks, looking for commonalities to identify potential pathway into the network
10:22 - physical damage to electrical generators http://www.toddheberlein.com/blog/2014/3/4/america-the-vulnerable-and-todays-wsj-article
12:02 - reflexive response capability (autonomic response)
13:40 - correlating multiple information feeds (skills demonstrated in attacks, intelligence on watched threats and their interests, financial information, etc.); this issue is returned to several times in the movie
14:11 - activating probes in foreign networks (Hmmm...)
14:42 - coalition issues, a perennial concern for military operations (I was recently told that the US hasn’t gone into a major conflict without coalition partners since the Spanish-American War)
15:38 - modeling potential adversaries to predict actions they will take https://www.schneier.com/blog/archives/2012/01/applying_game_t.html
15:58 - military logistics computers penetrated, screwing up deliveries of material http://www.computerweekly.com/news/2240230885/US-military-logistics-arm-breached-by-China-linked-hackers
16:19 - deploying additional security even though it may slow down the system
17:50 - automatic voice translation. See apps like http://itranslatevoice.com
19:55 - 50 deaths blamed on the cyber attack (we are hoping to stay away from this)
20:24 - cyber attack back
21:09 - serious attack back
I was going through some old files and ran across a report I wrote in 2001 titled "Before Applying New Technologies". Looking back over it, I think I was describing DevOps before DevOps really took off as an official concept.
While I am a DARPA contractor who appreciates the funding to solve these and other problems, I believe we are frequently putting too much emphasis on the technology and not enough on the overall process of cyber defense. Certainly DARPA is a technology provider and not a general purpose solutions provider, so re-architecting network configurations, processes, procedures, and policies largely lies outside the scope of DARPA’s mission. However, I believe DARPA must consider these issues for at least two reasons.
First, DARPA’s funding to at least some degree depends on satisfied customers. ...
Second, as creators of new technologies, we would like to see our technology deployed in an environment that shows it in the best possible light. ...
As described to me by users of JIDS and ASIM, many aspects of the intrusion detection sensor and operations structure, from system creators to users to operations, appear to operate as an open-loop system. Closing some of the loops, that is, creating appropriate feedback systems, can potentially remove much of the data loads analysts must contend with.
I have not worked with DARPA in many years. I wonder if the process has changed?
This week Symantec (and many others) published information about a cyber espionage campaign dubbed "Regin". See "Regin: Top-tier espionage tool enables stealthy surveillance". In general, I take umbrage when every time a novel and/or sophisticated system is discovered it is attributed to a Nation State. See my 2012 video "Glowing Embers", or better yet, read/listen to "Ghost in the Wires" or "Masters of Doom". Creative individuals and small teams can do amazing things.
However, whatever the source of such campaigns or their motivations, you should prepare for the day one of them hits your network. While there are security training courses you can take, you can also practice by analyzing even benign activity in your network. Practicing on such activity can give you the knowledge and skills to detect and analyze the activity of real threats.
In 2012 I published a pair of articles ("The Advanced Persistent Threat You Have: Google Chrome" and "The Making of 'The Advanced Persistent Threat You Have: Google Chrome'") and a Keynote presentation ("Google: The APT You Have") on analyzing Google's automatic update system. In many ways, Google's software resembles a good Command & Control system an adversary might use - small sleeper code that occasionally wakes up to download encrypted new stages, use of virtual file systems, modification of critical resources, and cleaning up after each activity.
I encourage everyone to start searching for and analyzing these (hopefully benign) Command & Control systems in your network. I guarantee you, you have plenty of them operating in your network. Practicing on these will prepare you for the malicious ones.
The Washington Post article "China suspected of breaching U.S. Postal Service computer networks" has some interesting comments and observations setting this breach apart from the usual stories on breaches.
“They’re just looking for big pots of data on government employees,” Lewis said. “For the Chinese, this is probably a way of building their inventory on U.S. persons for counterintelligence and recruitment purpose.”
Watching Google, Facebook, and Amazon track me moving around the Internet in order to build profiles of me, it would make sense to me for governments to do this too, including foreign governments.
“It’s not all about hackers. Having information about real live people could help them with on-the-ground operations.”
I could see a foreign government targeting disgruntled individuals, individuals who can be bought, individuals they can apply pressure to, naive individuals who can be duped, or individuals who can become unknowing cyber mules giving attackers access to their organization's information systems.
I think we have to assume organized cyber attackers (e.g., governments) are building large dossiers on individuals and organizations using the large amounts of data being continually siphoned out of our networks.
For instance, the U.S. Postal Service, at the request of law enforcement officials, takes pictures of all addressing information from envelopes and parcels.
Having access to that traffic analysis data could be extremely valuable. I'm sure with enough information on USPS employees, attackers can flip at least one postal worker (are there any disgruntled or financially stressed postal workers?) or steal or hijack a postal worker's credentials.
But my favorite quote is:
Still, “it’s perfectly appropriate for us to do everything we can to embarrass and punish the Chinese if they’re in our systems, whether or not we’re in theirs,” said former National Security Agency general counsel Stewart A. Baker.
Yeah, everyone is doing it.
Automated attacks began compromising Drupal 7 websites that were not patched or updated to Drupal 7.32 within hours of the announcement of SA-CORE-2014-005 - Drupal core - SQL injection. You should proceed under the assumption that every Drupal 7 website was compromised unless updated or patched before Oct 15th, 11pm UTC, that is 7 hours after the announcement.
7 hours from announcement of the bug to you probably being compromised. Dang, "Internet Time" sure is fast.
Attackers may have copied all data out of your site and could use it maliciously. There may be no trace of the attack.
No evidence? I would like to see what some of these attacks look like from the operating system audit trails or the database audit trails.
What is this bug? According to Drupal's security advisory, it is in code written to prevent SQL injection attacks. Sad irony. :(
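Drupal's actual bug was in its database abstraction layer, which I won't reproduce here. But the general class of problem is easy to illustrate: the snippet below is a generic Python/sqlite3 sketch, not Drupal code, contrasting string-built SQL with bound parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String formatting: attacker-controlled input becomes SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(name):
    # Bound parameter: input stays data, never becomes SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# The unsafe version matches every row; the safe version matches none.
```

The irony in Drupal's case was that the vulnerable code sat in the layer that expands placeholders on the way to exactly this kind of parameterized query.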
Even the NSA has trouble with its servers. And they belched a little HTML/CSS along the way. Just got this at 12:30 pm PDT on Oct 3.
The New York Times article "JPMorgan Chase Says More Than 76 Million Households Were Compromised in Cyberattack" is fascinating. These hackers got deep, very deep, into one of the most important financial institutions in the world. Here are some important quotes:
Hackers were able to burrow deep into JPMorgan’s computer systems, accessing the accounts of more than 90 servers — a breach that underscores just how vulnerable the global financial system is to cybercrime.
It is still not clear how hackers managed to gain deep access to the bank’s computer network. By the time the bank’s security team discovered the breach in late July, hackers had already gained the highest level of administrative privilege to more than 90 of the bank’s computer servers
More disturbing still, these people say, hackers made off with a list of the applications and programs that run on every standard JPMorgan computer– a hacker’s road map of sorts — which hackers could cross check with known vulnerabilities in each program and web application, in search of an entry point back into the bank’s systems.
What I find amazing is they got into 90 servers! 90.
For an organization that essentially says "Trust us with most of your wealth", the depth of this penetration, including blueprints to their systems that could help the hackers come back in, has staggering implications.
In job postings I often see the requirement "must be able to multitask". I often wonder if the people who write these job postings have ever held jobs that required intense concentration over time in order to understand and then master hard problems.
In both research and many software development efforts I have found the overhead associated with a context switch to be large. Today I read the following paragraph from a 2010 article that I think nails it.
The nature of the job. The mental state that psychologists define as "flow" is one of sustained concentration on the task at hand and a pure focus of your attention on a project. In other words, it's the ability to work without interruption on a task until you've found a natural stopping point. A lot of developers strive for flow when they're working, which is why one meeting can blow an entire day's worth of work. It takes time to get in and out of flow and to retrace your steps to the point where you can move forward.
If you are hiring for a position that only needs shallow thinking, go ahead and keep that "must be able to multitask" requirement. But if you are hiring for a position that requires deep thinking on hard problems, you should consider dropping it.
There is a lot of meat in this article, but I want to point out two things. First, Christina reminds us that even if you have a strong password, there are many ways to grab or nullify that password.
Cubrilovic lists them in order of popularity and effectiveness:
1. Password reset (secret questions / answers)
2. Phishing email
3. Password recovery (email account hacked)
4. Social engineering / RAT install / authentication keys
But having recently activated Apple's two-factor authentication, I was still feeling smug. Then Christina springs the trap.
As we've mentioned before, Apple's two-factor implementation does not protect your data, it only protects your payment information.
Yes, if you have two-factor authentication enabled, the password reset process for an account can be greatly impeded (you need to provide a special one-off key before you can reset a password), but assuming someone can get your password anyway using any number of phishing or remote-access methods, two-factor verification is absolutely not required for accessing an iCloud backup.
Indeed. I immediately looked at Apple's FAQ on the topic, Frequently asked questions about two-step verification for Apple ID, and it states:
It requires you to verify your identity using one of your devices before you can take any of these actions:
* Sign in to My Apple ID to manage your account
* Make an iTunes, App Store, or iBooks Store purchase from a new device
* Get Apple ID related support from Apple
So Apple's 2FA is only focused on purchases and account management. It is not used to protect your data.
Given Apple's push for users to use iCloud for many more things in iOS 8 and OS X Yosemite, I believe Apple needs to put some serious resources behind protecting your data too.
(UPDATE: Apple appears to be taking some good steps in the right direction on this topic: "Tim Cook Says Apple to Add Security Alerts for iCloud Users")
PBS.org's article "As governments invade privacy, tools for encryption grow more popular" mentions a widespread assumption:
“It’s been co-opted by GCHQ and the NSA that if you’re using Tor, you must be a criminal,” Lewman explained to The Guardian. “I know the NSA and GCHQ want you to believe that Tor users are already suspect, because, you know, god forbid who would want their privacy online, they must be terrorists.”
I wonder if using end-to-end encryption such as PGP and GPG can be a black mark in your file? I've exchanged encrypted email with folks in the government and government contractors. Does this increase my suspiciousness score? Would such activity decrease chances of getting a security clearance in the future?
Nextgov's article "Nuke Regulator Hacked by Suspected Foreign Powers" discusses several attacks on the Nuclear Regulator Commission's computers.
One incident involved emails sent to about 215 NRC employees in "a logon-credential harvesting attempt," according to an inspector general report Nextgov obtained through an open-records request.
The phishing emails baited personnel by asking them to verify their user accounts by clicking a link and logging in. The link really took victims to "a cloud-based Google spreadsheet."
A dozen NRC personnel took the bait and clicked the link.
So almost 6% of employees clicked on the link bait. That is a pretty significant number, especially considering
Every NRC employee is required to complete annual cyber training that deals with phishing, spearphishing and other attempts to obtain illicit entry into agency networks.
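The "almost 6%" above works out like this:

```python
employees = 215  # NRC staff who received the phishing email
clicked = 12     # "a dozen" took the bait
print(round(clicked / employees * 100, 1))  # ≈ 5.6
```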
I don't have a thing against employee security awareness programs, but I've heard this promoted (typically by management) for 25 years. I'm just not convinced that it is effective.
What I also found amusing was the advertisement running along the left side. Was it a random ad placement? Based on keywords in the web page content? Or is the emerging Ghost in the Internet telling me I need new underwear?