Concrete security incident data is typically scarce for any operating system, but the challenge of finding useful data is even more acute for Linux environments. Some folks might even believe that there is no such thing as Linux malware, or that Linux is inherently more secure, more heterogeneous, or simply too rare a target compared to Windows systems. Instead of going into a “why” discussion, I’d like to look at reports of actual incidents, describe those threats, and use the Windows malware experience to infer “what’s next” for Linux.
A key point to consider when looking at Linux malware is that it mostly targets servers. Threats against servers differ from those against client systems in their common exploitation vectors, and they also lean heavily on the skill and meticulousness of system administrators.
Here’s the data I collected for the last 3 or so years:
2011: kernel.org was hacked; the malware used was a variant of Phalanx, one of the better-known Linux rootkits
2012: A Linux rootkit was caught in the wild, nicknamed Snakso (here’s a blog post describing it)
2012: An iframe-injection module was caught in the wild, nicknamed Chapro by Symantec and later linked to the previously known Darkleech
2012: Volatility released a nice analysis of a recent variant of Phalanx (dubbed Phalanx 2) caught that year
2014: Linux backdoors were found in the wild in large numbers; because of the high volume, the campaign was named Operation Windigo
2014: Darkleech is still seen in the wild in newer variants
Phalanx and Snakso are both kernel rootkits that use loadable kernel modules to execute kernel code and various hooks to hide processes, files, and network connections. All of the other malware used “userland” components that patch existing binaries on the system and employ various techniques to evade system administrators and external system audits.
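To make the hiding techniques above a bit more concrete, here is a minimal, hypothetical detection-side sketch (my own illustration, not drawn from any of the incidents above): it cross-checks /proc/modules against /sys/module, since some rootkits unlink themselves from the kernel's module list but leave their sysfs entry behind. A clean result proves nothing, and the heuristic can be evaded, so treat it purely as an illustration of the cat-and-mouse game.

```python
#!/usr/bin/env python3
"""Heuristic check for kernel modules hidden from /proc/modules.

Illustrative sketch only: some rootkits unlink themselves from the
kernel module list (which backs /proc/modules) but leave their
/sys/module/<name>/ entry behind. Cross-checking the two views can
flag such discrepancies. Built-in code also appears under /sys/module
but lacks an 'initstate' file, so it is filtered out. A rootkit can
hide from both views, so treat any mismatch merely as a reason to
investigate further.
"""

import os


def loaded_modules_procfs():
    """Module names as reported by /proc/modules."""
    with open("/proc/modules") as f:
        return {line.split()[0] for line in f if line.strip()}


def loaded_modules_sysfs():
    """Names under /sys/module that look like loadable (insmod'ed) modules."""
    names = set()
    for name in os.listdir("/sys/module"):
        # Loadable modules expose an 'initstate' file; built-ins generally do not.
        if os.path.exists(os.path.join("/sys/module", name, "initstate")):
            names.add(name)
    return names


if __name__ == "__main__":
    suspicious = loaded_modules_sysfs() - loaded_modules_procfs()
    if suspicious:
        print("Visible in /sys/module but missing from /proc/modules:")
        for name in sorted(suspicious):
            print("  ", name)
    else:
        print("No discrepancies found (which does not prove the absence of a rootkit).")
```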
Systems running Linux are primarily servers, meaning that common exploits targeting browsers (via drive-by attacks) or email clients and file readers (via spear-phishing emails) are practically irrelevant. Yet all of this malware was presumably installed with root privileges, so how did it get in?
We can reasonably speculate that many servers suffer from bad configuration, exposed interfaces, and shared SSH keys and credentials. Unpatched servers are further exposed to publicly disclosed privilege escalation vulnerabilities, and of course everyone is exposed to zero-days. Since numerous servers are often managed by the same IT administrators, sysadmins are obvious targets for attackers (pdf): compromising a single administrator can yield access to many servers at once.
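As a small illustration of the shared-credentials point, here is a hypothetical hygiene-audit sketch (the paths and the parsing shortcut are my assumptions, not anything tied to these incidents): run as root on one host, it reports SSH public keys that grant access to more than one local account, exactly the kind of reuse that lets an attacker pivot.

```python
#!/usr/bin/env python3
"""Minimal audit for SSH public keys shared across local accounts.

Sketch only: scans authorized_keys files under /root and /home and
reports any public key that appears in more than one of them. Running
it across a fleet and comparing results between hosts is left as an
exercise. The "longest field is the key blob" rule is a crude
simplification and will misfire on keys with long option strings.
"""

import glob
import os
from collections import defaultdict

AUTHORIZED_KEYS = ["/root/.ssh/authorized_keys"] + glob.glob("/home/*/.ssh/authorized_keys")


def main():
    key_to_files = defaultdict(set)
    for path in AUTHORIZED_KEYS:
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                # Crude heuristic: treat the longest field as the base64 key blob.
                blob = max(line.split(), key=len)
                key_to_files[blob].add(path)
    for blob, files in key_to_files.items():
        if len(files) > 1:
            print("Key reused by multiple accounts:", blob[:24] + "...", sorted(files))


if __name__ == "__main__":
    main()
```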
Granted, the scale and complexity of these incidents are significantly lower than what happened in the parallel world of Windows systems. However, we should take into account the following observations:
The technology barrier isn’t as high as previously thought, as demonstrated by last year’s disclosure of the (5-year-old) NSA Tailored Access Operations (TAO) catalog. Moreover, many Windows malware capabilities can be ported to Linux-running systems, for example, using GRUB to retain persistence (a detection-oriented sketch follows these observations).
As data moves from endpoints to outsourced servers and centralized server farms, we can expect (or fear) the relative value of exploiting servers to increase rapidly.
Linux server security relies almost exclusively on security-aware administration and capable administrators, since third-party security products are sparsely deployed. This leaves considerable room for human error and for security gaps caused by lack of technical aptitude or awareness.
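Returning to the GRUB persistence example above, here is the detection-oriented sketch mentioned there: a deliberately simple baseline-and-verify approach of my own, not anything used in the incidents discussed, and no substitute for measured boot or attestation. It records SHA-256 hashes of files under /boot and later reports anything that changed, appeared, or disappeared; legitimate kernel updates will also trip it, so it flags changes for review rather than proving compromise. The baseline path is a made-up example.

```python
#!/usr/bin/env python3
"""Baseline-and-verify sketch for boot-chain files (e.g. GRUB configs, kernels).

Usage:  boot_check.py baseline   # record current hashes
        boot_check.py            # compare against the recorded baseline
"""

import hashlib
import json
import os
import sys

BOOT_DIR = "/boot"
BASELINE = "/var/lib/boot-baseline.json"   # hypothetical location for this sketch


def hash_boot_files():
    """SHA-256 of every readable file under /boot."""
    digests = {}
    for root, _dirs, files in os.walk(BOOT_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass  # unreadable entries are simply skipped in this sketch
    return digests


def main():
    current = hash_boot_files()
    if len(sys.argv) > 1 and sys.argv[1] == "baseline":
        with open(BASELINE, "w") as f:
            json.dump(current, f, indent=2)
        print("Baseline written for", len(current), "files.")
        return
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path, digest in current.items():
        if path not in baseline:
            print("NEW FILE:", path)
        elif baseline[path] != digest:
            print("CHANGED:", path)
    for path in baseline:
        if path not in current:
            print("MISSING:", path)


if __name__ == "__main__":
    main()
```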
Linux systems and the Linux security landscape have experienced far less malware activity than their Windows counterparts, yet this very lack of pressure has left the ecosystem relatively immature. The Linux server environment lacks easily deployed security solutions and third-party security products (e.g., anti-malware), and it demands strong administration skills. These factors make it a relatively fertile field for malware.
As data and computation move from endpoints to servers, the number of Linux servers holding sensitive data is on the rise, providing an attractive opportunity for malicious adversaries. The technology and tools used by nation-state actors will eventually make their way to cybercrime organizations, expanding their efforts and capabilities to target Linux systems.
Firmware compromises are starting to make their way into the mainstream news media and are expected to proliferate in the wild. Oded (PrivateCore’s CEO) prognosticated in a post in early January that cybercriminals would learn from the very skilled NSA ANT technologists to manipulate firmware in their effort to make illicit profits. Others now share that view.
In reading yesterday’s New York Times, I came across an article based on CrowdStrike threat research that included the quote, “As security software becomes more prolific, hackers continue to make their way down the food chain to computer hardware where it is much more difficult to identify and remove.”
The details behind security breaches take time to surface. I expect that we will eventually read about firmware compromises, but it will be a while before the details of such breaches make their way into the media.
While compromised hardware and firmware might be difficult to identify, that is the hard problem PrivateCore has focused on since our founding in 2011. New threats require new countermeasures. Hardware and firmware attacks call for a new layer of defense, and PrivateCore provides that layer. If you are an enterprise IT security professional concerned about trusted computing for your servers, you should take PrivateCore vCage software for a spin.
* Replace Target with your favorite retail chain.
The recent news that Target, Neiman Marcus, and perhaps three other retailers suffered breaches in which large volumes of data were pilfered is raising concerns among retail security professionals. While details are sketchy and there are plenty of unknowns, it appears that “memory scraping” (also called “RAM scraping”) malware might have played a part in the compromises. There are plenty of research reports and alerts about memory-scraping malware, found here, here, and here. This sort of malware has been around a while; check out this Dark Reading article from 2009 and this 2009 Verizon Data Breach Investigations piece.
What is memory-scraping malware? What we have seen to date has affected retail point-of-sale (POS) systems and potentially backend systems that are processing various types of payment cards (credit cards, debit cards, prepaid cards, etc.). While standards like the Payment Card Industry Data Security Standard (PCI DSS) call for encrypting cardholder information while at rest (storage) and in transit (in motion on the network), cardholder information is typically unencrypted while in use (memory). If you can access the POS system or server memory, you can extract its contents including the cardholder information.
The data format of such information is clearly defined (see ISO/IEC 7813 and 7816), so attackers can implement pattern-matching routines in malware installed on POS machines and harvest cardholder information from memory with those formats in mind.
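To make that concrete, here is a small illustrative sketch of the kind of pattern matching such malware performs, run against a synthetic byte buffer rather than another process’s memory (the buffer and the well-known test PAN are made up for the example): it looks for Track 2 data framed by the ‘;’ and ‘?’ sentinels and filters candidates with the Luhn check.

```python
#!/usr/bin/env python3
"""Illustration of the pattern matching behind "memory scraping".

This sketch only shows what such malware hunts for, scanning a
synthetic in-memory buffer: Track 2 data (ISO/IEC 7813) framed by
';' ... '?' whose PAN passes the Luhn check. It is meant to clarify
the technique (and what defenders can look for), not to read real
process memory.
"""

import re

# Track 2 layout: ;PAN=YYMM service-code discretionary-data?
TRACK2_RE = re.compile(rb";(\d{13,19})=(\d{4})(\d{3})\d*\?")


def luhn_ok(pan: bytes) -> bool:
    """Standard Luhn checksum, used to weed out random digit runs."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = ch - ord("0")
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


def scan(buffer: bytes):
    """Yield (PAN, YYMM expiry) pairs for every plausible Track 2 hit."""
    for match in TRACK2_RE.finditer(buffer):
        pan = match.group(1)
        if luhn_ok(pan):
            yield pan.decode(), match.group(2).decode()


if __name__ == "__main__":
    # Synthetic buffer containing a classic test PAN (4111111111111111).
    memory = b"\x00garbage;4111111111111111=25121010000000000000?more\x00"
    for pan, expiry in scan(memory):
        print("Track-2-like data found: PAN ending", pan[-4:], "expiry", expiry)
```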
How can you protect against this sort of malware? Antivirus is certainly a necessary component required by PCI DSS for systems handling cardholder information, but AV has been demonstrated to be less than effective at stopping sophisticated threats, and updating AV on isolated networks is cumbersome.
One promising countermeasure is attestation. Attestation protects against persistent malware by validating systems against an immutable, “gold” base software image, using cryptographic principles and components to ensure that both hardware and software are unchanged. Attesting to the integrity of server and POS systems would validate that the machine (hardware and software) is clean of malware. If a machine were infected, it would fail attestation and could be examined and remediated. Proper attestation supported by strong cryptography would eliminate any chance of otherwise undetected malware persisting.
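Here is a conceptual sketch of the idea, not PrivateCore’s implementation and not a real TPM interface (the component names and golden values are invented for the example): each boot component is hashed and “extended” into a running register, and the verifier compares the final value against a golden value computed from the known-good image. Changing any measured component changes the final value, so tampering becomes detectable.

```python
#!/usr/bin/env python3
"""Conceptual sketch of measurement-based attestation.

Each component in the boot chain is measured and folded into a running
register (new = SHA-256(old || SHA-256(measurement))), in the spirit of
a TPM PCR extend. A verifier compares the final register value against
a golden value derived from the known-good image.
"""

import hashlib

GOLDEN_COMPONENTS = [b"firmware-v1.2", b"bootloader-v2.0", b"kernel-3.10-gold", b"pos-app-v7"]


def extend(register: bytes, measurement: bytes) -> bytes:
    """Fold one measurement into the running register."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()


def measure_chain(components):
    """Measure a whole boot chain, starting from a known register value."""
    register = b"\x00" * 32
    for component in components:
        register = extend(register, component)
    return register


if __name__ == "__main__":
    golden = measure_chain(GOLDEN_COMPONENTS)

    # A clean boot reproduces the golden value; a tampered bootloader does not.
    tampered = [b"firmware-v1.2", b"bootloader-v2.0-IMPLANT", b"kernel-3.10-gold", b"pos-app-v7"]
    print("clean boot attests:   ", measure_chain(GOLDEN_COMPONENTS) == golden)
    print("tampered boot attests:", measure_chain(tampered) == golden)
```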
Naturally, an infection could still occur after attestation by exploiting vulnerabilities, but periodically re-attesting systems (which would typically require a reboot) minimizes this window of vulnerability (or opportunity, depending on your perspective). Malware that infects a machine after it was attested in a known, good state would be wiped away the moment the system reboots, and the clean state would be validated when the system re-attests.
A normal, stateful machine is exposed to malware that can use its hard drive, or other writable components, to persist. A stateless machine that relies on a locked-down base software image and is periodically attested denies malware a stateful component to burrow into. POS systems, as well as transaction-processing backend systems, are not intended to run arbitrary code. Validating (attesting) such systems against a known, good software image would dramatically reduce the window of opportunity for attackers.
Security measures typically require some change in technology and processes. One consequence of periodically attesting systems is downtime as systems reboot and applications restart. The impact can be minimized by rebooting POS machines during off hours, and for mission-critical servers by rebooting in a round-robin fashion across a high-availability (HA) cluster. POS systems are natural candidates for being stateless, as the data they handle is itself stateless.
No security countermeasure is going to stop all attacks all the time; technology is extremely complex and attackers are very clever. While the exact circumstances of the breaches at Target, Neiman Marcus, and other retailers are still unknown, my speculation is that attesting these systems would have reduced the chance of a successful attack and, by shortening the attack duration, limited the damage of any attack that did succeed.
As 2013 comes to a close, news from Germany’s Spiegel Online that the NSA Tailored Access Operations (TAO) unit created a toolbox of exploits to compromise systems caught my attention. Todd’s prediction: this news is a harbinger of infosecurity risks making headlines in 2014 as bad guys learn from the extremely talented NSA.
The news generated by Mr. Snowden’s disclosures has kept data privacy in the headlines. What was different about the Der Spiegel article highlighting the TAO was not only the breadth of the exploits, but also their depth and sophistication.
The sophisticated exploits highlighted in the Spiegel piece were designed for persistence. These are advanced persistent threats (APTs): once you are in, the point is to stay in. As the article highlights, “the [NSA] ANT developers have a clear preference for planting their malicious code in so-called BIOS, software located on a computer’s motherboard that is the first thing to load when a computer is turned on.”
Modifying the BIOS bypasses traditional security layers such as antivirus software. Mitigating threats that use such attack vectors requires an additional layer of security to attest the validity of the host system, harden systems against compromise, and secure the underlying data-in-use (as well as data-at-rest and data-in-transit). This is bad news for enterprises and service providers who need to protect their server infrastructure, but the good news is that there are solutions to shut down this attack vector, notably PrivateCore vCage (my shameless product plug for this post).
The Spiegel news dovetails with a cybersecurity prognostication for 2014 from IT risk and governance auditor Coalfire: “There will be a significant security breach at a cloud service provider that causes a major outage.” Reading the Spiegel Online article, it seems the “security breach” part might have already happened. Buckle your seatbelts and enjoy 2014.