How to Detect Zero-Day Malware And Limit Its Impact
Antivirus systems alone cannot fight a growing category of malware whose strength lies in the
fact that we have never seen it before. Dark Reading examines the ways in which zero-day
malware is being developed and spread, and the strategies and products enterprises can leverage
to battle it.
By Fahmida Y. Rashid, InformationWeek Reports
Malware is becoming harder to detect using traditional security tools. Malware developers are
increasingly using techniques such as polymorphism to make variants different enough from
each other that they foil antivirus systems. Zero-day malware, by definition, is malware that
isn’t recognized as a “known bad,” which puts IT administrators at a distinct disadvantage when
it comes to fighting it.
Security experts recommend several techniques for battling zero-day malware, including
behavioral analysis, network monitoring, situational awareness and even hardware-based
security. In this report, Dark Reading looks at several categories of products that have emerged
to address the zero-day malware problem, as well as how these products and processes can
complement existing antivirus deployments. We examine how zero-day malware has
proliferated and how IT administrators can defend their networks from malware they’ve never
seen before.
How To Detect Zero-Day Malware And Limit Its Impact
It was never easy to keep ahead of the cyber bad guys, but with the recent uptick in zero-day
malware, things are only getting harder.
Indeed, the malware landscape is changing dramatically, as attackers seek to take advantage of
automated construction kits to generate several thousand malware variants at once. Security
experts estimate that more than 70,000 new instances of malware are being released each day.
While traditional antivirus products are generally effective at detecting and blocking “known
bad” samples, they are challenged to keep up with the rapidly increasing volume of malware
we have been seeing.
There has been a “seismic shift” in how malware is developed and distributed, says Andrew
Brandt, director of threat research at Solera Networks. Malware developers are increasingly
crafting one-time-use malware, so by the time an antivirus vendor has released a signature to
detect the malware sample, the bad guys have most likely moved on to a new version.
Developers are using do-it-yourself construction kits such as Zeus and Poison Ivy to create their
own variants, says Gunter Ollmann, CTO and VP of research at security vendor Damballa.
Thanks to these kits, criminals can generate “hundreds and thousands” of malware variants per
hour with a single press of a button. Add in armoring techniques such as run-time obfuscation,
polymorphism and packers, and the likelihood of antivirus products detecting these malicious
programs is just 2%, he says.
Host-based defenses are “no longer relevant” once the malware is on the computer, says
Ollmann. And some malware variants can even disable installed antivirus software and prevent
a computer from downloading software updates. It’s also common for botmasters to frequently
update malware with newer, undetectable variants.
So instead of relying only on traditional anti-malware systems, organizations need to be looking
at additional types of security to detect and block unknown threats. There are several
categories of defense that companies should consider, say experts, including behavioral
analysis, network monitoring, integrity management and hardware-based security.
Figure 1
How To Generate Zero-Day Malware
Attackers have access to several tools that make it easy to generate malware variants on the fly. Here are some of them.
• DIY construction kits (such as Zeus and Poison Ivy) automate the development process.
• Online tutorials and instructional YouTube videos provide step-by-step instructions.
• Packers compress executables to a smaller size, making it difficult for security products to tell something is malicious.
• Polymorphic toolkits simplify the creation of malware that can drastically modify itself, such as how it is encrypted.
Data: InformationWeek Reports
Focus On What It Does
The future of security lies in shifting toward behavior-oriented scanning, says Dennis
Pollutro, president and founder of cloud security vendor Taasera. While “there will
always be a place for signatures,” security products have to begin identifying malware by
what it’s doing rather than what it looks like, he says.
Several things have to happen before the malware infection results in damage or data
theft on the compromised computer, which gives defenders a “couple hundred processes”
to monitor for, he adds. Threat intelligence allows administrators to recognize patterns of
behavior, such as creating directories on a file system or communicating with an IP
address that had previously been flagged as suspicious.
Even if the actual source code (and the resulting hash of the file) of various malware
samples is different, that doesn’t mean the malware’s actual behavior has changed, says
Solera Networks’ Brandt. That makes sense, considering the number of variants
generated using DIY toolkits and that the changes to the code may be as simple as
inserting extra instructions that don’t actually do anything. Even the use of polymorphism
or packing changes just how the malware looks, not how it executes.
Brandt is fine with “signature scanning” but believes the definition needs to be expanded
to encompass more than just looking at the characters in the file and the resulting hash
value. The new signatures should also include network behavior as a piece of the puzzle,
he says.
Certain types of activities can easily be flagged as malicious. For example, once
downloaded, malware generally checks in with a remote server to send information about
the machine’s configuration. The malware also receives instructions back, and there are
certain recognizable patterns in how it communicates. There may be repeated strings in
the header information, such as an identifier assigned by the remote server, suspicious
user agents or even multiple port numbers in the URL.
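As a rough sketch of what such pattern checks might look like in practice (a hypothetical example, not any vendor's detection logic), the snippet below scores a single outbound HTTP request for the indicators described above: a nonstandard port embedded in the URL, a suspicious user-agent string and an identifier-like header.

```python
import re
from urllib.parse import urlparse

# Hypothetical indicators; a real deployment would draw these from threat intelligence.
SUSPICIOUS_AGENTS = {"Mozilla/4.0 (compatible)", "python-urllib", ""}
KNOWN_BOT_ID_HEADERS = {"X-Bot-Id", "X-Client-Id"}

def score_request(url, headers):
    """Return the reasons this outbound request looks like a malware check-in."""
    reasons = []
    parsed = urlparse(url)

    # An explicit nonstandard port in the URL is unusual for ordinary browsing.
    if parsed.port and parsed.port not in (80, 443):
        reasons.append(f"nonstandard port {parsed.port} in URL")

    # Empty or generic user agents are a common giveaway.
    if headers.get("User-Agent", "") in SUSPICIOUS_AGENTS:
        reasons.append("suspicious or missing User-Agent")

    # Repeated, server-assigned identifiers often show up as custom headers.
    for name in headers:
        if name in KNOWN_BOT_ID_HEADERS or re.fullmatch(r"[0-9a-f]{32}", headers[name] or ""):
            reasons.append(f"identifier-like header {name}")

    return reasons

if __name__ == "__main__":
    print(score_request("http://198.51.100.7:8081/gate.php",
                        {"User-Agent": "", "X-Bot-Id": "a3f1b2c4"}))
```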
Even if the malware samples are part of different families, certain behaviors will be
consistent because criminals are employing similar attack techniques, experts say.
Administrators should create a list of acceptable behaviors and then filter out the
legitimate processes to see what is left, says Brandt. For example, perhaps an internal
computer is connecting to a Russian IP address and sending POST data over port 80. Was
a Russian employee using that machine to access legitimate services, such as streaming
video, from a Russian site? “Asking basic questions, such as ‘Who are you talking to?’
and ‘Why are you talking to them?’ can help identify if the traffic is malicious,” says
Brandt.
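A minimal sketch of Brandt's filter-the-known-good approach, assuming outbound proxy records have already been parsed into dictionaries; the allowlist entries and record fields are invented for illustration.

```python
# Start from outbound connection records, remove traffic that matches an
# allowlist of acceptable behavior, and report whatever is left for review.
ALLOWED_DESTINATIONS = {"updates.example-vendor.com", "mail.example.com"}  # illustrative
ALLOWED_COUNTRIES = {"US", "DE"}                                           # illustrative

def unexplained_traffic(records):
    leftovers = []
    for rec in records:
        if rec["dest_host"] in ALLOWED_DESTINATIONS:
            continue                      # known-good service
        if rec["dest_country"] in ALLOWED_COUNTRIES and rec["method"] != "POST":
            continue                      # routine browsing to expected regions
        leftovers.append(rec)             # e.g. a POST to a Russian IP over port 80
    return leftovers

records = [
    {"dest_host": "203.0.113.5", "dest_country": "RU", "method": "POST", "port": 80},
    {"dest_host": "updates.example-vendor.com", "dest_country": "US", "method": "GET", "port": 443},
]
for rec in unexplained_traffic(records):
    print("review:", rec)   # then ask: who is this machine talking to, and why?
```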
Organizations also should look for security products that emphasize behavior scanning
instead of relying primarily on signatures, says Roger Thompson, chief emerging threat
researcher at product security testing and certification organization ICSA Labs:
“Everybody agrees this is a good idea; it’s a matter of getting everyone to actually do it.”
Most antivirus vendors have already shifted their products to include network heuristics
and behavioral analysis, but there needs to be a greater emphasis on behavioral scanning,
says Thompson.
Damballa’s Ollmann noted that several security products — not just antivirus — have
incorporated a virtual machine component in which samples are executed to observe
actual run-time behavior. If malicious activity is detected within the virtual environment,
the malware sample is blocked and prevented from executing on the actual computer.
Invincea takes a similar approach: It wraps the Web browser in a virtual machine so that
even if the user accesses a malicious site that attempts to download malware, only the
virtual machine is impacted.
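The sketch below illustrates only the general idea of a sandbox verdict, assuming some virtual-machine component has already produced a list of run-time events for a sample; the event names, weights and threshold are invented.

```python
# Illustrative verdict logic over a sandbox report; event names and weights are assumptions.
SUSPICIOUS_BEHAVIORS = {
    "writes_to_autorun_registry_key": 3,
    "disables_security_service": 4,
    "connects_to_flagged_ip": 4,
    "creates_hidden_directory": 2,
}
BLOCK_THRESHOLD = 4

def verdict(observed_events):
    """Block the sample if its observed behaviors score above the threshold."""
    score = sum(SUSPICIOUS_BEHAVIORS.get(ev, 0) for ev in observed_events)
    return ("block", score) if score >= BLOCK_THRESHOLD else ("allow", score)

# Example: a sample detonated in the virtual machine produced these events.
print(verdict(["creates_hidden_directory", "connects_to_flagged_ip"]))  # ('block', 6)
```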
Knowing The Environment
Behavioral analysis means scrutinizing every process and trying to figure out whether it’s
malicious for that specific situation, says Taasera’s Pollutro. The same process can be
considered legitimate in one environment and malicious in another, so context is
critical.
Understanding that context means administrators need situational awareness of their
networks. “Everything should be listened to,” Pollutro says. Even if the application is
signed with a valid digital certificate, administrators should not automatically assume it’s
safe. That way, if the application creates suspicious directories where they shouldn’t be,
that behavior would immediately be flagged.
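As a toy illustration of that kind of check, the polling loop below watches a directory tree for newly created subdirectories that are not on an expected list; the paths and the expected set are placeholders.

```python
import os
import time

WATCH_ROOT = "/opt/app"                               # placeholder path
EXPECTED_DIRS = {"/opt/app/logs", "/opt/app/cache"}   # directories the application may create

def current_dirs(root):
    found = set()
    for dirpath, dirnames, _ in os.walk(root):
        for d in dirnames:
            found.add(os.path.join(dirpath, d))
    return found

def watch():
    baseline = current_dirs(WATCH_ROOT)
    while True:
        time.sleep(30)
        new_dirs = current_dirs(WATCH_ROOT) - baseline
        for d in new_dirs - EXPECTED_DIRS:
            # Flag the creation even if the application is signed with a valid certificate.
            print("ALERT: unexpected directory created:", d)
        baseline |= new_dirs

if __name__ == "__main__":
    watch()
```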
Administrators also have to be able to make connections among events and systems that
may seem disparate. Being able to connect the dots across multiple sources — such as
individual application and server logs, as well as data collected by security information
and event management systems — is critical for network visibility, says Dudi Matot,
CEO and co-founder of Seculert. Malware generally leaves behind “fingerprints,” or
traces of its activities throughout the network, that can be tracked through the data
captured by various SIEM data collectors. Linking the individual crumbs together makes
it possible to track down unknown threats and understand what they have already done, he
says.
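A bare-bones sketch of the crumb-linking idea, assuming SIEM collectors export events as records with a host, a timestamp and a source; the five-minute correlation window is an arbitrary choice for the example.

```python
from collections import defaultdict

WINDOW_SECONDS = 300   # arbitrary correlation window for the example

def correlate(events):
    """Group events from different log sources by host and time window."""
    grouped = defaultdict(list)
    for ev in events:
        bucket = ev["timestamp"] // WINDOW_SECONDS
        grouped[(ev["host"], bucket)].append(ev)
    # A host that shows up in several independent sources in the same window is worth a look.
    return {key: evs for key, evs in grouped.items()
            if len({ev["source"] for ev in evs}) >= 2}

events = [
    {"host": "ws-042", "timestamp": 1000, "source": "proxy",    "detail": "POST to unknown IP"},
    {"host": "ws-042", "timestamp": 1120, "source": "endpoint", "detail": "new service installed"},
    {"host": "ws-077", "timestamp": 1300, "source": "proxy",    "detail": "software update"},
]
for (host, _), evs in correlate(events).items():
    print(host, "->", [e["detail"] for e in evs])
```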
Figure 2
Obfuscated JavaScript
Obfuscation takes a line of code and modifies it in such a way that the machine still executes it normally
but it looks like a string of random characters to the human eye.
Analyzing network traffic and profiling malware to understand its behavior go hand in
hand for defenders searching for evidence of malicious activity, Matot says.
Detecting Changes
Another way to detect zero-day malware is to focus on actual changes within the
network, as opposed to relying on SIEM and threat intelligence systems to collect
information about every process and analyze the data for patterns. If the
machine configuration has changed, or an anomaly is detected when the computer boots
up, administrators can be alerted that something may be wrong.
Unexpected configuration changes generally mean the asset has been compromised. A
monitoring system won’t be able to say what is wrong or what the malware is doing, but
by recognizing something has changed and quarantining it from the rest of the network,
administrators can buy some time to find the issue.
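A simple sketch of that kind of integrity check, hashing a handful of configuration files and comparing the result with a stored baseline; the monitored paths are placeholders.

```python
import hashlib
import json
import os

MONITORED_FILES = ["/etc/hosts", "/etc/resolv.conf"]   # placeholder paths
BASELINE_FILE = "config_baseline.json"

def snapshot(paths):
    """Hash each monitored file that exists on disk."""
    state = {}
    for path in paths:
        if os.path.exists(path):
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def check():
    current = snapshot(MONITORED_FILES)
    if not os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE, "w") as f:
            json.dump(current, f)          # first run: record the known-good state
        return []
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    # Any drift from the baseline is a reason to quarantine the host and investigate.
    return [p for p in current if current[p] != baseline.get(p)]

if __name__ == "__main__":
    changed = check()
    if changed:
        print("configuration changed, isolate this machine:", changed)
```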
The hardware-based Trusted Platform Module chip works in this manner. When the TPM
chip detects changes to a machine’s configuration, such as within the BIOS, it alerts
administrators to potential problems and takes action to isolate the computer, says Steven
Sprague, CEO of software vendor Wave Systems. A microchip installed on the
motherboard, TPM can store encryption keys, password information and specific
configuration information.
Figure 3
A Trusted Platform Module Primer
A TPM is a specialized chip on the computer’s motherboard that
authenticates the system. Here’s how it works:
• Authenticates the computer as a “known” and
“trusted” device; eliminates user passwords.
• Chip stores encryption keys, digital certificates and
passwords.
• If the BIOS or other components have been tampered
with, the chip can block bootup or send an alert.
• Can be used with any major operating system and in
conjunction with firewalls, antivirus systems, smart
cards and biometric scanners.
• Available from a number of vendors, and available on
most desktops and laptops from major PC makers.
Data: InformationWeek Reports
TPM captures data about a PC’s overall health, and it compares the current state of BIOS
to what it has saved to ensure that nothing has changed. The chip can detect that malware
has burrowed into the BIOS and the master boot record, something most antivirus and
other security products cannot do easily. For computers that have TPM enabled and
listening for changes, if a rootkit ever hits the system, the trusted chip would
automatically notice the change in the BIOS or MBR and lock down the machine,
Sprague says.
If any kind of malware on the machine ever tries to modify configuration data, the
computer can be instructed to immediately stop booting up or finish booting up but not be
able to join the network, Sprague says. If the malware modifies the machine so that its
configuration no longer matches what is saved within the TPM, the module can take
appropriate action, as well.
By isolating the system, administrators reduce the possibility the compromised machine
would infect other systems in the network, Sprague says. The isolation also means that
despite residing on the computer, the malware is unable to execute or communicate with
the remote server.
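To make the measurement idea concrete, the simulation below mimics how a TPM platform configuration register is extended: each boot component's hash is folded into a running value, so the final value matches the stored reference only if every component is unchanged. It is an illustration, not code that talks to a real TPM.

```python
import hashlib

def extend(pcr, measurement):
    """Fold a new measurement into the register, the way a TPM PCR extend works."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 32               # registers start zeroed at power-on
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

# Reference value recorded while the machine was in a known-good state.
good_boot = [b"BIOS image v1.2", b"master boot record", b"bootloader"]
reference = measure_boot(good_boot)

# A rootkit that patches the MBR changes the final measurement, so the mismatch is detected.
tampered_boot = [b"BIOS image v1.2", b"master boot record (patched)", b"bootloader"]
if measure_boot(tampered_boot) != reference:
    print("boot measurements do not match the stored baseline; lock down the machine")
```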
Known Good Vs. Known Bad
It’s always easier to identify what is trusted than it is to identify what is not trusted.
However, while the security industry has traditionally relied on blacklists, these lists are
becoming less effective because they are time-consuming to maintain as they grow. In
addition, the use of blacklists puts defenders in reactive mode and behind the curve,
security-wise. Attackers are constantly creating new malware, remote servers and URLs
to launch their attacks from, so there’s always a new “known bad” to add to the list.
Instead of trying to compile a comprehensive list of everything that’s bad, many
organizations are taking a step back to do the reverse: making a list of everything that’s
good.
Figure 4
Stealthy Web Malware Increasing
Web-based malware attacks that originate outside the target organization and
successfully evade traditional Web filters are increasing.
Similar to how whitelisting works for spam and Web filtering, application whitelisting
refers to a list of approved software and programs authorized to access network
resources. By restricting which programs can run on the network, the entire environment is
protected from unknown and unauthorized applications, says Dan Brown, security researcher at Bit9.
Traditionally, whitelisting has been used only for fixed-function devices, such as point-of-sale systems, where administrators specified the handful of applications that should be
available, says Brown. Today, there is more flexibility as organizations move toward
private software marketplaces with trusted software that employees can download and
install.
A user can still attempt to install an application not on the approved list. However, when
the application tries to execute, the network will notice that the software doesn’t match
any entries on the whitelist and automatically block it from running, Brown says. Most
whitelist systems also perform integrity checks, such as comparing the hash of the file
with what is on the whitelist to ensure that the original application hasn’t been
overwritten by malware with the same name or otherwise tampered with. Because there is
no way for software to execute if it isn’t on the approved list, even if the user downloads
malware onto the computer, the malicious file can’t do anything at all, Brown says.
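The core check can be sketched in a few lines: before a binary runs, its hash is looked up in the approved set, so a file that has been replaced or tampered with no longer matches even if it keeps the same name. The whitelist entry here is a placeholder.

```python
import hashlib
import os

# Placeholder whitelist: file name -> SHA-256 hash of the approved binary.
APPROVED = {
    "editor.exe": "9f2c0f1d",   # placeholder hash of the vetted build
}

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def may_execute(path, name):
    """Allow execution only if the file's hash matches the approved entry for its name."""
    expected = APPROVED.get(name)
    if expected is None or not os.path.isfile(path):
        return False                     # not on the whitelist at all, or file missing
    # A binary overwritten by malware keeps its name but no longer matches the recorded hash.
    return file_hash(path) == expected

if __name__ == "__main__":
    if not may_execute("/tmp/editor.exe", "editor.exe"):
        print("blocked: not on the whitelist or the file has been modified")
```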
Compiling the initial whitelist of all the applications users need for their day-to-day
operations can be time-consuming and challenging. However, in the long run it’s far
easier to keep up with changes because they tend to be infrequent. Blacklists, on the other
hand, can require updates every day.
Enter Big Data
Many organizations struggle to make use of the security information they are gathering.
This isn’t surprising, given that large organizations collect upward of 50 GB of security
data each day, according to a recent Enterprise Management Associates study of 200
organizations with 1,000 employees or more. For midsize and large enterprise
organizations, tens or hundreds of millions of logs are generated throughout the network
every day, and that doesn’t even include activity information, such as who accessed
which files and what processes are starting and stopping on servers.
Figure 5
Advantages Of Application Whitelisting
“Whitelisting,” or defining what applications can be installed and run, is one way to prevent malware from being installed on user systems.
• It’s easier to identify “known good” than “known bad.” It’s a shorter list and less likely to change day to day.
• If the application is not on the approved list, activity is automatically blocked.
• Not limited to fixed-function devices (such as POS systems); you can restrict what is installed on desktops and mobile devices.
• Businesses can establish an application marketplace with approved software users can install.
• Organizations can give users some choice by offering one or two alternatives for supported software.
Data: InformationWeek Reports
Considering the sheer volume of threat intelligence and forensic data being gathered to
give administrators full visibility over their networks, many vendors are harnessing the
power of big data analytics to make sense of what’s being collected and find anomalies.
Anomalies can take many forms. They can be as simple as users logging in from an
unusual geographic location, the same credit card being used in three widespread
locations within a short period or an employee ID logging in to a corporate system at an
unusual time.
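A toy version of that kind of anomaly spotting: learn each user's usual login hours and countries from history, then flag logins that fall outside them. The history records are invented.

```python
from collections import defaultdict

def build_baseline(history):
    """Learn each user's usual login hours and countries from past events."""
    baseline = defaultdict(lambda: {"hours": set(), "countries": set()})
    for ev in history:
        baseline[ev["user"]]["hours"].add(ev["hour"])
        baseline[ev["user"]]["countries"].add(ev["country"])
    return baseline

def is_anomalous(event, baseline):
    profile = baseline.get(event["user"])
    if profile is None:
        return True                                  # never seen this user before
    return (event["hour"] not in profile["hours"]
            or event["country"] not in profile["countries"])

history = [{"user": "alice", "hour": h, "country": "US"} for h in range(8, 18)]
baseline = build_baseline(history)
print(is_anomalous({"user": "alice", "hour": 3, "country": "CN"}, baseline))   # True
print(is_anomalous({"user": "alice", "hour": 10, "country": "US"}, baseline))  # False
```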
Security vendors are using big data platforms to take advantage of the customer systems
and networks to which they have access — in order to establish a baseline of “normalcy”
across the entire ecosystem. Once that baseline is established, it’s possible to add context
feeds from other sources to further fine-tune what normal means. Security teams need to
be able to analyze activity related to specific hosts, applications, users and networks to
figure out whether there are any variations from the norm.
Big data technologies can then be used to organize both structured and unstructured data
from multiple sources and analyze it to find events that differ from the baseline, Seculert’s
Matot says.
Malware tries to exfiltrate data from organizations by hiding its malicious activity among
normal data traffic. By analyzing event information and traffic flow data, patterns
indicating unusual activity can be identified. Seculert uses the big data platform Hadoop to
organize and analyze log data it receives from customers and its own extensive database
of threat research to identify signs of malicious activity.
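At production scale this analysis runs on a platform such as Hadoop, but the underlying logic reduces to an aggregation that is easy to show in miniature: total outbound bytes per host compared against that host's historical average. The flow records and the five-times threshold are illustrative.

```python
from collections import defaultdict

def bytes_per_host(flows):
    """Sum outbound bytes per source host across flow records."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src_host"]] += flow["bytes_out"]
    return totals

def exfiltration_candidates(today_flows, historical_daily_avg, factor=5):
    """Flag hosts sending far more data out than their historical daily average."""
    today = bytes_per_host(today_flows)
    return [host for host, total in today.items()
            if total > factor * historical_daily_avg.get(host, float("inf"))]

flows = [
    {"src_host": "ws-042", "bytes_out": 900_000_000},   # illustrative spike
    {"src_host": "ws-077", "bytes_out": 40_000_000},
]
avg = {"ws-042": 50_000_000, "ws-077": 45_000_000}
print(exfiltration_candidates(flows, avg))   # ['ws-042']
```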
Indeed, big data analytics can pick up where SIEM technologies leave off, giving security
teams the ability to dig deeper into the data. Investigative platforms such as RSA
NetWitness and Solera Networks combine full packet capture with analysis, making it
possible to understand what is on the network at the packet level.
AV Still Has A Place
There are some who say that antivirus systems actually help malware developers, by
giving them a platform to test their malware samples against and ensure that they can’t be
detected. Some people choose not to use antivirus systems at all, saying that they create a
false sense of security.
“Good luck with that,” Thompson says, pointing out that the majority of malware out in the
wild is known, so antivirus is still useful. However, he adds, antivirus should no longer
be an organization’s primary (or sole) form of defense.
AV vendors would agree. While they have not given up on signatures, pattern matching
and substring comparison, most of them have incorporated other types of scanning, such
as heuristics, behavior analysis and reputation-based scanning, to supplement signature
scanning, Brown says.
The antivirus community also is “really good” about information sharing, Ollmann says.
There are cloud-based environments dedicated to collecting threat data, whether it’s
private cloud services collecting and analyzing malware data from customers or
independent cloud services with honeynets collecting information about threats in the
wild. The centralized information sharing makes it easier for the AV community to
include reputation data that can be used to beef up security products.