Lessons Learned from Virus Infections
Internet worms are not the only threats that can be
addressed with information taken from a virus infection. Malicious
attackers will often exploit the same vulnerabilities used by the
worms, only manually, and these people specifically target an
organization in the hopes of stealing critical data or causing
widespread havoc. Moreover, Internet worms that were once simply an
irritant are now more likely to carry a backdoor or a Trojan, or to
open a session to the author's IRC server.
1. Viruses Beyond Control Limits
Even the most exuberant vulnerability auditor or
penetration tester will use safe, reserved methods when testing
production network hosts for holes. This makes sense; dropping a room
full of servers just to prove a DoS attack is possible may not make the
best impression on one's manager (i.e., what's the security ROI in
that?). Even penetration testing that intends to simulate a full-scale
attack may not be launched because of concerns over the impact to
production and its associated costs.
An unwanted virus infection can provide real insight into
the security of a network in ways that human-driven tests cannot. It
will attempt things that a careful penetration tester would not. It is
free from worrying about such things as whether all of your file
servers drop offline, whether you really needed those documents on
your hard disk, or whether the traffic it generates makes everyone's web
surfing slow. Moreover, a network worm is coded for one thing: exploiting
as many hosts as it can reach -- a worm's life depends on propagating
quickly. It will test for vulnerabilities in your network like no tool
can.
2. The Lessons Learned
In each case mentioned above, there is at least one
technical and one non-technical problem that needs to be examined. Each
type of problem requires a corresponding solution. Trying to address
non-technical issues with technical tools is often a frustrating game
of the proverbial "square peg in a round hole" for administrators of
all kinds. Security professionals know all too well that there are few
technical protections that a determined user can't undo if he hasn't
been educated. Similarly, a determined user or attacker will have
little problem evading poorly configured or under-engineered solutions.
Revisiting our first example, a Sasser outbreak, shows how an infection
can point to non-technical problems. In this case, if administrators
are rebuilding clients and are not aware of required patches, the
results of a vulnerability scan can be invalidated quickly. This is a
problem of information flow and configuration management -- and in
larger organizations, it can often be resolved with policy changes.
The lessons from such infections often do much to
shape an organization's tactics for layered defense as well.
Whenever a virus causes a disruption of service, the likely reaction by
management is to ask what happened, and why. An engineer can summarize
the vulnerabilities, point to each location, and make recommendations
as to which part of the network should be changed. In most cases, the
engineer presenting such recommendations will look at the costs
involved in each change, the effectiveness of each change, and the
future administration necessary to make the adjustments successful over
the long term. This is surprisingly close to the process of providing
ROI data to managers.
More often than not, traditional thinking dictates
that changes to a central choke point are more effective and
cheaper than touching every workstation, recalling mobile devices, and
so on. In other words, making a change to the firewall rules is a
better choice than installing a new filter on every single desktop,
provided each solution has a comparable level of success. The specific
step here is less important than the fact that a virus infection
may challenge the notions that engineers and managers alike had about
where the network was strongest and weakest. For instance, a Lovgate
outbreak within the protected LAN may expose the user/laptop policy as
being weak, as the firewall and mail relays would have properly
prevented infection via email or network shares. Depending on the costs
involved in cleaning up the infection(s), the resulting compromises may
serve as the catalyst needed to spend money on education and better
client-side security tools.
If MyDoom managed to spread across your network, it is likely
that the mail relays were not dropping attachments of EXE, COM, BAT,
CMD, PIF, SCR, or ZIP files. If there is a business case for
distributing these files, then another layer of the defenses will need
to be reinforced, such as user training. If no training is possible
(because of budget concerns, time constraints, or the size of the
organization), then the gateway and client-side AV
software will need to be tight -- as it is all that is left to combat
this threat.
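
To make that first layer of filtering concrete, the following is a
minimal Python sketch of attachment-extension screening at a relay or
gateway. It is not any particular mail product's configuration syntax:
the extension list simply mirrors the file types named above, and the
stdin-based entry point is an assumption about how a filter hook might
invoke such a check.

    # Minimal sketch: flag mail attachments whose extensions are commonly
    # abused by mass mailers such as MyDoom. The extension list and the
    # stdin entry point are illustrative assumptions, not a specific
    # product's filtering configuration.
    import email
    import sys
    from email import policy

    BLOCKED_EXTENSIONS = {".exe", ".com", ".bat", ".cmd", ".pif", ".scr", ".zip"}

    def has_blocked_attachment(raw_message: bytes) -> bool:
        """Return True if any attachment filename ends in a blocked extension."""
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        for part in msg.walk():
            filename = part.get_filename()
            if filename and any(filename.lower().endswith(ext)
                                for ext in BLOCKED_EXTENSIONS):
                return True
        return False

    if __name__ == "__main__":
        # Exit non-zero so a calling filter could reject or quarantine the message.
        sys.exit(1 if has_blocked_attachment(sys.stdin.buffer.read()) else 0)

In practice, most commercial and open-source relays can perform this
screening natively, which is usually preferable to a custom script; the
sketch is only meant to show how simple the rule itself is.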
3. Detection and Alert Mechanisms
If network worms completely blindside the network
several times a year, there is likely a need for better detection
tools. In very large organizations with thousands of clients it may be
difficult to keep all client AV software updated and running properly,
particularly with a large mobile workforce that has personal firewalls
on each machine. It is always wise to have another line of viral
defense in front of the clients, and larger organizations tend to
employ a second AV vendor's tool at the gateway and/or an IDS with worm
recognition features.
Many security professionals have debated the use of an
IDS to detect viral activity. One's personal beliefs in this matter
notwithstanding, an existing (or inexpensively built) IDS can always
improve worm detection and mitigation efforts. Although it is not the
core competency of such a device, many IDS platforms allow for quick
and customizable virus signature additions. Furthermore, by its very
nature, the IDS is in a good position to identify worms as it needs to
inspect every packet traversing the network. Also, an IDS can see a
worm propagating from clients that don't have their AV client running
properly (or running at all), something that even the best AV
management console can't provide.
Virus signatures are written and published constantly on
sites like Bleeding Snort, and although they are presented with a
number of warnings about false positives, they should be more than
enough of a foundation to build a generic worm detection machine. One
thing that multiple worm infections have likely taught every
administrator is that a worm has to do a lot of reconnaissance to
spread quickly. Blaster and Sasser were certainly not trying to
emphasize stealth with routines that open up to 1024 threads to scan
for new hosts (example: Sasser.C). Basic anomaly detection would have
triggered alerts for such activity.
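
As a rough illustration of that kind of anomaly check, the short
Python sketch below counts how many distinct destinations each internal
host contacts within a log window and flags anything that looks more
like worm scanning than normal use. The whitespace-separated log format
(timestamp, source, destination, destination port) and the threshold
value are assumptions made for the example, not the output of any
particular IDS or flow collector.

    # Minimal sketch of fan-out anomaly detection: a host that suddenly talks
    # to hundreds of distinct destinations is behaving like a scanning worm
    # (Blaster, Sasser), not a typical workstation.
    # Assumed log format per line: <timestamp> <src_ip> <dst_ip> <dst_port>
    import sys
    from collections import defaultdict

    FANOUT_THRESHOLD = 100  # distinct destinations per window; tune to the network

    def scan_suspects(lines):
        destinations = defaultdict(set)
        for line in lines:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip malformed lines
            _timestamp, src, dst, _dport = fields[:4]
            destinations[src].add(dst)
        return {src: len(dsts) for src, dsts in destinations.items()
                if len(dsts) >= FANOUT_THRESHOLD}

    if __name__ == "__main__":
        for src, count in sorted(scan_suspects(sys.stdin).items()):
            print(f"possible scanning host {src}: {count} distinct destinations")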
Mass mailers, of course, have to send a lot of messages out to
compromise additional hosts. That means detecting TCP 25 activity from
hosts that are not designated SMTP servers or relays can tip an
administrator off to an attack before it gets too far out of hand. File
share worms (such as the vector included with the Lovgate variants) are
likely to require more specialized signatures, ones whose content field
is drawn from the worm binary itself. However, IDS detection is
certainly capable of pointing out a large number of failed logins to
SMB resources, an anomaly that often indicates a worm is trying a weak
set of logins and passwords against a host in an effort to access the
machine and propagate.
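
The same kind of log data can drive the mass-mailer check just
described. The sketch below is a minimal, assumed example rather than a
real IDS rule: it flags internal hosts opening TCP 25 connections when
they are not on the list of sanctioned mail relays, and the relay
addresses and log format are placeholders.

    # Minimal sketch: flag hosts that open outbound TCP 25 sessions but are
    # not sanctioned mail servers/relays -- classic mass-mailer behavior.
    # The relay addresses and the log format (<timestamp> <src_ip> <dst_ip>
    # <dst_port>) are hypothetical values used only for illustration.
    import sys

    KNOWN_MAIL_RELAYS = {"10.0.0.25", "10.0.0.26"}  # hypothetical relay addresses

    def rogue_smtp_sources(lines):
        suspects = {}
        for line in lines:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip malformed lines
            _timestamp, src, _dst, dport = fields[:4]
            if dport == "25" and src not in KNOWN_MAIL_RELAYS:
                suspects[src] = suspects.get(src, 0) + 1
        return suspects

    if __name__ == "__main__":
        for src, count in sorted(rogue_smtp_sources(sys.stdin).items()):
            print(f"non-relay host {src} opened {count} TCP 25 connections")

A similar counter keyed on failed SMB authentication events, pulled
from file server or domain controller logs, would cover the file share
vector in the same spirit.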