What follows is a special guest post from Chris Todd.
We’ve all heard plenty about the pwnage of HBGary in what has to be the security fail of the year (so
far!). Sure, there are plenty of runners-up and those deserving honorable mention like Comodo, RSA,
Epsilon, and most recently the State of Texas. There were no doubt basic security practices overlooked
in each of those fails. However, the lack of detail around most makes it hard to rank them ahead of
HBGary in the fail awards.
In HBGary we have a security company that sells its services to various three-letter federal agencies, but
then gets totally pwned for mouthing off at “the Anonymous hive.” And because Anonymous was intent
on embarrassing them, detail of the attack was readily shared as seen in the excellent article by Peter
Bright at http://arstechnica.com/tech-policy/news/2011/02/anonymous-speaks-the-inside-story-of-the-
hbgary-hack.ars/. Granted, reading some of HBGary Federal CEO Aaron Barr’s interactions with those
both inside and outside his organization, one can’t help but think he was enough fail all on his own.
However, as you look through the various facets of this attack, one of the key lessons to be learned is
this: Everyone is on the security team.
As we walk through the attack, you can see at least 5 distinct groups who contributed to HBGary’s epic
fail. And now, in the general order of fail, these groups are:
1. Management who decided on a custom-built CMS
The hbgaryfederal.com web site used a custom-built content management system (CMS) from a third-
party company. There are a plethora of COTS products used for blogging, news sites, and the like that
have the benefit of a large user base. This in no way makes them problem free, but it does mean they
are more likely to be thoroughly tested for vulnerabilities than a one-off, custom-built product. Even if
the vendor isn’t the best at testing it themselves, there is still a better chance another user or security
researcher will find vulnerabilities, report them, and the vendor will fix them before they hurt you.
Perhaps there was a good reason HBGary Federal management decided on a custom-built CMS, but the
fact remains that decision was the entry point for the Anonymous attack. Had they gone with a mature,
more secure product, this entire fiasco may have been avoided. Anonymous surely would have exacted
their revenge somehow, but perhaps not with ease or to the degree they did. Every application in your
environment, however seemingly insignificant, must be viewed as a potential target or entry point into
your environment and treated as such from product evaluation to production operation and everywhere
in between.

[Image: ship fail – “You are going to like where this is going!”]

2. Developers/DBAs who built the CMS
The custom-built CMS had a gaping SQL injection hole; the exact URL used to break in was
not terribly complex. This allowed the attackers to grab the user database containing usernames, email
addresses and password hashes. Now the designers of the CMS were not totally clueless when it came
to security – the passwords were not stored in clear text. However, the password hashes were simple
MD5. No iterative hashing. No salting. Bring on the rainbow tables! More on passwords in a moment.
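For the record, salting and iterative hashing are not exotic. Here is a minimal sketch of what they look like using nothing but Python’s standard library PBKDF2 (the iteration count is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    """Return (salt, hash) using salted, iterative PBKDF2 instead of bare MD5."""
    salt = os.urandom(16)  # unique per-user salt defeats precomputed rainbow tables
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk

def verify_password(password, salt, expected, iterations=200_000):
    """Recompute the hash with the stored salt and compare in constant time."""
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)
```

Even if the user table leaks, each hash now has to be attacked individually (because of the salt) and slowly (because of the iterations), instead of being looked up in a table.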
Developers/DBAs were combined here since it’s entirely possible there were no actual DBAs involved.
It’s pretty simple to just fire up a database and start dumping data in it. Whoever was involved, they
were not sufficiently trained in secure coding practices and database management, or were just
careless, or maybe a little from column A and a little from column B. As with the decision to use a
custom-built CMS, the attack could have been stopped before it started. There is no shortage of
guidance in this area – just check out OWASP. And send your developers there as well.
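The core of that guidance is simple: never concatenate user input into SQL. A minimal sketch of the flaw class and its fix, in Python with SQLite (the table, columns, and data are invented for illustration; the real CMS was presumably something else entirely):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, pass_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('aaron', 'deadbeef')")

def get_user_vulnerable(username):
    # DON'T: attacker-controlled input concatenated straight into the SQL string
    return conn.execute(
        "SELECT * FROM users WHERE username = '" + username + "'").fetchall()

def get_user_safe(username):
    # DO: parameterized query; the driver keeps data and SQL separate
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)).fetchall()

# A classic injection string dumps every row from the vulnerable version...
print(get_user_vulnerable("' OR '1'='1"))   # returns all users
# ...but is treated as a plain literal (matching nothing) in the safe one
print(get_user_safe("' OR '1'='1"))         # returns []
```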


[Image: “He has ‘developer’ written all over him”]

3. Security or test team who didn’t find the flaw first… or management again?
I’m actually not sure what the title of this one should be, so perhaps this group is not so distinct. Was
it the security or test team that missed the SQL injection flaw in their testing? Did they lack the proper
training (even in a company that sells security)? Were they just careless? Or was it someone higher up
that didn’t bother to ensure the work of this third party was validated, i.e. no security or test team ever
looked at the CMS? Whatever it was, someone, or multiple someones, seriously messed up here.
I thought about also calling out the security team for not catching the attack in time. But let’s be honest,
how many organizations actually have a security team that could detect and stop an attack like this
within the few weekend hours it took Anonymous to execute it? Now consider that after the
initial SQL injection, the targets were spread across various locations, including the cloud (ugh, can’t believe
I just used that word) that is Google Apps. Steps 2-6 of the incident handling process – identification,
containment, eradication, recovery, and lessons learned – are crucial, but they’re not going to bail you
out if you skip step 1 – preparation.
4. CEO and COO
Pop quiz:
Question 1 – Your password should be kept short and simple so it’s easy to remember. True or False?
Question 2 – That simple password should be used everywhere so you never forget it. True or False?
If you answered “true” to either of those questions, please stop reading now. It’s time to shut down
your computer, pack it up, return it to the store, and never touch a computer again. It will be better
for all of us. Unfortunately, despite knowing what SHOULD be done with passwords, what IS done with
passwords is often a very different story. This was the case with both HBGary Federal CEO Aaron Barr
and COO Ted Vera. Password hashes taken from the CMS system were easily cracked for both users.
Why? Because they were short and simple – six lower case letters and two numbers. Longer, more
complex passwords are unlikely to be found in rainbow tables even if stored as a simple MD5 hash, but
eight simple alphanumeric characters? No prob! It gets worse. They both used the same password to
access their email, Twitter and LinkedIn accounts. Beyond that, Ted’s password gave the attackers ssh
access to support.hbgary.com (more on this in the next item). Aaron’s password, however, proved to be
the real jackpot. He was not only a user, but also an admin of HBGary’s Google Apps email service. The
attackers could now reset any user’s password and read their email. Or impersonate them in a social
engineering attack. Or download and torrent it all. Or all of the above which is exactly what Anonymous
did with HBGary CEO Greg Hoglund’s email.
The lesson here is pretty obvious: use long, strong passwords and don’t share them across systems!
Following those two simple rules may have limited the damage to simple website defacement.
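The arithmetic behind “no prob!” is worth seeing. A quick back-of-the-envelope comparison in Python (the only inputs are character-set sizes; “precomputable” here means comfortably within the reach of published rainbow tables):

```python
# Keyspace of 8 characters drawn from a-z plus 0-9, like the cracked passwords
lower_digits_8 = 36 ** 8
# Keyspace of 14 characters drawn from a-z, A-Z, and 0-9
mixed_14 = 62 ** 14

print(f"{lower_digits_8:.2e}")  # ~2.82e+12 -- precomputable
print(f"{mixed_14:.2e}")        # ~1.24e+25 -- trillions of times larger
```

Length and character variety multiply; an unsalted MD5 hash of an eight-character alphanumeric password is essentially a dictionary lookup, while the larger keyspace above is far beyond any precomputed table.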
5. System administrators
A couple of key fails on the part of the support.hbgary.com sys admins made COO Ted Vera’s cracked
password more useful than it should have been. First, that password shouldn’t have provided external
ssh access. Period. At a minimum, a public/private key pair should have been required for this type of
remote access. However, even this wouldn’t have been so bad if not for the second fail – an unpatched
privilege escalation flaw. It’s not that a patch wasn’t available; it was released for most systems in
November 2010. But in February 2011 it still wasn’t applied. Maybe the problem was that the patch was
only rated as “important”? Many sys admins will read “important” as “not critical, got other stuff to do.”
Important is a typical rating from vendors for privilege escalation flaws. Can we give an honorary fail to
the vendors here? I digress. Anyway, with root in hand, the attackers promptly purged the many
gigabytes of backups and research data they could access.
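Requiring a key pair for remote access amounts to a few lines of server configuration. A sketch of the relevant OpenSSH sshd_config settings (these option names are standard OpenSSH; the values shown are a hardened baseline, not HBGary’s actual configuration):

```
# /etc/ssh/sshd_config (server side): require key-based auth, no passwords
PubkeyAuthentication   yes
PasswordAuthentication no
PermitRootLogin        no
```

With PasswordAuthentication off, Ted’s cracked password would have been worthless for ssh; the attackers would have needed his private key as well.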
Another sys admin fail came courtesy of Jussi Jaakonaho, Chief Security Specialist at Nokia and an
admin for Greg Hoglund’s rootkit.com web site. After a few social engineering emails (using Hoglund’s
compromised account), the attackers had root access to this system as well, stealing email addresses
and password hashes (again simple MD5) for everyone who’d ever registered on the site.
The key fail term to remember here is two-factor. It applies to the two-factor authentication that could
have prevented access to support.hbgary.com and the two-factor (or one indisputable factor such
as in person) verification of a person requesting a password reset that could have prevented access
to rootkit.com. These are pretty simple to implement so neither of these compromises should have
happened. In fact, even the email compromise could have been limited if two-factor authentication
(offered to Google Apps customers since September 2010) had been in place for at least admin access to
the service.
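For the curious, the time-based one-time codes behind that kind of two-factor login are defined in RFC 6238 and are simple enough to sketch in a dozen lines of Python. This is for illustration only, not production use:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password from a shared secret."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds and depends on a secret the attacker never sees, a cracked or reused password alone no longer gets you in.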


[Image: “Somehow… we should have seen this coming.”]

Pwnage complete.
Did we miss anyone? Perhaps the janitor, but how he could be socially engineered into lending his
swipe card is a topic for a different day. We clearly see from HBGary’s misfortune that security really is
everyone’s responsibility. Had any one of the groups above done their job correctly, the damage could
have been limited.
Everyone having knowledge of basic security practices is necessary, but it’s not enough. You all passed
the pop quiz, right? I’m sure Aaron and Ted would too.
Everyone needs to understand why what they do really matters to an organization’s overall security
posture. Ask the CMS developers or whoever decided to go with a custom-built CMS in the first place if
they believe this.
Everyone needs to appreciate that one careless or lazy move on their part, especially when combined
with careless or lazy actions of others, can have dire consequences. Configuring ssh to use public key
cryptography takes about 2 minutes per user. A phone call takes 1. Would that have been too much to
ask of the sys admins?
Everyone needs to act with the care and rigor of a finely tuned security team.
Everyone needs to appreciate that they play an integral part in securing their organization.
Everyone is on the security team.

About the author