San Diego Gaslamp Quarter,
across from the Convention Center
ToorCon has two session tracks, so I didn't catch everything, and I didn't take notes for all sessions.
These notes are mine, so I may have misrepresented the speakers' remarks—but bits happen!
DVDs of this year's presentations are available from
(not listed on the website).
I bought seven of their DVDs from previous, remote cons on subjects of interest—who says I don't have a life?—that'll show them.
October 2009, San Diego, California
Toorcon organizers Tim Huynh (nfiltr8), George
Spillman (geo), and David Hulton (h1kari)
Dan Kaminsky (rt.). Geo (lt.)'s T-shirt says
"I slept on Kaminsky's couch and all I got
was this pillow case and towel"
How far can ubiquitous computing get?
We started with embedded systems in the 1980s, then network embedded systems.
Now "nodes" know where they are, what they are,
and build ad hoc networks.
Having several distributed nodes can produce spectacular output effects—one can program walls, furniture, and maybe even insects and dust.
Problems include power for high duty-cycle nodes and cleaning up "node guano" (dead infrastructure nodes).
Wearable nodes can be a communication endpoint and be used in augmented reality (virtual environments).
Cyberspace leaks into the real world.
Reality and the real world become a "database"—with easier updates.
Physical failures and attacks will increasingly come from computer, software, and communication failures.
Big problem in this world is identity theft.
With ubiquitous computers, physical world becomes as volatile as the financial world.
Currently, you can rely on things like gravity and air pressure.
As controls become software-based, they become less stable.
Vulnerabilities (0-day) get the press, but according to Verizon Business, 60% of real world losses are not from software vulnerabilities, but failed authentication technology.
Examples: no passwords, bad passwords, default passwords, stolen passwords, and my passwords (Dan's account was recently hacked, as was well-publicized).
What can we do? Here are the schools of thought:
Reality check. Business does "care enough" about authentication and has invested a lot in it. But something isn't working. Dan thinks it's X.509, the core technology of PKI, that's broken.
Another proposal is creating a second Internet from scratch and not screwing it up.
This is hopeful, but naïve.
It will be a never-built, never-used theoretical pile of junk.
X.509 intro. It's the identity system behind PKI, used for SSL and IPsec (but not SSH).
With X.509, public keys and names are signed by trusted certificate authorities (CAs).
Others validate against CA-issued certificates.
X.509 has one real-world success case—SSL (HTTPS).
How do you get an X.509 certificate?
DNS is very good at excluding—there is only one DNS root. Verisign exclusively controls .com and Afilias exclusively controls .org.
But with X.509, this is not possible—any X.509 CA can screw things up, and bad CAs have no accountability when they do (unlike DNS registrars).
Interoperability is not optional—you have to authenticate not only within your organization, but to other organizations.
However, X.509 cannot delegate (without great pain).
A new X.509 certificate always requires going back to an external CA.
X.509 version 3 has something tacked on ("Name Constraints") to allow delegation, but it is not well supported in the field.
DNS, OTOH, delegates very well.
X.509 allows private CAs, which cannot be done securely without Name Constraints, but some CAs hand out "god-level" intermediate CAs anyway.
Dan then talked about specific X.509 attacks, but I didn't have time to write them down. One involved lying to a CA (claiming www.live.com is an "internal server") and getting a cert for it. Another involved using insecure MD5 hashing (still used by the RapidSSL CA).
X.509 is very fragile, but has been around 15 years.
MD2 (less secure than MD5) is also still supported by X.509. MD2 isn't used anymore, so in theory it shouldn't be a problem.
Verisign's self-signed root certificate uses MD2 and doesn't expire until 2098.
The root certificate shouldn't need to be self-signed, because it is trusted simply because it's trusted.
The self-signature is completely meaningless crypto—you are basically saying "I am me, says I".
It only gives the appearance of more security, but actually provides less (because of the insecure MD2 signature).
Also, Verisign was still issuing MD2-hashed certificates up to 1998.
MD2-hashed certificates can be attacked by appending data to an old certificate and getting the same MD2 hash (similar to an MD5 attack); one can create a new intermediate CA certificate this way.
Also, brute-force attacks on MD2 are getting easier.
Most CAs are now reissuing certificates that don't use MD2.
But unexpired certs are still around (although not issued any longer),
and you can extend the date of expired certs.
The problem can be preemptively addressed by dropping support for MD2.
Self-signed CAs and certificates. OpenSSL makes this very easy to do.
OpenSSL has protections against multiple common names (CNs), but none of them are enabled by default :-).
Which CN is used if there are multiple CNs?
In OpenSSL the first CN wins, in CryptoAPI all CNs match,
and in Firefox/NSS the last CN wins.
The RFC says the "most specific" CN wins!
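For illustration, here's a sketch (my own, not from the talk) of how those three policies can disagree on one multi-CN subject; the hostnames are made up:

```python
# Three "which CN wins?" policies applied to one hypothetical multi-CN subject.
subject = [("CN", "www.bank.example"),
           ("O", "Some Org"),
           ("CN", "www.attacker.example")]

cns = [value for attr, value in subject if attr == "CN"]

first_cn = cns[0]    # OpenSSL-style: first CN wins
last_cn = cns[-1]    # Firefox/NSS-style: last CN wins

def any_cn_matches(host: str) -> bool:
    """CryptoAPI-style: every CN in the subject is acceptable."""
    return host in cns

print(first_cn, last_cn, any_cn_matches("www.attacker.example"))
```

The point is that a certificate one stack rejects, another may happily accept.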
X.509 is encoded in ASN.1, which is a travesty. ASN.1 is complex—it has 17 string types and 3 integer types. It's very easy to crash under fuzzing, and it has been used as a SQL-injection channel.
ASN.1 uses OIDs; 2.5.4.3 is the OID for a CN.
Can attack with leading zeros, such as 2.5.4.0003. Works with IE (but not Firefox).
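My own sketch of why the leading-zero trick can work: DER requires each OID arc to be encoded in minimal base-128 form, but a lenient decoder also accepts a padded encoding of the same arc:

```python
# DER OID body decoding sketch (mine). Each arc after the first octet is
# base-128, high bit = "more bytes follow". A strict encoder never emits a
# leading 0x80 pad byte; a lenient decoder accepts it anyway.
def decode_arcs(body: bytes):
    """Decode the base-128 arc list from an OID body (after the first octet)."""
    arcs, cur = [], 0
    for b in body:
        cur = (cur << 7) | (b & 0x7F)
        if not (b & 0x80):
            arcs.append(cur)
            cur = 0
    return arcs

minimal = bytes([0x55, 0x04, 0x03])        # 2.5.4.3 (CN); 0x55 = 40*2 + 5
padded  = bytes([0x55, 0x04, 0x80, 0x03])  # same final arc 3, non-minimally padded

print(decode_arcs(minimal[1:]))  # [4, 3]
print(decode_arcs(padded[1:]))   # [4, 3]
```

Two byte-wise different encodings decode to the same OID, so a byte-comparing component and a decoding component can disagree about what they saw.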
Most CAs extract a CN and throw away all the rest (which is good).
NUL character exploit in CN has been fixed in browsers.
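As a reminder of how that exploit worked, here's a small sketch (mine, with made-up hostnames) of the string the attacker requested versus what a C-style comparison saw:

```python
# The CN an attacker asked the CA to sign (hypothetical names):
cn = "www.paypal.com\x00.attacker.example"

# The CA validated ownership of the full string, whose registrable suffix
# (.attacker.example) the attacker really did control:
ca_view = cn

# A strcmp()-style check in the browser stopped at the NUL byte:
browser_view = cn.split("\x00", 1)[0]

print(browser_view)                      # www.paypal.com
print(browser_view == "www.paypal.com")  # True
```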
What to do?
I asked Dan what he thought the solution was, given that he didn't propose a solution.
He said the solution was to use DNS with DNSSEC, then to use CAs only to provide real-world name validation.
Use a hybrid solution.
Don't use CAs to validate the name—use DNSSEC for that; DNSSEC is very good at validation and delegation.
CAs are useful as "boots on the ground" to provide semantic validation that a domain name is really tied to a specific entity
(e.g., that bank-of-america.com is valid).
Once a DNS name is validated (with DNSSEC), use a CA to tie the DNS name to the real world.
OK, I later asked Dan that given the horrible complexity of deploying DNSSEC, why would any sane person propose this as a solution?
He said the spec is horribly complex, but like many specs, you don't have to implement all of it (such as the myriad of keys).
The first step is to have the root, then the TLDs deploy DNSSEC--this greatly simplifies implementing DNSSEC for lower-level domains.
Then, BIND and other DNS implementations need to provide signatures on the fly (rather than the current requirement of pre-processing the entire textual database).
When you set up a web application, it's common to point to a 3rd-party server with a subdomain, e.g., someremotewebapp.mydomain.com.
So, if one can exploit that third-party server, one may be able to take advantage of the greater "trust" of anything under mydomain.com by other mydomain.com apps.
Often, one subdomain trusts all the other subdomains (this doesn't have to be the case, but is easier to setup and configure).
The third-party server is commonly a server serving banner ads.
Often old DNS records are lying around that can be reused or pointed to some malicious location.
One exploit is setting cookies (if not yet set for a client session).
This method was used to exploit nytimes.com, Gmail, Expedia, and Facebook.
Flash exploits (via the cross-domain XML policy). Commonly, administrators whitelist *.mydomain.com, allowing any SWF file to run from another subdomain.
Basic problem is humans—people trust domain names, specifically subdomains.
Useful exploits include phishing, spreading malware, XSS attacks, and clicking on legitimate-looking subdomain links.
Prevent subdomain attacks by restricting cookies to a specific domain, auditing DNS records (and purging obsolete ones), not pointing DNS at third-party apps (unless they are really trusted), and not hosting user content on your domain.
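The cookie-restriction advice can be sketched with Python's stdlib (my example; the hostnames are made up):

```python
from http.cookies import SimpleCookie

# Scope a session cookie to one host (a hypothetical app.example.com)
# rather than the whole registrable domain.
c = SimpleCookie()
c["sid"] = "abc123"
c["sid"]["domain"] = "app.example.com"  # not ".example.com", which every subdomain would share
c["sid"]["path"] = "/"
c["sid"]["httponly"] = True             # also keep it out of reach of page scripts

header = c["sid"].OutputString()
print(header)
```

Leaving the domain attribute at the parent (`.example.com`) is exactly what lets a compromised sibling subdomain read or plant the cookie.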
Botnets (robot networks) or "zombie armies" exist because they make money.
They make money using botnets to send spam.
The spam typically has a link that a few spam receivers click on, which then directs the customer to affiliate websites (say for Viagra) and the spam sender receives the money.
Older spam methods, such as open SMTP relays, are not a problem any longer.
Evolution of botnets. A large botnet requires a Command and Control (C&C) channel; early botnets used channels on (legitimate) IRC servers.
It was easy to find botnet channels just by looking at IRC traffic.
So spammers then set up their own (evil) IRC servers just for C&C channels.
Admins then blocked IRC, so they migrated to proprietary peer-to-peer (P2P) protocols and networks (such as W.A.S.T.E.).
Botnets next evolved to use HTTP with distributed web servers for C&C—this is difficult to block or shut down.
The C&C server is hidden with interchangeable web proxies.
This architecture looks like organized crime—the boss is hidden.
Spammers use Fast-Flux DNS to quickly switch C&C servers among several distributed servers. The A (address) record is not fixed—it points to say a thousand separate servers.
They can't be blocked because the addresses are spread across several networks.
Double Fast-Flux also changes the name server (NS) record, not just the A record.
This requires a cooperating registrar (most registrars frown on quickly-changing NS records).
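A toy detection heuristic (my own illustration, not from the talk): repeated lookups of a fast-fluxed name keep turning up new A records, while a normal name stabilizes:

```python
# Flag a name as fast-flux if successive DNS polls keep returning new
# A records. The record sets below are fabricated, using documentation
# IP ranges (192.0.2.0/24, 198.51.100.0/24).
def looks_fast_flux(polls, threshold=10):
    """polls: list of A-record sets from successive lookups of one name."""
    seen = set()
    for records in polls:
        seen.update(records)
    return len(seen) >= threshold

stable = [{"192.0.2.1", "192.0.2.2"}] * 5
fluxing = [{f"198.51.100.{i}", f"198.51.100.{i + 1}"} for i in range(0, 20, 2)]

print(looks_fast_flux(stable))   # False
print(looks_fast_flux(fluxing))  # True
```

Real detectors also weigh TTLs and AS diversity, but the core signal is the same: the answer set never settles.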
Kademlia Botnet Architecture.
Computers self-organize, identifying themselves with random numbers.
They build up a closeness-relationship among each other (by hops)
stored in a distributed hash table (DHT).
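The closeness relationship Kademlia uses is XOR distance between node IDs; a minimal sketch (mine):

```python
# Kademlia-style closeness: the distance between two node IDs is their XOR,
# and each node keeps its contacts ordered by that distance.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

my_id = 0b1011
peers = [0b0011, 0b1010, 0b1111, 0b0100]

# Contacts sorted nearest-first, as a DHT lookup would want them.
closest = sorted(peers, key=lambda p: xor_distance(my_id, p))
print([f"{p:04b}" for p in closest])  # ['1010', '1111', '0011', '0100']
```

XOR makes the metric cheap, symmetric, and unique per pair, which is why the bots can self-organize with nothing but random IDs.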
Four-level Storm Botnet had workers (infected home computers sending spam) and proxies (also infected home computers). These proxies talked to a backend C&C proxy, which then talked to one of a set of distributed C&C servers.
This is similar to the Russian missile train architecture—the missile trains were located at random, changing locations, so couldn't be taken out.
The bottom two layers are coordinated using Overnet.
A "Breath of Life" (BoL) server would promote worker computers (infected home computers) to the next level, proxy computers, when the BoL found out the worker computer had good connectivity and a static IP.
The Waledac Botnet was created after Storm (which was too easy to track).
The communication between botnet layers was encrypted.
Top two layers use bullet-proof hosting (from cooperating ISPs).
The encryption uses AES, but is not perfect—everything uses the same key, which changes periodically.
The Internet is randomly scanned to find hosts.
Basically impossible to shut down.
The protocol uses unions (instead of structs) to obscure itself,
plus signed/unsigned math, byte swapping, and variable-length numbers.
This makes it hard to emulate in scripting languages (you must use C or assembly).
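A small Python illustration (mine) of the byte-order and signedness ambiguity such a protocol can lean on: the same four bytes decode to three different numbers depending on how you read them:

```python
import struct

# One four-byte field, read three ways. A parser that guesses wrong about
# endianness or signedness gets a completely different value.
raw = b"\xff\xff\x00\x01"

(as_be_unsigned,) = struct.unpack(">I", raw)  # big-endian unsigned
(as_le_unsigned,) = struct.unpack("<I", raw)  # little-endian unsigned
(as_be_signed,)   = struct.unpack(">i", raw)  # big-endian signed

print(as_be_unsigned)  # 4294901761
print(as_le_unsigned)  # 16842751
print(as_be_signed)    # -65535
```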
Nth-party software is 3rd-party software "gone bad."
That is, software adopted and modified by yet another party.
It's often a problem for security because of the extra time required to get a fix out.
Common examples of nth-party software are Winamp and software on Linksys routers.
Email gives a broad attack surface due to varying message formats and wide-range of file attachments.
They investigated RIM's BlackBerry BES software and (platform-independent) Good Mobile Messaging Server (GMMS), which are two common remote email enterprise solutions.
They found enterprise FTP sites (e.g., IBM) with useful information on GMMS software structure and its configuration.
GMMS and Outside In. They found GMMS uses Oracle's (formerly Stellent's) "Outside In" library, which filters 400+ file formats. No way is this vulnerability-free, so Josh focused on this library.
Easy to fuzz using standalone trial format conversion software from Oracle.
Focused on Excel 97 format (vsxl5.dll), which has lots of problems.
Found one and was able to build an exploit by sending an Excel attachment
and having one user open it.
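A byte-flipping fuzzer of the kind that finds such bugs can be sketched in a few lines (my illustration; the seed is just OLE2 magic bytes plus filler, not a real .xls):

```python
import random

# Minimal byte-flip fuzzer sketch: each test case is the seed file with a
# few distinct bytes XOR-flipped, then fed to the target converter.
def mutate(seed: bytes, n_flips: int = 4, rng: random.Random = None) -> bytes:
    rng = rng or random.Random()
    data = bytearray(seed)
    for i in rng.sample(range(len(data)), n_flips):  # distinct positions
        data[i] ^= rng.randrange(1, 256)             # nonzero XOR guarantees a change
    return bytes(data)

seed = b"\xd0\xcf\x11\xe0" + b"A" * 60  # OLE2 magic plus filler, .xls-style
case = mutate(seed, rng=random.Random(1))
print(len(case) == len(seed), case != seed)  # True True
```

Crude as it is, this strategy works well against complex parsers like a file-format conversion library, where almost every field is an opportunity for a bounds bug.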
BlackBerry BES. Sean focused on BlackBerry's BES (BlackBerry Enterprise Server) software.
BlackBerry routers are inside the firewall.
Runs as a Windows service.
Sean chose reverse-engineering rather than fuzzing to find an exploit.
Found BES's conversion software is Arizan's AirDoc Library (acquired by RIM).
AirDoc uses zlib, ImageMagick, and PDF.
Sean thought PDF was probably the easiest to exploit, so focused on that.
Found one via an uninitialized table.
The RIM guys were relatively responsive.
The GMMS folks were not responsive and ignored the problems reported.
They recommend vendors that include nth-party software in their email server software follow these recommendations:
Julia showed low-level examples from her research of various kinds of malware.
I couldn't follow it all due to the detail.
First, she explained PDF and other file format exploits.
PDF file format is similar to Postscript without loops or variables.
PDF has objects that point to each other.
To analyze, install xpdf (don't use the incomplete pdftosrc).
Most malware sticks a GIF or JPG header in front of a file to escape virus scanners.
The result is a dual executable/graphic file.
Example: http://www.xiaonews.cn/config.gif (don't try this at home)
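A sketch (mine, and deliberately harmless) of why the trick works: a scanner that only sniffs magic bytes sees an image no matter what rides behind the header:

```python
# The dual-file trick in miniature: GIF magic bytes in front, anything after.
gif_header = b"GIF89a"
payload = b"MZ...stand-in bytes, not a real executable..."
blob = gif_header + payload

def sniff(data: bytes) -> str:
    """A naive magic-byte sniffer, like the weakest scanners."""
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    return "application/octet-stream"

print(sniff(blob))  # image/gif
```

A consumer that instead executes or fully parses the blob sees the payload, which is exactly the mismatch the malware exploits.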
Everyone makes the same mistakes (patterns) in exploit code:
Lamest Ransomware Ever (almost ROT-26 encoding)
Other ransomware examples and comments:
ZigBee is a low-power, low-data wireless protocol.
It uses IEEE 802.15.4 and came out in 2004.
Max throughput 250Kb/s,
mesh or star topology,
long battery life (5-year goal), 10-100 m. range,
16 non-overlapping channels.
Uses AES-CCM, but the network key is shared by all devices.
KillerBee is a framework and tool for exploiting ZigBee (and 802.15.4 in general).
Python software with (non-viral) BSD-license.
KillerBee hardware is an AVR RZ Raven USB Stick ($40).
Custom firmware is needed for functions beyond network sniffing (a hardware programmer costs $300).
Tools: zbid (list), zbdump, zbconvert, zbreplay, zbsniff, zbfind (passive sniffer), zbgoodfind (key recovery), zbassocflood.
Wireshark has built-in support for decrypting ZigBee Network (NWK) encryption.
$ sudo zbid
How can this problem (key provisioning) be solved?
No key revocation possible—key burned on hardware.
ZigBee devices will become more common over time,
including on critical technology—the protocol is too attractive to avoid.
Killerbee available on
Portplexd is similar to Apache's mod_rewrite, but for TCP/IP—it remaps and multiplexes TCP/IP ports. It provides (poor) security through obscurity and allows one to bypass firewalls.
Portplexd's configuration file uses regular expressions (PCRE or POSIX-style) to match IPs, ports, and even message payloads.
Portplexd is lightweight, but eats up sockets if it processes HTTP traffic.
Matches per connection (with timeout), not per packet.
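The payload-matching idea can be sketched like this (my illustration; this is not portplexd's actual config syntax):

```python
import re

# Pick a backend port by regex-matching the first bytes of a connection,
# the way a payload-aware port multiplexer routes traffic.
rules = [
    (re.compile(rb"^(GET|POST|HEAD) "), 8080),  # looks like HTTP
    (re.compile(rb"^SSH-2\.0-"), 2222),         # looks like SSH
]

def route(first_bytes: bytes, default: int = 4444) -> int:
    for pattern, port in rules:
        if pattern.match(first_bytes):
            return port
    return default

print(route(b"GET / HTTP/1.1\r\n"))   # 8080
print(route(b"SSH-2.0-OpenSSH_8.9"))  # 2222
print(route(b"\x16\x03\x01"))         # 4444
```

Matching per connection rather than per packet (as the notes say) means the decision is made once, on the first bytes, and then the whole stream is relayed.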
AMF (Action Message Format) is based loosely on SOAP.
"Charles" is a tool used to interpret AMF.
Joel R. Voss, AltSci Concepts
sprintf() is a good place to start in static code analysis.
People always assume sprintf() overflows won't happen to them.
The three big vulnerabilities are:
Real code is complex and has lots of relationships. E.g., "bluez" has 104k lines of code,
349 dereferences, 186 arrays, 85 for() loops, and 641 function calls.
Call graphs don't necessarily help.
You need to detect if something is used after it's destroyed, which isn't obvious from a call graph.
NULL pointer checks (if (var != NULL)) don't solve the problem. They prevent NULL derefs, but not overflows or pointer math (including struct field references).
Joel's plan is to vet the entire project, not just part of the code.
Figure out every relationship in the code.
Q & A. Static code analysis is done on source (not binaries).
Joel's not sure how his software compares with commercial software.
He thinks the commercial software doesn't follow relationships fully.
Joel is involved with vetting one open source project at a time.
This talk is online at
Contempt is a Java/Eclipse program that provides a framework for collecting network information
from multiple "Seed Servers". Contempt installs via webstart.
It supports multiple users and multiple "seeds" scanning separate networks.
The Seed Server is implemented as a Java jar file.
A Seed is a Java object that contains information and exposes methods.
The Contempt GUI lists views on the left and seed servers on the right.
More features would be nice and are planned, such as web spidering.
The statements in this blog are my personal views, not those of my employer.