See the crane in the picture above?
I find it ironic that condos are still being built, adding to the current glut of housing, right in front of the (failed) Merrill Lynch and Washington Mutual buildings (the black and white buildings under the left side of the crane).
Those crazy junk mortgages and financial instruments won't affect me, right?
Well, moving on . . .
ToorCon has two tracks of sessions, so I didn't catch everything, and I didn't take notes for all sessions.
These notes are mine, so I may have missed something or made mistakes.
DVDs of this year's presentations are available from
[Photo: h1kari, ToorCon conference chairman]
The conference chairman, h1kari, recently traveled to India to give a talk at ClubHack.
While there, he visited Alpha Public School,
a Montessori school in New Delhi trying to improve the quality of education for the poor.
He is establishing the ToorCon Foundation to help Alpha and other projects.
So far, the foundation has donated class materials from a local Montessori school.
The school has only one laptop and needs more.
They also need the help of people traveling to India to take donated laptops and class materials to the school.
Another project of the ToorCon Foundation is Project Ecycle.
Ecycle repurposes used laptops. So far, they have donated laptops to
a local Boys Club and to recipients in Vietnam.
Finally, ToorCon is planning a "ToorCamp" Spring 2009.
It will be held in a Titan-1 missile silo 3 hours outside of Seattle.
They will have food, talks, and activities for an estimated $200 admission.
No hippies, no drum circles, no Burning Man-type events—this is a technical get together.
[Photo: Dan Kaminsky at ToorCon 10]
DNS lookup starts at one of 13 root (top level) DNS servers,
where each lower level may be delegated to other DNS server(s).
Replies from the DNS server may be "spoofed" from a fake DNS server
that replies to the client.
A transaction ID prevents spoofing—the reply must match the request.
However, the ID is only 16 bits (0-65535), so there's a 1 in 65,536 chance of guessing the ID correctly.
DNS replies also have a TTL (Time to Live), which is how long the reply may be kept by the client, in seconds.
Low TTLs force a DNS client to make more queries after the cached replies expire.
Early recommendations were to use long TTL values (e.g., 22 days) to reduce the opportunities for spoofing DNS replies.
But TTL is a stopgap and was not intended as a security technology.
Basically, there's a race between a "Good Guy" (a DNS server that gives correct responses) and a "Bad Guy" (which gives spoofed replies).
Besides TTL, latency in replies gives more opportunities for bad replies.
A Bad Guy can reply a few hundred times before the Good Guy's correct answer is received.
This reduces the odds of guessing to, say, 1/655.
If there's, say, 10 lookups for closely-related domain names (such as google.com, 1.google.com, and 3.google.com, etc.), this reduces odds further of guessing (and spoofing) correctly (1/65 in this example).
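The arithmetic above can be sketched with a small probability model. This is a minimal sketch, assuming only the 16-bit transaction ID protects the lookup (fixed source port), and treating each forged reply and each lookup as an independent guess:

```python
# Rough model of DNS cache-poisoning odds: the attacker races the real
# reply with forged ones, and wins if any forged reply matches the
# random 16-bit transaction ID of any outstanding lookup.

def spoof_success_probability(id_bits=16, forged_replies=1, lookups=1):
    """Chance that at least one forged reply matches the random
    transaction ID of at least one of `lookups` outstanding queries."""
    ids = 2 ** id_bits
    p_one_lookup_survives = (1 - 1 / ids) ** forged_replies
    return 1 - p_one_lookup_survives ** lookups

# One blind guess: ~1/65536.
print(spoof_success_probability())
# A few hundred forged replies before the real answer arrives: ~1/655.
print(spoof_success_probability(forged_replies=100))
# Ten related lookups (google.com, 1.google.com, ...): ~1/65.
print(spoof_success_probability(forged_replies=100, lookups=10))
```

The function names and the independence assumption are mine; the talk's 1/655 and 1/65 figures come out of the same back-of-the-envelope math.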
The Bad Guy can also send a false delegation so all further lookups are to the Bad Guy and return bogus delegations and replies.
This technique is called the "DNS Rake".
It works against BIND 8/9, MS DNS, and Nominum DNS servers.
It doesn't work with DJB DNS, PowerDNS, or MaraDNS, but even these can be defeated if they are behind a firewall (and they often are).
"Out of Bailiwick Referrals" refers to DNS servers replying with ANY record (not just the requested DNS record).
This greatly expands opportunities for spoofing.
One can, say, easily create a new fake TLD (e.g., .foo instead of .com), or return fake replies for arbitrary domain names.
New bailiwick rules now prevent this (i.e., ignore unsolicited replies).
But this is easily defeated—many DNS queries can be forced (e.g., from a web browser or mail server).
Mail servers do lots of DNS lookups (for delivery, spam filter, and even SMTP HELO).
Takeaway #1: "Protocols Cannot Be Understood In Isolation".
E.g., trusted hosts resolve DNS from untrusted hosts all the time.
It's not a single protocol that's under attack, but the system.
Here's another problem: GetHostByAddr() is used by clients to do reverse DNS lookups.
The request goes to the attacker, and this allows the attacker to corrupt the system.
Takeaway #2: Everything You Do Can Be Used Against You.
People still firewall with router ACLs.
It's a cheap and easy way to firewall, because it does stateless, blind filtering by IP ranges and ports.
The right way is to do stateful tracking, but this is expensive and error prone.
Stateless filtering works pretty well, but its main vulnerability is spoofing.
TCP is relatively spoof-resistant with its large sequence numbers.
DNS, with its 16-bit transaction IDs, is an exception: it is relatively easy to spoof
because of this and because of repeated, recursive lookups.
A typical attack is to spoof replies from a neighboring DNS server.
Takeaway #3: If It's Stupid And It Scales, It Isn't Stupid.
Halfway solutions often just work (e.g., router ACLs).
Often "correct" technical solutions do NOT scale (e.g., DJB DNS uses 24-bit transaction IDs, giving 163,840,000:1 odds against guessing, but it doesn't scale).
Many TTLs are too low: e.g., Facebook TTL 30 sec. or Google Analytics TTL 300 sec.,
forcing repeated DNS lookups.
Also, IDS systems often block or slow down replies if they see a DNS attack. This makes attacks easier as it gives more time for a spoof attack.
You can slow down the real DNS server at the same time you're spoofing it.
It is often easier to spoof sub-domains than top level domain (e.g., 1.google.com instead of google.com).
Takeaway #4: You can't just solve 99% of the problem if the remaining 1% is VERY important.
One proposed solution is to always hold a DNS answer (NS record) for the entire TTL (ignoring updates).
But this is not realistic—it will cause outages when IP addresses change.
IT admins will NOT risk their jobs doing this.
Takeaway #5: (Realistically) it's more important for the network to work than to be secure.
For the last (severe) DNS BIND vulnerability, there's only a 70% patch rate, even though the only problem the patch caused was overloading a few heavily-used servers.
We need DNS solutions short of DNSSEC.
DNS servers need to be smarter.
They should detect attacks (e.g., notice imbalances between requests sent and responses received).
They should defend only against the specific names being attacked.
That is, do some kind of "extra validation".
One approach is the "0x20 defense": use mixed case in queries (e.g., wWw.GooGle.cOM).
DNS is case-ignoring but case-preserving, so you accept only responses with the exact (randomized) case used in the query.
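The 0x20 defense can be sketched in a few lines. This is a toy illustration, not a resolver implementation; the function names are mine:

```python
import random

# Sketch of the "0x20" defense: randomize the case of each letter in
# the query name, then accept a reply only if it echoes the exact same
# case pattern. Because DNS matching is case-insensitive but servers
# preserve case, each letter adds about one bit a spoofer must guess.

def encode_0x20(name, rng=random):
    """Randomly flip each letter of a DNS name to upper or lower case."""
    return ''.join(c.upper() if rng.random() < 0.5 else c.lower()
                   for c in name)

def reply_matches(query_name, reply_name):
    """Reject any reply that doesn't echo the randomized case exactly."""
    return query_name == reply_name

query = encode_0x20("www.google.com")    # e.g. "wWw.GooGle.cOM"
assert reply_matches(query, query)       # honest echo is accepted
print(query)
```

A forged reply that guesses only the all-lowercase name would be rejected unless the attacker also guesses the case pattern.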
Double querying usually works, but some large name servers often don't reply with the same names (e.g., Akamai, Google, Facebook).
For DNS exception handling, notice, don't ignore, exceptions.
Takeaway #6: Elegance is less important than coverage
Be as elegant as possible, but no more.
The real world requires solutions that don't break Google.
DNS clients: DNS servers get a lot of attention, but DNS clients have the same problem.
The DNS client transaction ID is usually sequential and easily guessed.
Ports can easily be discovered and held open once found.
Email is a special case: it has its own DNS record and no security.
DNS MX records, used to identify email servers, allow email attacks.
Attackers can pollute messages or simply watch email.
Takeaway #8: Code that's never been attacked is usually remarkably fragile.
Web browsers/servers are very secure, but other clients are not secure, such as auto-software-update clients.
Problems with SSL:
SSL doesn't scale well.
Also, we really don't know if SSL is secure because of crypto or just because nobody bothered.
SSL is not widely deployed.
Takeaway #7: Never forget the human factor
SSL has many problems.
SSL has no virtual hosting support.
SSL certificate handling is fragile.
Users often ignore security alerts (e.g., DNS mismatches), even when banking.
SSL logins often start at insecure sites.
E.g., a login that starts at http://www.paypal.com/ and goes to SSL at https://www.paypal.com.
Many NON-web browser clients don't even validate SSL certs (fragile code).
42% of SSL certs for non-browser clients are self-signed (e.g., SSL VPN).
Many SSL certs are signed with MD5, which is known to be weak.
SSL revocation is a myth—browsers barely check, and most non-browsers don't even bother.
There are many badly-generated Debian/Ubuntu SSL certs out there still being used
(a typical SSL cert lifespan is 5 years).
SSL certificate acquisition itself requires DNS (a weak spot).
Scripting on a SSL page can still use non-SSL (http).
Takeaway #9: No bug is so good, that another bug can't make it better.
E.g., Nimda combined email/share pollution, IIS exploits, and browser bugs.
EV SSL certs are not the answer, and they cost $$$$.
EV is a display mechanism, not a security mechanism.
Why is the web broken with passwords?
Most secure web sites have a "Forgot my password" link
(e.g., Google, Live, Yahoo, PayPal, eBay, MySpace, Facebook, LinkedIn, Bebo, Craigslist).
They use email, which is NOT secure, to deliver the password.
Is OpenID a solution?
No—it uses DNS again (weak).
What about OpenID with SSL?
There's still a problem with bad certs (e.g., Debian-generated OpenSSL certs).
Takeaway #10: Flawed Authentication is the unifying theme of 2008's major bugs (auth/encryption weaknesses)
We are failing big time (e.g., DNS, SSL VPN, PRNG, OpenID, package updates, package managers without auth).
SNMPv3 flaw. SNMPv3 is a challenge-response auth, via HMAC.
It has a bug where the SNMPv3 client only needs to get 1 byte right to break it
(a 1 in 256 chance).
The attack spoofs UDP packets.
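The flaw can be illustrated with a toy check. This is a minimal sketch of the truncation bug described above, not real SNMPv3 code; the key and function names are mine:

```python
import hashlib
import hmac
import secrets

# Sketch of the SNMPv3 HMAC truncation flaw: the vulnerable check
# trusted the *sender's* HMAC length, so an attacker could send a
# 1-byte HMAC and only needed to guess that one byte (1 in 256).

KEY = secrets.token_bytes(16)   # hypothetical shared auth key

def vulnerable_verify(message, supplied_mac):
    expected = hmac.new(KEY, message, hashlib.md5).digest()
    # BUG: truncate the expected MAC to whatever length the sender chose.
    return hmac.compare_digest(expected[:len(supplied_mac)], supplied_mac)

msg = b"snmp-get sysDescr.0"
# Attacker brute-forces a single-byte MAC: at most 256 tries.
for guess in range(256):
    if vulnerable_verify(msg, bytes([guess])):
        print("forged packet accepted with 1-byte MAC:", guess)
        break
```

The fix is to require the MAC length the local configuration demands, not the length the packet claims.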
Takeaway #11: No bug so good, that another bug cannot make it better
[Photo: Ben Feinstein]
The Debian OpenSSL vulnerability was discovered by Luciano Bello in 2008.
The problem was a lack of PRNG entropy in Debian OpenSSL.
The vulnerability existed since 2006-09-17, migrated to the "stable" release, and affects distributions downstream from Debian (e.g., Ubuntu).
Debian created a tool, ssh-vulnkey, to search for these bad keys; it should be used elsewhere.
Ben created a "Snort Dynamic Preprocessor", released at DEFCON 16 and building on work from others.
It analyzes packets captured with tcpdump (in pcap format).
It works in part by linking in and using the vulnerable Debian OpenSSL library to brute force packets.
It uses version strings to focus just on vulnerable SSH servers and clients.
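Because the broken PRNG could only ever emit a small, enumerable set of keys, weak-key detection reduces to a fingerprint lookup. Here is a miniature sketch of that idea; the blocklist entry and function names are hypothetical, not the real ssh-vulnkey data:

```python
import hashlib

# How a checker like Debian's ssh-vulnkey works, in miniature: every
# weak key is known in advance (there are only ~32k per key type/size),
# so detection is a lookup against a precomputed fingerprint blocklist.

def fingerprint(pubkey_blob: bytes) -> str:
    """MD5 fingerprint of a public key blob, in colon-separated form."""
    digest = hashlib.md5(pubkey_blob).hexdigest()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

# The real tool ships the fingerprints of all generatable weak keys;
# this single made-up entry stands in for that list.
WEAK_FINGERPRINTS = {
    fingerprint(b"hypothetical weak key blob"): "example weak RSA key",
}

def is_weak(pubkey_blob: bytes) -> bool:
    return fingerprint(pubkey_blob) in WEAK_FINGERPRINTS

print(is_weak(b"hypothetical weak key blob"))    # True: on the blocklist
print(is_weak(b"ssh-rsa AAAAB3 fresh key"))      # False: not known-weak
```

The Snort preprocessor applies the same enumerability: with so few possible keys, brute-forcing captured traffic against all of them is feasible.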
[Photo: Ariel Waissbein]
You can "play with toys (tools)": scan systems periodically and archive the results.
After you learn about a threat, check for changes.
Keep installers and updates, and take snapshots of software.
Run the exploit and look for changes.
"Insight" is a tool that simulates attacks against computer networks, using a pen-testing framework model.
Copy exploit scenarios to your simulator to determine potential impact of an exploit.
Scenario includes network device, OS/version, filesystem, & vulnerabilities.
Not simulated to full detail—only some syscalls are simulated.
Nick and Eric will talk about setting up sustainable community wireless networks in DC.
They are cofounders of HacDC, and both are CHNN (Columbia Heights Neighborhood Network) Community Coordinators.
The network is not up yet, but it will use a network in NYC as a model.
Initial community wireless networks are "accidental", ad-hoc, and short-lived.
German "Freifunk" ("free radio") networks are active mesh networks
provided by people with bandwidth.
Most popular protocols to allow computers to migrate among multiple APs:
OLSR finds a path to the "cloud" (multiple exit points to the Internet).
It's usually easy to set up and great for multiple end points.
But it doesn't do QoS checking, and it's not useful here as we only have one exit point.
Hard to manage, but getting better.
WDS has lots of overhead and works only with a single exit point,
but it's easy to set up and manage.
Community wifi gets lots of different people excited and you get lots of participation.
There's a big failure rate because: nobody pays for bandwidth, nobody fixes it,
non-technical users and stakeholders aren't engaged, the network isn't improved,
and it gets eclipsed by commercial efforts (DSL, cable).
Phase I: build a stable base.
Physical base: rewire a church building (old building with dead wire).
Funding base: from lowering costs, economy of scale.
People base: make friends and provide training, service, and support.
Phase II: Community through technology.
Network builds around community wireless, outside church building.
Train during community events for users within network range,
and provide hardware for extension.
Routers run OpenWrt with WDS and are powered with PoE (Power over Ethernet).
Phase III: Extend the Network.
"If you build it, they will come."
Encourage experimental extensions to main network.
Place nodes where needs exist and encourage extensions.
[Photo: Joe McCray]
Injection types (in order of preference):
Inband: you can learn data types from errors.
Out-of-band: try to push results to your own server (web site, DNS, email, etc.).
Inferential: if you can't see errors, it's "Blind SQL Injection";
results must be reverse-engineered.
Automated SQL injection tools (e.g., vulnerability scanners) are not very useful.
Manual testing is usually more useful—you can use multiple methods.
MS SQL is the easiest to inject (easier than Oracle).
MySQL is closer to MS SQL, but doesn't have an error-based injection technique available.
MS SQL has more built-ins, is more flexible, and has fewer security mechanisms.
For privilege escalation, you can add, say, a ping that runs for 8 seconds to see if your injection is being executed.
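The timing trick above is the heart of time-based blind SQL injection. This is a self-contained sketch with a simulated vulnerable endpoint standing in for a real server (the payload syntax is MS SQL's WAITFOR DELAY; the function names are mine):

```python
import time

# Time-based blind SQL injection in miniature: inject a payload that
# makes the database pause (WAITFOR DELAY on MS SQL, SLEEP() on MySQL)
# and infer execution purely from the response time.

def vulnerable_server(user_input):
    """Simulated endpoint: if the injected delay 'executes', it stalls."""
    if "WAITFOR DELAY" in user_input:
        time.sleep(0.5)          # stand-in for the talk's 8-second delay
    return "200 OK"

def is_injectable(payload, threshold=0.4):
    start = time.monotonic()
    vulnerable_server(payload)
    return time.monotonic() - start > threshold

print(is_injectable("alice"))                            # False
print(is_injectable("alice'; WAITFOR DELAY '0:0:8'--"))  # True
```

No output from the query is needed at all, which is why this works even when errors are suppressed.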
Bypass alphanumeric filtering: e.g., "like" can replace "=".
Can bypass signature-based IDS with 1=1, encoded comment, or string injection.
NTLM is a MS Windows challenge-response auth protocol in which the server validates credentials with a separate Domain Controller (DC).
With Kerberos, OTOH, both the server and client authenticate with the Kerberos Domain Controller (KDC).
NTLM is ubiquitous—it's even on the iPhone.
NTLM seems strong, so there's no inducement to replace it with Kerberos.
NTLM is supported over everything: HTTP, IMAP, POP3, SMTP, NNTP, etc.
NTLM has a weakness—authentication is one-time when the Microsoft IE domain is set to Intranet (internal or local network).
grutz demoed a tool he wrote, Squirtle, that takes advantage of this weakness by serving as a rogue DC that signs you in to the domain.
It assumes you are in the local network domain, as in a corporate intranet, where auth is less strict.
Kerberos is not vulnerable to rogue DCs—NTLM is vulnerable.
But, unfortunately, you cannot force a Windows client to avoid NTLM and use only Kerberos—it can always fall back to NTLM.
With Google SOAP search API, you're limited to queries of 10 words/2K, 1K queries/day, and 999 results.
API: doGoogleSearch, doGoogleSearchResponse.
The problem is that new Google API keys are no longer released—you have to get existing keys.
Take results from Google search, build a list of domains/ports (e.g., 80, 8080, etc.).
Call nmap using this list.
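The search-to-scan pipeline can be sketched briefly. This is an illustrative sketch only: the hostnames are placeholders, and the scan itself is left commented out since running nmap requires the tool installed and authorization to scan:

```python
import subprocess

# Sketch of the Google-to-nmap pipeline: take hostnames harvested from
# search results, pair them with likely web ports, and hand the list
# to nmap as one command.

def build_nmap_command(hosts, ports=(80, 8080, 8000, 443)):
    """Build an nmap invocation scanning the given ports on all hosts."""
    port_list = ','.join(str(p) for p in ports)
    return ["nmap", "-Pn", "-p", port_list, *hosts]

harvested = ["www.example.com", "mail.example.com"]   # from search results
cmd = build_nmap_command(harvested)
print(' '.join(cmd))
# To actually run the scan:
# subprocess.run(cmd, check=True)
```

Keeping the command construction separate from execution also makes the target list easy to archive alongside the results.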
Marc's talk is about brute-force attacks on crypt() password hashes using the PS3.
UNIX crypt is 30 years old!
Even today, it is still the default for Solaris and Solaris Nevada [OpenSolaris] (see CRYPT_DEFAULT in /etc/security/policy.conf).
UNIX crypt builds a 56-bit key from the password (up to 8 chars) + 12 bit salt.
Best brute force tool is John the Ripper.
This tool uses Bitslice DES implemented by Dr. Eli Biham in 1997.
DES uses 8 S-Boxes, where each S-Box takes 6-bits input and produces 4-bits output.
The DES S-Boxes can be represented as logical instructions on 1-bit values.
With this representation, one can perform N encryptions/decryptions in parallel on N-bit processors.
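The bitslicing idea can be shown with a toy example. This is not DES; it slices a made-up 2-input boolean function across 8 lanes packed into one integer, purely to show how one sequence of logic ops evaluates many inputs at once:

```python
# Bitslicing in miniature: pack one bit from each of N independent
# inputs into bit position i of a word, then a single sequence of
# AND/OR/XOR operations evaluates the function for all N inputs at
# once. Bitslice DES does exactly this with the S-box logic.

def f_scalar(a, b):
    """Toy boolean function on single bits."""
    return a ^ (a & b)

def f_bitsliced(a_word, b_word):
    """Identical logic, but each bit position is an independent lane."""
    return a_word ^ (a_word & b_word)

def pack(bits):
    """Pack a list of 0/1 lane values into one integer."""
    return sum(bit << i for i, bit in enumerate(bits))

lanes_a = [1, 0, 1, 1, 0, 0, 1, 0]
lanes_b = [1, 1, 0, 1, 0, 1, 0, 0]

result = f_bitsliced(pack(lanes_a), pack(lanes_b))
unpacked = [(result >> i) & 1 for i in range(8)]
# All 8 lanes agree with evaluating the function one input at a time.
assert unpacked == [f_scalar(a, b) for a, b in zip(lanes_a, lanes_b)]
print(unpacked)
```

On a 128-bit SPU register the same trick evaluates 128 DES instances per pass, which is where the PS3 numbers below come from.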
The PS3 processor is the Cell B.E.
This processor has 7 usable cores ("SPUs"), each 128 bits wide—very good for using bitslice DES to crack UNIX crypt().
The rest of DES besides the S-boxes could also be optimized, but that hasn't been done yet.
Brute force speed is 11.1 million passwords/second (on PS3 with 6 SPUs at 3GHz).
This is better than the current best implementation.
Also the best in performance/watt (now that hardware is cheap, energy still costs $$$).
Full printable password space is 95**8 + 95**7 + ... + 95**0 (lengths 0 through 8).
Using 50 PS3s (costing $20K) will take 70 days on average to find any password.
The cost in energy (at 10 cents/kWh) is $1100 on average.
This brute-forcing S-box tool will be released next week as open source at
Chris uses a black-box approach to information gathering.
Start with just a domain name.
Find out what's there: web servers, mail servers, DNS, etc.
You want external/internal IPs and network information.
Fierce DNS is a brute-force tool to get information from DNS.
It brute-forces DNS names and finds subnets.
SEAT (Search Engine Assessment Tool) uses Google to get information.
Similar tool is Goolag.
Goog-mail.py, theHarvester.py, and other tools harvest the domain's email addresses using Google.
Very useful to find naming conventions in email addresses.
Extract metadata from document files: user names, dates, times, path names, phone numbers, software versions.
Online web-based tools are really good because they use their IP address, not yours, for pen testing:
Maltego ties together information in a relationship chart (people, web sites, organizations, etc.).
Start with a hostname; it shows related domains that use the same DNS server.
Shows hostnames under a domain.
Looks up net blocks and looks up names for IP addresses.
Maltego harvests documents and parses out metadata.
Finds shared domains.
Shows incoming links to a web page.
Sorts relationships in different ways.
Maltego has a free version and a $430 version—the pay version is worth it.
This relationship stuff is good for social engineering.
Software versions especially useful for MS Office document attacks.
[Photo: Thomas Ristenpart]
This academic research project focuses on understanding threats, and then building privacy-preserving device tracking—
i.e., tracking functionality plus location privacy.
The project is named Adeona, after the Roman goddess of safe returns.
An adversary can eavesdrop on tracking information.
Cached updates reveal past locations, not just the present.
Threats include intrusions, subpoenas, and insider abuse.
The design is a lightweight client with a location module and the Adeona core, which does encryption.
Another client can retrieve the location information.
The first strawman idea was to use off-the-shelf encryption.
Issue 1: how to retrieve.
Issue 2: privacy of past locations.
The solution uses anonymous, unlinkable, forward-private updates.
Uses a forward-secure pseudorandom generator (FSPRG).
Appears as random bits to eavesdropper.
Past locations before a given time can't be decrypted,
so you can decrypt location information only from the time the device went missing.
Also, time information that's too fine-grained can reveal identity.
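The forward-privacy property above comes from a state-ratcheting construction. This is a toy sketch of the general FSPRG idea, not Adeona's actual construction; the seed and labels are hypothetical:

```python
import hashlib

# Sketch of a forward-secure pseudorandom generator (FSPRG): each step
# derives (output, next_state) from the current state with a one-way
# hash, then discards the old state. Seizing the device at time t
# reveals only state_t; because hashing is one-way, none of the earlier
# outputs used to encrypt past locations can be recovered.

def fsprg_step(state: bytes):
    output = hashlib.sha256(b"out" + state).digest()
    next_state = hashlib.sha256(b"next" + state).digest()
    return output, next_state

state = hashlib.sha256(b"device seed").digest()   # hypothetical seed
outputs = []
for _ in range(3):
    out, state = fsprg_step(state)
    outputs.append(out.hex()[:16])   # per-update pseudorandom material

print(outputs)   # each looks random; old states are unrecoverable
```

The owner, who kept the original seed, can re-derive every output and so decrypt any update; an eavesdropper or a thief holding only the current state cannot go backwards.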
Remote server storage implemented with OpenDHT (Distributed Hash Table).
Location data is left encrypted on the remote server.
It was first released publicly a few months ago (Windows, OS X, Linux).
Hoping this app will take on a life of its own.
Privacy-preserving tracking resolves the fundamental tension
between gathering forensic data and user privacy.
The Adeona core is a crypto engine with wide applicability.
The Adeona system is an immediately deployable privacy-preserving tracking solution.