ToorCon 10 Computer Security Conference

ToorCon 10

San Diego, California Gaslamp District from the Convention Center

Intro

ToorCon 10 is the 10th annual computer security conference held in San Diego—this is the 6th one I've attended. It's held at the Convention Center on the bay front, with a view of the downtown Gaslamp District.

See the crane in the picture above? I find it ironic that condos are still being built, adding to the current glut of housing, right in front of the (failed) Merrill Lynch and Washington Mutual buildings (the black and white buildings under the left side of the crane). Those crazy junk mortgages and financial instruments won't affect me, right? Ha ha. Well, moving on . . .

ToorCon runs two session tracks, so I didn't catch everything, and I didn't take notes for every session. These notes are mine, so I may have missed something or made mistakes. DVDs of this year's presentations are available from http://www.mediaarchives.com/

"ToorCon Foundation" by h1kari

h1kari, ToorCon conference chairman

The conference chairman, h1kari, recently traveled to India to give a talk at ClubHack. While there, he visited Alpha Public School, a Montessori school in New Delhi trying to improve the quality of education for the poor. He is establishing the ToorCon Foundation to help Alpha and other projects. So far the foundation has donated class materials from a local Montessori school. The school has only one laptop and needs more; it also needs people traveling to India to carry donated laptops and class materials to the school.

Another ToorCon Foundation project is Project Ecycle, which repurposes used laptops. So far, they have donated laptops to a local Boys Club and to recipients in Vietnam.

ToorCamp

Finally, ToorCon is planning a "ToorCamp" for Spring 2009. It will be held in a Titan-1 missile silo three hours outside of Seattle. They will have food, talks, and activities for an estimated $200 admission. No hippies, no drum circles, no Burning Man-type events—this is a technical get-together.


"Keynote: Black Ops of DNS 2008: It's The End of the Cache As We Know It" by Dan Kaminsky

Dan Kaminsky at ToorCon 10

Dan Kaminsky was the keynote speaker. He's given talks at ToorCon before and is always entertaining. He's most recently famous for finding the serious DNS (Domain Name System) cache vulnerabilities, and his keynote covered current DNS problems and the general "takeaways" learned from them.

A DNS lookup starts at one of 13 root (top-level) DNS servers, and each lower level may be delegated to other DNS servers. Replies may be "spoofed" by a fake DNS server that answers the client before the real one does. A transaction ID prevents spoofing—the reply must match the request. However, the ID is only 16 bits (0-65535), so a single forged reply has a 1-in-65,536 chance of matching.

DNS replies also carry a TTL (Time To Live), the number of seconds a client may cache the reply. Low TTLs force a DNS client to make more queries as cached replies expire, so early recommendations were to use long TTLs (e.g., 22 days) to reduce the opportunities for spoofing. But TTL is a stopgap; it was never intended as a security technology. Basically, there's a race between a "Good Guy" (a DNS server that gives correct responses) and a "Bad Guy" (which gives spoofed replies). Besides TTL, latency in replies gives more opportunities for bad replies: a Bad Guy can send a few hundred forged replies before the Good Guy's correct answer arrives, improving the odds of a matching transaction ID to, say, 1 in 655. If the Bad Guy can force, say, 10 lookups for closely related domain names (google.com, 1.google.com, 3.google.com, etc.), the odds improve further (to 1 in 65 in this example). The Bad Guy can also send a false delegation so all further lookups go to the Bad Guy, which returns bogus delegations and replies. This technique is called the "DNS Rake". It works against BIND 8/9, MS DNS, and Nominum DNS servers. It doesn't work against DJB DNS, PowerDNS, or MaraDNS, but even these can be defeated when they sit behind a firewall (and they often do).
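
To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python. The reply and lookup counts are my illustrative assumptions, not figures from the talk:

    # Odds that at least one spoofed reply matches the 16-bit transaction ID.
    TXID_SPACE = 2 ** 16                  # 65,536 possible IDs

    def odds_of_winning(replies_per_race=100, races=10):
        p_lose_one_race = 1 - replies_per_race / TXID_SPACE
        return 1 - p_lose_one_race ** races

    print(f"1 race, 1 reply:       {1 / TXID_SPACE:.6f}")           # ~1/65536
    print(f"1 race, 100 replies:   {odds_of_winning(100, 1):.6f}")  # ~1/655
    print(f"10 races, 100 replies: {odds_of_winning(100, 10):.6f}") # ~1/65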

"Out of Bailiwick Referrals" is a concept that DNS servers can reply with ANY record (not just the requested DNS record). This greatly expands opportunities for spoofing. One can, say, easily create a new fake TLD (e.g., .foo instead of .com), or return fake replies for arbitrary domain names. New Bailiwick Rules now prevents this (i.e., ignore unsolicited replies). But this is easily defeated—many DNS queries can be forced (e.g., from a web browser or mail server). Mail servers do lots of DNS lookups (for delivery, spam filter, and even SMTP HELO).

Takeaway #1: "Protocols cannot be understood in isolation." E.g., trusted hosts resolve DNS names supplied by untrusted hosts all the time. It's not a single protocol that's under attack, but the system. Here's another problem: GetHostByAddr() is used by clients to do reverse DNS lookups. That request effectively goes to whoever controls the address's reverse zone (possibly the attacker), and this allows the attacker to corrupt the system.
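
A minimal sketch of the defensive pattern this suggests: forward-confirmed reverse DNS, where the name returned for an IP is trusted only if it resolves back to that IP. (The mitigation is my illustration, not something from the talk.)

    import socket

    def forward_confirmed_name(ip):
        # The PTR answer is controlled by whoever runs the reverse zone
        # for this IP -- possibly the attacker.
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
        forward_ips = {ai[4][0] for ai in socket.getaddrinfo(name, None)}
        return name if ip in forward_ips else None   # reject mismatches

    print(forward_confirmed_name("8.8.8.8"))          # e.g., dns.google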

Takeaway #2: Everything you do can be used against you. People still firewall with router ACLs. It's a cheap and easy way to firewall, because it does stateless, blind filtering by IP range and port. The "right" way is stateful tracking, but that is expensive and error prone. Stateless filtering works pretty well; its main vulnerability is spoofing. TCP is relatively spoof-resistant thanks to its large (32-bit) sequence numbers. DNS, with its 16-bit transaction IDs, is an exception: it is relatively easy to spoof because of the small ID space and because of repeated, recursive lookups. A typical attack spoofs replies from a neighboring DNS server.

Takeaway #3: If It's Stupid And It Scales, It Isn't Stupid. Halfway solutions often just work (e.g., router ACLs), while "correct" technical solutions often do NOT scale (e.g., DJB DNS randomizes the source port as well as the transaction ID, giving odds of roughly 163,840,000:1 against guessing, but it doesn't scale).

Many TTLs are too low: e.g., Facebook's TTL is 30 seconds and Google Analytics' is 300 seconds, forcing repeated DNS lookups. Also, IDS systems often block or slow down replies when they see a DNS attack. This actually makes attacks easier, since it gives more time for a spoof attack: you can slow down the real DNS server at the same time you're spoofing it. It is also often easier to spoof subdomains than top-level names (e.g., 1.google.com instead of google.com). Takeaway #4: You can't just solve 99% of the problem if the remaining 1% is VERY important.

Proposed DNS ground rules

  • Secure all names, not just most
  • Secure all authoritative (primary) name servers
  • DNSSEC can be used, but it's not a solution yet
  • Must secure the client's path to the root and .com servers

One proposed solution is to always hold a DNS answer (NS record) for its entire TTL, ignoring updates. But this is not realistic—it will cause outages when IP addresses change, and IT admins will NOT risk their jobs doing this. Takeaway #5: Realistically, it's more important for the network to work than for it to be secure. For the last (severe) DNS BIND vulnerability, there's only about a 70% patch rate, even though the only problem the patch caused was overload on a few heavily used servers.

We need DNS solutions short of DNSSEC, and DNS servers need to be smarter. They should detect attacks (e.g., notice imbalances between requests sent and responses received) and defend only the specific names being attacked, by doing some kind of extra validation. One example is the "0x20 defense": use mixed case in the query name (e.g., wWw.GooGle.cOM). DNS is case-insensitive but case-preserving, so you accept only responses that echo the exact (randomized) case used in the query. Double querying usually works too, though some large name servers don't reply with the same answers each time (e.g., Akamai, Google, Facebook). For DNS exception handling: notice exceptions, don't ignore them. Takeaway #6: Elegance is less important than coverage. Be as elegant as possible, but no more; the real world requires solutions that don't break Google.
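
Here's a minimal sketch of the 0x20 idea in Python (my illustration of the technique, not Kaminsky's code):

    import random

    def encode_0x20(name):
        # Randomize the case of each letter; honest servers echo it back.
        return "".join(c.upper() if random.random() < 0.5 else c.lower()
                       for c in name)

    query_name = encode_0x20("www.google.com")   # e.g., wWw.GooGle.cOM

    def response_acceptable(response_name):
        # Case-SENSITIVE comparison: a spoofer who guesses the TXID must
        # also guess the per-letter casing.
        return response_name == query_name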

DNS clients: DNS servers get a lot of attention, but DNS clients have the same problems. The client's transaction ID is usually sequential and easily guessed, and its ports can easily be discovered and then held open once found.

Email is a special case: it has its own DNS record type and no security. MX records, used to identify mail servers, allow email attacks: an attacker can pollute messages or simply watch email.

Locking down privileges doesn't work for DNS; users often conspire against you to get around the restrictions. SPF is not a solution for email, either—SPF is a DNS record, so it can also be spoofed.

Takeaway #7: Code that's never been attacked is usually remarkably fragile. Web browsers and servers are comparatively secure because they're attacked constantly, but other clients are not secure, such as automatic software-update clients.

Problems with SSL: SSL doesn't scale well, and it is not widely deployed. Also, we really don't know whether SSL is secure because of its crypto or just because nobody has bothered to attack it. Takeaway #8: Never forget the human factor.

SSL has many problems:

  • No virtual hosting support
  • Certificate handling is fragile, and users often ignore security alerts (e.g., name mismatches), even when banking
  • SSL logins often start at insecure sites: e.g., you log in at http://www.paypal.com/, which submits to SSL https://www.paypal.com
  • Many NON-browser clients don't even validate SSL certs (fragile code); see the sketch after this list
  • 42% of SSL certs for non-browser clients are self-signed (e.g., SSL VPNs)
  • Many certs are signed with MD5, which is known to be weak
  • Revocation is a myth: browsers barely check, and most non-browsers don't bother
  • Many badly generated Debian/Ubuntu SSL certs are still in use (a typical cert lifespan is 5 years)
  • Certificate acquisition itself requires DNS (a weak spot)
  • Scripting on an SSL page can still use non-SSL (http) resources
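
As a sketch of the non-browser validation problem, here's what fragile client code effectively does with Python's ssl module versus what it should do (the hostname is a placeholder of mine):

    import socket, ssl

    host = "www.example.com"                      # placeholder host

    # What much fragile client code effectively does: no chain or
    # hostname checks at all.
    unverified = ssl._create_unverified_context()

    # What clients should do: verify the chain and the hostname.
    verified = ssl.create_default_context()

    with socket.create_connection((host, 443)) as sock:
        with verified.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version(), tls.getpeercert()["subject"])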

Takeaway #9: No bug is so good that another bug can't make it better. E.g., Nimda combined email, open-share, and IIS exploits with browser bugs. EV SSL certs are not the answer, and they cost $$$$$: EV is a display mechanism, not a security mechanism.

Password problems: Why is the web broken with passwords? Most "secure" web sites have a "Forgot my password" link (e.g., Google, Live, Yahoo, PayPal, eBay, MySpace, Facebook, LinkedIn, Bebo, Craigslist), and they use email, which is NOT secure, to deliver the password. Is OpenID a solution? No: it relies on DNS again (weak). What about OpenID with SSL? There's still a problem with bad certs (e.g., the Debian-generated OpenSSL keys).

Takeaway #10: Flawed authentication is the unifying theme of 2008's major bugs (authentication/encryption weaknesses). We are failing big time (e.g., DNS, SSL VPNs, PRNGs, OpenID, package updates, package managers without authentication). Consider the SNMPv3 flaw: SNMPv3 does challenge-response authentication via HMAC, and it had a bug where the client only needed to get 1 byte of the HMAC right (a 1-in-256 chance) to break it; the attack spoofs UDP packets. Takeaway #11: No bug is so good that another bug cannot make it better.
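
A toy model of the SNMPv3 bug (simplified; not the actual SNMPv3 wire format): if the receiver honors an attacker-chosen HMAC length and compares only the truncated value, a 1-byte forgery succeeds within 256 tries.

    import hmac, hashlib, os

    key = os.urandom(16)                  # secret known only to real parties
    message = b"set sysContact evil"

    def receiver_accepts(msg, mac_claimed):
        real = hmac.new(key, msg, hashlib.sha1).digest()
        n = len(mac_claimed)              # BUG: attacker controls the length
        return hmac.compare_digest(real[:n], mac_claimed)

    for guess in range(256):              # brute-force the single byte
        if receiver_accepts(message, bytes([guess])):
            print("forged with a 1-byte HMAC:", guess)
            break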

"Loaded Dice: SSH Key Exchange & the OpenSSL PRNG Vulnerability" by Ben Feinstein

Ben Feinstein

Ben talked about key exchange in SSL and SSH2 and the OpenSSL PRNG vulnerability (not about solving the discrete logarithm problem). The weakness of symmetric-key crypto is the shared secret/pre-shared key (PSK), as opposed to public-key crypto: key management and key exchange are a common, easy attack point for symmetric-key crypto. Hellman and Diffie came up with sharing a secret over an insecure channel in 1976, similar to Merkle's independent work of the same era.

The Debian OpenSSL vulnerability was discovered by Luciano Bello in 2008. The problem was a lack of PRNG entropy in Debian's OpenSSL package. The vulnerability existed since 2006-09-17, migrated to the "stable" release, and affects distributions downstream from Debian (e.g., Ubuntu). Debian created a tool, ssh-vulnkey, to search for these bad keys; it should be used elsewhere too. Ben created a Snort dynamic preprocessor, released at DEFCON 16 and building on work from others. It analyzes packets captured with tcpdump (in pcap format), and it works in part by linking in the vulnerable Debian OpenSSL library and using it to brute-force keys from captured packets.

The protocol first exchanges SSH version strings. The server replies with its Diffie-Hellman key-exchange parameters and its supported MAC, encryption, and compression algorithms, and the client replies similarly. The exchange is encrypted and NUL-padded to thwart traffic analysis. In the Diffie-Hellman exchange, the client picks a random secret a and sends (g^a mod p) to the server; the server picks a random secret b and replies with (g^b mod p), where p is a "safe" large prime and g is a public generator. The shared secret is (g^(ab) mod p). An eavesdropper sees g, (g^a mod p), and (g^b mod p), but learns neither a nor b. Rekeying can be requested at any time (once per GB or per hour is recommended).
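
Here's a toy sketch of that exchange. The prime is tiny and for illustration only; real SSH uses standardized groups with primes of 1024 bits and up:

    import secrets

    p = 2**61 - 1    # small Mersenne prime: a TOY value, not secure
    g = 2            # public generator

    a = secrets.randbelow(p - 2) + 1    # client's secret
    b = secrets.randbelow(p - 2) + 1    # server's secret

    A = pow(g, a, p)    # client -> server: g^a mod p
    B = pow(g, b, p)    # server -> client: g^b mod p

    # Both sides derive the same shared secret g^(ab) mod p; an
    # eavesdropper sees only A and B.
    assert pow(B, a, p) == pow(A, b, p)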

With the vulnerable Debian OpenSSL library, a or b is completely predictable by brute force, based on the OpenSSH process PID. Observers can also easily tell, without any decryption, whether the client or server is using the vulnerable Debian OpenSSL library.
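
To see why that's fatal, here's a toy model (mine, not the actual tool): if the only "entropy" is the process PID, the secret space collapses to at most 32,768 candidates, each testable against an observed public value.

    import random

    p, g = 2**61 - 1, 2                  # same toy group as above

    def keygen(pid):                     # all randomness reduced to the PID
        return random.Random(pid).randrange(1, p - 1)

    observed = pow(g, keygen(4242), p)   # victim's g^a mod p, seen on the wire

    for pid in range(1, 32768):          # classic Linux PID space
        if pow(g, keygen(pid), p) == observed:
            print("recovered the secret: PID was", pid)
            break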

Future work: use version strings to focus just on vulnerable SSH servers and clients.

"Your risk is not what it used to be" by Ariel Waissbein

Ariel Waissbein

Ariel's talk was on risk analysis and tools from Core Security Technologies; he holds a Ph.D. in computational mathematics and complexity. Vulnerabilities used to be reported to a closed circle, and patches were slowly pushed. But what have we learned?

  • First, ignoring patches (updates) means you're uninformed.
  • Second, you need to check which threats the patches fix and analyze their impact on you.

Can "play with toys (tools)": scan systems periodically and archive. After you learn about a threat, check for changes. Do-it-yourself method: keep installers and updates, and take snapshots of software, Run exploit and look for changes.

"Insight" is a tool that simulates attacks against computer networks, using a pen-testing framework model. Copy exploit scenarios to your simulator to determine potential impact of an exploit. Scenario includes network device, OS/version, filesystem, & vulnerabilities. Not simulated to full detail—only some syscalls are simulated.

Ref: http://corelabs.coresecurity.com/

Nick Farr and Eric Michaud

"Freifunk in the USA: Leveraging Community Organizations to build Neighborhood Wireless Networks" by Nick Farr & Eric Michaud

Nick and Eric talked about setting up sustainable community wireless networks in DC. They are cofounders of HacDC, and both are CHNN (Columbia Heights Neighborhood Network) community coordinators. The network is not up yet, but it will use a network in NYC as a model. Their principles:

  • Must be actively engaged in community
  • Must be sustainable
  • Partner with local organizations
  • Must have someone pay the bills
  • "Free Beer" is not sustainable

Initial community wireless networks are "accidental", ad hoc, and short-lived. The German "Freifunk" networks (the name means "free network") are active mesh networks provided by people with bandwidth to share.

The most popular protocols for letting computers roam among multiple APs:

  • WDS - Wireless Distribution System (planned for initial use)
  • OLSR - Optimized Link State Routing protocol (phase 3)

OLSR finds a path to the "cloud" (multiple exit points to the Internet). It's usually easy to set up and great for multiple exit points, but it doesn't do QoS checking, and it's not useful here since the network has only one exit point. It's hard to manage, though that's getting better.

WDS has lots of overhead and works only with a single exit point, but it's easy to set up and manage.

Community wifi gets lots of different people excited, so you get lots of participation. But the failure rate is big because: nobody pays for bandwidth, nobody fixes things, non-technical users and stakeholders aren't engaged, the network fails to improve, and commercial offerings (DSL, cable) eclipse it.

Phase I: Build a stable base. Physical base: rewire a church building (an old building with dead wiring). Funding base: savings from lowered costs and economies of scale. People base: make friends and provide training, service, and support.

Phase II: Community through technology. The network builds out around community wireless, beyond the church building. Train users within network range during community events, and provide hardware for extensions. Routers run OpenWrt, use WDS, and are powered with PoE (Power over Ethernet).

Phase III: Extend the Network. "If you build it, they will come." Encourage experimental extensions to main network. Place nodes where needs exist and encourage extensions.

Concepts:

  • build community first, network second
  • build solid base
  • let users take ownership; provide training/support
  • provide more than just free wireless

"Advanced SQL Injection" by Joe McCray

Joe McCray

Joe of Learn Security Online presented some SQL injection techniques. SQL injection is where raw SQL is injected via the URL to query the SQL server behind a web server. The SQL injection "classes" are:

  • Inband - results appear on screen
  • Out-of-band - results arrive elsewhere (e.g., email)
  • Inferential - you have to test to learn what's going on

Inband: can learn data types from errors.

Out-of-band: try to push results to your own server (web site, DNS, email, etc.).

Inferential: if you can't see errors, it's "blind SQL injection", and results must be reverse-engineered.

Automated SQL injection tools, such as vulnerability scanners, are not that useful. Injection types (in order of preference):

  • error - learn from error messages
  • union - combine multiple SQL queries with UNION
  • blind - learn without visible results (just 1 bit of info per test: the query either succeeds or doesn't, or takes longer)

Manual testing is usually more useful—you can use multiple methods. MS SQL is the easiest to inject (easier than Oracle). MySQL is closer to MS SQL, but it lacks the error-based injection technique. MS SQL has more built-ins, is more flexible, and has fewer security mechanisms.

For privilege escalation, you can inject, say, a ping that runs for 8 seconds to see whether your input is being executed; a sketch of this timing test follows. You can usually remove client-side filtering (JavaScript) to make results clearer. To bypass alphanumeric filtering, "like" can often replace "=". Signature-based IDS can be bypassed with variations on 1=1, encoded comments, or string injection.
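
Here's a sketch of that timing test. The URL and parameter are hypothetical, and WAITFOR DELAY is MS SQL syntax:

    import time
    import urllib.request, urllib.parse

    url = "http://target.example/item"           # hypothetical target
    payload = "1; WAITFOR DELAY '0:0:8'--"       # MS SQL: pause 8 seconds

    start = time.monotonic()
    urllib.request.urlopen(url + "?id=" + urllib.parse.quote(payload))
    elapsed = time.monotonic() - start

    # If this took ~8s longer than a baseline request, the injected
    # statement executed.
    print("injectable" if elapsed > 8 else "no delay observed")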

"One XSS To Rule The Enterprise" by grutz

grutz

This talk was about using enterprise-favored single sign-on (specifically NTLM) against itself. grutz does penetration (pen) tests for a living. Web vulnerabilities have not been considered a big deal, except for SQL injection. But with a "reflected XSS" one can gain secure file access, admin functions, corporate email, etc.

NTLM is an MS Windows challenge-response authentication protocol in which the server validates the client's response with a separate Domain Controller (DC). With Kerberos, OTOH, both the server and the client authenticate with the Kerberos Key Distribution Center (KDC). NTLM is ubiquitous—it's even on the iPhone. NTLM seems strong, so there's no inducement to replace it with Kerberos, and it's supported over everything: HTTP, IMAP, POP3, SMTP, NNTP, etc.

NTLM has a weakness: authentication happens only once when the Microsoft IE zone is set to Intranet (internal or local network). grutz demoed Squirtle, a tool he wrote that takes advantage of this weakness by serving as a rogue DC that signs you in to the domain. It assumes you are in the local network domain, as on a corporate intranet, where authentication is less strict.

Kerberos is not vulnerable to rogue DCs—NTLM is. But, unfortunately, you cannot force a Windows client to drop NTLM and use only Kerberos—it can always fall back to NTLM.

"Googless" by Christian Heinrich (cmlh)

Christian Heinrich

Christian is an Australian security manager; his slides are at www.slideshare.net/cmlh

With the Google SOAP Search API, you're limited to queries of 10 words/2K, 1K queries/day, and 999 results. The API calls are doGoogleSearch and doGoogleSearchResponse. The problem is that new Google API keys are no longer issued; you have to use existing keys. The approach: take results from a Google search, build a list of domains and ports (e.g., 80, 8080, etc.), and call nmap using this list, as sketched below.
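
A minimal sketch of that pipeline (the host list is a placeholder; the SOAP API itself has since been retired):

    import subprocess

    hosts = ["www.example.com", "mail.example.com"]   # from search results
    ports = "80,443,8080,8443"                        # web ports of interest

    # Grepable output lands in scan.txt for later parsing.
    subprocess.run(["nmap", "-p", ports, "-oG", "scan.txt", *hosts],
                   check=True)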


"Breaking UNIX crypt() on the PlayStation 3" by Marc Bevand

Marc Bevand

Marc Bevand was the speaker. I recently integrated Marc's optimized MD5 and ARCFOUR amd64 assembly open-source code into OpenSolaris, and I thanked him personally for the code.

Marc's talk was on brute-force attacks against crypt() password hashes on the PS3. UNIX crypt is 30 years old! Even today, it is still the default for Solaris and Solaris Nevada [OpenSolaris] (see CRYPT_DEFAULT in /etc/security/policy.conf). UNIX crypt builds a 56-bit DES key from the password (up to 8 characters), perturbed by a 12-bit salt.

The best brute-force tool is John the Ripper, which uses the bitslice DES technique published by Dr. Eli Biham in 1997. DES uses 8 S-boxes, where each S-box takes 6 bits of input and produces 4 bits of output. The S-boxes can be represented as logical instructions on 1-bit values, and with this representation one can perform N encryptions/decryptions in parallel on an N-bit processor.
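
A toy illustration of the bitslicing idea (not Biham's actual S-box networks): pack the same bit position from N candidate inputs into one N-bit word, and a single logic instruction then processes all N candidates at once.

    # Suppose some S-box output bit is the boolean function (x0 & x1) ^ x2.
    # With 8 candidates packed one-per-bit-lane, one evaluation covers all 8:
    x0 = 0b10101100   # bit 0 of candidates 7..0
    x1 = 0b11100010   # bit 1 of candidates 7..0
    x2 = 0b01010111   # bit 2 of candidates 7..0

    out = (x0 & x1) ^ x2   # two instructions, eight parallel evaluations
    print(f"{out:08b}")    # that output bit for each of the 8 candidates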

The PS3's processor is the Cell B.E. (Broadband Engine). It has 7 usable cores ("SPUs"), each 128 bits wide—very good for using bitslice DES to crack UNIX crypt().

The DES components other than the S-boxes (e.g., the E-box) could also be optimized, but that hasn't been done yet.

Brute-force speed is 11.1 million passwords/second (on a PS3 with 6 SPUs at 3 GHz). This is better than the current best implementation, and it's also the best in performance per watt (hardware is cheap now, but energy still costs $$$). The full printable password space is 95^8 + 95^7 + ... + 95^0 passwords. Using 50 PS3s (about $20K) it will take 70 days on average to find any given password, with an average energy cost (at 10 cents/kWh) of $1,100.
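
The arithmetic checks out; here's my own back-of-the-envelope verification:

    rate_per_ps3 = 11.1e6                    # passwords/second (6 SPUs)
    n_ps3 = 50
    space = sum(95**k for k in range(9))     # printable passwords, length 0..8

    # On average you search half the space before hitting the password.
    seconds = (space / 2) / (rate_per_ps3 * n_ps3)
    print(seconds / 86400)                   # ~69 days, matching "70 days"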

This brute-forcing tool will be released next week as open source at http://epitech.net/~bevand_m

"New School Information Gathering" by Chris Gates

Chris Gates

Chris is a penetration (pen) tester whose blog is at carnal0wnage.blogspot.com/

Chris uses a black-box approach to information gathering: start with just a domain name and find out what's there—web servers, mail servers, DNS, etc. You want external and internal IPs and network information.

Fierce is a brute-force DNS tool that extracts information from DNS: it brute-forces DNS names and thereby finds subnets.

SEAT (Search Engine Assessment Tool) uses Google to gather information. It takes a while. Goolag is a similar tool.

Goog-mail.py, theHarvester.py, and other tools harvest a domain's email addresses using Google. Very useful for finding the naming conventions in email addresses.

Extract metadata from document files: user names, dates, times, path names, phone numbers, software versions.

Online web-based tools are really good for pen testing because they probe from their own IP address, not yours:

  • ServerSniff.net sniffs web servers, SSL certificates, and DNS; it finds domain names and affiliated companies.
  • DomainTools.com finds prior domain history, alerts on new registrations, and searches for all domains a registrant owns. The best tools cost $$$$, but reasonably so.
  • CentralOps.net does DNS scans and verifies email addresses with SMTP probes. Clez.net is similar.
  • spoke.com is a people-search tool that finds people who work for a company.

Maltego ties information together in a relationship chart (people, web sites, organizations, etc.). Starting with a hostname, it shows related domains that use the same DNS server, shows hostnames under a domain, looks up net blocks, and looks up names for IP addresses. Maltego also harvests documents and parses out metadata, finds shared domains, shows incoming links to a web page, and sorts relationships in different ways. Maltego has a free version and a $430 version—the pay version is worth it.

This relationship data is good for social engineering, and software versions are especially useful for MS Office document attacks.

"Privacy-preserving Location Tracking of Lost or Stolen Devices: Cryptographic Techniques and Replacing Trusted Third Parties with DHTs" by Thomas Ristenpart

Thomas Ristenpart

Thomas Ristenpart is a Ph.D. student at UCSD working on Adeona: "Private, Reliable, Open Source". Internet device tracking systems install a client that gathers the IP address, SSID, GPS, webcam pictures, etc., and calls home with the information when the device is on the net. Problems with these systems:

  • Ensuring the tracking can't be removed or disabled
  • Forfeiting the owner's location privacy (the focus of this talk)

This academic research project focuses on understanding the threats, then building privacy-preserving device tracking: tracking functionality plus location privacy. The project is named Adeona, after the Roman goddess of safe returns.

An adversary can eavesdrop on tracking information, and cached updates reveal past locations, not just the present one. Other threats: intrusions, subpoenas, and insider abuse.

The Adeona client is lightweight, with a location module and the Adeona core, which does the encryption. Another client can retrieve the location information.

The first strawman idea was to use off-the-shelf encryption. Issue 1: how to retrieve the data. Issue 2: the privacy of past locations.

The solution uses anonymous, unlinkable, forward-private updates, built on a forward-secure pseudorandom generator (FSPRG). Updates appear as random bits to an eavesdropper, and locations recorded before a given time can't be decrypted, so you can decrypt location information only from the time the device went missing. (Also, time information that is too fine-grained can reveal identity.)
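
A minimal sketch of a forward-secure PRG built as a hash ratchet (an illustration of the idea, not Adeona's actual construction): each epoch's output looks random, and overwriting the old state makes earlier epochs unrecoverable from a seized device.

    import hashlib

    state = b"initial secret seed"

    def next_epoch(state):
        out = hashlib.sha256(b"out" + state).digest()       # this epoch's bits
        new_state = hashlib.sha256(b"st" + state).digest()  # ratchet forward
        return new_state, out

    for _ in range(3):
        state, block = next_epoch(state)   # old state is discarded
        print(block.hex()[:16])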

Remote storage is implemented with OpenDHT (a distributed hash table); the data remains encrypted on the remote servers.

Adeona was first released publicly a few months ago (Windows, OS X, Linux). They're hoping the app will take on a life of its own.

Summary: privacy-preserving tracking resolves the fundamental tension between gathering forensic data and preserving user privacy. The Adeona core is a crypto engine with wide applicability, and the Adeona system is an immediately deployable privacy-preserving tracking solution.
