Toorcon 2004 Security Conference, San Diego, www.toorcon.org
Once again, I attended San Diego's
annual hacker and security convention, sponsored by a local hacker group.
It's cheap ($60), tax-deductible, and convenient for me (my time, my money).
Disclaimers: These are my notes, so they have typos and aren't highly polished.
I may have misinterpreted other people's words or ideas.
Opinions here are not necessarily mine nor my employer's.
Here's the best of the sessions I attended:
Keynote: The Future of Encryption
OpenPGP standard author, PGP co-founder.
History. Cryptography was invented soon after the third person in the world learned to read and write.
Traditionally an arcane skill, practiced by a few clever people.
It became common after WWII: the Enigma machine worked, putting the clever crypto people out of work.
The computer was invented specifically to break the Enigma machine.
Software cryptography came into play in the mid-1970s with DES. It became a standard, a technology (not a secret, not an art).
Public key cryptography took care of the secret-key management problem.
Present. Networks are everywhere. Encryption must now work for non-clever, non-computer people.
Adoption and human interface design are the current focus.
Problems being ignored:
Digital signatures (DS): the problem is in the laws, not the technology. A DS is not a signature (a signature is an act, not a thing),
but more like a seal or a voice. Is a DS a commitment? Is it a tamper-evident seal? How do we know?
A DS pushes liability onto the signer. Credit card. Email. Agreement.
Are users or servers certified? If everyone has a cert, why should any of them be trusted?
Sysadmins are more responsible than the typical user.
Universal DS has the same problem as universal ID cards for fighting terrorism.
Another example is using DS to fight spam (spammers can get a DS too).
What does "non-repudiation" really mean?
There needs to be accountability when using DS, otherwise it's not believable.
Blinded signatures. The Chaum patent expires in Summer 2005; they may be used more once this happens.
Certifies something without revealing private information.
Group signatures. Someone in a known group signed, but don't know which one.
Gives accountability while preserving privacy.
Reliability is always intertwined with security. Security: protect against intelligent attacks. Reliability: protect against unintelligent attacks.
Mediated locks: you can only put worthless things in an unpickable safe (only a mad person would put valuable stuff there).
Must have access to protected keys or data.
Pervasive encryption: Humans make wrong decisions in the heat of the moment.
E.g., security vs. keeping your job. Or email vs. IMs or Hotmail or dialup modems.
Policies need to be set up beforehand so they can be followed and automated.
The end-to-end security myth. Not possible. What's an "end"? What's important?
Ends that are too close lose reliability and usefulness (e.g., spam filtering or archiving breaks). Ends that are too distant lose security.
"Ends" need to be at the appropriate location, depending on these trade-offs.
Digital Rights Management (DRM). Not solvable; it can always be broken. It works only against polite or lazy hackers.
It doesn't work in the real world of cell phone cameras and recorders.
Nobody wants it. It works only if everyone is honest. Legal liabilities will stop DRM documents in corporations.
DRM is useful for niche markets, though (e.g., government or financial).
The Accountable Net.
Can provide privacy and security with the right questions.
The issue is accountability and reputation, not identity (e.g., do you pay your bills?).
Authority-based authentication is useful (e.g., are you a spammer?).
Identity management: trendy words for single sign-on. Everyone wants it, but
it trades off security for manageability.
Federated identity is not useful for end users; it breaks privacy by tying identities together.
Hash functions are breaking.
PGP 2 is affected (PGP 2 is not sacred just because of Zimmermann; don't use it, use PGP 5).
SHA-1 is still safe; we can move to SHA-256 if needed.
More advances are coming.
Secure hash functions are easy, but fast hash functions are hard.
E.g., MD5 is half the speed of MD4.
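The "move to SHA-256 if needed" point is worth making concrete: with a generic hash API, switching algorithms is a one-line change. A minimal sketch using Python's hashlib (my illustration, not anything from the talk):

```python
import hashlib

def digest_hex(data: bytes, algorithm: str) -> str:
    """Hex digest of `data` under the named hash algorithm."""
    h = hashlib.new(algorithm)  # algorithm chosen at runtime, e.g. "sha1" or "sha256"
    h.update(data)
    return h.hexdigest()

msg = b"hello world"
print(digest_hex(msg, "sha1"))    # 160-bit digest: 40 hex chars
print(digest_hex(msg, "sha256"))  # 256-bit digest: 64 hex chars
```

Code that hard-wires one hash function (or its digest length) is what makes these migrations painful in practice.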
Advances with little impact:
fast ciphers (we don't care which one is being used), public key systems, combined encryption and authentication.
Quantum "cryptography": interesting physics, but not cryptography.
A pet peeve of his.
Sci-fi-like technology. Unlikely but possible: quantum computing, DNA computing,
faster-than-light (FTL) information transfer,
unexpected advances in math (factoring, discrete log, solving the AES algebraic equations).
Summary. Cryptography is a pervasive, invisible, interoperable core technology, and it will see more use in the future.
Went next door to Seaport Village. Had a bean burrito while lots
of young girls were singing karaoke. Some were good, some not,
but they were having fun.
I also visited the Hyatt's pool on the 4th floor. Lots of people. There was
a nice view of San Diego Bay. I saw 2 aircraft carriers and I noticed they
are now surrounded by large inflatable pontoons (to protect against suicide
boats, I guess).
PATRIOT Act, Privacy and You
Jennifer Granick, Esq.,
Stanford Law School,
The goal here is not to review the USA PATRIOT Act itself (too complicated), but to review
its impact on you. There's a patchwork of several laws about privacy.
She will talk about some of them: the 4th Amendment, the Stored Communications Act, the Electronic Communications Privacy Act,
the Wiretap Act, the Computer Fraud and Abuse Act of 1986, and the USA PATRIOT Act.
Privacy: the right to be left alone (autonomy) and the right to control your
information. Privacy enables other rights, such as speech and association.
The US 4th Amendment is her favorite (even over the 1st, free speech). It protects against unreasonable search and
seizure, and gives a reasonable expectation of privacy (e.g., in your house).
Sometimes there are gray areas.
If there's a reasonable expectation of privacy, police need probable cause to get a warrant from a judge.
With a warrant, they must knock and announce.
If these rules aren't followed, the evidence is excluded.
The PATRIOT Act allows secret searches ("sneak and peek").
Computer Fraud and Abuse Act. Disallows damage or unauthorized access.
E.g., courts have said this includes spam, DNS search robots, Internet auction or travel agent spiders,
and port scanning.
This is especially true if it downs or DoSes the computer (it must "cause harm").
Otherwise the rulings are not consistent. The law is vague and overly broad.
Interception of communications.
The content of a communication is more private than the mere fact that there was some communication ("chatter").
You need a warrant for the content.
The rules differ for intelligence agencies, law enforcement, ISPs, and employers.
Wiretapping can't be done (except with a wiretap warrant).
The PATRIOT Act made wiretapping easier (e.g., for "giving support to terrorists").
Nationwide/roving wiretaps are now legal.
Monitoring a computer "trespasser" is now OK (if there is no business relationship).
ISPs may monitor.
At this point my laptop ran out of power. The two important remaining points:
Most of the PATRIOT Act provisions "sunset" (expire) in Summer 2005.
A nation has the right to defend itself.
However, it's important to make sure that, when the provisions are renewed next year, they are not as broad as they are now.
Honeynet Project: Honeynets for the Masses
Patrick McCarty, Azusa Pacific University
Honeypot - a decoy with no production value. Its purpose is gathering information,
separating production traffic from malicious traffic.
Honeynet - a system of honeypots. An architecture, not a product.
Data Control - no restrictions on traffic coming into the honeypot, but scrubbed/limited
outgoing connections (to keep honeypot compromises from spreading to the Internet).
Data Capture -
Network-based capture uses tcpdump or Snort.
Host-based capture uses Sebek (a kernel module that captures all sys_read calls).
An attacker can't sniff for the monitoring traffic, since Sebek isn't network-based.
A honeynet takes a lot of resources to properly maintain (a ton of data).
Anti-honeynet technologies are available (such as anti-Sebek).
Honeynets can attack other honeynets.
Privacy is a possible issue.
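The data-control idea can be sketched as a toy outbound-connection budget. This is my own illustration (real honeywalls did this with iptables rate limits and snort_inline, and the default limit below is an assumption, not from the talk):

```python
from collections import defaultdict

class DataControl:
    """Toy honeywall data control: let everything in, budget what goes out."""

    def __init__(self, outbound_limit: int = 15):
        self.limit = outbound_limit       # illustrative per-honeypot budget
        self.outbound = defaultdict(int)  # honeypot IP -> outbound count

    def allow(self, honeypot_ip: str, direction: str) -> bool:
        if direction == "in":
            return True                   # no restrictions on traffic reaching the honeypot
        self.outbound[honeypot_ip] += 1   # outbound traffic is counted and capped
        return self.outbound[honeypot_ip] <= self.limit

dc = DataControl(outbound_limit=2)
print(dc.allow("10.0.0.5", "in"))   # True: inbound always passes
print(dc.allow("10.0.0.5", "out"))  # True
print(dc.allow("10.0.0.5", "out"))  # True
print(dc.allow("10.0.0.5", "out"))  # False: budget exhausted
```

The asymmetry is the whole point: attackers must be able to get in freely, but a compromised honeypot must not become a launch pad.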
Honeywall - the "control center" of a honeynet. Its goals are data capture and
altering (scrubbing) outbound attacks.
Tools used: IPTables, Snort, swatch, grsecurity, tcpdump, and (soon) ntop.
Available on the "Honeywall CD" (a bootable CDROM with a UI).
Future - distributed analysis among different physical locations
with a central database.
A Survey of Novel Approaches to Network Security
aempirei, Baseline Research
Profile - used to assess and predict someone's behavior, based on behavior and appearance.
Most things are too complex to profile automatically.
Behavioral analysis - used to create a profile; future behavior can then be predicted
based on a "fingerprint" of known profiles.
Stochastic process - non-deterministic (random or complex) behavior.
E.g., traffic or gambling.
Can be modeled with statistical models.
Static stochastic processes - games of chance or quantum dynamics.
Can't predict them; can only use statistics.
Dynamic stochastic processes - we don't understand the underlying model.
Context-sensitive state and changing probabilities.
E.g., "i before e except after c" is only true 80% of the time.
There's less return with each further refinement of the rule.
Still 170 exception words after this grep:
grep -v '\([rc]ei\)\|\(ei[sdlnkrtzg]\)'
Primes are deterministic, but sufficiently complex to defy prediction; they appear
to be stochastic.
Frequency distributions can be used to distinguish random data from meaningful data. E.g., DNA sequences, English text, and (poorly encrypted) XOR-ed text are meaningful; PRNG output or DES-encrypted data appears random.
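One simple way to make that distinction concrete is Shannon entropy over the byte-frequency distribution (a sketch of my own, not the speaker's method): uniformly distributed bytes score 8 bits per byte, while English text scores far lower.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: 8.0 for a uniform byte distribution, much lower for English text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

english = b"the quick brown fox jumps over the lazy dog " * 50
uniform = bytes(range(256)) * 8  # every byte value equally frequent

print(shannon_entropy(english))  # well below 8
print(shannon_entropy(uniform))  # 8.0
```

Real classifiers use richer statistics (chi-square, n-gram frequencies), but even this one-liner separates text from cipher output.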
Markov process - predicts stochastic processes.
Sequitur (the Nevill-Manning algorithm) can be used to infer a context-free grammar (CFG) from a sequence using an n-gram model, revealing common structures (useful for compression).
E.g., there are several repetitive patterns in
"In the beginning God created the heaven and the earth."
This can be used to identify the authors of anonymous text
(or an author's gender, a text's language, or a dog breed's DNA).
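Sequitur builds a hierarchy of reusable grammar rules; a much cruder stand-in, counting repeated word n-grams, already exposes the repetition in the Genesis sentence. This is my own illustrative sketch, not the Nevill-Manning algorithm:

```python
from collections import Counter

def repeated_ngrams(text: str, n: int = 2):
    """Word n-grams that occur more than once: a crude stand-in for the
    repeated rules Sequitur would factor out of the sequence."""
    words = text.lower().replace(".", "").split()
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return {g: c for g, c in grams.items() if c > 1}

sentence = "In the beginning God created the heaven and the earth."
print(repeated_ngrams(sentence, 1))  # {('the',): 3}
```

A repeated unit ("the" here) is exactly what a grammar-based compressor turns into a rule; longer texts yield longer repeated phrases.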
PRNGs: Linux's is good. *BSD's is good for the 2 high bytes, but the 2 low bytes are not good (predictable).
This was found by pumping the random stream into gzip and looking at the byte distribution.
Anyone can do this.
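The gzip trick is easy to reproduce with zlib (my sketch of the idea): truly random bytes barely compress, while a stream with biased bits compresses noticeably.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: near 1.0 for good random data,
    noticeably lower when the byte distribution is biased."""
    return len(zlib.compress(data, 9)) / len(data)

good = os.urandom(65536)                             # OS entropy pool
biased = bytes(b & 0x3F for b in os.urandom(65536))  # top two bits forced to zero

print(compression_ratio(good))    # close to 1.0
print(compression_ratio(biased))  # measurably smaller
```

If your "random" stream shrinks under a general-purpose compressor, it has structure an attacker can exploit.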
Pkzip has been used to classify European language similarities (by how well they compress together).
Sequitur (a dictionary builder) can also be used instead of pkzip, by
comparing the number of rules Sequitur generates.
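That compression-similarity trick is essentially the normalized compression distance (NCD); here's a sketch with zlib standing in for pkzip (my own illustration, not the study's code):

```python
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))  # similar inputs share dictionary entries
    return (cxy - min(cx, cy)) / max(cx, cy)

eng1 = b"the quick brown fox jumps over the lazy dog " * 40
eng2 = b"the quick brown fox leaps over the lazy cat " * 40
noise = os.urandom(len(eng1))
print(ncd(eng1, eng2) < ncd(eng1, noise))  # True: similar texts compress well together
```

Clustering languages (or authors) is then just hierarchical clustering over the pairwise NCD matrix.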
This can be used to identify hackers who break into a system:
sort ~/.bash_history | uniq -c | sort -nr > frequency.dat
Then compare the output using Stereotype.
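I don't know Stereotype's internals, but one generic way to compare two command-frequency profiles like that is cosine similarity over the counts. A hypothetical sketch (the command histograms below are invented):

```python
import math
from collections import Counter

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two command-frequency profiles (0..1)."""
    keys = set(p) | set(q)
    dot = sum(p[k] * q[k] for k in keys)  # Counter returns 0 for missing keys
    norm = math.sqrt(sum(v * v for v in p.values())) * \
           math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

admin = Counter({"ls": 40, "cd": 30, "vim": 20, "git": 10})
intruder = Counter({"wget": 15, "chmod": 10, "ls": 5, "nc": 8})
print(cosine(admin, admin))     # 1.0 (up to float rounding)
print(cosine(admin, intruder))  # much lower: the profiles barely overlap
```

A shell history whose profile suddenly diverges from the account owner's baseline is worth a look.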
"Most Likely Path First" (similar to OSPF routing):
a tree built from adjacent-word comparisons. It can be used to identify spam, for example.
It can also be used to prioritize hosts that are probably vulnerable
(looking at activity or open ports).
Making Privacy Enhancing Technology a Reality
Len Sassaman, PGP Security
PGP was released over 10 years ago. Other security software has been developed since:
SSL/TLS, S/MIME, PEM, MOSS, disk encryption,
ecash (wiped out by patents and PayPal), Anonymizer, Mixmaster.
The problem is that consumers don't demand privacy (they want it, but won't take steps
to protect themselves; it's inconvenient).
"Privacy policies are the opiate of the Internet." A feel-good measure.
Some companies violate their own policies
(e.g., JetBlue giving out travel info).
Criminals certainly don't obey privacy policies.
Most crypto software consists of "cool projects" that are not usable.
There are political problems too.
It's often designed by committee and bloated with options and details.
People in the know often explain encryption software by how it's implemented,
not by what it does (e.g., "PGP is a public key . . ." instead of
"it encrypts and signs files").
PGP's "web of trust" is shallow.
It's too easy to misuse.
SSL is worse than PGP.
It has a top-down trust model, but it's easy to get a certificate.
Excessive SSL warnings cause click-fatigue:
users click through certificate warnings.
Verisign says you need a trusted third party to use SSL/TLS.
Crypto is a success where it's mandated (e.g., military, banking).
True user-empowering encryption should have:
a friendly UI, simplified concepts, 1-click operation.
The user's goal is not encryption, but keeping email from being observed.
No reading, no extra skill required.
Spam Forensics: Reverse-Engineering Spammer Tactics
Justin Mason, SpamAssassin
Antispam tools work because spammers don't write their own tools; they
buy spamware (currently the most popular are Dark Mailer and Send Safe).
Spammers target AOL users, since they are relatively clueless (and buy the junk
advertised in spam).
Spammers like HTML because you can hide text and malicious script in it.
In the early days, spammers identified themselves with X-Mailer headers.
Now spam is disguised as coming from MS Outlook Express.
However, the Message-ID can be parsed to tell spam from real MS OE email.
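Genuine Outlook Express Message-IDs follow a recognizable shape; the regex below is my simplified assumption about that shape, for illustration only (the exact format, and SpamAssassin's real checks, differ):

```python
import re

# Illustrative (assumed) Outlook Express style Message-ID:
# <12 hex chars $ 8 hex chars $ 8 hex chars @ hostname>
OE_MSGID = re.compile(r"^<[0-9a-f]{12}\$[0-9a-f]{8}\$[0-9a-f]{8}@[\w.-]+>$")

def looks_like_oe(message_id: str) -> bool:
    """Does this Message-ID match the assumed Outlook Express pattern?"""
    return OE_MSGID.match(message_id) is not None

print(looks_like_oe("<000001c4a9f3$8a2f34b0$0a00a8c0@workstation>"))  # True
print(looks_like_oe("<randomjunk12345@spamhost>"))                    # False
```

A message whose X-Mailer claims Outlook Express but whose Message-ID fails the shape test is a strong spam signal.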
Hashing systems. E.g., Razor (open source), Pyzor, DCC, or AOL's internal filters.
If the same message body is sent to, say, 500 people, it's spam.
There's also user-reported spam (but sometimes users report non-spam as spam).
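The core of such a hashing system is tiny: normalize the body, hash it, and count how often the same fingerprint shows up. A sketch assuming simple whitespace/case normalization (Razor's actual signatures are fuzzier than this):

```python
import hashlib
import re
from collections import Counter

def body_fingerprint(body: str) -> str:
    """Hash a normalized message body; identical bulk mail collapses to
    one fingerprint even if whitespace or case differs slightly."""
    normalized = re.sub(r"\s+", " ", body.strip().lower())
    return hashlib.sha256(normalized.encode()).hexdigest()

seen = Counter()
for msg in ["Buy NOW!!", "buy   now!!", "Hi mom, see you Sunday."]:
    seen[body_fingerprint(msg)] += 1

bulk = [fp for fp, n in seen.items() if n > 1]
print(len(bulk))  # 1: the two "buy now" messages collapsed together
```

In deployment the counts come from millions of mailboxes, and the threshold ("say, 500 people") decides what counts as bulk.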
Hashbusting. Spammers add random gibberish to the email body.
But the "random" isn't really random; patterns are observable in the gibberish.
The length, character range, or location of the "random" string was static.
E.g., time(NULL)/4444 used to generate a "random" email address: firstname.lastname@example.org.
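Why time(NULL)/4444 is a giveaway: integer-dividing the Unix time by 4444 yields a value that only changes every 4444 seconds (about 74 minutes), so huge batches of "random" addresses share the same token. A quick arithmetic illustration of my own:

```python
# Spamware seeding a "random" token from time(NULL)/4444 (C integer division).
def fake_random_token(now: int) -> int:
    return now // 4444  # constant for ~74-minute stretches

t = 1_100_000_000  # a fixed Unix timestamp from late 2004
hour_of_tokens = [fake_random_token(t + s) for s in range(0, 3600, 600)]
print(len(set(hour_of_tokens)))  # 2: only two distinct "random" values all hour
```

Any filter that sees the same "random" string across thousands of messages can treat it as a fingerprint of the spamware, not noise.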
Spammers' top priority is avoiding abuse reports to their ISPs
(expensive for them). They "list wash" reporters off their lists
so the ISP doesn't get reports.
They encode the recipient's email address with ROT13 in the body.
Spammers like ROT13, even though it's a trivial "spy decoder ring" cipher.
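Detecting that list-washing trick is nearly a one-liner with Python's built-in ROT13 codec (my sketch; the address below is a hypothetical example):

```python
import codecs

def contains_rot13_address(body: str, address: str) -> bool:
    """List-washing check: does the body hide this recipient address in ROT13?"""
    return codecs.encode(address, "rot13") in body

body = "click here to be removed: wbr@rknzcyr.bet"
print(contains_rot13_address(body, "joe@example.org"))  # True
```

Since ROT13 is its own inverse and leaves "@" and "." untouched, the hidden address keeps its recognizable email shape, which is exactly why filters can spot it.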
Spam software then added templates to specify where randomness and
parameters are placed. But that also makes things easy for anti-spamware.
Spamware also hides behind proxies (legitimate bulk mail and mailing lists do not).
Bayes-busters. Bayesian filtering has been popular recently.
Random word sequences are used to defeat the filter.
However, the word sequences are the wrong length, so they're easy to detect.
Look for a high number of HTML tags that don't exist:
li<modem>ke recei<benzedrine>ved th<false>is ema<downey>il
It's easy to detect gibberish and chaff with long-word detection,
bad-tag detection, or lots of invalid HTML.
Look for 18th-century words
(much of the gibberish is text taken from Project Gutenberg etexts).
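A bogus-tag detector needs little more than a regex and a whitelist. A sketch of my own (the tag list is truncated for illustration; a real filter would know the full HTML tag set):

```python
import re

# Tiny illustrative whitelist of real HTML tags.
KNOWN_TAGS = {"a", "b", "i", "p", "br", "html", "body", "font", "table", "tr", "td"}

def bogus_tag_count(text: str) -> int:
    """Count HTML-ish tags that aren't real HTML; hashbuster chaff like
    'recei<benzedrine>ved' stands out immediately."""
    tags = re.findall(r"<\s*/?\s*([a-zA-Z]+)[^>]*>", text)
    return sum(1 for t in tags if t.lower() not in KNOWN_TAGS)

spam = "li<modem>ke recei<benzedrine>ved th<false>is ema<downey>il"
print(bogus_tag_count(spam))                     # 4
print(bogus_tag_count("<p>Hi <b>there</b></p>")) # 0
```

Even one or two nonexistent tags in a short message is a strong signal, since legitimate mail clients never emit them.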
There are many strange email headers in spam that are rarely or never seen in normal email.
Spam software also has special MIME boundary patterns.
Pretty crappy, of course (spammer-friendly).
One loophole: only "ISPs" can now bring action.
If you host a few other people's email, for example, you can qualify as
an ISP and sue spammers.
Complaint systems: SpamCop and AOL's are good (AOL's is only for AOL customers).
This is hard to do by hand (examining headers is time-consuming and non-trivial).
Future: download and reverse-engineer spamware (the DMCA is an issue).
A lot can be learned from just the spamware docs.
Currently SpamAssassin is overloaded with writing rules.
SpamAssassin has always aimed for high accuracy at the expense of
high system load. In the future, it will have a plug-in system to choose the
set of filters to use.
Risks in Passive Network Discovery Systems
Brian Hermacki, Symantec Research Labs,
Security systems require knowledge of their environment to operate effectively:
network topology, hosts, users, local policies. This can't be hard-coded.
Even large companies rarely have their network topology sketched out well.
There have been efforts to write tools (active discovery or passive discovery), but
these tools suck.
Active Network Discovery Systems (ANDS).
These usually take an old map and update it.
This sucks: slow, labor-intensive, prone to human error, not detailed, a snapshot only,
obtrusive (triggers security sensors), misses hardened assets and dark nodes,
and doesn't work through proxies and firewalls.
Passive Network Discovery Systems (PNDS).
A PNDS listens to a network to gather info on
host OSes, general topology, apps and patch levels, and peers.
PNDS vs. ANDS:
Pros: deeper information (than just probing);
fewer dark spots (active hosts are visible even if hardened against scanning, though a quiet host may still be missed).
Cons: requires a large number of sensors, has scalability problems on large networks,
and needs lots of app knowledge (so high dev costs). And security.
PNDS security issues.
A PNDS can be poisoned with lots of noise: just plug a laptop in.
tcpreplay 2.x can be used to do this.
Old (correct) results can be flooded out.
DoS is not a problem; it's easy to detect.
Be suspicious of changes (non-trivial;
easier when DHCP networks are segmented).
A PNDS needs to be robust.
Use both ANDS and PNDS for best results.
They're hard to compromise from outside the network;
inside knowledge is needed.
Don't build your own NDS; it's tricky.
Technorati Tags: security