[Photo: The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo]
Toorcon 15 is the 15th annual security conference held in San Diego.
I've attended about a third of them and blogged about the previous conferences I attended, starting in 2003.
As always, I've only summarized the talks I attended that interested me enough to write about. Be aware that I may have misrepresented the speakers' remarks, and that they are the speakers' remarks and opinions, not mine or my employer's, so don't quote me or them.
I blame my dog for any typos.
Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information.
[Photo: Andrew Furtak and Oleksandr Bazhaniuk]
Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations.
They previously gave this talk at the BlackHat 2013 conference.
UEFI (Unified Extensible Firmware Interface) is the interface between the hardware and the OS.
UEFI is processor- and architecture-independent.
Malware can replace the bootloader (bootx64.efi, bootmgfw.efi); once it is replaced, the malware can modify the kernel.
Replacing the bootloader is trivial.
Today there are many legacy bootkits; UEFI obsoletes most of them.
MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes.
A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor.
Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx).
X.509 certs with SHA-1/SHA-256 hashes.
Keys are stored in non-volatile (NV) flash-based NVRAM.
Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)).
Root cert uses RSA-2048 public keys and PKCS#7 format signatures.
Secure Boot policy settings are: always execute; never execute; allow execute on security violation; defer execute on security violation; deny execute on security violation; query user on security violation.
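To make the db/dbx check concrete, here's a minimal Python sketch of the verification logic as I understood it; the databases, image bytes, and policy names below are illustrative stand-ins, not real UEFI structures or APIs.

```python
import hashlib

# Illustrative stand-ins for the UEFI "authorized" (db) and
# "forbidden" (dbx) databases; real entries are certs or signed hashes.
AUTHORIZED_DB = {hashlib.sha256(b"trusted bootloader image").hexdigest()}
FORBIDDEN_DBX = {hashlib.sha256(b"known-bad bootloader").hexdigest()}

def secure_boot_check(image: bytes, policy: str = "deny") -> bool:
    """Toy model: hash the image, consult dbx, then db, then the policy."""
    digest = hashlib.sha256(image).hexdigest()
    if digest in FORBIDDEN_DBX:       # the forbidden list always wins
        return False
    if digest in AUTHORIZED_DB:       # explicitly authorized
        return True
    # Otherwise it's a security violation; apply the configured policy.
    if policy == "always":
        return True                   # "always execute"
    if policy == "query":             # "query user on security violation"
        return input("Execute unverified image? [y/N] ").lower() == "y"
    return False                      # "never"/"deny" both refuse

print(secure_boot_check(b"trusted bootloader image"))   # True
print(secure_boot_check(b"evil bootkit"))               # False under "deny"
```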
Secure Boot does NOT protect from physical access.
Can disable from console.
Each BIOS vendor implements Secure Boot differently.
There are several platform and BIOS vendors.
It becomes a "zoo" of implementations—which can be taken advantage of.
Secure Boot is secure only when all vendors implement it correctly.
Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If the PK is not found, the firmware enters setup mode with Secure Boot turned off.
Can also exploit TPM in a similar manner.
One is not supposed to be able to directly modify the PK in SPI flash from the OS though.
But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot.
Multiple vendors are vulnerable.
They will disclose this exploit to vendors in the future.
Recommendations:
[Photo: Yoel Gluck and Angelo Prado]
CRIME is software that performs a "compression oracle attack."
This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header.
CRIME sends requests with every possible character and measures the ciphertext length; the guess that compresses the most is the right one, so it recovers the cookie one byte at a time.
SSL compression uses LZ77 to reduce redundancy, and Huffman coding replaces common byte sequences with shorter codes.
US-CERT thought the SSL compression problem was fixed, but it isn't.
The speakers convinced CERT that it wasn't fixed, and a CVE was issued.
BREACH, breachattack.com
BREACH exploits the SSL response body (via the Accept-Encoding request header and the Content-Encoding response header).
It takes advantage of the fact that the HTTP response body is still compressed even when TLS-level compression is disabled.
BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds.
It needs attacker-supplied content (say from a web form or added to a URL parameter).
BREACH listens to a session's requests and responses, then inserts extra requests and responses.
Eventually, BREACH guesses a session's secret key.
Compression can be used to guess contents one byte at a time.
For example, "Supersecret SupersecreX" (a wrong guess) compresses by 10 bytes,
while "Supersecret Supersecret" (a correct guess) compresses by 11 bytes;
the correct guess compresses more, so each character can be found by trying every possibility.
To start guessing, BREACH needs at least three known initial characters in the response sequence.
The compression length then "leaks" information.
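Here's a rough Python sketch of that length-leak idea (my illustration, not the speakers' code): the secret and the attacker's guess share one compression context, so the correct guess compresses best. The token name and page layout are made up.

```python
import zlib

SECRET = "Supersecret"        # the value the attacker wants to recover
KNOWN = "Supersecre"          # prefix recovered so far

def response_size(guess: str) -> int:
    # Model a page that reflects attacker input alongside the secret.
    body = f"token={SECRET}&echo=token={guess}".encode()
    return len(zlib.compress(body, 9))

# Try every candidate for the next byte; the correct guess extends the
# LZ77 back-reference, usually yielding the shortest ciphertext.
sizes = {c: response_size(KNOWN + c) for c in "abcdefghijklmnopqrstuvwxyz"}
winners = [c for c, n in sizes.items() if n == min(sizes.values())]
print("candidates for next byte:", winners)   # ties = "too many winners"
```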
Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same).
The solutions include:
Mitigations
Future work
[Photo: Ryan Huber]
Ryan first discussed various ways to do a denial of service (DoS) attack against web services.
One usual method is to find a slow web page and do several wgets. Or download large files.
How to identify malicious hosts
Some mitigation
Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer
Bouncer is software written by Ryan that works from netflow data.
It has a small, unobtrusive footprint and detects DoS attempts.
It closes blacklisted sockets immediately (it's not nice about it; there's no proper connection close).
An aggregator collects requests and controls your web proxies.
You need NTP on the front-end web servers so Bouncer gets clean data.
Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms.
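I haven't read Bouncer's source, but the core idea of spotting an abusive source from request data might look like this minimal sketch: count requests per source over a sliding window and cut off the noisy ones. The window and threshold values are arbitrary.

```python
import time
from collections import defaultdict, deque

WINDOW_SECS = 10       # sliding window; arbitrary illustrative values
MAX_REQUESTS = 100     # per-source threshold before blacklisting

recent = defaultdict(deque)    # source IP -> timestamps of recent requests
blacklist = set()

def allow_request(src_ip, now=None):
    """Record one request; return False once the source is blacklisted."""
    now = time.time() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and q[0] < now - WINDOW_SECS:   # expire old timestamps
        q.popleft()
    if len(q) > MAX_REQUESTS:
        blacklist.add(src_ip)   # Bouncer would also kill the socket outright
    return src_ip not in blacklist

# A source that hammers the server gets cut off mid-burst.
for i in range(150):
    allowed = allow_request("203.0.113.7", now=1000.0 + i * 0.01)
print(allowed)   # False
```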
Future features: gzip collection data, documentation, consumer library, multitasking, logging destroyed connections.
Takeaways:
[Photo: Peleus Uhley and Karthik Raman]
Peleus and Karthik talked about response to mass-customized exploits.
Attackers behave much like a business.
"Mass customization" refers to concept discussed in the book Future Perfect by Stan Davis of Harvard Business School.
Mass customization is differentiating a product for an individual customer, but at a mass production price.
For example, the same individual with a debit card receives basically the same customized ATM experience around the world.
Or designing your own PC from commodity parts.
Exploit kits are another example of mass customization.
The kits support multiple browsers and plugins, and allow new modules.
Exploit kits are cheap and customizable.
Organized gangs use exploit kits.
A group at Berkeley looked at 77,000 malicious websites
(Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, derived from only a dozen or so exploit kits.
Response time for 0-day exploits has gone down from ~40 days five years ago to ~10 days now.
So the drive in malware is toward mass-customized exploits, to avoid detection.
There's plenty of evidence that exploit development has Project Manager bureaucracy.
They infer, from looking at the malware code, bureaucratic edicts to:
Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions.
They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs.
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat.
Future research includes normalizing malware and JavaScript.
Conclusion: the coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks.
[Photo: Richard Wartell]
The attack vector addressed here is "RoP": Return-Oriented Programming attacks.
RoP attacks reuse your program's own code: the attacker writes return addresses onto the stack that point to exploitable code fragments ("gadgets") already present in the program.
Pinkie Pie was paid $60K last year for a RoP attack.
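As a toy illustration of why programs are full of gadget raw material (my sketch, not the speaker's tooling): scan a binary for byte runs ending in the x86 RET opcode (0xC3). Real gadget finders disassemble properly; this just shows how plentiful the candidates are.

```python
def find_gadget_candidates(code: bytes, tail_len: int = 5):
    """Return (offset, hex bytes) for each byte run ending in RET (0xC3)."""
    hits = []
    for i, b in enumerate(code):
        if b == 0xC3:                        # RET opcode on x86
            start = max(0, i - tail_len)
            hits.append((start, code[start:i + 1].hex(" ")))
    return hits

# Any local binary will do; /bin/ls is just a convenient example path.
with open("/bin/ls", "rb") as f:
    candidates = find_gadget_candidates(f.read())
print(len(candidates), "candidate gadget tails; first few:")
for off, tail in candidates[:5]:
    print(hex(off), tail)
```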
One solution is using anti-RoP compilers that compile source code with NO return instructions.
ASLR randomizes the base of the address space, but not the location of "gadgets" within it.
IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine.
Richard's goal was to randomize a binary with no source code access.
He created "STIR" (Self-Transforming Instruction Relocation).
STIR disassembles binary and operates on "basic blocks" of code.
The STIR disassembler is conservative in what to disassemble.
Each basic block is moved to a random location in memory.
Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations.
The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable.
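Conceptually, the rewrite might look like this toy Python sketch; the labels and "instructions" are stand-ins, not STIR's actual implementation.

```python
import random

# Toy basic blocks: label -> "machine code". In STIR these come from
# conservative disassembly of a real binary.
blocks = {"entry": "push...; call...", "loop": "cmp...; jne...", "exit": "ret"}

# Copy each block into a new code section at a randomized position.
order = list(blocks)
random.shuffle(order)
new_section = {new_addr: blocks[label] for new_addr, label in enumerate(order)}

# Rewrite each old block location as a jump to its relocated copy, so
# stale code pointers still work but hard-coded gadget addresses now
# land on a jump instead of the gadget.
trampolines = {label: f"jmp new_section[{addr}]"
               for addr, label in enumerate(order)}

print(new_section)
print(trampolines)
```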
STIR has better entropy than ASLR in location of code.
Makes brute force attacks much harder.
STIR runs on MS Windows (PE) and Linux (ELF) binaries.
It eliminated 99.96% or more of "gadgets" (i.e., moved their addresses).
Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!).
The unique thing about STIR is that it requires no source access, and the modified binary fully works!
Current work is to rewrite code to enforce security policies.
For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk.
[Photo: Collin Greene]
Collin talked about Facebook's bug bounty program.
Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs.
But there are lots of "deep, dark, forgotten" parts of legacy FB code.
Collin gave several examples of bountied bugs.
Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care).
FB uses security questions, as does everyone else, but they are basically insecure (often easily discoverable).
Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in.
Bug bounties bring people in with different perspectives, and are paid only for success.
A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis.
The bounty program started in July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high priority problems that needed to be fixed immediately.
The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year.
Spammers like to backstab competitors.
The youngest submitter was 13.
Some submitters have been hired.
Bug bounties also let FB see bugs that were missed by tools or reviews, allowing improvement in the process.
Bug bounties might not work for traditional software companies, where the product has a release cycle or is not on the Internet.
[Photo: Anna Shubina]
(I missed the start of her talk because another track went overtime, but I have the DVD of the talk, so I'll expand this later.)
[Photo: Fuzzynop]
This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches.
But who is making these malware threats?
It's not a single person or group; the authors have diverse skill levels.
There's a lot of fat-fingered fumblers out there.
Always look for low-hanging fruit first:
Reverse engineering is hard.
Disassembler use takes practice and skill.
A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly.
Key loggers are used a lot in targeted attacks.
They are typically custom code or built into a backdoor.
A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]"), as do timestamp printf format strings.
Look for these strings in files.
Presence is not proof they are used.
Absence is not proof they are not used.
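A quick way to hunt for such strings in a sample is sketched below (my own illustration; the exact telltale strings vary by keylogger, and as noted above, presence or absence proves nothing).

```python
import sys

# Strings a keylogger needs in order to render non-printable keys,
# plus a timestamp printf format; illustrative, not exhaustive.
TELLTALES = [b"[Ctrl]", b"[RightShift]", b"%02d:%02d:%02d"]

def scan(path):
    with open(path, "rb") as f:
        data = f.read()
    return [t.decode() for t in TELLTALES if t in data]

# Usage: python scan_telltales.py sample1.bin sample2.bin ...
for path in sys.argv[1:]:
    hits = scan(path)
    if hits:
        print(f"{path}: found {', '.join(hits)}")
```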
Java exploits: you can parse a jar file with idxparser.py and decompile the Java files.
Java is typically used to target tech companies.
Backdoors are the main persistence mechanism (provided externally) for malware.
Malware also typically needs command and control.
[Photo: John Ashaman]
John Ashaman, Security Innovation
Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives.
He also tried using grep, but that fails to find anything even mildly complex.
First the tool generated an Abstract Syntax Tree (AST), with nodes created from method declarations and edges created from method use.
Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink.
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order.
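A sketch of that search in Python (the call graph, method names, and "scary" set here are hypothetical; Scat's real heuristics and data structures are richer):

```python
from collections import deque

# Hypothetical call graph: method -> methods it calls.
GRAPH = {
    "HandleRequest": ["ParseInput", "RenderPage"],
    "ParseInput": ["BuildQuery"],
    "BuildQuery": ["ExecuteSql"],      # tainted data heads for the sink
    "RenderPage": [],
    "ExecuteSql": [],
}
SCARY = {"ExecuteSql"}                 # heuristic "scary" sinks

def find_path(source):
    """Breadth-first search from an input source to the nearest scary node."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] in SCARY:
            return path                # a candidate vulnerability path
        for callee in GRAPH.get(path[-1], []):
            if callee not in seen:
                seen.add(callee)
                queue.append(path + [callee])
    return None

print(find_path("HandleRequest"))
# ['HandleRequest', 'ParseInput', 'BuildQuery', 'ExecuteSql']
```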
The tool, called "Scat" (Static Code Analysis Tool),
currently looks for C# vulnerabilities and some simple PHP.
Later, he plans to add more PHP, then JSP and Java.
For more information, see his posts on the Security Innovation blog and NRefactory on GitHub.
[Photo: Eric (XlogicX) Davisson]
Eric (XlogicX) Davisson
Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address.
But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP.
Problem reports found on stackoverflow.com, sans.org, and pastebin.com are usually not masked, but a few companies do care.
If only the IP is masked, the IP may be guessed from checksum (that is, it leaks data).
Other parts of packet may leak more data about the IP.
The TCP and IP checksums both cover the IP addresses (the TCP checksum via its pseudo-header), so one can get more bits of information out of using both checksums than just one.
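To see the leak concretely, here's a Python sketch using RFC 5737 documentation addresses (my own example, not Eric's demo): build a header, "mask" the last octet of the source IP while keeping the checksum, then brute-force the octet back.

```python
import struct

def ip_checksum(header: bytes) -> int:
    """RFC 791 header checksum: one's-complement sum of 16-bit words."""
    total = sum(struct.unpack(f">{len(header) // 2}H", header))
    while total > 0xFFFF:
        total = (total >> 16) + (total & 0xFFFF)   # fold the carries
    return ~total & 0xFFFF

# A plausible 20-byte IPv4 header, 192.0.2.77 -> 198.51.100.9.
hdr = bytearray(struct.pack(">BBHHHBBH4B4B",
                            0x45, 0, 40, 0x1234, 0, 64, 6, 0,
                            192, 0, 2, 77, 198, 51, 100, 9))
struct.pack_into(">H", hdr, 10, ip_checksum(bytes(hdr)))  # fill checksum

# Mask the last source-IP octet (byte 15) but leave the checksum, as a
# careless problem report would; then recover it by brute force.
observed = struct.unpack_from(">H", hdr, 10)[0]
trial = bytearray(hdr)
trial[10:12] = b"\x00\x00"        # checksum field is zeroed when computing
matches = []
for octet in range(256):
    trial[15] = octet
    if ip_checksum(bytes(trial)) == observed:
        matches.append(octet)
print(matches)                    # the "masked" octet leaks back out: [77]
```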
Also, one can usually determine the OS from the TTL field and ports in a packet header.
If we get hundreds of possible results (16 candidates for each unknown masked nibble), one can do other things to narrow the results, such as looking at the packet contents for domain or geo information.
With hundreds of results, one can import them in CSV format into a spreadsheet and correlate with geo data to see where each possibility is located.
Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo data and university of the sender.
The point is that if you're going to mask a packet, do it right. Eric wouldn't usually bother, but if you mask at all, do it correctly, so as not to create a false impression of security.
[Photo: Sergey Bratus]
Sergey Bratus, Dartmouth College
(and Julian Bangert and Rebecca Shapiro, not present)
"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper.
"You can't trust code that you did not totally create yourself."
There are invisible links in the chain of trust, such as "well-installed microcode bugs," bugs in the compiler, and other planted bugs.
Thompson showed how a compiler can introduce and propagate bugs in unmodified source.
But suppose there are no bugs and you trust the author; can you trust the code?
Hell No!
There are too many factors; it's Babylonian in nature.
Why not? Well, any input is a program.
ELF ABI (the UNIX/Linux executable file format) case study.
Problems can arise from these steps (without planting bugs):
The problem is you can't really automatically analyze code (it's the "halting problem," which is undecidable).
Only solution is to freeze code and sign it.
But you can't freeze everything!
Can't freeze ASLR or loading—must have tables and metadata.
Any sufficiently complex input data is the same as VM byte code.
Next Sergey talked about "Parser Differentials".
Having one input format but two parsers creates confusion and opportunity for exploitation.
For example, CSRs are parsed during creation by cert requestor and again by another parser at the CA.
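Here's a tiny Python illustration of a parser differential (my example, not from the talk): two parsers accept the same input but disagree on a duplicated field, so the checker and the consumer see different documents.

```python
RAW = "cn=alice&cn=root"    # one input, one duplicated field

def parse_first_wins(qs):
    out = {}
    for pair in qs.split("&"):
        key, value = pair.split("=")
        out.setdefault(key, value)    # keeps the first occurrence
    return out

def parse_last_wins(qs):
    out = {}
    for pair in qs.split("&"):
        key, value = pair.split("=")
        out[key] = value              # keeps the last occurrence
    return out

# If the validator uses one parser and the signer uses the other, a
# request checked as "alice" quietly becomes a cert for "root".
print(parse_first_wins(RAW)["cn"])    # alice
print(parse_last_wins(RAW)["cn"])     # root
```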
Conclusions
Further information: see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.