The views expressed on this blog do not necessarily reflect the views of Oracle.
BSides PDX is Oregon's local computer security conference, lasting two days.
About half the speakers were local and half from out of town.
The talks were all 20 minutes long; I summarize some of them below.
The conference also had CTF and Hacker Jeopardy contests.
I didn't summarize all the talks; it was hard to keep up given the fast pace of the presentations.
See my previous blog for the
keynote speech by Oregon's Senator Ron Wyden.
Joseph FitzPatrick, BSides PDX organizer, and
Esteban Gutierrez, BSides PDX founder in 2011.
Andrew spoke of hardening the cloud, specifically AWS.
AWS security is not the default; it's not baked in.
There are tools to audit and to check the configuration, but it needs more.
Andrew and other volunteers built these tools.
They built a CLI, AWS_IR, for incident response.
It's built in the spirit of a quote from Dan Kaminsky's Black Hat talk this year: "defense without offense is just compliance."
It mainly collects incident data and disables compromised keys.
Andrew said the most common compromises are key compromises.
This is a serious threat: businesses have gone under from it, losing all their data.
To mitigate, use 2-factor authentication.
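A minimal sketch of the "disable the compromised key" step that the talk said AWS_IR automates. This is not AWS_IR's actual code; the function takes any client exposing boto3's IAM `update_access_key` call, so a stub can stand in for real AWS credentials:

```python
# Hypothetical sketch (not AWS_IR's implementation): disable a compromised
# IAM access key. `iam` is any object with boto3's
# iam_client.update_access_key(UserName=..., AccessKeyId=..., Status=...)
# signature, so the function can be exercised without an AWS account.

def disable_access_key(iam, user_name, access_key_id):
    """Mark an access key Inactive so it can no longer authenticate."""
    iam.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",  # disabled, not deleted: evidence is preserved
    )
    return {"user": user_name, "key": access_key_id, "status": "Inactive"}
```

With real credentials, `iam` would be `boto3.client("iam")`; marking the key `Inactive` rather than deleting it keeps forensic evidence intact, in keeping with the incident-response focus.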
To support writing security tools, a cloud environment must expose a rich set of APIs.
Joel Ferrier spoke about another of their tools,
Margarita Shotgun, which automates memory capture after a compromise.
After a compromise, don't pull the plug—capture memory first.
Linux has a tool, LiME, to do this using a network socket or local file system.
It uses a kernel module to capture memory and stream it for network upload over ssh.
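LiME is loaded with `insmod`, passing the capture destination and format as module parameters. A small sketch of building that command (the module filename and port are illustrative):

```python
# Sketch: build the insmod invocation LiME expects. LiME takes its
# destination and format as kernel-module parameters, e.g.
#   insmod lime.ko "path=tcp:4444 format=lime"
# The module path and port number below are just examples.

def lime_insmod_command(module_path, dest, fmt="lime"):
    """dest is a local file path (e.g. "/mnt/usb/mem.lime") or
    "tcp:<port>" to stream the capture over the network."""
    return ["insmod", module_path, f"path={dest} format={fmt}"]
```

For the network case, an analyst machine would then connect to the port (for example through an ssh tunnel, as the talk described) and save the stream to disk.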
Jem Berkes, of Galois, spoke on an experimental defense against DDOS attacks with a community of peers,
This is a research project, where the peers are ISPs and large networks (called Autonomous Systems (AS)).
DDOS attacks are growing dramatically: from 2012 to 2015 they grew in size from 60 to 500 Gbps. This is significant, since most ISPs have a capacity of 200 Gbps and Internet Exchange Points between ISPs have a capacity of 500 Gbps.
A cheap botnet for hire can flood with under 200 Gbps.
Larger DDOS attacks, over 300 Gbps, are possible with amplification and reflection by spoofing IP addresses. Half of DDOS attacks use UDP (mostly DNS and NTP).
To mitigate, some propose more bandwidth, but that doesn't really solve the problem, only a few ISPs have the capacity, and it can be costly.
Another way to mitigate, a "traffic scrubber", is also expensive and works only when the outside network is not saturated.
The solution proposed with this project is to share traffic information between networks in a cooperative way. The idea is to mitigate at the sources of an attack.
When an attack occurs, the victim communicates over a management network or, if that's down, a cell phone tether. The victim sends a digitally-signed "I'm being attacked" message that only the attack's destination could have issued. That way, the source networks can filter out the packets.
No public announcement is needed for this solution and no central authority is required.
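A toy sketch of the authenticated attack report described above. The real design calls for digital signatures; since Python's standard library has no public-key signing, an HMAC over a pre-shared key stands in here, and the message fields (ASN, prefix list) are my own illustrative choices:

```python
import hashlib
import hmac
import json

# Sketch of the cooperative-mitigation message, with an HMAC standing in
# for the digital signature the project describes. Field names and the
# pre-shared-key scheme are assumptions for illustration only.

def make_attack_report(key, victim_asn, attacker_prefixes):
    """Build an authenticated 'I'm being attacked' message asking
    source networks to filter the listed prefixes."""
    body = json.dumps({
        "victim_asn": victim_asn,
        "filter_prefixes": attacker_prefixes,
    }, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, tag

def verify_attack_report(key, body, tag):
    """A source network checks the tag before installing filters."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The point of the authentication step is exactly what the talk stressed: a forged "I'm being attacked" message would itself be a denial-of-service vector, so source networks must be able to verify the request before filtering.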
There were three crypto presentations right after lunch, but I only captured one (sorry, hunger and speedy presentations got in the way!). Josh Datko gave an overview of hardware crypto and Joey Dodds spoke on high-assurance crypto.
Vim (Yahoo.com email handle vimiam) gave an overview of the Trusted Platform Module (TPM) security chip.
TPM is represented as the "Swiss Army Knife" of computer security. It provides cryptographic functions, a random number generator, secure key store, and measured boot.
TPM has been criticized for providing a unique ID that can be used to track people, implement DRM, invade privacy, and spy on users. Although that's possible to kludge, it's not a TPM function, and unique IDs are available elsewhere on computers anyway. Even so, the GRUB boot software refuses to support TPM, Apple used TPM but phased it out, and Germany bans TPM's use.
Measured Boot takes checksums of firmware during booting, as opposed to Microsoft-style Secure Boot, which verifies public-key digital signatures on the software. The problem with Measured Boot is that there's no enforcement mechanism; booting proceeds in any case.
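The measuring itself is a simple hash chain: each boot component is hashed and "extended" into a Platform Configuration Register (PCR), so the final value depends on every component and their order. A sketch (real TPM 1.2 PCRs use SHA-1 and measure actual firmware images; SHA-256 and toy strings are used here for illustration):

```python
import hashlib

def pcr_extend(pcr, measurement, algo="sha256"):
    """TPM-style measure: new PCR = H(old PCR || H(component)).

    Note that nothing is blocked at boot time; the chain is only
    checked afterward, e.g. when unsealing a key against expected
    PCR values.
    """
    digest = hashlib.new(algo, measurement).digest()
    return hashlib.new(algo, pcr + digest).digest()

# Measure a toy boot chain. PCRs start out at all zeroes.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)
```

Because the register can only be extended, never set, malware that ran during boot cannot rewrite the PCR to hide itself; any change to a component, or to the order of components, yields a different final value.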
TPM 1.2 implements several crypto algorithms, although most are obsolete (MD5, SHA-1, RSA-512). Its crypto is also very slow, on the order of seconds, and TPM access is single-threaded. TPM 2.0 implements modern crypto algorithms, but it is still slow, and neither version is widely adopted.
Another TPM feature is a keystore. But the TPM is not a true Hardware Security Module (HSM); it provides only a wrapping key and stores the wrapped keys on the hard disk, so the keys are inaccessible if either the disk or the TPM is lost or breaks.
Among the Snowden papers was an abstract of a presentation given at the CIA's Trusted Computing Base Jamboree in 2010. Basically, they were able to extract TPM keys through a power-analysis side-channel attack, and with them access Microsoft BitLocker-encrypted data.
TPM 2.0 is supposed to improve on TPM 1.2, but it is a classic example of the "second-system effect" named by Frederick Brooks in The Mythical Man-Month: many features were added, and the TPM spec grew from 3 to 4 volumes, with over 1,000 pages.
TPM is a classic example of mis-design by committee. The TPM standard is produced by the Trusted Computing Group (TCG), which is based in Portland and sponsored by Microsoft, Intel, and HP.
The only full stack TPM 2.0 implementation is from Microsoft, although Linux has a bare-bones TPM 2.0 driver.
Recommendations: Don't use TPM (even 2.0). Don't use hardware crypto, except when embedded in instructions (a la Intel AES-NI). Don't implement your own crypto; use FIPS 140-2 certified software.
Use Secure Boot over TPM Measured Boot.
During the panel discussion, someone asked for a FIPS alternative, but I didn't catch it.
Virginia Robbins spoke about fileless malware.
Traditional malware installs itself on the file system.
Fileless malware doesn't bother; it's loaded into memory over the network.
The malware arrives either through email or clicking on a web link.
The lack of persistence is less of a limitation than it used to be, because the malware doesn't need to live long to accomplish its goal. Examples of the damage include drive encryption by ransomware or a banking Trojan stealing credentials.
Gary Smith, of the Pacific NW National Labs, Richland, Washington, spoke about blacklisting.
Their facility is constantly under attack, so they need to blacklist.
But how? Write a script to sort the attackers' IP addresses and remove duplicates.
That can still leave 60,000 rules, which doesn't scale with iptables.
The rules are applied in the kernel during a stack interrupt and there's not a lot of time to do this.
Fortunately, there's ipset, an iptables extension that hashes the IP addresses for quick lookup.
ipset is quick, and its CPU cost is immeasurable.
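A sketch of the "sort and dedupe, then load" script described above, emitting input for `ipset restore` so thousands of entries load in one pass; a single iptables rule (e.g. `iptables -I INPUT -m set --match-set blacklist src -j DROP`) then matches the whole set. The set name is illustrative:

```python
import ipaddress

# Sketch: turn a raw (possibly duplicated) list of attacker IPs into
# input for `ipset restore`. "blacklist" and the hash:ip set type are
# example choices; -exist makes the load idempotent on re-runs.

def ipset_restore_script(set_name, addresses):
    lines = [f"create {set_name} hash:ip -exist"]
    for ip in sorted(set(addresses), key=ipaddress.ip_address):
        lines.append(f"add {set_name} {ip} -exist")
    return "\n".join(lines)
```

Piping the output through `ipset restore` replaces tens of thousands of sequential iptables rules with one O(1) hash lookup per packet, which is what keeps the kernel-side cost immeasurable.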
So who's dropped?
Mostly China-based IP addresses, which account for over 3/4 of attacks; most of those come from Hebei and the People's Revolutionary Cyber Army.
Next comes the US and some European countries (NL, DE, UK).
The main attack port is 22 (ssh), targeted in 3/4 of attacks; next comes telnet
(yes, people still use telnet).
Xiaofei (Rex) Guo and Xueyan Wang of Intel discussed NumChecker, which detects rootkits using hardware performance counters.
Linux Kernel Rootkits subvert the kernel directly and are difficult to detect.
Most rootkits hijack control flow, so NumChecker uses hardware performance counters to detect unusual control flow.
The counters were built for performance tuning.
They run at the VM level, so they can't be tampered with from the OS.
The first phase is to log the counters, which could be triggered on the host or remotely.
A test program runs, and the counter data is gathered and saved to a log.
The second phase, the offline phase, performs statistical analysis on selected frequent events. These events are stable, "High Touch" syscalls popularly used by rootkits.
So for a test run, NumChecker calculates a deviation and measures it against a "deviation threshold", currently set to 5%. If the deviation is greater, it's marked as a Rootkit.
In practice, there are no false positives: the deviation caused by rootkits is so great that the measurements for legitimate kernel objects and rootkits don't overlap.
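The thresholding step can be sketched as follows. NumChecker's actual statistical analysis is richer than this; the event names and the simple per-event relative deviation are illustrative, with only the 5% threshold taken from the talk:

```python
# Sketch of NumChecker-style deviation checking (not its actual code).
# baseline and observed map a performance-counter event name to the
# count measured during one run of the test program.

def flag_rootkit(baseline, observed, threshold=0.05):
    """Flag the run when any event's count deviates from the clean
    baseline by more than the deviation threshold (5% in the talk)."""
    for event, expected in baseline.items():
        if expected == 0:
            continue  # avoid dividing by zero on unused events
        deviation = abs(observed.get(event, 0) - expected) / expected
        if deviation > threshold:
            return True
    return False
```

A hijacked syscall executes extra (or different) instructions, so events like retired branches or instructions shift well past 5%, which is why the legitimate and rootkit distributions don't overlap in practice.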
The performance impact of NumChecker is minimal. If it's invoked frequently, every 5-10 seconds, the impact is 1-12%.
Michael Leibowitz discussed Horsepill, a new kind of Linux Rootkit.
Historically, rootkits have been blocked by kernel module signing, although
unsigned drivers are still a problem.
With Secure Boot, the boot loader, kernel, kernel modules, and (mostly) rootfs are protected.
This leaves the ramdisk as largely unprotected.
The ramdisk is a cpio archive with minimal utilities and init scripts.
What Horsepill does is add a backdoor here.
To remain undetected, it remounts /proc, creates fake kernel threads, hides the real threads with thread groups, and then cleans up its initrd modifications.
Horsepill also hides its network traffic.
Falcon Darkstar Momot
discussed Language Security (LangSec) in pentesting.
LangSec has nothing to do with which programming language is used or which is best.
LangSec is the assurance that a language does what it should do.
LangSec needs to be designed in—not retrofitted.
Multiple programs need to treat input the same way to avoid vulnerabilities.
For example, with Dan Kaminsky's cert bug, some programs ignore an embedded '\0' in the DN, and some do not.
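The safe response to that ambiguity is to reject the input outright rather than let two parsers interpret it differently. A sketch (the function and field names are my own, not from any particular TLS library):

```python
# Hypothetical sketch: reject, rather than silently truncate, a
# distinguished-name field with an embedded NUL. A C-string-based
# client stops at the NUL and sees "paypal.com", while the CA that
# signed the cert saw the full "paypal.com\0.attacker.com":
# two parsers, two interpretations of the same bytes.

def common_name(dn: bytes) -> str:
    if b"\x00" in dn:
        raise ValueError("embedded NUL in distinguished name")
    return dn.decode("ascii")
```

This is the LangSec principle in miniature: if an input isn't a valid word in the expected language, fail closed before any component gets to "interpret" it.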
Parser bugs occur when input is not validated correctly or not validated at all.
Pentesting should check that programs validate preconditions, and also that what's validated is what's used (sometimes a format carries duplicate information that may be inconsistent).
XML and JSON are presented as language saviors because their input is well formed.
But only a subset of XML or JSON is used for a given language,
and the resulting specification can be more complex than base XML or JSON.
Parser-generators (such as yacc or lex) help with parser safety.
They force your grammar to be context free.
Basically, you want the input language specified in BNF.
That allows discrete validation.
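A minimal illustration of validating against a BNF-specified input language, here a toy grammar of my own choosing (`list ::= int ("," int)*`, `int ::= [0-9]+`), written as a hand-rolled parser in the recursive-descent style a generator would produce:

```python
import re

# Toy grammar (BNF):  list ::= int ("," int)*    int ::= [0-9]+
# The whole input must match the grammar before any value is used;
# anything else is rejected with an error at the offending offset.
_INT = re.compile(r"[0-9]+")

def parse_int_list(text):
    pos, values = 0, []
    while True:
        m = _INT.match(text, pos)
        if not m:
            raise ValueError(f"expected integer at offset {pos}")
        values.append(int(m.group()))
        pos = m.end()
        if pos == len(text):
            return values  # entire input consumed: valid
        if text[pos] != ",":
            raise ValueError(f"expected ',' at offset {pos}")
        pos += 1
```

The key property is that the parser consumes the entire input or rejects it; there is no path where malformed trailing bytes survive into later processing.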
Ken talked about Machine Learning.
In the world of Security Tools there's a lot of BS—products claim to use Machine Learning, but really just use statistics or basic correlation.
Ken says that Security Data is not just Big Data, but is morbidly obese big data.
Threats have a half-life and expire, so instead of hoarding big data, one must treat it as a stream and think in terms of throughput.
He mentioned the Lambda Architecture, which handles big data as a flow, instead of hoarding it and keeping it for long periods at high cost.
Machine Learning comes in two flavors, supervised and unsupervised. Supervised learning requires hands-on interaction (labeled examples) to use, while unsupervised learning is automated (for example, learning from user behavior).
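The unsupervised flavor can be illustrated with a tiny behavioral baseline; this is my own toy example (a z-score check on login counts), not anything from the talk:

```python
import statistics

# Toy unsupervised example: no labeled attacks, just learn what is
# "normal" from the user's own history and flag large deviations.
# (Supervised learning would instead fit a model to examples a human
# has already labeled malicious or benign.)

def anomalous_logins(daily_counts, new_count, z_cutoff=3.0):
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    if stdev == 0:
        return new_count != mean  # flat history: any change is unusual
    return abs(new_count - mean) / stdev > z_cutoff
```

This also shows why so many products can claim "machine learning": the line between learning a behavioral baseline and plain statistics, as Ken noted, is thin.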
Lee Fisher (firmwaresecurity.com)
reviewed UEFI firmware security tools.
Morgan Miller (morganmillerux.com)
discussed Security and Usability.
These are often presented as opposing goals, but are actually secret BFFs. Good usability supports good security and good security supports good usability.
What is usability?
Kevin Haley of Symantec
reviewed historic scams, with what they can teach about stopping social engineering.
Everyone knows the Nigerian 419 scam. It has its origin in the Spanish Prisoner scam of 1588. Sir Francis Drake was an English privateer who fought the Spanish Armada.
After the Armada's historic defeat in 1588, scams appeared featuring a Spanish prisoner claiming to know where Drake hid the gold.
The same idea morphed into the Nigerian 419 Scam after several English-taught Nigerian civil servants lost their jobs.
The Nigerian 419 Scam was finally stopped with Internet memes.
Then there's the Brooklyn Bridge scam, which first appeared in 1883. Reed Waddell sold the Brooklyn Bridge several times over to immigrants who didn't know the world they were entering. They were used to a Europe of private roads and tolls, and arrived with a little money to start a business. So many were taken in, buying the bridge in order to collect tolls.
What stopped it? There were no Internet memes back then; the scam was finally stopped with a pamphlet for immigrants.
So basically, the way to stop modern scams such as social engineering is education. We shouldn't say the victims are stupid; they are operating in an unfamiliar environment (the Internet). We need to give people end-user education.
Isaac Robinson talked about end-user education of phishing scams.
89% of phishing scams come from crime syndicates and 9% are state-sponsored.
Phishing is dangerous because it allows for firewall bypass and steals passwords.
30% of phishing email is opened.
Basically, reading assignments, "click-through" presentations, and online coursework do not work.
Instead, what's needed is teaching users to spot non-obvious phishing attempts.
One effective method is mock phishing training, if done on a regular basis (he recommends quarterly).
What he does is use Phishing Frenzy, open-source software built to phish users in penetration tests; it can be modified for use in training.
Modify it so it doesn't capture credentials, and remove the Phishing Frenzy templates that carry real exploits.
When a user clicks on the mock phishing email, the link's directed to a warning/instructional page about phishing.
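A hypothetical sketch of that defanging step, not Phishing Frenzy's actual code: rewrite every link in a mock-phishing template so a click lands on an internal training page. The training URL is made up:

```python
import re

# Hypothetical sketch (not Phishing Frenzy's implementation): point every
# link in a mock-phishing email template at an internal warning page.
# The intranet URL below is an invented example.
TRAINING_URL = "https://intranet.example.com/phish-training"

def defang_template(html):
    """Replace all href targets so clicks go to the training page."""
    return re.sub(r'href="[^"]*"', f'href="{TRAINING_URL}"', html)
```

The email still looks like a real phish, but the only payload is the instructional page, which is what makes regular mock-phishing campaigns safe to run internally.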
Of course, mock phishing needs to be done with management and IT authorization. The phishing email must be blocked at corporate firewalls to prevent escape.
After the training, the phishing click-through rate fell to single digits.
This track had three presenters, all with excellent presentations.
They went too fast for me to take detailed notes, however.
Jeff Costlow (@costlow_sec)
spoke about securing the Software Engineering Process.
Secure design must be baked-in in advance.
First consider your threat models, then how to defend against them.
For 3rd-party code, evaluate prior vulnerability history and maturity.
Have an owner for all 3rd-party code you use.
Adopt a vulnerability policy: how to react based on severity level (such as CVSS score).
Include security tools in your code workflow—static analyzers, ASLR, NX stack, and root-cause analysis.
Testing should be proactive and weigh more heavily on areas that have had the most vulnerabilities.
Use fuzz testing.
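A toy sketch of what fuzz testing means in practice: throw random bytes at a parser and save any input that raises something other than its documented error. Real fuzzers (AFL, libFuzzer) are far smarter about input generation, and the target parser here is an invented example with a planted bug:

```python
import random

def fuzz(parse, runs=500, max_len=8, seed=1234):
    """Feed random byte strings to `parse`; collect inputs that raise
    anything other than the documented ValueError, for later triage."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            parse(data)
        except ValueError:
            pass  # documented rejection of bad input: fine
        except Exception:
            failures.append(data)  # crash-equivalent: keep the input
    return failures

# Example target with a planted bug: indexes data[0] with no length
# check, so empty input raises IndexError instead of ValueError.
def parse_header(data: bytes):
    if data[0] != 0x7F:
        raise ValueError("bad magic")
    return data[1:]
```

The saved failing inputs feed directly into the root-cause analysis mentioned above, and weighting fuzzing time toward historically buggy areas follows the testing advice earlier in this list.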
Finally, communicate security bugs to customers.