It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference!
Toorcon is an annual conference for people interested in computer security.
This includes the whole range of
hackers, computer hobbyists, professionals, security consultants, press,
law enforcement, prosecutors, FBI, etc.
We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003).
This year's "con" was held at the Westin on Broadway in downtown San Diego, California.
The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers.
Also, I only review below some of the talks that I attended and that interested me.
Aditya talked about Android smartphone malware.
There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities.
Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Offset Table (GOT) exploit).
Android protection includes sandboxing, security scanner, app permissions, and screened Android app market.
The Android permission checker provides fine-grained resource control and policy enforcement.
Android also includes a static-analysis app checker (Bouncer) and a vulnerability checker.
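As an aside (my addition, not from the talk), here's a minimal Python sketch of how you might list the permissions an APK requests, assuming the Android SDK's aapt tool is on your PATH and "suspect.apk" is a hypothetical app you want to inspect:

import subprocess

def requested_permissions(apk_path):
    # "aapt dump permissions" prints one line per declared permission.
    out = subprocess.run(["aapt", "dump", "permissions", apk_path],
                         capture_output=True, text=True, check=True).stdout
    perms = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("uses-permission"):
            # Older aapt prints "uses-permission: NAME";
            # newer versions print "uses-permission: name='NAME'".
            perms.append(line.split("'")[1] if "'" in line
                         else line.split(":", 1)[1].strip())
    return perms

print(requested_permissions("suspect.apk"))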
What security problems does Android have?
Aditya classified Android Malware types as:
What are the threats from Android malware?
leak info (contacts),
corporate network attacks,
malware "Hackivism" (the promotion of social causes. For example, promiting specific leaders of the Tunisian or Iranian revolutions.
Android malware is frequently "masqueraded".
That is, the malware is repackaged inside a legitimate app.
To avoid detection, the hidden malware is not unwrapped until runtime.
The malware payload can be hidden in, for example, PNG files.
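To make the PNG trick concrete, here's a purely illustrative Python sketch (my own, not from the talk) of hiding a payload after a PNG's IEND chunk and pulling it back out; the file names are hypothetical:

IEND = b"IEND"

def append_payload(png_path, payload_bytes, out_path):
    # A PNG ends with its IEND chunk; decoders ignore trailing bytes,
    # so anything appended after it still displays as a normal image.
    with open(png_path, "rb") as f:
        img = f.read()
    with open(out_path, "wb") as f:
        f.write(img + payload_bytes)

def extract_payload(path):
    with open(path, "rb") as f:
        data = f.read()
    # Skip past "IEND" plus its 4-byte CRC (simplified: assumes the
    # marker bytes don't appear inside the image data).
    return data[data.index(IEND) + len(IEND) + 4:]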
Less common are Android bootkits—there aren't many around.
They hijack the Android init framework—altering system programs and daemons—then delete themselves.
For example, the DKF Bootkit (China).
Android App Problems:
"ELF" is an executable file format used in linking and loading executables
(on UNIX/Linux-class machines).
"Weird machine" uses undocumented computation sources
(I think of them as unintended virtual machines).
Some examples of "weird machines" are those that:
return to a weird location,
do SQL injection,
corrupt the heap.
Bx then talked about using ELF metadata as (an unintended) "weird machine".
Some ELF background:
For ELF, the "weird machine" is found and exploited in the loader.
ELF can be crafted to execute viruses by
tricking the runtime into executing interpreted "code" in the ELF symbol table.
One can inject parasitic "code" without modifying the actual ELF code portions.
Think of the ELF symbol table as an "assembly language" interpreter.
It has these elements:
The ELF weird machine
exploits the loader by relocating relocation table entries.
The loader will go on forever until told to stop.
It stores state on the stack at the "end" and
uses IFUNC table entries (containing function pointer addresses).
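If you want to poke at these tables yourself, here's a small Python sketch (my addition, not bx's tooling) that dumps the dynamic symbol and relocation entries the loader interprets, assuming the pyelftools package is installed and using /bin/ping only as an example target:

from elftools.elf.elffile import ELFFile   # pip install pyelftools

with open("/bin/ping", "rb") as f:
    elf = ELFFile(f)

    dynsym = elf.get_section_by_name(".dynsym")
    if dynsym:
        for sym in dynsym.iter_symbols():
            print("symbol:", sym.name, hex(sym["st_value"]))

    rela = elf.get_section_by_name(".rela.dyn")
    if rela:
        # Each relocation entry is effectively one "instruction"
        # the loader will interpret at load time.
        for reloc in rela.iter_relocations():
            print("reloc: offset=%#x type=%d sym=%d" % (
                reloc["r_offset"], reloc["r_info_type"], reloc["r_info_sym"]))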
The ELF weird machine, called "Brainfu*k" (BF) has:
Bx showed example BF source code that implemented a Turing machine printing "hello, world".
More interesting was the next demo, where bx modified ping.
Ping runs suid as root, but quickly drops privilege.
BF modified the loader to disable the library call that drops privilege, so ping stayed root.
Then BF modified ping's -t argument handling to execute the -t filename as root.
It's best to show what this modified ping does with an example:
$ ping localhost -t backdoor.sh # executes backdoor
The modified executable grew from 285948 bytes to 290209 bytes.
A BF tool compiles an "executable" by modifying the symbol table of an existing ELF executable.
The tool modifies the .dynsym and .rela.dyn tables, but not code or data.
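That suggests a rough detection idea of my own (not presented in the talk): if you have a known-good copy of the binary, code that is byte-identical while .dynsym and .rela.dyn grew is consistent with this kind of parasite. A sketch with pyelftools, using hypothetical file names:

import hashlib
from elftools.elf.elffile import ELFFile   # pip install pyelftools

def summary(path):
    with open(path, "rb") as f:
        elf = ELFFile(f)
        return {
            "text_sha256": hashlib.sha256(
                elf.get_section_by_name(".text").data()).hexdigest(),
            "dynsym_entries": elf.get_section_by_name(".dynsym").num_symbols(),
            "rela_dyn_bytes": elf.get_section_by_name(".rela.dyn")["sh_size"],
        }

good, suspect = summary("ping.orig"), summary("ping")
if (good["text_sha256"] == suspect["text_sha256"]
        and suspect["dynsym_entries"] > good["dynsym_entries"]):
    print("code unchanged, but the symbol table grew -- suspicious")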
Valkyrie talked about mobile handset privacy.
Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers.
Franken asked the FCC to find out what obligations carriers think they have to protect privacy.
The carriers' response was that they are doing just fine with self-regulation—no worries!
Carriers need to collect data, such as missed calls, to maintain network quality.
But carriers also sell data for marketing.
The data sold is not individually identifiable and is aggregated.
But Verizon recommends, as a workaround to aggregation, "recollating" the data with other databases to identify customers indirectly.
The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007.
Also, the carriers say mobile phone privacy is a FTC responsibility (not FCC).
FTC is trying to improve mobile app privacy,
but FTC has no authority over carrier / customer relationships.
As a side note, Apple iPhones are unique in that carriers have extra control over them that they don't have with other smartphones. As a result, iPhones may be more regulated.
Who are the consumer advocates?
Everyone knows EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.
What to do?
Dan talked about hacking measured UEFI boot.
First some terms:
Microsoft is pushing TPM (required for Windows 8), but Google is not.
Intel's TianoCore is the only open-source UEFI implementation.
Dan has a Measured Boot Tool, with a demo where you can also view TPM data.
TPM support is already present on enterprise-class machines.
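If you want to peek at TPM data yourself without Dan's tool, here's a small Python sketch that reads PCR values from Linux sysfs; the exact paths depend on your kernel and TPM version, so treat them as assumptions:

from pathlib import Path

def read_pcrs():
    legacy = Path("/sys/class/tpm/tpm0/pcrs")        # TPM 1.2-style interface
    bank = Path("/sys/class/tpm/tpm0/pcr-sha256")    # newer per-bank interface
    if legacy.exists():
        return legacy.read_text()
    if bank.is_dir():
        return "\n".join(
            "PCR-%s: %s" % (p.name, p.read_text().strip())
            for p in sorted(bank.iterdir(), key=lambda p: int(p.name)))
    return "no TPM sysfs interface found"

print(read_pcrs())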
UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
Boris talked about problems typical with current security audits.
"IT Security" is an oxymoron—IT exists to enable buiness, uptime, utilization, reporting, but don't care about security—IT has conflict of interest.
There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)).
Regulations don't make you secure.
The cloud is not secure (because of shared data and admin access).
Defense and pen testing is not sexy.
Auditors are not the solution (security is not a checklist)—what's needed is experience, adaptability, and soft skills.
Step 1: First, Google the company and learn it end-to-end before you start.
Get to know the management team (not IT team), meet as many people as you can.
Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g., SLE = AV × EF; see the worked example after this list).
Learn the different business units and their legal/regulatory obligations,
learn the business and where the money is made,
verify the company is protected from script kiddies (easy),
learn what information is sensitive (IP, internal use only),
start with low-hanging fruit (customer service reps and social engineering).
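For reference, the textbook quantitative formulas Boris is dismissing work like this: with an asset value (AV) of $100,000 and an exposure factor (EF) of 25%, the single loss expectancy is SLE = AV × EF = $25,000; with an annualized rate of occurrence (ARO) of 2, the annualized loss expectancy is ALE = SLE × ARO = $50,000. The arithmetic is trivial; the problem (Boris's point) is that the inputs are guesses.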
Step 2: Policies. Keep policies short and relevant.
Generic SANS "security" boilerplate policies don't make sense and are not followed.
Focus on acceptable use, data usage, communications, physical security.
Step 3: Implementation. Keep it simple, stupid.
Open source, although useful, is not free (implementation cost).
The difference between hackers and investigative journalists:
For hackers, the motivation varies, but the method is the same, with technological specialties.
For investigative journalists, it's about one thing—The Story, and they need broad info-gathering skills.
J-School in 60 Seconds:
Generic formula: a person or issue of public interest, new info, or a new angle.
Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.
Media awareness of hackers and trends:
journalists are becoming extremely aware of hackers, with
congressional debates (privacy, data breaches),
demand for data-mining journalists,
use of coding and web development by journalists,
and journalists busted for hacking (Murdoch).
Info gathering by investigative journalists includes
Anonymity is important to whistleblowers.
They want no digital footprint left behind (e.g., email, web log).
They don't trust encryption, want to feel safe and secure.
Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.
Anna talked about how accessibility and security are related.
This means accessibility of digital content (not real-world accessibility);
for our purposes, it mostly refers to blind users and screen readers.
Accessibility is about parsing documents, as are many security issues.
"Rich" executable content causes accessibility to fail, and often causes security to fail.
For example MS Word has executable format—it's not a document exchange format—more dangerous than PDF or HTML.
Accessibility is often the first and maybe only sanity check with parsing.
Screen readers have no choice but to parse it, because someone may want to read what you write.
Google Chrome doesn't handle PDF correctly, causing several security bugs.
PDF has an accessibility checker and PDF tagging, to help with accessibility.
But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None checks executable content at all.
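To illustrate the gap, here's a deliberately crude Python sketch (mine, not Anna's) that checks for the two things she says checkers skip: tagging and executable content. "report.pdf" is a hypothetical file, and real PDFs can compress or obfuscate these tokens, so this is only a first-pass heuristic:

with open("report.pdf", "rb") as f:
    data = f.read()

# Tagged PDF (needed by screen readers) advertises itself via /MarkInfo.
print("claims to be tagged:     ", b"/MarkInfo" in data and b"/Marked" in data)
# Executable content: embedded JavaScript or an auto-run action.
print("possible JavaScript:     ", b"/JavaScript" in data or b"/JS" in data)
print("possible auto-run action:", b"/OpenAction" in data or b"/AA" in data)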
The "Halting Problem" is: can one decide whether a program will ever stop?
The answer, in general, is no (Rice's theorem).
The same holds true for accessibility checkers.
Language-theoretic security says complicated data formats are hard to parse safely; in general, fully validating them is undecidable (the Halting Problem again).
W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust"
Not much help though, except for "Robust", but here are some gems:
A bad example is Google News—hidden scrollbars, guessing user input.
Adam talked about PCI compliance for retail sales.
Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POS systems in 850 restaurants.
Often applying some patches makes no sense (like fixing a browser vulnerability on a server).
"Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy and it's simple (red or green results—fix reds).
Scanners give a false sense of security.
In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, and misconfiguration (firewall ports open).
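A quick check for that last category can be as simple as seeing which commonly forgotten admin ports answer; a minimal Python sketch (mine, not Adam's), with a hypothetical host name:

import socket

COMMON_PORTS = {21: "ftp", 23: "telnet", 1433: "mssql",
                3306: "mysql", 3389: "rdp", 5900: "vnc"}

def open_admin_ports(host, timeout=1.0):
    # Try a plain TCP connect to each port and report which ones answer.
    found = []
    for port, name in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, name))
        except OSError:
            pass
    return found

print(open_admin_ports("pos-terminal.example.com"))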
Adam says good recommendations come from NIST 800-40.
Instead use sane patching and focus on what's really important. From NIST 800-40:
"McAfee Secure Trustmark" is a website seal marketed by McAfee.
A website gets this badge if they pass their remote scanning.
The problem is that removal of a trustmark acts as a flag that you're vulnerable.
It's easy to spot a status change by watching the McAfee list on their website or via Google.
"Secure TrustGuard" is similar to McAfee.
Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines.
If their certification image changes to a 1x1 pixel image, then they are no longer certified.
Their scripts take deltas of scans to see what changed daily.
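Here's the same idea as a minimal Python sketch (theirs were Perl scripts): fetch the trustmark image and flag it when it shrinks to the 1x1 placeholder. The badge URL is hypothetical and the sketch needs the Pillow package:

import io
import urllib.request
from PIL import Image   # pip install pillow

def trustmark_revoked(badge_url):
    with urllib.request.urlopen(badge_url, timeout=10) as resp:
        badge = Image.open(io.BytesIO(resp.read()))
    # A 1x1 placeholder means the site is no longer certified.
    return badge.size == (1, 1)

print(trustmark_revoked("https://shop.example.com/images/mcafee-secure.png"))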
The bottom line is that a change in trustmark status is a flag for hackers to attack your site.
The entire idea of seals is silly—you're raising a flag announcing whether you're vulnerable.