Wednesday May 19, 2010

The wireless traffic of MIT students

Wireless traffic is both interesting and delightfully accessible thanks to the broadcast nature of 802.11. I have spent many a lazy weekend afternoon watching my laptop, the TiVo, and the router chatting away in a Wireshark window.

As fun as the wireless traffic in one's house may be, there's something to be said for being able to observe a much larger ecosystem - one with more people with a more diverse set of operating systems, browsers, and intentions as they work on their wireless-enabled devices, giving rise to more interesting background and active traffic patterns and a greater set of protocols in play.

Now, it happens to be the case that sniffing other people's wireless traffic breaks a number of federal and local laws, including wiretapping laws, and while I am interested in observing other people's wireless traffic, I'm not interested in breaking the law. Fortunately, Ksplice is down the road from a wonderful school that fosters this kind of intellectual curiosity.

I met with MIT's Information Services and Technology Security Team, and we came up with a plan for me to gather data that would satisfy MIT's Athena Rules of Use. I got permission from Robert Morris and Sam Madden to monitor the wireless traffic during their Computer Systems Engineering class and made an announcement at the beginning of a class period explaining what I'd be doing. Somewhat ironically, that day's lecture was an introduction to computer security.

Some interesting results from the data set collected are summarized below. Traffic was gathered with tcpdump on my laptop as I sat in the middle of the classroom. The data was imported into Wireshark, spit back out as a CSV, and imported into a sqlite database for aggregation queries, read back into tcpdump and filtered there, or hacked up with judicious use of grep, as different needs arose.
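That aggregation step can be sketched in a few lines of Python; the CSV column layout and the sample rows here are assumptions (Wireshark's CSV export depends on which columns the packet list is configured to show):

```python
import csv
import io
import sqlite3

# A few sample rows in the shape of a Wireshark CSV export (the column
# layout is an assumption; it depends on the packet-list columns).
SAMPLE = """\
"No.","Time","Source","Destination","Protocol","Length"
"1","0.000000","18.0.0.5","224.0.0.251","MDNS","87"
"2","0.000412","18.0.0.5","18.9.22.69","TCP","66"
"3","0.001103","18.0.0.7","18.9.22.69","HTTP","512"
"4","0.002330","18.0.0.5","224.0.0.251","MDNS","87"
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE packets (num INTEGER, time REAL, src TEXT, "
             "dst TEXT, protocol TEXT, length INTEGER)")
rows = ((r["No."], r["Time"], r["Source"], r["Destination"],
         r["Protocol"], r["Length"])
        for r in csv.DictReader(io.StringIO(SAMPLE)))
conn.executemany("INSERT INTO packets VALUES (?, ?, ?, ?, ?, ?)", rows)

# The aggregation query: per-protocol packet counts and share of the capture.
total = conn.execute("SELECT COUNT(*) FROM packets").fetchone()[0]
counts = conn.execute("SELECT protocol, COUNT(*) AS c FROM packets "
                      "GROUP BY protocol ORDER BY c DESC").fetchall()
for proto, c in counts:
    print(f"{proto:8s} {c:6d} {100.0 * c / total:5.1f}%")
```

Run against the real export, this is the query behind the protocol table below.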

Protocol # Packets % Packets
MDNS 259932 30.46
TCP 245268 28.74
ICMPv6 116167 13.61
TPKT 78311 9.18
SSDP 31441 3.68
HTTP 28027 3.28
UDP 17006 1.99
LLMNR 16991 1.99
TLSv1 14390 1.69
DHCPv6 11572 1.36
DNS 10870 1.27
SSH 8804 1.03
SSLv3 3094 0.36
Jabber 2507 0.29
ARP 2003 0.23
SSHv2 1503 0.18
IGMP 1309 0.15
SNMP 1232 0.14
NBNS 619 0.073
WLCCP 513 0.060
AIM Buddylist 265 0.031
NTP 245 0.029
HTTP/XML 218 0.026
MSNMS 192 0.022
IP 139 0.016
AIM 121 0.014
SSL 105 0.012
IPv6 90 0.011
AIM Generic 84 0.0098
DHCP 64 0.0075
ICMP 60 0.0070
X.224 59 0.0069
AIM SST 45 0.0053
AIM Messaging 39 0.0046
BROWSER 37 0.0043
YMSG 35 0.0041
AARP 18 0.0021
OCSP 16 0.0019
AIM SSI 11 0.0013
SSLv2 8 0.00094
T.125 5 0.00059
AIM Signon 4 0.00047
NBP 4 0.00047
AIM BOS 3 0.00035
AIM ChatNav 3 0.00035
AIM Stats 3 0.00035
AIM Location 2 0.00023
ZIP 2 0.00023

Basic statistics
Time spent capturing: 45 minutes
Packets captured: 853436
Number of traffic sources in the room: 21
Number of distinct source and destination IPv4 and IPv6 addresses: 5117
Number of "active" traffic addresses (e.g. using HTTP or SSH, not background traffic): 581
Number of protocols represented: 48 (note that Wireshark buckets based on the top layer for a packet, so for example TCP is in this count because someone was sending actual TCP traffic without an application layer on top and not because TCP is the transport protocol for HTTP, which is also in this count). These protocols and how much traffic was sent over them are in the table on the left.

IPv4 v. IPv6
Number of IPv4 packets: 580547
Number of IPv6 packets: 270351

2.15 IPv4 packets were sent for every IPv6 packet. IPv6 was only used for background traffic, serving as the internet layer for the following protocols: DHCPv6, DNS, ICMPv6, LLMNR, MDNS, SSDP, TCP, and UDP. The TCP over IPv6 packets were all icslap, ldap, or wsdapi communications between our Windows user discussed below and his or her remote desktop. The UDP over IPv6 packets were all ws-discovery communications, part of a local multicast discovery protocol most likely being used by the Windows machines in the room.

Number of ICMP packets: 60
Number of ICMPv6 packets: 116167

1936.12 ICMPv6 packets were sent for every ICMP packet. The reason is that ICMPv6 takes on work that is handled by separate protocols, like ARP and IGMP, under IPv4. Looking at the ICMP and ICMPv6 packets by type:

ICMP Type/Code Pkts
Dest unreachable (Host administratively prohibited) 1
Dest unreachable (Port unreachable) 35
Echo (ping) request 9
Time-to-live exceeded in transit 15

ICMPv6 Type/Code Pkts
Echo request 8
Multicast Listener Report msg v2 7236
Multicast listener done 86
Multicast listener report 548
Neighbor advertisement 806
Neighbor solicitation 105710
Redirect 353
Router advertisement 461
Router solicitation 956
Time exceeded (In-transit) 1
Unknown (0x40) (Unknown (0x87)) 1
Unreachable (Port unreachable) 1

These ICMPv6 packets are mostly doing Neighbor Discovery Protocol (NDP) and Multicast Listener Discovery (MLD) work. NDP handles router and neighbor solicitation, similar to ARP and ICMP under IPv4, and MLD handles multicast listener discovery, similar to IGMP under IPv4.
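The message names above map to ICMPv6 type numbers from RFC 4443, RFC 4861 (NDP), and RFC 2710/3810 (MLD). A small lookup table is enough to do this classification yourself; the stream of type fields below is fabricated for illustration:

```python
from collections import Counter

# ICMPv6 type numbers (RFC 4443, RFC 4861, RFC 2710/3810) for the
# message names in the table above.
ICMPV6_TYPES = {
    128: "Echo request",
    130: "Multicast listener query",
    131: "Multicast listener report",
    132: "Multicast listener done",
    133: "Router solicitation",
    134: "Router advertisement",
    135: "Neighbor solicitation",
    136: "Neighbor advertisement",
    137: "Redirect",
    143: "Multicast Listener Report msg v2",
}

def classify(type_numbers):
    """Tally ICMPv6 packets by message name, given their type fields."""
    return Counter(ICMPV6_TYPES.get(t, f"Unknown (0x{t:02x})")
                   for t in type_numbers)

# A fabricated stream of type fields, dominated by neighbor solicitation
# just as in the capture.
sample = [135] * 5 + [136, 134, 133, 143, 64]
tally = classify(sample)
```

Note that an unassigned type like 64 falls out as "Unknown (0x40)", which is exactly the oddball entry in the table above.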

Number of TCP packets: 383122
Number of UDP packets: 350067

1.09 TCP packets were sent for every UDP packet. I would have thought TCP would be a clear winner, but given that MDNS traffic, which is over UDP, makes up over 30% of the packets captured, I guess this isn't surprising. The 14% of packets unaccounted for at the transport layer are mostly ARP and ICMP traffic. See also this post.

Instant Messaging

Awesomely, AIM, Jabber, MSN Messenger, and Yahoo! Messenger were all represented in the traffic:

Protocol Participants # Packets
AIM 22 580
Jabber 6 2507
YMSG 4 35
MSNMS 3 192

AIM is the clear favorite (at least with this small sample size). Note that Jabber has about 1/4th the AIM participants but 4x the number of packets. Either the Jabberers are extra chatty, or the fact that Jabber is an XML-based protocol inflates the size of a conversation dramatically on the wire. Note that some IM traffic (like Google Chat) might have instead been bucketed as HTTP/XML by Wireshark.

That Windows Remote Desktop Person

119489 packets, or 14% of the traffic, were between a computer in the classroom and what is with high probability a Windows machine on campus running the Microsoft Remote Desktop Protocol (see also this support page for a discussion of the protocol specifics).

RDP traffic from client to remote desktop: T.125, TCP, TPKT, X.224
RDP traffic from remote desktop to client: TCP, TPKT

Most of the traffic is T.125 payload packets. TPKTs encapsulate transport protocol data units (TPDUs) in the ISO Transport Service on top of TCP. TPKT traffic was all "Continuation" traffic. X.224 transmits status and error codes. TCP "ms-wbt-server" traffic to port 3389 on the remote machine seals the deal on this being an RDP setup.
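TPKT framing itself is simple: RFC 1006 puts a 4-byte header (a version byte that is always 3, a reserved byte, and a 16-bit big-endian length that includes the header itself) in front of each TPDU. Splitting a reassembled TCP stream into TPDUs can be sketched like this; the sample bytes are fabricated:

```python
import struct

def split_tpkts(stream: bytes):
    """Split a reassembled TCP byte stream into TPKT payloads (the TPDUs).

    TPKT header (RFC 1006): version (always 3), a reserved byte, and a
    16-bit big-endian length that counts the 4-byte header itself.
    """
    tpdus = []
    offset = 0
    while offset + 4 <= len(stream):
        version, _reserved, length = struct.unpack_from("!BBH", stream, offset)
        if version != 3 or length < 4:
            raise ValueError("not a TPKT at offset %d" % offset)
        tpdus.append(stream[offset + 4:offset + length])
        offset += length
    return tpdus

# Two fabricated TPKTs back to back: a 3-byte TPDU and a 2-byte TPDU.
stream = b"\x03\x00\x00\x07ABC" + b"\x03\x00\x00\x06DE"
tpdus = split_tpkts(stream)
```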


SSH

All SSH and SSHv2 traffic was to Linerva, a public-access Linux server run by SIPB for the MIT community, except for one person talking to a personal server on campus.

SSL

Protocol # Packets % Packets
TLSv1 14390 81.8
SSLv3 3094 17.6
SSL 105 0.59
SSLv2 8 0.045

5 clients were involved in negotiations using SSLv2, which is insecure and disabled by default in most browsers; none of those negotiations got past a "Client Hello".
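Wireshark can tell these flavors apart from the record layer alone: SSLv3 and TLS records begin with a content-type byte followed by a two-byte protocol version, while an SSLv2-style Client Hello uses the older record header with the high bit set on the length. A rough sketch of that classification (the record bytes are fabricated, and real dissection looks at much more than these prefixes):

```python
def classify_record(record: bytes) -> str:
    """Guess the SSL/TLS flavor from the first bytes of a record."""
    if len(record) < 3:
        return "SSL"  # too short to tell
    if record[0] & 0x80:
        # SSLv2 record header: high bit set on a 2-byte length field,
        # followed by the handshake message type (0x01 = Client Hello).
        return "SSLv2"
    major, minor = record[1], record[2]
    if (major, minor) == (3, 0):
        return "SSLv3"
    if (major, minor) == (3, 1):
        return "TLSv1"
    return "SSL"  # unrecognized version bytes

# Fabricated record prefixes: a TLS 1.0 handshake record, an SSLv3
# record, and an SSLv2 Client Hello.
flavors = [classify_record(b"\x16\x03\x01\x00\x2f"),
           classify_record(b"\x16\x03\x00\x00\x2f"),
           classify_record(b"\x80\x2e\x01\x00\x02")]
```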


HTTP

I wanted to be able to answer questions like "what were the top 20 most visited websites" in this traffic capture. The proliferation of content distribution networks makes it harder to track all traffic associated with a popular website by IP addresses or hostnames. I ended up doing a crude but quick grep "Host:" pcap.plaintext | sort | uniq -c | sort -n -r on the expanded plaintext version of the data exported from Wireshark, which gives the most visited hosts based on the number of GET requests. The top 20 most visited hosts by that method were:
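The same tally can be done in Python with collections.Counter; the sample lines below stand in for the exported plaintext, and the assumption (as with the grep) is that each request's Host header appears on its own line:

```python
import re
from collections import Counter

def top_hosts(lines, n=20):
    """Tally 'Host:' headers, like grep "Host:" | sort | uniq -c | sort -n -r."""
    hosts = Counter()
    for line in lines:
        m = re.search(r"Host:\s*(\S+)", line)
        if m:
            hosts[m.group(1)] += 1
    return hosts.most_common(n)

# Fabricated plaintext lines standing in for the Wireshark export.
sample = [
    "    Host: www.example.com\r\n",
    "    Host: cdn.example.net\r\n",
    "    Host: www.example.com\r\n",
    "    User-Agent: Mozilla/5.0\r\n",
]
ranking = top_hosts(sample)
```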

Rank GETs Host Rank GETs Host
1 336 11 88
2 229 12 87
3 211 13 66
4 167 14 66
5 149 15 64
6 114 16 61
7 113 17 57
8 111 18 56
9 93 19 56
10 90 20 55

Alas, pretty boring. The blogcdn, blogsmith and eyewonder hosts are all for Engadget, and fbcdn is part of Facebook. I'll admit that I'd been a little hopeful that some enterprising student would try to screw up my data by scripting visits to a bunch of porn sites or something. CDNs dominate the top 20, and in fact almost all of the roughly 410 web server IP addresses gathered are for CDNs. Akamai led with 39 distinct IPs, followed by Amazon AWS with 23, and Facebook and Panther CDN with 16, with many more at lower numbers.


Conclusions

Using the Internet means a lot more than HTTP traffic! 45 minutes of traffic capture gave us 48 protocols to explore. Most of the captured traffic was background traffic, and in particular discovery traffic local to the subnet.

Sniffing wireless traffic (legally) is a great way to learn about networking protocols and the way the Internet works in practice. It can also be incredibly useful for debugging networking problems.

What are some of your fun or awful networking experiences?


Tuesday Mar 09, 2010

How to quadruple your productivity with an army of student interns

Startup companies are always hunting for ways to accomplish as much as possible with what they have available. Last December we realized that we had a growing queue of important engineering projects outside of our core technology that our team didn't have the time to finish anytime soon. To make matters worse, we wanted the projects completed right away, in time for our planned product launch in early February.

So what did we do? The logical solution, of course. We quadrupled the size of our company's engineering team for one month using paid student interns.

[Photo: 20 men and women posing for a group photo]

The Ksplice interns, ca. January 2010.

Now, if you happen to know Fred Brooks, please don't tell him what we did. He managed the Windows Vista of the 1960s---IBM's OS/360, a software project of unprecedented size, complexity, and lateness---and wrote up the resulting hard-earned lessons in The Mythical Man-Month, which everyone managing software projects should read. The big one is Brooks's Law: "adding manpower to a late software project makes it later". Oops.

Brooks's observation usually holds. New people take time to get up to speed on any system---both their own time, and the time of your experienced people mentoring them. They also add communication costs that grow quadratically with the size of the team.

Fortunately, Ksplice benefits from a bit of an engineering reality distortion field (our very product is supposed to be technologically impossible) and, with the right techniques, quadrupling our engineering team's size for one month worked out great. Every intern was responsible for one of the company's top unaddressed priorities for the month, and every intern successfully completed their project.

So, how do you quadruple the size of your engineering team in one month and still keep everyone productive?

[Photo: ten people working at computers in a room]
At work in the Ksplice engineering office.

  • Tolerate a little crowding. It took a little creativity to suddenly find a dozen new workspaces in our two-room office. Fortunately, we've found that a room can always fit one more person---and by induction, you can fit as many as you need. (All those years we spent proving math theorems came in handy after all.) Seating everyone close to each other has an important advantage, too: when lots of people on your team have just started, it's handy for them to work right next to the mentors who are answering their questions and helping them ramp up on the learning curve of the organization. With the right team, the crowding can also create an energetic office environment that makes people love to come in to work. (Sometimes it gets in the way of concentration, though---that's when I put on a good pair of headphones.)
  • Locate next to a deep pool of hackers. OK, so we're a bit spoiled by being headquartered a few blocks away from the Massachusetts Institute of Technology. At MIT, January is set aside for students to pursue projects outside of the curriculum---perfect for hiring an intern army. Many other institutions have either a similar "January term", or a program for students to spend time working in industry during the term.
  • Know who the best people are and only hire them. Ksplice was born four years ago at SIPB, MIT's student computing group. When a group of students run computing services thousands of people rely on, and spend hours each week discussing, dreaming, collaborating, and learning from each other about computer systems---some of them get really good at it. Even better, everyone sees everyone else in action and knows exactly what it's like to work with them. Investing some time into getting involved with technical communities makes it possible to hire people based on personal experience with them and their work, which is so much better than hiring based on resumes and interviews. Companies like Google and Red Hat have known for years that being involved in the open source community can provide an excellent source of vetted job candidates.
  • Pay well. In some industries, "intern" means "unpaid"---but computer science students have plenty of options, and you want to be able to hire the best people. We looked at pay rates for jobs on campus, and pegged our rate to the high end of those.
  • Divide tasks to be as loosely-coupled as possible. Our internship program would never have worked if we had assigned a dozen new people to hack on our kernel code---the training time and communication costs that drive Brooks's Law would have swallowed their efforts whole. Fortunately, like any growing business, we had a constellation of tasks lying around the edges of our core technology: infrastructure upgrades, additional layers of QA, business analytics, and new features in the management side of our product. These had manageable technical interfaces to our existing software, so our interns were able to become productive with minimal ramp-up and rely on relatively little communication to get their projects done.
  • Design your intern projects in advance. A key challenge when scaling up your engineering team quickly is making sure that the interfaces are all well designed and the new projects will meet the company's needs. So we spent a good deal of time getting these designs together before the interns started. We also allocated plenty of our core engineers' time for code reviews and other feedback with our interns in order to make sure their work would be maintainable after they left.
Have you achieved more in one month than anyone thought should be possible? How did you do it?

[Edited to make clear our interns were paid and to say more about how we designed their projects for high-quality output.]



