## Monday Jun 27, 2011

### Building a physical CPU load meter

I built this analog CPU load meter for my dev workstation:

All I did was drill a few holes into the CPU and probe the power supply lines...

Okay, I lied. This is actually a fun project that would make a great intro to embedded electronics, or a quick afternoon hack for someone with a bit of experience.

### The parts

The main components are:

• Current meter: I got this at MIT Swapfest. The scale printed on the face is misleading: the meter itself measures only about 600 microamps in each direction. (It's designed for use with a circuit like this one). We can determine the actual current scale by connecting (in series) the analog meter, a variable resistor, and a digital multimeter, and driving them from a 5 volt power supply. This lets us adjust and reliably measure the current flowing through the analog meter.

• Arduino: This little board has a 16 MHz processor, with direct control of a few dozen input/output pins, and a USB serial interface to a computer. In our project, it will take commands over USB and produce the appropriate amount of current to drive the meter. We're not even using most of the capabilities of the Arduino, but it's hard to beat as a platform for rapid development.

• Resistor: The Arduino board is powered over USB; its output pins produce 5 volts for a logic 'high'. We want this 5 volt potential to push 600 microamps of current through the meter, according to the earlier measurement. Using Ohm's law we can calculate that we'll need a resistance of about 8.3 kilohms. Or you can just measure the variable resistor from earlier.

We'll also use some wire, solder, and tape.
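
The resistor value is just Ohm's law; here's the arithmetic as a quick Python sanity check, using the 5 volt supply and the roughly 600 microamp full-scale current measured above:

```python
# Ohm's law: R = V / I, using the values measured above.
volts = 5.0               # Arduino logic 'high'
full_scale_amps = 600e-6  # full-scale meter current, measured earlier

ohms = volts / full_scale_amps
print("%.1f kilohms" % (ohms / 1000))  # about 8.3 kilohms
```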

### Building it

The resistor goes in series with the meter. I just soldered it directly to the back:

Some tape over these components prevents them from shorting against any of the various junk on my desk. Those wires run to the Arduino, hidden behind my monitor, which is connected to the computer by USB:

That's it for hardware!

### Code for the Arduino

The Arduino IDE will compile code written in a C-like language and upload it to the Arduino board over USB. Here's our program:

```
#define DIRECTION 2
#define MAGNITUDE 3

void setup() {
  Serial.begin(57600);
  pinMode(DIRECTION, OUTPUT);
  pinMode(MAGNITUDE, OUTPUT);
}

void loop() {
  int x = Serial.read();
  if (x == -1)
    return;

  if (x < 128) {
    digitalWrite(DIRECTION, LOW);
    analogWrite (MAGNITUDE, 2*(127 - x));
  } else {
    digitalWrite(DIRECTION, HIGH);
    analogWrite (MAGNITUDE, 255 - 2*(x - 128));
  }
}
```

When it turns on, the Arduino will execute `setup()` once, and then call `loop()` over and over, forever. On each iteration, we try to read a byte from the serial port. A value of `-1` indicates that no byte is available, so we `return` and try again a moment later. Otherwise, we translate a byte value between 0 and 255 into a meter deflection between −600 and 600 microamps.

Pins 0 and 1 are used for serial communication, so I connected the meter to pins 2 and 3, and named them `DIRECTION` and `MAGNITUDE` respectively. When we call `analogWrite` on the `MAGNITUDE` pin with a value between 0 and 255, we get a proportional voltage between 0 and 5 volts. Actually, the Arduino fakes this by alternating between 0 and 5 volts very rapidly, but our meter is a slow mechanical object and won't know the difference.

Suppose the `MAGNITUDE` pin is at some intermediate voltage between 0 and 5 volts. If the `DIRECTION` pin is low (0 V), conventional current will flow from `MAGNITUDE` to `DIRECTION` through the meter. If we set `DIRECTION` high (5 V), current will flow from `DIRECTION` to `MAGNITUDE`. So we can send current through the meter in either direction, and we can control the amount of current by controlling the effective voltage at `MAGNITUDE`. This is all we need to make the meter display whatever reading we want.
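
As a sanity check on that logic, here's a small Python model (not part of the project code) of the current through the meter for a given command byte, treating each PWM pin as a steady voltage equal to its average and assuming the 8.3 kilohm resistor:

```python
R = 8333.0  # ohms, from the Ohm's law calculation

def meter_current_ua(x):
    """Model the meter current (in microamps) for command byte x.

    Treats the PWM output as a steady voltage equal to its average,
    which is fair because the meter is far slower than the PWM.
    """
    if x < 128:
        v_dir = 0.0
        v_mag = 5.0 * 2 * (127 - x) / 255.0
    else:
        v_dir = 5.0
        v_mag = 5.0 * (255 - 2 * (x - 128)) / 255.0
    # Conventional current flows from the higher potential to the lower.
    return (v_dir - v_mag) / R * 1e6

# x = 0 gives roughly -600 uA (full deflection one way), 127 and 128
# give roughly zero, and 255 gives roughly +600 uA the other way.
for x in (0, 127, 128, 255):
    print(x, round(meter_current_ua(x)))
```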

### Code for the Linux host

On Linux we can get CPU load information from the `proc` special filesystem:

```
keegan@lyle$ head -n 1 /proc/stat
cpu  965348 22839 479136 88577774 104454 5259 24633 0 0
```

These numbers tell us how much time the system's CPUs have spent in each of several states:

1. user: running normal user processes
2. nice: running user processes of low priority
3. system: running kernel code, often on behalf of user processes
4. idle: doing nothing because all processes are sleeping
5. iowait: doing nothing because all runnable processes are waiting on I/O devices
6. irq: handling asynchronous events from hardware
7. softirq: performing tasks deferred by irq handlers
8. steal: not running, because we're in a virtual machine and some other VM is using the physical CPU
9. guest: acting as the host for a running virtual machine

The numbers in `/proc/stat` are cumulative totals since boot, measured in arbitrary time units. We can read the file twice and subtract, in order to get a measure of where CPU time was spent recently. Then we'll use the fraction of time spent in states other than idle as a measure of CPU load, and send this to the Arduino.

We'll do all this with a small Python script. The pySerial library lets us talk to the Arduino over USB serial. We'll configure it for 57,600 bits per second, the same rate specified in the Arduino's `setup()` function. Here's the code:

```
#!/usr/bin/env python

import serial
import time

port = serial.Serial('/dev/ttyUSB0', 57600)

old = None
while True:
    with open('/proc/stat') as stat:
        new = [float(n) for n in stat.readline().split()[1:]]
    if old is not None:
        diff = [n - o for n, o in zip(new, old)]
        idle = diff[3] / sum(diff)
        port.write(chr(int(255 * (1 - idle))))
    old = new
    time.sleep(0.25)
```

### That's it!

That's all it takes to make a physical, analog CPU meter. It's been done before and will be done again, but we're interested in what you'd do (or have already done!) with the concept. You could measure website hits, or load across a whole cluster, or your profits from trading Bitcoins. One standard Arduino can run at least six meters of this type (six being the number of pins that support `analogWrite`), and a number of switches, knobs, buzzers, and blinky lights besides. If your server room has a sweet control panel, we'd love to see a picture!

~keegan

## Wednesday Jul 28, 2010

### Learning by doing: Writing your own traceroute in 8 easy steps

Anyone who administers even a moderately sized network knows that when problems arise, diagnosing and fixing them can be extremely difficult. They're usually non-deterministic and difficult to reproduce, and very similar symptoms (e.g. a slow or unreliable connection) can be caused by any number of problems — congestion, a broken router, a bad physical link, etc.

One very useful weapon in a system administrator's arsenal for dealing with network issues is `traceroute` (or `tracert`, if you use Windows). This is a neat little program that will print out the path that packets take to get from the local machine to a destination — that is, the sequence of routers that the packets go through.

Using `traceroute` is pretty straightforward. On a UNIX-like system, you can do something like the following:

```
$ traceroute google.com
traceroute to google.com (173.194.33.104), 30 hops max, 60 byte packets
1  router.lan (192.168.1.1)  0.595 ms  1.276 ms  1.519 ms
2  70.162.48.1 (70.162.48.1)  13.669 ms  17.583 ms  18.242 ms
3  ge-2-20-ur01.cambridge.ma.boston.comcast.net (68.87.36.225)  18.710 ms  19.192 ms  19.640 ms
4  be-51-ar01.needham.ma.boston.comcast.net (68.85.162.157)  20.642 ms  21.160 ms  21.571 ms
5  pos-2-4-0-0-cr01.newyork.ny.ibone.comcast.net (68.86.90.61)  28.870 ms  29.788 ms  30.437 ms
6  pos-0-3-0-0-pe01.111eighthave.ny.ibone.comcast.net (68.86.86.190)  30.911 ms  17.377 ms  15.442 ms
7  as15169-3.111eighthave.ny.ibone.comcast.net (75.149.230.194)  40.081 ms  41.018 ms  39.229 ms
8  72.14.238.232 (72.14.238.232)  20.139 ms  21.629 ms  20.965 ms
9  216.239.48.24 (216.239.48.24)  25.771 ms  26.196 ms  26.633 ms
10 173.194.33.104 (173.194.33.104)  23.856 ms  24.820 ms  27.722 ms
```

Pretty nifty. But how does it work? After all, when a packet leaves your network, you can't monitor it anymore. So when it hits all those routers, the only way you can know about that is if one of them tells you about it.

The secret behind `traceroute` is a field called "Time To Live" (TTL) that is contained in the headers of the packets sent via the Internet Protocol. When a host receives a packet, it checks if the packet's TTL is greater than `1` before sending it on down the chain. If it is, it decrements the field. Otherwise, it drops the packet and sends an ICMP `TIME_EXCEEDED` packet to the sender. This packet, like all IP packets, contains the address of its sender, i.e. the intermediate host.

`traceroute` works by sending consecutive requests to the same destination with increasing TTL fields. Most of these attempts result in messages from intermediate hosts saying that the packet was dropped. The IP addresses of these intermediate hosts are then printed on the screen (generally with an attempt made at determining the hostname) as they arrive, terminating when the maximum number of hops has been reached (on my machine's `traceroute` the default maximum is 30, but this is configurable), or when the intended destination has been reached.
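
The mechanism is easy to simulate without touching the network. This toy Python model (the router names are made up) shows why probing with TTL 1, 2, 3, and so on reveals the path one hop at a time:

```python
def probe(path, ttl):
    """Send one imaginary packet down `path` with the given TTL.

    Returns the router that reports TIME_EXCEEDED, or None if the
    packet reaches the destination (the last element of `path`).
    """
    for hop in path[:-1]:
        if ttl == 1:
            return hop  # TTL expired here: this hop reports back
        ttl -= 1
    return None         # made it all the way to the destination

path = ['router.lan', 'isp-gw', 'backbone', 'google.com']

trace = []
ttl = 1
while True:
    hop = probe(path, ttl)
    if hop is None:
        trace.append(path[-1])
        break
    trace.append(hop)
    ttl += 1

print(trace)  # each TTL value exposes one more router
```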

The rest of this post will walk through implementing a very primitive version of `traceroute` in Python. The real `traceroute` is of course more complicated than what we will create, with many configurable features and modes. Still, our version will implement the basic functionality, and at the end, we'll have a really nice and short Python script that will do just fine for performing a simple `traceroute`.

So let's begin. Our algorithm, at a high level, is an infinite loop whose body creates a connection, prints out information about it, and then breaks out of the loop if a certain condition has been reached. So we can start with the following skeletal code:

```
def main(dest_name):
    while True:
        # ... open connections ...
        # ... print data ...
        # ... break if useful ...
        pass

if __name__ == "__main__":
    main('google.com')
```

Step 1: Turn a hostname into an IP address.

The `socket` module provides a `gethostbyname()` method that attempts to resolve a domain name into an IP address:

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    while True:
        # ... open connections ...
        # ... print data ...
        # ... break if useful ...
        pass

if __name__ == "__main__":
    main('google.com')
```
Step 2: Create sockets for the connections.

We'll need two sockets for our connections — one for receiving data and one for sending. We have a lot of choices for what kind of probes to send; let's use UDP probes, which require a datagram socket (`SOCK_DGRAM`). The routers along our traceroute path are going to send back ICMP packets, so for those we need a raw socket (`SOCK_RAW`).

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        # ... print data ...
        # ... break if useful ...

if __name__ == "__main__":
    main('google.com')
```

Step 3: Set the TTL field on the packets.

We'll simply use a counter which begins at `1` and which we increment with each iteration of the loop. We set the TTL using the `setsockopt` method of the socket object:

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    ttl = 1
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl)

        ttl += 1
        # ... print data ...
        # ... break if useful ...

if __name__ == "__main__":
    main('google.com')
```

Step 4: Bind the sockets and send some packets.

Now that our sockets are all set up, we can put them to work! We first tell the receiving socket to listen to connections from all hosts on a specific port (most implementations of `traceroute` use ports from 33434 to 33534 so we will use 33434 as a default). We do this using the `bind()` method of the receiving socket object, by specifying the port and an empty string for the hostname. We can then use the `sendto()` method of the sending socket object to send to the destination host (on the same port). The first argument of the `sendto()` method is the data to send; in our case, we don't actually have anything specific we want to send, so we can just give the empty string:

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    port = 33434
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    ttl = 1
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl)
        recv_socket.bind(("", port))
        send_socket.sendto("", (dest_name, port))

        ttl += 1
        # ... print data ...
        # ... break if useful ...

if __name__ == "__main__":
    main('google.com')
```
Step 5: Get the intermediate hosts' IP addresses.

Next, we need to actually get our data from the receiving socket. For this, we can use the `recvfrom()` method of the object, whose return value is a tuple containing the packet data and the sender's address. In our case, we only care about the latter. Note that the address is itself actually a tuple containing both the IP address and the port, but we only care about the former. `recvfrom()` takes a single argument, the blocksize to read — let's go with 512.

It's worth noting that some administrators disable receiving ICMP `ECHO` requests, pretty much specifically to prevent the use of utilities like `traceroute`, since the detailed layout of a network can be sensitive information (another common reason to disable them is the `ping` utility, which can be used for denial-of-service attacks). It is therefore completely possible that we'll get a timeout error, which will result in an exception. Thus, we'll wrap this call in a `try/except` block. Traditionally, `traceroute` prints asterisks when it can't get the address of a host. We'll do the same once we print out results.

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    port = 33434
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    ttl = 1
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl)
        recv_socket.bind(("", port))
        send_socket.sendto("", (dest_name, port))
        curr_addr = None
        try:
            _, curr_addr = recv_socket.recvfrom(512)
            curr_addr = curr_addr[0]
        except socket.error:
            pass
        finally:
            send_socket.close()
            recv_socket.close()

        ttl += 1
        # ... print data ...
        # ... break if useful ...

if __name__ == "__main__":
    main('google.com')
```
Step 6: Turn the IP addresses into hostnames and print the data.

To match `traceroute`'s behavior, we want to try to display the hostname along with the IP address. The `socket` module provides the `gethostbyaddr()` method for reverse DNS resolution. The resolution can fail and result in an exception, in which case we'll want to catch it and make the hostname the same as the address. Once we get the hostname, we have all the information we need to print our data:

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    port = 33434
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    ttl = 1
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl)
        recv_socket.bind(("", port))
        send_socket.sendto("", (dest_name, port))
        curr_addr = None
        curr_name = None
        try:
            _, curr_addr = recv_socket.recvfrom(512)
            curr_addr = curr_addr[0]
            try:
                curr_name = socket.gethostbyaddr(curr_addr)[0]
            except socket.error:
                curr_name = curr_addr
        except socket.error:
            pass
        finally:
            send_socket.close()
            recv_socket.close()

        if curr_addr is not None:
            curr_host = "%s (%s)" % (curr_name, curr_addr)
        else:
            curr_host = "*"
        print "%d\t%s" % (ttl, curr_host)

        ttl += 1
        # ... break if useful ...

if __name__ == "__main__":
    main('google.com')
```

Step 7: End the loop.

There are two conditions for exiting our loop — either we have reached our destination (that is, `curr_addr` is equal to `dest_addr`)[1] or we have exceeded some maximum number of hops. We will set our maximum at 30:

```
#!/usr/bin/python

import socket

def main(dest_name):
    dest_addr = socket.gethostbyname(dest_name)
    port = 33434
    max_hops = 30
    icmp = socket.getprotobyname('icmp')
    udp = socket.getprotobyname('udp')
    ttl = 1
    while True:
        recv_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
        send_socket.setsockopt(socket.SOL_IP, socket.IP_TTL, ttl)
        recv_socket.bind(("", port))
        send_socket.sendto("", (dest_name, port))
        curr_addr = None
        curr_name = None
        try:
            _, curr_addr = recv_socket.recvfrom(512)
            curr_addr = curr_addr[0]
            try:
                curr_name = socket.gethostbyaddr(curr_addr)[0]
            except socket.error:
                curr_name = curr_addr
        except socket.error:
            pass
        finally:
            send_socket.close()
            recv_socket.close()

        if curr_addr is not None:
            curr_host = "%s (%s)" % (curr_name, curr_addr)
        else:
            curr_host = "*"
        print "%d\t%s" % (ttl, curr_host)

        ttl += 1
        if curr_addr == dest_addr or ttl > max_hops:
            break

if __name__ == "__main__":
    main('google.com')
```

Step 8: Run the code!

We're done! Let's save this to a file and run it! Because raw sockets require root privileges, `traceroute` is typically setuid. For our purposes, we can just run the script as root:

```
$ sudo python poor-mans-traceroute.py
1       router.lan (192.168.1.1)
2       70.162.48.1 (70.162.48.1)
3       ge-2-20-ur01.cambridge.ma.boston.comcast.net (68.87.36.225)
4       be-51-ar01.needham.ma.boston.comcast.net (68.85.162.157)
5       pos-2-4-0-0-cr01.newyork.ny.ibone.comcast.net (68.86.90.61)
6       pos-0-3-0-0-pe01.111eighthave.ny.ibone.comcast.net (68.86.86.190)
7       as15169-3.111eighthave.ny.ibone.comcast.net (75.149.230.194)
8       72.14.238.232 (72.14.238.232)
9       216.239.48.24 (216.239.48.24)
10      173.194.33.104 (173.194.33.104)
```

Hurrah! The data matches the real `traceroute`'s perfectly.

Of course, there are many improvements that we could make. As I mentioned, the real traceroute has a whole slew of other features, which you can learn about by reading the manpage. In the meantime, I wrote a slightly more complete version of the above code that allows configuring the port and max number of hops, as well as specifying the destination host. You can download it at my github repository.

Alright folks, what UNIX utility should we write next? `strace`, anyone? :-) [2]

[1] This is actually not quite how the real `traceroute` works. Rather than checking the IP addresses of the hosts and stopping when the destination address matches, it stops when it receives an ICMP "port unreachable" message, which means that the host has been reached. For our purposes, though, this simple address heuristic is good enough.

[2] Ksplice blogger Nelson took up a DIY strace on his personal blog, Made of Bugs.

## Wednesday Jul 07, 2010

### Building Filesystems the Way You Build Web Apps

FUSE is awesome. While most major Linux filesystems (ext3, XFS, ReiserFS, btrfs) are built into the Linux kernel, FUSE is a library that lets you instead write filesystems as userspace applications. When something attempts to access the filesystem, those accesses get passed on to the FUSE application, which can then return the filesystem data.

It lets you quickly prototype and test filesystems that can run on multiple platforms without writing kernel code. You can easily experiment with strange and unusual interactions between the filesystem and your applications. You can even build filesystems without writing a line of C code.

FUSE has a reputation of being used only for toy filesystems (when are you actually going to use flickrfs?), but that's really not fair. FUSE is currently the best way to read NTFS partitions on Linux, it's how non-GNOME and legacy applications access files over SFTP, SMB, and other protocols, and it's the only way to run ZFS on Linux.

But because the FUSE API calls a separate function for each system call (`getattr`, `open`, `read`, and so on), writing a useful filesystem means writing boilerplate code that translates requests for a particular path into a logical object in your filesystem — and repeating that boilerplate in every FUSE API function you implement.

### Take a page from web apps

This is the kind of problem that web development frameworks have also had to solve, since it's been a long time since a URL always mapped directly onto a file on the web server. And while there are a handful of approaches for handling URL dispatch, I've always been a fan of the URL dispatch style popularized by routing in Ruby on Rails, which was later ported to Python as the Routes library.

Routes dissociates an application's URL structure from your application's internal organization, so that you can connect arbitrary URLs to arbitrary controllers. However, a more common use of Routes involves embedding variables in the Routes configuration, so that you can support a complex and potentially arbitrary set of URLs with a comparatively simple configuration block. For instance, here is the (slightly simplified) Routes configuration from a Pylons web application:

```
from routes import Mapper

def make_map():
    map = Mapper()
    map.minimization = False

    # The ErrorController route (handles 404/500 error pages); it should
    # likely stay at the top, ensuring it can always be resolved
    map.connect('error/{action}/{id}', controller='error')
    map.connect('/', controller='status', action='index')
    map.connect('/{controller}', action='index')
    map.connect('/{controller}/{action}')
    map.connect('/{controller}/{action}/{id}')

    return map
```

In this example, `{controller}`, `{action}`, and `{id}` are variables which can match any string within that path component. So, for instance, if someone were to access `/spend/new` within the web application, Routes would find a controller named `spend`, and would call the `new` action on that controller.
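
The pattern matching behind this is easy to see in miniature. This stdlib-only sketch (not the real Routes API) turns a template like `/{controller}/{action}` into a regular expression with named groups and extracts the variables:

```python
import re

def compile_route(template):
    """Turn a template like '/{controller}/{action}' into a compiled
    regex where each {name} becomes a named group matching one path
    component."""
    pattern = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', template)
    return re.compile('^%s$' % pattern)

route = compile_route('/{controller}/{action}')
match = route.match('/spend/new')
print(match.groupdict())  # {'controller': 'spend', 'action': 'new'}
```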

### RouteFS: URL routing for filesystems

Just as URLs take their inspiration from the filesystem, we can use the ideas from URL routing in our filesystem. And to make this easy, I created a project called RouteFS. RouteFS ties together FUSE and Routes, and it's great because it lets you specify your filesystem in terms of the filesystem hierarchy instead of in terms of the system calls to access it.

RouteFS was originally developed as a generalized solution to a real problem I faced while working on the Invirt project at MIT. We wanted a series of filesystem entries that were automatically updated when our database changed (specifically, we were using `.k5login` files to control access to a server), so we used RouteFS to build a filesystem where every filesystem lookup was resolved by a database query, ensuring that our filesystem always stayed up to date.

Today, however, we're going to be using RouteFS to build the very thing I lampooned FUSE for: toy filesystems. I'll be demonstrating how to build a simple filesystem in less than 60 lines of code. I want to continue the popular theme of exposing Web 2.0 services as filesystems, but I'm also a software engineer at a very Git- and Linux-heavy company. The popular Git repository hosting site Github has an API for interacting with the repositories hosted there, so we'll use the Python bindings for the API to build a Github filesystem, or GithubFS. GithubFS lets you examine the Git repositories on Github, as well as the different branches of those repositories.

### Getting started

If you want to follow along, you'll first need to install FUSE itself, along with the Python FUSE bindings (look for a `python-fuse` or `fuse-python` package). You'll also need a few third-party Python packages: Routes, RouteFS, and github2. Routes and RouteFS are available from the Python Cheeseshop, so you can install them by running `easy_install Routes RouteFS`. For github2, you'll need the bleeding-edge version, which you can get by running `easy_install http://github.com/ask/python-github2/tarball/master`.

Now then, let's start off with the basic shell of a RouteFS filesystem:

```
#!/usr/bin/python

import routes
import routefs

class GithubFS(routefs.RouteFS):
    def make_map(self):
        m = routes.Mapper()
        return m

if __name__ == '__main__':
    routefs.main(GithubFS)
```

As with the web application code above, the `make_map` method of the `GithubFS` class creates, configures, and returns a Python Routes mapper, which RouteFS uses for dispatching accesses to the filesystem. The `routefs.main` function takes a RouteFS class and handles instantiating the class and mounting the filesystem.

### Populating the filesystem

Now that we have a filesystem, let's put some files in it:

```
#!/usr/bin/python

import routes
import routefs

class GithubFS(routefs.RouteFS):
    def __init__(self, *args, **kwargs):
        super(GithubFS, self).__init__(*args, **kwargs)

        # Maps user -> [projects]
        self.user_cache = {}

    def make_map(self):
        m = routes.Mapper()
        m.connect('/', controller='list_users')
        return m

    def list_users(self, **kwargs):
        return [user
                for user, projects in self.user_cache.iteritems()
                if projects]

if __name__ == '__main__':
    routefs.main(GithubFS)
```

Here, we add our first Routes mapping, connecting `'/'`, or the root of the filesystem, to the `list_users` controller, which is just a method on the filesystem's class. The `list_users` controller returns a list of strings. When the controller that a path maps to returns a list, RouteFS automatically makes that path into a directory. To make a path a file, you just return a single string containing the file's contents.

We'll use the `user_cache` attribute to keep track of the users that we've seen and their repositories. This will let us auto-populate the root of the filesystem as users get looked up.

Let's add some code to populate that cache:

```
#!/usr/bin/python

from github2 import client
import routes
import routefs

class GithubFS(routefs.RouteFS):
    def __init__(self, *args, **kwargs):
        super(GithubFS, self).__init__(*args, **kwargs)

        # Maps user -> [projects]
        self.user_cache = {}
        self.github = client.Github()

    def make_map(self):
        m = routes.Mapper()
        m.connect('/', controller='list_users')
        m.connect('/{user}', controller='list_repos')
        return m

    def list_users(self, **kwargs):
        return [user
                for user, projects in self.user_cache.iteritems()
                if projects]

    def list_repos(self, user, **kwargs):
        if user not in self.user_cache:
            try:
                self.user_cache[user] = [r.name
                                         for r in self.github.repos.list(user)]
            except:
                self.user_cache[user] = None

        return self.user_cache[user]

if __name__ == '__main__':
    routefs.main(GithubFS)
```

That's enough code that we can start interacting with the filesystem:

```
opus:~ broder$ ./githubfs /mnt/githubfs
opus:~ broder$ ls /mnt/githubfs
opus:~ broder$ ls /mnt/githubfs/ebroder
anygit	    githubfs	 pyhesiodfs	 python-simplestar
auto-aklog  ibtsocs	 python-github2  python-zephyr
bluechips   libhesiod	 python-hesiod
debmarshal  ponyexpress  python-moira
debothena   pyafs	 python-routefs
opus:~ broder$ ls /mnt/githubfs
ebroder
```

### Users and projects and branches, oh my!

You can see a slightly more fleshed-out filesystem on (where else?) Github. GithubFS lets you look at the current SHA-1 for each branch in each repository for a user:

```
opus:~ broder$ ./githubfs /mnt/githubfs
opus:~ broder$ ls /mnt/githubfs/ebroder
anygit	    githubfs	 pyhesiodfs	 python-simplestar
auto-aklog  ibtsocs	 python-github2  python-zephyr
bluechips   libhesiod	 python-hesiod
debmarshal  ponyexpress  python-moira
debothena   pyafs	 python-routefs
opus:~ broder$ ls /mnt/githubfs/ebroder/githubfs
master
opus:~ broder$ cat /mnt/githubfs/ebroder/githubfs/master
cb4fc93ba381842fa0c2b34363d52475c4109852
```

### What next?

Want to see more examples of RouteFS? RouteFS itself includes some example filesystems, and you can see how we used RouteFS within the Invirt project. But most importantly, because RouteFS is open source, you can incorporate it into your own projects.

So, what cool tricks can you think of for dynamically generated filesystems?