Every so often, someone brings up the fact that fsck takes longer and longer on large filesystems. The more stuff that goes in, the more stuff needs to be fsck'd. I just read about "chunkfs", one possible answer to the fsck problem. It works by dividing a single large filesystem into 256 smaller sub-filesystems, which are presented to the user as just one.
What happens when one sub-filesystem fills up?
The answer here is the creation of a "continuation inode." These inodes track the allocation of blocks in a different chunk; they look much like files in their own right, but they really represent a portion of a larger file. The "real" inode for a given file can have pointers to up to four continuation inodes in different chunks; if more are needed, each continuation inode can, itself, point to another four continuations. Thus, continuation inodes can be chained to create files of arbitrary length.
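The chaining scheme above can be sketched in a few lines. This is purely illustrative pseudocode-in-Python, not the actual chunkfs on-disk format: the names (`Inode`, `add_continuation`, `file_blocks`) and the overflow policy (push extra continuations down the last slot) are assumptions made for the sketch.

```python
MAX_CONTINUATIONS = 4  # a "real" inode points to at most four continuations

class Inode:
    """Illustrative stand-in for a chunkfs inode or continuation inode."""

    def __init__(self, chunk, blocks):
        self.chunk = chunk          # which sub-filesystem holds these blocks
        self.blocks = blocks        # blocks allocated for this file in that chunk
        self.continuations = []     # up to MAX_CONTINUATIONS further inodes

    def add_continuation(self, cont):
        """Chain a continuation inode; once all four slots are used,
        the overflow cascades into the last continuation's own slots."""
        if len(self.continuations) < MAX_CONTINUATIONS:
            self.continuations.append(cont)
        else:
            self.continuations[-1].add_continuation(cont)

def file_blocks(inode):
    """Walk the continuation chain, gathering (chunk, block) pairs in order."""
    out = [(inode.chunk, b) for b in inode.blocks]
    for cont in inode.continuations:
        out.extend(file_blocks(cont))
    return out
```

A file whose "real" inode lives in chunk 0 but which has spilled into chunks 1 through 5 would keep four continuation pointers directly, with the fifth hanging off the fourth; `file_blocks` still reassembles the whole thing in order.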
We've all heard the term spaghetti code, but this is the first time I've heard of a spaghetti filesystem.