Why blocks?

We've been doing a lot of thinking lately about the blocks in block storage. At some level, blocks make sense: it makes sense to break the disk media into fixed-size sectors. Disks have done this for years, and up until the early 1990s, disk drives had very little intelligence and could only store and retrieve data that was pre-formatted into their native sector size. The industry standardized on 512-byte sectors, and file systems and I/O stacks were all designed to operate on these fixed blocks.

Now fast-forward to today. Disk drives have powerful embedded processors, built on integrated circuits with spare real estate where even more could be added. Servers use RAID arrays with very powerful embedded computers that internally operate on RAID volumes laid out in units much larger than 512-byte blocks. These arrays use their embedded processors to emulate the 512-byte block interface of a late-1980s disk drive. Then, over on the server side, we still have file systems mapping files down to these small blocks as if it were talking to an old drive.
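
To make the mismatch concrete, here's a minimal sketch in C of the arithmetic happening on both sides. The numbers are hypothetical (a 64 KiB stripe unit, simple RAID-0-style striping with no parity, four disks), but the shape is the point: the file system chops everything into 512-byte logical blocks, and the array immediately maps those tiny blocks back onto its much larger internal layout.

    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512           /* the legacy block size everything emulates      */
    #define STRIPE_UNIT (64 * 1024)   /* a hypothetical stripe unit, far larger than a sector */

    /* File-system view: map a byte offset to a 512-byte logical block address. */
    static uint64_t offset_to_lba(uint64_t byte_offset)
    {
        return byte_offset / SECTOR_SIZE;
    }

    /* Array view: map that small logical block back onto a big stripe unit
     * (plain striping for illustration; real arrays add parity, caching, etc.). */
    static void lba_to_stripe(uint64_t lba, unsigned ndisks,
                              unsigned *disk, uint64_t *stripe_offset)
    {
        uint64_t byte_addr  = lba * SECTOR_SIZE;
        uint64_t stripe_num = byte_addr / STRIPE_UNIT;
        *disk          = (unsigned)(stripe_num % ndisks);
        *stripe_offset = byte_addr % STRIPE_UNIT;
    }

    int main(void)
    {
        uint64_t lba = offset_to_lba(1048576);   /* byte 1 MiB of some file */
        unsigned disk;
        uint64_t off;
        lba_to_stripe(lba, 4, &disk, &off);
        printf("LBA %llu -> disk %u, offset %llu within its stripe unit\n",
               (unsigned long long)lba, disk, (unsigned long long)off);
        return 0;
    }

Every I/O passes through the narrow 512-byte waist in the middle, only to be reassembled into large units on the other side.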

This is what I'm wondering about: is it time to stop designing storage subsystems that pretend to look like an antique disk drive? And is it time to stop writing file systems and I/O stacks designed to spoon-feed data to these outdated disks?

Comments:

Even stranger, to my mind, is that the ATA world hasn't managed to dump the horribly outdated Cylinder/Head/Sector addressing mechanics. Logical Block Addressing has been the de facto mechanism for 20 years for SCSI devices and an option for ATA for almost 15 years. Yet plenty of tools and PC BIOSes remain very CHS-focused. Sigh!

Posted by Mike Duigou on August 01, 2005 at 10:44 AM PDT #
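
As a footnote to Mike's point: LBA replaces the three-part CHS tuple with a single linear block number. A minimal sketch of the standard translation (the drive geometry values are passed in as parameters; sectors are numbered from 1, hence the subtraction):

    #include <stdint.h>

    /* Standard CHS-to-LBA translation. Cylinders and heads are 0-based,
     * sectors are 1-based, hence the (s - 1). */
    static uint64_t chs_to_lba(uint32_t c, uint32_t h, uint32_t s,
                               uint32_t heads_per_cyl, uint32_t sectors_per_track)
    {
        return ((uint64_t)c * heads_per_cyl + h) * sectors_per_track + (s - 1);
    }

With the classic 16-head, 63-sectors-per-track ATA translation geometry, CHS (0, 0, 1) is LBA 0 and CHS (0, 1, 1) is LBA 63.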
