Enemy Of State
By stern on Jun 28, 2009
Carl Trieloff from Red Hat started literally down in the wires, talking about AMQP and how it might change the way we think about state persistence. Rather than worrying about the endpoints for state management, Trieloff argued that we should think about the messaging vehicles themselves as mechanisms for ensuring that we don't create interoperability and persistence problems. I was reminded of Reuven Cohen's blog proclaiming XMPP as the new glue of the internet, supplanting HTTP, citing the use of XMPP in Google's Wave protocol as evidence. While I never believe a protocol spec serves as physical proof of a phase change in the matter of any system (SOAP-based web services, anyone? Bueller?), it is one more indicator that the way in which systems carry their state is becoming as critical as where the state is preserved, particularly if the state is short-lived (whether edits to a Google document or stock exchange order book information).
Stanley Young, CIO of NYSE Euronext, discussed the exchange's core messaging platform, built on the Wombat engine, since acquired by the NYSE. It's another example of messaging trumping structured data management, and it served as a foundation for Young's discussion of how future exchanges - emphasis on the plural - will be built. He declared that the "data center is the new trading floor" and that nearly 80% of the future NYSE Euronext data center will be available for co-location and what is effectively a private hosted data center. He closed by stating that the NYSE's goal is to be able to spin up a new market in 24 hours: the listed instruments, settlement functions, and order management defined, deployed, and connected to a critical mass of players that truly defines "capitalism". If you can't value it, trade it, and make it fungible, it's not capital. The NYSE has its eyes set on expanding, rather than contracting, the capital exchanges. It's an equally strong statement about the growing importance of application agility.
I got to speak after them but before the coffee break, which is a slightly better slot than the "after lunch nap hour". While going through an update on cloud computing use cases for test and development and space/time adjunct scaling, as well as thoughts on building private clouds, I emphasized how cloud computing is making us rethink reliability. You can't build a cluster out of what you can't physically configure - unless you do it in software.
Application reliability has historically been about recovering state after a failure. With a virtualization layer intermediating between the application and the underlying hardware, tried-and-true clustering methods no longer make sense. Rather than keeping state in memory, we should be encapsulating it (hence the emphasis on REST); similarly, we should be putting applications in more explicit control of how they replicate data and memory instances. This doesn't mean that persisted state goes away -- databases, table-oriented stores (BigTable, SimpleDB), and replicated file object systems (Mogile, HDFS) are going to increase in use, not decrease. But each of those components has explicit control of replication and failure recovery, rather than relying on clustering at the hardware level to do it implicitly.
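To make the distinction concrete, here's a minimal sketch of what application-controlled replication looks like, in the spirit of the quorum schemes those systems use. Everything here is illustrative -- the class name, the quorum sizes, and the in-memory "replicas" are my own stand-ins, not the API of BigTable, SimpleDB, or HDFS:

```python
import random

class ReplicatedStore:
    """Toy sketch: the application writes each value to N replicas and
    enforces its own write/read quorums, instead of relying on hardware
    clustering to mirror state implicitly. All names and sizes here are
    illustrative assumptions, not any real system's interface."""

    def __init__(self, n_replicas=3, write_quorum=2, read_quorum=2):
        self.replicas = [{} for _ in range(n_replicas)]
        self.w = write_quorum
        self.r = read_quorum
        self.clock = 0  # version counter standing in for a real timestamp

    def put(self, key, value):
        self.clock += 1
        acks = 0
        for replica in self.replicas:
            # A real system would tolerate per-replica failures here;
            # we just count acknowledgements toward the write quorum.
            replica[key] = (self.clock, value)
            acks += 1
        if acks < self.w:
            raise RuntimeError("write quorum not met")

    def get(self, key):
        # Read from a quorum of replicas and keep the newest version --
        # failure recovery is explicit: stale replicas are simply outvoted.
        versions = [rep[key] for rep in random.sample(self.replicas, self.r)
                    if key in rep]
        if not versions:
            raise KeyError(key)
        return max(versions)[1]
```

The point isn't the twenty lines of bookkeeping; it's that the replication factor and the recovery policy live in the application's hands, where they can be tuned per workload, rather than buried in a cluster interconnect.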