Tuesday Apr 05, 2011

New Post

I am just doing a quick post to my blog.

Tuesday Jul 21, 2009

Is there room for Intelligence in the Internet?

One of the well-entrenched dogmas of the Internet is that the role of intermediate nodes has to be kept minimal, mainly at the level of a dumb lookup 'n forward action based on packets' destination addresses. The cost of adding any intelligent processing per packet is believed to be prohibitively high. Traditionally, the function of a router is compared to that of an airport in the air travel industry.


The space and time considerations for routers and airports are summarized in the following table:

    Space
        Airport: high cost of real estate; closeness to the departure gates
        Router:  buffering capacity

    Time
        Airport: short time for travelers
        Router:  marginal per-packet latency

Real estate in an airport is very limited, because all vendors compete for the areas with the highest passenger traffic. These are typically the areas close to the departure gates, where passengers tend to spend the longest waiting time after clearing security. A router likewise has limited fast memory, intended only as buffering space to absorb the occasional peaks and jitter in packet inflows. It is not intended for storage: the moment it fills up, the router has to resort to the dreaded dropping of packets.


Time is also scarce for a traveler through an airport. Since the whole travel experience is non-productive and not very enjoyable time, the shorter it is the better. Even more so for packets crossing an intermediate node in the Internet: the marginal per-packet latency has to be kept as short as possible. With dozens of intermediate nodes between the source and the destination of a packet, the time cost of each extra hop can add up to long delays that degrade the overall quality of service. It can also cause some transports to back off rather aggressively, in an attempt to avoid network congestion.


Providing any service in the airport beyond getting to the next leg of the trip is expensive, because vendors need to charge more to recover the high fixed costs. For similar reasons, adding any extra processing in the path of packets has been highly discouraged for router designers and intentionally made difficult by the routing protocols.


Implications for routing design


Routing protocols take the limitations above into account. The lack of buffering capacity makes it very inconvenient to keep per-flow state, for example. To a router, all packets are anonymous and all traffic is stateless. For example, a router typically does not reconstitute fragments of big datagrams, because such functionality would require setting aside memory for storing the fragments until the last one is received. The main reasons behind that kind of restriction are, first, the fact that the router is a shared resource, and tying it up for any specific flow means it is less available to other occupants of the network, with a risk of exposure to denial of service by malignant or bogus clients. The second reason is the unavoidable extra latency from adding non-trivial packet processing in the network.


Of course, the need to add more than just forwarding of packets in the network is real, and it has been addressed by special-purpose hardware (Network Processors). These hardware appliances commonly run some embedded OS with limited programmability. You may find specialized appliances that apply some transformation to the payload (e.g. encoding for video or audio streams) or to the shape of the traffic. Other appliances may be dedicated to intercepting traffic of a certain type for scanning and analysis, or for metering and charging, for example.


Adding any intelligence to packets in flight is costly. It involves parsing each packet's chain of relevant headers, looking up a predefined flow table to retrieve a matching rule, executing the action for that rule, and finally keeping a record (or account) of that action. This is typically implemented in a special ASIC. It is no job for software running under a general-purpose operating system on an off-the-shelf computer.
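For concreteness, here is a minimal sketch in C of the parse / lookup / act / account loop just described. All structures and function names are hypothetical illustrations, not taken from any real router OS; real routers do this in hardware, at line rate.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 5-tuple extracted from a packet's relevant headers. */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
} flow_key_t;

typedef enum { ACT_FORWARD, ACT_DROP, ACT_REDIRECT } action_t;

/* One rule in the predefined flow table. */
typedef struct {
    flow_key_t key;
    action_t   action;
    uint64_t   hits;   /* record of actions taken (accounting) */
} flow_rule_t;

/* Step 1: parse IPv4 + TCP/UDP headers into a key (no VLAN, no options). */
static int parse_headers(const uint8_t *p, size_t len, flow_key_t *k)
{
    if (len < 20 || (p[0] >> 4) != 4)
        return -1;                           /* not a plausible IPv4 packet */
    size_t ihl = (size_t)(p[0] & 0x0f) * 4;
    if (len < ihl + 4)
        return -1;
    k->proto    = p[9];
    k->src_ip   = (uint32_t)p[12] << 24 | p[13] << 16 | p[14] << 8 | p[15];
    k->dst_ip   = (uint32_t)p[16] << 24 | p[17] << 16 | p[18] << 8 | p[19];
    k->src_port = (uint16_t)(p[ihl] << 8 | p[ihl + 1]);
    k->dst_port = (uint16_t)(p[ihl + 2] << 8 | p[ihl + 3]);
    return 0;
}

/* Step 2: look up a matching rule (linear scan here; ASICs use TCAMs). */
static flow_rule_t *lookup(flow_rule_t *table, size_t n, const flow_key_t *k)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].key.src_ip == k->src_ip &&
            table[i].key.dst_ip == k->dst_ip &&
            table[i].key.src_port == k->src_port &&
            table[i].key.dst_port == k->dst_port &&
            table[i].key.proto == k->proto)
            return &table[i];
    return NULL;
}

/* Steps 3 and 4: execute the rule's action, then account for it. */
action_t classify(flow_rule_t *table, size_t n, const uint8_t *pkt, size_t len)
{
    flow_key_t key;
    if (parse_headers(pkt, len, &key) != 0)
        return ACT_DROP;
    flow_rule_t *rule = lookup(table, n, &key);
    if (rule == NULL)
        return ACT_FORWARD;   /* no rule: default dumb forwarding */
    rule->hits++;             /* keep a record of the action */
    return rule->action;
}
```

Paying this cost in general-purpose software, for every packet, is exactly what conventional wisdom rules out.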


Or is it?


Opening the Door to More Intelligent Networks


Looking back at network virtualization and resource partitioning (aka Crossbow), it is a collection of innovative solutions to complex networking and resource management problems in the data center. It brings a deeper understanding of applications' and virtual machines' footprint on the network, and offers a more efficient and controlled use of networking resources. From that point of view, Crossbow is an effective solution for reducing the deployment and operation complexity and costs of increasingly virtualized and network-dense data centers.


There is a more exciting (to me at least) aspect to Crossbow, which is at the heart of this blog's topic. The Crossbow technology ushers in the convergence of networking and computing. By interacting directly with the advanced hardware capabilities of modern NICs (support for multiple receive and transmit rings/lanes, hardware classification, MAC address filtering, etc.), and by changing the scheduling of network processing to be host driven instead of interrupt driven, Crossbow delivers a vertically integrated network and computing stack that can meet a great deal of the challenges facing the router designers mentioned above. The laborious packet parsing and classification is off-loaded to the hardware of an intelligent NIC. Packets are steered on the fly to the hardware lane that matches the flow they were programmed for. From that moment, packets are handled in accordance with the priority, bandwidth allocation, and CPU assignment configured for the flow. They can be forwarded by IP after a route lookup, delivered to the destination application, or passed along to a kernel-level consumer function that may transform the packets and/or re-inject them into the traffic. The right balance between what is off-loaded to the hardware and what runs under the main OS on one side, and the virtually limitless programmability for applications on the general-purpose OS on the other side, is what makes adding arbitrary processing to data in transit credibly possible in software, with no need for expensive custom-made dedicated hardware.
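To illustrate the kind of per-flow policy such a lane can enforce, here is a small C sketch of token-bucket bandwidth policing. The lane structure and names are hypothetical and for illustration only; this is not Crossbow's actual implementation, which configures flow properties administratively (with tools like flowadm) rather than in application code.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-flow "lane": the policy attached to a hardware
 * ring (priority, bandwidth cap, CPU binding). Illustrative only. */
typedef struct {
    int      priority;     /* scheduling priority for this flow      */
    int      cpu;          /* CPU the lane's processing is bound to  */
    uint64_t rate_bps;     /* configured bandwidth allocation        */
    uint64_t bucket_bits;  /* tokens currently available             */
    uint64_t bucket_max;   /* burst allowance                        */
    uint64_t last_ns;      /* last refill timestamp                  */
} lane_t;

/* Refill tokens in proportion to elapsed time, then decide whether a
 * packet of pkt_bits fits within the lane's bandwidth allocation.
 * (Overflow in the refill arithmetic is ignored for brevity.) */
bool lane_admit(lane_t *l, uint64_t now_ns, uint64_t pkt_bits)
{
    uint64_t earned = (now_ns - l->last_ns) * l->rate_bps / 1000000000u;
    l->last_ns = now_ns;
    l->bucket_bits += earned;
    if (l->bucket_bits > l->bucket_max)
        l->bucket_bits = l->bucket_max;
    if (pkt_bits > l->bucket_bits)
        return false;      /* over allocation: defer or drop */
    l->bucket_bits -= pkt_bits;
    return true;           /* within allocation: process now */
}
```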


Crossbow Technology: A Game Changer


There are obvious limitations to what I just described in the previous paragraph. The hardware capabilities of the NICs, the total number of NICs that can be added, the speed across the I/O bus, etc., are all limiting factors. With that said, I do view Crossbow as a disruptive technology that changes the parameters of the game in many respects. Essentially:



  • It lowers the bar of entry for developers who see many opportunities for more intelligent processing inside the network, but have so far been kept out of the game by the steep walls of the proprietary and closed OSes of major router vendors. You can design, develop, test, and debug the application using the same convenient IDEs as for desktop applications.

    Further, the developer can build a Crossbow VWire that emulates the conditions (multiple hops at varying link speeds and delays, etc.) of an arbitrarily complex network, where the behavior of applications can be studied and debugged in the virtual world before real-world deployment [1].


  • It changes the economics of the router market. There is definitely a low- to mid-range line of routing products that do not require the top performance of high-end specialized routers. The needs of that market can be met by commodity servers built with off-the-shelf components and running OpenSolaris/Crossbow. A solution vendor can use these commodity appliances, which cost a fraction of the price of traditional closed-source routers, to conveniently and quickly develop their solution, bundle it as an appliance, and market it at a very competitive price.



In future blogs, I'll be sharing the experience of the Crossbow team's cooperation with a few partners that successfully designed solutions based on commodity hardware running OpenSolaris/Crossbow. Some of them were featured during the OSCON 2009 Crossbow BOF and the latest JavaOne/CommunityOne conference.


References:


[1] S. Tripathi, N. Droux, T. Srisuresh, and K. Belgaied. Crossbow: From hardware virtualized NICs to virtualized networks. In Proceedings of the ACM SIGCOMM Workshop VISA'09 (to appear), 2009.

Thursday Feb 21, 2008

Dynamic Bandwidth Allocation

Resource control features of project Crossbow can be used proactively to offer fair and predictable sharing of bandwidth resources. They can also be employed reactively to contain unexpected surges of network traffic, or to help mitigate a denial of service attack.


Wednesday Aug 29, 2007

Latency matters

I recently went across a bridge in Austin, TX, that had two speed limits: a minimum (45 mph) and a maximum (65 mph). This made sense: the bridge has a limited capacity, and allowing slow vehicles would crowd it and create a bottleneck. The Internet is often compared to the highway system, and this example actually applies here too.

Web transactions involve multiple tiers of servers in the data center: a front-end tier (or web tier) and a set of other servers collaborating in handling the transaction. These may include an authentication service, a database service, and a number of other applications in the data center. Some of these maintain state for the whole transaction, or for phases thereof. Resources are occupied by these state contexts while the transaction is executed. The higher the latency between the various collaborating nodes, the longer the resources stay occupied, and consequently the fewer transactions the overall distributed system can accommodate.

Latency is usually associated with the response time of a single transaction, which results in a quicker execution of a trade order, for example, or a shorter waiting time for a user. It turns out that latency also plays a role in the scalability of a distributed system's capacity, and indirectly affects resource provisioning.

Now a disclaimer about the bridge example: I am in no way advocating exceeding driving speed limits under the pretext that it helps reduce traffic congestion.
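To put rough numbers on the capacity argument above, Little's Law (L = λ·W: occupied contexts = transaction rate × time in system) gives a back-of-the-envelope estimate. The C sketch below uses made-up numbers, purely for illustration.

```c
#include <stdio.h>

/* Little's Law: with a fixed pool of L state contexts, the sustainable
 * transaction rate is λ = L / W, so every added millisecond of
 * inter-tier latency W directly shrinks capacity. Numbers are
 * illustrative only. */
int main(void)
{
    double slots = 1000.0;   /* state contexts the tier can hold */
    double latencies_ms[] = { 5.0, 10.0, 50.0 };

    for (int i = 0; i < 3; i++) {
        double w = latencies_ms[i] / 1000.0;   /* seconds per txn */
        printf("latency %5.1f ms -> max %8.0f txn/s\n",
               latencies_ms[i], slots / w);
    }
    return 0;
}
```

With 1000 contexts, going from 5 ms to 50 ms of latency cuts the ceiling from 200,000 to 20,000 transactions per second, a tenfold loss with no change in any single transaction's correctness.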

Saturday May 06, 2006

Bridging the two worlds of cryptographic APIs

Those who follow the OpenSolaris security community have probably seen Darren Moffat's latest announcement on OpenSolaris' security list, about OpenSolaris' contribution to OpenSSL.

Sun just contributed the OpenSSL PKCS#11 engine to the OpenSSL community. This is a very significant development in the cryptographic programming interfaces arena.

Over the last decade, many proprietary APIs for cryptography were developed, deployed with some limited success, and were usually tied to their owner's application.

Two major APIs, however, reached the critical mass of adoption by multiple vendors to become de-facto standards for applications that need to include encryption, decryption, key management, etc. in their security mechanisms.

The first one is OpenSSL's, which is a simple and quite intuitive interface. Its "killer" application is the Apache web server. It has many other consumers, and many introductory computer science classes on security and cryptography use OpenSSL's libcrypto.

PKCS#11 (Public Key Cryptography Standard #11, also known as Cryptoki or the Cryptographic Token Interface) is the other one. It originated from a consortium of vendors of cryptographic solutions, fostered by RSA Security, Inc. Its earlier versions followed the model of a single user of a personal cryptographic device, referred to as a "token"; the closest image illustrating this model is a smartcard. Soon enough, PKCS#11 evolved into a full-fledged cryptographic API. In addition to basic encryption operations, it offers secure persistent storage of keys, as well as well-defined access controls and authentication mechanisms. NSS (used by Mozilla), the JES web server, and LDAP directory servers are among the biggest native users of PKCS#11.

The presence of the two de-facto standards has meant, so far, that vendors of hardware cryptographic accelerators need to write, deploy, and maintain two cryptographic libraries in order to maximize their utilization by customer applications. The fact that the two standards evolve independently, each at its own pace, means that vendors have to produce patches and updates to keep up with both.

There is a more fundamental difficulty though. Having been the technical leader of the Solaris Cryptographic Framework team, I learned the difficulty of having to juggle two models for invoking cryptographic functions: a sessionless model (OpenSSL's), where all the information needed to execute a crypto operation (algorithm, keys, plaintext and ciphertext locations) is readily available within the context of that operation, and a session-oriented model (the PKCS#11 way), where the caller establishes a session, authenticates it as the case may be, creates or retrieves the key objects, then performs crypto operations in the context of that session. The challenge was to put in place a robust, flexible, and scalable framework that internally interfaces with vendors plugging in as providers exposing either model, while externally exposing a unified programming interface. Offering the same level of pluggability and programmability in both userland and kernel added a whole other dimension to the challenge ;) (but that's another story, for a future blog, perhaps)
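To make the contrast between the two models concrete, here is a condensed C sketch: a sessionless encryption through OpenSSL's EVP interface, followed by the session-oriented PKCS#11 sequence. Error handling, slot discovery, and key creation are elided, and the header locations and mechanism choice (AES-CBC) are illustrative assumptions.

```c
/* Sessionless model (OpenSSL libcrypto, EVP interface): everything the
 * operation needs is supplied within the call sequence itself. */
#include <openssl/evp.h>

int encrypt_openssl(const unsigned char *key, const unsigned char *iv,
                    const unsigned char *in, int inlen,
                    unsigned char *out, int *outlen)
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int n = 0, fin = 0;

    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &n, in, inlen);
    EVP_EncryptFinal_ex(ctx, out + n, &fin);
    *outlen = n + fin;
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}

/* Session-oriented model (PKCS#11): the caller opens and authenticates
 * a session on a token, binds a key object to the operation, and works
 * within that session. Header name/location varies by vendor. */
#include "pkcs11.h"

CK_RV encrypt_pkcs11(CK_SLOT_ID slot, CK_UTF8CHAR_PTR pin, CK_ULONG pinlen,
                     CK_OBJECT_HANDLE key, CK_BYTE_PTR iv,
                     CK_BYTE_PTR in, CK_ULONG inlen,
                     CK_BYTE_PTR out, CK_ULONG_PTR outlen)
{
    CK_SESSION_HANDLE sess;
    CK_MECHANISM mech = { CKM_AES_CBC, iv, 16 };

    C_Initialize(NULL_PTR);
    C_OpenSession(slot, CKF_SERIAL_SESSION, NULL_PTR, NULL_PTR, &sess);
    C_Login(sess, CKU_USER, pin, pinlen);      /* authenticate session */
    C_EncryptInit(sess, &mech, key);           /* bind key + mechanism */
    C_Encrypt(sess, in, inlen, out, outlen);   /* one-shot operation   */
    C_Logout(sess);
    C_CloseSession(sess);
    C_Finalize(NULL_PTR);
    return CKR_OK;
}
```

A unified framework has to sit above both shapes at once, which is exactly the juggling act described above.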

With the PKCS#11 OpenSSL engine now part of the OpenSSL distribution, vendors of cryptographic cards can focus on developing a PKCS#11 library for their hardware, and be useful to all user-level consumers of either OpenSSL's libcrypto or the PKCS#11 Cryptoki. This benefit extends to any operating system on which the vendor supports a PKCS#11 interface with their driver.

For vendors that write to the kernel-level cryptographic SPI (service provider interface) in Solaris 10 and later update releases, the crypto hardware can be utilized by other kernel-level Solaris consumers, such as Kerberos, IPsec, or the Kernel SSL proxy.

It is exciting to see the industry maturing enough to have such a smooth consolidation effort materialize without sacrificing either API. It's exciting because vendors, less burdened by supporting multiple APIs, can focus on innovating in their core competencies: faster, safer, and better quality hardware cryptography.

Monday May 01, 2006

Thinking Big, in order to Successfully Serve the Small

By its size, a small enterprise can't afford to invest in a big IT infrastructure. Instead of spending a few million dollars on computers, network equipment, and high-speed connectivity, it is more cost effective to delegate all application hosting and web presence to out-sourcing companies.

Though still maturing in the IT arena, this concept of co-hosting is hardly new in other industries, such as textile, sporting goods, electronics, etc. It is no secret that Adidas, Reebok, and perhaps a dozen other sport shoe brands are manufactured on the same fab lines in East Asia.

While the big players see the reduction of production cost as the main advantage of outsourcing, such a model lowers the bar of entry for small and medium enterprises. It allows them to experiment, and to take higher risks, because the cost of failure is much lower. The advantage is twofold: investors are more willing to take the risk with them, and the company can focus on innovation in its core competence, its product design, to differentiate itself from the competition, and pay a specialized third party for the manufacturing services. Of course, the newcomer needs to rely on its manufacturing vendor to have the equipment and skills to do the job, and to be trusted not to disclose its product's unique features to the competition.

Let's take another example closer to IT. A medium-size hotel typically operates in a seasonal way. In the high season, it has hundreds of check-ins/check-outs daily, a big supporting staff, etc. It requires a few servers connected with high-bandwidth pipes to handle the transactions needed to run the business. During the low season, on the other hand, the needs may go down to a fraction of the maximum capacity. Outsourcing the IT services for this enterprise allows it to better adapt to a fluctuating and periodic business.

The two examples above illustrate the two main features that small and medium businesses look for in an IT outsourcing vendor:

  • Security, because they need to trust that the vendor doesn't mix their operations with their competitors'.
  • Flexibility, allowing them to increase or decrease the amount of resources utilized, at will. This is also referred to as on-demand computing.
Client needs drive the requirements on the side of the outsourcing companies, such as the (Application, Internet, ...) Service Providers. Such *SPs want to offer each one of their customers the illusion of having full control over their application's running environment.

Additionally, they need to put a price tag on the share of resources they dedicate to each client. The price has to be proportional to the actual utilization, hence the need for accurate accounting of all resources. In order to fairly accommodate multiple clients, the service provider enforces limits on, and guarantees of, their resource consumption.

Finally, since the *SP, like any other enterprise, endeavors to grow and thrive, it needs the ability to maximize the utilization of the resources it has invested in, and, when nearing maximum capacity, the ability to scale its infrastructure horizontally in order to host new customers or upgrade the level of service for existing ones.

We're talking here about the emerging need for Virtualization.

At this stage of IT's evolution, virtualization is among the fastest growing, and probably most changing, areas. Since the needs were first perceived by middleware application developers, early ad-hoc virtualization solutions were developed at that level. Stepping back though, a DB management system, a directory server, and a web server have practically the same virtualization needs. They all require fine-grained access control, secure separation between multiple instances running in virtual domains, and guarantees of resource entitlement. It only makes sense that virtualization services are getting sedimented into the lower layers of the operating environment.

Xen, Zones, and BrandZ (see OpenSolaris.org) are the main OS virtualization techniques native to Solaris or currently under development. Each serves specific customer needs, and is tailored to particular deployment scenarios. They all offer isolated execution environments for processes, and more or less complete virtualization of the machine's CPU and storage resources. Network virtualization has been rudimentary though. Project Crossbow started a few months ago in order to fill that gap in the virtualization technologies offered by Solaris's networking subsystem. It includes the ability to share access to networking resources (NICs and connectivity) between multiple virtual domains, while ensuring:

  • Fairness: a single domain gets its share of the resource, and doesn't stomp over others'.
  • Accuracy: a domain's utilization of resources is accurately metered.
  • Optimal utilization: sophisticated hardware networking resources, such as multiple receive and transmit queues, offloading capabilities, etc., are leveraged as much as possible.
  • Robustness: denial of service attacks that aim at exhausting networking resources are contained.
  • Flexibility: seamless and live assignment and modification of networking resource attributes for virtual domains.
In the end, successful companies are the ones that make their customers profitable. Outsourcing companies that offer a fair, flexible, reliable, and secure operating environment contribute to the success of their SME/SMB customers. For technology leaders, success will be measured by their ability to offer innovative solutions that empower the outsourcing companies to meet their customers' needs, while maximizing their Return On Investment.
