

The History of WAN

WANs have moved from very good but expensive, rock-solid service on point-to-point leased circuits, which carried real regulatory weight and consequences for unreliability, to less reliable packet-switched technologies that offer QoS but come with substantially weaker SLAs and fewer consequences for failing to meet them. IPSec/SSL VPNs over broadband work well enough for some data users, but the broadband connections at remote offices run over providers and networks that have availability issues, are frequently oversubscribed, and deliver unpredictable latency, loss, and jitter. On their own, these networks are not reliable enough for most business-quality voice and mission-critical data needs.

The availability and bandwidth of broadband have continued to expand while the cost per bit continues to fall. Network applications and users have evolved to be more forgiving of less-than-"pin drop" quality service, yet they demand ever more bandwidth. And users want access to the internet and the cloud on their own terms, with their own devices, but on your constrained WAN budget.

The IT team is held accountable to the company and its users for an SLA of its own. In the end, providing reliable, cost-effective WAN services to the enterprise and its end users has shifted from being the service provider's problem to being the enterprise IT team's problem. The burden has grown, and the odds are less in your favor. How can we change those odds so the enterprise has a more probable path to success while still containing costs and giving users access to ever more bandwidth?


Expensive, deterministic technology evolves to redundant, inexpensive, highly probabilistic technology

A pendulum swings consistently in the world of technology evolution. A technology typically starts its first generation as a highly engineered, expensive, deterministic solution. Soon, as the marketplace evolves and competition grows, the need for cost reduction pushes the pendulum toward a much less expensive solution that is less deterministic and more probabilistic in nature. The initial low-cost solution is frequently too unreliable for many mission-critical uses, so the market is driven to improve the odds. As the market continues to mature, the technology is enhanced so that the pendulum swings back (retrograde) toward a higher-probability solution, where the likelihood of failure is lower but the economic cost remains reasonable.

When we use the term deterministic, we mean that the outcome of the technology's use is predetermined before it is used; there is little to no variability or chance that it will behave differently from its predetermined outcome. When we use the term probabilistic, we mean that the outcome is not entirely determined in advance; there is some chance the technology will not perform well, and the outcome can only really be established by empirically observing its actual use (heuristically). A low-probability technology is one where, relatively speaking, there is a substantial chance it will not perform well enough for its typical intended use. A higher-probability technology is one where typical use has a very high chance of being satisfactory for most use cases.

Why, you may be wondering, would anybody not want to use deterministic technology at all times? The simple answer: because probabilistic technology is almost always far less expensive, and therefore more plentiful, than expensive deterministic technology. When a cheap probabilistic alternative is developed, it can be used by many more consumers, resulting in a rapidly expanding market. The probabilistic technology is cheap, but its chance of performing reliably enough is too low for many of the use cases that evolved in the more deterministic era before it. By applying techniques such as redundancy and optimization for the intended typical use cases, the chances of successful outcomes with the probabilistic technology can be raised to a level acceptable to the greater market. This improvement in the probability of success comes at some incremental cost, but that cost is significantly less than the previously available deterministic alternative.
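
A quick back-of-the-envelope calculation illustrates the point. The sketch below uses invented, purely illustrative availability numbers to show how combining independently failing, inexpensive components raises the probability that at least one of them is working:

```python
# Illustrative sketch only: how redundancy raises the probability of success.
# The 97% figure is an assumption for the example, and real-world failures
# are rarely fully independent.

def redundant_availability(single_availability: float, copies: int) -> float:
    """Probability that at least one of `copies` independent components is up."""
    prob_all_fail = (1.0 - single_availability) ** copies
    return 1.0 - prob_all_fail

if __name__ == "__main__":
    cheap_component = 0.97  # one inexpensive, probabilistic component
    for n in (1, 2, 3):
        print(f"{n} in parallel: {redundant_availability(cheap_component, n):.4%} available")
    # 1 in parallel: 97.0000% available
    # 2 in parallel: 99.9100% available
    # 3 in parallel: 99.9973% available
```

Two modestly reliable components in parallel already behave much like one far more expensive, highly engineered component, which is the economic bargain this section keeps returning to.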

Let's look at a few non-networking historical examples:
Mainframes were, and are, expensive. Today they can cost from a hundred thousand dollars to millions of dollars per unit. Because of that expense, and the criticality of their function to their users, they were and are highly engineered for uptime and deterministic outcomes. Downtime is measured in seconds per year.

With the arrival of the microprocessor-based personal computer, the client/server world was born. PCs, even in the first generation, were relatively cheap at a few thousand dollars each. Many companies moved to PC platforms as cheap alternatives to expensive mainframes. It soon became evident that an off-the-shelf PC acting as a server was not up to the task for most enterprise users. Typical PCs running server software were not reliable enough for mission-critical use; they were cheap, but they failed far too often. So enhancements were made, at some incremental cost, to increase the redundancy and robustness within the server and raise the probability of the solution being reliable. To cite a few: redundant arrays of inexpensive disks (RAID), redundant power supplies, and battery backup systems were added, and the central processors, buses, and memories (ECC) were improved for reliability. These enhancements came at additional cost, but even with them, the cost of the higher-probability servers was substantially less than the prior deterministic mainframe equivalent. Mainframes still exist and are used for many of our everyday mission-critical needs, but microprocessor-based servers are a permanent player in the marketplace.

Another example is computer memory. In the 1980s, the race was on to increase the speed of all computer memories, because CPU speeds had advanced beyond the rate at which memories could feed them data; CPUs were stalling while waiting for memory to provide data to process. Memories were pushing the limits of physics and budgets. This resulted in very expensive Static RAM (SRAM) technology. Building systems around 10-nanosecond-access SRAM was a very deterministic approach to computer processing, but it came at great expense: SRAM costs a thousand or more times as much per unit as Dynamic RAM (DRAM). Dedicated SRAMs worked very well and were deterministic; it was possible to statically model algorithms mathematically and predetermine their performance. But the costs were exorbitant and prohibitive for anything but very specialized use cases.

The cheaper, probabilistic approach was to implement a CPU cache. Caching places a limited amount of SRAM as fast fetch storage on top of much cheaper and slower DRAM. Processor caches, however, are not deterministic. Some algorithms, memory scans for example, run worse on a CPU with caches than they would on a CPU with no caches at all; but for typical use cases, caches provide a high probability that the data the CPU needs will be there when required, without stalling the CPU for long durations. Larger caches further increase the probability of data being ready in cache, with fewer CPU stalls as a result. The first implementations of cache on Intel computers used SRAMs external to the CPU. Later the cache was integrated into the CPU to minimize I/O bus width and latency. When cache misses were still considered too costly, a second tier of cache was added. This L2 cache was typically bigger than the L1 cache and provided a second probabilistic layer that reduced the overall potential for misses. L2 caches also reduced the consequence of missing the L1 cache, since memory requests could frequently be served from the lower-latency L2 cache rather than the much slower DRAM. For many processors, the L2 cache was eventually moved onto the CPU die as well.
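
To see why caches are "probabilistic but usually good enough," consider the classic average memory access time calculation for an L1/L2 hierarchy. The hit rates and latencies below are invented for illustration; they are not figures from any particular processor:

```python
# Illustrative sketch: average memory access time (AMAT) for an L1/L2 cache
# hierarchy. Hit rates and latencies are assumptions chosen for the example.

def amat(l1_hit_rate, l1_latency_ns, l2_hit_rate, l2_latency_ns, dram_latency_ns):
    """Expected access latency: each miss falls through to the next, slower tier."""
    l1_miss = 1.0 - l1_hit_rate
    l2_miss = 1.0 - l2_hit_rate
    return l1_latency_ns + l1_miss * (l2_latency_ns + l2_miss * dram_latency_ns)

if __name__ == "__main__":
    # Typical workload: high hit rates keep the expected latency close to SRAM speed.
    print(f"Typical workload:       {amat(0.95, 1, 0.90, 10, 100):.2f} ns")
    # Cache-hostile workload (e.g. a large memory scan): hit rates collapse and the
    # cache layers add latency on top of the ~100 ns DRAM instead of hiding it.
    print(f"Cache-hostile workload: {amat(0.05, 1, 0.05, 10, 100):.2f} ns")
```

For the typical workload the expected latency stays near SRAM speed, while the cache-hostile scan ends up slightly slower than raw DRAM, which is exactly the deterministic-versus-probabilistic trade described above.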

So the pattern applies again: deterministic, costly technology is replaced with cheaper, intelligent, probabilistic technology using redundancy and optimization techniques.

The same pattern can be seen with DASD giving way to NAS appliances, TCP/IP networks over SNA, web browsing over 3270/VT100 terminals, virtual machines over dedicated servers, cloud over data centers, digital mobile phones over analog mobile phones, WAN optimization over local site file servers, remote desktops (RDI) over local PCs, and many more instances.

Interesting, maybe, but what does this have to do with my WANs?

The evolution of private WANs: LANs and WANs converge

Plain, old, slow, regulated, expensive but quality WAN service
In the mid-to-late 1970s and the 1980s, enterprises created wide area networks from interconnected sets of dedicated leased circuits using T1s and, later, T3s. These Time Division Multiplexed (TDM) point-to-point leased circuits were originally developed for digital voice (telecom) but served reasonably well for data communication (datacom). The circuits were deterministic but expensive per bit. Payloads were fixed and small cells were the rule, since bandwidth was low and large frames could potentially cause substantial voice blocking and jitter. What we now call plain old telephone service (POTS), provided by public switched telephone network (PSTN) providers, is remembered for its high-quality voice and data services over long distances. Sprint, for example, marketed the quality of its long-distance voice service by claiming you could audibly hear a pin drop over a long-distance phone call. The quality of the network was supported end to end: if you sent a packet into the network on one side, it came out the other side with extremely high confidence. You paid for a TDM time slot in the network, and you got it if you needed to use it. On the other hand, if you did not need to use it, you paid for it anyway; "use it or lose it" was the arrangement. These and other factors made the networks reliable but, by today's standards, relatively expensive.
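
As a reminder of how rigid those TDM circuits were, the arithmetic behind a T1 is worth spelling out. The figures below are the standard textbook T1 numbers, included here purely as an illustration of the fixed, reserved time slots:

```python
# Standard T1 arithmetic (textbook figures, shown for illustration):
# 24 DS0 channels, each 8 bits sampled 8,000 times per second,
# plus 1 framing bit per 193-bit frame.

channels = 24
bits_per_sample = 8
samples_per_second = 8000

payload_bps = channels * bits_per_sample * samples_per_second   # 1,536,000 bps
framing_bps = 1 * samples_per_second                            # 8,000 bps
t1_line_rate = payload_bps + framing_bps                        # 1,544,000 bps

print(f"T1 payload:   {payload_bps:,} bps")
print(f"T1 line rate: {t1_line_rate:,} bps (1.544 Mbps)")
# Each 64 kbps slot is reserved whether or not it carries traffic: "use it or lose it".
```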

Enterprise shift from deterministic, hierarchical, centralized mainframe systems to decentralized, fast, inexpensive, probabilistic LAN networks
Meanwhile, at the enterprise in the 1980s and 1990s, data centers were evolving away from hub-and-spoke, mainframe-centric datacom networks toward local area networks (LANs), which used common shared network infrastructure with much higher speeds and much lower cost per bit. The economy of collectively using shared resources started driving a new generation of applications and data movement models, such as client/server and peer to peer. Technologies such as CSMA/CD, Ethernet, LAN hubs, and LAN switches demonstrated that, for most uses, probabilistic LAN technology was good enough for many enterprises' quality needs, and the costs were low enough that they could afford to provide it to more users and for more diverse uses.

CoS/QoS improves probability, but more bandwidth is easier
Local area network technologies were enhanced to make them relatively higher probability. Methods such as token passing, priority queue (PQ) tagging, and non-blocking layer 2 switches were developed. For many enterprises, most local area network issues of low-quality service could be dealt with by the simple approach of just adding more bandwidth to eliminate congestion points. In the 1990s, it was common for LAN technology to grow in capacity by a factor of ten every few years. Local area networks were able to increase in speed more rapidly than wide area networks because the shorter distances permitted the use of cheaper high-speed copper technologies that are not viable over longer-haul wide area networks.
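
As a rough sketch of the priority queue (PQ) tagging idea mentioned above, the following shows strict-priority scheduling in miniature: tagged high-priority traffic is always drained before lower-priority traffic. This is an illustrative assumption of the general mechanism, not the behavior of any specific switch:

```python
import heapq

# Minimal sketch of strict-priority queuing. Lower priority number drains first,
# so tagged voice always goes out ahead of bulk data.

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # preserves FIFO order within the same priority class

    def __len__(self):
        return len(self._queue)

    def enqueue(self, priority: int, packet: str) -> None:
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        _, _, packet = heapq.heappop(self._queue)
        return packet

if __name__ == "__main__":
    sched = PriorityScheduler()
    sched.enqueue(2, "bulk-data-1")
    sched.enqueue(0, "voice-1")   # tagged as high priority
    sched.enqueue(2, "bulk-data-2")
    sched.enqueue(0, "voice-2")
    while len(sched):
        print(sched.dequeue())    # voice-1, voice-2, bulk-data-1, bulk-data-2
```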

LAN economics and WAN worlds converge, WANs become more probabilistic
In the 1990s and early 2000s, the worlds of enterprise telecom wide area networks and datacom local area networks began to converge with the emergence of competitive local exchange carriers (CLECs) and the end of the regulated era. The change was accelerated by the wide adoption of the Internet Protocol (IPv4) as the de facto standard at OSI layer 3 and above for all networks, LAN or WAN, over alternatives such as SNA, IPX, and OSI. New packet-switched WAN technologies were developed, such as ATM, SMDS, Frame Relay, and eventually MPLS, that were friendlier to variable packet sizes and offered higher speeds at less cost per bit, but that were probabilistic compared to the deterministic leased circuit services used before.

The capacity of the service providers' access networks greatly increased with the adoption of fiber within the providers' infrastructure. These speed improvements were brought to some enterprise customer premises, where customers could gain access to OC-3 (155 Mbps), OC-12 (622 Mbps), and OC-48 (2.4 Gbps), but the cost of these services to the enterprise was very high compared to the equivalent speed improvements available on the LAN. The availability of these high-speed optical circuits was limited to a small set of geographic locations. At remote offices, the WAN stayed slow and expensive.

But at least they had Service Level Agreements… Or did they?

(Note to the international reader: in this section we refer to historical patterns as they pertain to the United States market. Many of these same patterns may or may not have occurred in other national markets. The trends noted are relevant to the overall evolution of networking and the underlying approach to technology.)
