Wednesday Aug 18, 2004

Software pricing models must adapt

Huge advances in CPU manufacturing are forcing a very interesting and important change: software pricing models must adapt too. In the bad old days, software companies tried all sorts of clever schemes to prevent copying disks. It took many years and the ubiquity of the internet to finally get them to give up on that (yet the RIAA hasn't realized that music is software... go figure). Many software companies then moved to per-node or per-user licensing enforced with distributed license managers. This worked for a while, even though we could always spoof the license manager. When Sun introduced SMPs and they became widely available to the mass market, the software vendors tried per-CPU pricing, which is still common today.

But even that scheme is broken. Almost all of the major CPU manufacturers are now putting more than one CPU core on a single die. It is further complicated by multi-threading, where one CPU core runs two different threads (each with its own program counter) simultaneously. The upshot is that the traditional definition of a CPU used by software vendors no longer makes any sense. Occasionally, they will try some sort of horsepower basis for pricing, charging more money for more powerful processors, but that doesn't work well for a highly scalable product line such as SPARC, where the same binary can run on a wide variety of cost and performance targets.

So the software vendors are going to have to adapt again and find some other way to account for, and justify, charging different prices to different classes of customers. At Sun, this constantly drives us nuts because a customer often makes purchase decisions based on the total cost of the system: hardware and software. Today, a minor increase or decrease in the hardware cost may have a dramatic effect, sometimes in the counter-intuitive direction, on the software cost. Alas, as a hardware vendor supporting 3rd party software vendors, we are pretty much powerless to change their pricing policies. It is up to our customers to do so. Put pressure on your software vendors for a sane pricing method[1]. I'm not advocating that everyone adopt the JES pricing model, but find something that makes sense as we continue to build innovative hardware platforms. And that something can't be based on copy-protection or CPU counts.
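To see why counting "CPUs" gets ambiguous, here is a minimal, purely hypothetical sketch: the same dual-socket box yields three different license counts depending on whether a vendor keys on sockets, cores, or hardware threads. All the numbers and prices below are made up for illustration.

```python
# Hypothetical illustration: the same machine gives three different "CPU counts"
# depending on what a license manager decides a "CPU" is. Example figures are
# for an imaginary dual-socket, dual-core, 2-threads-per-core box.
sockets = 2              # physical packages the customer bought
cores_per_socket = 2     # cores on each die
threads_per_core = 2     # hardware threads (each with its own program counter)

counts = {
    "per-socket": sockets,
    "per-core":   sockets * cores_per_socket,
    "per-thread": sockets * cores_per_socket * threads_per_core,
}

price_per_cpu = 1000     # hypothetical list price per "CPU"
for definition, n in counts.items():
    print(f"{definition:10s}: {n} CPUs -> ${n * price_per_cpu}")
```

Same hardware, same workload, and the software bill varies by a factor of four depending on a definition the customer never agreed to.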

[1] there is no free lunch.

Tuesday Aug 10, 2004

Gigabit Ethernet is dual duplex!

James Hsieh recently blogged about Ethernet autonegotiation, which reminds me of one very cool feature of Gigabit Ethernet over copper. When 10BASE-T (10 Megabit Ethernet over unshielded twisted pair (UTP)) was standardized, it was a breakthrough technology which allowed people like myself to inexpensively and easily network large numbers of systems together. At the time, I was the Manager of Network Support for the College of Engineering at Auburn University. A few years previously, the entire campus had gotten a telecom overhaul and every building was wired for future networking technologies. We had UTP wiring with spare pairs everywhere, and we had fiber in a home-run configuration to the central switch room. It was quite an awesome blank canvas and we proceeded to try to network everything.

Anyway, back to the technology... when 100BASE-T (Fast Ethernet, 100 Megabit over UTP) came out, it was basically the same technology as 10BASE-T. Both used 2 pairs, one for transmit and one for receive. The big irritation with this was that if you wanted to connect two devices together without using a hub (or later, a switch) you had to use a "null-Ethernet" cable which cross-wires the transmit and receive pairs. While this wasn't nearly as irritating as the RS-232 cabling fiasco, it still caused lots of myths and problems.
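As an aside, the cross-wiring is easy to see on paper. The sketch below is illustrative only (the connects() helper is my own invention): it shows the two data pairs that 10BASE-T and 100BASE-TX actually use, and why a straight-through cable works host-to-switch while host-to-host needs the null-Ethernet crossover.

```python
# Illustrative only: the two data pairs used by 10BASE-T and 100BASE-TX.
# On a host (MDI) port, pins 1/2 carry transmit and pins 3/6 carry receive;
# a hub/switch (MDI-X) port has them reversed.
straight_through = {1: 1, 2: 2, 3: 3, 6: 6}   # host <-> hub/switch
crossover        = {1: 3, 2: 6, 3: 1, 6: 2}   # host <-> host ("null-Ethernet")

def connects(cable, a_is_mdi, b_is_mdi):
    """True if end A's transmit pair lands on end B's receive pair."""
    tx_a = {1, 2} if a_is_mdi else {3, 6}
    rx_b = {3, 6} if b_is_mdi else {1, 2}
    return {cable[pin] for pin in tx_a} == rx_b

print(connects(straight_through, True, False))  # host to switch: True
print(connects(crossover, True, True))          # host to host:   True
print(connects(straight_through, True, True))   # host to host:   False
```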

With Gigabit Ethernet over UTP (1000BASE-T) something had to give. They couldn't just bump up the speed by an order of magnitude. What they did was very, very clever and only possible with the advances in semiconductor technology at the time. First, they use all 4 pairs in a cat-5 cable, rather than 2. Next, they use a signalling method which allows a single signal level to represent more than one bit (no, I won't explain that here, see the official IEEE site for details). Finally, they put little DSP engines on each pair so they can operate in full-duplex mode, simultaneously receiving and transmitting on the same wires. This is known as dual-duplex. The smarts in a Gigabit Ethernet UTP interface are clever enough to negotiate all of this, including negotiating back to the prior, slower standards. All in all, it is vastly simpler to deploy. Just plug it in and it works! Goodbye null-Ethernet cables, I never liked you anyway!
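A quick back-of-the-envelope check shows how the pieces add up to a gigabit. This is just a sketch of the arithmetic: each of the 4 pairs runs at 125 million symbols per second, and each PAM-5 symbol carries 2 data bits.

```python
# Back-of-the-envelope check of the 1000BASE-T data rate.
pairs = 4               # all four pairs in the cat-5 cable carry data
symbol_rate = 125e6     # symbols per second on each pair (125 MBd)
bits_per_symbol = 2     # PAM-5 carries 2 data bits per symbol
                        # (the extra level is used for coding overhead)

rate = pairs * symbol_rate * bits_per_symbol
print(f"{rate / 1e6:.0f} Mb/s each way")
# -> 1000 Mb/s in each direction at once; the dual-duplex DSPs cancel the
#    local transmitter's echo so each pair carries traffic both ways.
```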

Deja-vu: Solaris on PowerPC

There's been a bit of water-cooler discussion this week about Solaris on PowerPC. Solaris was ported to PowerPC way back when everyone expected PowerPC to be the Next Awesome Architecture. For historical reasons beyond my control, the PowerPC bits weren't carried forward to the 64-bit port, Solaris 7. But the 'Net has infinite memory and you can still find some interesting references. WindowsNT?! heh, heh...
