Wednesday Jun 14, 2006

Happy Birthday, Open Solaris

OpenSolaris 1 Year Anniversary: just a year since Sun made Solaris 'open'. While I'm not a Solaris guru, I've been a fan for a very long time and it's one of the reasons I came to Sun. I've experimented with various operating systems on my work laptop, and since peer pressure makes it harder and harder to use Windows for this and my Linux build is a bit knackered (because of a poor install), I've turned to Solaris over the last couple of months. I also really want to play with some of the Solaris 10 features and the N1 products, and to help some friends solve some administration problems, so I have recently turned my attention to the Solaris build on my laptop, just in time for OpenSolaris' first birthday party.

This article talks about connecting to the internet over a wireless connection & obtaining some additional software products.

Solaris Express (Nevada build 35) goes on quite nicely; add Casper's frkit and wireless works. I have used it quite happily at home, at work and at BT sites around the country.

At home, I have a Linksys router, so I have created a profile for inetmenu for use at home. This involves creating a file in /etc/inetmenu; the name may not have spaces. I created it by copying the sample in /etc/inetmenu and amending the domain, IP address and DNS server lines. I have done this because the Linksys router's DHCP server seems a bit shit (or, alternatively, optimised for Windows), and good friends of mine work around this by avoiding the Linksys DHCP service for their UNIX-based systems. NB when I say UNIX, I mean Linux and Solaris; I haven't checked what my Mac-using colleagues do. So I use a static IP address. I am experimenting with whether I need to specify the ISP's DNS server or not, as I currently have a Firefox (but not Mozilla) problem.

I have downloaded Firefox (see here....) and Thunderbird, obtained my favourite themes, and downloaded the 'proxybutton' extension. (Why? See here...) The system comes with StarOffice 7, so I have got myself a copy of SO8, which supports the Open Document Format. I installed this as root and then edited the JDS Launch Menu entry for SO7 to invoke Eight; this may mean some faulty National Language Support, but it seems to work. (Launch -> focus on SO7 and [right click] -> Edit Properties.) I still need to check out Adobe Acrobat, which is still only available at some unbelievably old version, so I am using the Gnome embedded PDF reader. (Adobe! Get your act together.)

I shall also be looking to install Bluefish, my preferred HTML editor for UNIX. (I just love their fish logo).

Bluefish

NB This article isn't written with Bluefish.

Altogether, nearly an acceptable work platform.


Thursday Apr 27, 2006

Open Source in the UK

I attended an "Open Source Day" conference at BT's Global R&D centre, having arranged for Simon Phipps to speak on behalf of Sun at the event. We took a couple of laptops with Solaris "Nevada" builds 35/37, and Simon Cook (who blogs here..., and is now in my sidebar) demonstrated the use of Zones, NetBeans and Glassfish.

It was great to meet so many people, who fell into two camps: those that remember Sun as an open systems company, and those that have never known it and repeat our competitors' propaganda that we're proprietary. It was good to have the opportunity to explain what we do.

Another article posted several days after the event and backdated.


Friday Mar 10, 2006

About: opensolaris appliances community

I've been considering how best to contribute to the OpenSolaris so-called appliances community, and while there is an excellent discussion forum, I think I'll see if I can create a Technorati watchlist that the opensolaris site can access using RSS.


Saturday Mar 04, 2006

More......

I have now (yeah really) added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation & posted my notes from the N1 Architecture sessions.


Friday Mar 03, 2006

Virtualisation has two dimensions

I have added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation. It seems like I didn't! Sorry. I have now.


Data Centre Vision

Sohrab has sent me an edited version of his slides. You can download it here..., and I have put it up as a bookmark in the right-hand sidebar. It's the "N1 (.pdf)" one.


Wednesday Mar 01, 2006

Architecting DC automation

During a presentation on N1, Garima Thockburn & Doan Nguyen showed an architectural functional model for the applications provisioning technology.

N1 Service Provisioning Architecture diagram

What's really cool is that, as I'd expect from Sun (although we can forget what it made us), openness is being designed in from the ground level. It's recognised that a DC automation solution will have to utilise external capabilities. A second piece of ideology is borrowed from SOA: the products need to workflow-manage various transactions, i.e. systems management transactions in the data centre. (This is the service manager.) The whole SOA vocabulary of composition and orchestration fits the problem. The industry (or community, if you prefer) is also developing business application design skills for the infrastructure. This is also necessary if automation is to succeed. The block diagram puts workflow at the centre of the design. NB The diagram is accurate today but may change.

We were given the opportunity to workshop with members of the team. We had a discussion about solutions selling and examined the reality of the partnering dimension. I argued that we need to take more ownership of the relationship; in many cases, our "Partners" don't use that word to describe us. On a good day, we're suppliers; on a bad day, we're competition. I have thought this through since I said it, and it's clear that this problem requires inter-company co-operation. At least Sun is comfortable working with others, and partnership will help our customers meet broader requirements today; as I said, partnering helps meet the heterogeneity requirement, and at the least we'll need help dealing with oddities. Matthias Pfuetzner (a German colleague) argued for openness of the products' APIs at both the plugin level, which determines what objects the software can manage, and at the UI level, because as the plugin capability extends, it may be necessary to customise the UI to invoke and monitor the extended functionality. I like his idea of making a "plugin builder".


Designing Data Centre automation solutions

Today was planned as a look at Sun's system management solutions, and the day was started by Sohrab Modi, the VP for the group. We really have a problem with our branding, but we're sticking with N1: managing maNy systems as 1. Sohrab's presentation, called "Simplify, Integrate, Automate", hit two key points for me.

First, N1 is a solutions sale and the customer will define the scope and boundaries. Sun's field needs to capture these requirements, map both Sun and partner products to them, and take integration responsibility; at the least, the solution design needs to be a collaborative process between the customer and vendor. Our proposition needs to be that we meet your need and solve your problems. It's not good enough to try and win feature/benefit beauty parades. Sohrab showed how, in the next 12 months, Sun (his people) will be bringing virtualisation and management products to market that allow us to address more and more comprehensive requirements. He demo'd some of the more advanced features of our logical domaining and provisioning technology, which gives me confidence that his team are finally getting their delivery train in order; it's been a long hard struggle for them. (One of his staff produced one of the analyst slides showing Sun in a strong position today; at least we're gaining mind share.)

The second piece of his talk that I liked is his two dimensions of consolidation. The virtualisation technologies mean that the data centre might have more operating system images, which without automation is more difficult and costly to manage; therefore we have to aggregate systems into run-time resources and management objects.

2D of Virtualisation

This is something I have been calling pools, but I suspect that Sun and the industry will call them something else. The demand for multi-node pools by applications is a crucial part of the capability supply-and-demand nexus, and the economics and architectural constraints that aggregation enables and creates make solutions design and integration more important, not less. It is unlikely that one technology can answer the management consolidation and virtualisation problems.

N1 is a solutions sell. Customers will define the boundaries of scope, and we need to meet the whole of the customer's problem. Sohrab gets this and is committed to a full technology offering that meets customer needs. We're not looking at point product releases; we're selling/building a platform, and field people (i.e. me) need to learn how to ensure that customers define problems, Sun offers technology, and the sales process collaborates on solutions design. As I said, we don't want to sell each of our N1 products on a feature/benefit basis, and customers shouldn't want to buy that way.

Presentations later in the day emphasised Sun's co-opetitive approach, building from the ground up to enable integrators to add value. These integrators are as likely to be customers' internal staff as they are to be consultants. This means that a central piece of our platform is going to be the orchestration functionality, so the "N1 backplane" will be able to orchestrate micro-transactions, which themselves can be undertaken by either Sun product or partner/competitor product. It all enables solutions design.


Tuesday Feb 28, 2006

Tuesday Highlights

Bill Vass, Sun's CIO, presented to us today. I last heard him speak last February. He offered three useful insights.

  1. Sun IT runs a Sun-on-Sun reference program. I need to check out their "Secure Mail" offering and the resource management solution as we have implemented soft containers throughout the estate.
  2. Sun ran a Solaris 8 to 9 adoption programme. This took three months, and the speed was enabled by de-mirroring swap and having two versions of the OS on each system. They booted on S9 with minimal pre-testing and reverted if there were any problems. There weren't, despite the move to active resource management. I asked if he was frightened of failing to adequately test the mythical "end of year" program. He said not; they JFDY'd it.
  3. He spoke about "encouraging" us to move towards personal compliance with their standards. One inducement is that they propose to implement Wine as part of the JDS build, for those final stubborn Windows apps. Hopefully, he'll find ways of enabling us all to benefit from their build standards.


Monday - Virtualising the Data Centre

Joost Pronk Van Hoogeveen, Solaris Virtualisation Product Manager, presented. He had one rather excellent slide, showing Sun's technologies as a spectrum, from Dynamic System Domains, through a Hypervisor solution, to Containers and then the Resource Manager.

Virtualisation Spectrum

While this misses the aggregation dimension of virtualisation (and I know he understands this), placing these technologies as a spectrum and making the deployment decision accountable to the applications' non-functional qualities is very powerful. It allows better evaluation of technology choice, hopefully deprecates the "I'm only using one virtualisation technology" view, and encourages people to use requirements-driven design. It may also enable a richer solution design capability to solve the heterogeneity question; data centre managers need to implement a "Real Time Infrastructure" delivering multiple APIs, i.e. Windows, J2EE, Oracle, Solaris & Linux etc. If performing architecture on the virtualisation question forces the explicit statement of an application's non-functional qualities, then a service will have been performed.
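To make the requirements-driven point concrete, here is a crude sketch of the idea. It is purely my own illustration, not anything Joost presented; the quality names, the thresholds and the function name are invented for the example.

```python
# Illustrative only: map an application's non-functional qualities to a
# point on the virtualisation spectrum described above. The qualities and
# the ordering of the tests are my own assumptions, not Sun guidance.

def pick_virtualisation(hardware_isolation, needs_own_kernel, fault_isolation):
    """Each argument is True/False, crudely summarising a non-functional quality."""
    if hardware_isolation:
        return "Dynamic System Domains"   # electrically isolated hardware partitions
    if needs_own_kernel:
        return "Hypervisor"               # separate OS images on shared hardware
    if fault_isolation:
        return "Containers"               # one kernel, isolated application environments
    return "Resource Manager"             # one environment, rationed by shares

print(pick_virtualisation(False, False, False))   # Resource Manager
print(pick_virtualisation(False, True, False))    # Hypervisor
```

The point is not the particular answers but that the choice falls out of explicitly stated qualities rather than a single-technology preference.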

Joost kindly sent me this reference, which is a Sun Inner Circle article called "The Many Faces of Virtualization", from which I have taken the picture. (I've put the link up; I think it'll be an interesting read.)


Monday's Data Centre Bites

Sun gets a two-year beta with its Express programme; customers get early access to allow rapid adoption of our innovation.

Solaris 10 offers a 30-40% performance improvement over previous versions.

A key inhibitor to consolidation using containers is that "downtime" is accountable to separate business units, who will not or cannot agree or compromise. Certainly, downtime (or availability) is a service's non-functional quality. These ownerships are sometimes expressed through legal ownership.

Cool Tools at OpenSPARC is advertising (as coming soon) a new gcc compiler backend for SPARC. This promises superior compilation and performance for open source (or other Linux-optimised) programs.

These bites were taken from yesterday.



An epiphany about ZFS

The highlight of yesterday's conference for me was a presentation about ZFS. (How long am I going to hang out for a British pronunciation?) The preso was delivered by Dave Brittle, Lori Alt & Tabriz Leman.

While much of what was delivered yesterday was standard "dog & pony" material, this version stayed away from the administrative management interface and, while mentioning the ideological substitution of pool for volume, concentrated on the transactional nature of filesystem updates, the versioning this enables, and also "bringing the ZFS goodness to slash".

Somehow I suddenly get it. ZFS revolutionises the storage of disk data blocks and their metadata. It writes new blocks before deleting old ones and so can roll back if the write fails. This also allows versioning to occur: the old superblock becomes a snapshot master superblock. The placement of parity data in the meta blocks (as opposed to creating additional leaf node blocks) means that error correction is safer and offers richer functionality. More....
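To convince myself I'd got it, here is a toy sketch of the copy-on-write idea. This is emphatically not ZFS's code or on-disk format, just the shape of the mechanism: new blocks are written first, the "superblock" pointer is switched last, and old roots can be kept as snapshots and rolled back to. The class and method names are invented for the illustration.

```python
# Toy copy-on-write store: updates never overwrite live data in place.

class CopyOnWriteStore:
    def __init__(self):
        self.blocks = {}       # block_id -> data (blocks are never rewritten)
        self.root = {}         # live "superblock": name -> block_id
        self.snapshots = {}    # snapshot name -> frozen copy of a root
        self._next_id = 0

    def write(self, name, data):
        # Write the new block first; the old block stays where it is.
        block_id = self._next_id
        self._next_id += 1
        self.blocks[block_id] = data
        # Only now switch the root to the new block. A failure before this
        # point leaves the previous version fully intact.
        new_root = dict(self.root)
        new_root[name] = block_id
        self.root = new_root

    def snapshot(self, snap_name):
        # A snapshot is just a preserved copy of the root pointers.
        self.snapshots[snap_name] = dict(self.root)

    def rollback(self, snap_name):
        # Rolling back reinstates an old root; nothing has to be copied back
        # because the blocks it references were never overwritten.
        self.root = dict(self.snapshots[snap_name])

    def read(self, name, snap_name=None):
        root = self.snapshots[snap_name] if snap_name else self.root
        return self.blocks[root[name]]

store = CopyOnWriteStore()
store.write("motd", "v1")
store.snapshot("before-patch")
store.write("motd", "v2")
print(store.read("motd"))                   # v2 (live)
print(store.read("motd", "before-patch"))   # v1 (snapshot)
store.rollback("before-patch")
print(store.read("motd"))                   # v1 again
```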

It seems to me that this technology will enable a sedimentation process to occur, and that much of a DBMS's functionality can migrate to the operating system (or in this case the file system). When I say "much": when I first started working with DBMSs (i.e. in the last century), they often used the filesystem and often didn't use write-ahead logs. By bringing this DBMS functionality to the file system, a process started by the adoption of direct & async I/O, the ZFS designers have closed a loop and borrowed from the DBMS designers' learning curve. Only the DBMS can "know" if two blocks are part of the same "success unit", but ZFS can implement a success unit and should begin to weaken the need for a write-ahead log. It will also enable the safe(r) use of open source databases.

The versioning feature of the file system, when certified for use as a root file system, will enable much safer and faster patching; it will enable snapshot and rollback. If system managers use these features to adopt a faster software technology refresh, then innovation will come to the data centre faster, since newer code is better quality and should contain new useful features. Disk cloning, snapshot and rollback will also enable the rapid spawning of Solaris Containers. Fantastic.

We are also released from the tyranny of the partition table, for which we have needed a volume manager for the last 15 years.

Despite these fantastic advances, when it becomes available, it'll be a V1.0 product, so care will be needed. Certainly, the authors seem to have some humility about this, but with Solaris Express, we can get hold of it now and begin acceptance and confidence testing. A final really great feature is that ZFS has been donated/incorporated into OpenSolaris.

This stuff should be available as an update in Solaris 10, maybe sometime over the summer and I'm going to get hold of an "Express" version for my laptop.

Edited: a correspondent called Igor asked for a link to the slides. OpenSolaris has a documentation page which hosts a .pdf presentation.


Tuesday Jan 31, 2006

From Qube to Cube

I can't belie-eeeve it. I have my Qube running. It's hosting services, and I now need to explore whether "snipsnap" is what I want, but as I explore the next set of applications that might be useful, I begin to see how far behind the curve the old Cobalt OS is. I spoke to Chris Gerhard, who's thinking about upgrading his three Qubes, but using OpenSolaris (see here... for his article, and here... for the opensolaris appliance group). I really like the form factor, and I'm quite impressed with the "headless" system. So I set to looking for replacement cases, in which I might build my own server. Yeah! Right. Interestingly, CNET have also published an article detailing the growing differentiation between computer vendors around the case (or colour in this instance): both Acer, with their Ferrari range, and Alienware (see here...) are offering very different laptops.

I came across a number of e-shops, including X-Case, Silver PCs & Directron (their Cube page), all of which do a number of parts, including system cases. (This is the word you need if using Google, i.e. "case".) I've found a couple of cases that look quite neat. Again, no recommendations; I'm merely sharing research.

Firstly, the Lian Li PC 880. This looks very cute, has two internal disks and room for two 5.25" exchangeable media devices. CD-ROM and ZIP? Is ZIP any good? Probably not; if we're using 120GB (or maybe larger) disks, then ZIP is shagged. Maybe we need to look at a DVD writer. Can we do this with UNIX? The case supports an ATX board. Dimensions W443 x H205 x D503 mm: a typical desktop pizza box, and while very pretty, maybe a bit large. It's certainly the widest and deepest of those I've found.

lian li PC 880

V880

They also do a more industrial-looking system called the V880. You can check this out at their site here.... Its dimensions are 380 x 160 x 440 mm (W, H, D), so smaller than the PC 880, but it's still only got two internal disks.

What's this called? A company called Shuttle do this. It looks nice, and it comes with a CPU and is hence a bit expensive. Dimensions 200 mm x 300 mm x 185 mm, which makes it the smallest of these cases. This looks like the case that Hitech Savvy use as Qube replacements; it supports SATA disks, but only two (?), and has one 5.25" and one 3.5" external slot, so again CD-ROM/W and ZIP devices become available. This one's quite small, but I need to see if it comes without the motherboard, as while this'll probably do me, I can see that some of my potential collaborators may want something better. These people seem to be OEM only; I haven't found their home page.



This is the Lian Li PC-402A, an aluminium mini case taking a Mini Flex ATX board. Very pretty. Again two internal disks, but three exchangeable media slots, which is probably more than required. Dimensions 210 x 240 x 340 mm (W x H x D). Small footprint, if a bit high.

This is the Antec Aria, and the picture came from X-Case's site (here...). This one's got three drive bays, accepts motherboards up to MicroATX (24.4 x 24.4 cm) and four full-height PCI expansion cards, and has only one external slot, but it's 5.25", so a CD-ROM/W is a possibility for emergency boot and backup. Dimensions 263 mm (W) x 210 mm (H) x 393 mm (D), so not as big as it looks.


Note: The pictures in this article are hosted at their publishers' sites; the links are above. They presumably want to sell this stuff and may not be the most appropriate vendor for you (or me). My research isn't finished, and I'm not necessarily recommending anything here, except that cube cases look neat, but they're all much bigger than the Cobalt. Oh dear.

Note: Also, I have broken one of my rules and used a table to format this article; maybe it would have been better to write separate articles, but I didn't. I hope this looks OK in your browser.


Tuesday Nov 01, 2005

Making Sybase Scream

This article is about running Sybase on a sophisticated UNIX. It discusses sizing Sybase's max engines parameter, the effect of resource management tools, and leveraging UNIX and consolidation. Also note that this is not a Sun Blueprint; it's meant to show you that you can, not how to.


I am often asked, "Given that Sybase recommend one engine/CPU, how can you consolidate Sybase onto a large computer?"

The first thing to say is that it is my view that consolidation is about architecture and co-operating systems, not merely an excuse to inappropriately pimp big iron; an ideal consolidation host may be of any size. However, it is a fact that higher levels of utilisation are more likely if the available system capability is organised as large systems rather than as an equivalently capable number of smaller systems.

Sybase is a database, and most databases are good citizens within the operating system. Sybase is a good Solaris citizen and is hence easy to consolidate.

It is absolutely true that Sybase have traditionally recommended that a Sybase engine (implemented as a UNIX process) should be mapped onto a physical CPU, although a common alternative rule is that the number of engines configured (max engines) should be the number of CPUs minus one, leaving one spare. Either

E = #CPU or E = #CPU - 1

where E is the number of engines and #CPU is the number of CPUs.

The reason these rules are important is that "a well-tuned database server will be CPU bound".

However, the equations above are only performance tuning rules, and what's more, they're out of date. One of the key reasons is that CPUs are now much faster and more capable than when the rules were first developed. Due to the increased capability of modern CPUs, potentially only fractions of a CPU are required to support the application's requirement, but only integer numbers of engines can be deployed. The poverty of these rules is compounded by the fact that they do not take into account the capability of the CPUs, so they are of little use in deciding the capacity requirements of replacement systems or in comparing system architectures.

The tuning rules above have been useful where Sybase is the only piece of work the system is running and only one instance of the database server is running. In these cases, the designer needs to determine how many cycles/MIPS/SPECints etc. the server needs to deliver the expected/required performance. BTW, I shall define the delivered power of a CPU as 'omph' (O) for the rest of this article. With Sybase, if the server requires more than one CPU's worth of omph, then multiple engines will be required.

E = roundup(OReq/Ocpu, 0)

where OReq is the amount of CPU capability required and Ocpu is the capability of a single CPU. If we are looking to use the Solaris Resource Manager, then we need to translate this rule into SRM concepts and talk about shares. The rule above has stated OReq, and Ocpu * #CPU defines the capability of the system (or pool). In the world of managed resources, the number of engines is

E = roundup((SReq/Stot) * #CPU, 0)

where S is the number of SRM shares either required (SReq) or defined (Stot), and roundup() takes the arguments expression, significant figures (here set to zero (0), not O for omph). This assumes that the system proposed is more than powerful enough, i.e.

SReq/Stot < 1

Or, in other words, the number of CPUs in a system must be greater than (or equal to) the number of engines. Where this is not the case, the amount of omph delivered will be equal to the number of CPUs, unless constrained by SRM.
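As a minimal sketch (my own illustration, not a Sybase or Sun sizing tool), the two rules above might be coded like this; the function names and example figures are invented.

```python
# Engine sizing from the rules above. 'omph' is this article's deliberately
# vague unit of delivered CPU capability.
import math

def engines_standalone(omph_required, omph_per_cpu):
    # E = roundup(OReq / Ocpu, 0): enough whole engines to deliver the
    # omph the database server needs.
    return math.ceil(omph_required / omph_per_cpu)

def engines_under_srm(shares_required, shares_total, n_cpu):
    # E = roundup((SReq / Stot) * #CPU, 0): the share ratio translates the
    # SRM entitlement back into whole CPUs, hence whole engines.
    assert shares_required < shares_total, "SReq/Stot must be < 1"
    return math.ceil((shares_required / shares_total) * n_cpu)

# e.g. a workload needing 2.4 CPUs' worth of omph:
print(engines_standalone(2.4, 1.0))       # 3 engines
# e.g. 10 of 100 shares on a 24-way system:
print(engines_under_srm(10, 100, 24))     # 3 engines
```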

We now have three cases:

  1. where SRM is used to constrain the system capability available to the database server,
  2. where the engines/#CPU rule is used to constrain the system capability available to the database server, or
  3. where the database server consumes all available system resource.

These might be expressed in our equation language as

O = (SReq/Stot * #CPU) || (E/#CPU * #CPU) || (1 * #CPU)

where O is the amount of the system consumed (in CPUs' worth of omph) and the conditions are, respectively,

SRM on || E < #CPU with SRM off || E > #CPU with SRM off.
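A sketch of the three cases, again purely illustrative (the function name and example figures are mine):

```python
# How much of the box (in CPUs' worth of omph) a single ASE instance can
# consume under the three cases above.

def cpus_consumed(n_cpu, engines, srm_on=False,
                  shares_required=None, shares_total=None):
    if srm_on:
        # Case 1: SRM caps the instance at its share entitlement.
        return (shares_required / shares_total) * n_cpu
    if engines < n_cpu:
        # Case 2: the engine count is the effective cap.
        return engines
    # Case 3: more engines than CPUs and no SRM -- the instance can take
    # the whole machine.
    return n_cpu

print(cpus_consumed(24, engines=8))       # 8 CPUs' worth
print(cpus_consumed(24, engines=32))      # the whole 24
print(cpus_consumed(24, engines=32, srm_on=True,
                    shares_required=10, shares_total=100))   # 2.4
```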

These conditions are important; they show the massive difference in configuration rules depending on how actively the systems are resource managed. A single Sybase instance, or multiple instances, can then be placed under resource management, using the share ratio (SReq/Stot) to define the resources allocated to each Sybase instance. This can be enforced by user namespace design (S8) or projects (S9/S10). The permitted resources can be enforced using Solaris Resource Manager or processor sets, with or without dynamic domain reconfiguration. It should be noted that the max engines parameter can be used to enforce rule two in a consolidation scenario; more engines than CPUs can be configured. In one assignment undertaken, the customer required a ratio of two engines/CPU and rationed between server instances by varying the number of engines.

I have written previously here... in my blog about designing the database instance schema as an applications map and why consolidating Sybase instances onto fewer Solaris instances makes sense. These multiple server instances can be managed using SRM or the Solaris scheduler. (I propose to research and write something more comprehensive about scheduler classes and zones.) I recommend that the operating system should be the resource rationing tool. Only the OS knows about the hardware and the system capability, and unlike Sybase, Solaris with its project construct can approximate an application, and can therefore ration in favour of, or against, such objects. A Sybase server will not discriminate between applications, nor between queries.

In a world of virtual servers and managed resources, another factor is that the number of CPUs in a domain is no longer static within an operating system instance, or any resource management entity. Our ability to move resources into and out of a resource management entity permits us to change the constraints that underpin the rule above. For example, an ASE instance can be started with eight engines and eight CPUs within its processor set, and the processor set can later be shrunk to four CPUs: the ASE originally consumes eight CPUs' worth of omph, and after the configuration change it only consumes four. These options permit configuration rules where the maximum number of engines for a single instance of Sybase may be either less than or equal to the maximum number of CPUs planned over time. Also, where multiple data servers are running on a single system, the total number of engines is likely to exceed the number of CPUs. (One leading Sybase user in the City aims at two engines/CPU, while another moves system resources between domains on an intra-day basis.) These are both key consolidation techniques. The number of engines for each ASE server instance should be set to the maximum required, and the resources underpinning it can be changed using dynamic domaining, the Solaris processor management utilities or Solaris Resource Manager. Consolidating database servers permits the resource management functionality of the OS (or hardware) to allocate CPU resource to the server, and these allocations can be changed dynamically. This means configuring sufficient engines to take advantage of the maximum available CPUs: i.e. if for a short time of day Sybase is required to use 12 CPUs, and for the rest of the day only four, then max engines needs to be set to 12 and constrained at the times of day when only four are required.

In summary, in a world where consolidation and virtualisation exist (or are coming), the value of max engines can no longer be assumed to be based on the simple rules of the past.

