Tuesday Feb 28, 2006

Tuesday Highlights

Bill Vass, Sun's CIO, presented to us today. I last heard him speak last February. He offered three useful insights.

  1. Sun IT runs a Sun-on-Sun reference program. I need to check out their "Secure Mail" offering and the resource management solution as we have implemented soft containers throughout the estate.
  2. Sun ran a Solaris 8 to 9 adoption programme. This took 3 months; the speed was enabled by de-mirroring swap and having two versions of the OS on each system. They booted on S9 with minimal pre-testing and reverted if there were any problems. There weren't, despite the move to active resource management. I asked if he was frightened of failing to adequately test the mythical "end of year" program. He said not; they JFDI'd it.
  3. He spoke about "encouraging" us to move towards personal compliance with their standards. One inducement is that they propose to implement Wine as part of the JDS build, for those final stubborn Windows apps. Hopefully, he'll find ways of enabling us all to benefit from their build standards.


Monday - Virtualising the Data Centre

Joost Pronk Van Hoogeveen, Solaris Virtualisation Product Manager, presented. He had one rather excellent slide, showing Sun's technologies as a spectrum: from Dynamic System Domains, through a hypervisor solution, to Containers and then the Resource Manager.

[Figure: Virtualisation Spectrum]

While this misses the aggregation dimension of virtualisation (and I know he understands this), placing these technologies on a spectrum and making the deployment decision accountable to the applications' non-functional qualities is very powerful. It allows better evaluation of technology choice, hopefully deprecates the "I'm only using one virtualisation technology" view, and encourages people to use requirements-driven design. It may also enable a richer solution design capability to solve the heterogeneity question; data centre managers need to implement a "Real Time Infrastructure" delivering multiple APIs, i.e. Windows, J2EE, Oracle, Solaris & Linux etc. If performing architecture on the virtualisation question forces the explicit statement of an application's non-functional qualities, then a service will have been performed.

Joost kindly sent me this reference, which is a Sun Inner Circle article called "The Many Faces of Virtualization", from which I have taken the picture. (I've put the link up; I think it'll be an interesting read.)


Monday's Data Centre Bites

Sun gets a two-year beta with its Express programme; customers get early access, allowing rapid adoption of our innovation.

Solaris 10 offers a 30-40% performance improvement over previous versions.

A key inhibitor to consolidation using containers is that "downtime" is accountable to separate business units, who will not, or cannot, agree or compromise. Certainly, downtime (or availability) is a service's non-functional quality. These ownerships are sometimes expressed through legal ownership.

Cool Tools at OpenSPARC is advertising (as coming soon) a new gcc compiler backend for SPARC. This promises superior compilation and performance for open-source (or other Linux-optimised) programs.

These bites were taken from yesterday.


An epiphany about ZFS

The highlight of yesterday's conference for me was a presentation about ZFS. How long am I going to have to hang out for a British pronunciation? ;-) The preso was delivered by Dave Brittle, Lori Alt & Tabriz Leman.

While much of the material delivered yesterday was standard "Dog & Pony" fare, this version stayed away from the administrative management interface and, while mentioning the ideological substitution of pool for volume, concentrated on the transactional nature of the filesystem update, the versioning this enables, and "bringing the ZFS goodness to slash".

Somehow I suddenly get it. ZFS revolutionises the storage of disk data blocks and their metadata. It writes new blocks before deleting old ones, and so can roll back if the write fails. This also allows versioning to occur; the old superblock becomes a snapshot master superblock. The placement of parity data in the meta blocks (as opposed to creating additional leaf node blocks) means that error correction is safer and offers richer functionality.
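As a check on my own understanding, here's a toy sketch of the copy-on-write idea in Python (the naming is all mine; this is emphatically not how ZFS is implemented). New data always lands in a fresh block, the commit is a single root-pointer flip, and keeping an old root around is all a snapshot is:

    class CowStore:
        """Toy copy-on-write store: blocks are never modified in place."""
        def __init__(self):
            self.blocks = {}     # block id -> data
            self.root = {}       # live "superblock": name -> block id
            self.snapshots = {}  # snapshot name -> frozen root
            self._next_id = 0

        def write(self, name, data):
            # Write the NEW block first; the old block is untouched, so a
            # failure before the root flip leaves the old data intact.
            block_id, self._next_id = self._next_id, self._next_id + 1
            self.blocks[block_id] = data
            new_root = dict(self.root)
            new_root[name] = block_id
            self.root = new_root  # commit: flip the root pointer

        def snapshot(self, snap):
            # The old superblock becomes the snapshot master superblock.
            self.snapshots[snap] = dict(self.root)

        def rollback(self, snap):
            self.root = dict(self.snapshots[snap])

        def read(self, name):
            return self.blocks[self.root[name]]

    s = CowStore()
    s.write("/etc/motd", "hello")
    s.snapshot("before-patch")
    s.write("/etc/motd", "patched")
    s.rollback("before-patch")
    print(s.read("/etc/motd"))  # -> hello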

It seems to me that this technology will enable a sedimentation process to occur, and that much of a DBMS's functionality can migrate to the operating system (or in this case the file system). When I say much: when I first started working with DBMSs (i.e. in the last century ;-) ), they often used the filesystem and often didn't use write-ahead logs. By bringing this DBMS functionality to the file system, a process started by the adoption of direct & async I/O, the ZFS designers have closed a loop and borrowed from the DBMS designers' learning curve. Only the DBMS can "know" if two blocks are part of the same "success unit", but ZFS can implement a success unit and should begin to weaken the need for a write-ahead log. It will also enable the safe(r) use of open source databases.

The versioning feature of the file system, when certified for use as a root file system, will enable much safer and faster patching; it will enable snapshot and rollback. If system managers use these features to adopt a faster software technology refresh, then innovation will come to the data centre faster, since newer code is better quality and should contain new useful features. Disk cloning, snapshot and rollback will also enable the rapid spawning of Solaris Containers. Fantastic.

We are also released from the tyranny of the partition table, for which we have required a volume manager for the last 15 years.

Despite these fantastic advances, when it becomes available it'll be a V1.0 product, so care will be needed. Certainly, the authors seem to have some humility about this, but with Solaris Express we can get hold of it now and begin acceptance and confidence testing. A final really great feature is that ZFS has been donated to, and incorporated into, OpenSolaris.

This stuff should be available as an update to Solaris 10, maybe sometime over the summer, and I'm going to get hold of an "Express" version for my laptop.

Edited: A correspondent called Igor asked for a link to the slides. OpenSolaris has a documentation page which hosts a .pdf of the presentation.


Tuesday Nov 01, 2005

Making Sybase Scream

This article is about running Sybase on a sophisticated UNIX. It discusses sizing Sybase's max engines parameter, the effect of resource management tools, and leveraging UNIX & consolidation. Also note that this is not a Sun Blueprint; it's meant to show you that you can, not how to.


I am often asked, "Given that Sybase recommend one engine/CPU, how can you consolidate Sybase onto a large computer?"

The first thing to say is that, in my view, consolidation is about architecture and co-operating systems, not merely an excuse to inappropriately pimp big iron; an ideal consolidation host may be of any size. However, it is a fact that higher levels of utilisation are more likely if the available system capability is organised in large systems rather than the equivalently capable number of smaller systems.

Most databases are good citizens within the operating system, and Sybase is no exception; it is a good Solaris citizen and hence easy to consolidate.

It is absolutely true that Sybase have traditionally recommended that a Sybase engine (implemented as a UNIX process) should be mapped onto a physical CPU, although a common alternative rule is that the number of engines configured (max engines) should be the number of CPUs, leaving one spare. Either

E = #CPU or E = #CPU - 1

where E is the number of engines and #CPU is the number of CPUs.

The reason these rules are important is that "a well-tuned database server will be CPU-bound".

However, the equations above are only performance tuning rules and, what's more, they're out of date. One of the key reasons they're out of date is that CPUs are now much faster and more capable than when the rules were first developed. Due to the increased capability of modern CPUs, potentially only fractions of a CPU are required to support the application's requirement, but only integer numbers of engines can be deployed. The poverty of these rules is compounded by the fact that, because they do not take into account the capability of the CPUs, they are of little use in deciding the capacity requirements of replacement systems, or in comparing system architectures.

The tuning rules above have been useful where Sybase is the only piece of work the system is running and only one instance of the database server is running. In these cases, the designer needs to determine how many cycles/MIPS/specINTs etc. the server needs in order to deliver the expected/required performance. BTW, I shall refer to the delivered power of a CPU as 'omph' (O) for the rest of this article. With Sybase, if the server requires more than one CPU's worth of omph, then multiple engines will be required.

E = roundup(OReq / Ocpu, 0)

where OReq is the amount of CPU power required and Ocpu is the capability of a single CPU. If we are looking to use the Solaris Resource Manager (SRM), then we need to translate this rule into SRM concepts and talk about shares. The rule above has stated OReq, and Ocpu × #CPU defines the capability of the system (or pool). In the world of managed resource, the number of engines is

E = roundup((SReq / Stot) × #CPU, 0)

where S is the number of SRM shares, either required (SReq) or defined in total (Stot), and roundup() takes the arguments expression, sig. figs (set to zero (0), not O for omph). This assumes that the system proposed is more than powerful enough, i.e.

SReq / Stot < 1

Or, in other words, the number of CPUs in a system must be greater than (or equal to) the number of engines. Where this is not the case, the amount of omph delivered will be capped at the number of CPUs, unless constrained by SRM.
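To make the arithmetic concrete, here is a minimal sketch of the two engine-sizing rules in Python (the function names and example numbers are mine, purely illustrative):

    import math

    def engines_from_omph(o_req, o_cpu):
        """E = roundup(OReq / Ocpu, 0): engines from required power."""
        return math.ceil(o_req / o_cpu)

    def engines_from_shares(s_req, s_tot, n_cpu):
        """E = roundup((SReq / Stot) x #CPU, 0): engines under SRM."""
        return math.ceil((s_req / s_tot) * n_cpu)

    # An application needing 3.2 CPUs' worth of omph needs 4 engines.
    print(engines_from_omph(3.2, 1.0))         # -> 4
    # 400 of 1000 shares on a 12-way system: roundup(4.8) = 5 engines.
    print(engines_from_shares(400, 1000, 12))  # -> 5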

We now have three cases:

  1. where SRM is used to constrain the system capability available to the database server,
  2. where the engine/#CPU ratio is used to constrain the system capability available to the database server, or
  3. where the database server consumes all available system resource.

These might be expressed in our equation language as

O = (SReq / Stot) × #CPU    if SRM is on
O = (E / #CPU) × #CPU       if SRM is off and E < #CPU
O = 1 × #CPU                if SRM is off and E ≥ #CPU

where O is the proportion of the system consumed.
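The three cases collapse into a few lines of Python (again a sketch of my own naming, with O coming out in CPUs' worth of capacity):

    def capacity_available(n_cpu, engines, srm_on=False, s_req=0, s_tot=1):
        """CPUs' worth of capacity the database server can consume."""
        if srm_on:
            return (s_req / s_tot) * n_cpu  # case 1: SRM constrains
        if engines < n_cpu:
            return engines                  # case 2: (E / #CPU) x #CPU = E
        return n_cpu                        # case 3: the whole system

    print(capacity_available(12, 8))    # -> 8
    print(capacity_available(12, 16))   # -> 12
    print(capacity_available(12, 16, srm_on=True, s_req=400, s_tot=1000))  # -> 4.8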

These conditions are important; they show the massive difference in configuration rules depending on how actively the systems are resource managed. Single or multiple Sybase instances can then be placed under resource management, using the S ratio {SReq/Stot} to define the resources allocated to each Sybase instance. This can be enforced by user namespace design (S8) or projects (S9/S10). The permitted resources can be enforced using Solaris Resource Manager or processor sets, with or without dynamic domain reconfiguration. It should be noted that the max engines parameter can be used to enforce rule two in a consolidation scenario; more engines than CPUs can be configured. In one assignment undertaken, the customer required a ratio of two engines/CPU and rationed between server instances by varying the number of engines.

I have written previously in this blog about designing the database instance schema as an applications map and why consolidating Sybase instances onto fewer Solaris instances makes sense. These multiple server instances can be managed using SRM or the Solaris scheduler. (I propose to research and write something more comprehensive about scheduler classes and zones.) I recommend that the operating system should be the resource rationing tool. Only the OS knows about the hardware and the system capability and, unlike Sybase, Solaris with its project construct can approximate an application, and can therefore ration in favour of, or against, such objects. A Sybase server will not discriminate between applications, nor between queries.

In a world of virtual servers and managed resource, another factor is that the number of CPUs in a domain is no longer static within an operating system instance, or any resource management entity. Our ability to move resources into and out of a resource management entity permits us to change the constraints that underpin the rules above. For example, an ASE instance can be started with eight engines & eight CPUs within its processor set, and the processor set then shrunk to four CPUs. The ASE originally consumes eight CPUs' worth of omph; after the configuration change, it only consumes four.

These options permit configuration rules where the maximum number of engines for a single instance of Sybase may be less than or equal to the maximum number of CPUs planned over time. Also, where multiple data servers are running on a single system, the total number of engines is likely to exceed the number of CPUs. (One leading Sybase user in the City aims at two engines/CPU, while another moves system resources between domains on an intra-day basis.) These are both key consolidation techniques.

The number of engines for each ASE server instance should be set to the maximum required, and the resources underpinning it can be changed using dynamic domaining, the Solaris processor management utilities or Solaris Resource Manager. Consolidating database servers permits the resource management functionality of the OS (or hardware) to allocate CPU resource to the server, and this allocation can be changed dynamically. This means configuring sufficient engines to take advantage of the maximum available CPUs; i.e. if for a short time of day Sybase is required to use 12 CPUs, and for the rest of the day only four, then 'max engines' needs to be set to 12 and constrained at the times of day when only four are required.
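A toy illustration of that last point (my own sketch, assuming delivered capacity is simply the smaller of the engine count and the processor-set size): with max engines fixed at 12, resizing the processor set alone rations the instance intra-day.

    def delivered_cpus(max_engines, pset_cpus):
        """Delivered capacity is capped by whichever is smaller."""
        return min(max_engines, pset_cpus)

    schedule = {"peak window": 12, "rest of day": 4}  # processor-set sizes
    for period, cpus in schedule.items():
        print(period, "->", delivered_cpus(12, cpus), "CPUs' worth of omph")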

In summary, in a world where consolidation & virtualisation exist (or are coming), the value of max engines can no longer be assumed to be based on the simple rules of the past.


Sunday Apr 03, 2005

Consolidating Sybase

When designing system platforms for Sybase-based applications, three patterns are available.

  1. Traditionally, a single Sybase instance is installed onto a system host. This serves an application and user community. This remains a popular pattern for ISV products; however, where a rigorous environment management policy is adopted, it leads rapidly to 'server sprawl', as each DBMS instance requires not only a production system but also development, testing and contingency systems.
  2. Alternatively, multiple applications can be collected into a single Sybase instance. This is referred to by Sun as ‘Aggregation’. It is best done in conjunction with a single data model so that, for instance, only one logical customer (or counter party) table exists.
  3. The final implementation model is to install multiple Sybase instances within a single instance of the operating system. As noted above, with Solaris 10, these can be installed within their own Zone, or they can share zones. Sun refers to this as database server ‘Consolidation’.

Sybase's development of multiple workspaces has again extended the parallelisation of the server and reduced the number of serial bottlenecks. This is a technique that might permit multiple applications to be hosted within a single ASE server instance, i.e. adopting pattern two above. ASE has for many years had multiple ‘databases’ within it, but a database is a unit of recovery, not of application logic, and while Sybase has dramatically reduced the number of scarce resources within the ASE instance, it has no concept of an application, and its resource management algorithms have no concept of priority nor of service quality.

These weaknesses can be overcome by the Solaris Resource Manager, which permits an application, via a project, to have its system resources guaranteed against anti-social behaviour by other applications. Having multiple Sybase instances within a Solaris instance would allow a more sophisticated resource management policy to be declared. A further reason for ‘consolidating’ is that the regression tests for application changes are simpler: a change in one application will not require tests in the databases and procedures required by others, and start/stop requests do not impact other applications. Any conflict around the values of the server run-time parameters can be resolved using the consolidation model, to the extent that even different versions & EBFs of Sybase can be applied to different applications (using the file system namespace to enforce and differentiate between versions). This last factor is very important if the platform designers do not own the Sybase version definition policy, such as when ISV code is being used.

Platform designers have a choice and they should carefully consider which of the three patterns they wish to implement as they develop their infrastructure plans.


Sunday Aug 01, 2004

Sybase commits to Solaris x86???

Since starting to write, I have been reading other entries in the Sun blogging community. This is a link & repeat of John Gardner's web log entry about Sybase & Sx86. Click here .... for John's original blog. The entry & hyperlink interested me as Sybase commit their ASE to Solaris x86, even if only as a developer cut. Click here .... for the download. I've been using Sybase for a long time, as those of you who have access to internal publications know, and am glad to see them follow me & Sun into fabric, Solaris-based computing. This will give those sites committed to (or locked into) Sybase two advantages. The first is that, since Sybase have historically worked hard to ensure that hardware is not a performance constraint, internal contention is more frequently the problem than with its competitors; utilising Solaris x86 and the x86 CPUs will allow Sybase to 'crack' serial hot spots or "code paths" more effectively. Secondly, Sybase has always had an effective 'data movement' solution, and the new platform changes the price and hence enables the deployment of distributed database solutions, based on caching patterns or real-life geography.

I've started to write a Sun paper on consolidating Sybase on Solaris; it's 30% written and you may see some of it here. I will be downloading the Sybase binaries onto my Sun laptop sometime soon.
