Friday Sep 29, 2006

Voting in the Blogosphere

As you may see, I have amended my theme to offer you blogosphere voting buttons. How did I do this? With help from Rich Burridge, I amended my theme's _day template file.


<!-- This is the digg example -->

<a href="http://digg.com/submit?phase=3&amp;url=$absBaseURL$entry.permaLink&amp;title=$entry.title"
  rel="external" title="Submit it to Digg">

<!-- I am using Rich Burridge's icons -->

  <img src="http://blogs.sun.com/richb/resource/bookmarkIcon-digg.png"
      width="16" height="14" alt="Digg" border="0"/>
  </a> &nbsp;
 

The border="0" is quite important, as in my original implementation I did not treat the four icons equally. It controls whether a border is drawn around the picture, and since I have a white background and white icons, no border looks better than the active-link colour border, which, as you can see, is orange.

I wonder if I should convert the theme to my web space greens.


Thursday Sep 28, 2006

Hats off to UNIX

Chris Ratcliffe spoke today and stated that Red Hat is about to drop Enterprise Server R3, and that this is an opportunity for its users. I hadn't known that Solaris 10 supports more hardware and more applications than Red Hat, that the transition from R3 to R4 is not necessarily easy, and that adopting Solaris may be just as easy. Sun has more to say about this on its web site.


Thursday May 11, 2006

Can software migrations be strategic??

I am the co-author of the Sun Blueprint, "Migrating to the Solaris OS". Much of the book is now available at http://www.sun.com/blueprints. Including -

Prentice Hall advertise the book here..., although they're the publishers and you'll need to go to a book shop to buy a copy.


Saturday Mar 04, 2006

More......

I have now (yeah really) added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation & posted my notes from the N1 Architecture sessions.


Friday Mar 03, 2006

Virtualisation has two dimensions

I have added a picture to my article "Designing Data Centre automation solutions" illustrating the two dimensions of virtualisation. It seems like I didn't! Sorry, I have now.


Data Centre Vision

Sohrab has sent me an edited version of his slides. You can download it here... and I have put it up as a bookmark in the right-hand side bar. It's the "N1 (.pdf)" one.


Wednesday Mar 01, 2006

Designing Data Centre automation solutions

Today was planned as a look at Sun's system management solutions, and the day was started by Sohrab Modi, the VP for the group. We really have a problem with our branding, but we're sticking with N1: managing maNy systems as 1. Sohrab's presentation, called "Simplify, Integrate, Automate", hit two key points for me.

First, N1 is a solutions sale, and the customer will define the scope and boundaries. Sun's field need to capture these requirements, map both Sun and partner products to them, and take integration responsibility; at the least, the solution design needs to be a collaborative process between customer and vendor. Our proposition needs to be that we meet your needs and solve your problems. It's not good enough to try to win feature/benefit beauty parades. Sohrab showed how, in the next 12 months, Sun (his people) will be bringing virtualisation and management products to market that allow us to address more, and more comprehensive, requirements. He demoed some of the more advanced features of our logical domaining and provisioning technology, which gives me confidence that his team are finally getting their delivery train in order; it's been a long, hard struggle for them. (One of his staff produced one of the analyst slides showing Sun in a strong position today; at least we're gaining mind share.)

The second piece of his talk that I liked was his two dimensions of consolidation. The virtualisation technologies mean that the data centre might have more operating system images, which, without automation, are more difficult and costly to manage; therefore we have to aggregate systems into run-time resources and management objects.

2D of Virtualisation

This is something I have been calling pools, but I suspect that Sun and the industry will call them something else. The demand for multi-node pools by applications is a crucial part of the capability supply-and-demand nexus, and the economics and architectural constraints that aggregation enables and creates make solutions design and integration more important, not less. It is unlikely that one technology can answer the management, consolidation and virtualisation problems.

N1 is a solutions sell. Customers will define the boundaries of scope, and we need to meet the whole of the customer's problem. Sohrab gets this and is committed to a full technology offering that meets customer needs. We're not looking at point product releases; we're selling/building a platform, and field people (i.e. me) need to learn how to ensure that customers define problems, Sun offers technology, and the sales process collaborates on solutions design. As I said, we don't want to sell each of our N1 products on a feature/benefit basis, and customers shouldn't want to buy that way.

Presentations later in the day emphasised Sun's co-opetitive approach, building from the ground up to enable integrators to add value. These integrators are as likely to be customers' internal staff as they are to be consultants. This means that a central piece of our platform is going to be the orchestration functionality, so the "N1 backplane" will be able to orchestrate micro-transactions, which can themselves be undertaken by either Sun products or partner/competitor products. It all enables solutions design.


Tuesday Feb 28, 2006

Monday - Virtualising the Data Centre

Joost Pronk van Hoogeveen, Solaris Virtualisation Product Manager, presented. He had one rather excellent slide, showing Sun's technologies as a spectrum, from Dynamic System Domains, through a hypervisor solution, to Containers and then the Resource Manager.

Virtualisation Spectrum

While this misses the aggregation dimension of virtualisation (and I know he understands this), placing these technologies on a spectrum and making the deployment decision accountable to the application's non-functional qualities is very powerful. It allows better evaluation of technology choice, hopefully deprecates the "I'm only using one virtualisation technology" view, and encourages people to use requirements-driven design. It may also enable a richer solution design capability to solve the heterogeneity question; data centre managers need to implement a "Real Time Infrastructure" delivering multiple APIs, i.e. Windows, J2EE, Oracle, Solaris, Linux etc. If performing architecture on the virtualisation question forces the explicit statement of an application's non-functional qualities, then a service will have been performed.

Joost kindly sent me this reference, a Sun Inner Circle article called "The Many Faces of Virtualization", from which I have taken the picture. (I've put the link up; I think it'll be an interesting read.)


Monday's Data Centre Bites

Sun gets a two-year beta with its Express programme; customers get early access, allowing rapid adoption of our innovation.

Solaris 10 offers a 30-40% performance improvement over previous versions.

A key inhibitor to consolidation using containers is that "downtime" is accountable to separate business units, who will not or cannot agree or compromise. Certainly, downtime (or availability) is a service's non-functional quality. These ownerships are sometimes expressed through legal ownership.

Cool Tools at OpenSPARC is advertising (as coming soon) a new gcc compiler backend for SPARC. This promises superior compilation and performance for open-source (or other Linux-optimised) programs.

These bites were taken from yesterday.


 

An epiphany about ZFS

The highlight of yesterday's conference for me was a presentation about ZFS. How long am I going to hang out for a British pronunciation? The preso was delivered by Dave Brittle, Lori Alt & Tabriz Leman.

While much of the material delivered yesterday was standard "dog & pony" material, this version stayed away from the administrative management interface and, while mentioning the ideological substitution of pool for volume, concentrated on the transactional nature of the filesystem update, the versioning this enables, and also "bringing the ZFS goodness to slash".

Somehow I suddenly get it. ZFS revolutionises the storage of disk data blocks and their metadata. It writes new blocks before deleting old ones, and so can roll back if the write errors. This also allows versioning to occur; the old superblock becomes a snapshot master superblock. The placement of parity data in the meta blocks (as opposed to creating additional leaf node blocks) means that error correction is safer and offers richer functionality. More....

It seems to me that this technology will enable a sedimentation process to occur, and that much of a DBMS's functionality can migrate to the operating system (or in this case the file system). When I say much: when I first started working with DBMSs (i.e. in the last century), they often used the filesystem and often didn't use write-ahead logs. By bringing this DBMS functionality to the file system, a process started by the adoption of direct & async I/O, the ZFS designers have closed a loop and borrowed from the DBMS designers' learning curve. Only the DBMS can "know" if two blocks are part of the same "success unit", but ZFS can implement a success unit and should begin to weaken the need for a write-ahead log. It will also enable the safe(r) use of open source databases.

The versioning feature of the file system, once certified for use as a root file system, will enable much safer and faster patching; it will enable snapshot and rollback. If system managers use these features to adopt a faster software technology refresh, then innovation will come to the data centre faster, since newer code is better quality and should contain new useful features. Disk cloning, snapshot and rollback will also enable the rapid spawning of Solaris Containers. Fantastic.
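On a ZFS-capable build, that snapshot-and-rollback cycle looks something like this. The pool and filesystem names are invented, and the commands need a real pool on a Solaris host, so treat it as an illustrative transcript rather than a recipe:

```shell
# Take an instant, copy-on-write snapshot before patching.
zfs snapshot tank/zones@prepatch

# ...apply patches, test...

# If the patching goes wrong, discard everything since the snapshot.
zfs rollback tank/zones@prepatch

# Or spawn a writable clone, e.g. as the root of a new Container.
zfs clone tank/zones@prepatch tank/zone-clone
```

Because snapshots are copy-on-write, both the snapshot and the clone are near-instant and initially consume no extra space.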

We are also released from the tyranny of the partition table, for which we have needed a volume manager for the last 15 years.

Despite these fantastic advances, when it becomes available, it'll be a V1.0 product, so care will be needed. Certainly, the authors seem to have some humility about this, but with Solaris Express, we can get hold of it now and begin acceptance and confidence testing. A final really great feature is that ZFS has been donated/incorporated into OpenSolaris.

This stuff should be available as an update in Solaris 10, maybe sometime over the summer and I'm going to get hold of an "Express" version for my laptop.

Edited: A correspondent called Igor asked for a link to the slides. OpenSolaris has a documentation page which hosts a .pdf presentation.


Sunday Feb 26, 2006

The Firefox Proxy Button

I met up with Sean Harris and Ann-Marie Jamison, colleagues at work, earlier this week. Sean demonstrated his personal edge solution, based on his Sony Ericsson K700i and SyncML (see here... for more), where he keeps his e-diary, todo list and addresses. Interestingly, he also let slip about the Firefox Proxy Button, which I have bookmarked here....

Proxy Button on Firefox

It allows you to compensate for the loss of profiles within Firefox by turning the proxy server on and off from a toolbar toggle button, as opposed to using the radio button in the "Options" editor. If you're a Firefox user, I strongly recommend it. You can also see that I have installed a del.icio.us add-on, giving me toolbar buttons to go to my del.icio.us, or post a page to it.

The theme I am using is "Glassier".


Thursday Sep 15, 2005

Sybase 15

I popped into the Brewers' Hall at the invitation of Sybase for their UK launch of Sybase 15. They have three great new features to help them compete with Oracle and the free ones, and it's good to see them spend some money, and more importantly intellect, on the database server.

The optimiser has been given a major overhaul. They've looked long and hard at what they can do and at what academia and the market say is possible, and adopted more modern, flexible and cheaper query plans, including multiple entry points, eliminating work tables, minimising the use of nested loops, new storage structures, more row-level-locked defaults etc. Some of the query performance improvements they claim would be unbelievable if you didn't understand what they've done. This has been done to support their re-engineering of the size limits to allow VLDB implementations, together with changes to enable more effective OLAP mixed-workload solutions. However, performance improvements can and will be obtained by more traditional OLTP implementations.

Sybase 15 has improved XML support, both storage and more excitingly, the ability to expose stored procedures as a web service. This might not be enough to encourage people to locate the data server in a DMZ, but either Sybase replication or standard web proxy architectures should be sufficient to protect the solutions.

Another featured highlight is the LDAP implementation. I discussed this with Mark Hudson of Sybase because it extends the solutions design capability based on Sun/Sybase co-operation; he pointed out that Sybase have been on a road to LDAP implementation since V12.0, initially offering support for the server maps but now offering user authentication. I expect that understanding the definition, storage and acquisition of privileges and aliases will need a fuller reading of the documentation.

They've also responded to customer calls and the market's growing requirement for more pervasive encryption. My final personal highlight is that they've improved statement-level statistics capture, permitting simpler tracking of rogue queries.

Mark also let slip that Grid enablement is next. It'll be interesting to see what competition will do to the market's solutions designs when active/active HA databases become non-proprietary.


Wednesday Nov 24, 2004

Consolidation & Sybase

This article explores a couple of basic Sybase consolidation techniques and the UNIX and Solaris technologies that support consolidation. It offers a couple of configuration tips on the way.

When consolidating multiple Sybase (or any RDBMS) hosts, designers have the option to implement either an aggregation solution, implementing each application as a separate database within a server, or alternatively to co-locate multiple ASE server instances within a single operating system environment. The first of these techniques I refer to as aggregation, the second as consolidation (or workload sharing). The former may involve developer time because the data and business models require reconciling. It may also lead to contention for scarce resources within the Sybase instance, typically Sybase memory objects or system database resources. Where a choice between aggregation and consolidation exists, consolidation is easier and delivers more benefits.

Sybase is implemented in Solaris as a number of processes that attach themselves to a single shared memory segment. (The processes are referred to as Sybase engines.) The processes are linked in the process table and inherit a Sybase name from the master..sysservers table. (Actually, at run time, the name is inherited from the -S switch in the RUN server file.) The data in the master..sysservers table needs to be replicated into the interfaces file. (More recent versions of Sybase can utilise LDAP for this purpose.) The purpose of the interfaces file is to map the Sybase server name onto a tcp/ip address, consisting of {tcp/ip address:port no}. Each data server requires two ports on the same system. The default Solaris syntax has been to document the Sybase name against a tcp/ip address directly, although using Solaris' name aliasing is syntactically supported and a superior method. As a diversion, you should always convert the generated addresses into hostname:port no format. Also, Solaris will support multiple tcp/ip addresses for each network interface. It is thus possible to co-locate multiple Sybase names within a single instance of the operating system (without containers) and ensure that remote processes can find the correct database server without changing the application's code or client configuration parameters.
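As a sketch, an interfaces file for two co-located ASE servers might look like the one below. The server names, hostname aliases and ports are invented, and the exact protocol string varies by platform and ASE release, so check your own installation:

```shell
# Write a sample interfaces file; each server entry maps a Sybase name
# to a hostname alias and port, with master and query lines per server.
cat > /tmp/interfaces <<'EOF'
SYB_SALES
	master tcp ether sales-db 5000
	query tcp ether sales-db 5000

SYB_HR
	master tcp ether hr-db 5100
	query tcp ether hr-db 5100
EOF

# Two servers, two "master" listener lines.
grep -c 'master tcp' /tmp/interfaces
```

Because each server name resolves through its own hostname alias, the underlying IP addresses can change without touching client configuration.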

The number of processes capable of running in a Solaris instance/domain is defined and controlled by the /etc/system file. It is folklore and good practice to set the number of engines equal to (or related in some way to) the number of CPUs. There is no technical constraint in either Solaris or Sybase that mandates this tuning rule. In the world of workload sharing, the share of a domain or system utilised by a Sybase server instance needs to be actively managed, and I recommend that the Solaris Resource Manager is used to perform this function. This is a superstructure product in Solaris 8 and integrated into the OS from Solaris 9 onwards.
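A minimal sketch of that rule of thumb, deriving the engine count from the CPUs visible to the operating system (the sp_configure line is only echoed here; run it from isql in practice):

```shell
# Count the online CPUs; getconf is portable, and on Solaris
# psrinfo would do the same job.
NCPU=$(getconf _NPROCESSORS_ONLN)

# The folklore rule: one engine per CPU.
echo "sp_configure 'max online engines', ${NCPU}"
```

As the text says, nothing mandates a one-to-one mapping; under workload sharing the engine count should reflect the instance's managed share of the machine, not the whole box.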

The default memory configuration of Solaris systems is to implement Sybase's shared memory as "intimate". The effect of this is to "pin" Sybase's buffer caches into real memory. This leads to the configuration rule that the sum of the consolidated servers' caches needs to be less than real memory. If this rule is not observed, it is likely that one (or more) of the Sybase ASE instances will fail to start. (Oh, by the way, always set your SHMMAX parameter to HIGH VALUES; it saves a reboot when you breach a bound constraint. Systems on sale today (64-bit systems) are generally configured with ample memory for this not to be a problem. Thirdly, SHMMAX is a permissive parameter; setting it higher than needed is free.) Also, Sybase DBAs have historically chosen (or had) to install their databases on raw disks, and hence all the data cache is configured as database cache through the "Total Memory" parameter. (The UNIX file system cache is of no relevance to the DBMS.) Most systems administrators and solutions designers are aware of this and "reserve" memory for Sybase. In the world of consolidation, the solutions designer needs to be sure that real memory is greater than the sum of the instances' "Total Memory" values plus the UNIX kernel image.
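The sizing arithmetic is simple enough to script. A sketch with invented sizes: check that the consolidated servers' "Total Memory" values plus an allowance for the kernel fit within physical memory.

```shell
# All sizes in MB; these figures are invented for illustration.
RAM_MB=8192                   # assumed physical memory
KERNEL_MB=512                 # assumed kernel image + OS headroom
SYB1_MB=3072                  # "Total Memory" of the first ASE instance
SYB2_MB=4096                  # "Total Memory" of the second ASE instance

TOTAL=$((SYB1_MB + SYB2_MB + KERNEL_MB))
if [ "$TOTAL" -le "$RAM_MB" ]; then
  echo "fits: ${TOTAL}MB of ${RAM_MB}MB"
else
  echo "over-committed by $((TOTAL - RAM_MB))MB"
fi
```

Because intimate shared memory is pinned, an over-commit here does not degrade gracefully into paging; an instance simply fails to start.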

Sybase has a very rich semantic for abstracting disk objects into tables and rows, and this can be used to scale I/O resource. I recommend that raw disks are presented by the systems administrator with permission bits set to 600, and that file system links with relevant names are then created. The disk init command should use the link name as the physname argument. This creates a level of indirection in the naming conventions and also permits a rich naming convention, so that stored procedures such as sp_helpdevice can actually tell the DBA something about the disks in use. Discovering that the database is mounted on /dev/rdsk/md22 is not helpful! The abstraction means that data can be moved between disks without changing the database configuration (although the database can't be running at the time). The other huge advantage of this technique is that striping and locating the Sybase devices across multiple disks, RAID devices, controllers or switches becomes transparent to the database's mounting script. It allows DBAs to begin to use the language of storage attribution and leverage the system's scalability.
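A sketch of the indirection, using a scratch directory to stand in for the real /dev/rdsk entries (the device path, link name and disk init arguments are all invented, and in practice this runs as root):

```shell
# Stand-ins for a real raw device; in production this would be
# something like /dev/rdsk/c1t2d0s1.
mkdir -p /tmp/dev /tmp/sybase/devices
touch /tmp/dev/c1t2d0s1

# Raw disk presented with owner-only access.
chmod 600 /tmp/dev/c1t2d0s1

# Meaningfully named link; this is what disk init should reference.
ln -sf /tmp/dev/c1t2d0s1 /tmp/sybase/devices/sales_data01

# Then, from isql, initialise the device via the link, e.g.:
#   disk init name = "sales_data01",
#        physname = "/tmp/sybase/devices/sales_data01",
#        size = "2G"
# (older ASE releases take size in 2K pages rather than "2G")
```

If the data later moves to a different spindle, only the link is repointed; the database configuration, and sp_helpdevice's output, stay the same.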

The UNIX file system will permit multiple versions of Sybase to exist within a file system hierarchy. The default Sybase installation model incorporates the Sybase version into the UFS directory name for the install tree. This copes with circumstances where the applications portfolio is running two (or more) versions of the database. A further advantage is that the effort to support patch management is reduced. Consolidation will force increased standardisation, which will lead to a reduction in the breadth of the problem, as a smaller number of hardware platforms hosting Sybase require to be patched and tested against diverse requirements.

In summary

  • Decide to Aggregate or Consolidate, or how to combine these strategies
  • Design your engine/process/instance map
  • Design your memory implementation
  • Design your disk map and its abstraction interface

Sunday Aug 01, 2004

Sybase commits to Solaris x86???

Since starting to write, I have been reading other entries in the Sun blogging community. This is a link to, and repeat of, John Gardner's web log entry about Sybase & Sx86. Click here.... for John's original blog. The entry and hyperlink interested me as Sybase commit their ASE to Solaris x86, even if only as a developer cut. Click here.... for the download. I've been using Sybase for a long time, as those of you who have access to internal publications know, and am glad to see them follow me & Sun into fabric, Solaris-based computing. This will give those sites committed to (or locked into) Sybase two advantages. The first is that, since Sybase have historically worked hard to ensure that hardware is not a performance constraint, internal contention is more frequently the problem than with its competitors; utilising Solaris x86 and the x86 CPUs will allow Sybase to 'crack' serial hot spots or "code paths" more effectively. Secondly, Sybase has always had an effective 'data movement' solution, and the new platform changes the price and hence enables the deployment of distributed database solutions, based on caching patterns or real-life geography.

I've started to write a Sun paper on consolidating Sybase on Solaris; it's 30% written and you may see some of it here. I will be downloading the Sybase binaries onto my Sun laptop sometime soon.
