Thursday Mar 29, 2012

ODA: Announcing ODA External Storage (read/write) Support

Effective immediately, customers can use ODA with database files stored on an external NFS file server for both read and write operations. This effectively removes ODA's 4 TB storage capacity limit, which had been a concern for some potential ODA customers.

Below is some additional information regarding ODA external storage support.

Q: What are we announcing?

A: Effective immediately, you can now use the Oracle Database Appliance with database files stored on an NFS-attached file server for both read and write operations.

Q: Does this differentiate the Oracle Database Appliance from an Oracle Database running on a 3rd-party server?

A: No, this feature is a generic feature of the Oracle Database. This puts Oracle Database Appliance on par with other Oracle Database server platforms.

Q: Is there an approved list of NFS servers I can use with the Oracle Database Appliance?

A: No, but we strongly recommend using a commercial file server such as an Oracle ZFS Storage Appliance, or a similar product from a major storage vendor (e.g., EMC, NetApp).

Q: Do I use ASM with the NFS-attached storage?

A: No. Simply mount the file system and store your files on it the same way you would store Oracle database files in any file system. Use of ASM for NFS-attached storage is not recommended.

Q: Can I have some of my database files on internal ODA storage, and some on NFS-attached storage?

A: Yes, you can have a mix of tablespaces—some on the local SAS storage internal to the ODA, and some on the NFS server.
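As a sketch, creating one tablespace on the internal storage and one on an NFS mount might look like the following. The tablespace names, the +DATA diskgroup name and the /u02/oradata mount point are illustrative assumptions, not from the original post:

```shell
# Illustrative only: names, diskgroup and mount point are assumptions
sqlplus / as sysdba <<'EOF'
-- Hot data on the ODA's internal (ASM-managed) storage
CREATE TABLESPACE hot_ts DATAFILE '+DATA' SIZE 10G;
-- Colder data on the NFS-attached filer
CREATE TABLESPACE cold_ts DATAFILE '/u02/oradata/cold_ts01.dbf' SIZE 10G;
EOF
```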

Q: Can I create an ASM diskgroup that spans the local storage and the NFS storage?

A: No. ASM diskgroups should be constructed on disks internal to the ODA. ASM is not needed (nor recommended) for use on NFS-attached storage.

Q: If I don’t use ASM with my NFS filer, how do I add storage to my NFS-attached file system online?

A: Most commercial NFS filers provide this functionality.

Q: Will I be required to patch my storage subsystem separately, or will the ODA patching process take care of the NFS server?

A: The ODA patching process will only patch the local ODA server. If patches are required for an NFS filer, you are responsible for applying them yourself.

Q: If I put critical business data on an NFS filer and the server is down, what happens?

A: Your database is down. Make sure you use a highly available NFS filer.

Q: If I put non-critical archive data on an NFS filer, can I configure it so that a failure of the filer won’t affect my database?

A: Yes. Mark the archive data read-only, and set the init.ora parameter READ_ONLY_OPEN_DELAYED=TRUE.
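A minimal sketch of that setup (the tablespace name is hypothetical; READ_ONLY_OPEN_DELAYED is a static parameter, so it goes into the spfile and takes effect after a restart):

```shell
# Hypothetical tablespace name "archive_ts"
sqlplus / as sysdba <<'EOF'
ALTER TABLESPACE archive_ts READ ONLY;
ALTER SYSTEM SET read_only_open_delayed = TRUE SCOPE=SPFILE;
EOF
# Restart the instance for the parameter to take effect
```

With READ_ONLY_OPEN_DELAYED=TRUE, datafiles of read-only tablespaces are only accessed on first use, so an unavailable filer should not prevent the database from opening.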

Q: What NIC in the ODA should I use to connect to NFS-attached storage?

A: The ODA has plenty of NICs. Use a bonded pair that is not in use.

Q: What mount options should I use?

A: rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp,actimeo=0
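For the native OS client, an /etc/fstab entry using these options might look like the following. The filer name and paths are placeholders:

```shell
# /etc/fstab entry (filer name and paths are placeholders)
filer01:/export/oradata  /u02/oradata  nfs  rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp,actimeo=0  0 0
```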


Q: What is Direct NFS (dNFS)?

A: dNFS is a feature of the Oracle Database that allows an NFS client in the database kernel to directly connect to and access data on an NFS Filer.

Q: Should I use the native OS client or dNFS to connect to my NFS Filer?

A: You can use either, but Oracle recommends using dNFS.

Q: How do I configure and use dNFS?

A: Refer to MOS note 762374.1.
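As a rough sketch of what the MOS note describes, enabling dNFS on an 11.2 database and describing the filer in oranfstab looks roughly like this. The server name, IP address and paths are placeholders; the note remains the authoritative procedure:

```shell
# Enable the dNFS ODM library (Oracle Database 11.2)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Describe the filer in $ORACLE_HOME/dbs/oranfstab (placeholder values)
cat > $ORACLE_HOME/dbs/oranfstab <<'EOF'
server: filer01
path: 192.168.100.10
export: /export/oradata mount: /u02/oradata
EOF
```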

Q: How can I use Hybrid Columnar Compression (HCC) with ODA?

A: HCC is available for use with database files stored on an Oracle ZFS Storage Appliance. You can thus use HCC with ODA as long as the files are stored on an NFS-attached ZFS Storage Appliance. Your ODA must be running the database release shipping in April 2012, and you must connect via dNFS.

Q: How should I distribute my data between the local storage and the NFS filer?

A: Put hot data on the internal storage, as that will give the best performance. Put colder data on the NFS filer.

Q: I have tables containing both hot and cold data. How can I move just the cold data to the filer without affecting the application which expects everything in one table?

A: Use the database partitioning option to keep data in different partitions. Place the hot data in partitions on the internal storage, and cold data in partitions on the NFS Filer.
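A sketch of that approach, with table, column and tablespace names invented purely for illustration:

```shell
sqlplus / as sysdba <<'EOF'
-- Cold partitions in a tablespace on the filer (cold_ts),
-- hot partitions in a tablespace on internal storage (hot_ts)
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_old VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD'))
    TABLESPACE cold_ts,
  PARTITION p_new VALUES LESS THAN (MAXVALUE)
    TABLESPACE hot_ts
);
EOF
```

The application continues to query one table; the optimizer's partition pruning decides which partitions (and thus which storage tier) each query touches.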

Q: Is there a limit to the amount of data I can store on the NFS Filer?

A: Yes, it is limited by the maximum database size supported by the Oracle Database, which is very large.

Q: Is there a white paper on this subject I can read?

A: An ODA-specific white paper will be available shortly. In the meantime, a dNFS white paper describing the generic functionality is available. You can find it here:

eSTEP: Virtualization@Oracle (Part 4: Oracle Solaris Zones and Linux Containers)

After the Oracle VM coverage in the previous two articles we will now cover the Operating System side by looking at the

Oracle Solaris Zones and Linux Containers

Oracle Solaris Zones and Linux Containers are not separate products, but technologies: features of an Operating System. Both are based on the same principle. They provide virtualization at the application level, "above" the OS kernel. Compared to hypervisor-based virtualization, there is no additional software layer here: we have one OS kernel that is shared by many zones or containers.

To put this into perspective, let’s reuse the image from the first article, which shows the positioning of Oracle Solaris Zones; these can roughly be compared to Linux Containers. The difference between the two technologies lies more at the implementation level and in the way they are integrated into the OS.

Let’s first dive into more detail with the

Oracle Solaris Zones

This Solaris feature first appeared in Solaris Express and Sun Solaris 10 3/05 as Solaris Containers, but has always been called Solaris Zones. With Oracle Solaris 11 we now officially call it Oracle Solaris Zones. Zones are a virtualization technology that creates a virtualization layer for applications. We could say a zone is a “sandbox” that provides a playground for an application. These zones are called non-global zones and are isolated from each other, but all share one global zone. The global zone holds the Solaris kernel, the device drivers and the devices, the memory management system, the filesystem and, in many cases, the network stack.

So the global zone sees all physical resources and provides common access to these resources to the non-global zones.

The non-global zones appear to applications like separate Solaris installations.

Zones have their own filesystems, their own process namespace, security boundaries, and their own network addresses. Depending on requirements, zones can also have their own network stack with separate network properties. And yes, there is also a separate administrative login (root) for every non-global zone; still, even as a privileged user there is no way to break out of one non-global zone into a neighboring non-global zone. Seen from the global zone, however, a non-global zone is just a bunch of processes grouped together by a tag called the zoneid.

This type of virtualization is often called lightweight virtualization, because there is almost no overhead to invest in a virtualization layer for the applications running in the non-global zones. We therefore get native I/O performance from the OS. Thus zones are a perfect choice if many applications need to be virtualized and high performance is a requirement.

Because all non-global zones share one global zone, all zones run the same level of OS software, with one exception: branded zones run non-native application environments. For Oracle Solaris 10 we thus have the special case of being able to create Solaris 8 and Solaris 9 Legacy Containers, which provide Solaris 8 and Solaris 9 runtime environments while still sharing the Solaris 10 kernel in the global zone. With Oracle Solaris 11 it is possible to create Solaris 10 Zones.

Within Oracle Solaris 11, zones are much more deeply integrated into the OS than they were in Solaris 10. They are no longer just an additional feature of the OS: zones are well integrated into the whole lifecycle management process of the OS when it comes to (automatic) installation or updates of zones. A big step forward is, once again, the better integration of zones with more kernel security features, which enables more delegated administration of zones. Better integration with ZFS, consistent use of boot environments, network virtualization features and Solaris resource management are additional improvements made to zones in Oracle Solaris 11. Oracle Solaris Zones have always been very easy to set up on the command line and easy to use. If you want a graphical tool to configure zones, you can use Oracle Enterprise Manager Ops Center (which we will cover later in this series).
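To give a feel for how simple the command-line setup is, a minimal Oracle Solaris 11 zone could be created roughly like this (the zone name and zonepath are arbitrary examples):

```shell
# Configure, install and boot a non-global zone (run from the global zone)
zonecfg -z appzone "create; set zonepath=/zones/appzone"
zoneadm -z appzone install
zoneadm -z appzone boot
zlogin -C appzone   # console login to complete first-boot configuration
```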

Now while we have discussed Oracle Solaris Zones, what are:

Linux Containers (LXC)

Is this the same technology as Zones, and if not, how do the two differ?

First of all, compared to Oracle Solaris Zones, this is a genuinely new technology in Linux, starting with kernel 2.6.27, which provides resource management through control groups (also called userspace process containers) and resource isolation through namespaces. The LXC project page has a very good explanation of Linux Containers: “Linux Containers take a completely different approach than system virtualization technologies such as KVM and Xen, which started by booting separate virtual systems on emulated hardware and then attempted to lower their overhead via paravirtualization and related mechanisms. Instead of retrofitting efficiency onto full isolation, LXC started out with an efficient mechanism (existing Linux process management) and added isolation, resulting in a system virtualization mechanism as scalable and portable as chroot, capable of simultaneously supporting thousands of emulated systems on a single server while also providing lightweight virtualization options to routers and smart phones.”

So we are talking about chroot environments that can be created at various isolation levels, but that also share one Linux kernel as isolated groups of processes.
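On Oracle Linux, the corresponding userspace tooling is the lxc utilities. A rough sketch follows; the container name, and the assumption that an "oracle" template is available, are illustrative:

```shell
# Create, start and attach to a container with the LXC userspace tools
lxc-create -n ol_test -t oracle
lxc-start -n ol_test
lxc-console -n ol_test
```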


Oracle Solaris Zones and Linux Containers both offer a lightweight virtualized runtime environment for applications. Solaris Zones have existed since Solaris 10 and are now highly integrated into Oracle Solaris 11. Linux Containers are available as a BETA for Oracle Linux with the Unbreakable Enterprise Kernel, for testing and demonstration purposes only.

With that we'd like to close this article on Oracle Solaris Zones and Linux Containers and hope we've kept you eager to read the ones coming in the following newsletters.

Further Reading

This series already had the following articles:

  • December 2011: Introduction to Virtualization (Matthias Pfützner)
  • January 2012: Oracle VM Server for SPARC (Matthias Pfützner)
  • February 2012: Oracle VM Server for x86 (Matthias Pfützner)

The series will continue as follows (tentative):

  • April 2012: Resource Management as Enabling Technology for Virtualization
    (Detlef Drewanz)
  • May 2012: Network Virtualization (Detlef Drewanz)
  • June 2012: Oracle VM VirtualBox (Detlef Drewanz)
  • July 2012: Oracle Virtual Desktop Infrastructure (VDI) (Matthias Pfützner)
  • August 2012: OpsCenter as Management Tool for Virtualization (Matthias Pfützner)

If you have questions, feel free to contact me at: Detlef Drewanz


Tuesday Mar 20, 2012

eSTEP Newsletter March 2012 now available

Dear Partners,

We would like to inform you that the March issue of our Newsletter is now available.
The issue contains information on the following topics:

Notes from Corporate:

  • Oracle Outperforms NetApp in Midrange Systems on SPECsfs2008 NFS Benchmark, Costs Less Than 1/5 the Price, Global CIO: Larry Ellison's 10-Point Plan For World Domination

Technical Corner:

  • Virtualization @ Oracle (Part 4: Oracle Solaris Zones and Linux Containers), Maximizing Consolidation ROI with Real-Time Data Integration, The Effects of Big Data on the Logistics Industry, Compression in Oracle Database world, Which Software runs on which Hardware, End of Features (EOF) for the Oracle Solaris 11 Release, New material for Oracle Exalogic Elastic Cloud X2-2, SPARC Product Line Update, End of Life: Netra T3 and T2 Systems

Learning & Events:

  • eSTEP Events Schedule, Recently Delivered TechCasts 

How to ...:

  • Fast Track to move your Application to Oracle Solaris 11 (Solaris and Systems Information for ISVs), How to Script Oracle Solaris 11 Zones Creation for a Network-In-a-Box Configuration, How to Manage the ZFS Storage Appliance with JavaScript, OMG! What Did I Just Install?, Serving Multiple Repositories of Oracle Solaris 11, Using Oracle Ksplice to Update Oracle Linux Systems Without Rebooting

You can find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.


Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore; any feedback is appreciated to help us improve the service and information we deliver.

Thanks and best regards,

Partner HW Enablement EMEA

Monday Mar 19, 2012

ODA: Mandatory Patch available

Today we are announcing the availability of an important mandatory patch for the Oracle Database Appliance. Please check MOS note 888888.1 for more details and installation instructions.

Friday Mar 16, 2012

Short Q&A for Oracle Exalogic Elastic Cloud X2-2

Why should a customer consider buying an Exalogic system rather than assembling one themselves using commodity off-the-shelf (COTS) components?

  • Exalogic is pre-integrated and tested to provide fastest time to market and superior overall experience - we build the whole system, test it and ensure that it is functioning correctly before our installation engineer leaves the site
  • Because Oracle develops the Exalogic system and all of its major components, we are able to test the Exalogic system to discover component defects or limitations which impair overall system performance, manageability or stability, and then execute programs to enhance individual components in ways that make the Exalogic system better - when you consider the case of an Oracle Application workload, there is clearly no other vendor that can make a long term commitment to ensuring that the entire application-to-disk deployment will be increasingly efficient, reliable, easier to manage and lower cost to own over time.
  • Only Oracle can support Oracle's applications in a fully virtualized, mission-critical production deployment, and the only platform for such a deployment is Exalogic.
  • Complete monitoring and management stack for the system infrastructure (IB Gateway, compute nodes, storage, OS, OVM), which includes system-level command line diagnostic and configuration utilities
  • Oracle Linux, OVM, all 7320 Storage Device features (including replication and snapshots), HW/VM/IB Fabric management (deployment automation), Exabus APIs (and the underlying InfiniBand stack including EoIB support) software license and support is included in the Exalogic Elastic Cloud Software license - a comparable suite of software and support from any combination of component vendors is both very costly and very complex to deploy and support
  • The Exalogic system is the same system used by our support organization, greatly improving our ability to diagnose and correct issues - our TTR is much shorter on Exalogic than for the same Oracle software products on COTS platforms, largely because the platform is both widely used (Oracle alone has more than 75 Exalogic configurations in production across the company) and is essentially identical in all deployments - every new deployment improves the product quality by exposing it to additional workloads and environments, which feeds back into our support capability and engineering program
  • The number and severity of defects in Exalogic systems is greatly reduced (versus a COTS multi-vendor system) by the preemptive distribution of fully tested full-stack patches (which include device firmware, device drivers, OS patches, and software updates for OVM, OTD, OpsCenter, Exalogic Deployer) and pre-tested non disruptive patching procedures

What is included in an Exalogic system?

  • Exalogic is an engineered system comprised of Hardware and Software.
  • Exalogic hardware and software are separate items on the Exalogic Price List, but they are closely coupled and are of little value separately. As with the Exadata Storage Server, the Exalogic Elastic Cloud Software is installed on the Exalogic Elastic Cloud Hardware at the factory.
  • The Oracle Exalogic Elastic Cloud hardware is available in four basic configurations: Eighth, Quarter, Half and Full rack. Each configuration contains x86 servers with flash SSDs, InfiniBand switches and gateways that comprise the I/O fabric, and an integrated disk storage subsystem.
  • Each Exalogic hardware configuration is pre-installed at the factory with major components of the Exalogic Elastic Cloud Software, called the 'Base Image'. The Exalogic Elastic Cloud Software is the result of thousands of hours of testing, tuning, optimization and hardening, which is the source of Exalogic's greatly improved overall performance, stability and manageability. It is extremely unlikely that any customer (or any competitor) could replicate the Exalogic Elastic Cloud Software, even if they were willing and able to invest the thousands of hours in testing and development that Oracle has since mid 2009.
  • The Exalogic Elastic Cloud Software is the only OS platform on which the Exalogic optimizations in the latest releases of HotSpot, JRockit, WebLogic Server and Oracle Coherence are supported. All of the Exalogic performance optimizations made to those upper-stack products rely on Ethernet-over-InfiniBand (EoIB) and the Sockets Direct Protocol, which are enabled through the Exalogic Networking Stack. In practice, this means that they are logically extensions of the Exalogic platform as well, although obviously those features cannot be used unless the customer has licensed the appropriate products that contain them (e.g., WebLogic Suite, Coherence, etc.).

How does the Exalogic hardware provide 99.9999% (six-nines) high-availability?

Exalogic is designed to be extremely reliable and tolerant of hardware component failures, and applies a no-single-point-of-failure redundancy strategy. Any given Exalogic rack configuration will remain continuously available throughout its life in any given production deployment, uninterrupted by either individual component failures or regular servicing of the system.
Every Exalogic X2-2 configuration is fully redundant and provides automated fault detection and fail-over using techniques which are completely independent of any external software in the following ways:

  • Exalogic Power Distribution Units (PDU) in each Exalogic rack configuration are redundant. For complete redundancy, each PDU should be wired to a different AC source.
  • Exalogic has 2 (two) independent power supplies in each component (InfiniBand switch/gateway, compute node and storage head) that actively balance power - if one fails, or is connected to a PDU that fails, the other takes over (2N redundancy, continuous availability)
  • Exalogic has excess fan capacity in each component (InfiniBand switch/gateway, compute node and storage head) and if a fan dies the temperature sensors will up the RPM on remaining fans to maintain safe operating temperature (N+1 redundancy, continuous availability)
  • Exalogic compute nodes use on-board enterprise-grade SSDs in a RAID1 configuration (continuous availability). These SSDs are hot-swappable.
  • The InfiniBand ports in each HCA are bonded and by default have a link failure detection in the single-digit milliseconds or less. (fast fail-over)
  • Each InfiniBand port on each HCA is connected to a different physical InfiniBand switch, and each port is capable of handling more I/O traffic than a single compute node can generate or receive, making it possible for an Exalogic system to operate at full capacity with up to ½ (half) of the InfiniBand switches disabled.
  • Each InfiniBand gateway provides up to 8 (eight) redundant physical 10GbE connections to the data center service network, allowing each gateway to connect to multiple (redundant) external modular switches. Each Exalogic compute node is configured (by default) with a bonded EoIB interface that is associated with a minimum of two 10G ports on separate InfiniBand Gateways.
  • Exalogic compute nodes talk to the storage heads using NFS and the fail-over delay is governed by the NFS client configuration and can take 30 seconds to a minute. Following fail-over, the read performance of the storage device will be impaired while the read cache is rebuilt. (fail-over)
  • The disks in the Exalogic storage array are each separately cabled and are connected to the storage heads in a ZFS cluster. It is possible to configure the storage subsystem for multiple levels of redundancy, including striping and mirroring. The hard disks in the storage array are hot-swappable.
  • The Exalogic storage heads support block-level storage replication, which is the foundation of Exalogic's Disaster Recovery capability. It is possible to use up to 2 (two) GbE ports on each storage head for redundant direct connection to the datacenter network that will be used to connect the primary Exalogic site to the backup Exalogic site. It is also possible to bond these ports for the purposes of automated fault detection and fail-over.
  • The GbE management network is not redundant because the failure of that network will not cause the system to cease serving end-user traffic. The only exception would be a case where the customer has configured the system to access external NIS, DNS or similar resources exclusively over the management network. We recommend having those external resources available on the data center service network or deployed on one of the compute nodes directly attached to the InfiniBand fabric.

The likelihood of the failure of any single component is so low that it is extremely unlikely that a failed component could not be repaired, replaced or restarted before its backup/replacement also failed. This means that the reliability of the Exalogic X2-2 configuration as a whole is extremely high, and system downtime is unlikely to ever exceed the time required for applications to fail over to the secondary storage head in the system. Average cumulative downtime of an Exalogic configuration, during any one-year window, is very likely to be less than 5 minutes, even in the case of multiple sequential component failures.

It is important to note that all software executing on a given compute node will be unavailable if that compute node fails or is taken out of service. Exalogic does not provide any mechanism for continuous availability of applications that do not perform their own state replication or clustering. All sessions on a given application server instance, for example, will be lost if the compute node hosting that instance fails or is taken out of service unless the application server is deployed in a fault-tolerant configuration.

What is the Exalogic Elastic Cloud Software?

The Oracle Exalogic Elastic Cloud Software is the unique set of software components, tools and documentation required to make the Exalogic Elastic Cloud Hardware functional and usable as a platform for Oracle's Fusion Middleware and business applications. The Exalogic Elastic Cloud Software consists of a number of components, many of which are pre-integrated with the specific Oracle Solaris and Oracle Linux operating system images and device firmware installed on the Exalogic Elastic Cloud Hardware at the time of manufacture. There is no practical means of using the Exalogic Elastic Cloud Hardware that does not require use of the Exalogic Elastic Cloud Software, nor are there any supported approaches to "hard partitioning" that would allow customers to avoid licensing all physical processors for a given Exalogic Elastic Cloud X2-2 Hardware compute node (server) that is powered on and in use.
The principal components of the Exalogic Elastic Cloud Software are as follows:

  • Exabus: an assembly of special InfiniBand gateway hardware, device drivers, device firmware, software libraries and configuration files that allow other software ("applications") to make use of the Exalogic Elastic Cloud hardware and ensure the optimal performance and reliability of the system. The Exabus firmware and software extend and integrate Oracle Linux, Oracle Solaris and the OpenFabrics Enterprise Distribution (OFED) with the unique hardware design of the InfiniBand gateways and switches in the so-called "I/O backplane" of the Exalogic system. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture.
  • Exalogic Configuration Utility: A desktop tool used to configure the Exalogic system management and data center service network interfaces and internal subnets.
  • Exalogic Distributed Command Line Interface: A command-line tool that allows commands to be executed on some or all of the Exalogic nodes simultaneously, at the discretion of the operator. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture. 
  • Exalogic Topology Verifier: This command-line tool automatically verifies the InfiniBand topology of the Exalogic system, ensuring that the correct topology is applied for each given system configuration: Quarter Rack, Half Rack or Full Rack. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture.
  • Exalogic InfiniCheck: This tool verifies the correct operation of every InfiniBand device and port on the fabric, ensuring that all ports and connectors are functioning correctly. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture. 
  • Exalogic Hardware & Firmware Profiler: This tool automatically verifies that all of the hardware devices connected to the Exalogic system fabric are supported and run correct, compatible device firmware versions. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture.
  • Exalogic Software Profiler: This tool verifies that all of the Linux or Solaris software packages installed on any of the system's compute nodes are of the correct version and do not jeopardize the Exalogic system's performance, security or stability. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture. 
  • Exalogic Boot Manager: This tool allows system operators to easily re-image individual Exalogic compute nodes, via external PXE servers or network-mounted disk images. This software is installed on the Exalogic Elastic Cloud Hardware at the time of manufacture.
  • Exalogic Elastic Cloud Software options for WebLogic Suite, Coherence and Tuxedo: This is a set of features implemented within the other Fusion Middleware products that are technically dependent on the underlying Exalogic Elastic Cloud Hardware and Software.

Friday Mar 02, 2012

ODA: How to configure DB Vault on an Oracle Database Appliance

The procedure for enabling the Database Vault option on ODA involves two steps:
  1. Use chopt to enable DV on ODA
  2. Enable DV in the Database by using the Database Configuration Assistant (DBCA)
Here are some more details:
  1. When you install Oracle Database, some options are enabled and others are disabled. To enable or disable a particular database option for an Oracle home, shut down the database and use the chopt tool.
    See: Oracle Database Installation Guide - 5.2.8 Enabling and Disabling Database Options
    • To enable the Oracle Database Vault option in your Oracle binary files, use the following commands:
      • cd $ORACLE_HOME
      • srvctl stop database -d <myDb>
      • chopt enable dv
      • srvctl start database -d <myDb>
  2. After you install Oracle Database, you must register (that is, enable) Oracle Database Vault…using the Database Configuration Assistant (DBCA).
    See: Database Vault Administrator’s Guide
    - Registering (Enabling) Oracle Database Vault
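After registration, you can check whether the option is active. A query against V$OPTION is a standard way to do so:

```shell
sqlplus / as sysdba <<'EOF'
SELECT parameter, value FROM v$option
 WHERE parameter = 'Oracle Database Vault';
EOF
```

The VALUE column should report TRUE once Database Vault is enabled.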

ODA: Available on MOS now

OAK is available on MOS now.

Oracle Database Appliance patches consist of two components:

  • OAK aka Oracle Appliance Manager
    Contains OS, Component Firmware and Oracle Appliance Manager Modules 
  • End-User Bundle
    Contains the Grid Infrastructure and RDBMS components

Latest Releases

For more details please review MOS note 888888.1.

ODA: Oracle Database Appliance Resource Center

In the OPN Oracle Database Appliance Knowledge Zone, under the Sell tab, you will find a link to the Oracle Database Appliance Resource Center.

This partner community workspace includes extensive FAQs, sales tools, presentations, press references, and more. Access the workspace (Note: Registration is required).

Thursday Mar 01, 2012

eSTEP TechCast - March 2012 Material available

Dear Partners,

We would like to extend our sincere thanks to those of you who attended our TechCast on "ZFS Storage Appliance - Today and Tomorrow (history obsoleted)".

The materials (presentation, replay) from the TechCast are now available for all of you via our eSTEP portal.  You will need to provide your email address and the pin below to access the downloads. Link to the portal is shown below.

PIN: eSTEP_2011

The downloads can be found under tab Events --> TechCast.

Feel free to explore also the other delivered TechCasts and more useful information under the Download and Links tab. Any feedback is appreciated to help us improve the service and information we deliver.

Thanks and best regards,

Partner HW Enablement EMEA


eSTEP is an integrated program for our partners, focusing on the technical community to provide them with relevant technical information for their day-to-day business with us.

