Monday Feb 16, 2015

How to Build Software Defined Networks Using Elastic Virtual Switches - Part 1

Oracle Solaris 11.2 enhances the existing, integrated software-defined networking (SDN) technologies provided by earlier releases of Oracle Solaris to deliver much greater application agility without the added overhead of expensive network hardware.

It now enables application-driven, multitenant cloud virtual networking across a completely distributed set of systems; decoupling from the physical network infrastructure; and application-level network service-level agreements (SLAs)—all built in as part of the platform. Enhancements and new features include the following:

• Network virtualization with virtual network interface cards (VNICs), elastic virtual switches, virtual local area networks (VLANs), and virtual extensible VLANs (VXLANs)
• Network resource management and integrated, application-level quality of service (QoS) to enforce bandwidth limits on VNICs and traffic flows (see the sketch after this list)
• Cloud readiness, a core feature of the OpenStack distribution included in Oracle Solaris 11
• Tight integration with Oracle Solaris Zones
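
As a minimal sketch of the network virtualization and QoS pieces above, the commands below create a VNIC on a physical datalink, cap its bandwidth, and attach a bandwidth-limited flow. The link names (net0, vnic0) and the limits are illustrative assumptions, not values from the article.

# Create a VNIC on top of the physical link net0
dladm create-vnic -l net0 vnic0
# Enforce an application-level SLA: cap the VNIC at 500 Mb/s
dladm set-linkprop -p maxbw=500M vnic0
# Create a flow for HTTP traffic on the VNIC and give it its own limit
flowadm add-flow -l vnic0 -a transport=tcp,local_port=80 httpflow
flowadm set-flowprop -p maxbw=100M httpflow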

About the Elastic Virtual Switch Feature of Oracle Solaris

The Elastic Virtual Switch (EVS) feature provides a built-in distributed virtual network infrastructure that can be used to deploy and manage virtual switches that are spread across several compute nodes. These compute nodes are the physical machines that host virtual machines (VMs).

An elastic virtual switch is an entity that represents explicitly created virtual switches that belong to the same Layer 2 (L2) segment. An elastic virtual switch provides network connectivity between VMs connected to it from anywhere in the network.
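
As a rough illustration (not taken from the article), EVS administration in Oracle Solaris 11.2 is driven by the evsadm command, with VNICs in zones or VMs then connected to the switch through its virtual ports. The names, subnet, and exact options below are assumptions for this sketch, and a working setup also requires the EVS controller to be configured first.

# Create an elastic virtual switch, give it an IP network, and add a virtual port
evsadm create-evs tenant0
evsadm add-ipnet -p subnet=192.0.2.0/24 tenant0/ipnet0
evsadm add-vport tenant0/vport0
# Inspect the switch and its ports; a VNIC in a zone or VM is then connected to vport0
evsadm show-evs
evsadm show-vport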

For more information about how to combine software-defined networking (SDN) and Elastic Virtual Switch (EVS) technologies, with examples, see the following article: How to Build Software Defined Networks Using Elastic Virtual Switches - Part 1

Monday Nov 24, 2014

Impressions from Oracle Week 2014

Last week the 21st Oracle Week took place in Herzlia, Israel, with more than 1,800 participants.
This is the largest paid IT event in Israel. The event included 100 technical seminars covering many Oracle technologies such as: Database & Big Data, Business Analytics, Development and Infrastructure. Source
The event was very impressive, with lots of good energy. It was a unique opportunity to meet with customers and colleagues, develop new business contacts, and eat delicious desserts :)
The Oracle Systems division presented two demos: the first showed the ZFS Storage Appliance and the second the Oracle Database Appliance (ODA). Hundreds of people came by to see and hear about Oracle technology, and received cool headsets and a smiley on their badges :)
During the event I ran three seminars: Introduction to OpenStack, with a guest lecture by Mark Markman from ECI Telecom; a Hadoop hands-on lab, which demonstrated how to set up a Hadoop cluster on Oracle Solaris 11 using Zones and ZFS; and Big Data Infrastructure - Challenges, Technologies and Practices, with a guest lecture by Sam Babad from MidLink. The total audience for these sessions was 40 people.

A big thank-you to the participants for the great interaction, to the organizers from John Bryce, especially Yael Dotan, Margarita Ripkin, Oshrat Shem-Tov, Issrae Sakhafie, and Oren Parnes, and to Dorit Almog from Oracle for a great event!

Looking forward to seeing you next year at Oracle Week 2015!

Wednesday Oct 01, 2014

Oracle Software in Silicon Cloud

I'm happy to announce that the Oracle Software in Silicon Cloud is available today.

Oracle’s revolutionary Software in Silicon technology extends the design philosophy of engineered systems to the chip. Co-engineered by Oracle’s software and microprocessor engineers, Software in Silicon implements accelerators directly into the processor to deliver a rich feature-set that enables quick development of databases and applications that are more reliable and run faster. Now, with the Oracle Software in Silicon Cloud, developers can have a secure environment to test and improve their software and exploit the unique advantages of Oracle’s Software in Silicon technology.

Oracle Software in Silicon Cloud provides developers with a ready-to-run virtual machine environment to install, test, and improve their code in a robust and secure cloud platform powered by the revolutionary Software in Silicon technology in Oracle's forthcoming SPARC M7 processor running Oracle Solaris.

This hardware-enabled functionality can be used to detect and prevent data corruption and security violations. Test workloads have demonstrated results that are on average 40x faster than software-only tools, with some tests showing more than 80x improvements. This performance advantage means the capability can eventually be left always-on in production rather than being limited to test environments.

Oracle Software in Silicon Cloud users will have access to the latest Oracle Solaris Studio release that includes tools to detect numerous types of memory corruption errors and provide detailed diagnostic information to aid developers in quickly improving code reliability.
Code examples, demonstrations, and documentation will help users more quickly exploit the unique advantage of running applications with Software in Silicon technology.

Software in Silicon features implemented in Oracle’s forthcoming SPARC M7 processor include:

Application Data Integrity is the first-ever end-to-end implementation of memory-access validation in hardware. Designed to help prevent security bugs such as Heartbleed from putting systems at risk, it enables hardware monitoring of memory requests by software processes in real time and stops unauthorized access to memory, whether that access is due to a programming error or a malicious attempt to exploit buffer overruns. It also helps accelerate code development and helps ensure software quality, reliability, and security.

Query Acceleration increases in-memory database query processing performance by operating on data streaming directly from memory via extremely high-bandwidth interfaces, with speeds up to 160 GB/sec, resulting in tremendous performance gains. Query acceleration is implemented in multiple engines in the SPARC M7 processor.

Decompression units in the Software in Silicon acceleration engines significantly increase usable memory capacity. The units on a single processor run data decompression with performance that is equivalent to 16 decompression PCI cards or 60 CPU cores. This capability allows compressed databases to be stored in-memory while being accessed and manipulated at full performance.

For more information about the chip see this.
Read the full news press here.
For recent posts related to the Oracle Software in Silicon Cloud, see "Securing a Cloud-Based Data Center" and "Building a Cloud-Based Data Center".

Wednesday Sep 17, 2014

Hadoop Hands-on Lab Oracle OpenWorld 2014

In a few days the largest IT event in the world, Oracle OpenWorld 2014, will start. As in past years, the event keeps growing, and we will host over 2,000 sessions and many hands-on labs.

These labs are a unique opportunity to familiarize yourself with Oracle products across the entire IT portfolio.

Our specific interest is in Big Data, Solaris, and virtualization.  This year Jeff Taylor and I will present the following lab:  Set Up a Hadoop 2 Cluster with Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive [HOL2086] 
This hands-on lab covers all the prerequisites and demonstrates how to set up an Apache Hadoop 2 (YARN) cluster using Oracle Solaris 11 technologies such as Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive. Key topics include the Hadoop Distributed File System (HDFS) and the Hadoop MapReduce programming model.
It also covers the Hadoop installation process and the cluster building blocks, namely: NameNode, Resource Manager, History Server, and DataNodes.

In addition, you will learn how to combine Oracle Solaris 11 technologies for better scalability and data security, how to enable an HDFS high-availability cluster, and how to run a MapReduce job.
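
To give a feel for the HDFS high-availability and MapReduce portions of the lab, here is a hedged sketch; the NameNode service IDs (nn1, nn2), the HADOOP_PREFIX path, and the HDFS paths are assumptions for illustration, not the lab's actual values.

# Check which NameNode is currently active, then exercise a manual failover
hdfs haadmin -getServiceState nn1
hdfs haadmin -failover nn1 nn2
# Run a sample MapReduce (YARN) job against data already loaded into HDFS
yarn jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /input /output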

Please register by using the link below:

Set Up a Hadoop 2 Cluster with Oracle Solaris Zones, Oracle Solaris ZFS, and Unified Archive [HOL2086]

See you at OpenWorld!

Orgad

Tuesday Aug 19, 2014

Securing a Cloud-Based Data Center

No doubt, with all the media reports about stolen databases and private information, a major concern when committing to a public or private cloud must be preventing unauthorized access to data and applications.
In this article, we discuss the security features of Oracle Solaris 11 that provide a bullet-proof cloud environment.
As an example, we show how the Oracle Solaris Remote Lab implementation utilizes these features to provide a high level of security for its users.

Note: This is the second article in a series on cloud building with Oracle Solaris 11. See Part 1 here. 

When we build a cloud, the following aspects related to the security of the data and applications in the cloud
become a concern:

• Sensitive data must be protected from unauthorized access while residing on storage devices, during
transmission between servers and clients, and when it is used by applications.

• When a project is completed, all copies of sensitive data must be securely deleted and the original
data must be kept permanently secure.

• Communications between users and the cloud must be protected to prevent exposure of sensitive information through "man in the middle" attacks.

• Limiting the operating system’s exposure protects against malicious attacks and penetration by
unauthorized users or automated “bots” and “rootkits” designed to gain privileged access.

• Strong authentication and authorization procedures further protect the operating system from tampering.

• Denial of Service attacks, whether they are started intentionally by hackers or accidentally by other cloud users, must be quickly detected and deflected, and the service must be restored.

In addition to the security features in the operating system, deep auditing provides a trail of actions that can identify violations, issues, and attempts to penetrate the security of the operating system.
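
As a minimal sketch of two of these controls in Oracle Solaris 11 (protecting data at rest and auditing), the commands below create an encrypted ZFS dataset and review the audit configuration. The dataset name and the audit classes are illustrative assumptions.

# Protect data at rest: create a dataset encrypted with a passphrase-wrapped key
zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/projectA
zfs get encryption rpool/export/projectA
# Auditing: review the preselected audit classes and, for example, record logins and administrative actions
auditconfig -getflags
auditconfig -setflags lo,as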

Combined, these threats and risks reinforce the need for enterprise-grade security solutions that are specifically designed to protect cloud environments. With Oracle Solaris 11, the security of any cloud is ensured.

This article explains how.

Tuesday Jul 08, 2014

Network Virtualization High Availability

How to add high availability to the network infrastructure of a multitenant cloud environment using the DLMP aggregation technology introduced in Oracle Solaris 11.1.
This article is Part 1 of a two-part series. In Part 1, we will cover how to implement network HA using datalink multipathing (DLMP) aggregation technology, which was introduced in Oracle Solaris 11.1.

In Part 2 of this series, we will explore how to secure the network and perform typical network management operations for an environment that uses DLMP aggregations.

Once we virtualize a network cloud infrastructure using Oracle Solaris 11 network virtualization technologies—such as virtual network interface cards (VNICs), virtual switches, load balancers, firewalls, and routers—the network itself becomes an increasingly critical component of the cloud infrastructure.

In order to add resiliency to the network infrastructure layer, we need to implement an HA solution at this layer, just as we would for any other mission-critical component of the data center.

A DLMP aggregation allows us to deliver resiliency to the network infrastructure by providing transparent failover and increasing throughput.
The objects that are involved in the process are VNICs, irrespective of whether they are configured inside Oracle Solaris Zones or in logical domains under Oracle VM Server for SPARC.

Using this technology, you can add HA to your current network infrastructure without the cross-organizational complexity that might often be associated with this kind of solution.

The benefits of this technology are clear, particularly when set against the limitations of existing technologies:

Since the IEEE 802.3ad trunking standard does not cover the case of building a trunk across multiple network switches, the network switch becomes a single point of failure (SPOF). Some vendors have added this capability to their products, but these implementations are vendor-specific and therefore prevent combining switches from multiple vendors when building a multi-switch trunk. Because Oracle Solaris provides the resilience itself, a DLMP aggregation can be implemented across two different network switches, thus eliminating the network switch as a SPOF. As an additional benefit, because the aggregation is implemented at the operating system layer, there is no need to set anything up on the switch.

Building a network HA solution that is based on previously available IP network multipathing (IPMP) can be a complex task. With IPMP, HA is implemented at Layer 3 (the IP layer), which needs to be configured in the global zones and within each zone, and requires multiple VNICs to be assigned to each zone or virtual machine (VM). This involves more configuration steps, requires spare IP addresses out of the address pool, and generally can be an error-prone process. In contrast, the DLMP aggregation setup is much simpler since all the configuration takes place at Layer 2 in the global zone; therefore, every non-global zone can directly benefit from the underlying technology without the need for additional configuration. In addition, every new Oracle Solaris Zone that is provisioned automatically benefits from this capability. Moreover, we can create an aggregation over four 10 Gb/sec network interfaces; combining all the interfaces together, we can achieve up to 40 Gb/sec of network bandwidth.
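
As a minimal sketch of the setup described above (link and aggregation names are illustrative assumptions), a DLMP aggregation and a VNIC on top of it can be created with just a few commands in the global zone:

# Create a DLMP-mode aggregation over two physical NICs, which may be cabled to different switches
dladm create-aggr -m dlmp -l net0 -l net1 aggr0
# Show the aggregation and the state of each port
dladm show-aggr -x aggr0
# VNICs for zones or VMs are then created on top of the aggregation
dladm create-vnic -l aggr0 vnic0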

DLMP can provide additional benefits when employed together with other network virtualization technologies that are implemented in the Oracle Solaris 11 operating system, such as link protection and the ability to configure a bandwidth limit on a VNIC or a traffic flow to meet service-level agreements (SLAs). Combining these technologies provides for a uniquely compelling network solution in terms of HA, security, and performance in a cloud environment.
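
To illustrate the combination mentioned above, here is a hedged sketch of adding link protection and a bandwidth limit to a VNIC that rides on the DLMP aggregation; the VNIC name, address, and values are assumptions.

# Restrict the VNIC to its assigned MAC and IP addresses (ip-nospoof also requires allowed-ips to be set)
dladm set-linkprop -p protection=mac-nospoof,ip-nospoof vnic0
dladm set-linkprop -p allowed-ips=192.0.2.10 vnic0
# Enforce an SLA by capping the VNIC's bandwidth
dladm set-linkprop -p maxbw=2G vnic0
dladm show-linkprop -p protection,allowed-ips,maxbw vnic0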

Monday May 19, 2014

How to Set Up a Hadoop 2.2 Cluster From the Unified Archive

Tech Article: How to Set Up a Hadoop 2.2 Cluster From the Unified Archive.
Learn how to build an Apache Hadoop 2.2 (YARN) cluster using Oracle Solaris Zones, the ZFS file system, and the new Unified Archive capability of Oracle Solaris 11.2, all on a single system.
Also see how to configure manual or automatic failover, and how to use the Unified Archive to create a "cloud in a box" and deploy a bare-metal system.



The article starts with a brief overview of Hadoop and follows with an example of setting up a Hadoop cluster with two NameNodes, a Resource Manager, a History Server, and three DataNodes. As a prerequisite, you should have a basic understanding of Oracle Solaris Zones and network administration.

Table of Contents:
About Hadoop and Oracle Solaris Zones
Download and Install Hadoop
Configure the Network Time Protocol
Configure the Active NameNode
Set Up the Standby NameNode and the ResourceManager
Set Up the DataNode Zones
Format the Hadoop File System
Start the Hadoop Cluster
About Hadoop High Availability
Configure Manual Failover
About Apache ZooKeeper and Automatic Failover
Configure Automatic Failover
Create a "Cloud in a Box" Using Unified Archive
Deploy a Bare-Metal System from a Unified Archive
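
As a rough illustration of the Unified Archive steps (the archive path is an assumption, and option details may vary), a clone archive of the configured system, including its zones, can be created and inspected with archiveadm; the resulting .uar file can then be used to deploy new zones or a bare-metal system.

# Create a clone archive of the running system and examine its contents
archiveadm create /var/tmp/hadoop-cloud.uar
archiveadm info /var/tmp/hadoop-cloud.uar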

Tuesday Apr 08, 2014

Tech Article: Building a Cloud-Based Data Center - Part 1

Tech Article: Building a Cloud-Based Data Center - Part 1 by Ron Larson and Richard Friedman
This article discusses the factors to consider when building a cloud,
the cloud capabilities offered by Oracle Solaris 11, and the structure of Oracle Solaris Remote Lab, an Oracle implementation of an Oracle Solaris 11 cloud. 

Topics included in this article:

Why a Cloud? And Why Oracle Solaris 11?
Cloud Benefits—Solving Business Needs
The Cloud as Multitier Data Center Virtualization
OS Virtualization with Oracle Solaris Zones
The Cloud as a Service
Choosing the Virtualization Model That Fits
Advantages of Creating a Cloud Using Oracle Solaris Zones
Oracle Solaris Remote Lab—An Overview

This is the first in a series of articles that show how to build a cloud with Oracle Solaris 11.
In our next article, we take a look at how Oracle Solaris 11 provides data security for Oracle Solaris Remote Lab.
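
Since Oracle Solaris Zones are the article's OS virtualization building block, here is a minimal, hedged sketch of creating one; the zone name and path are illustrative assumptions.

# Configure, install, and boot a non-global zone
zonecfg -z webzone "create; set zonepath=/zones/webzone"
zoneadm -z webzone install
zoneadm -z webzone boot
# Log in to the zone's console to complete the first-boot configuration
zlogin -C webzone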


Sunday Dec 29, 2013

Presentations from Oracle & ilOUG Solaris Forum meeting, Dec 18th, Tel-Aviv

Thank you for attending the Israel Oracle User Group (ilOUG) Solaris Forum meeting on Wednesday. I am posting here presentations from the event. I am also pointing you to the dim_STAT tool, if you are looking for a powerful monitoring solution for Solaris systems.
The event was reasonably well attended with 20 people from various companies.

The meeting was broken up into two sections: presentations about the Solaris ecosystem and Solaris as a big data platform, with Hadoop as a use case; and a comparison between Oracle Solaris and Linux, plus a customer use case of petabyte-scale data migration using Solaris ZFS.

I am posting here responses and links for the topics that came up during the evening:
During the meeting the participants asked us, "How can we verify that an application which is running on Solaris 10 will run smoothly on Solaris 11, and how can we build an application that will be able to leverage the new SPARC CPUs?"
For this task you can use the Oracle Solaris Preflight Applications Checker which enables you to determine the Oracle Solaris 11 “readiness” of an application by analyzing a working application on Oracle Solaris 10.
A successful check with this tool is a strong indicator that an ISV may run a given application without modifications on Oracle Solaris 11.
In addition, you can analyze and improve the application's performance during the development phase using Oracle Solaris Studio 12.3.

Another useful tool for performance analysis is Solaris DTrace. You can leverage the DTrace Toolkit, a collection of roughly 200 scripts that help troubleshoot problems on a system. You can find the latest version of these scripts in /usr/demo/dtrace on Oracle Solaris 11.
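
As a quick taste of DTrace (these are generic one-liners, not taken from the meeting), the following commands answer two common troubleshooting questions from the command line:

# Which processes are making the most system calls?
dtrace -n 'syscall:::entry { @[execname] = count(); }'
# How many bytes of disk I/O is each process issuing?
dtrace -n 'io:::start { @[execname] = sum(args[0]->b_bcount); }'
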
Another topic that came up was, “How to accelerate application deployment time by using pre-built Solaris images”.
For this task, you can use Oracle VM Templates, which provide an innovative approach to deploying a fully configured software stack by offering pre-installed and pre-configured software images.
Use of Oracle VM Templates eliminates the installation and configuration costs, and reduces the ongoing maintenance costs, thus helping organizations achieve faster time to market and lower cost of operations.

During the Solaris vs. Linux comparison presentation, the topic that received the most attention from the audience was, "What are the benefits of Oracle Solaris ZFS versus Linux btrfs?"
In addition to the link above, here is a link to COMSTAR which is a Solaris storage virtualization technology.
Common Multiprotocol SCSI Target (COMSTAR) is a software framework that enables you to convert any Oracle Solaris 11 host into a SCSI target device that can be accessed over a storage network by initiator hosts.
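
As a hedged sketch of what that looks like in practice (pool name, volume name, and size are illustrative assumptions), a ZFS volume can be exported as an iSCSI LUN with a handful of commands:

# Enable the STMF framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default
# Create a ZFS volume and register it as a logical unit
zfs create -V 20g rpool/export/lun0
stmfadm create-lu /dev/zvol/rdsk/rpool/export/lun0
# Make the LU visible to initiators and create an iSCSI target
stmfadm add-view <LU-GUID-from-create-lu-output>
itadm create-target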

In the final section of the meeting, Haim Tzadok from Oracle partner Grigale and Avi Avraham from Walla presented together the customer use case of Peta-scale data migration using Oracle Solaris ZFS.
Walla is one of the most popular web portals in Israel, offering unlimited storage for email accounts.
Walla uses Solaris ZFS for its mail server storage. The ZFS feature that allows them to reduce storage cost and improve disk I/O performance is the ability to choose the block size (from 4 KB up to 1 MB) when creating a file system.
Like many mail systems, this workload consists of many small files; by tuning the ZFS block size away from its default, you can optimize your storage subsystem and align it with the application (the mail server).
The final result is a disk I/O performance improvement without the need to invest extra budget in superfluous storage hardware.
For more information, see the documentation on the ZFS block size.
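
A minimal sketch of that tuning (dataset name and values are illustrative assumptions; the record size affects files written after the change):

# Create a dataset whose record size matches the mail server's typical I/O size
zfs create -o recordsize=8k rpool/export/mailstore
zfs get recordsize rpool/export/mailstore
# The property can also be adjusted later if the workload changes
zfs set recordsize=128k rpool/export/mailstore
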
A big thank-you to the participants for the engaged discussions and to ilOUG for a great event!
Register to ilOUG and get the latest updates.

Tuesday Dec 17, 2013

Performance Analysis in a Multitenant Cloud Environment Using Hadoop Cluster and Oracle Solaris 11

Oracle Solaris 11 comes with a new set of commands that provide the ability to conduct
performance analysis in a virtualized multitenant cloud environment. Performance analysis in a
virtualized multitenant cloud environment with different users running various workloads can be a
challenging task for the following reasons:

Virtualization software adds an abstraction layer to enable better manageability. Although this makes it much simpler to manage the virtualized resources, it is very difficult to find the physical system resources that are overloaded.

Each Oracle Solaris Zone can have a different workload; it can be disk I/O, network I/O, CPU, memory, or a combination of these.

In addition, a single Oracle Solaris Zone can overload the entire system's resources. It is very difficult to observe the environment; you need to be able to monitor it from the top level to see all the virtual instances (non-global zones) in real time, with the ability to drill down to specific resources.


The benefits of using Oracle Solaris 11 for virtualized performance analysis are:

Observability. The Oracle Solaris global zone is a fully functioning operating system, not a proprietary hypervisor or a minimized operating system that lacks the ability to observe the entire environment, including the host and the VMs, at the same time. The global zone can see all the non-global zones' performance metrics.

Integration. All the subsystems are built into the same operating system. For example, the ZFS file system and the Oracle Solaris Zones virtualization technology are integrated together. This is preferable to mixing technologies from many vendors, which causes a lack of integration between the different operating system (OS) subsystems and makes it very difficult to analyze all the different OS subsystems at the same time.

Virtualization awareness. The built-in Oracle Solaris commands are virtualization-aware, and they can provide performance statistics for the entire system (the Oracle Solaris global zone). In addition to providing the ability to drill down into every resource (Oracle Solaris non-global zones), these commands provide accurate results during the performance analysis process.

In this article, we are going to explore four examples that show how we can monitor a virtualized environment with Oracle Solaris Zones using the built-in Oracle Solaris 11 tools. These tools provide the ability to drill down to specific resources, for example, CPU, memory, disk, and network. In addition, they can print statistics per Oracle Solaris Zone and provide information about the running applications.
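
For a flavor of these built-in, virtualization-aware tools (the intervals and options below are illustrative; the article's own examples may differ), the global zone can observe all zones at once and then drill down:

# Per-zone utilization summary, refreshed every 5 seconds
zonestat 5
# Per-zone physical memory usage
zonestat -r physical-memory 5
# Per-zone process activity summary
prstat -Z
# Disk and datalink statistics from the global zone
iostat -xnz 5
dlstat -i 5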


Read the article: Performance Analysis in a Multitenant Cloud Environment

Monday Dec 09, 2013

Presentations from Oracle Week 2013

Thanks for attending Oracle Week 2013, the largest paid IT event in Israel, with 140 technical seminars and 1,800 participants.


Source

Oracle ISV Engineering ran two seminars: Built for Cloud: Virtualization Use Cases and Technologies in Oracle Solaris 11, delivered with two Oracle partners, Grigale and 4NET Plus; and Hadoop Cluster Installation and Administration - Hands-on Workshop.

I am posting here the presentations and a link to the Hadoop hands-on lab, for further reading on Hadoop and Oracle Solaris.

A big thank-you to the participants for the engaged discussions and to the organizers from John Bryce especially Yael Dotan and Yaara Raz for a great event!

Tuesday Oct 22, 2013

How to Set Up a Hadoop Cluster Using Oracle Solaris (Hands-On Lab)


Oracle Technology Network (OTN) published the "How to Set Up a Hadoop Cluster Using Oracle Solaris" OOW 2013 Hands-On Lab.
This hands-on lab presents exercises that demonstrate how to set up an Apache Hadoop cluster using Oracle Solaris
11 technologies such as Oracle Solaris Zones, ZFS, and network virtualization. Key topics include the Hadoop Distributed File System
(HDFS) and the Hadoop MapReduce programming model.
We will also cover the Hadoop installation process and the cluster building blocks:
NameNode, a secondary NameNode, and DataNodes. In addition, you will see how you can combine the Oracle Solaris 11 technologies for better
scalability and data security, and you will learn how to load data into the Hadoop cluster and run a MapReduce job.

Summary of Lab Exercises
This hands-on lab consists of 13 exercises covering various Oracle Solaris and Apache Hadoop technologies:
    Install Hadoop.
    Edit the Hadoop configuration files.
    Configure the Network Time Protocol.
    Create the virtual network interfaces (VNICs).
    Create the NameNode and the secondary NameNode zones.
    Set up the DataNode zones.
    Configure the NameNode.
    Set up SSH.
    Format HDFS from the NameNode.
    Start the Hadoop cluster.
    Run a MapReduce job.
    Secure data at rest using ZFS encryption.
    Use Oracle Solaris DTrace for performance monitoring.
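
To preview a few of these exercises (paths and file names are illustrative assumptions based on a typical Hadoop 1.x layout, not the lab's exact values), formatting HDFS, starting the daemons, and running a job look roughly like this:

# On the NameNode: format HDFS, then start the HDFS and MapReduce daemons
hadoop namenode -format
start-dfs.sh
start-mapred.sh
# Load some data and run the wordcount example job
hadoop fs -mkdir /input
hadoop fs -put /usr/share/lib/dict/words /input
hadoop jar $HADOOP_HOME/hadoop-examples-*.jar wordcount /input /output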
 

Read it now

Thursday Sep 12, 2013

Solaris 11 Integrated Load Balancer Hands on Lab at Oracle Open World 2013

Oracle Solaris 11 offers a new feature called the Integrated Load Balancer (ILB). To find out more about this capability, and how to set up and configure ILB from scratch, join a hands-on lab I'm hosting at Oracle OpenWorld 2013 called Oracle Solaris Integrated Load Balancer in 60 Minutes [HOL10181].


The objective of this lab is to demonstrate how the Oracle Solaris Integrated Load Balancer (ILB) provides an easy and fast way of deploying a load-balancing system. Various Solaris 11 technologies, such as Zones, ZFS, and network virtualization, are used to create a virtual "data center in a box" for testing different load-balancing scenarios.

During this session, you are going to configure and use ILB inside an Oracle Solaris Zone to balance the load across three Apache Tomcat web servers running a simple JavaServer Pages (JSP) application developed specially for this lab.
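
As a rough, hedged sketch of the kind of configuration the lab walks through (the addresses, ports, and names are illustrative assumptions, and the option syntax may differ slightly from the lab's), ILB is driven by the ilbadm command:

# Enable the ILB service, then group the three Tomcat back ends
svcadm enable ilb
ilbadm create-servergroup -s servers=192.0.2.11:8080,192.0.2.12:8080,192.0.2.13:8080 tomcat-sg
# Create a half-NAT, round-robin rule that exposes the group behind a virtual IP
ilbadm create-rule -e -p -i vip=192.0.2.100,port=80,protocol=tcp -m lbalg=roundrobin,type=HALF-NAT -o servergroup=tomcat-sg tomcat-rule
ilbadm show-rule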

Register Now

Tuesday Sep 10, 2013

The Network is the Computer - Still

In 1984, John Gage was able to predict the future of computing with the phrase, "the network is the computer". In his visionary mind, he saw how computer networks would accelerate the capability of every device connected to the network. Can you imagine the iPhone without the App Store, or web search without Google? It goes on and on...

Another aspect of this prediction is the ability of the network infrastructure to connect many systems together in order to become a huge computing environment - as we are seeing with the Cloud computing model.

As system hardware becomes more powerful in terms of CPU and "in-memory" capability, and operating system virtualization becomes more advanced, we see that most of the physical network infrastructure can be "virtualized" in software with negligible performance degradation.

An excellent example of this capability is the Oracle Solaris 11 Network Virtualization. This allows us to build any physical network topology inside the Oracle Solaris Operating System including virtual network interface cards (VNICs), virtual switches (vSwitches), and more sophisticated network components (for example, load balancers, routers, and firewalls).
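
As a minimal sketch of building such a topology in software (link names are illustrative assumptions), an etherstub acts as a virtual switch and VNICs plug into it:

# Create a software-only virtual switch and attach two VNICs to it
dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 vnic2
dladm show-vnic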

The benefits of using this networking technology include reduced infrastructure cost, since there is no need to invest in additional network equipment.

In addition, infrastructure deployment is much faster, since all the network building blocks are implemented in software rather than hardware.

Although we have the flexibility to build network infrastructure rapidly, we also need to be able to monitor it, benchmark it, and predict future growth. So how can we do this?

The upcoming white paper demonstrates this through three main use cases showing how we can analyze, benchmark, and monitor our physical and virtualized network environments. In addition, it demonstrates the benefits of using the built-in Solaris 11 networking tools.

For further reading see  "Advanced Network Monitoring Using Oracle Solaris 11 Tools"



Thursday Aug 22, 2013

Hadoop Cluster with Oracle Solaris Hands-on Lab at Oracle OpenWorld 2013

If you want to learn how to build a Hadoop cluster using Solaris 11 technologies, please join me at the following Oracle OpenWorld 2013 lab:
How to Set Up a Hadoop Cluster with Oracle Solaris [HOL10182]



In this hands-on lab we will present and demonstrate, through exercises, how to set up a Hadoop cluster using Oracle Solaris 11 technologies such as Zones, ZFS, DTrace, and network virtualization.
Key topics include the Hadoop Distributed File System and MapReduce.
We will also cover the Hadoop installation process and the cluster building blocks: NameNode, a secondary NameNode, and DataNodes.
In addition, we will show how to combine the Oracle Solaris 11 technologies for better scalability and data security.
During the lab, users will learn how to load data into the Hadoop cluster and run a MapReduce job.
This hands-on training lab is for system administrators and others responsible for managing Apache Hadoop clusters in production or development environments.

This Lab will cover the following topics:

    1. How to install Hadoop

    2. Edit the Hadoop configuration files

    3. Configure the Network Time Protocol

    4. Create the Virtual Network Interfaces

    5. Create the NameNode and the Secondary NameNode Zones

    6. Configure the NameNode

    7. Set Up SSH between the Hadoop cluster members (see the sketch after this list)

    8. Format the HDFS File System

    9. Start the Hadoop Cluster

   10. Run a MapReduce Job

   11. How to secure data at rest using ZFS encryption

   12. Performance monitoring using Solaris DTrace
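
For step 7, here is a hedged sketch of the passwordless SSH setup between cluster members; the user and host names are illustrative assumptions, not the lab's actual values.

# On the NameNode zone, as the hadoop user: generate a key pair and authorize it locally
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Copy the public key to each DataNode zone and append it there as well
scp ~/.ssh/id_rsa.pub hadoop@data-node1:/tmp/
ssh hadoop@data-node1 'cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys'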

Register Now


About

This blog covers cloud computing, big data and virtualization technologies
