Saturday Nov 15, 2008

CEC Update

CEC 2008 is over, and I will be bringing back good memories and experiences from networking with the attendees.

For Sun, I felt this event was important as a "skill upgrade" for the folks on the services side, be it Sales or Delivery, and I believe this is the right strategy: to remain competitive you need to be competent. In pursuit of this direction, the following steps were taken:

  • the sessions were made more technical,
  • sessions were more closely observed,
  • attendees needed to stick to a single track,
  • attendees were evaluated at the end of the sessions, leading to accreditations, and
  • free tests were provisioned, encouraging the attendees to get certified.

All the sessions went on with vigour till the last day, though the mood was marred to some extent by the announcement of a RIF on the last day.

During my interactions I felt that, for each product feature, there is a need for know-how on the following:

  • What the product feature is
  • How it is implemented (and, if possible, why that way)
  • Why we do something in a particular way, and the implications of that

I resolved that in my future technical posts I will commit to writing in that way, since it would be more useful to people in many ways.

Tuesday Nov 11, 2008

Attending First CEC

I am attending Sun's CEC (Customer Engineering Conference) from Nov 10th to 14th, 2008. This is my first CEC, since I moved to Professional Services Delivery in June 2008.

From yesterday's general session it is evident that Services is going to play a major part in Sun's offerings, and there is great emphasis on the quality of the services that are provided. I am currently enjoying meeting my friends from Monrovia and attending sessions on the Java CAPS track. I am learning a lot, since I am meeting folks from the field and talking to them.

I will be posting more on CEC.

Tuesday Oct 28, 2008

NetBeans 6.5 RC1 out!

NetBeans 6.5 RC1 is now available for download. The information is available at http://www.netbeans.org/community/releases/65/

You can download the same at http://dlc.sun.com.edgesuite.net/netbeans/6.5/rc/.

What it means to me from a Java/SOA perspective is the following:


  • Improved Camelcase code completion

  • Improved Encapsulate Fields Refactoring

  • XML and Schema Tools

  • Quick Search

And it is not limited to these; there are several others! I checked this one out and it seems to be cool.
Try it out today!

Tuesday Sep 16, 2008

Utilizing 64bit JVMs in Java CAPS Integration Server

A very common error observed with the Java CAPS Integration Server is "Out of Memory". What can we do about it?

[Read More]

Common Reasons for Out Of Memory Errors

Out Of Memory errors can happen for the following reasons:

  • Garbage collection issues

  • Orphaned class loaders

    • Thread context classloaders

    • new Thread()

    • Dangling threads

  • Classes kept alive by references such as the following (see the sketch after this list)

    • static variables

    • SQL drivers

    • Commons Logging

    • java.util.logging.Level

    • BeanUtils

etc.
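
As a minimal, hypothetical sketch of the "static variables" case above (not Java CAPS code): a class that keeps adding objects to a static collection keeps them strongly reachable forever, which eventually surfaces as an Out Of Memory error.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a static collection that is only ever added to keeps
// every entry strongly reachable, so the garbage collector can never reclaim
// them and the heap eventually fills up.
public class StaticLeakExample {

    // The static reference is the culprit: it lives as long as the class does.
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void handleRequest() {
        // Pretend each request caches 1 MB of data and never evicts it.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static void main(String[] args) {
        // Run with a small heap (e.g. -Xmx64m) and -XX:+HeapDumpOnOutOfMemoryError;
        // the resulting dump shows StaticLeakExample.CACHE holding most of the heap.
        while (true) {
            handleRequest();
        }
    }
}
```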


We can analyze the problem(s) associated with such errors at the application level by collecting GC details and/or a heap dump:

  • -verbose:gc with -XX:+PrintGCDetails, to observe GC behaviour so that the collector can be tuned or a more suitable GC chosen for the application

  • -XX:+HeapDumpOnOutOfMemoryError, to write a heap dump automatically when the error occurs

  • -Xrunhprof:heap=dump,format=b, to dump the heap via the HPROF agent

  • jmap -dump:format=b,file=heap.bin <pid>, to dump the heap of an already running process
Analyze the heap dump using jhat:

  • jhat -J-mx1024m heap.bin

  • Browse to http://localhost:7000

  • Use built-in or custom (OQL) queries to narrow down leak suspects

  • Identify an object or class in the application

  • List reference chains to see what is keeping it alive

Friday Jul 11, 2008

Running web application under large concurrent user loads


Whenever we talk about load/performance, it typically depends on all the tiers on which the application runs. Hence we need to look at the following things:

  • The nature of the application vs. the hardware on which it runs, because "a machine which appears slower in a single-threaded test will likely be faster in a multi-threaded world."

  • Tune the DB/EIS tier,

  • Tune the application server for the middle tier (where POJOs and SLSBs run), and

  • Tune the web/portlet container where the eVision application runs.

  • Apart from that, look at the application itself, since it can have some really "messy code" that takes a long time to load a page or loads large amounts of data into memory.



Ideally, in such cases we should think of load balancing the application under consideration in a cluster. Once you do this, there are a few other things you need to look at:

  • JVM options of the application server: a set of properties like heap size, garbage collection settings, etc., depending upon the application's nature.

  • Increase the pool size of the database connection pool (from the connector/JCA perspective) within the application server.
  • Do DB/EIS tuning: tuning database performance is not a simple task, and it depends on application-specific requirements, the operating system and the target hardware. There is no single approach. The goal is to avoid obvious slowdowns and balance the available resources (I/O bandwidth, memory and CPU).
  • From the web/portal server perspective, we need to tune the:

    • Web container

    • Access Manager (manages sign-on) – this can have issues if it runs behind a firewall or with certain network settings.

    • Directory Server
    • Run the tuning scripts that come by default with the portal

  • Thread pools: the Java Virtual Machine (JVM) can support many threads of execution at once. To help performance, Access Manager, Portal Server and the Application Server each maintain one or more thread pools. Thread pools allow you to limit the total number of threads assigned to a particular task. When a request is passed into the web container from a browser, it flows through several thread pools. A thread pool contains an array of WorkerThread objects; these objects are the individual threads that make up the pool. The WorkerThread objects start and stop as work arrives for them. If there is more work than there are WorkerThreads, the work backlogs until WorkerThreads free up. Assigning an insufficient number of threads to a thread pool can cause a bottleneck in the system that is hard to see. Assigning too many threads is also undesirable, but normally not critical. These pools can be managed using the admin console, a provided script, or any other means given by the vendor (a generic sketch follows this list).
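
As a minimal, generic sketch (not the application server's own pool implementation, and the sizes are made-up assumptions): a bounded pool with a backlog queue behaves much like the WorkerThread pools described above.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) {
        // 10 worker threads; up to 100 requests wait in the backlog queue.
        // Too few workers -> requests queue up (a hidden bottleneck);
        // too many -> wasted memory and context switching.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10,                                 // core and max pool size
                60, TimeUnit.SECONDS,                   // idle keep-alive
                new ArrayBlockingQueue<Runnable>(100)); // bounded backlog

        for (int i = 0; i < 50; i++) {
            final int requestId = i;
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("handled request " + requestId);
                }
            });
        }
        pool.shutdown();
    }
}
```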

Wednesday Jul 09, 2008

Clustering in Glassfish

Clustering and Load balancing in JBI Components


The aim of this note is to peek at Glassfish's implementation of clustering and load balancing, and to formulate a strategy for making the JBI components cluster aware.

The approach I took while writing this note is to give an overview of clustering and the components involved in making clustering possible, and also to discuss the strategies the AppServer took while making its components cluster aware.

When we speak of the AppServer, the following three components come to mind:

   1. Web Server – catering to HTTP requests
   2. EJB container – catering to ORB requests, and
   3. MQ – catering to JMS requests

When we observe Glassfish, each of these components has its own strategy for clustering and load balancing. Each strategy owes its implementation either to legacy (meaning it was already there, as in the case of the HTTP load balancer) or to needs arising for that specific protocol in a given scenario. We can also observe variance in how the same protocol is implemented with different strategies when we are not the owners, as in the case of JMS.

I tried to explain the strategy for each of the above protocols with respect to Glassfish, and concluded that each protocol must come up with its own strategy; it cannot be generalized for all protocols, because of their sheer variety and the needs of each protocol. Though we can generalize certain aspects and bring forth the LCM of all the protocols in a protocol-agnostic way, the time and effort might not be worth it.

I also tried to discuss the different strategies we can adopt for making the components cluster aware and for having load balancing and failover facilities in a generic way. I also tried to explore the API through which a BC (binding component), while writing cluster-aware code, can get the details and info it requires about the cluster.


AppServer Cluster environment:


Each server instance, whether it is standalone, DAS or clustered, will contain a JBI runtime. The DAS will contain facade MBeans that communicate with the cluster instances or standalone instances based on the target information.


The basic JBI runtime will not have any special capability to handle clustering. The components on top of it will be aware of EE clustering, so it will be the component's responsibility to work in a clustered manner. For example, a BPEL process doing correlation will need the messages to be routed to the same instance; however, this will be done in a way specific to the component. For example, a BPEL engine deployment to a cluster will use a group ID to identify the cluster, which can be the cluster name.


Load balancing to the EE cluster will be handled in a component-specific way. For example, the SOAP BC will use the HTTP load balancer, and the JMS BC will use inbound JMS load balancing.


Resources:
Applications reference external resources such as JDBC database resources and their associated connection pools, CMP persistence managers, JMS resources, JavaMail resources, custom JNDI resources, and connector resources. Like an application, each resource has a JNDI name which is unique across the domain. An un-clustered server instance or cluster can reference zero or more resources, and a resource can be staged, in which case it is referenced by no server instances.

Repository:
At the highest level, Glassfish configuration information (including Java EE applications and resources) for a domain is stored in a Central Repository that is shared by all instances in the domain. The Central Repository is written to by a single entity – the DAS (Domain Administration Server). All applications and resources deployed to a domain are stored in the central repository. They are also cached locally at each instance.

Each server instance maintains its own local cache of the central repository that serves two important purposes: to allow the instance to read its configuration in the absence of the DAS and for performance purposes (e.g. class loaders reading from the local file system are much more efficient). The server instance must synchronize its state with that of the Central Repository in two cases: incrementally as configuration changes are made to the Central Repository and at instance startup time (e.g. because an instance might miss configuration changes when it is down).

How does AppServer handle Clustering and Load Balancing?


This understanding is needed so that we can be aware of how things happen on the AppServer side and how they can be extrapolated to the JBI environment.

For the AppServer there are two perspectives with respect to clustering:



   1. Administrative – from this standpoint a cluster is a bunch of homogeneous machines/server instances managed by the DAS.
   2. Per instance – this is for the needs of an individual instance.



Why do we need a load balancer at all? The load balancer the AppServer has is only an HTTP load balancer, which in fact works only within the Web Server component; the AppServer has only a hook for it via a proxy (this portion is implemented by Grizzly in the AppServer). The LB is a native LB for performance reasons. The functions of this LB are:


a. Load balancing the requests.
b. Routing the request – this mechanism works via a context root to port mapping, which helps in routing the request.
c. Maintaining session stickiness – the LB acts as a façade for incoming HTTP requests and maintains a session store in HADB for performance and optimization.

A logical question then is: how does the EJB container handle clustering and the associated issues? We should remember that this load balancer is only for the HTTP protocol and not for any other protocol like JMS/IIOP, etc.

Let's look at IIOP/ORB:
The cluster-aware ORB runtime handles requests from the app client. Whenever there is a request from the client, the runtime checks the client's cluster configuration information; if the client does not have up-to-date information, the runtime sets it. The app client has built-in routing capability to route the request, and stickiness (for SFSBs, servlets, etc.) for performance and session maintenance requirements.

JMS: Cluster awareness is needed only in the inbound case, since outbound communication to the EIS can happen without any issue (I will talk about the transactions involved later). For inbound JMS we have two cases (a hedged sketch follows the list):


1. Sun's own MQ: in this case, when the MDB endpoint gets activated, it is set with a cluster-aware setting (ClusterName+InstanceName+MDBName). This acts as a hook to the instance on which the MDB is listening for JMS, and helps the MDB maintain stickiness by way of a Local Delivery Profile (LDP).
2. Third-party MQs supported via the generic JMSJCA: this is achieved by maintaining a timestamp-based selector with respect to each server instance.
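
As a minimal, hypothetical illustration of the selector idea (the property name instanceStamp and the wiring below are my own assumptions, not JMSJCA's actual scheme): each instance consumes only the messages tagged for it, using a standard JMS message selector.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Illustration only: a per-instance JMS selector. The "instanceStamp"
// property is a made-up assumption; JMSJCA's real scheme differs.
public class SelectorSketch {

    public static MessageConsumer consumerFor(ConnectionFactory factory, Queue queue,
                                              String instanceStamp) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Only messages whose "instanceStamp" property matches this instance
        // are delivered to this consumer.
        MessageConsumer consumer =
                session.createConsumer(queue, "instanceStamp = '" + instanceStamp + "'");
        connection.start();
        return consumer;
    }
}
```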

How are the transactions handled?
Every server instance has its own transaction manager. The TM maintains the list of transactions being handled, and also maintains the log pertaining to each transaction. If an application instance is killed during a transaction, the TM pertaining to that server instance will take care of that transaction. Since transactions are atomic in nature, resource recovery (commit/rollback) is not affected. There will not be a case where a transaction started by one instance has to be dealt with by another instance.

What modules are available for supporting Clustering in AppServer?


1. Group Management Service (GMS): GMS is an independent software module from Project Shoal (https://shoal.dev.java.net), which may be embedded and started by processes that require runtime cluster communications and group management services such as:


1. Static group membership composition change notifications:

   a. Member Added notification
   b. Member Removed notification

2. Dynamic group membership composition change notifications:

   a. Join notifications
   b. Failure Suspicion notifications
   c. Failure notifications
   d. Planned Shutdown notifications

3. Recovery-oriented support services:

   a. Delegate recovery instance selection and notification
   b. Protecting recovery operations through failure fencing

4. Messaging service API for group and member-to-member messaging
5. Distributed caching of lightweight state and recovery states


GMS is an in-process component that can be accessed by other components within the process to receive events occurring in a group of distributed processes. GMS provides the following features:


 a.    Failure Notifications
b.    Recovery member selection and corresponding notifications
c.    Failure Fencing
d.    Member Joins and Planned Shutdown Notifications
e.    Support for administrative configurations
f.    Group, One-to-Many and One-To-One Messaging
g.    A Distributed State Cache implementation to store data in a shared cache that lives in each instance's GMS module.

Examples of GMS clients in the application server include the Timer Service, the Transaction Service, the EJB Container for Read-Only or Read-Mostly beans' cache update notifications, the IIOP Failover Loadbalancer, the In-Memory replication module and the instance that serves as the administration server for reporting cluster health.  

GMS provides a simple, easy-to-use API to its clients for accessing and consuming its functionalities. GMS provides a Group Communication Service Provider Interface for group communications provider technologies to be integrated. In our implementation, we use a Service Provider implementation based on JXTA peer-to-peer technology to construct the desired group communications infrastructure.  
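
As a purely hypothetical sketch (the interface and method names below are my own invention, not Shoal's actual API, which is documented at https://shoal.dev.java.net) of the kind of callbacks a GMS client registers for:

```java
// Hypothetical shape only – Shoal's real API differs; this simply mirrors the
// notification types listed above.
public interface GroupEventListener {

    void memberJoined(String memberName);        // join notification

    void memberSuspected(String memberName);     // failure suspicion notification

    void memberFailed(String memberName);        // failure notification

    void memberShutdown(String memberName);      // planned shutdown notification

    // Recovery-oriented support: this member has been selected to perform
    // recovery on behalf of the failed member (protected by failure fencing).
    void recoverySelected(String failedMemberName);
}
```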

2. Load Balancer module:
The load balancer component of the application server is a web server plug-in which distributes HTTP requests to the application server instances. Currently it supports only a simple round-robin load balancing policy.

AppServer study conclusion:
The above discussion shows that the AppServer provides clustering/load balancing for HTTP, JMS and IIOP, each in its own way. It also shows that the needs for clustering and load balancing differ from protocol to protocol, and each individual protocol needs to take care of those needs from its own perspective.


Why Interoperability? And what does it take to be really interoperable?

Why Interoperability?


Organizations invest time and money in building information systems over a period of time. These systems are built with the best hardware and software available at that time. Due to socio-economic-political changes and new opportunities, organizations are constantly challenged to keep themselves updated, technology-wise as well. If we want the existing solutions to run as is (which the client obviously wants, having invested time and money) and yet have these disparate systems talk to each other seamlessly, we need interoperability among the systems.


Whenever we think of integration solutions for heterogeneous systems, we tend to think of SOA or Web Services. We also assume that these solutions are a panacea for all the ills of integrating heterogeneous systems. But it takes more than just WS for software systems to be truly interoperable. Let's examine what it takes to be interoperable.

Let's get to the definition of interoperability:

ISO/IEC 2382 Information Technology Vocabulary defines interoperability as:

“The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units."

From this definition we can deduce the following areas or levels at which systems need to be interoperable:



  • Network and Infrastructure Layer – for interoperability amongst different protocols like TCP/IP, DNS, DHCP/BOOTP, AppleTalk, 802.1x, NFS/NIS (different OSes have different tools for making file systems accessible), etc.

  • Data Layer – for data access: JDBC, ODBC, OLE DB, ADO, ADO.NET, etc. Apart from that, we are talking about file formats like XML and its different organizational variants (the XML serializers and de-serializers available on different platforms make these documents interoperable).

  • Program Layer – for taking care of program-level interoperability. We create services/programs that deal with the data in the Data Layer. These use the following channels:

    • Technology-specific binary formats – RMI/IIOP in the case of Java, .NET Remoting or COM interop in the case of MS technologies. These can deal with stateful data and have good performance numbers, but they have the caveat of being proprietary: the other end also needs to be Java in the case of RMI, or an MS-based COM/.NET object in the case of .NET Remoting and COM interop. There are some tools like J-Integra for interoperability between the MS and Java worlds, and IIOP is supported by both.

    • WS-* standards-based services for exposing functionality. These are built on several sets of standards (security, transactions, co-ordination, trust, etc.) for building robust, interoperable heterogeneous systems. The WS stack is supported by both the Java and MS worlds (a small sketch follows this list).

    • REST-style services using HTTP. These deal with stateless data.
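
As a minimal sketch of the WS channel mentioned above (using the standard JAX-WS annotations; the service name and operation are my own example, not something from the post):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// A tiny JAX-WS service: the WSDL generated from this class is what makes the
// operation callable from any WS-capable platform (Java, .NET, and so on).
@WebService
public class QuoteService {

    @WebMethod
    public double quoteFor(String partNumber) {
        return 42.0; // placeholder business logic
    }

    public static void main(String[] args) {
        // Publishes the endpoint; the WSDL is served at the URL with ?wsdl appended.
        Endpoint.publish("http://localhost:8080/quote", new QuoteService());
    }
}
```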


  • Process Layer – this layer builds on top of the Program Layer and helps in orchestrating the programs or services. The need for this layer arises from the geographical/temporal dispersion of the business and its systems, and also from rising business demands calling for:

    • Asynchronous availability of the systems
    • High availability
    • Transactional behaviour across the heterogeneous systems
    • Reliability


These requirements are met by the industry in terms of:


    • Messaging infrastructure (support for queues and topics). Most MQ vendors have adopted the JMS standard and provide an API for accessing the messaging infrastructure.
    • Mainframe infrastructure – these are of the BLI/SLI/Data and RPG types. These systems provide for unlocking existing production systems, but at the same time require callable interfaces. Again, most of these systems provide a Java API for access.
    • Business process infrastructure – this has evolved as open standards like BPMN and BPEL4WS. This process infrastructure is supported with adapters for connecting to the data channels, and is typically supported in vendors' integration servers, like Sun's Glassfish with Open-ESB, MS BizTalk Server, etc.


  • Security Layer – this layer integrates the users with the systems in a secure way; we can also call it the identity layer. Here we see AAA (Authentication, Authorization and Accounting) for inter/intra-organization use:


    • Authentication: Kerberos / directories / X.509 / SSL/TLS, etc.
    • Authorization: ACL systems / RBAC systems
    • Here we further talk about SSO (Single Sign-On) and other interoperable standards like SAML, WS-Federation, the Username Token Profile, CardSpace, etc. The WS-Security standards also deal with message-level security, for maintaining the integrity and non-repudiation of the message in transit. IDM (Identity Management) suites come into the picture at this layer, and all the major vendors like Sun and MS have their own IDM suites offering comprehensive solutions for organizational needs.


  • Management Layer – when we think of all the above infrastructure and disparate systems talking to each other for different needs, we need a way to manage the offered solution from some console. Here comes the need for management protocols like SNMP, WMI, WBEM/CIM, JMX, etc., for managing the interfaces and configurations the applications expose. Every integration vendor offers some management connector for deployed applications.

So when we look for a truly interoperable system, we need to keep all six layers in mind during system design. Proper care for all of these will ensure that the systems are not only interoperable but at the same time scalable and future-proof with respect to technology. In my next post, let's examine some of these layers from the JCAPS perspective and how JCAPS offers interoperability.


Wednesday Jun 11, 2008

JCAPS 6.0 Released!!!!

I read some of the reviews, some skeptical, some encouraging, with respect to customers, licensing charges, open source, etc.


But from a JCAPS/EAI/SOA developer's perspective, what are the exciting and encouraging points? Let's have a peek.


  • Design Time:

    • All the old editors are available in the latest version too.

    • Old projects from 5.0.5 onwards can be migrated to 6.0 with the click of a mouse, without any changes.

    • Build and deploy times come down significantly, with support for the latest JDK.

    • The power of having Glassfish or the Sun Application Server (a.k.a. the IS) right on the Services tab of NetBeans.

    • The power to manage Application Server resources from the NetBeans IDE.

    • You now have the unified dev environment of NetBeans 6.0. This gives the power of developing other types of applications along with CAPS applications.

  • Runtime:

    • The power of the Glassfish Application Server.

    • The Open-ESB container works within Glassfish, giving the power of seamless communication with your existing applications. (You just need a facade of WS endpoint(s) for the existing applications to talk to the Open-ESB based composite applications.)

    • We can bundle the old CAPS-generated EAR file along with the new service units of the Open-ESB components in a Service Assembly, and deploy it to the Glassfish Application Server.

    • You will not have a separate UDDI server; rather, it comes bundled with the Application Server as one more WAR file. So in effect we have one installation less.

    • Enterprise Manager still comes as a different server which needs to be started separately.

  • The Repository is left as is. We can upload the SAR files and get them installed in NetBeans 6.0. We also have the flexibility of having alternative storage for the project artifacts from within the IDE, instead of using separate clients like CVS/SVN, etc., since NetBeans provides alternatives for version control styles.

  • The installation gives the power of having all the NBMs pre-installed in NetBeans, so that we do not need to connect to the repository during the first run. This considerably reduces the installation time of JCAPS.

  • This release has one more advantage from the perspective of adapters: they are now available "outside the JCAPS environment". This feature alone has a lot of other implications, like:

    • Developers of small applications can now have just a single adapter and build their POJOs/EJBs/MDBs around it.

    • This gives more power with respect to Open-ESB based integration. What I mean here is that Open-ESB components are XML/XSD based, so there is no provision to use the finer APIs. Using these adapters bridges the gap of finer API access.

    • Also, from the Application Server standpoint, deploying a single adapter allows a single pool to be utilized by many J2EE components, whereas if we bundle the adapters as RAR files, the Application Server needs to create one connection pool for each RAR file, consuming more system resources.


I will be posting more on JCAPS.


Monday Nov 05, 2007

Running Java code under a different locale

We need to be extra cautious while writing "truly platform-neutral code". We ran into an issue while running on a platform other than Windows and in a locale other than English (US). I have tried to describe the issue and how I fixed it.
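
The details of the issue are behind the Read More link. As a generic illustration (my own example, not necessarily the issue described in the post) of how locale-sensitive defaults can bite otherwise portable code:

```java
import java.util.Locale;

public class LocaleGotcha {
    public static void main(String[] args) {
        // String.format with the default locale uses that locale's decimal
        // separator: "3.14" under en_US but "3,14" under many European locales,
        // which breaks downstream parsers that expect a dot.
        System.out.println(String.format("%.2f", Math.PI));

        // Safer: pin the locale when the output must be machine-readable.
        System.out.println(String.format(Locale.US, "%.2f", Math.PI));

        // Another classic: case conversion is locale-sensitive. In the Turkish
        // locale, lower-casing "I" yields a dotless ı rather than "i".
        System.out.println("TITLE".toLowerCase(new Locale("tr", "TR")));
    }
}
```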

[Read More]

Monday Jul 16, 2007

Performance in FCAPS


The 'P' in FCAPS is a very strong Quality of Service (QoS) measure for any networking environment, and it means a lot from the Service Level Agreement (SLA) perspective since it involves money. Let's see in this post what parameters comprise "P" and what an NMS would require from this perspective.


Network bandwidth, or performance, depends upon the traffic or payload being carried in the network. There are many factors responsible for performance. Each device or network element has a certain set of characteristics which affect the bandwidth, and these characteristics depend upon some input criteria. The mechanism of optimizing the input parameters of a device so that the specified output criteria are met is called throttling. This can mean different things for different devices and different parameters, whether hardware or software. Let's look at different kinds of throttling:


Bandwidth throttling is a method of ensuring that a bandwidth-intensive device, such as a server, will limit ("throttle") the quantity of data it transmits and/or accepts within a specified period of time. Bandwidth throttling helps provide quality of service (QoS) by limiting network congestion and server crashes.


A server, such as a web server, is a host computer connected to a network, such as the Internet, which provides data in response to requests from client computers. Understandably, there are periods when client requests may peak (certain hours of the day, for example). Such peaks may cause congestion of data (bottlenecks) across the connection, or cause the server to crash, resulting in downtime. In order to prevent such issues, a server administrator may implement bandwidth throttling to control the number of requests the server responds to within a specified period of time.


When a server using bandwidth throttling has reached the allowed bandwidth set by the administrator, it will block further read attempts, usually moving them into a queue to be processed once the bandwidth use reaches an acceptable level. Bandwidth throttling will usually continue to allow write requests (such as a user submitting a form) and transmission requests, unless the bandwidth continues to fail to return to an acceptable level.

Likewise, some software, such as peer-to-peer (P2P) network programs, have similar bandwidth throttling features, which allow a user to set desired maximum upload and download rates, so as not to consume the entire available bandwidth of his or her Internet connection.

CPU throttling refers to a series of methods for reducing power consumption in computers by lowering the clock frequency. Other methods include reducing the supply voltage and the capacitance.

I/O throttling: input/output throttling, a technique used to handle memory processing more efficiently. During low-memory conditions, a system will slow down the processing of I/O memory requests, typically processing one sequence at a time in the order the requests were received. I/O throttling slows down a system, but typically prevents it from crashing.
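
As a minimal, generic sketch of how the bandwidth-throttling idea described above is often implemented (a token bucket; the rates and the whole class are illustrative assumptions, not how any particular server or network element does it):

```java
// Illustrative token bucket: a sender may transmit a chunk only if enough
// "tokens" (bytes of allowance) have accumulated; otherwise it must wait.
public class TokenBucketThrottle {

    private final long bytesPerSecond;   // sustained rate limit
    private final long burstBytes;       // maximum burst size
    private double tokens;               // current allowance in bytes
    private long lastRefillNanos = System.nanoTime();

    public TokenBucketThrottle(long bytesPerSecond, long burstBytes) {
        this.bytesPerSecond = bytesPerSecond;
        this.burstBytes = burstBytes;
        this.tokens = burstBytes;
    }

    /** Blocks until the caller may send the given number of bytes (assumed <= burstBytes). */
    public synchronized void acquire(int bytes) throws InterruptedException {
        while (true) {
            long now = System.nanoTime();
            tokens = Math.min(burstBytes,
                    tokens + (now - lastRefillNanos) / 1e9 * bytesPerSecond);
            lastRefillNanos = now;
            if (tokens >= bytes) {
                tokens -= bytes;
                return;
            }
            Thread.sleep(10); // back off briefly until the bucket refills
        }
    }
}
```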

When the NMS wants to tackle the throttling issue, it should serve the following purposes:

  • Determine problems, if any, before they affect services

  • Maximize network utilization

  • Pre-empt the occurrence of congestion

  • Demonstrate compliance with the agreed SLAs (Service Level Agreements)

  • Indicate when extra network investment is needed.


Requirements for such a system:

  • Receive the generated NE data asynchronously

  • Proactively retrieve data from the NE, such as bandwidth consumption per interface

  • Configure intelligent NEs to give out this data as an alarm/trap

  • Introduce mediation to produce sanitized details for downstream use by third-party applications

  • Aggregate separate performance data records of nodes / interfaces / links / LSPs / multi-service cross-connections – Ethernet to MPLS, FR over ATM, etc.

  • Build policies in a predefined format, and poll device/network views

  • Correlate the aggregated data with the associated managed objects, like the number of IP packets carried by an LSP end to end, the number of Ethernet frames forwarded by an LSP end to end, the number of cells carried or dropped by an ATM switch, etc.

  • Generate reports from such data on the utilization of managed objects like interfaces, nodes, etc., the difference between actual and planned loads, real-time and historical SLA conformance, etc.

  • Detect topology issues, like a congested link or the traffic flow in a circuit, which have serious implications for network performance.

  • Maintain database tables to this effect at the NE level and at the network level; this is essential for producing such data and reports.


In my next post I will delve further into the strategies from the NMS perspective.


Friday Jul 06, 2007

FCAPS of NMS

We shall continue from the previous post, exploring the FCAPS model of network management.


The International Organization for Standardization (ISO) network management model defines five functional areas of network management.

  • Fault Management:

    • Detect, isolate, notify, and correct faults encountered in the network, e.g. whether some device is down.

    • Increases Network reliability and effectiveness

    • Increases productivity of the users

  • Configuration Management:

    • Configuration aspects of network devices such as configuration file management, inventory management, and software management.

    • Allows rapid access to configuration information

    • Facilitates remote configuration and provisioning

    • Up-to-date inventory of the Network elements

    • It talks about issues like speed, trunking, STP state etc.

  • Performance Management:

    • Monitor and measure various aspects of performance so that overall performance can be maintained at an acceptable level.

    • Reduces network overcrowding and inaccessibility

    • A very good measure of QoS

    • With proactive utilization of the data generated here, the NMS can prevent performance issues

    • It gives an account of metrics like CPU utilization, free memory utilization, broadcast rates, interference in the case of WLANs, etc.

  • Security Management:

    • Provide access to network devices and corporate resources to authorized individuals.

    • Builds user confidence

    • Protects the network from malicious attacks

    • This takes into consideration different ways of Security like RADIUS, ACL, Signature, Encryption etc.

  • Accounting Management:

    • Usage information of network resources for establishing metrics, checking quotas, determining costs, billing users, etc.

    • Measures and reports accounting information based on individual groups and users

    • Internal verification of third party billing for usage.

Further exploration of NMS in next post.


Monday Jul 02, 2007

Network Virtualization in open-esb (1)

When we talk about Network Management, we fundamentally tend to talk at three levels:

  • Device / Element level
  • Group / View Level
  • Network as a whole
SNMP and NetConf are the two protocols we normally discuss for Network Management. In this note I compare both protocols, with their pros and cons, and suggest ways of looking at them differently. In my future writings I will delve into how we can achieve virtualization of the network and the integration of the same into open-esb.

In this article I tried to explore NetConf as an alternative to SNMP, and how we can achieve this with a paradigm shift with respect to current network management solutions.

[Read More]

Leadership – some random thoughts

While reading something on leadership, I stumbled upon a paper by Lawrence Rabiner. It contained some thoughts, from his experience, on being a successful engineer. I liked it quite a lot and thought I should share the crux of it.

[Read More]

Thursday Jun 21, 2007

Tips on writing Better

One of the objectives I had for my SEED term was improving my communication skills, both speaking and writing. While searching the net for ways to improve, I stumbled on some tips.

I hope you like them!

About

I was part of Sun R&D on Java CAPS and later Glassfish ESB. I moved from R&D to Consulting, and am currently working as a Solution Architect in Oracle Consulting Services (India). During the Sun period I shared my experience with Java CAPS and other technologies; now, in the Oracle world, I share my experiences with the Oracle FMW product line as well as other Oracle technologies and products.
