
Articles from Oracle IT Content & Experience Services Team

Delivery Disaster Recovery for WebCenter Sites

Sunder Thyagarajan
Principal Member of Technical Staff (WCM Architect)

We wanted to share a neat customization that we built at Oracle to tackle disaster recovery for WebCenter Sites Delivery servers.

Challenge:

Set up active-active disaster recovery (DR) for a WebCenter Sites delivery environment.

Solution Summary:

The solution is fairly simple: on the management environment, customize real-time publishing using the hooks that WebCenter Sites provides out of the box. Then, when assets are approved to a single target (e.g. "Live"), they are dual-published to both the delivery environment and the delivery DR environment.

In case of an issue, or a necessary upgrade to our primary delivery infrastructure, our Network Ops team can re-route traffic for www.oracle.com or java.com to the DR hardware in a separate datacenter.

We actually take this one step further: the "Live" publishing target publishes not only to delivery + delivery (DR) but also to an internal-only "QA" environment. We then use this QA environment to regression-test new code releases, configuration changes, and upgrades against real-world content.


Common Questions

What will happen to publishing when one of the configured Delivery environments is down?

Delivery or Delivery (DR) environments may need to be taken down for maintenance. We do not want such a situation to prevent our contributors from updating the website.

If any one of the target environments is down, publishing does not stop; the multi-transporter continues to publish content to the servers that are up during the publish session.

With this solution, contributors can seamlessly continue their work as normal; they don't even need to be informed that an environment is down.

If one environment is down, wouldn't the content between these two environments become out of sync?

Interestingly, the product addresses this problem out of the box. An asset is marked as successfully published only if it was pushed to all the servers configured on the PubTarget, so assets remain in the queue until they have been pushed to every server. Once the Delivery server that was down is restored, the content is synchronized automatically.

Note: Since this is mission-critical, mostly production, infrastructure, the servers are not expected to be down for long. If they are down for an extended period, the publish queue can get flooded with assets. How many assets pile up in the publish queue depends on how active the contributors are, the frequency of publishes, and how long the servers were down.


References / Code Examples

The product documentation on this is really good, with lots of useful examples:

Steps to deploy a custom multi-transporter

  • Deploy the custom multi-transporter code on the Management server

Use the reference code from the product documentation, build a binary, and deploy it on the Management server.
Refer to Section 49.2.5, "Full Code Listing," of the product documentation: http://docs.oracle.com/cd/E29542_01/doc.1111/e29634/realtimehooks.htm#CIHHJJCI
The code in that section works perfectly fine for any simple multi-transporter implementation; the short sketch below illustrates the idea behind it.
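
For orientation, here is a self-contained sketch of the fan-out idea in isolation. It is purely illustrative: the Transporter interface and the class name are hypothetical stand-ins, while the real class extends com.fatwire.realtime.MirrorBasedTransporterImpl and its actual override points are in the 49.2.5 listing.

import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the product's transporter contract, used only
// to illustrate the pattern; the real base class is
// com.fatwire.realtime.MirrorBasedTransporterImpl (see the 49.2.5 listing).
interface Transporter {
    void send(byte[] payload);
}

// A multi-transporter is essentially a fan-out wrapper: it holds one
// transporter per destination and forwards every publish payload to each
// of them within the same publishing session.
class MultiDestinationTransporter implements Transporter {

    private final List<Transporter> targets = new ArrayList<Transporter>();

    void addTarget(Transporter t) {
        targets.add(t);
    }

    @Override
    public void send(byte[] payload) {
        for (Transporter t : targets) {
            t.send(payload); // same payload to each destination
        }
    }
}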

  • Install the transporter by editing the classes/AdvPub.xml file on the management side.


Update the Spring bean definition in AdvPub.xml.

Replace the line:

<bean id="DataTransporter"
      class="com.fatwire.realtime.MirrorBasedTransporterImpl"
      singleton="false">

with:

<bean id="DataTransporter"
      class="[your transporter class]"
      singleton="false">
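
For example, if the class you built in the first step were com.example.MultiDestinationTransporter (a hypothetical name), the edited bean would read:

<bean id="DataTransporter"
      class="com.example.MultiDestinationTransporter"
      singleton="false">

Note that only the class attribute changes; the bean id stays DataTransporter.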

  • Configure the PubTarget as follows
1. In the Destination Address, specify comma-separated destination URLs.

    For example:

  http://deliveryA:9030/cs/,http://deliveryB:9040/cs/

2. In the More Arguments, specify ampersand-separated username, password, and optional proxy information for the additional servers, suffixed with indexes starting at 1.

    For example, with two additional targets (three delivery servers in total):

  REMOTEUSER1=fwadmin&REMOTEPASS1=xceladmin&REMOTEUSER2=fwadminB&REMOTEPASS2=xceladminB

(with proxy information appended for the first additional server)

&PROXYSERVER1=proxy.com&PROXYPORT1=9090&PROXYUSER1=pxuser&PROXYPASSWORD1=pxpass  
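
Putting both fields together for a three-server setup (hostnames and credentials are illustrative; per the suffix logic in the init() code shown later, the first URL uses the target's standard credentials, REMOTEUSER1/REMOTEPASS1 apply to the second URL, and REMOTEUSER2/REMOTEPASS2 to the third):

  Destination Address:
  http://deliveryA:9030/cs/,http://deliveryB:9040/cs/,http://deliveryC:9050/cs/

  More Arguments:
  REMOTEUSER1=fwadminB&REMOTEPASS1=xceladminB&REMOTEUSER2=fwadminC&REMOTEPASS2=xceladminC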


Password Security

Everything so far uses out-of-the-box features and documented examples; here is the small innovation we added to overcome a security issue. Unlike the product documentation's example, our configuration does not populate the MOREARGUMENTS field with a plain-text REMOTEPASS: anything placed in MOREARGUMENTS shows up on the PubTarget inspect screen, where it can be seen by anybody who has access to the Admin interface.

Requirements

  • Protect the user ID and password for the delivery servers
  • Simplify setup/configuration when management and delivery use the same user/password (e.g. the same LDAP servers)

Solution

We introduced a new argument named SAMECREDS as a flag indicating whether the delivery environment uses the same credentials as management. If a delivery environment uses different credentials, we protect its password by storing it encrypted.

Example Config for different credentials (encrypted password)
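
For instance, a MOREARGUMENTS value for two additional targets with their own credentials might look like this (the REMOTEPASS values are illustrative placeholders for encrypted strings; SAMECREDS=0 tells the init() code below to advance the credential suffix and decrypt each additional password):

  SAMECREDS=0&REMOTEUSER1=fwadminB&REMOTEPASS1=k3H9sYx2Lq8=&REMOTEUSER2=fwadminC&REMOTEPASS2=u7Rt1NpQv4A=

When management and delivery share credentials (e.g. the same LDAP servers), the field reduces to SAMECREDS=1 with no REMOTEUSERn/REMOTEPASSn entries: the suffix never advances, so every target reuses the standard credentials.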

Code

Modify the init() method of the default transporter code. The code below simply shows that the MOREARGUMENTS can be read and manipulated before use; feel free to substitute any encryption that meets your security requirements.

/**
 * Do some initialization by parsing out the configuration settings and
 * instantiating a standard HTTP transport to each target.
 */
private void init() {
    if (!initialized) {
        String remoteURLs = getRemoteUrl();
        int count = 0;
        for (String remoteUrl : remoteURLs.split(",")) {
            // Suffix "" selects the target's standard credentials;
            // "1", "2", ... select REMOTEUSERn/REMOTEPASSn from MOREARGUMENTS.
            String suffix = (count == 0) ? "" : String.valueOf(count);
            AbstractTransporter t1 = AbstractTransporter
                    .getStandardTransporterInstance();
            URL url;
            try {
                url = new URL(remoteUrl);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            t1.setRemoteUrl(remoteUrl);
            t1.setHost(url.getHost());
            t1.setUsername(getParam("REMOTEUSER" + suffix));
            if (count != 0) {
                // Additional targets carry an encrypted REMOTEPASS; decrypt
                // it before handing it to the transporter.
                CipherEncrypter decrypter = new CipherEncrypter("EncryptForWCS");
                t1.setPassword(decrypter.decrypt(getParam("REMOTEPASS" + suffix)));
            } else {
                t1.setPassword(getParam("REMOTEPASS" + suffix));
            }
            t1.setUseHttps("https".equalsIgnoreCase(url.getProtocol()));
            t1.setContextPath(url.getPath());
            t1.setPort(url.getPort());
            t1.setProxyserver(getProxyserver());
            t1.setProxyport(getProxyport());
            t1.setProxyuser(getProxyuser());
            t1.setProxypassword(getProxypassword());
            t1.setHttpVersion(getHttpVersion());
            t1.setTargetIniFile(getTargetIniFile());
            transporters.add(t1);
            // With SAMECREDS=1 (same credentials everywhere) the count, and
            // therefore the suffix, never advances; otherwise it does, so the
            // next target picks up its own REMOTEUSERn/REMOTEPASSn pair.
            if (getParam("SAMECREDS") != null && !getParam("SAMECREDS").equals("1")) {
                ++count;
            }
        }
        initialized = true;
    }
    writeLog("Initialized transporters: " + toString());
}
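
As the comments below note, CipherEncrypter is an internal helper that is not shipped with the product. As a purely illustrative stand-in with the same encrypt/decrypt shape, here is a minimal pass-phrase-based AES helper that assumes nothing beyond the JDK. Treat it as a sketch: for real use, harden it to your own requirements (a proper KDF such as PBKDF2, an authenticated mode such as AES/GCM, and sound key management).

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Illustrative stand-in for CipherEncrypter: derives an AES key from a
// pass-phrase and Base64-encodes the result so it can live in MOREARGUMENTS.
public class SimpleCipher {

    private final SecretKeySpec key;

    public SimpleCipher(String passPhrase) throws Exception {
        // Derive a 128-bit AES key from the pass-phrase (sketch only; use a
        // proper KDF such as PBKDF2 in production).
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(passPhrase.getBytes(StandardCharsets.UTF_8));
        byte[] keyBytes = new byte[16];
        System.arraycopy(digest, 0, keyBytes, 0, 16);
        this.key = new SecretKeySpec(keyBytes, "AES");
    }

    public String encrypt(String plainText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return Base64.getEncoder().encodeToString(
                cipher.doFinal(plainText.getBytes(StandardCharsets.UTF_8)));
    }

    public String decrypt(String cipherText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        return new String(cipher.doFinal(Base64.getDecoder().decode(cipherText)),
                StandardCharsets.UTF_8);
    }
}

With a helper like this, you would encrypt each delivery password once, paste the Base64 output into the corresponding REMOTEPASSn entry, and call decrypt() in init() where the code above calls CipherEncrypter.decrypt().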


Authored by: Sunder Thyagarajan, Mark Fincham

Comments (4)
  • Bob Thursday, October 9, 2014

    Thanks for sharing this with the community. Without the password security, the solution works well. I am facing a problem implementing the password security part because I cannot find the CipherEncrypter class anywhere on the WCS classpath.


  • Sunder Thyagarajan Thursday, October 9, 2014

    Bob,

    CipherEncrypter is proprietary encryption logic that cannot be shared. However, you can easily use any custom encryption logic:

    1. Encrypt the password.

    2. Set the encrypted password in the REMOTEPASS entry of MOREARGUMENTS.

    3. Retrieve the encrypted value with:

    String encryptedPass = getParam("REMOTEPASS" + suffix);

    4. Decrypt the value with your custom logic and set it:

    t1.setPassword(decrypt(encryptedPass));

    Hope this helps.


  • Ganesh Narasimhan Tuesday, June 26, 2018
    Hi Sunder,
    We are also trying to implement the same solution for our clustered delivery environment (not an actual cluster sharing the same DB and shared file system, but a new environment). So the questions would be:
    1) Is there any lag between the two publishes?
    2) Does it finish one environment first and then start the second, or does it publish assets one by one to each destination before moving on to the next?
  • Sunder Thyagarajan Monday, March 30, 2020
    Hi Ganesh,

    The solution is atomic: the publish fails if it does not sync both environments. The delay is negligible because both servers are synced as part of the same publishing event; at most a minute or two, depending on the number of assets. Also, cache flushes happen only once assets are published to both environments, so you can safely assume the sync happens at the same time.

    And if you need to take one environment down, I would recommend letting publishing pile up assets and sync once the second environment is back up and running. It works pretty well.