Sunday May 29, 2016

Autosuggest generates huge amounts of DB transactions and severely impedes Indexer performance

I came upon an interesting issue last week. I had an 11.1.1.8.0 content server configured with an ADF UI domain and ~850,000 documents. Users were reporting that new documents checked in through the interface or through Desktop Integration Suite (DIS) - Windows Explorer Integration (WEI) were not immediately available. In fact, they were taking far too long, at times up to 15 minutes, to become available.

Monitoring the indexer made it clear that the indexer thread, which is expected to fire on a check-in/update or every 5 minutes by default, was not running. I was able to start the Automatic Update Cycle manually, though, and it would pick up the documents in the queue and index them immediately. It became pretty evident that something was holding back the indexer thread. There was also a general slowness in search query performance.

The content server systemdatabase traces were reporting massive numbers of database transactions against the CacheStore table, especially for Auto-Suggest index update.documents.doriginalname.

>systemdatabase/7 05.25 12:21:45.396 Auto-Suggest index update.documents.doriginalname (start) SELECT dCacheValue FROM CacheStore WHERE dRegionName='autosuggestindexprimary' AND dCacheKey='autosuggestindexprimary.documents.doriginalname:OccurrenceStorage.2E85C9E57D12A7E9182954E57BC0707C'
>systemdatabase/6 05.25 12:21:45.398 Auto-Suggest index update.documents.doriginalname 2.51 ms. SELECT dCacheValue FROM CacheStore WHERE dRegionName='autosuggestindexprimary' AND dCacheKey='autosuggestindexprimary.documents.doriginalname:OccurrenceStorage.2E85C9E57D12A7E9182954E57BC0707C'[Executed. Returned row(s): true]
>systemdatabase/7 05.25 12:21:45.399 Auto-Suggest index update.documents.doriginalname (start) Executing PreparedStatement (UPDATE CacheStore SET dCacheValue=?, dCreateOrUpdateTime=?, dEntryStatus=?, dAutoExpiryTime=? WHERE dRegionName=? AND dCacheKey=?)
>systemdatabase/6 05.25 12:21:45.403 Auto-Suggest index update.documents.doriginalname 3.72 ms. Executing PreparedStatement (UPDATE CacheStore SET dCacheValue=?, dCreateOrUpdateTime=?, dEntryStatus=?, dAutoExpiryTime=? WHERE dRegionName=? AND dCacheKey=?)[Executed. 1 row(s) affected.]

The content server system logs were also reporting the following message frequently: 

!csSubjectMonitorStop!csUnableToLoadSubject,idccacheevent-ucm-persistent.autosuggestindexprimary,intradoc.server.cache.IdcCacheEventSubjectCallback!csIdcCacheNotificationError!syJavaExceptionWrapper,java.io.IOException: ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-22924: snapshot too old
!syJavaExceptionWrapper,java.sql.SQLException: ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-22924: snapshot too old

The database was generating huge volumes (gigabytes) of redo logs within hours. The following SQL statements were being executed millions of times:

UPDATE CacheStore SET dCacheValue=:1 , dCreateOrUpdateTime=:2 , dEntryStatus=:3 , dAutoExpiryTime=:4 WHERE dRegionName=:5 AND dCacheKey=:6
SELECT dCacheValue FROM CacheStore WHERE dRegionName=:"SYS_B_0" AND dCacheKey=:"SYS_B_1"

The AutoSuggest feature is required for the ADF UI domain to work, so the AutoSuggestConfig component on the content server cannot be disabled. However, you can disable the auto-suggest activity by adding the following configuration entry to your config.cfg file and restarting the content server.

EnableAutoSuggest=0

Sure enough, after making the change, the system became much more responsive and the indexer thread fired as expected. This, however, disabled all auto-suggest type-ahead fields, such as the User ACL fields in the ADF UI. There is another configuration variable that can be used to disable auto-suggest indexing of specific fields. As the issue is with the update.documents.doriginalname field, adding the following configuration entry to your config.cfg file and restarting the content server also resolves the problem without disabling auto-suggest entirely.

# EnableAutoSuggest=0
DisabledAutoSuggestFields=table=Documents:fields=dOriginalName

I would highly recommend that you read and implement the guidelines in this excellent note by my colleague from Oracle Support, Cordell Melgaard, if you plan to use the ADF UI and auto-suggest: What is AutoSuggest and How To Tune It? (Doc ID 1938996.1)

Monday Aug 24, 2015

Oracle WebCenter Content: FullText Search Examples

Search Type | I want to search for… | Content Server FullText Query
WORD MATCH | All documents with the word CACTUS in them | <ftx>CACTUS</ftx>
PHRASE MATCH | All documents with the phrase "CACTUS BUTTONS" in them | <ftx>"CACTUS BUTTONS"</ftx>
AND | All documents with both words CACTUS and BUTTONS in them | <ftx>CACTUS BUTTONS</ftx>
    | <ftx>CACTUS AND BUTTONS</ftx>
AND | All documents with all the words CACTUS, BUTTONS, LUXURY, and AMULET in them | <ftx>CACTUS BUTTONS LUXURY AMULET</ftx>
    | <ftx>CACTUS AND BUTTONS AND LUXURY AND AMULET</ftx>
AND (PHRASE) | All documents with both phrases "CACTUS BUTTONS" and "LUXURY AMULET" in them | <ftx>"CACTUS BUTTONS" "LUXURY AMULET"</ftx>
    | <ftx>"CACTUS BUTTONS" AND "LUXURY AMULET"</ftx>
OR | All documents with either word CACTUS or BUTTONS in them | <ftx>CACTUS,BUTTONS</ftx>
    | <ftx>CACTUS OR BUTTONS</ftx>
OR | All documents with any of the words CACTUS, BUTTONS, LUXURY, or AMULET in them | <ftx>CACTUS,BUTTONS,LUXURY,AMULET</ftx>
    | <ftx>CACTUS OR BUTTONS OR LUXURY OR AMULET</ftx>
OR (PHRASE) | All documents with either phrase "CACTUS BUTTONS" or "LUXURY AMULET" in them | <ftx>"CACTUS BUTTONS","LUXURY AMULET"</ftx>
    | <ftx>"CACTUS BUTTONS" OR "LUXURY AMULET"</ftx>
NOT | All documents without the word CACTUS in them | <ftx>-CACTUS</ftx>
    | <ftx>NOT CACTUS</ftx>
NOT, AND | All documents with none of the words CACTUS, BUTTONS, LUXURY, or AMULET in them | <ftx>-CACTUS -BUTTONS -LUXURY -AMULET</ftx>
    | <ftx>NOT CACTUS AND NOT BUTTONS AND NOT LUXURY AND NOT AMULET</ftx>
XOR (EOR) | All documents with only one of the words CACTUS or BUTTONS, and not the other, in them | <ftx>(CACTUS,BUTTONS) (-CACTUS,-BUTTONS)</ftx>
    | <ftx>(CACTUS OR BUTTONS) AND (NOT CACTUS OR NOT BUTTONS)</ftx>
    | <ftx>(CACTUS -BUTTONS) OR (BUTTONS -CACTUS)</ftx>
    | <ftx>(CACTUS AND NOT BUTTONS) OR (BUTTONS AND NOT CACTUS)</ftx>
WILDCARD | All documents with the word CACTUS or any word starting with CACTUS | <ftx>CACTUS*</ftx>
    | <ftx>CACTUS%</ftx>
WILDCARD | All documents with the word CACTUS or any word ending with CACTUS | <ftx>*CACTUS</ftx>
    | <ftx>%CACTUS</ftx>
WILDCARD | All documents with the word CACTUS or any word containing CACTUS | <ftx>*CACTUS*</ftx>
    | <ftx>%CACTUS%</ftx>
WILDCARD | All documents with 8-letter words starting with CACTUS | <ftx>CACTUS??</ftx>
WILDCARD | All documents with 6-letter words ending with FORM | <ftx>??FORM</ftx>
ESCAPE CHARACTERS | All documents with the keyword NEAR | <ftx>{NEAR}</ftx>
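These <ftx> expressions can also be submitted programmatically. Below is a minimal sketch (Python) of building a GET_SEARCH_RESULTS URL that carries a full-text query; the hostname and port are hypothetical placeholders for your own content server.

```python
from urllib.parse import urlencode

def fulltext_search_url(base_url, ftx_query, count=20):
    """Build a GET_SEARCH_RESULTS URL that carries a full-text query.

    The <ftx>...</ftx> expression goes into QueryText; base_url points
    at the content server's /cs/idcplg endpoint.
    """
    params = {
        "IdcService": "GET_SEARCH_RESULTS",
        "QueryText": ftx_query,
        "ResultCount": count,
    }
    return base_url + "?" + urlencode(params)

# Find documents containing the phrase "CACTUS BUTTONS"
url = fulltext_search_url("http://wcchost:16200/cs/idcplg",
                          '<ftx>"CACTUS BUTTONS"</ftx>')
print(url)
```

The urlencode call takes care of escaping the angle brackets and quotes so the query survives the HTTP round trip.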

Monday Jul 27, 2015

Migrating huge content repositories using Archiver

I was recently asked to migrate a WebCenter Content repository to a new instance. The team had exported the content locally using the Archiver web applet and was copying the archive folder to each destination content server file system. The archive folder was ~300 GB in size: it took around 6 hours to tarball the archive directory, another 2 hours to copy the tarball to the destination content server, and another 6 hours to extract it. And this process was repeated every time we wanted to replicate the repository.

An easier solution would have been to mount a shared volume on the source content server and export the content there. You could then mount this shared volume on any destination server and use it for the import. However, the Archiver web applet only allows archives under the default location in the instance directory, that is [INSTANCE_DIR]/ucm/cs/archives. The key is to run Archiver from the console as a standalone application. The standalone version of the Archiver application is required to create new collections or browse the local file system to connect to new collections.
  1. After you have identified a location for your archive collection, say /u02/share/archives/abc, create a new file, collection.hda, under the new directory and enter the following lines. Make sure that the IDC_Name parameter does NOT match the IDC_Name for the content server instance.

    @Properties LocalData
    IDC_Name=wcc-ucm
    blDateFormat=M/d{/yy}{ h:mm[:ss]{ a}}!mAM,PM!tAmerica/New_York
    @end
    @ResultSet Archives
    2
    aArchiveName
    aArchiveDescription
    @end

  2. Start the Archiver utility as a standalone application, [DOMAIN_HOME]/ucm/cs/bin/Archiver, go to Options -> Open Archive Collection… and click Browse Local… to create a new collection location. Browse to the newly created directory and select the collection.hda file we just created, /u02/share/archives/abc/collection.hda.
    Open Archive Collection

  3. In the Browse To Archive Collection dialog, enter the correct paths for the vault and weblayout directories on the current system and click OK.
    Browse to Archive Collection

  4. Once the new Archive Collection is added to the list, click Open to make it current. You can now create your new archive and export content into it.
  5. After you have completed your content export/import, make sure that you return to Options -> Open Archive Collection… and make the default Archive Collection active.
To IMPORT this archive collection on another content server, make the shared volume available on the target server and follow steps 2-5 to load the collection for import.
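The collection.hda from step 1 can also be generated by a small script, which is handy when you replicate to many targets. A sketch follows (Python); the directory, IDC_Name, and timezone values are placeholders to adjust for your environment.

```python
import os

# Template for a minimal Archiver collection.hda (doubled braces render
# as literal braces in the blDateFormat value).
COLLECTION_HDA = """@Properties LocalData
IDC_Name={idc_name}
blDateFormat=M/d{{/yy}}{{ h:mm[:ss]{{ a}}}}!mAM,PM!t{timezone}
@end
@ResultSet Archives
2
aArchiveName
aArchiveDescription
@end
"""

def write_collection_hda(collection_dir, idc_name, timezone="America/New_York"):
    """Create a collection.hda so the standalone Archiver can open
    this directory as a new archive collection."""
    os.makedirs(collection_dir, exist_ok=True)
    path = os.path.join(collection_dir, "collection.hda")
    with open(path, "w") as f:
        f.write(COLLECTION_HDA.format(idc_name=idc_name, timezone=timezone))
    return path

path = write_collection_hda("/tmp/archives/abc", "wcc-ucm")
```

Remember the caveat from step 1: the IDC_Name written here should NOT match the IDC_Name of the content server instance.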

Monday May 11, 2015

Rule-based Classification using Oracle Text in WebCenter Content

A major problem facing businesses and institutions today is information overload. Sorting useful documents from documents that are not of interest challenges the ingenuity and resources of both individuals and organizations. Given that WebCenter Content stores an organization's content, it is natural to want to classify documents as they are checked into the system.

Oracle Text offers various approaches to document classification. Under rule-based classification, you have a predefined set of categories and you write the classification rules yourself. With supervised classification, Oracle Text creates classification rules based on a set of sample documents that you pre-classify. Finally, with unsupervised classification (also known as clustering), Oracle Text performs all the steps, from writing the classification rules to classifying the documents. WebCenter Content uses Oracle Text as its underlying indexing engine when you use Database FULLTEXT or OracleTextSearch and so we should be able to use the Oracle Text approaches to classification and apply it within Content.

In this post, I will use rule-based classification where we will decide on categories, formulate the rules that define those categories, index the rules and use the MATCHES operator to classify documents.

Rule-based classification is very accurate for small document sets. Results are always based on what you define, because you write the rules which are actually query phrases. However, defining rules can be tedious for large document sets with many categories. As your document set grows, you may need to write correspondingly more rules.

The first step is to define a list of categories and the corresponding query a document must match to be placed in each category. For the purpose of this example, we will have the following 5 categories. You will notice that the rules are simple phrases combined with operators.

Category | Rule
Astronomy | Jupiter or Earth or star or planet or Orion or Venus or Mercury or Mars or Milky Way or Telescope or astronomer or NASA or astronaut
Paleontology | fossils or scientist or paleontologist or dinosaur or Nature
Health | stem cells or embryo or health or medical or medicine or World Health Organization or AIDS or HIV or virus or centers for disease control or vaccination
Natural Disasters | earthquake or hurricane or tornado
Technology | software or computer or Oracle or Apple or Intel or IBM or Microsoft
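To make the rule semantics concrete, here is a toy, plain-Python stand-in for what the CTXRULE index and MATCHES operator do: each rule is an OR of phrases, and a document is tagged with every category whose rule it matches. This is only an illustration; in the actual setup the matching happens inside the database.

```python
# Each category maps to a rule: an OR of phrases (lowercased for the toy).
CATEGORY_RULES = {
    "Astronomy": ["jupiter", "earth", "star", "planet", "orion", "venus",
                  "mercury", "mars", "milky way", "telescope", "astronomer",
                  "nasa", "astronaut"],
    "Paleontology": ["fossils", "scientist", "paleontologist", "dinosaur",
                     "nature"],
    "Health": ["stem cells", "embryo", "health", "medical", "medicine",
               "world health organization", "aids", "hiv", "virus",
               "centers for disease control", "vaccination"],
    "Natural Disasters": ["earthquake", "hurricane", "tornado"],
    "Technology": ["software", "computer", "oracle", "apple", "intel",
                   "ibm", "microsoft"],
}

def categorize(text):
    """Return a comma-separated list of categories whose rule matches.

    A naive substring test stands in for the CTXRULE index and the
    MATCHES operator used in the real implementation.
    """
    text = text.lower()
    hits = [cat for cat, phrases in CATEGORY_RULES.items()
            if any(phrase in text for phrase in phrases)]
    return ", ".join(hits)

print(categorize("NASA astronauts watched the hurricane from orbit"))
```

A document can legitimately land in several categories at once, which is why the metadata field below is a multiselect list.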

Next create a new table within WebCenter Content using the Configuration Manager to store the categories and rules.
Create table using Configuration Manager

Create a new view on the table we just created.
Create new View using Configuration Manager

Load the categories and the rules into the table using this view.
List of Categories

Create a new metadata field to store the categories. We will auto-populate this field for each document with the categories that match it.
Create a new Metadata using Configuration Manager

Enable the option list for the new metadata field, set the option list type to Multiselect List, and use the view we just created.
Enable Option List for new Metadata

Create a new database index of type CTXSYS.CTXRULE on the rule table.
Create CTXSYS.CTXRULE index in the Database

Create a new database function to return a comma-separated list of categories for a document. In my example, I will use the Oracle Text indexes that WebCenter Content creates, since WebCenter Content can extract text from a wide variety of content, including attachments in emails and files within compressed archives.
Create FUNCTION categorizeDocument in the Database

Run the following script to generate the categories for all documents that were released today. This script will also update the revisions table to mark the document record for update so that the indexer will pick it up the next time it runs.
PL/SQL Code to generate Categories

After running the script, you should see the field populated with the matching categories on the Content Information page for any document checked in and released today.



NOTE: These code snippets are examples and should be used for research and demonstration purposes only; as such, they are unsupported.

Sunday Mar 22, 2015

Drag and drop file upload using HTML5 (YUI3)

I have seen a lot of ADF applications provide drag and drop file upload capabilities. I was wondering if there was an easy way to have this on the Content Server. If you are familiar with the UI components on the Content Server, you will know that it relies a lot on YUI.

In this example, I will use the YUI3 Uploader to provide drag and drop multiple file upload capability on the Content Server. I will create a simple HTML file and check it into the Content Server. Accessing it from the content server will show a form through which you can upload files. If you want Content Server-style pages, you can modify this HTML file to insert Idoc Script includes and save it as an HCSP file.
Upload Example Screenshot

I have used the example provided by Yahoo "Example: Multiple Files Uploader with POST Variables and Server Data Retrieval" and modified it to upload files to a Content Server.

First and foremost, you will require the following lines to include YUI3 libraries and stylesheets.
JavaScript and Stylesheet Includes

Second, create a form to capture some metadata. In this example, we will apply these metadata values to all files being uploaded together. You can also move this form to the uploader table to set metadata at the file level.
Metadata Input Form

Third, load the Uploader and the JSON libraries. Also, the Content Server expects the file bytes to be uploaded using the primaryFile field name.
Load YUI Libraries and create Instance

Fourth, set the POST variables or the metadata to be assigned to the files. In this example, we will use the same values for all files. You can have different metadata for individual files too.
Set Metadata as POST Variables

Fifth and finally, once the checkin is complete, we need to parse the JSON data and then display the Content ID.
Process Data returned from the Content Server

You can download the HTML (file_upload.htm) here.

Note:

  • According to the announcement here, Yahoo has stopped development on YUI. However, you should be able to use the concepts used here with other frameworks.
  • If your browser does not support HTML 5, for example IE <= 9, you will need to use the flash component which requires additional configuration not discussed here.
  • This uploader is not supported on iOS devices.

Saturday Mar 07, 2015

RIDC using Oracle Java Stored Procedures

Have you ever faced a situation where you have a database table filled with content in a BLOB column and you want to migrate it into the Oracle Content Server? If so, and you want to write a PL/SQL procedure that loops through the records and checks the content directly into the Content Server, you can use the Oracle JVM that is configured with an Oracle Database. You can load the RIDC libraries and your RIDC code into the database as schema objects and then write a Java stored procedure.

In this example, I will explain how to take a table with content in a BLOB column and check the files into the Content Server. We will use two approaches: one saves the file to disk and then checks it in; the other keeps the file in memory and checks it in. Note that the in-memory approach should not be used with large files.

The first step is to check whether your Oracle Database has Oracle JVM enabled. It is usually enabled by default; otherwise, you can find instructions to enable it here.
http://docs.oracle.com/cd/B28359_01/java.111/b31225/chfour.htm#BABEBCJA

The second step is to grant certain permissions to the schema user. In this example, I have created a new user, TEST.

CREATE USER test IDENTIFIED BY <passwd>;
GRANT ALL PRIVILEGES TO test;
EXEC DBMS_JAVA.GRANT_PERMISSION('TEST', 'SYS:java.net.SocketPermission', '*', 'connect, resolve');
EXEC DBMS_JAVA.GRANT_PERMISSION('TEST', 'SYS:java.io.FilePermission', '/tmp/*', 'read, write');

The third step, and the most complicated one, is to load all the required Java libraries and dependencies as database objects. This includes all the jar files in the RIDC library as well as the jars that contain the classes referenced by the RIDC jars. You can download the RIDC library here. The dependency list is extensive and can be a nightmare to build, but Oracle does provide a tool to make things a bit easier.

The ojvmtc tool enables you to resolve all external references prior to running the loadjava tool. It accepts a classpath specifying the JARs, classes, or directories used to resolve class references. When an external reference cannot be resolved, the tool either produces a list of unresolved references or generates stub classes to allow resolution, depending on the options specified. A generated stub class throws a java.lang.ClassNotFoundException if it is referenced at runtime.

Please note that you would need to resolve all the dependencies for the entire module to work. For this article, however, we do not need to resolve them all.

The following command will check for references and will generate stub classes for the classes it did not find. I downloaded and placed the RIDC and other jars in the "jars" directory under my home directory.

<dbhome>/bin/ojvmtc -list ~/jars/oracle.ucm.ridc-11.1.1.jar -classpath ~/jars/*.jar -jar dummy.jar -bootclasspath <dbhome>/jdk/jre/lib/rt.jar:<dbhome>/jdk/jre/lib/jce.jar:<dbhome>/jdk/jre/lib/jsse.jar

Now we are ready to load all these libraries into our schema. Run the following command to load all the jar files.

<dbhome>/bin/loadjava -verbose -resolve -user test/<passwd> ~/jars/*.jar

The fourth step is to write your Java class, compile and load it into your database schema. I will use the two methods below to checkin content.

// This method takes a file path and checks the file in
public static String checkinFile(String fPath) {
    String dDocName = "";
    try {
        IdcClientManager manager = new IdcClientManager();
        IdcClient<IdcClientConfig, Protocol, Connection> client =
            (IdcClient<IdcClientConfig, Protocol, Connection>) manager.createClient("idc://10.0.0.5:4444");

        IdcContext userContext = new IdcContext("sysadmin");
        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "CHECKIN_NEW");
        binder.putLocal("dDocTitle", "Test File - ABC000001");
        binder.putLocal("dDocType", "Document");
        binder.putLocal("dSecurityGroup", "Public");
        binder.putLocal("dDocAccount", "");
        binder.addFile("primaryFile", new TransferFile(new File(fPath)));
        ServiceResponse response = client.sendRequest(userContext, binder);
        DataBinder responsebinder = response.getResponseAsBinder();
        dDocName = responsebinder.getLocal("dDocName");
    } catch (Exception e) {
        // Return the error message instead of a dDocName
        dDocName = e.getLocalizedMessage();
    }
    return dDocName;
}

// This method takes a BLOB column value and checks it in from memory
public static String checkinBlob(BLOB bFile, String filename) {
    String dDocName = "";
    try {
        IdcClientManager manager = new IdcClientManager();
        IdcClient<IdcClientConfig, Protocol, Connection> client =
            (IdcClient<IdcClientConfig, Protocol, Connection>) manager.createClient("idc://10.0.0.5:4444");

        // Read the BLOB and wrap it in an input stream.
        // Keep the files small; the whole BLOB is held in memory.
        byte[] filearray = bFile.getBytes(1, (int) bFile.length());
        ByteArrayInputStream stream = new ByteArrayInputStream(filearray);

        IdcContext userContext = new IdcContext("sysadmin");
        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "CHECKIN_NEW");
        binder.putLocal("dDocTitle", "Test File - ABC000002");
        binder.putLocal("dDocType", "Document");
        binder.putLocal("dSecurityGroup", "Public");
        binder.putLocal("dDocAccount", "");
        binder.addFile("primaryFile", new TransferFile(stream, filename, bFile.length()));
        ServiceResponse response = client.sendRequest(userContext, binder);
        DataBinder responsebinder = response.getResponseAsBinder();
        dDocName = responsebinder.getLocal("dDocName");
    } catch (Exception e) {
        // Return the error message instead of a dDocName
        dDocName = e.getLocalizedMessage();
    }
    return dDocName;
}

Run the following command to load the class file.

<dbhome>/bin/loadjava -verbose -f -resolve -user test/<passwd> ~/jars/CheckinFileIntoContentServer.class

Oracle DB Commands

The fifth step is to create PL/SQL functions that use the Java methods from the class we just loaded.

CREATE OR REPLACE FUNCTION test.checkinFile(fPath IN VARCHAR2)
  RETURN VARCHAR2
  AS LANGUAGE JAVA
  NAME 'com.oracle.justin.wcc.CheckinFileIntoContentServer.checkinFile(java.lang.String) return java.lang.String';
/

CREATE OR REPLACE FUNCTION test.checkinBlob(bFile IN BLOB, filename IN VARCHAR2)
  RETURN VARCHAR2
  AS LANGUAGE JAVA
  NAME 'com.oracle.justin.wcc.CheckinFileIntoContentServer.checkinBlob(oracle.sql.BLOB, java.lang.String) return java.lang.String';
/

The sixth step is to create the following table and insert two rows with file content into it.

CREATE TABLE TEST.blob_table (
  fileid INTEGER NOT NULL,
  filename VARCHAR2(255) NOT NULL,
  filedata BLOB
);

The seventh and the final step is to loop through the table and checkin the files.

DECLARE
  vfilename VARCHAR2(255);
  vblob     BLOB;
  lbloblen  INTEGER;
  lfile     UTL_FILE.FILE_TYPE;
  lbuffer   RAW(32767);
  lamount   BINARY_INTEGER;
  lpos      INTEGER := 1;
BEGIN
  -- Save the file to a temporary location and check in the file
  SELECT filename, filedata
    INTO vfilename, vblob
    FROM blob_table
   WHERE fileid = 1;
  lbloblen := DBMS_LOB.GETLENGTH(vblob);
  lfile := UTL_FILE.FOPEN('TEMP_DIR', vfilename, 'wb', 32767);
  WHILE lpos <= lbloblen LOOP
    lamount := 32767;
    DBMS_LOB.READ(vblob, lamount, lpos, lbuffer);
    UTL_FILE.PUT_RAW(lfile, lbuffer, TRUE);
    lpos := lpos + lamount;
  END LOOP;
  UTL_FILE.FCLOSE(lfile);
  DBMS_OUTPUT.PUT_LINE('dDocName: ' || CHECKINFILE('/tmp/' || vfilename));

  -- In-memory file check-in
  SELECT filename, filedata
    INTO vfilename, vblob
    FROM blob_table
   WHERE fileid = 2;
  DBMS_OUTPUT.PUT_LINE('dDocName: ' || CHECKINBLOB(vblob, vfilename));
END;
/

After the PL/SQL is executed, you should see two files in your Content Server with metadata that was coded in the Java class.

You can download the source files below:

NOTE: These code snippets should be used for development and testing purposes only; as such, they are unsupported and should not be used in production environments.

Sunday Feb 08, 2015

RIDC using Jython Scripts

Jython is an implementation of Python for the JVM. Jython takes the Python programming language syntax and enables it to run on the Java platform. This allows seamless integration with the use of Java libraries and other Java-based applications. Plus it helps to have a loosely typed interpreted language as it makes writing code much faster.

I have written a set of scripts to execute some core Content Server services to check-in, update, delete content items and to browse, create, delete, link items using FrameworkFolders. You can download the scripts here.

JythonWCCAllExampleScripts.zip

Checkin new Content and a Revision
Update Metadata
Delete a single revision and all revisions
FrameworkFolders: Create Folder
FrameworkFolders: Create Shortcuts
FrameworkFolders: Browse Folders
FrameworkFolders: Copy Items
FrameworkFolders: Move Items
FrameworkFolders: Delete Items
FrameworkFolders: Propagate Metadata
FrameworkFolders: Remove (unfile) content from a folder
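The check-in scripts above all build the same kind of service payload. Here is a small sketch (plain Python) of the local data a CHECKIN_NEW call carries; in the Jython scripts these key/value pairs are set on a RIDC DataBinder via binder.putLocal(key, value), and the file itself is attached separately as the primaryFile. The xComments value is just illustrative.

```python
def build_checkin_binder(title, doc_type="Document", security_group="Public",
                         account="", extra=None):
    """Assemble the local data for a CHECKIN_NEW service call.

    In the Jython scripts these pairs are applied to a RIDC DataBinder
    with binder.putLocal(key, value); extra lets you pass custom
    metadata fields.
    """
    binder = {
        "IdcService": "CHECKIN_NEW",
        "dDocTitle": title,
        "dDocType": doc_type,
        "dSecurityGroup": security_group,
        "dDocAccount": account,
    }
    binder.update(extra or {})
    return binder

binder = build_checkin_binder("Test File - ABC000001",
                              extra={"xComments": "checked in from Jython"})
```

The same pattern, with a different IdcService and different dollar-prefixed or folder parameters, underlies the FrameworkFolders scripts as well.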

NOTE: These code snippets should be used for development and testing purposes only; as such, they are unsupported and should not be used in production environments.

Tuesday Feb 03, 2015

Using Access Control Lists in WebCenter Content

WebCenter Content offers a comprehensive security model using its traditional Security Groups and Account metadata fields. Each content item is assigned to a security group, and if accounts are enabled then content items can also be assigned to an account. Users are assigned a certain level of permission (Read, Write, Delete, or Admin) for each security group and account, which enables them to work with a content item only to the extent that they have permissions to the item's security group and account.

At times, these constructs do not meet the requirements and you have to look at the additional security options available within the Content Server. I prefer to choose one or more options in the order listed below; in my opinion, as you move down the list, the computational overhead on the Content Server increases (User ACLs add the most overhead).
  1. Security Groups and Roles
  2. Accounts
  3. Supplemental Markings (Records Management)
  4. NeedToKnow Component
  5. Role ACL
  6. Group ACL
  7. Oracle Entitlement Server
  8. User ACL
Follow the instructions below to setup Access Control Lists on your Content Server and configure them.
  1. Login to the Content Server as an Administrator.
  2. Navigate to the Administration -> Admin Server -> General Configuration page.
  3. Under the Additional Configuration Variables section, add the following lines. You can also add the lines directly to the <INSTANCE_HOME>/cs/config/config.cfg file.
    UseEntitySecurity=true
    SpecialAuthGroups=<comma separated list of security groups>
    # ZonedSecurityFields=xClbraUserList,xClbraAliasList,xClbraRoleList
    AllowQuerySafeUserColumns=true
    # AccessListPrivilegesGrantedWhenEmpty=true
  4. Restart the Content Server.
  5. Login to the Content Server as an Administrator.
  6. Navigate to the Administration -> Admin Applets -> User Admin applet.
  7. Go to the Aliases tab, and add aliases for the groups that you want to list in the Group Access List metadata. Note that the Group Access List is really an Alias Access List: it does NOT correspond to an LDAP group but to a Content Server internal alias.
  8. If you want to use LDAP groups for your ACLs, you will need to enable the RoleEntityACL component. This component is already installed but needs to be enabled. Restart the Content Server after this change.
  9. Navigate to the Administration -> Admin Applets -> Configuration Manager applet.
  10. Go to the Views tab and locate the ExternalRolesView view. Add the LDAP groups that you want to use for the Role ACL to this list.
NOTE: The User Access List type-ahead field will only display users that have logged into the Content Server at least once. The Group Access List type-ahead field will only display Aliases defined in the Content Server. The Role Access List type-ahead field will only display groups defined in the ExternalRolesView view. You will need to make sure that a corresponding group with the same name exists in LDAP.
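Step 3 above can also be scripted when you manage many instances. Below is a minimal sketch (Python) that adds or replaces key=value entries in a config.cfg; the path and the "Secure" security group are placeholder values for your environment.

```python
def set_config_entries(cfg_path, entries):
    """Add or replace key=value entries in a Content Server config.cfg.

    Existing keys are updated in place; new keys are appended at the end.
    """
    try:
        with open(cfg_path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    remaining = dict(entries)
    out = []
    for line in lines:
        key = line.split("=", 1)[0].strip()
        if key in remaining:
            # Replace the existing entry with the new value
            out.append(f"{key}={remaining.pop(key)}")
        else:
            out.append(line)
    # Append any entries that were not already present
    out.extend(f"{k}={v}" for k, v in remaining.items())
    with open(cfg_path, "w") as f:
        f.write("\n".join(out) + "\n")

set_config_entries("/tmp/config.cfg", {
    "UseEntitySecurity": "true",
    "SpecialAuthGroups": "Secure",   # placeholder security group list
    "AllowQuerySafeUserColumns": "true",
})
```

As with the manual edit, the Content Server must be restarted for the new entries to take effect.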

Monday Oct 13, 2014

How to: Using the ComponentTool utility to manage Content Server components

I have been asked multiple times, primarily by people who want to implement some sort of Middleware-as-a-Service (MWaaS) for WebCenter components, how to install and/or enable WebCenter Content components using a script.

One of the tools you can use to script this activity is the ComponentTool utility, which is installed by default. The executable is located in the <domainhome>/ucm/cs/bin/ directory. ComponentTool enables administrators to install, enable, and disable components from the command line.

I have listed the options and commands available with the ComponentTool utility in the table below:

Activity | Command
Enable a component | ComponentTool [-v|-vv] [-t trace_section] --enable <component_name>

For example,
   <DomainHome>/ucm/cs/bin/ComponentTool -vv --enable ContentFolios

Disable a component | ComponentTool [-v|-vv] [-t trace_section] --disable <component_name>

For example,
   <DomainHome>/ucm/cs/bin/ComponentTool -v --disable ContentFolios

List components | ComponentTool [-v|-vv] [-t trace_section] --list-enabled|--list-disabled|--list

Install a new component | ComponentTool [-v|-vv] [-t trace_section] --install <component_name>.hda|<component_name>.zip [--preferences <component_name_prefs>.hda]

For example,
   <DomainHome>/ucm/cs/bin/ComponentTool -vv --install /tmp/sccomponent01.zip

Help | ComponentTool [-v|-vv] [-t trace_section] --help
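For scripting across many instances, the commands above are easy to assemble programmatically. A small sketch (Python) follows; the domain home path is a placeholder, and the command list can be handed to subprocess.run on the target machine.

```python
import shlex

def component_tool_cmd(domain_home, action, target=None, verbosity="-vv"):
    """Build a ComponentTool command line for scripted component management.

    action: enable, disable, install, help, list, list-enabled, or
    list-disabled; target is the component name (or the .zip/.hda path
    for install).
    """
    cmd = [f"{domain_home}/ucm/cs/bin/ComponentTool", verbosity, f"--{action}"]
    if target:
        cmd.append(target)
    return cmd

cmd = component_tool_cmd("/u01/domains/wcc_domain", "enable", "ContentFolios")
print(shlex.join(cmd))
```

Passing the list to subprocess.run(cmd, check=True) avoids shell quoting issues and raises an error if ComponentTool exits non-zero.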

Thursday Jul 03, 2014

iOS app for WebCenter Content

I have the habit of browsing through the Apple AppStore on my iPhone to discover new apps and give them a try. I usually look for new games, but today I thought I would try the app for WebCenter Content. Finding the app was straightforward; I just needed to search for WebCenter Content and a few apps popped up. The first one, from Oracle, was what I was looking for.

Opening the app presents a login screen. It requires a username, a password, and the URL of the content server you want to connect to. I used a URL in the format http[s]://hostname[:port]/cs/idcplg and was soon connected.

The app itself is fairly simple, with basic functions for searching (includes full-text), browsing and viewing documents with native device support. I have compiled a few screenshots of the app.
