Tuesday Mar 26, 2013

Sharing a saved query through Desktop Integration Suite and folders

I had someone recently ask if there is a way to create a query folder through Desktop Integration Suite (DIS).  Query folders are a new feature available in Framework Folders that basically run pre-defined searches within the context of a folder.  While not immediately obvious, there is a way to do it. 

First, you perform your search through DIS and get your results in the Search Results folder.

DIS Search

Image Search

Search Results

You then take those results and right-click to save them as a Saved Query.  It now goes under the My Saved Queries folder within My Content Server.   Now you can hold down the Ctrl key and drag it to one of the folders under Browse Content.  

SaveQuery

Saved query

Query copied

Query in web UI

And that's all you need to create that query folder in DIS! 


Tuesday Feb 19, 2013

Getting started with Desktop Integration Suite

Getting Started with Desktop Integration Suite

I recently discovered the Oracle Learning Library, which is a nice site for self-learning videos and tutorials on Oracle products.  Marsha Hancock, Senior Principal Curriculum Developer for WebCenter Content, just posted a video on Getting Started with Desktop Integration Suite (DIS).  This is a great way to quickly understand how to connect to WebCenter Content with DIS and begin working with it.

Thursday Feb 07, 2013

Caught in the act!

Busted

Sometimes when troubleshooting issues, the exact cause of the issue may be difficult to find.  You may run across an error appearing in the log file, but it may not have enough information about what went wrong...or how it might happen again.  You can turn on tracing and watch the output, but if you don't know when the error may happen, you may have to sift through a lot of trace logs to find the spot of the error.  That's where Event Trap tracing comes in.

Event Trap tracing allows you to specify keywords for the content server to look for as it's writing out tracing in the server output.  If one of those keywords is found, all of the tracing in the buffer at that time will be sent to a separate event tracing output file.  So now you have a nice slice of tracing activity at the exact moment the particular keyword (based on an error message or the like) is hit.  In addition, a thread dump from the JVM can be obtained at the same time to capture all of the thread activity as well.  By default, the keyword is Exception so that every exception is captured this way.

Event Trap Settings

By default, the log files can be found in the <content server instance directory>/data/trace/event directory, or they can be viewed in the browser by clicking on the 'View Event Output' link.

Tuesday Jan 29, 2013

Conversions in WebCenter Content

One of the guiding principles with WebCenter Content has been to make it as easy as possible to consume content.  And part of that means viewing content in a format that is optimal for the end user… regardless of the format the content was created in.  So WebCenter Content has a long history of converting files from one format to another.  Often this involves converting a proprietary desktop publishing format to something more open that can be viewed directly from a browser.  Or taking a high resolution image and creating a rendition that downloads quickly over a slow network.

Conversion Decision Tree

Over the life of the product, the types and methods for those conversions have grown to provide a broad range of options.  It's sometimes confusing to know what conversions are available and where exactly they are done (Content Server or Inbound Refinery), so I've put together a flowchart and list describing all of the different types of conversion, how and where they are done, and the pros and cons of each.  This list covers what's available as of the current release – WebCenter Content 11g PS5.

PDF Conversions

Where: Inbound Refinery
When: Upon check-in
How: Multiple ways
Platform: All (* but depends)

PDF conversions are probably the most common type of conversion done with WCC.  This involves converting a desktop publishing format (e.g. Microsoft Word) into Adobe PDF format.  The benefits obviously include being able to read the document directly in the browser (with a PDF reader plug-in) and not requiring a 3rd party product to read the proprietary format.  PDFs provide additional benefits such as being able to start viewing the document before the entire file downloads, possible compression of the file size, and the ability to apply watermarks and additional security to the file.  And optionally, PDF/A format can be chosen, which is recognized as an approved archival format.

Within PDF conversions, there are several different methods that can be used to create the PDF, depending on the needs and requirements.

PDFExportConverter – This method uses Oracle’s own OutsideIn filters to directly convert multiple format types into PDF.  The benefits include multiple platform support (any platform that WCC supports), fastest conversion, and no 3rd party software requirements.  The main downside to this type of conversion is it has the lowest fidelity to the original document. Meaning it won’t always exactly match the look and feel of the original document.  These formats are supported by the OutsideIn filters for conversion to PDF.

WinNativeConverter – Like the name implies, this type of conversion uses the native applications on Windows to do the conversion.  By using the original application that was used to create the document, you will get the best fidelity of PDF compared to the original.  The downside is that the Inbound Refinery can only be run on Windows and not other platforms.  It also requires a distiller engine to convert the PostScript format that gets printed from the native applications to PDF.  The recommended choice for that is AFPL Ghostscript.

OpenOfficeConversion – The Open Office conversion is a bit of a compromise between the two types of conversions mentioned above.  It uses Apache OpenOffice to open and convert the native file.  In most cases, it will give you better fidelity of PDF than PDFExportConverter, but still not as good as WinNativeConverter.  It also supports more than just Windows, so it has broader platform support than WinNativeConverter.

Tiff Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses a 3rd party (CVISION PdfCompressor) engine to perform OCR and PDF conversion
Platform: Windows Only

When needing to convert TIFF formatted files into PDFs, this can be done with either PDFExportConverter or Tiff Converter.  The major difference is whether optical character recognition (OCR) needs to be performed on the file in order to extract the full text from the image.  If OCR is required, then Tiff Converter is used for that type of conversion.  In addition, a 3rd party tool, CVISION PdfCompressor, is required to do the actual OCR and conversion piece.  Tiff Converter acts as the controller between the Inbound Refinery and PdfCompressor.  But because PdfCompressor is a Windows-only application, the Inbound Refinery must also be on Windows.

XML Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses Oracle OutsideIn filters to convert native formats into XML
Platform: All

The XML Converter allows for native documents to be converted into 2 flavors of XML: FlexionXML (based on FlexionDoc schema) and SearchML (based on the SearchML schema).  In addition, those formats can go through additional transformation with a custom XSLT.  Because the XML Converter utilizes the Oracle OutsideIn filter technology, it supports all platforms.

DAM Converter

Where: Inbound Refinery
When: Upon check-in and updates
How: Can use both Oracle OutsideIn filters as well as 3rd party applications to do image conversions.  Flip Factory is required for video conversions.
Platform: All (* but depends)

DAM Converter is used to create multiple renditions of either image or video files.  The primary goal is to convert original formats, which are typically high resolution and large in size, into other formats geared towards web or print delivery.  One thing that is unique to DAM Converter is that the metadata used to specify the rendition set can be updated after the item has been submitted, which will send the file back to the Inbound Refinery to be reprocessed.

When using the image converter, the Inbound Refinery comes with the Oracle OutsideIn filters to create renditions, so nothing else is required and it can run on all platforms.  But the converter also supports other command-line driven image converters such as Adobe Photoshop, XnView NConvert, and ImageMagick.  Some are commercial and some are freeware.  Each has different capabilities for different use-cases and is supported on various platforms.  But for general purpose re-sizing, resolution, and format changes, OutsideIn can handle it.

For video conversion, Telestream’s Flip Factory is required.  The DAM Converter acts as the controller between the Inbound Refinery and Flip Factory.  What makes this integration a bit unique is that it is handled purely at a file system level.  This means that Flip Factory, which is a Windows-only application, does not need to reside on the same server as the Inbound Refinery.  They simply need shared file system access between servers.  So the Inbound Refinery can be on Linux while Flip Factory is on Windows.  

HTML Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses Microsoft Office to convert Office documents into HTML
Platform: Windows Only

HTML Converter uses Microsoft Office to save the documents as HTML documents, collects the output (into a zip file if multiple files), and returns them to Content Server.  Using the HTML save output directly from Office, you get a very good fidelity of HTML compared to the original native format.  This is especially true for Excel and Visio which are less text-based.  The downside is you have no control over the HTML output to make any changes or provide consistency between conversions.  It’s simply formatted based on Office’s formatting.  Also, it does not apply any templating around the content to insert code before or after the content or present the document within the structure of a larger HTML page such as in the case of Site Studio.   

Dynamic Converter

Where: Content Server
When: Upon check-in or on-demand
How: Uses Oracle OutsideIn filters to convert native documents into HTML
Platform: All

Like HTML Converter, Dynamic Converter converts Office documents into HTML.  But there are several key differences between the two.  First, Dynamic Converter uses OutsideIn filters to convert to HTML, so it supports a wide range of different native formats.  Another difference is the processing happens on the Content Server side and not the Inbound Refinery.  This allows the conversion to happen on-demand the first time the HTML version is requested.  Alternatively, DC can be configured to do the conversion upon check-in and cache the results so they are immediately available and don't need to go through conversion on first request.  DC also supports a wide range of controls over how the HTML is precisely formatted.  The result can be very minimal and clean HTML with various div or span tags to allow styling with CSS.  This can lead to a more consistent look and feel between converted documents.  It also allows for insertion of code before or after the content to embed the output within a template, which is what is used within Site Studio.

Thumbnail Creation

Where: Content Server or Inbound Refinery
When: Upon check-in
How: Uses Oracle OutsideIn filters to create a thumbnail representation of the document to be used on search results
Platform: All

As a new feature in PS5, thumbnails can now be generated directly in the Content Server and not require the document to be sent to the Inbound Refinery (if it doesn’t need other conversions).  This allows the document to become available much more quickly.  But if the file is sent to the Inbound Refinery for other types of conversions, the thumbnail can be generated at that point.

For further information on conversions, see the documentation on Conversions as well as Dynamic Converter.

Monday Jan 14, 2013

Migrating folders and content together in WebCenter Content

In the case of migrating from one WebCenter Content instance to another, there are several different tools within the system to accomplish that migration depending on what you need to move over.

This post will focus on the use case of needing to move a specific set of folders and their contents from one instance to another.  And the folder architecture in this example is Folders_g.  Although Framework Folders is the recommended folders component for WebCenter Content 11g PS5 and later, there are still cases where you must use Folders_g (e.g. WebCenter Portal, Fusion Applications, Primavera, etc).  Or perhaps you are at an older version and Folders_g is the only option.

To prepare, you must first have the FoldersStructureArchive component enabled on both the source and target instances.  If you are on UCM 10g, this component will be available within the CS10gR35UpdateBundle/extras folder.  In addition to enabling the component, there is a configuration flag to set.  By default, the config variable ArchiveFolderStructureOnly is set to false which means content will be exported along with the folders, so that can be left alone.  The config variable AllowArchiveNoneFolderItem is set to true by default which means it will export content both in the folder structure as well as those not selected...or even outside of folders.  Basically, it means you must use the Export Criteria in the archive to control the content to export. In our use case, we only want the content within the folders we select, so the configuration should be set as AllowArchiveNoneFolderItem=false.  Now only content that is in our selected folders will get exported into the archive. This can be set in the General Configuration in the Admin Server.
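
For reference, the relevant entries in the Additional Configuration Variables would look like this (ArchiveFolderStructureOnly is shown only for completeness, since false is already the default):

#Export content along with the folder structure (the default)
ArchiveFolderStructureOnly=false
#Only export content that resides in the selected folders
AllowArchiveNoneFolderItem=false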

You will also need to make sure the custom metadata fields on both instances are identical.  If they are mismatched, the folders will not import into the target instance correctly.  You can use the Configuration Migration Utility to migrate those metadata fields.

Once the component is enabled and configurations set, go to Administration -> Admin Applets -> Archiver and select Edit -> Add... to create a new archive.  

New archive

Now that the archive is established, go back to the browser and go to Administration -> Folder Archiver Configuration.  For the Collection Name, it will default to the local collection.  Change this if your archive is in a different collection.  Then select your Archive Name from the list.

archive select

Expand the folder hierarchy and you can now select the specific folder(s) you want to migrate.  One thing to keep in mind is the parent folders of the ones you are selecting.  If the idea is that you want to migrate a certain section of the folder hierarchy to the other server and have it land in the same place in the target instance, you want to make sure the parent folder already exists in the target.  It is possible to migrate a folder and place it within a different parent folder in the target instance, but then you need to make sure you set the import maps correctly to specify the destination folder (more on that later).

Select folders

Once they are selected, click the Add button to save the configuration.  This will add the right criteria to the archive. Now go back to the Archiver applet.  Highlight the archive and select Actions -> Export.  Be sure 'Export Tables' is selected.  Note: If you try using the Preview on either the contents or the Table data, both will show everything and not just what you selected.  This is normal. The filtering of content and folders is not reflected in the Preview. Once completed, you can click on the View Batch Files... button to verify the results.  You should see an entry for the Collections_arTables and one or more for the content items.  

View batches

If you highlight the Collections row and click Edit, you can view and verify the results.

Verify collections table

You can do the same for the document entries as well.

Once you have the archive exported, you need to transfer it from the source to the target instance. If I don't have the outgoing providers set up to do the transfer, I sometimes cheat and copy over the archive folder from <cs instance dir>\archives\{archive name} directly over to the other instance.  Then I manually modify the collection.hda file on the target to let it know about the archive:

@ResultSet Archives
2
aArchiveName
aArchiveDescription
exportfoldersandfiles
Export some folders and files

@end

Or if I have Site Studio installed and my archive is fairly small, I'll take the approach described in this earlier post.

Before you import the archive on the target, you need to make sure the folders will be going into the right "parent" folder. If you've already migrated the parent folder of your folders to the target instance, then the IDs should match between instances and you should not have to do any import mappings. But if you are migrating the folders and the parent IDs will be different on the target (such as the main Contribution Folders or WebCenter Spaces root folder), then you will have to map those values.

First, to check what the folder's ID is, you can simply place your mouse over the link to the particular folder to get its ID.  It will be identified as dCollectionID in the URL.  Do this on both the source and target instances.

Get dCollectionID

In this example, the dCollectionID on the source instance for the parent folder (Contribution Folders) is 826127598928000002.  On the target instance, its Contribution Folders ID is 838257920156000002.  So that means when the top level 'Product Management' folder in our archive moves over, the ID that specifies the ParentID needs to be mapped to the new value. So now we have all the information we need for the mapping.

Go to the Archiver on the target instance and highlight the archive.  Click on the Import Maps tab and then on the Table tab.  Double-click on the folder and then expand the date entry.  It should then show the Collections table.

Import tables

Click on the Edit button for the Value Maps. For the Input Value, you want to enter the value of the dCollectionID of the parent folder from the source instance. In our example, this is 826127598928000002. For the Field, you want to change this to be the dParentCollectionID. And for the Output Value, you want this to be the dCollectionID of the parent folder in the target instance.  In our example, this is 838257920156000002.  Click the Add button.  

Value map

This will now map the folders into the correct location on target.

The archive is now ready to be imported.  Click on Actions -> Import and be sure the 'Import Tables' check-box is checked. To check for any issues, be sure to go to the logs at Administration -> Log Files -> Archiver Logs.

And that's it.  Your folders and files should now be migrated over.

Thursday Jan 10, 2013

Adding browser search engines in WebCenter Content

In a post I made a few years ago, I described how you can add WebCenter Content (UCM at the time) search to the browser's search engines.  I think this is a handy shortcut if you find yourself performing searches often enough in WCC. 

Well, in the PS5 release, this was actually included as a new feature.  You need to enable the DesktopIntegrationSuite component in order to access it.  Once you do, go to the My Content Server -> My Downloads link.  There you will see the 'Add browser search' link. 

Add Browser Search

Once clicked, an OpenSearchDescription XML file is produced, which modern browsers support for adding the search engine.

Browser Search Bar

The one piece that's missing is something I mentioned in my earlier post: forcing authentication.  If you haven't logged into the server, your search will be performed anonymously and you will only get back content that is available to the guest role.  To make sure the search is performed as your user, the extra parameter Auth=Internet can be passed to the server to cause it to challenge your request and force a login if needed.  Because the search engine URL is defined within the DesktopIntegrationSuite component, a new custom component can be added to override this.  Basically, the new component must override the dis_search_plugin resource and modify the Url locations.  Below is an example:

<@dynamichtml dis_search_plugin@>
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
    <ShortName><$if DIS_SearchPluginTitle$><$DIS_SearchPluginTitle$><$else$>Oracle WebCenter Content Server Search<$endif$></ShortName>
    <Description><$lc("wwDISSearchPluginDescription")$></Description>
    <Url type="text/html" method="get" template="<$xml(HttpBrowserFullCgiPath & "?IdcService=DESKTOP_BROWSER_SEARCH&Auth=Internet&MiniSearchText={searchTerms}")$>" />
    <$iconlocation=strReplace(HttpBrowserFullCgiPath,HttpCgiPath,"") & HttpImagesRoot & "desktopintegrationsuite/dis_search_plugin.ico"$>
    <Image height="16" width="16" type="image/x-icon"><$iconlocation$></Image>
    <Developer>Oracle Corporation</Developer>
    <InputEncoding>UTF-8</InputEncoding>
    <moz:SearchForm><$xml(HttpBrowserFullCgiPath & "?IdcService=DESKTOP_BROWSER_SEARCH&Auth=Internet&MiniSearchText=")$></moz:SearchForm>
</OpenSearchDescription>
<$setContentType("application/xml")$>
<$setHttpHeader("Content-Disposition","inline; filename=search_plugin.xml")$>
<$setHttpHeader("Cache-Control", "public")$>
<@end@>

I've included a pre-built custom component that does just that.

UPDATE (Jan 15, 2013)

In addition to enabling the component, there is also a configuration preference that must be enabled.   After enabling the Desktop Integration Suite component,  go to the 'advanced component manager'.  Go to the bottom to the 'Update Component Configuration' list and select DesktopIntegrationSuite and click Update.  The first entry is to 'Enable web browser search plug-in'.  Check that and click Update.

DIS Configuration

If you've already restarted to enable the DIS component, you do not need to restart for this configuration to take effect.

Friday Dec 21, 2012

Generating barcodes in reports

I recently had a comment posted on a previous blog post regarding generating barcodes in the reports that come with the records management module (either in WebCenter Content/UCM or WebCenter Content: Records/URM).  

I knew we could output barcodes because we do  in some of the default reports that come with the product.  But even when looking at those rich-text templates, it wasn't clear how they were defined.  So I did a little digging and discovered the code needed to be added to those fields to do the barcode magic.  I won't repeat the steps on how to update/create the custom reports from my earlier post, but will just cover the few extra steps for barcodes.

Once you have your field input into the template in Word, right-click on the field and choose BI Publisher -> Properties.  Click on the Advanced tab and you should see the box for Code with the field you are outputting surrounded by <?field_name?>. For barcodes, you'll want to enter this in that code field:

<?register-barcode-vendor:'oracle.xdo.template.rtf.util.barcoder.BarcodeUtil';'XMLPBarVendor'?><?dBarcodeFormated?>*<?dBarcode?>*<?format-barcode:dBarcodeFormated;code39;XMLPBarVendor?>

Just replace dBarcode with your field name (e.g. dDocName, xComments, etc).  
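
For example, with dDocName as the field being output, the substituted code would be:

<?register-barcode-vendor:'oracle.xdo.template.rtf.util.barcoder.BarcodeUtil';'XMLPBarVendor'?><?dDocNameFormated?>*<?dDocName?>*<?format-barcode:dDocNameFormated;code39;XMLPBarVendor?>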

code

Next, you'll want to change the font on the field to be 'BC 3of9'.  This font should have been added when the BI Publisher Desktop add-in for Word was installed.

font

Now simply follow the steps to add the template to the repository and configure the appropriate reports.  Then when the reports are run, the values should be output as barcodes.

report

One thing I noticed is when I saved the Word document in rich-text format, I was no longer able to re-open that rtf file and get back to the code for the field properties.  But in Word's default doc format, I was.  So if you think you might need to edit the report later on, it's probably a good idea to save a copy in doc format as well. 

Monday Dec 10, 2012

Expanding on requestaudit - Tracing who is doing what...and for how long

One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing.  This tracing section summarizes the top service requests happening in the server along with how they are performing.  By default, it has 2 different rotations.  One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services).  These traces provide the total time for all the requests against that service along with the number of requests and its average request time.  This information can provide a good start in possibly troubleshooting performance issues or tracking a particular issue.  

>requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
>requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

To change the default rotation or size of output, these can be set as configuration variables for the server:

RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)
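
For example, to have the short summary report more often and list more services, you might add something like this to the server configuration (the values here are arbitrary):

#Run the short summary every 60 seconds and list the top 10 services
RequestAuditIntervalSeconds1=60
RequestAuditListDepth1=10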

If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page and now you will get an audit entry for each and every service request. 

>requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

What's nice is it reports who executed the service and how long that particular request took.  In some cases, depending on the service, additional information will be added to the tracing relevant to that  service.

>requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

You can even go into more detail and insert any additional data into the tracing.  You simply need to add this configuration variable with a comma separated list of variables from local data to insert.

RequestAuditAdditionalVerboseFieldsList=TotalRows,path

In this case, for any search results, the number of items the user found is traced:

>requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

I also recently ran into the case where services were being called from a client through RIDC.  All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server.  So what we did was add a new field to the request audit list:

RequestAuditAdditionalVerboseFieldsList=ClientToken

And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request.  Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
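
As a rough sketch of that RIDC client side in Java (the host, port, user, and query below are hypothetical; adjust for your environment), ClientToken is just another value placed in the binder:

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.protocol.ServiceResponse;

import java.util.UUID;

public class CorrelatedRequest {
    public static void main(String[] args) throws Exception {
        // Connect over the intradoc socket port (host and port are hypothetical)
        IdcClientManager manager = new IdcClientManager();
        IdcClient client = manager.createClient("idc://wcchost:4444");
        IdcContext userContext = new IdcContext("bob");

        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
        binder.putLocal("QueryText", "dDocType <matches> `Document`");

        // Unique value that will show up in the server's requestaudit trace
        String token = UUID.randomUUID().toString();
        binder.putLocal("ClientToken", token);

        ServiceResponse response = client.sendRequest(userContext, binder);
        response.close();
        System.out.println("Sent request with ClientToken=" + token);
    }
}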

Monday Dec 03, 2012

Access Control Lists for Roles

Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3.  Well, there was actually an additional security option included in that release but not fully supported yet (only for Fusion Applications).  It's the ability to define Roles as ACLs on entities (documents and folders).  In PS5, this security option is now fully supported.

The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc) and thus the WebCenter Content administrator does not need to define them like they do with ACL Groups (Aliases).  So it's a bit of the best of both worlds.  Users are managed through the LDAP repository and are automatically granted/denied access through their group memberships, which are mapped to Roles in WCC.  A different way to think about it is being able to add multiple Accounts to content items...which I often get asked about.  Because LDAP groups can map to Accounts, there has always been this association between the LDAP groups and access to the entity in WCC.  But that mapping had to define the specific level of access (RWDA), and you could only apply one Account per content item or folder.  With Roles for ACLs, it basically takes away both of those restrictions by allowing users to define more than one Role and define the level of access on-the-fly.

To turn on ACLs for Roles, there is a component to enable.  On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top.   In the list of Disabled Components, enable the RoleEntityACL component. Then restart.  This is assuming the other configuration settings have been made for the other ACLs in the earlier post.  

Once enabled, a new metadata field called xClbraRoleList will be created.  If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

For Users and Groups, these values are automatically picked up from the corresponding database tables.  In the case of Roles, there is an explicitly defined list of choices that are made available.  These values must match the roles that are coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager.  On the Views tab, edit the values for the ExternalRolesView.  By default, 'guest' and 'authenticated' are added.

Configuration Manager

 Once added, you can assign the roles to your content or folder.

Role entity field

If you can both access the Security Group for that item and belong to that particular Role, you now have access to that item.  If you don't belong to that Role, you won't!

[Extra]

Because the selection mechanism for the list is a type-ahead field, users may not even know the possible choices to begin typing.  To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) to see what to type.  Being a placeholder field, it won't need to be added to the custom metadata database table or search engine.

List of possible roles field definition

Friday Oct 05, 2012

HTML Manifest for Content Folios

I recently worked on a project to create a custom content folio renderer in WebCenter Content. It needed to output the native files in the folio along with a manifest file in HTML format which would list the contents of the folio along with any designated metadata and a relative link to the file within the download.  This way a person could hand someone the folio download and it would be a self-contained package with all of the content and a single file to display the information on the contents.  The default Zip rendition of the folio will output the web-viewable version of the file with an HDA formatted file for each one. And unless you are fluent in HDA or have a tool to read them, they are difficult to consume.

Content Folio Manifest

I thought this might be useful for others, so I'm posting a copy of the component here. Beyond the standard instructions for installing a component, there is an environment configuration file (folionativezipwithmanifestrenderer_environment.cfg) which has a couple of options.

FolioMetadataManifestList - This is a comma separated list of metadata fields (system or custom) that should be included in the manifest file.

FolioMetadataManifestUseOriginalFilename - (True or False) If set to True, the filenames in the zip file will be based on the original filename as it was checked into WebCenter Content.  If False, it will use the 'Name' of the item as defined within the Folio.  This is usually the Title of the item.

The component also includes the source code, so feel free to use this as a reference for creating other interesting folios. 

Monday Sep 24, 2012

Configuring trace file size and number in WebCenter Content 11g

Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery,  systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning.

One of the nice tricks with the tracing is it honors the wildcard (*) character.  So you can put in 'schema*' and gather all of the schema-related tracing.  And you'll notice that if you select 'all' and update, it changes to just a *.

To view the tracing in real-time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom.  This works well if you're looking at something pretty discrete and the system isn't getting much activity.  But if you've got a lot of tracing going on, it would be better to go after the trace log file itself.  By default, the log files can be found in the <content server instance directory>/data/trace directory.  You'll see it named 'idccs_<managed server name>_current.log'.  You may also find previous trace logs that have rolled over.  In this case they will be identified by a date/time stamp in the name.  By default, the server will rotate the logs after they reach 1MB in size.  And it will keep the most recent 10 logs before they roll off and get deleted.  If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs.

#Change log size to 10MB and number of logs to 20
FileSizeLimit=10485760
FileCountLimit=20

This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section.  Restart the server and it should take on the new logging settings. 

Update - Sept. 27, 2012

 Kevin Smith has a nice blog post that describes some of these trace sections in detail.

Friday Sep 14, 2012

Mass Metadata Updates with Folders

With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced.  This is meant to replace the folder architecture of 'Folders_g'.  While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten.  One of the main goals of the new folders is to scale better at large volumes and remove the limitations of 1000 content items or sub-folders within a folder.  Along with the new architecture, it has a new look and a few additional features have been added.  One of those features is Query Folders.  These are folders that are populated simply by a query rather than by literally putting items within them.  This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor.  Now users can quickly define query folders anywhere within the standard folder hierarchy.

Within these new Query Folders is the very handy ability to do metadata updates.  It's similar to the Propagate feature in Folders_g, but there are some key differences that make this very flexible and much more powerful.

  • It's used within regular folders and Query Folders.  So the content you're updating doesn't all have to be in the same folder...or a folder at all.  
  • The user decides what metadata to propagate.  In Folders_g, the system administrator controls which fields will be propagated using a single administration page.  In Framework Folders, the user decides at that time which fields they want to update.
  • You set the value you want on the propagation screen.  In Folders_g, it used the metadata defined on the parent folder to propagate.  With Framework Folders, you supply the new metadata value when you select the fields you want to update.  It does not have to be defined on the parent folder.

Because of these differences, I think the new propagate method is much more useful.  Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders.  

Here are the basic steps to perform propagation.

  1. First create a folder for the propagation.  You can use a regular folder, but a Query Folder will work as well.

    CreateQueryFolder1

    CreateQueryFolder2

  2. Go into the folder to get the results.  

    Query Folder Result

  3. In the Edit menu, select 'Propagate'.

    Propagate Menu

  4. Select the check-box next to the field to update and enter the new value.  Click the Propagate button.

    Propagate Window

  5. Once complete, a dialog will appear showing it is complete.

    Confirmation

What's also nice is that the process happens asynchronously in the background, which means you can browse to other pages and do other things while it is still working.  You aren't stuck on the page waiting for it to complete.  In addition, you can add a configuration flag to the server to turn on a status indicator icon.  Set 'FldEnableInProcessIndicator=1' and it will show a working icon as it's doing the propagation.
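
That flag goes in with the other configuration variables:

#Show a working icon on items while propagation is processing them
FldEnableInProcessIndicator=1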

Propagate Indicator

There is a caveat when using the propagation on a Query Folder.   While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items.  So you may need to run it multiple times depending on the size...and have the query exclude the items as they get updated.

One extra note...Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content.  But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc), you'll need to continue using Folders_g until they are updated to use the new folders.

Thursday Aug 02, 2012

Adding and removing WebCenter Content cluster nodes

If you follow the Enterprise Deployment Guide, Fusion Middleware High Availability guide, or the support technote on example steps for installing a multi node cluster of WebCenter Content 11g, they all cover establishing a multi node cluster using the WebLogic Server domain configuration wizard.  But if you find yourself needing to add or remove nodes after the cluster has been established, there isn't much documentation covering that.  So the following are some steps on how to do those tasks.

Adding additional nodes 

1.  Install WebLogic Server and the WebCenter Content binaries on the new nodes.

2.  Log into WebLogic Server Administration Console and stop all of the managed servers.

3.  In Domain Structure, go to <domain> -> Environment -> Servers.   Select one of the UCM_server nodes and click the Clone button.

4.  For the Server Name, enter UCM_server# with the next logical number in the node sequence.  Enter the Server Listen Address and a port of 16200.  Click OK.


5.  Now a new machine needs to be created for the new node.  Go to <domain> -> Environment -> Machines.  Click New.

6.  Enter the machine name and click Next.


7.  Enter the Listen Address of the new node and modify the Listen Port for Node Manager if needed.  Click Finish.


8.  Go to the machine that was associated with the managed server you cloned from in step 3.  Because the new managed server was cloned from an existing one, it will initially be associated with that same machine.  Check the box for it and click Remove.


9.  Click on the Servers tab.  Check the box for the newly cloned managed server and  click Remove.  Click Yes to the confirmation.

10. Click on Machines in the Domain Structure again and click on the machine just created in step 7.

11.  Click on the Servers tab and click the Add button.  Select the newly cloned managed server and click Finish.


12. Repeat steps 3-11 for additional nodes.

13. Shut down the WLS Admin Server.

14. On the WLS Admin Server machine, change directory to one on the shared/remote file system's mount.  

15. Execute the pack command to bundle the domain configuration.  For example:

/u01/oracle/Middleware/Oracle_ECM1/common/bin/pack.sh -managed=true -domain=/u01/oracle/Middleware/user_projects/domains/wcc_domain -template=ecm_template.jar -template_name="my ecm domain"

16. Go to the new node and execute the unpack command accessing the newly created template.  For example:

/u01/oracle/Middleware/Oracle_ECM1/common/bin/unpack.sh -domain=/u01/oracle/Middleware/user_projects/domains/wcc_domain -template=ecm_template.jar

17. Start your WLS Admin Server.

18. On the new node, start the managed server via the command line.  For example:

/u01/oracle/Middleware/user_projects/domains/wcc_domain/bin/startManagedWebLogic.sh UCM_server3 http://wcchost1:7001

19. You can now configure Node Manager on the new node to be able to start and stop it from the WLS Admin Server.

Removing Nodes

1.  Go to the WebLogic Server Administration Console.

2.  Stop the node(s) to remove. 

3.  In Domain Structure, go to <domain> -> Environment -> Servers.  Select the checkbox for the server node(s) to remove and click the Delete button.

4.  Go to <domain> -> Environment -> Machines.  Select the checkbox for the machine(s) to remove and click the Delete button.

Tuesday Jul 10, 2012

Adjusting the Score on Oracle Text search results

When you sort the results of a search by Score using OracleTextSearch as the search engine in WebCenter Content, the results coming back are ranked by the relevancy of the search term to the document.  In theory, the more relevant the search term is to the document, the higher the Score it should receive.  But in practice, the relevancy score can seem somewhat of a mystery.  It's not entirely clear how it ranks the importance of some documents over others based on the search term.  And often, once a word appears a certain number of times within a document, the Score simply maxes out at 100 and the top results can be difficult to discern from one another.  Take for example the search for 'vacation' on this set of documents:

Score by relevance

Out of 7 results, 6 of them have a Score of '100' which means they are basically ranked the same.  This doesn't make the sort by Score very meaningful.  

Besides sorting by relevance, you can also tell Oracle Text to sort by occurrence.  In that case, the ranking is much more predictable, and in many cases it provides a more meaningful sorting of results than relevance.  Making the change takes a small component override of the SearchOperatorMap resource.  By default, the query used for full-text searching looks like:

<td>(ORACLETEXTSEARCH)fullText</td>
<td>DEFINESCORE((%V), RELEVANCE * .1)</td>
<td>text</td>

Overriding this resource and changing it to:

<td>(ORACLETEXTSEARCH)fullText</td>
<td>DEFINESCORE((%V), OCCURRENCE * .01)</td> 
<td>text</td>

will force it to now use occurrence (note the change in scale to .01 as well).  So running the same search and sort options as the example above, the results come out quite a bit differently:

Sort by occurrence

In this case, there is a clear understanding of how the items rank.  And generally, if the search term appears 3 times more in one document than another, it's got a better chance of being a document I'm interested in. 

You may or may not feel the relevance ranking is better than the search term occurrence, but this provides the opportunity to try an alternate method that might work better for your results.  A pre-built component is available for download here.
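
If you'd rather build the override yourself, the component just needs a resource file that redefines that row of the SearchOperatorMap table.  Below is a minimal sketch; the <@table@> wrapper is the standard resource-table syntax, but the column names in the header row are my assumption, so compare against the stock SearchOperatorMap definition in the core resources before using it:

<@table SearchOperatorMap@>
<table border=1>
<caption>Custom search operator map</caption>
<tr>
<td>name</td><td>operator</td><td>type</td>
</tr>
<tr>
<td>(ORACLETEXTSEARCH)fullText</td>
<td>DEFINESCORE((%V), OCCURRENCE * .01)</td>
<td>text</td>
</tr>
</table>
<@end@>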

There is one caveat in using this method.  The occurrence ranking also maxes out at 100, so if a search term appears in the document more than that, the Score will stay at 100.

Thursday Jul 05, 2012

Idoc Script Plug-in for Notepad++

For those of you that caught it in an earlier post, Arnoud Koot wrote a great Idoc Script plug-in for Notepad++.  Well, he's back at it and has written an update for 11g!

Auto-complete

Arnoud made his announcement a few days ago on the WebCenter Content forum. And it looks like Jonathan Hult caught it as well and posted to his blog.

A great addition to his plug-in is context sensitive help.  Now you can look up the variables and functions without having to switch to the formal Oracle documentation.

Context Sensitive Help

He's even provided a tool to update the help automatically based on the Oracle documentation. 

A couple of things I had missed in the instructions were the note about updating the LanguageHelp.ini with your own path to the iDoc11g.chm file, as well as the <ctrl><space> keystroke for the auto-complete.

Great work Arnoud!

About

Kyle Hatlestad is a Solution Architect in the WebCenter Architecture group (A-Team) who works with WebCenter Content and other products in the WebCenter & Fusion Middleware portfolios. The WebCenter A-Team blog can be found at: https://blogs.oracle.com/ateam_webcenter/
