Tuesday Mar 26, 2013

Sharing a saved query through Desktop Integration Suite and folders

I recently had someone ask if there is a way to create a query folder through Desktop Integration Suite (DIS).  Query folders are a feature in Framework Folders that run a pre-defined search within the context of a folder.  While not immediately obvious, there is a way to do it.

First, you perform your search through DIS and get your results in the Search Results folder.

[Screenshots: DIS search dialog and the Search Results folder]

You then take those results and right-click to save them as a Saved Query.  The saved query appears under the My Saved Queries folder within My Content Server.  From there, you can hold down the Ctrl key and drag it to one of the folders under Browse Content.

[Screenshots: saving the query, the copied query folder, and the query folder in the web UI]

And that's all you need to create that query folder in DIS! 


Tuesday Feb 19, 2013

Getting started with Desktop Integration Suite

I recently discovered the Oracle Learning Library, which is a nice site for self-learning videos and tutorials on Oracle products.  Marsha Hancock, Senior Principal Curriculum Developer for WebCenter Content, just posted a video on Getting Started with Desktop Integration Suite (DIS).  This is a great way to quickly understand how to connect to WebCenter Content with DIS and begin working with it.

Thursday Feb 07, 2013

Caught in the act!

Sometimes when troubleshooting issues, the exact cause may be difficult to find.  You may run across an error in the log file, but it may not have enough information about what went wrong...or how it might happen again.  You can turn on tracing and watch the output, but if you don't know when the error may happen, you may have to sift through a lot of trace logs to find the spot of the error.  That's where Event Trap tracing comes in.

Event Trap tracing allows you to specify keywords for the content server to look for as it writes tracing to the server output.  If a keyword is found, all of the tracing in the buffer at that time is sent to a separate event tracing output file.  So now you have a nice slice of tracing activity at the exact moment the particular keyword (based on an error message or the like) is hit.  In addition, a thread dump from the JVM can be obtained at the same time to capture all of the thread activity as well.  By default, the keyword is Exception, so every exception is captured this way.

[Screenshot: Event Trap settings on the System Audit Information page]

By default, the log files can be found in the <content server instance directory>/data/trace/event directory, or they can be viewed in the browser by clicking on the 'View Event Output' link.

Tuesday Jan 29, 2013

Conversions in WebCenter Content

One of the guiding principles with WebCenter Content has been to make it as easy as possible to consume content.  And part of that means viewing content in a format that is optimal for the end user… regardless of the format the content was created in.  So WebCenter Content has a long history of converting files from one format to another.  Often this involves converting a proprietary desktop publishing format to something more open that can be viewed directly from a browser.  Or taking a high resolution image and creating a rendition that downloads quickly over a slow network.

Over the life of the product, the types and methods for those conversions have grown to provide a broad range of options.  It’s sometimes confusing to know what conversions are available and where exactly they are done (Content Server or Inbound Refinery), so I've put together a flowchart and list describing all of the different types of conversion, how and where they are done, and the pros and cons of each.  This list covers what’s available as of the current release – WebCenter Content 11g PS5.

PDF Conversions

Where: Inbound Refinery
When: Upon check-in
How: Multiple ways
Platform: All (* but depends)

PDF conversions are probably the most common type of conversion done with WCC.  This involves converting a desktop publishing format (e.g. Microsoft Word) into Adobe PDF format.  The benefits obviously include being able to read the document directly in the browser (with a PDF reader plug-in) and not requiring a 3rd party product to read the proprietary format.  In addition, PDFs provide benefits such as being able to start viewing the document before the entire file downloads, possible compression of the file size, and the ability to provide watermarks and additional security on the file.  And optionally, the PDF/A format can be chosen, which is recognized as an approved archival format.

Within PDF conversions, there are several different methods that can be used to create the PDF, depending on the needs and requirements.

PDFExportConverter – This method uses Oracle’s own OutsideIn filters to directly convert multiple format types into PDF.  The benefits include multiple platform support (any platform that WCC supports), the fastest conversion, and no 3rd party software requirements.  The main downside is that it has the lowest fidelity to the original document, meaning it won’t always exactly match the look and feel of the original.  The formats supported by the OutsideIn filters for conversion to PDF are listed in the documentation.

WinNativeConverter – As the name implies, this type of conversion uses the native applications on Windows to do the conversion.  By using the original application that was used to create the document, you get the best fidelity of PDF compared to the original.  The downside is that the Inbound Refinery can only run on Windows and not other platforms.  It also requires a distiller engine to convert the PostScript output printed from the native applications to PDF.  The recommended choice for that is AFPL Ghostscript.

OpenOfficeConversion – The Open Office conversion is a bit of a compromise between the two types of conversions mentioned above.  It uses Apache Open Office to open and convert the native file.  In most cases, it will give you better PDF fidelity than PDFExportConverter, but still not as good as WinNativeConverter.  It also supports more than just Windows, so it has broader platform support than WinNativeConverter.

Tiff Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses a 3rd party (CVISION PdfCompressor) engine to perform OCR and PDF conversion
Platform: Windows Only

When needing to convert TIFF formatted files into PDFs, this can be done with either PDFExportConverter or Tiff Converter.  The major difference is whether optical character recognition (OCR) needs to be performed on the file in order to extract the full text from the image.  If OCR is required, then Tiff Converter is used for that type of conversion.  In addition, a 3rd party tool, CVISION PdfCompressor, is required to do the actual OCR and conversion.  Tiff Converter acts as the controller between the Inbound Refinery and PdfCompressor.  But because PdfCompressor is a Windows-only application, the Inbound Refinery must also be on Windows.

XML Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses Oracle OutsideIn filters to convert native formats into XML
Platform: All

The XML Converter allows for native documents to be converted into 2 flavors of XML: FlexionXML (based on FlexionDoc schema) and SearchML (based on the SearchML schema).  In addition, those formats can go through additional transformation with a custom XSLT.  Because the XML Converter utilizes the Oracle OutsideIn filter technology, it supports all platforms.

DAM Converter

Where: Inbound Refinery
When: Upon check-in and updates
How: Can use both Oracle OutsideIn filters as well as 3rd party applications to do image conversions.  Flip Factory is required for video conversions.
Platform: All (* but depends)

DAM Converter is used to create multiple renditions of either image or video files.  The primary goal is to convert original formats, which are typically high resolution and large in size, into other formats geared towards web or print delivery.  One thing unique to DAM Converter is that the metadata used to specify the rendition set can be updated after the item has been submitted, which sends the file back to the Inbound Refinery for reprocessing.

When using the image converter, the Inbound Refinery comes with the Oracle OutsideIn filters to create renditions, so nothing else is required and it can run on all platforms.  But the converter also supports other command-line driven image converters such as Adobe Photoshop, XnView NConvert, and ImageMagick.  Some are commercial and some are freeware.  Each has different capabilities for different use cases and is supported on various platforms.  But for general-purpose resizing, resolution, and format changes, OutsideIn can handle it.

For video conversion, Telestream’s Flip Factory is required.  The DAM Converter acts as the controller between the Inbound Refinery and Flip Factory.  What makes this integration a bit unique is that it is handled purely at a file system level.  This means that Flip Factory, which is a Windows-only application, does not need to reside on the same server as the Inbound Refinery.  They simply need shared file system access between servers.  So the Inbound Refinery can be on Linux while Flip Factory is on Windows.  

HTML Converter

Where: Inbound Refinery
When: Upon check-in
How: Uses Microsoft Office to convert Office documents into HTML
Platform: Windows Only

HTML Converter uses Microsoft Office to save documents as HTML, collects the output (into a zip file if there are multiple files), and returns it to Content Server.  Using the HTML output directly from Office gives you very good fidelity compared to the original native format.  This is especially true for Excel and Visio, which are less text-based.  The downside is you have no control over the HTML output to make changes or provide consistency between conversions; it’s simply formatted however Office formats it.  Also, it does not apply any templating around the content to insert code before or after it or to present the document within the structure of a larger HTML page, as in the case of Site Studio.

Dynamic Converter

Where: Content Server
When: Upon check-in or on-demand
How: Uses Oracle OutsideIn filters to convert native documents into HTML
Platform: All

Like HTML Converter, Dynamic Converter converts Office documents into HTML.  But there are several key differences between the two.  First, Dynamic Converter uses OutsideIn filters to convert to HTML, so it supports a wide range of native formats.  Another difference is that the processing happens on the Content Server side and not the Inbound Refinery.  This allows the conversion to happen on-demand the first time the HTML version is requested.  Alternatively, DC can be configured to do the conversion upon check-in and cache the results so they are immediately available and don’t need to go through conversion on first request.  DC also supports a wide range of controls over how the HTML is precisely formatted.  The result can be very minimal and clean HTML with various div or span tags to allow styling with CSS.  This can lead to a more consistent look and feel between converted documents.  It also allows for insertion of code before or after the content to embed the output within a template, which is what Site Studio uses.

Thumbnail Creation

Where: Content Server or Inbound Refinery
When: Upon check-in
How: Uses Oracle OutsideIn filters to create a thumbnail representation of the document to be used on search results
Platform: All

As a new feature in PS5, thumbnails can now be generated directly in the Content Server and not require the document to be sent to the Inbound Refinery (if it doesn’t need other conversions).  This allows the document to become available much more quickly.  But if the file is sent to the Inbound Refinery for other types of conversions, the thumbnail can be generated at that point.

For further information on conversions, see the documentation on Conversions as well as Dynamic Converter.

Monday Jan 14, 2013

Migrating folders and content together in WebCenter Content

In the case of migrating from one WebCenter Content instance to another, there are several different tools within the system to accomplish that migration depending on what you need to move over.

This post will focus on the use case of needing to move a specific set of folders and their contents from one instance to another.  The folder architecture in this example is Folders_g.  Although Framework Folders is the recommended folders component for WebCenter Content 11g PS5 and later, there are still cases where you must use Folders_g (e.g. WebCenter Portal, Fusion Applications, Primavera, etc).  Or perhaps you are on an older version and Folders_g is the only option.

To prepare, you must first have the FoldersStructureArchive component enabled on both the source and target instances.  If you are on UCM 10g, this component is available within the CS10gR35UpdateBundle/extras folder.  In addition to enabling the component, there is a configuration flag to set.  By default, the config variable ArchiveFolderStructureOnly is set to false, which means content will be exported along with the folders, so that can be left alone.  The config variable AllowArchiveNoneFolderItem is set to true by default, which means content will be exported whether it is in the selected folder structure or not...even content outside of folders entirely.  Basically, it means you must use the Export Criteria in the archive to control the content to export.  In our use case, we only want the content within the folders we select, so it should be set as AllowArchiveNoneFolderItem=false.  Now only content that is in our selected folders will get exported into the archive.  This can be set in the General Configuration in the Admin Server.
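For this use case, the entries under Additional Configuration Variables would look like this (the first line is the default and is shown only for completeness):

#Folder structure archive settings
ArchiveFolderStructureOnly=false
AllowArchiveNoneFolderItem=false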

You will also need to make sure the custom metadata fields on both instances are identical.  If they are mismatched, the folders will not import into the target instance correctly.  You can use the Configuration Migration Utility to migrate those metadata fields.

Once the component is enabled and configurations set, go to Administration -> Admin Applets -> Archiver and select Edit -> Add... to create a new archive.  

[Screenshot: creating a new archive in Archiver]

Now that the archive is established, go back to the browser and go to Administration -> Folder Archiver Configuration.  For the Collection Name, it will default to the local collection.  Change this if your archive is in a different collection.  Then select your Archive Name from the list.

[Screenshot: selecting the collection and archive name]

Expand the folder hierarchy and you can now select the specific folder(s) you want to migrate.  The thing to keep in mind is the parent folders of the ones you are selecting.  If you want to migrate a certain section of the folder hierarchy to the other server and have it end up in the same place in the target instance, make sure the parent folder already exists in the target.  It is possible to migrate a folder and place it within a different parent folder in the target instance, but then you need to make sure you set the import maps correctly to specify the destination folder (more on that later).

[Screenshot: selecting folders in the Folder Archiver Configuration]

Once they are selected, click the Add button to save the configuration.  This will add the right criteria to the archive. Now go back to the Archiver applet.  Highlight the archive and select Actions -> Export.  Be sure 'Export Tables' is selected.  Note: If you try using the Preview on either the contents or the Table data, both will show everything and not just what you selected.  This is normal. The filtering of content and folders is not reflected in the Preview. Once completed, you can click on the View Batch Files... button to verify the results.  You should see an entry for the Collections_arTables and one or more for the content items.  

[Screenshot: View Batch Files dialog]

If you highlight the Collections row and click Edit, you can view and verify the results.

[Screenshot: verifying the Collections table data]

You can do the same for the document entries as well.

Once you have the archive exported, you need to transfer it from the source to the target instance. If I don't have the outgoing providers set up to do the transfer, I sometimes cheat and copy over the archive folder from <cs instance dir>\archives\{archive name} directly over to the other instance.  Then I manually modify the collection.hda file on the target to let it know about the archive:

@ResultSet Archives
2
aArchiveName
aArchiveDescription
exportfoldersandfiles
Export some folders and files
@end

Or if I have Site Studio installed and my archive is fairly small, I'll take the approach described in this earlier post.

Before you import the archive on the target, you need to make sure the folders will be going into the right "parent" folder.  If you've already migrated the parent folder of your folders to the target instance, then the IDs should match between instances and you should not have to do any import mappings.  But if you are migrating the folders and the parent IDs will be different on the target (such as the main Contribution Folders or WebCenter Spaces root folder), then you will have to map those values.

First, to check what a folder's ID is, you can simply place your mouse over the link to the particular folder to get its ID.  It will be identified as dCollectionID in the URL.  Do this on both the source and target instances.
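With Folders_g, the folder link typically points at a URL along these lines (the hostname and ID are illustrative; the piece to note is the dCollectionID parameter):

http://myserver:16200/cs/idcplg?IdcService=COLLECTION_DISPLAY&hasCollectionID=true&dCollectionID=826127598928000002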

[Screenshot: hovering over a folder link to see the dCollectionID]

In this example, the dCollectionID on the source instance for the parent folder (Contribution Folders) is 826127598928000002.  On the target instance, its Contribution Folders ID is 838257920156000002.  So that means when the top level 'Product Management' folder in our archive moves over, the ID that specifies its parent (dParentCollectionID) needs to be mapped to the new value.  So now we have all the information we need for the mapping.

Go to the Archiver on the target instance and highlight the archive.  Click on the Import Maps tab and then on the Table tab.  Double-click on the folder and then expand the date entry.  It should then show the Collections table.

[Screenshot: import table maps]

Click on the Edit button for the Value Maps. For the Input Value, you want to enter the value of the dCollectionID of the parent folder from the source instance. In our example, this is 826127598928000002. For the Field, you want to change this to be the dParentCollectionID. And for the Output Value, you want this to be the dCollectionID of the parent folder in the target instance.  In our example, this is 838257920156000002.  Click the Add button.  
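To summarize the value map in this example:

Input Value:  826127598928000002   (dCollectionID of 'Contribution Folders' on the source)
Field:        dParentCollectionID
Output Value: 838257920156000002   (dCollectionID of 'Contribution Folders' on the target)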

[Screenshot: value map settings]

This will now map the folders into the correct location on target.

The archive is now ready to be imported.  Click on Actions -> Import and be sure the 'Import Tables' check-box is checked. To check for any issues, be sure to go to the logs at Administration -> Log Files -> Archiver Logs.

And that's it.  Your folders and files should now be migrated over.

Thursday Jan 10, 2013

Adding browser search engines in WebCenter Content

In a post I made a few years ago, I described how you can add WebCenter Content (UCM at the time) search to the browser's search engines.  I think this is a handy shortcut if you find yourself performing searches often enough in WCC. 

Well, in the PS5 release, this was actually included as a new feature.  You need to enable the DesktopIntegrationSuite component in order to access it.  Once you do, go to the My Content Server -> My Downloads link.  There you will see the 'Add browser search' link. 

[Screenshot: 'Add browser search' link on the My Downloads page]

Once clicked, an OpenSearchDescription XML file is produced, which modern browsers support for adding the search engine.

[Screenshot: the search engine added to the browser search bar]

The one piece that's missing is something I mentioned in my earlier post: forcing authentication.  If you haven't logged into the server, your search will be performed anonymously and you will only get back content that is available to the guest role.  To make sure the search is performed as your user, the extra parameter Auth=Internet can be passed to force the server to challenge your request and prompt for a login if needed.  Because the search engine URL is defined within the DesktopIntegrationSuite component, a new custom component can be added to override this.  Basically, the new component must override the dis_search_plugin resource and modify the Url locations.  Below is an example:

<@dynamichtml dis_search_plugin@>
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
                       xmlns:moz="http://www.mozilla.org/2006/browser/search/">
    <ShortName><$if DIS_SearchPluginTitle$><$DIS_SearchPluginTitle$><$else$>Oracle WebCenter Content Server Search<$endif$></ShortName>
    <Description><$lc("wwDISSearchPluginDescription")$></Description>
    <Url type="text/html" method="get" template="<$xml(HttpBrowserFullCgiPath & "?IdcService=DESKTOP_BROWSER_SEARCH&Auth=Internet&MiniSearchText={searchTerms}")$>" />
    <$iconlocation=strReplace(HttpBrowserFullCgiPath,HttpCgiPath,"") & HttpImagesRoot & "desktopintegrationsuite/dis_search_plugin.ico"$>
    <Image height="16" width="16" type="image/x-icon"><$iconlocation$></Image>
    <Developer>Oracle Corporation</Developer>
    <InputEncoding>UTF-8</InputEncoding>
    <moz:SearchForm><$xml(HttpBrowserFullCgiPath & "?IdcService=DESKTOP_BROWSER_SEARCH&Auth=Internet&MiniSearchText=")$></moz:SearchForm>
</OpenSearchDescription>
<$setContentType("application/xml")$>
<$setHttpHeader("Content-Disposition","inline; filename=search_plugin.xml")$>
<$setHttpHeader("Cache-Control", "public")$>
<@end@>

I've included a pre-built custom component that does just that.

UPDATE (Jan 15, 2013)

In addition to enabling the component, there is also a configuration preference that must be enabled.  After enabling the Desktop Integration Suite component, go to the 'advanced component manager'.  Go to the bottom, to the 'Update Component Configuration' list, select DesktopIntegrationSuite, and click Update.  The first entry is 'Enable web browser search plug-in'.  Check that and click Update.

[Screenshot: DIS component configuration]

If you've already restarted to enable the DIS component, you do not need to restart for this configuration to take effect.

Friday Dec 21, 2012

Generating barcodes in reports

I recently had a comment posted on a previous blog post regarding generating barcodes in the reports that come with the records management module (either in WebCenter Content/UCM or WebCenter Content: Records/URM).  

I knew we could output barcodes because we do in some of the default reports that come with the product.  But even when looking at those rich-text templates, it wasn't clear how they were defined.  So I did a little digging and discovered the code that needs to be added to those fields to do the barcode magic.  I won't repeat the steps on how to update/create the custom reports from my earlier post, but will just cover the few extra steps for barcodes.

Once you have your field inserted into the template in Word, right-click on the field and choose BI Publisher -> Properties.  Click on the Advanced tab and you should see the Code box with the field you are outputting surrounded by <?field_name?>.  For barcodes, you'll want to enter this in that code field:

<?register-barcode-vendor:'oracle.xdo.template.rtf.util.barcoder.BarcodeUtil';'XMLPBarVendor'?><?dBarcodeFormated?>*<?dBarcode?>*<?format-barcode:dBarcodeFormated;code39;XMLPBarVendor?>

Just replace dBarcode with your field name (e.g. dDocName, xComments, etc).  

[Screenshot: BI Publisher field code]

Next, you'll want to change the font on the field to be 'BC 3of9'.  This font should have been added when the BI Publisher Desktop add-in for Word was installed.

[Screenshot: BC 3of9 font applied to the field]

Now simply follow the steps to add the template to the repository and configure the appropriate reports.  When the reports are run, the values should be rendered as barcodes.

[Screenshot: report output with barcodes]

One thing I noticed is when I saved the Word document in rich-text format, I was no longer able to re-open that rtf file and get back to the code for the field properties.  But in Word's default doc format, I was.  So if you think you might need to edit the report later on, it's probably a good idea to save a copy in doc format as well. 

Monday Dec 10, 2012

Expanding on requestaudit - Tracing who is doing what...and for how long

One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing.  This tracing section summarizes the top service requests happening in the server along with how they are performing.  By default, it has 2 different rotations.  One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services).  These traces provide the total time for all requests against a service along with the number of requests and the average request time.  This information can provide a good start in troubleshooting performance issues or tracking down a particular issue.

>requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
>requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

To change the default rotation or size of output, these can be set as configuration variables for the server:

RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)
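For example, to have the short interval report every minute and list the top 10 services, the entries under Additional Configuration Variables would be (values here are just illustrative):

RequestAuditIntervalSeconds1=60
RequestAuditListDepth1=10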

If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page and now you will get an audit entry for each and every service request. 

>requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

What's nice is it reports who executed the service and how long that particular request took.  In some cases, depending on the service, additional information relevant to that service will be added to the tracing.

>requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

You can even go into more detail and insert additional data into the tracing.  You simply need to add this configuration variable with a comma-separated list of variables from the local data to insert.

RequestAuditAdditionalVerboseFieldsList=TotalRows,path

In this case, for any search results, the number of items the user found is traced:

>requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

I also recently ran into the case where services were being called from a client through RIDC.  All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server.  So what we did was add a new field to the request audit list:

RequestAuditAdditionalVerboseFieldsList=ClientToken

And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request.  Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
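On the client side, setting the token is a one-liner in the binder.  Here is a minimal RIDC sketch of the idea; the connection URL, user, and query are placeholders, and using a random UUID is just one way to make the token unique:

import java.util.UUID;

import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientException;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.protocol.ServiceResponse;

public class TracedSearch {
    public static void main(String[] args) throws IdcClientException {
        IdcClientManager manager = new IdcClientManager();
        // Placeholder host/port; point this at your own intradoc socket
        IdcClient client = manager.createClient("idc://wcc-host:4444");
        IdcContext userContext = new IdcContext("bob");

        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
        binder.putLocal("QueryText", "dDocType <matches> `Document`");

        // Unique value that will show up in the requestaudit trace on the server
        String clientToken = UUID.randomUUID().toString();
        binder.putLocal("ClientToken", clientToken);
        System.out.println("ClientToken for this request: " + clientToken);

        ServiceResponse response = client.sendRequest(userContext, binder);
        DataBinder result = response.getResponseAsBinder();
        System.out.println("StatusMessage: " + result.getLocal("StatusMessage"));
        response.close();
    }
}

The value logged on the client can then be matched against the ClientToken shown in the verbose requestaudit output for that same request.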

Monday Sep 24, 2012

Configuring trace file size and number in WebCenter Content 11g

Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery,  systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning.

One of the nice tricks with the tracing is it honors the wildcard (*) character.  So you can put in 'schema*' and gather all of the schema-related tracing.  And you'll notice that if you select 'all' and update, it changes to just a *.

To view the tracing in real-time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom.  This works well if you're looking at something pretty discrete and the system isn't getting much activity.  But if you've got a lot of tracing going on, it would be better to go after the trace log file itself.  By default, the log files can be found in the <content server instance directory>/data/trace directory.  You'll see it named 'idccs_<managed server name>_current.log'.  You may also find previous trace logs that have rolled over.  In this case they will be identified by a date/time stamp in the name.  By default, the server will rotate the logs after they reach 1MB in size.  And it will keep the most recent 10 logs before they roll off and get deleted.  If your server is in a cluster, then the trace file should be configured to be local to the node per the recommended configuration settings.

If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs.

#Change log size to 10MB and number of logs to 20
FileSizeLimit=10485760
FileCountLimit=20

This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section.  Restart the server and it should take on the new logging settings. 

Update - Sept. 27, 2012

 Kevin Smith has a nice blog post that describes some of these trace sections in detail.

Tuesday Oct 25, 2011

Getting a list of Security Groups and Accounts for a user through the API

I got an interesting question on one of my previous posts about how to access the list of Security Groups a user can write to through the API.  When I first looked at it, I thought it would be straightforward and there would be a schema service for this.  The one the user tried, GET_SCHEMA_VIEW_FRAGMENT, does indeed return a list of Security Groups, but you can't differentiate between the ones the user can read and the ones they can write to.  I looked through the documentation and couldn't find anything related that might work.  I thought perhaps running the CHECKIN_NEW_FORM service, which renders the check-in page template, might offer a resultset to use, but no luck there.

The solution comes from a service buried in the std_services.htm file called GET_USER_PERMISSIONS.  When you run this service as the user, it will return the list of Security Groups and Accounts along with the level of access for each entity (1=read, 3=write, 7=delete, 15=admin).  If you access the service through the URL and add '&IsPageDebug=1', you can see the results as such:
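For example (hostname and port are placeholders, matching the examples elsewhere in this blog):

http://myserver:16200/cs/idcplg?IdcService=GET_USER_PERMISSIONS&IsPageDebug=1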

[Screenshot: GET_USER_PERMISSIONS results shown with IsPageDebug]

Friday Sep 09, 2011

Adding your own alert messages

If you've installed WebCenter Content (UCM) or have made changes such as switching the search engine, you may have noticed an alert message at the top of the pages letting you know if there is a specific task that needs to be done such as a restart or rebuild of the search collection.

Well, these alerts are open for administrators to set as well.  So for instance, if you have a planned outage, you can set a message letting users know the system is going to be down for a certain amount of time.


Adding and managing alerts is very simple.  There are three primary services that are used:  SET_USER_ALERT, GET_USER_ALERTS, DELETE_USER_ALERT.   With SET_USER_ALERT, you simply need to pass in alertId (a unique identifier you give the alert) and alertMsg with the message you want to display.   And because it's just a service, you can simply call it in a URL to set it (as an administrator):  

http://myserver:16200/cs/idcplg?IdcService=SET_USER_ALERT&alertId=maint&alertMsg=My message.  

You can get fancy with the message by including HTML as well as Idoc Script.  That will be processed on the page as it's being rendered.  Optionally, you can pass in alertUrl, which is a URL that the message will link to.  This value is appended to the "/cs/idcplg" path.
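For instance, to have the alert link back to the content server home page, the alertUrl value might look something like the following (illustrative only; remember to URL-encode it if you set it through the browser):

?IdcService=GET_DOC_PAGE&Action=GetTemplatePage&Page=HOME_PAGE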

To know what alerts are set, you can run the GET_USER_ALERTS service and pass in IsJava=1 to display the values back:  

http://myserver:16200/cs/idcplg?IdcService=GET_USER_ALERTS&IsJava=1.  

It will then display the alerts in the USER_ALERTS result set.

To remove the alert, simply run the DELETE_USER_ALERT service and pass in alertId to identify which alert to remove.  Optionally, you can pass in isTempAlert=1 when you first create the alert and it will be removed the next time the server restarts.
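For example, to remove the alert created above:

http://myserver:16200/cs/idcplg?IdcService=DELETE_USER_ALERT&alertId=maint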

About

Kyle Hatlestad is a Solution Architect in the WebCenter Architecture group (A-Team) who works with WebCenter Content and other products in the WebCenter & Fusion Middleware portfolios.  The WebCenter A-Team blog can be found at: https://blogs.oracle.com/ateam_webcenter/
