Monday Jan 14, 2013

Migrating folders and content together in WebCenter Content

In the case of migrating from one WebCenter Content instance to another, there are several different tools within the system to accomplish that migration depending on what you need to move over.

This post will focus on the use case of moving a specific set of folders and their contents from one instance to another, where the folder architecture is Folders_g. Although Framework Folders is the recommended folders component for WebCenter Content 11g PS5 and later, there are still cases where you must use Folders_g (e.g. WebCenter Portal, Fusion Applications, Primavera, etc.). Or perhaps you are on an older version where Folders_g is the only option.

To prepare, you must first have the FoldersStructureArchive component enabled on both the source and target instances. If you are on UCM 10g, this component is available within the CS10gR35UpdateBundle/extras folder. In addition to enabling the component, there is a configuration flag to set. By default, the config variable ArchiveFolderStructureOnly is set to false, which means content will be exported along with the folders, so that can be left alone. The config variable AllowArchiveNoneFolderItem is set to true by default, which means the archive will export content whether or not it falls within the selected folders, or even within any folder at all; in other words, you must rely on the archive's Export Criteria to control which content gets exported. In our use case, we only want the content within the folders we select, so the configuration should be set to AllowArchiveNoneFolderItem=false. Now only content that is in our selected folders will get exported into the archive. This can be set in the General Configuration in the Admin Server.
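For reference, here are those two settings as they might appear in the Additional Configuration Variables (ArchiveFolderStructureOnly is shown at its default just to make the intent explicit):

ArchiveFolderStructureOnly=false
AllowArchiveNoneFolderItem=false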

You will also need to make sure the custom metadata fields on both instances are identical. If they are mismatched, the folders will not import correctly into the target instance. You can use the Configuration Migration Utility to migrate those metadata fields.

Once the component is enabled and configurations set, go to Administration -> Admin Applets -> Archiver and select Edit -> Add... to create a new archive.  

New archive

Now that the archive is established, go back to the browser and go to Administration -> Folder Archiver Configuration.  For the Collection Name, it will default to the local collection.  Change this if your archive is in a different collection.  Then select your Archive Name from the list.

archive select

Expand the folder hierarchy and you can now select the specific folder(s) you want to migrate. One thing to keep in mind is the parent folders of the ones you are selecting. If the idea is to migrate a certain section of the folder hierarchy to the other server and have it sit in the same place in the target instance, you want to make sure the parent folder already exists in the target. It is possible to migrate a folder and place it within a different parent folder in the target instance, but then you need to make sure you set the import maps correctly to specify the destination folder (more on that later).

Select folders

Once they are selected, click the Add button to save the configuration.  This will add the right criteria to the archive. Now go back to the Archiver applet.  Highlight the archive and select Actions -> Export.  Be sure 'Export Tables' is selected.  Note: If you try using the Preview on either the contents or the Table data, both will show everything and not just what you selected.  This is normal. The filtering of content and folders is not reflected in the Preview. Once completed, you can click on the View Batch Files... button to verify the results.  You should see an entry for the Collections_arTables and one or more for the content items.  

View batches

If you highlight the Collections row and click Edit, you can view and verify the results.

Verify collections table

You can do the same for the document entries as well.

Once you have the archive exported, you need to transfer it from the source to the target instance. If I don't have outgoing providers set up to do the transfer, I sometimes cheat and copy the archive folder from <cs instance dir>\archives\{archive name} directly over to the other instance. Then I manually modify the collection.hda file on the target to let it know about the archive:

@ResultSet Archives
2
aArchiveName
aArchiveDescription
exportfoldersandfiles
Export some folders and files

@end

Or if I have Site Studio installed and my archive is fairly small, I'll take the approach described in this earlier post.

Before you import the archive on the target, you need to make sure the folders will be going into the right "parent" folder. If you've already migrated the parent folder of your folders to the target instance, then the IDs should match between instances and you should not have to do any import mappings. But if you are migrating folders whose parent IDs will be different on the target (such as the main Contribution Folders or the WebCenter Spaces root folder), then you will have to map those values.

First, to check what a folder's ID is, you can simply place your mouse over the link to that folder to get its ID. It will be identified as dCollectionID in the URL. Do this on both the source and target instances.
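For example, a Folders_g folder link generally looks something like the following, with the exact parameters varying by version (hostname and port are illustrative):

http://wcchost:16200/cs/idcplg?IdcService=COLLECTION_DISPLAY&dCollectionID=826127598928000002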

Get dCollectionID

In this example, the dCollectionID on the source instance for the parent folder (Contribution Folders) is 826127598928000002. On the target instance, its Contribution Folders ID is 838257920156000002. So that means when the top-level 'Product Management' folder in our archive moves over, its parent ID needs to be mapped to the new value. We now have all the information we need for the mapping.

Go to the Archiver on the target instance and highlight the archive. Click on the Import Maps tab and then on the Table tab. Double-click on the folder and then expand the date entry. It should then show the Collections table.

Import tables

Click on the Edit button for the Value Maps. For the Input Value, you want to enter the value of the dCollectionID of the parent folder from the source instance. In our example, this is 826127598928000002. For the Field, you want to change this to be the dParentCollectionID. And for the Output Value, you want this to be the dCollectionID of the parent folder in the target instance.  In our example, this is 838257920156000002.  Click the Add button.  

Value map

This will now map the folders into the correct location on the target.

The archive is now ready to be imported.  Click on Actions -> Import and be sure the 'Import Tables' check-box is checked. To check for any issues, be sure to go to the logs at Administration -> Log Files -> Archiver Logs.

And that's it.  Your folders and files should now be migrated over.

Monday Dec 10, 2012

Expanding on requestaudit - Tracing who is doing what...and for how long

One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has two different rotations: one happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against each service, along with the number of requests and the average request time. This information can provide a good start in troubleshooting performance issues or tracking down a particular problem.

>requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
>requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919

requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

To change the default rotation or size of output, these can be set as configuration variables for the server:

RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)
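As an illustration, shortening the first interval to one minute and listing more services would just be a matter of adding these to the server's configuration (the values here are arbitrary examples):

RequestAuditIntervalSeconds1=60
RequestAuditListDepth1=10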

If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page and now you will get an audit entry for each and every service request. 

>requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

What's nice is it reports who executed the service and how long that particular request took.  In some cases, depending on the service, additional information will be added to the tracing relevant to that  service.

>requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

You can even go into more detail and insert additional data into the tracing. You simply need to add this configuration variable with a comma-separated list of variables from local data to insert.

RequestAuditAdditionalVerboseFieldsList=TotalRows,path

In this case, for any search results, the number of items the user found is traced:

>requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

I also recently ran into the case where services were being called from a client through RIDC.  All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server.  So what we did was add a new field to the request audit list:

RequestAuditAdditionalVerboseFieldsList=ClientToken

And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request.  Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
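As a rough sketch of what that client side can look like, assuming a standard RIDC socket connection (the host, port, user, and use of a UUID for the token are all illustrative, not the customer's actual code):

import java.util.UUID;
import oracle.stellent.ridc.IdcClient;
import oracle.stellent.ridc.IdcClientManager;
import oracle.stellent.ridc.IdcContext;
import oracle.stellent.ridc.model.DataBinder;
import oracle.stellent.ridc.protocol.ServiceResponse;

public class TracedSearch {
    public static void main(String[] args) throws Exception {
        IdcClientManager manager = new IdcClientManager();
        // intradoc socket connection to the content server
        IdcClient client = manager.createClient("idc://wcchost:4444");

        DataBinder binder = client.createBinder();
        binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
        binder.putLocal("QueryText", "dDocType <matches> `Document`");

        // unique value per request; this is what shows up in the
        // requestaudit trace on the server for correlation
        String token = UUID.randomUUID().toString();
        binder.putLocal("ClientToken", token);

        ServiceResponse response = client.sendRequest(new IdcContext("bob"), binder);
        System.out.println("Called GET_SEARCH_RESULTS with ClientToken=" + token);
        response.close();
    }
}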

Monday Dec 03, 2012

Access Control Lists for Roles

Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3. Well, there was actually an additional security option included in that release but not yet fully supported (only for Fusion Applications): the ability to define Roles as ACLs on entities (documents and folders). Now in PS5, this security option is fully supported.

The benefit of defining Roles as ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc.), so the WebCenter Content administrator does not need to define them as they do with ACL Groups (Aliases). It's a bit of the best of both worlds. Users are managed through the LDAP repository and are automatically granted or denied access through their group memberships, which are mapped to Roles in WCC. Another way to think about it is being able to add multiple Accounts to content items, which I often get asked about. Because LDAP groups can map to Accounts, there has always been this association between LDAP groups and access to an entity in WCC. But that mapping had to define the specific level of access (RWDA), and you could only apply one Account per content item or folder. Roles as ACLs effectively remove both of those restrictions by allowing users to assign more than one Role and to define the level of access on-the-fly.

To turn on ACLs for Roles, there is a component to enable. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component and restart. This assumes the other ACL configuration settings from the earlier post are already in place.

Once enabled, a new metadata field called xClbraRoleList will be created.  If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

For Users and Groups, these values are automatically picked up from the corresponding database tables.  In the case of Roles, there is an explicitly defined list of choices that are made available.  These values must match the roles that are coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager.  On the Views tab, edit the values for the ExternalRolesView.  By default, 'guest' and 'authenticated' are added.

Configuration Manager

 Once added, you can assign the roles to your content or folder.

Role entity field

If you are a user who can access the Security Group for that item and you belong to that particular Role, you now have access to that item. If you don't belong to that Role, you won't!

[Extra]

Because the selection mechanism for the list is a type-ahead field, users may not even know what choices are available to start typing. To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) to see what to type. Because it is a placeholder field, it won't need to be added to the custom metadata database table or the search engine.

List of possible roles field definition

Monday Sep 24, 2012

Configuring trace file size and number in WebCenter Content 11g

Lately I've been doing a lot of debugging using the System Output tracing in WebCenter Content 11g.  This is built-in tracing in the content server which provides a great level of detail on what's happening under the hood.  You can access the settings as well as a view of the tracing by going to Administration -> System Audit Information.  From here, you can select the tracing sections to include.  Some of my personal favorites are searchquery,  systemdatabase, userstorage, and indexer.  Usually I'm trying to find out some information regarding a search, database query, or user information.  Besides debugging, it's also very helpful for performance tuning.

One of the nice tricks with the tracing is that it honors the wildcard (*) character. So you can put in 'schema*' and gather all of the schema-related tracing. And you'll notice that if you select 'all' and update, it changes to just a *.

To view the tracing in real-time, you simply go to the 'View Server Output' page and the latest tracing information will be at the bottom. This works well if you're looking at something pretty discrete and the system isn't getting much activity. But if you've got a lot of tracing going on, it would be better to go after the trace log file itself. By default, the log files can be found in the <content server instance directory>/data/trace directory. You'll see it named 'idccs_<managed server name>_current.log'. You may also find previous trace logs that have rolled over; these are identified by a date/time stamp in the name. By default, the server will rotate the logs after they reach 1MB in size, and it will keep the most recent 10 logs before they roll off and get deleted. If your server is in a cluster, the trace file should be configured to be local to the node per the recommended configuration settings.

If you're doing some extensive tracing and need to capture all of the information, there are a couple of configuration flags you can set to control the logs.

#Change log size to 10MB and number of logs to 20
FileSizeLimit=10485760
FileCountLimit=20

This is set by going to Admin Server -> General Configuration and entering them in the Additional Configuration Variables: section.  Restart the server and it should take on the new logging settings. 

Update - Sept. 27, 2012

 Kevin Smith has a nice blog post that describes some of these trace sections in detail.

Thursday Aug 02, 2012

Adding and removing WebCenter Content cluster nodes

If you follow the Enterprise Deployment Guide, Fusion Middleware High Availability guide, or the support technote on example steps for installing a multi node cluster of WebCenter Content 11g, they all cover establishing a multi node cluster using the WebLogic Server domain configuration wizard.  But if you find yourself needing to add or remove nodes after the cluster has been established, there isn't much documentation covering that.  So the following are some steps on how to do those tasks.

Adding additional nodes 

1.  Install WebLogic Server and the WebCenter Content binaries on the new nodes.

2.  Log into WebLogic Server Administration Console and stop all of the managed servers.

3.  In Domain Structure, go to <domain> -> Environment -> Servers.   Select one of the UCM_server nodes and click the Clone button.

4.  For the Server Name, enter UCM_server# with the next logical number in the node sequence.  Enter the Server Listen Address and a port of 16200.  Click OK.


5.  Now a new machine needs to be created for the new node.  Go to <domain> -> Environment -> Machines.  Click New.

6.  Enter the machine name and click Next.


7.  Enter the Listen Address of the new node and modify the Listen Port for Node Manager if needed.  Click Finish.


8.  Go to the machine associated with the managed server that was cloned from in step 3.  Because the new managed server was cloned from an existing one, it will initially be associated with that same machine.  Check the box for it and click Remove.


9.  Click on the Servers tab.  Check the box for the newly cloned managed server and  click Remove.  Click Yes to the confirmation.

10. Click on Machines in the Domain Structure again and click on the machine just created in step 7.

11.  Click on the Servers tab and click the Add button.  Select the newly cloned managed server and click Finish.


12. Repeat steps 3-11 for additional nodes.

13. Shut down the WLS Admin Server.

14. On the WLS Admin Server machine, change directory to one on the shared/remote file system's mount.  

15. Execute the pack command to bundle the domain configuration.  For example:

/u01/oracle/Middleware/Oracle_ECM1/common/bin/pack.sh -managed=true -domain=/u01/oracle/Middleware/user_projects/domains/wcc_domain -template=ecm_template.jar -template_name="my ecm domain"

16. Go to the new node and execute the unpack command accessing the newly created template.  For example:

/u01/oracle/Middleware/Oracle_ECM1/common/bin/unpack.sh -domain=/u01/oracle/Middleware/user_projects/domains/wcc_domain -template=ecm_template.jar

17. Start your WLS Admin Server.

18. On the new node, start the managed server via the command line.  For example:

/u01/oracle/Middleware/user_projects/domains/wcc_domain/bin/startManagedWebLogic.sh UCM_server3 http://wcchost1:7001

19. You can now configure Node Manager on the new node to be able to start and stop it from the WLS Admin Server.
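If you go that route, enrolling the domain with Node Manager via WLST is the usual approach; a sketch, with paths following the examples above (adjust for your own installation and WebLogic version):

/u01/oracle/Middleware/wlserver_10.3/common/bin/wlst.sh
connect('weblogic','<password>','t3://wcchost1:7001')
nmEnroll('/u01/oracle/Middleware/user_projects/domains/wcc_domain')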

Removing Nodes

1.  Go to the WebLogic Server Administration Console.

2.  Stop the node(s) to remove. 

3.  In Domain Structure, go to <domain> -> Environment -> Servers.  Select the checkbox for the server node(s) to remove and click the Delete button.

4.  Go to <domain> -> Environment -> Machines.  Select the checkbox for the machine(s) to remove and click the Delete button.

Thursday Jul 05, 2012

Idoc Script Plug-in for Notepad++

For those of you that caught it in an earlier post, Arnoud Koot wrote a great Idoc Script plug-in for Notepad++.  Well, he's back at it and has written an update for 11g!

Auto-complete

Arnoud made his announcement a few days ago on the WebCenter Content forum. And it looks like Jonathan Hult caught it as well and posted to his blog.

A great addition to his plug-in is context sensitive help.  Now you can look up the variables and functions without having to switch to the formal Oracle documentation.

Context Sensitive Help

He's even provided a tool to update the help automatically based on the Oracle documentation. 

A couple of things to look for that I had initially missed in the instructions were the note about updating LanguageHelp.ini with your own path to the iDoc11g.chm file, as well as the <ctrl><space> keystroke for the auto-complete.

Great work Arnoud!

Wednesday Jun 06, 2012

WebCenter Content shared folders for clustering

When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may see some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

Thursday Mar 22, 2012

Full-text indexing? You must read this

For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content.  It's available for replay along with a download of the slidedeck.  Look for the one titled 'WebCenter Content: Database Searching and Indexing'.

One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database.  This can greatly improve your search performance.  And this would apply to both Oracle Text Search and DATABASE.FULLTEXT search methods.  Peter describes how a collection can become fragmented over time as content is added, updated, and deleted.  Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to be run automatically.  

begin
ctx_ddl.optimize_index('FT_IDCTEXT1','FULL', parallel_degree =>'1');
end;

When I checked my own test instance, I found my collection had a row fragmentation of about 80%.

Original fragmentation

After running the optimization procedure, it went down to 0%.

After optimizing

The knowledgebase article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and then how to schedule the procedure to run automatically.  While the article mentions scheduling the job weekly, Peter says he now is recommending this be run daily, especially on more active systems.
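If you want to script both pieces, something along these lines should be close; the index name FT_IDCTEXT1 and the 2am schedule are just examples, and the knowledgebase article has the authoritative steps:

-- check current fragmentation (the report includes row fragmentation)
set serveroutput on
declare
  v_report clob := null;
begin
  ctx_report.index_stats('FT_IDCTEXT1', v_report);
  dbms_output.put_line(dbms_lob.substr(v_report, 4000, 1));
  dbms_lob.freetemporary(v_report);
end;
/

-- schedule the optimization to run daily at 2am
begin
  dbms_scheduler.create_job(
    job_name        => 'WCC_FT_OPTIMIZE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin ctx_ddl.optimize_index(''FT_IDCTEXT1'',''FULL'', parallel_degree => ''1''); end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => true);
end;
/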

And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time.  We recently had a customer complain of slow performance of the application when it was discovered the database was starving for memory.  So it's always helpful to keep a watchful eye on your database.

Wednesday Mar 14, 2012

Improving WebCenter Content Search Performance

If you don't follow the Oracle WebCenter Content Alerts blog, you may have missed the announcement about the 'WebCenter Content: Database Searching and Indexing' webcast happening tomorrow.  It's happening on March 15, 2012 at 16:00 UK / 17:00 CET / 8:00 am Pacific / 9:00 am Mountain / 11:00 am Eastern. For details, go to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1399682.1

This will cover some great tips on tracing and optimizing search within WebCenter Content.  I plan on attending and learning a few things myself!

[UPDATE 3/22/12]

For those who attended the webcast, I hope you found it helpful and informative.  I certainly did learn a few things myself!  The session was recorded and is available here along with the slide deck.   

Friday Feb 24, 2012

What do you mean you don't read HDA?

Any WebCenter Content or Records administrator who's done any customizing or troubleshooting of the server has undoubtedly run across an .hda (HDA) file. An HDA file is a proprietary data structure stored as ASCII text, used by WebCenter Content. Why HDA and not some other format such as XML? Well, I'll have to leave that argument to the developers as to the benefits of one over the other. But one thing is clear: while it may be very fast and easy for a computer to parse and read HDA, it's not so easy for humans. Sure, the LocalData section is easy with its name-value pairs, but try reading a result set with 75 attributes and it becomes a bit more difficult. Thanks to the advent of the IsPageDebug=1 option, HDA responses from the server are now easier to read. But for those files directly on the server, they are still a challenge.
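To make the discussion concrete, here is a tiny, made-up example of the format: a LocalData section of name-value pairs, followed by a result set whose second line is the column count, then the column names, then the row values:

@Properties LocalData
dDocName=EXAMPLE_000123
dUser=sysadmin
@end
@ResultSet ExampleDocs
2
dDocName
dDocTitle
EXAMPLE_000123
An example document
@end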

To make it easier to read (and edit) HDA files, Lee Klement, one of our illustrious Senior Principal Instructors in Oracle University, wrote an Excel spreadsheet macro to do just that.  He created it over 10 years ago, but the format has stayed the same so it works just as well as it did the day it came out (for all the old timers out there, you'll see references to Xpedio in there).

HDA Utility

I was recently working with an Oracle consultant on a project when he mentioned the frustration of reading the HDA files coming out of the Archiver and thinking about writing an Excel Macro for reading them. That's when I handed him a copy of Lee's wonderful spreadsheet and made his day. Figuring others out there could benefit from this tool, Lee gave me the OK to share it here.

After opening the spreadsheet, the primary worksheet has instructions on how to open and save the HDA files.   And as with other sample components offered here, it's available as-is.

Wednesday Dec 21, 2011

Search by extension and Title with a targeted Quick Search

Often when I'm doing a search, I'm doing it based on something in the Title. But in addition to that, I often know the extension of the original (native) file I'm looking for as well. I'll know if it's a PowerPoint I'm after...or maybe a zip file. The quickest way for me to do my searching is with the Quick Search in the top right. So what I've done is create a targeted Quick Search that searches by both the extension and the Title. You can do this either as your own individual quick search, or an administrator can set it up as a quick search that all users can use.

  1. Go to My Content Server -> My Quick Searches.  If you are an administrator, you will also be able to create quick searches defined for all users.

    Quick Search

  2. Click Create New...
  3. If you have the expanded search form enabled, select Search Forms -> Query Builder Form.

    Query Builder Form

  4. Enter a Quick Search Key and Label. 

    Key and Label

  5. In the Query Builder section, click 'show advanced options'. 
  6. In the Query Text box, click 'Modify Query Text' and add the following code:

    <$rsMakeFromString('myParms','#s','myParm')$><$loop myParms$><$if myFirst$> <AND> dDocTitle <contains> `<$myParm$>` <$else$><$myFirst=1$>dExtension <starts> `<$myParm$>`<$endif$><$endloop$>



  7. Click Save. 
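To trace through the logic of that query text: the string typed into the quick search box is split on commas into the myParms result set, and the loop treats the first token as the extension and a second token as the title. So typing, say, 'ppt,roadmap' would expand to a query along the lines of:

dExtension <starts> `ppt` <AND> dDocTitle <contains> `roadmap`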

Now when you do a quick search, you can either select the type of quick search:

select quick search

Or use the key to specify it:

quick search by key

And if you want to search by just the extension, you can leave off the comma and Title parameter.

Wednesday Nov 16, 2011

SQL installation scripts for WebCenter Content 11g

As part of the installation of WebCenter Content 11g (UCM or URM), one of the main functions is to run the Repository Creation Utility (RCU) to establish the database schema and tables.   This is pretty helpful because it runs all the scripts you need to have without having to manually set anything up in the database.  

In UCM 10g and earlier, the installation itself would establish the database tables if you wanted it to. Otherwise, the SQL scripts were available to be run independently ahead of time. For DBAs who wanted to understand what was being done to the database for the application, this was helpful. But in 11g, that is all masked by RCU. You don't get to see the scripts at all as part of its establishing the tables.

But if you comb through the directories for RCU, you can track them down. They are in the /rcuHome/rcu/integration/contentserver11/sql/ directories. And to understand the order in which they are run, you can open up the /rcuHome/rcu/integration/contentserver11/contentserver11.xml file and see how they are invoked there. The order in which they are run is:

  • contentserverrole.sql
  • contentserveruser.sql
  • intradoc.sql
  • workflow.sql
  • formats.sql
  • users.sql
  • default.sql
  • contentprocedures.sql 

If you are installing WebCenter Records (URM), it will run some additional scripts between formats.sql and users.sql:

  • MetadataSet.sql
  • UIEnhancements.sql
  • RecordsManagement.sql
  • RecordsManagement_default.sql
  • ClassifiedEnhancements.sql
  • ClassifiedEnhancements_default.sql

In addition to the scripts being available within the RCU install directories, they are also available from within the Content Server UI.  If you go to Administration -> DataStoreDesign SQL Generation, this page can allow you to download these various SQL scripts.  

DataStoreDesign

From here, you can select your particular database type and which components to include. Several components make changes dynamically to the database when they are enabled, so these scripts give you a way to inspect what is being run during that startup time. Once selected, click Generate; you can then either view or download the scripts from the Actions menu.

DISCLAIMER:  Installations are ONLY supported when done with the Repository Creation Utility.  These scripts are for reference only and not supported to be run manually.

Tuesday Nov 01, 2011

Displaying a WebCenter Content page in an iframe

If you've tried displaying a WebCenter Content (UCM) 11g page within an iframe of another page, you may have noticed that the iframe display takes over the entire page.  There is logic within the page template to make itself the "top" page.  

I was recently reminded of a configuration flag to disable this effect.  Add this to the Additional Configuration Variables on the General Configuration page of the Admin Server:

AllowContentServerInAnyDomains=1

Save and restart.  
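With that in place, the hosting page just uses a standard iframe to embed the content server page; something like this, with the URL adjusted for your own instance:

<iframe src="http://wcchost:16200/cs/" width="100%" height="600"></iframe>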

WebCenter Content in an iframe

Tuesday Oct 25, 2011

Getting a list of Security Groups and Accounts for a user through the API

I got an interesting question on one of my previous posts about how to access the list of Security Groups a user can write to through the API. At first glance, I thought it would be straightforward and there would be a schema service for this. The one the user tried, GET_SCHEMA_VIEW_FRAGMENT, does indeed return a list of Security Groups, but you can't differentiate between the ones the user can read and the ones they can write to. I looked through the documentation and couldn't find anything related that might work. I thought perhaps running the CHECKIN_NEW_FORM service, which renders the check-in page template, might offer a resultset to use, but no luck there.

The solution comes from a service buried in the std_services.htm file called GET_USER_PERMISSIONS.  When you run this service as the user, it will return the list of Security Groups and Accounts along with the level of access for that entity (1=read, 3=write, 7=delete, 15=admin).  If you access the service through the URL and add the '&IsPageDebug=1', you can see the results as such:
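For example, a URL along these lines (hostname and port illustrative) returns the results shown below:

http://wcchost:16200/cs/idcplg?IdcService=GET_USER_PERMISSIONS&IsPageDebug=1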

Get User Permissions

Friday Oct 14, 2011

jQuery DataTables using Excel spreadsheets and Dynamic Converter

On a recent project I worked on, we needed to display a calendar on a site with a list of different events. From the content owner's perspective, authoring and maintaining this calendar in Microsoft Excel was ideal. So using Dynamic Converter to convert that to HTML fit the bill. But they wanted the calendar to be more interactive and dynamic than just a static table, with features such as sorting, searching, and pagination. That's where the DataTables jQuery plug-in makes a perfect solution.

While the default conversion of the Excel document to an HTML table was close, it still needed a bit of manipulation of the table format to fit what DataTables was looking for. Luckily, jQuery makes that pretty easy to do as well.

The following are the steps I took to create this conversion.

  1. The first step is to create your Excel document to work from and to check it in.  The first row should be your column headings and the rows below it your data.



  2. Open Internet Explorer and create a new Dynamic Converter template through Administration -> Dynamic Converter Admin -> Create New Template.  In 11g, for the Template Format, select 'Classic HTML Conversion Template'.  In 10g, it should be set as 'GUI Template'.
  3. Edit the new template. Be sure to select Classic HTML Conversion Template as the Template Type.

    Note: If you are running Internet Explorer (IE) 8 or newer, you may encounter the error, "Internet Explorer has closed this webpage to help protect your computer.  A malfunctioning or malicious add-on has caused Internet Explorer to close this webpage."   To avoid this error, go to Tools -> Internet Options -> Advanced and uncheck 'Enable memory protection to help mitigate online attacks' near the bottom.  Restart IE and you should be able to bring up the template editor.

  4. Change the preview to point to the document submitted in step 1.
  5. First we'll remove the heading identifying the sheet from Excel.  Click on Element Setup and go to the Styles tab. 
  6. Click New and enter a Name of 'Heading 1'.  For the Associated element, click New and enter a Name of 'Heading 1'.  Click OK and OK.



  7. Go to the Elements tab and double-click on the Heading 1 check mark in the In Body column to change it to a red X.  Click OK.  The sheet heading should now disappear in the preview.



  8. Next we'll want to remove all of the formatting from the text.  Click the Formatting button.  Highlight 'Default Paragraph' and for the Font name, Font color, and Font size, choose 'Don't specify'.



  9. Click on the Paragraph tab and for Alignment, choose 'Don't specify'.
  10. Click on the Tables tab and click the Borders and Sizing button.  For Table width and Cell width, choose 'Don't specify'. 



  11. Check the 'Use column headings' box in the Heading section.  Click OK for the Formatting dialog.



  12. Next we need to insert the JavaScript needed to reformat our table into a DataTable. Click on the Globals button and click on the Head tab.
  13. Check the box for 'Include HTML or scripting code in the Head' and insert the code:

    <!-- jquery-1.6.4.min.js -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js" type="text/javascript" charset="utf-8">
    </script> 

    <!-- jQuery DataTables -->
    <script type="text/javascript" language="javascript" src="http://datatables.net/release-datatables/media/js/jquery.dataTables.js"></script>

    <!-- Link to the jQuery DataTables demo stylesheet -->
    <link href="http://datatables.net/release-datatables/media/css/demo_table_jui.css" type="text/css" rel="stylesheet" />

    <script type="text/javascript" charset="utf-8">
        $(document).ready(function() {
            $("table:contains('TEAM')").attr("id","TeamTable");
            $('#TeamTable').prepend($('<thead></thead>').append($('#TeamTable tr:first').remove()));   
            $('#TeamTable').attr("class","display");       
            $('#TeamTable').dataTable();
        });
    </script>


    Let's take a look at this code. 

    The first script tag loads the jQuery JavaScript library; here we're loading it from Google's hosted APIs.  The next script tag loads the DataTables plug-in.  And the link tag loads a sample stylesheet to be used with the DataTables plug-in.  In this example, I'm calling out to the hosted files.  You may want to download, check in, and reference them locally to ensure they are always available.

    Inside the next script tag, the script waits until the page has finished loading and then begins its function.  The first line in the function puts an ID attribute with a value of 'TeamTable' on the table so that we can easily reference it in the following actions.  In order to identify the table, it looks for the text 'TEAM'.  Adjust this appropriately for the text in your table.

    The next line inserts the <thead> </thead> tags around the heading row in the table.  There is no way to configure Dynamic Converter to insert this, so jQuery helps us do it after the fact.

    The third line applies the class 'display' to the table to utilize the DataTables stylesheet to help format the table.  Again, there isn't a way to insert this class with Dynamic Converter, so jQuery can do it for us.

    And finally, it runs the dataTable() function to transform the table.  It's using its basic 'zero configuration' settings without any options applied.

  14. Click OK to save the template.  Now use the Template Section Rules to target the appropriate spreadsheets with the new template.

Now when you view the HTML conversion of the spreadsheet, you should see it as a DataTable.  You can do things like sort columns, search, and have pagination.



But now that we have it as a DataTable, we can use the different options it offers to give it a different look and experience.

We can first add an additional JavaScript library and stylesheet from the jQuery UI project.  Edit the template again and modify the code being added to the Head section.

<!-- jquery-ui-1.8.16.min.js -->
<script src='https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js' type='text/javascript' charset='utf-8'>
</script>

<!-- jQuery smoothness -->
<link href='http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/smoothness/jquery-ui.css' type='text/css' rel='stylesheet' />

Then we can add some additional options to the DataTable:

oTable = $('#TeamTable').dataTable({
                'bJQueryUI': true,
                'sPaginationType': 'full_numbers'
})

So the bJQueryUI option will use the jQuery UI library we included above.  And the pagination will show page numbers instead of just arrows.

Then we'll add an option list to filter the table.  Add this line within the $(document).ready() function:

$('#ChangeDivision').appendTo($('#TeamTable_length'));

Then add an additional function to call when the option list changes:

 function fnFilterType( area )
    {
        oTable.fnFilter( area, 1 );
    }

Finally, we'll add the HTML option list to the page.  Click on the HTML tab and add this code under 'Include HTML or scripting code before the content'.

<span id="ChangeDivision"><br />
<span class="style6">Show</span> <select onchange="fnFilterType (value)" name="Division">
<option value="" selected="selected">All types</option>
<option value="NFC">only NFC</option>
<option value="AFC">only AFC</option>
</select>
</span>

When you make these additions in the editor, it will complain about a runtime error on the page.  This only occurs in the preview window and can be ignored.

Now we have our updated DataTable:


You can download the completed GUI template for 11g here. The 10g version is here.  If using 11g, be sure to submit it as a 'Classic HTML Conversion Template', and in 10g as a 'GUI Template'.

Special thanks to Paul Thaden for the code on this example!

About

Kyle Hatlestad is a Solution Architect in the WebCenter Architecture group (A-Team) who works with WebCenter Content and other products in the WebCenter & Fusion Middleware portfolios. The WebCenter A-Team blog can be found at: https://blogs.oracle.com/ateam_webcenter/
