Wednesday Feb 27, 2013

Transporting a Single Partition of a Table


A feature of the Oracle database that appeared in 11gR1 but seems to have gone largely unnoticed is the ability to transport single (or multiple) partitions of a table to another database. This is particularly useful when archiving out older data held in older partitions of a table.

In previous releases of the Oracle database, it was only possible to transport entire tables, which meant every partition in that table. The standard methodology used to transport a single partition was therefore to perform a partition exchange at the source database, making the partition a standalone table, and then to transport this table to the target database using the normal Transportable Tablespace method. The result is a standalone table in the target database. The Oracle 11.2 documentation still states that you cannot use the Transportable Tablespace feature to transport a single partition of a table; however, we are going to use a different technique, known as 'datafile copy', to do exactly that.

The datafile copy technique is actually part of Data Pump. In a similar way to transporting tablespaces, Data Pump is used to export out the metadata describing the objects to be copied to the new database. The datafiles that contain these objects are then copied to the new database, and Data Pump is used to import the metadata. These objects can now be seen by the new database.

This sounds exactly like transporting a tablespace, but there is a difference. In the datafile copy technique we might only be interested in a subset of what is in the datafiles, not the entire contents. This is achieved by invoking Data Pump export in the table mode, rather than tablespace mode. The datafile then effectively becomes an alternative to a dumpfile containing the exported data.

If the only contents of the tablespace in question are a single partition (or subpartition) of a table, then the final result appears to be identical to that achieved using the partition exchange and transportable tablespace technique traditionally used.

Now look at a worked example.
First create a tablespace TRANS to hold the partitions to be transported (a minimal sketch of this step is shown below), then create a partitioned table whose two oldest partitions are stored in TRANS:
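A sketch of the tablespace creation, assuming a locally managed tablespace and reusing the datafile name d:\oradata\trans.dbf reported later by the export (the 50M size is an arbitrary assumption):

SQL> CREATE TABLESPACE trans
  2  DATAFILE 'd:\oradata\trans.dbf' SIZE 50M
  3  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;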

SQL> CREATE TABLE "OE"."ORDERS_PART"
( "ORDER_ID" NUMBER(12),
"ORDER_DATE" TIMESTAMP(6) WITH local TIME ZONE CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6) CONSTRAINT "ORDER_PART_CUST_ID_NN" NOT NULL ,
"ORDER_STATUS" NUMBER(2),
"ORDER_TOTAL" NUMBER(8, 2),
"SALES_REP_ID" NUMBER(6),
"PROMOTION_ID" NUMBER(6))
TABLESPACE "EXAMPLE"
PARTITION BY RANGE ("ORDER_DATE")
(PARTITION "H1_2004" VALUES LESS THAN (TIMESTAMP'2004-07-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H2_2004" VALUES LESS THAN (TIMESTAMP'2005-01-01 00:00:00 +0:00') TABLESPACE TRANS,
PARTITION "H1_2005" VALUES LESS THAN (TIMESTAMP'2005-07-01 00:00:00 +0:00'),
PARTITION "H2_2005" VALUES LESS THAN (TIMESTAMP'2006-01-01 00:00:00 +0:00'),
PARTITION "H1_2006" VALUES LESS THAN (TIMESTAMP'2006-07-01 00:00:00 +0:00'),
PARTITION "H2_2006" VALUES LESS THAN (MAXVALUE));
Table created

And then populate it:

SQL> insert into oe.orders_part select * from oe.orders;
105 rows created.
SQL> commit;

Look at the segments that are contained inside this tablespace:

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME          PARTITION_NAME
-------- --------------------- --------------
OE       ORDERS_PART           H1_2004
OE       ORDERS_PART           H2_2004

Just as when transporting tablespaces, any tablespace whose datafiles we want to copy must first be made read-only.

SQL> alter tablespace trans read only;
Tablespace altered

Now the Data Pump export can be performed. The 'tables' option is used to indicate which partition(s) we want to export, and 'transportable=always' indicates that the generated dumpfile should contain only the table/partition metadata, as the datafiles containing the segments involved will be copied to the new database.

expdp system/oracle tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpfiles=y

Export: Release 11.2.0.2.0 - Production on Thu Feb 21 16:20:31 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=oe.orders_part:H1_2004 transportable=always dumpfile=ordersH1_2004.dmp reuse_dumpf
s=y
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  D:\DATAPUMP\ORDERSH1_2004.DMP
******************************************************************************
Datafiles required for transportable tablespace TRANS:
  D:\ORADATA\TRANS.DBF
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:20:53

Now the datafile is copied to its new location and the TRANS tablespace is made read-write again.

copy d:\oradata\trans.dbf d:\oradata\trans.cpy
        1 file(s) copied.

(If necessary, RMAN can now be invoked to convert the datafile to a different platform format.)
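(For example, if the target database ran on a big-endian platform, the copied file could be converted there with RMAN; a minimal sketch run on the destination host, with hypothetical staging paths, where the platform string must match a name listed in V$TRANSPORTABLE_PLATFORM:)

RMAN> CONVERT DATAFILE '/stage/trans.cpy'
        FROM PLATFORM 'Microsoft Windows x86 64-bit'
        FORMAT '/u01/oradata/trans_converted.dbf';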

SQL> alter tablespace trans read write;

Tablespace altered.

Now move over to the second database and import the partition's metadata into it. Use the option 'partition_options=departition', which imports the partition as a standalone table. In this example the schema owner is also being changed.

impdp system/oracle partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/oradata/trans.cpy' remap_schema=OE:SH

Import: Release 11.2.0.2.0 - Production on Thu Feb 21 16:21:28 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition dumpfile=ordersH1_2004.dmp transport_datafiles='d:/
data/trans.cpy' remap_schema=OE:SH
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 16:21:34

Look for the newly imported table:

SQL> select table_name, tablespace_name  from dba_tables
  2  where table_name like 'ORDERS%';

TABLE_NAME			TABLESPACE_NAME
------------------------------ --------------------------------------
ORDERS_PART_H1_2004		TRANS
ORDERS_QUEUETABLE 		EXAMPLE
ORDERS				EXAMPLE
ORDERS				EXAMPLE

Its new name is the name of the original table suffixed with the name of the partition. If this were the only segment originally stored in the datafile, we would now be finished; but although this partition is the only segment whose metadata was imported into this database, there are actually other segments in the datafile that this 'new' database knows nothing about.

SQL> select owner, segment_name, partition_name from dba_segments where tablespace_name='TRANS';

OWNER    SEGMENT_NAME         PARTITION_NAME
-------- -------------------- --------------
SH       ORDERS_PART_H1_2004

So move the new table to a different tablespace, enabling the TRANS tablespace to be dropped.

SQL> alter table sh.orders_part_h1_2004 move tablespace example;

Table altered.

SQL> drop tablespace trans;

Tablespace dropped.

Datafile copying as a means to transport partitions is one of the many features covered in the Oracle Database 11g: Implement Partitioning course.

About the Author:

Gwen Lazenby



Gwen Lazenby is Principal Training Consultant at Oracle. She is part of the UK and IE Core Tech Delivery team, teaching database administration, Spatial and Linux courses. Gwen is also a member of the OU EMEA SME team, with a special responsibility for Oracle Spatial.

Tuesday Jan 29, 2013

Defining Essbase Security using Metaread Filter

Defining security in multidimensional databases like Essbase is a subject that some administrators would rather avoid. Yet it is no rocket science and follows a clear logic: read, write or no access is applied to applications, databases or numbers in the database by using filters, in order to make data available to users – or not available. But there is a fourth setting available in Essbase filters which some administrators are not yet aware of, or at least not aware of how to apply correctly: the so-called Metaread access setting. So let’s have a look at this very useful option and shed some light on its efficient use.

Frequently it is necessary not only to grant or deny access to the data and figures in an Essbase database, but also, in some cases, to block the user’s ability to see members and hierarchies – in other words, parts of the metadata. This is what the Metaread filter addresses. Metaread, however, differs in several ways from the other three filter options, None, Read and Write. First of all, it does not apply to the data/numbers in the database like the others do, but to metadata, where it limits the visibility of members and parts of the hierarchy. Also, it does not know an AND logic, only OR. And finally, it overrides definitions made with Read or Write: even granted Read or Write data access on given members or member combinations cannot be exercised when Metaread definitions exclude those members from being seen at all by the user.

So how does it work in detail? Usually users can display all metadata, meaning they can see all members in the hierarchy, even if no Read or Write data access has been granted to them on those members. Metaread adds another layer to existing filter definitions and enforces them by removing certain members or branches from the user’s view. It only allows users to see the explicitly defined members and their ancestors in the hierarchy, where for the ancestors only the hierarchy/member names are visible, while for the declared members at least read access to the respective data is always granted. If Write data access is granted at the same time for the members defined in Metaread, this access is maintained as Write and not reduced to read-only. Siblings of the defined Metaread members are not visible at all. This is illustrated in the example below.

essbase screenshot

For all other cells not specified in the Metaread filter, unless defined differently in another filter, the minimum database access applies: first the access defined at the user level or, with second priority, the setting from the global database access level – just as with the common filter definitions.

Of course, overlapping definitions might also occur with Metaread, but here we have to watch out, as they are again treated in a different way, as the following example (referring to the hierarchy seen above) shows:

essbase screenshot

This definition, unlike one using None, Read or Write, would not grant data access to both California and West. Instead it would allow data access only to California, but not to its parent West, for which only the hierarchy would be shown. To avoid such conflicting settings for Metaread, it is recommended to define all members from the same dimension in one single filter row, hence the correct way to define data access on both members would be:

essbase screenshot

So as you can see, Metaread is a bit special, but not that complicated to use. And it adds another helpful option to the common None, Read and Write settings in Essbase security filters.

Want to learn more?

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region).

Please drop me a note directly if you have any questions: bernhard.kinkel@oracle.com .

About the Author:


Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

Wednesday Aug 29, 2012

Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point


Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

So how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points – even a single data cell.

 

TIP

It is not necessary to highlight and copy the row or column descriptions.

 

Next from the Smart View ribbon select Copy Data Point.

Then switch to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a Smart View menu; from it select the Paste Data Point icon.

The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers.

After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.)

It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text:

Before refresh:

After refresh:

From now on (provided that you are connected online to your database or application), for any subsequent updates of the data shown in your documents you only need to refresh the data by clicking the Refresh button on the Smart View menu, without copying and pasting the content again.

As you might realize when trying out this feature on your own, there won’t be any Point of View shown in the Office document. You have also seen in the example, where only a single data cell was copied, that no member names or row/column descriptions are copied, although these are usually required in an ad-hoc report in order to define exactly where the data comes from or how it is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell, as shown in the following screen shot (which is taken from another context).

 

So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you have the chance now to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:

 

By selecting the Manage POV option from the Smart View menu in Word or Power Point…

 

… the following POV Manager – Queries window opens:

 

You can now change your selection for each dimension from the original POV either by double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window. After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

TIP

Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV.

And there is another really nice feature I wouldn’t like to miss mentioning: using Dynamic Data Points in the way described above, you will never lose or need to search again for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even one single data cell Smart View is able to recreate the entire original report content with just a few clicks:

Select one of the numbers from within your Word or Power Point document by double-clicking.

 

Then select the Visualize in Excel option from the Smart View menu.

Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.)

However, in order to make this work, an active online connection to your databases on the server is necessary, as well as at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features.

So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms.

If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations.

You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: bernhard.kinkel@oracle.com .

About the Author:

Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

 

Wednesday Apr 18, 2012

Setting a simple high availability configuration for an Oracle Unified Directory Topology

Oracle Unified Directory is the latest full Java implementation of the LDAP directories offered by Oracle.

It offers several improved features over earlier LDAP directory versions, such as higher speed on read/write operations, better handling of high data volumes, ease of scaling, and replication and proxy capabilities.

In this topic we will explore some of the replication features of the Oracle Unified Directory server by providing a simple setup for high availability of any OUD topology.

Production OUD topologies need to offer a continuous, synchronized and highly available flow of the business data managed/hosted on OUD LDAP stores. This is achieved by using OUD LDAP data nodes together with specific OUD directories acting as "replication" nodes. A cluster of OUD replication nodes, associated with the OUD LDAP data stores, offers a high degree of availability for the data hosted on the OUD LDAP directories.

A replication node is an instance of Oracle Unified Directory that is used only to synchronize data (read/write operations) between several dedicated OUD LDAP stores. We can set up a replication node on the same JVM that is hosting an OUD LDAP store, or we can create a specific JVM process that handles the replication (this is our approach for this demo).


For our demo, we will create a simple 4-node topology on the same physical host.

Two OUD nodes (each node is a separate JVM process), both of them active, will handle the user data, and two OUD replication nodes (also separate JVM processes) will handle the synchronization of any modification applied to the data of node1/node2.

As a best practice, it is good to have at least two separate replication nodes in our topology, although we can start the replication scenario with only one replication node and then add an additional instance.

By having at least two replication nodes in our system, we ensure that any operation on the data nodes of our LDAP system (node1/node2) will be propagated to the other LDAP nodes, even if one of the replication nodes fails.

The whole scenario can be run on one physical host (VirtualBox or VMware) without any overhead. In other topics we will discuss tuning and monitoring operations.



Let's start by creating the two LDAP nodes (node1, node2) which will hold our data.

This is done by executing the oud-setup script in graphical mode:


Creation of the first node: node1 listens on port 1389, and 4444 is its admin port.


Node1 is a standalone LDAP server.


The server will manage the directory tree dc=example,dc=com, which is a sample site. For this site, the wizard will generate 2000 sample entries.


This is the final setup for node1.


At this stage, we already have one instance of our LDAP topology (node1) up and running!
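(As a side note, the same node1 configuration could also be created without the graphical wizard; a minimal sketch using the utility's command-line mode, with the demo password used throughout this article, is shown below. Treat the exact option names as assumptions to verify against oud-setup --help for your release.)

oud-setup --cli --no-prompt \
  --baseDN "dc=example,dc=com" --sampleData 2000 \
  --ldapPort 1389 --adminConnectorPort 4444 \
  --rootUserDN "cn=Directory Manager" --rootUserPassword welcome1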

We will continue with the creation of the second OUD LDAP node (node2).


The setup for node2 is nearly identical to node1: the LDAP listen port is 2389 and the admin port is 5444.


For node2, we will create the same directory tree structure, but we will leave the directory database empty. During the synchronization phase (see the later steps), we will provision the directory with data coming from node1.


This is the final setup for node2.


And at this stage we have the second node (node2) up and running!

The creation of the replication nodes is nearly the same process as for the previous LDAP nodes. We will create two OUD instances with the configuration wizard, and then set up the replication as an additional step using the dsreplication command.


The first replication node will listen on port 3389; its admin port will be 6444.


The second replication node will listen on port 4389; its admin port will be 7444.

At this stage we have four OUD instances running in our system: two LDAP nodes (node1, node2) with the "business data", and two replication nodes. Node1 is actually populated with data; node2 will be provisioned during the setup of the replication between this node and node1.

Now let's set up the replication between node1 and the first replication OUD server by executing the following dsreplication command. Node1 is the LDAP data node, and the first replication node will hold only the replication information (it will not hold any LDAP data):


dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             


This is what we should see in our prompt:


Then we should associate the second replication node with node1.

The only parameters that we have to change from the previous script are the admin port and the replication port of the second replication node:


 dsreplication enable \                                                                          
  --host1 localhost  --port1 4444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 7444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  9989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             



We should now execute the same script on node2 in order to associate this node with the first replication node:


dsreplication enable \                                                                          
  --host1 localhost  --port1 5444 --bindDN1 "cn=Directory Manager" \                             
  --bindPassword1 welcome1 --noReplicationServer1 \                                              
  --host2 localhost --port2 6444 --bindDN2 "cn=Directory Manager" \                              
  --bindPassword2 welcome1 --onlyReplicationServer2 \                                            
  --replicationPort2  8989 --adminUID admin --adminPassword password \                           
  --baseDN "dc=example,dc=com" -X -n                                                             



At this stage we have associated node1 and node2 with the two replication nodes.

Before starting our operations (read/write) on node1 and node2, we have to initialize the replication topology with the following script:


dsreplication initialize-all --hostname localhost --port 4444 \                                 
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         


Here is what we should see:


As you can see, there is a fully successful provisioning of node2 from the data of node1!

Now we can monitor our configuration by executing the following command:


 dsreplication status --hostname localhost --port 4444 \                                         
  --baseDN "dc=example,dc=com" --adminUID admin --adminPassword password                         



To test our replication configuration, you can use any LDAP browser client: connect to the first instance, modify one or several entries, then connect to the second instance and check that your modifications have been applied :)
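(Alternatively, a quick command-line check is possible with the ldapmodify and ldapsearch tools shipped with OUD; in the following sketch the entry DN and the modified attribute are illustrative assumptions based on the generated sample data.)

# Write a change to node1 (LDAP port 1389)
ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w welcome1 <<EOF
dn: uid=user.0,ou=People,dc=example,dc=com
changetype: modify
replace: description
description: updated on node1
EOF

# Read the entry back from node2 (LDAP port 2389) to confirm the change was replicated
ldapsearch -h localhost -p 2389 -D "cn=Directory Manager" -w welcome1 \
  -b "uid=user.0,ou=People,dc=example,dc=com" -s base "(objectclass=*)" description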

For additional training see: Oracle Unified Directory 11g: Services Deployment Essentials

About the Author:


Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses (WebLogic/WebCenter, Content, BPM/SOA, Identity-Security, GoldenGate, Virtualisation, Unified Comm Suite) throughout the EMEA region.

Monday Apr 09, 2012

How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

This article describes how - in PeopleCode (PeopleTools release 8.50) - to enable or disable a grid without referencing each static column, using a dynamic Application Class.

The goal is to disable the following grid with three columns “Effort Date”, ”Effort Amount” and “Charge Back” , when the Check Box “Finished with task” is selected , without referencing each static column; this PeopleCode could be used dynamically with any grid.


If the check box “Finished with task” is cleared, the content of the grid columns is editable (and the buttons “+” and “-“ are available):


So, you create an Application Package “CLASS_EXTENSIONS” that contains an Application Class “EWK_ROWSET”.

This Application Class is defined with Class extends "Rowset", and you add two new properties, "Enabled" and "Visible":


After creating this Application Class, you use it in two PeopleCode events: RowInit and FieldChange:


This code is very ‘simple’: you write only one command, "&ERS2.Enabled = False" → and the entire grid is disabled… and you can use this code with any grid!
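The event screenshots are not reproduced here, so the following is only a sketch of what the FieldChange code on the check box might look like; the record name EFFORT_VW, the check box field TASK_WRK.FINISHED_FLAG and the rowset navigation are hypothetical names to adapt to your component:

import CLASS_EXTENSIONS:EWK_ROWSET;

Local CLASS_EXTENSIONS:EWK_ROWSET &ERS2;

/* Wrap the grid's rowset in the extended class (record name below is hypothetical) */
&ERS2 = create CLASS_EXTENSIONS:EWK_ROWSET(GetLevel0()(1).GetRowset(Scroll.EFFORT_VW));

If TASK_WRK.FINISHED_FLAG = "Y" Then
   &ERS2.Enabled = False;   /* one statement disables the whole grid */
Else
   &ERS2.Enabled = True;    /* ...and one statement re-enables it */
End-If;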

So, the complete PeopleCode to create the Application Package is shown below (with explanations in […]):

******Package CLASS_EXTENSIONS :    [Name of the Package: CLASS_EXTENSIONS]

--Beginning of the declaration part------------------------------------------------------------------------------
class EWK_ROWSET extends Rowset;          [Definition Class EWK_ROWSET  as a 
                                          subclass of Class Rowset]
   method EWK_ROWSET(&RS As Rowset);      [Constructor is the Method with the
                                          same name of the Class]
   property boolean Visible get set;
   property boolean Enabled get set;      [Definition of the property 
                                          “Enabled” in read/write]
private                                   [Before the word “private”, 
                                          all the declarations are publics]
   method SetDisplay(&DisplaySW As boolean, &PropName As string, 
          &ChildSW As boolean);
   instance boolean &EnSW;
   instance boolean &VisSW;
   instance Rowset &NextChildRS;
   instance Row &NextRow;
   instance Record &NextRec;
   instance Field &NextFld;
   instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
   instance integer &i, &j, &k;
   instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;   [For recursion]
   Constant &VisibleProperty = "VISIBLE";
   Constant &EnabledProperty = "ENABLED";
end-class;
--End of the declaration part------------------------------------------------------------------------------

method EWK_ROWSET [The Constructor]
   /+ &RS as Rowset +/
   %Super = &RS;
end-method;
get Enabled
   /+ Returns Boolean +/;
   Return &EnSW;
end-get;
set Enabled
   /+ &NewValue as Boolean +/;
   &EnSW = &NewValue;
  %This.InsertEnabled=&EnSW;
  %This.DeleteEnabled=&EnSW;
 %This.SetDisplay(&EnSW, &EnabledProperty, False); [This method is called when
                                                    you set this property]
end-set;
get Visible
   /+ Returns Boolean +/;
   Return &VisSW;
end-get;

set Visible
   /+ &NewValue as Boolean +/;
   &VisSW = &NewValue;
   %This.SetDisplay(&VisSW, &VisibleProperty, False);
end-set;

method SetDisplay                 [The most important PeopleCode Method]
   /+ &DisplaySW as Boolean, +/
   /+ &PropName as String, +/
   /+ &ChildSW as Boolean +/             [Not used in our example]
   &RowCnt = %This.ActiveRowCount;
   &NextRow = %This.GetRow(1);      [To know the structure of a line ]
   &RecCnt = &NextRow.RecordCount; 
   For &i = 1 To &RowCnt                     [Loop for each Line]
      &NextRow = %This.GetRow(&i);
      For &j = 1 To &RecCnt                   [Loop for each Record]
         &NextRec = &NextRow.GetRecord(&j);
         &FldCnt = &NextRec.FieldCount;      

         For &k = 1 To &FldCnt                 [Loop for each Field/Record]
            &NextFld = &NextRec.GetField(&k);
            Evaluate Upper(&PropName)
            When = &VisibleProperty
               &NextFld.Visible = &DisplaySW;
               Break;
            When = &EnabledProperty
               &NextFld.Enabled = &DisplaySW; [Enable each Field/Record]
               Break;
            When-Other
              Error "Invalid display property; Must be either VISIBLE or ENABLED"
            End-Evaluate;
         End-For;
      End-For;
      If &ChildSW = True Then   [If recursion]
         &ChildRSCnt = &NextRow.ChildCount;
         For &j = 1 To &ChildRSCnt [Loop for each Rowset child]
            &NextChildRS = &NextRow.GetRowset(&j);
            &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
            &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
 [For each Rowset child, call Method SetDisplay with the same parameters used 
 with the Rowset parent]
         End-For;
      End-If;
   End-For;
end-method;
******End of the Package CLASS_EXTENSIONS:[Name of the Package: CLASS_EXTENSIONS]

About the Author:


Pascal Thaler joined Oracle University in 2005 where he is a Senior Instructor. His area of expertise is Oracle PeopleSoft Technology and he delivers the following courses:

  • For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Object-Oriented PeopleCode Language, Administration Security
  • For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
  • For Interface Users: Integration Broker (Web Service)

Monday Feb 27, 2012

Scheduling Options in Primavera P6 by David Kelly

These options are made available by choosing the Options button from the scheduling dialogue.

In a properly configured P6 environment, where projects are created by copy/paste from an official template created by the central project controls group, these settings will be correct.

Ignore relationships to and from other projects

In almost all circumstances the default, which is not to ignore such relationships, would be correct. An example of where this toggle switch is valuable is if all scope variations were held in a separate project. With both projects open, scheduling shows the combined projects. With just the original project open and this option switched on, only the original scope is scheduled. This is one of the simplest ways to switch between original scope and current scope.

Make open ended activities critical

This is not just a display option; it changes the float on any path with an open end.

Use Expected Finish dates

Expected Finish is a controversial way to report progress. Effectively, rather than telling a P6 activity how much work has been done by reporting a per cent complete and allowing P6 to calculate when we can expect the activity to finish, we input an expected finish date and P6 calculates how much work has been done. Many project controls professionals think this is the wrong way round. Even if such dates have been added in the Status tab of the Activity screen, they will not be used if this option is unchecked. The arithmetic is NOT commutative. It will NOT put the original values back if you un-check this. Be careful!

Schedule automatically when a change affects dates

Switching on “real-time” scheduling, such that the schedule recalculates with every significant change to the data, is not advised. This feature is the preserve of light-weight single-user planning systems.

Level Resources During Scheduling

This is largely a matter of style. Some planners prefer to see the results of the schedule before resource levelling; others, having satisfactorily set the levelling parameters, would rather both processes were run together.

Recalculate assignment costs after scheduling

It is hard to imagine a scenario where one would NOT want to recalculate an activity’s costs based on the new dates that the activity may have once it has been scheduled.

Retained Logic vs. Progress Override

None of the above settings creates as much discussion as this one. The controversy arises when there is out-of-sequence progress.

The above barchart shows two projects which are identical in every respect except for that setting. The only progress that has been achieved is that the site has been prepared. Using Progress Override means that the remaining duration of the activity “Prep site and Erect” starts at the data date. This is clearly nonsense. Note that the whole project now finishes earlier than the Retained Logic project, which leaves the remaining duration in the position that its predecessors demand. What is going on here, and why do these options exist?

In the real world most planners add relationships to a project for two reasons:

  1. The laws of physics. If you are going to put a pipe in a trench, you must make the trench first.
  2. Not enough resource information. If I need to machine two valve blocks, and I do not know all or any of:
  • exactly how much machine time and labour time is required for each of them,
  • whether my resource dictionary properly describes the availability of the equipment and labour to do the job,
  • whether I have clear guidance from management about the priority for allocation of resources,
  but I “know” I can only do one at a time – then I add a finish-to-start relationship between the two activities.

In case 2) above it does not matter if we start the successor activity first, and if we did start it first we would need to finish it before starting on the predecessor. In a perfect world case 2) is easily dealt with by resource levelling, but quite a lot more information is required to do it the correct way.

Calculate start to start lag from

Clearly Actual Start may calculate more realistic dates. Probably Early Start calculates more optimistic dates if the schedule is slipping.

Define Critical Activities as

The definition of Critical in textbooks of CPM methodology is where float is less than or equal to zero. The Longest Path through the network always shows red bars in the barchart for the so-called critical path, and is the more popular choice.

Calculate Float based on Finish date of

When scheduling multiple projects – perhaps a portfolio that represents a single contract – how many float paths are there? Does each project have its “own” float, or is float “owned” by the whole portfolio of projects? Before considering this question we would need to know how the projects' inter-project relationships are structured, how many open ends there are in how many of the projects, and an understanding of the commercial/contractual implications. It is unlikely that the planner on the project can answer this question alone.

Compute Total Float as

The author admits defeat here. Apparently in some circumstances an LoE or WBS Summary activity can have different Start and Finish floats. There is no Primavera documentation that describes the circumstances or justifies the arithmetic. Choose Finish Float.

Calendar for calculating relationship lag

This is very important. Best practice is to always use 24 Hour, and always enter lags in hours. E.g. if the lag is 5 days, enter 120h into the lag dialogue. This way no change in any calendar will alter the wall-clock time of any lag.

NOTE: When you schedule in P6, all open projects are scheduled at the data date selected for each project. If only one project is open, then you can change the data date for that project. If multiple projects are open then you can only change their data dates in the Projects screen, where you can even “Fill Down” a new data date to multiple projects.

About the Author:

David Kelly

Dave Kelly delivers Oracle Primavera training courses at Milestone in Aberdeen; Milestone is an Oracle University Authorised Education Centre and offers the complete Oracle Primavera course curriculum. Dave has been involved in planning and scheduling software training and consultancy for many years; he is well known and respected as an expert in delivering Primavera and associated solutions, as both an experienced consultant and trainer.

About

Expert trainers from Oracle University share tips and tricks and answer questions that come up in a classroom.
