A multiple data source environment is one where the Star schema is populated by either multiple P6 instances or a single P6 instance that has been split into unique sources of data. You could split a single P6 instance based on specific criteria. Maybe you want one group of projects under an EPS to be updated in Star only on Fridays, while another group of projects needs to be updated daily. You could split these into separate ETLs. See previous blogs for more information on filtering and multiple data sources.
For this blog we are going to cover how the ETLs are executed. If you have two data sources, you need to run two separate ETL processes at different times. ETL #1 must be run first and complete before ETL #2 can be started. You do NOT want to allow both ETL processes to be executed at the same time. This can be accomplished with a batch process or another queueing mechanism that makes sure ETL #1 completes before ETL #2 is executed, as in the sketch below.
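As a minimal sketch, a small wrapper script can enforce this ordering by waiting for the first ETL process to exit before launching the second. The example below is Python; the two script paths are placeholders and should be replaced with whatever launch scripts your two data source installations actually use.

import subprocess
import sys

# Placeholder paths to the two ETL launch scripts -- substitute the
# real scripts from each data source's installation directory.
ETL_1 = "/opt/p6rdb/datasource1/runetl.sh"
ETL_2 = "/opt/p6rdb/datasource2/runetl.sh"

def run_etl(script):
    """Run one ETL to completion and return its exit code."""
    print("Starting " + script + " ...")
    result = subprocess.run([script])
    print(script + " finished with exit code " + str(result.returncode))
    return result.returncode

if __name__ == "__main__":
    # ETL #1 must finish successfully before ETL #2 is started.
    if run_etl(ETL_1) != 0:
        sys.exit("ETL #1 failed; not starting ETL #2.")
    sys.exit(run_etl(ETL_2))

A scheduler (cron, Windows Task Scheduler, or an enterprise job scheduler) would then run this single wrapper instead of scheduling the two ETLs independently, so the two processes can never overlap.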
If the ETLs were run at the same time, you could see data issues because they share staging tables. While the data in the fact and dimension tables is contained in rows that are unique to each data source, the staging tables are not. If both ETLs were running at the same time, the staging data could be clobbered, and that clobbered data might then be pulled into the rows for an existing data source.
To help control this problem, a new web configuration utility was introduced in P6 Reporting Database and P6 Analytics 3.3. It provides a queuing mechanism to prevent ETLs from running at the same time.
You can set up a separate tab for each ETL and define its schedule. The ETLs will then queue up and be displayed on the home tab, which shows both running and queued ETLs. From there they can also be stopped or removed from the queue. The main takeaway is that in multiple data source environments the ETLs are run sequentially, not in parallel.