Monday Aug 12, 2013

Quick! Swap those models – I’ve got a better one

(or, Why in-database analytics enables real-time scoring and can make model deployment easy)

Refreshing predictive models is a standard part of the process when deploying advanced analytics solutions in production environments. In addition, many predictive models need to be used in a real-time setting for scoring customers, whether that is for fraud detection, predicting churn, or recommending the next likely product. One of the problems with using vanilla R is that real-time scoring often requires starting an R engine for each score, or enabling some ad hoc mechanism for real-time scoring, which can increase application complexity.

In this blog post, we look at how Oracle R Enterprise enables:

  • Building models in-database on database data from R
  • Renaming in-database models for use by a stored procedure
  • Invoking the stored procedure to make predictions from SQL
  • Building a second model and swapping it with the original
  • Moving a model from a development environment to a production environment

Building the model in R

So let’s start with building a generalized linear model (GLM) in Oracle Database. For illustration purposes, we’ll use the longley data set from R – a macroeconomic data set that provides a well-known example for a highly collinear regression. In R, type ?longley for the full description of the data set.

Using the following R script, we create the database table LONGLEY_TABLE from the longley data.frame and then build the model using the in-database GLM algorithm. We’re predicting the number of people employed using the remaining variables. Then, we view the model details using summary and note the auto-generated model name, which corresponds to the name of the Oracle Data Mining (ODM) model in the database. Next, we use the model to predict using the original data, just to confirm that the model works as expected.

ore.connect("rquser","my_sid","my_host","rquser_pswd",1521, all=TRUE)

ore.create(longley, table="LONGLEY_TABLE")

mod.glm <- ore.odmGLM(Employed ~ ., data = LONGLEY_TABLE)

summary(mod.glm)

predict(mod.glm, LONGLEY_TABLE)

While a user can refer to the ODM model by its name, for example, when working with it in SQL or the Oracle Data Miner GUI, this may not be convenient since the auto-generated name will look something like ORE$23_123. In addition, unless the R object mod.glm is saved in an ORE datastore (an ORE feature corresponding to R’s save and load functions, using and ore.load, but storing objects in the database), this object and the corresponding ODM model will be removed at the end of the session.
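For instance, to keep the model (and the R object referencing it) available across sessions, it can be saved to a datastore before disconnecting; the datastore name below is illustrative:

# persist mod.glm in an ORE datastore so it survives the end of the session, name = "my_glm_datastore", overwrite = TRUE)

# in a later session, after ore.connect, restore the object by datastore name
ore.load(name = "my_glm_datastore")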

In addition, we’ll want a common name for the model so that we can swap an existing model with a new one without having to change higher-level code. To rename an ODM model, we can use the PL/SQL statement shown here, invoked from R using ore.exec. Of course, this could also be done from any SQL interface, e.g., SQL*Plus, SQL Developer, etc., by supplying the explicit SQL.

ore.exec(paste("BEGIN DBMS_DATA_MINING.RENAME_MODEL(model_name => '", mod.glm$, "', new_model_name => 'MY_GLM_MODEL'); END;",sep=""))

So now, we have the ODM model named MY_GLM_MODEL. Keep in mind, after the model is renamed, the original model no longer exists and the R object is invalid – at least from the standpoint of being able to use it in functions like summary or predict.

Scoring data from a SQL procedure

As noted above, users can score in batch from R; they can also score in batch from SQL. But here we’re interested in real-time scoring from the database using the in-database model. This can be done directly in a SQL query by providing the input data in the query itself. This eliminates having to write data to a database table and then doing a lookup to retrieve the data for scoring – making it real-time.

The following SQL does just this. The WITH clause defines the input data, selecting from dual. The SELECT clause uses the model MY_GLM_MODEL to make the prediction using the data defined by data_in.

WITH data_in as (select 2013 "Year",
                        234.289 "GNP",
                        235.6 "Unemployed",
                        107.608 "Population",
                        159 "Armed.Forces",
                        83 "GNP.deflator",
                        60.323 "Employed"
                 from dual )
FROM data_in;

While we could invoke the SQL directly, having a stored procedure in the database can give us more flexibility. Here’s the stored procedure version in PL/SQL.




CREATE OR REPLACE PROCEDURE MY_SCORING_PROC (
  year_in         IN BINARY_DOUBLE,
  gnp_in          IN BINARY_DOUBLE,
  unemployed_in   IN BINARY_DOUBLE,
  population_in   IN BINARY_DOUBLE,
  armed_forces_in IN BINARY_DOUBLE,
  gnp_deflator_in IN BINARY_DOUBLE,
  employed_in     IN BINARY_DOUBLE,
  pred_out        OUT NUMBER) AS
BEGIN
  WITH data_in as (select year_in "Year",
                          gnp_in "GNP",
                          unemployed_in "Unemployed",
                          population_in "Population",
                          armed_forces_in "Armed.Forces",
                          gnp_deflator_in "GNP.deflator",
                          employed_in "Employed"
                   from dual ),
       model_score as (
         FROM data_in )
  select PRED into pred_out from model_score;
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR(-20001,
      'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END;
/


To invoke the stored procedure, we can do the following:



  score NUMBER;
  MY_SCORING_PROC(1947, 234.289, 235.6, 107.608, 159, 83, 60.323, score);
  DBMS_OUTPUT.PUT_LINE('Score: '|| score);
END;
/


Refreshing the model from R

Let’s say the model above has been in production for a while, but has become stale – that is, it’s not predicting as well as it used to due to changing patterns in the data. To refresh it, we build a new model. For illustration purposes, we’re going to use the same data (so an identical model will be produced, except for its name).

mod.glm2 <- ore.odmGLM(Employed ~ ., data = LONGLEY_TABLE)



To swap the models, we delete the existing model called MY_GLM_MODEL and rename the new model to MY_GLM_MODEL. Again, we can do this from R using PL/SQL invoked through ore.exec.

ore.exec(paste("BEGIN DBMS_DATA_MINING.DROP_MODEL('MY_GLM_MODEL'); DBMS_DATA_MINING.RENAME_MODEL(model_name => '",mod.glm2$,"', new_model_name => 'MY_GLM_MODEL'); END;",sep=""))

We can now re-execute the stored procedure and the new model will be used.



  score NUMBER;
  MY_SCORING_PROC(1947, 234.289, 235.6, 107.608, 159, 83, 60.323, score);
  DBMS_OUTPUT.PUT_LINE('Score: '|| score);
END;
/


You may have noticed that this approach can introduce a brief period where no model is accessible – between the DROP_MODEL and RENAME_MODEL calls. A better approach involves the use of synonyms. In general, synonyms provide both data independence and location transparency, being an alternative name for a table, view, sequence, procedure, stored function, or other database object. We can use this in conjunction with our stored procedure above. First, create a synonym for the original scoring procedure:


When invoking the procedure from your application, use the name MY_SCORING_PROC_SYM in place of MY_SCORING_PROC. Instead of renaming the model, create a second stored procedure with a different name, e.g., MY_SCORING_PROC_2, that internally references the name of the newly built model, as sketched below.
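A sketch of that second procedure follows; it is identical to MY_SCORING_PROC except for the model it references (the model name MY_GLM_MODEL_2 is illustrative – use whatever name the refreshed model was given).

CREATE OR REPLACE PROCEDURE MY_SCORING_PROC_2 (
  year_in         IN BINARY_DOUBLE,
  gnp_in          IN BINARY_DOUBLE,
  unemployed_in   IN BINARY_DOUBLE,
  population_in   IN BINARY_DOUBLE,
  armed_forces_in IN BINARY_DOUBLE,
  gnp_deflator_in IN BINARY_DOUBLE,
  employed_in     IN BINARY_DOUBLE,
  pred_out        OUT NUMBER) AS
BEGIN
  -- same logic as MY_SCORING_PROC, but scoring with the newly built model
  WITH data_in as (select year_in "Year",
                          gnp_in "GNP",
                          unemployed_in "Unemployed",
                          population_in "Population",
                          armed_forces_in "Armed.Forces",
                          gnp_deflator_in "GNP.deflator",
                          employed_in "Employed"
                   from dual ),
       model_score as (
         FROM data_in )
  select PRED into pred_out from model_score;
END;
/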

When it is time to swap the models, invoke the following to repoint the synonym at the new procedure:


Another benefit of this approach is that replaced models can still be kept should you need to revert to a previous version. 
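Reverting is then just another synonym switch, for example pointing back at the procedure that references the earlier model:
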

Moving an in-database model from one machine to another

In a production deployment, there’s often the need to move a model from the development environment to the production environment. For example, the data scientist may have built the model in a development / sandbox environment and now needs to move it to the production machine(s).

In-database models provide functions EXPORT_MODEL and IMPORT_MODEL as part of the DBMS_DATA_MINING SQL package. See the 11g documentation for details. These calls can be invoked from R, but we’ll show this from SQL just to keep the flow easier to see.
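As a sketch of the R route, the export call can be wrapped in ore.exec, much as we did with RENAME_MODEL earlier (the file and directory names here are the same illustrative ones used in the SQL example below):

ore.exec("BEGIN DBMS_DATA_MINING.EXPORT_MODEL(filename => 'MY_GLM_MODEL_out', directory => 'MY_DIRECTORY', model_filter => 'name = ''MY_GLM_MODEL'''); END;")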

From a SQL prompt, e.g., from SQL*Plus, connect to the schema that contains the model. Create a DIRECTORY object where the exported model file will be stored. List the model names available to this schema, which should contain MY_GLM_MODEL. Then, export the model.

CONNECT rquser/rquser_psw

-- create a directory object for the exported model file

-- list the models available to rquser

SELECT name FROM dm_user_models;

-- export the model called MY_GLM_MODEL to a dump file in the same schema

EXECUTE DBMS_DATA_MINING.EXPORT_MODEL('MY_GLM_MODEL_out', 'MY_DIRECTORY', 'name = ''MY_GLM_MODEL''');

At this point, you have the ODM model named MY_GLM_MODEL in the file MY_GLM_MODEL_out01.dmp stored in the file system under /home/MY_DIRECTORY. This file can now be moved to the production environment and the model loaded into the target schema.

Log into the new schema and invoke IMPORT_MODEL.

CONNECT rquser2/rquser2_psw

BEGIN
  DBMS_DATA_MINING.IMPORT_MODEL('MY_GLM_MODEL_out01.dmp',
    'RQUSERDIR', 'name = ''MY_GLM_MODEL''',
    'IMPORT', NULL, 'glm_imp_job', 'rquser:rquser2');
END;
/

In this post, we’ve highlighted how to build an in-database model in R and use it for scoring through SQL in a production, real-time setting. In addition, we showed how it is possible to swap, or refresh, models in a way that leaves your application code untouched. Finally, we highlighted database functionality that allows you to move in-database models from one database environment to another.

Users should note that all the functionality shown involving SQL, or being invoked through ore.exec, can be easily wrapped in R functions that could ultimately become part of ORE. If any of our readers are interested in giving this a try, we can post your solution here to share with the R and Oracle community. For the truly adventurous, check out the Oracle Database package DBMS_FILE_TRANSFER to consider wrapping the ability to move model files from R as well.
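For example, a minimal wrapper along these lines (the function names are our own, not part of ORE) hides the PL/SQL from the caller:

# hypothetical helpers – wrap DBMS_DATA_MINING calls via ore.exec; not an ORE API
renameModel <- function(old.name, new.name) {
  ore.exec(paste("BEGIN DBMS_DATA_MINING.RENAME_MODEL(model_name => '", old.name,
                 "', new_model_name => '", new.name, "'); END;", sep=""))
}

dropModel <- function(name) {
  ore.exec(paste("BEGIN DBMS_DATA_MINING.DROP_MODEL('", name, "'); END;", sep=""))
}

# usage sketch, mirroring the swap shown earlier
# dropModel("MY_GLM_MODEL")
# renameModel(<auto-generated ODM model name>, "MY_GLM_MODEL")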

Friday Jul 19, 2013

Oracle R Connector for Hadoop 2.2.0 released

Oracle R Connector for Hadoop 2.2.0 is now available for download. The Oracle R Connector for Hadoop 2.x series has introduced numerous enhancements, which are highlighted in this article and summarized as follows:

ORCH 2.0.0

  • Analytic functions: orch.lm, orch.lmf, orch.neural, orch.nmf
  • Oracle Loader for Hadoop (OLH) support
  • CDH 4.2.0
  • ORCHhive transparency layer

ORCH 2.1.0

  • Analytic functions: orch.cor, orch.cov, orch.kmeans, orch.princomp, orch.sample (by percent)
  • Configurable delimiters in text input data files
  • Map-only and reduce-only jobs
  • Keyless map/reduce output
  • "Pristine" data mode for high performance data access
  • HDFS cache of metadata
  • Hadoop Abstraction Layer (HAL)

ORCH 2.2.0

  • Analytic functions: orch.sample (by number of rows)
  • CDH 4.3.0
  • Full online documentation
  • Support for integer and matrix data types in hdfs.attach with detection of "pristine" data
  • Out-of-the-box support for "pristine" mode for high I/O performance
  • HDFS cache to improve interactive performance when navigating HDFS directories and file lists
  • HDFS multi-file upload and download performance enhancements
  • HAL for Hortonworks Data Platform 1.2 and Apache Hadoop 1.0

ORCH 2.0.0

In ORCH 2.0.0, we introduced four Hadoop-enabled analytic functions supporting linear  regression, low rank matrix factorization, neural network, and non-negative matrix factorization. These enable R users to immediately begin using advanced analytics functions on HDFS data using the MapReduce paradigm on a Hadoop cluster without having to design and implement such algorithms themselves.
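As a rough sketch only – the exact call signatures are not shown in this post, so the argument forms below are assumptions – an R user might attach an HDFS data set and fit a linear model along these lines:

library(ORCH)                       # Oracle R Connector for Hadoop

# attach an existing HDFS data set (the path is illustrative)
dat.dfs <- hdfs.attach("/user/rquser/mydata")

# fit a linear model via MapReduce on the Hadoop cluster
# (formula-style interface assumed, mirroring R's lm)
fit <- orch.lm(y ~ x1 + x2, dat.dfs)
print(fit)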

While ORCH 1.x supported moving data between the database and HDFS using sqoop, ORCH 2.0.0 supports the use of Oracle Loader for Hadoop (OLH) to move very large data volumes from HDFS to Oracle Database in an efficient and high-performance manner.

ORCH 2.0.0 supported Cloudera Distribution for Hadoop (CDH) version 4.2.0 and introduced the ORCHhive transparency layer, which leverages the Oracle R Enterprise transparency layer for SQL, but instead maps to HiveQL, a SQL-like language for manipulating HDFS data via Hive tables.

ORCH 2.1.0

In ORCH 2.1.0, we added several more analytic functions, including correlation and covariance, clustering via K-Means, principal component analysis (PCA), and sampling by specifying the percent of records to return.

ORCH 2.1.0 also brought a variety of features, including: configurable delimiters (beyond comma delimited text files, using any ASCII delimiter), the ability to specify mapper-only and reduce-only jobs, and the output of NULL keys in mapper and reducer functions.

To speed the loading of data into Hadoop jobs, ORCH introduced “pristine” mode where the user guarantees that the data meets certain requirements so that ORCH skips a time-consuming data validation step. “Pristine” data requires that numeric columns contain only numeric data, that missing values are either R’s NA or the null string, and that all rows have the same number of columns. This improves performance of hdfs.get on a 1GB file by a factor of 10.

ORCH 2.1.0 introduced the caching of ORCH metadata to improve the response time of ORCH functions, such as hdfs.describe and hdfs.mget, making them between 5x and 70x faster.

The Hadoop Abstraction Layer, or HAL, enables ORCH to work on top of various Hadoop versions or variants, including Apache/Hortonworks and the Cloudera Hadoop distributions CDH3 and CDH 4.x with MR1 and MR2.

ORCH 2.2.0

In the latest release, ORCH 2.2.0, we’ve augmented orch.sample to allow specifying the number of rows in addition to the percentage of rows. CDH 4.3 is now supported, and ORCH functions provide full online documentation via R's help function or ?. The function hdfs.attach now supports integer and matrix data types and can detect pristine data automatically. HDFS bulk directory upload and download performance was also improved. Through the caching and automatic synchronization of ORCH metadata and file lists, the responsiveness of HDFS-related metadata functions has improved by 3x over ORCH 2.1.0, which also improves the performance of functions such as hadoop.exec. These improvements in turn bring a more interactive user experience for the R user when working with HDFS.

Starting in ORCH 2.2.0, we introduced out-of-the-box tuning optimizations for high performance and expanded HDFS caching to include the caching of file lists, which further improves performance of HDFS-related functions.

The function hdfs.upload now supports the option to upload multi-file directories in a single invocation, which streamlines the process. When downloading an HDFS directory, the download is optimized to issue a single HDFS command to download files into one local temporary directory before combining the separate parts into a single file.

The Hadoop Abstraction Layer (HAL) was extended to support Hortonworks Data Platform 1.2 and Apache Hadoop 1.0. In addition, ORCH now allows the user to override the Hadoop Abstraction Layer version for use with unofficially supported distributions of Hadoop using system environment variables. This enables testing and certification of ORCH by other Hadoop distribution vendors.

Certification of ORCH on non-officially supported platforms can be done using a separate test kit (available for download upon request) that includes an extensive set of tests for core ORCH functionality and that can be run using the ORCH built-in testing framework. Running the tests pinpoints any failures and ensures that ORCH is compatible with the target platform.

See the ORCH 2.2.0 Change List and Release Notes for additional details. ORCH 2.2.0 can be downloaded here.

Tuesday May 28, 2013

Converting Existing R Scripts to ORE - Getting Started

Oracle R Enterprise provides a comprehensive, database-centric environment for end-to-end analytical processes in R, with immediate deployment to production environments. This message really resonates with our customers who are interested in executing R functions on database-resident data while seamlessly leveraging Oracle Database as a high-performance computing (HPC) environment. The ability to develop and operationalize R scripts for analytical applications in one step is quite appealing.

One frequently asked question is how to convert existing R code that accesses data in flat files or the database to use Oracle R Enterprise. In this blog post, we talk about a few scenarios and how to begin a conversion from existing R code to using Oracle R Enterprise.

Consider the following scenarios:

Scenario 1: A stand-alone R script that generates its own data and simply returns a result. Data is not obtained from the file system or database. This may result from performing simulations where data is dynamically generated, or perhaps data accessed from a URL on the internet.

Scenario 2: An R script that loads data from a flat file such as a CSV file, performs some computations in R, and then writes the result back to a file.

Scenario 3: An R script that loads data from a database table via one of the database connector packages like RODBC, RJDBC, or ROracle, and writes a result back to the database, using SQL statements or package functions.

Scenario 1

A stand-alone R script might normally be run on a user’s desktop, invoked as a cron job, or even via Java to spawn an R engine and retrieve the result, but we’d like to operationalize its execution as part of a database application, invoked from SQL. Here’s a simple script to illustrate the concept of converting such a script to be executed at the database server using ORE’s embedded R execution. The script generates a data.frame with some random columns, performs summary on that data and returns the summary statistics, which are represented as an R table.

# generate data
set.seed(1)
n <- 1000
df <- 3
x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n, df=df))

# perform some analysis
res <- summary(x)

# return the result
res
To convert this to use ORE, create a function with appropriate arguments and body, for example:

myFunction1 <- function (n = 1000, df = 3, seed = 1) {
  set.seed(seed)
  x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n, df=df))
  res <- summary(x)
  res
}

Next, load the ORE packages and connect to Oracle Database using the ore.connect function. Using the all argument set to TRUE loads metadata for all the tables and views in that schema. We then store the function in the R script repository, invoking it via ore.doEval.

# load ORE packages and connect to Oracle Database
library(ORE)
ore.connect("schema","sid","hostname","password", port=1521, all=TRUE)

# load function into R script repository
ore.scriptCreate("myFunction-1", myFunction1)

# invoke using embedded R execution at the database server


> ore.doEval(FUN.NAME="myFunction-1")
       a                b                  c           
 Min.   :   1.0   Min.   :-3.00805   Min.   : 0.03449  
 1st Qu.: 250.8   1st Qu.:-0.69737   1st Qu.: 1.27386  
 Median : 500.5   Median :-0.03532   Median : 2.36454  
 Mean   : 500.5   Mean   :-0.01165   Mean   : 3.07924  
 3rd Qu.: 750.2   3rd Qu.: 0.68843   3rd Qu.: 4.25994  
 Max.   :1000.0   Max.   : 3.81028   Max.   :17.56720  

Of course, we’re using default values here. To provide different arguments, change the invocation with arguments as follows:

> ore.doEval(FUN.NAME="myFunction-1", n=500, df=5, seed=2)
       a               b                  c          
 Min.   :  1.0   Min.   :-2.72182   Min.   : 0.1621  
 1st Qu.:125.8   1st Qu.:-0.65346   1st Qu.: 2.6144  
 Median :250.5   Median : 0.04392   Median : 4.4592  
 Mean   :250.5   Mean   : 0.06169   Mean   : 5.0386  
 3rd Qu.:375.2   3rd Qu.: 0.79096   3rd Qu.: 6.8467  
 Max.   :500.0   Max.   : 2.88842   Max.   :17.0367  

Having successfully invoked this from the R client (my laptop), we can now invoke it from SQL. Here, we retrieve the summary result, which is an R table, as an XML string.

select *

from table(rqEval( NULL,'XML','myFunction-1'));

The result can be viewed from SQL Developer.

The following shows the XML output in a more structured manner.

What if we wanted to get the result to appear as a SQL table? Since the current result is an R table (an R object), we need to convert it to a data.frame to return it. We’ll make a few modifications to "myFunction-1" above. Most notable is the need to convert the table object in res to a data.frame. There are a variety of ways to do this.

myFunction2 <- function (n = 1000, df = 3, seed = 1) {
  # generate data
  set.seed(seed)
  x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n, df=df))

  # perform some analysis
  res <- summary(x)

  # convert the table result to a data.frame
  res.df <- as.matrix(res)
  res.sum <- as.data.frame(matrix(as.numeric(substr(res.df, 9, 20)), 6, 3))
  names(res.sum) <- c('a','b','c')
  res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")
  res.sum <- res.sum[, c(4, 1:3)]
  res.sum
}



# load function into R script repository


ore.scriptCreate("myFunction-2", myFunction2)

We’ll now modify the SQL statement to specify the format of the result.

select *
from table(rqEval( NULL,'select cast(''a'' as VARCHAR2(12)) as "statname",
                         1 "a", 1 "b", 1 "c" from dual ','myFunction-2'));

Here’s the result as viewed from SQL Developer.

This type of result could be incorporated into any SQL application accepting table or view input from a SQL query. That is particularly useful in combination with OBIEE dashboards via an RPD.
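For example, the rqEval query could be wrapped in a view (the view name is illustrative), which a dashboard or other application can then query like any other table or view:

CREATE OR REPLACE VIEW SUMMARY_STATS_V AS
  select *
  from table(rqEval( NULL,'select cast(''a'' as VARCHAR2(12)) as "statname",
                     1 "a", 1 "b", 1 "c" from dual ','myFunction-2'));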

Scenario 2

If you’ve been loading data from a flat file, perhaps a CSV file, your R code may look like the following: it builds a clustering model and writes that model to a file for future use, perhaps in scoring. It also generates a graph of the clusters highlighting the individual points, colored by their cluster id, with the centroids indicated with a star.

# read data
dat <- read.csv("myDataFile.csv")

# build a clustering model
cl <- kmeans(dat, 2)

# write model to file
save(cl, file="myClusterModel.dat")

# create a graph and write it to a file (the PDF file name is illustrative)
pdf("myClusterPlot.pdf")
plot(dat, col = cl$cluster)
points(cl$centers, col = 1:2, pch = 8, cex = 2)

The resulting PDF file contains the following image.

To convert this script for use in ORE, there are several options. We’ll explore two: the first involving minimal change to use embedded R execution, and the second leveraging in-database techniques. First, we’ll want the data we used above in variable dat to be loaded into the database.

# create a row id to enable ordered results (if a key doesn't already exist)
dat$ID <- 1:nrow(dat)

# remove the table if it exists
ore.drop(table="MY_DATA")

# create the table using the R data.frame, resulting in an ore.frame named MY_DATA
ore.create(dat, table="MY_DATA")

# assign the ID column as the row.names of the ore.frame
row.names(MY_DATA) <- MY_DATA$ID

In the first example, we’ll use embedded R execution and pass the data to the function via ore.tableApply. We’ll generate the graph, but simply display it within the function to allow embedded R execution to return the graph as a result. (Note we could also write the graph to a file in any directory accessible to the database server.) Instead of writing the model to a file, which requires keeping track of its location, as well as worrying about backup and recovery, we store the model in the database R datastore using All this requires minimal change. As above, we could store the function in the R script repository and invoke it by name – both from R and SQL. In this example, we simply provide the function itself as an argument.

myClusterFunction1 <- function(x) {
  cl <- kmeans(x, 2), name="myClusterModel", overwrite=TRUE)
  plot(x, col = cl$cluster)
  points(cl$centers, col = 1:2, pch = 8, cex = 2)
}

# the ore.png.* dimensions below are illustrative values
ore.tableApply(MY_DATA[,c('x','y')], myClusterFunction1,
               ore.connect = TRUE, ore.png.height = 700, ore.png.width = 700)
The ore.tableApply function projects the x and y columns of MY_DATA as input and also specifies ore.connect as TRUE since we are using the R datastore, which requires a database connection. Optionally, we can specify control arguments to the PNG output. In this example, these are the height and width of the image.

For the second example, we convert this to leverage the ORE Transparency Layer. We’ll use the in-database K-Means algorithm and save the model in a datastore named "myClusterModel", as we did above. Since ore.odmKMeans doesn’t automatically assign cluster ids (the data may be very large, or the assignments may not be required), the scoring is done separately. Note, however, that the prediction results also exist in the database as an ore.frame. To ensure ordering, we also assign row.names to the ore.frame pred. Lastly, we create the plot. Coloring the nodes requires pulling the cluster assignments; however, the points themselves can be accessed from the ore.frame. The centroid points are obtained from cl$centers2 of the cluster model.

# build a clustering model in-database
cl <- ore.odmKMeans(~., MY_DATA, 2)

# save model in database R datastore, name="myClusterModel", overwrite=TRUE)

# generate predictions to assign each row a cluster id, supplement with original data

pred <- predict(cl,MY_DATA,supp=c('x','y','ID'),type="class")

# assign row names to ensure ordering of results

row.names(pred) <- pred$ID

# create the graph

plot(pred[,c('x','y')], col = ore.pull(pred$CLUSTER_ID))

points(cl$centers2[,c('x','y')], col = c(2,3), pch = 8, cex=2)

We can also combine these approaches, using the transparency layer within an embedded R function. But we’ll leave that as an exercise to the reader.

Scenario 3

In this last scenario, the data already exists in the database and one of the database interface packages, such as RODBC, RJDBC, or ROracle, is used to retrieve data from and write data to the database. We’ll illustrate this with ROracle, but the same holds for the other two packages.

# connect to the database
library(ROracle)
drv <- dbDriver("Oracle")
con <- dbConnect(drv, "mySchema", "myPassword")

# retrieve the data specifying a SQL query

dat <- dbGetQuery(con, 'select * from MY_RANDOM_DATA where "a" > 100')

# perform some analysis

res <- summary(dat)

# convert the table result to a data.frame for output as table

res.df <- as.matrix(res)

res.sum <- as.data.frame(matrix(as.numeric(substr(res.df, 9, 20)), 6, 3))

names(res.sum) <- c('a','b','c')

res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")

res.sum <- res.sum[,c(4,1:3)]


dbWriteTable(con, "SUMMARY_STATS", res.sum)

Converting this to ORE is straightforward. We’re already connected to the database using ore.connect from the previous scenarios, and with all=TRUE the existing table MY_RANDOM_DATA was already loaded in the environment as an ore.frame. Executing ore.ls() lists this table among the results, so we can just start using it.



# no need to retrieve the data, use the transparency layer to compute summary

res <- with(MY_RANDOM_DATA , summary(MY_RANDOM_DATA[a > 100,]))

# convert the table result to a data.frame for output as table

res.df <- as.matrix(res)

res.sum <- as.data.frame(matrix(as.numeric(substr(res.df, 9, 20)), 6, 3))

names(res.sum) <- c('a','b','c')

res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")

res.sum <- res.sum[,c(4,1:3)]

# create the database table

ore.create(res.sum, "SUMMARY_STATS")


As we did in previous scenarios, this script can also be wrapped in a function and used in embedded R execution. This too is left as an exercise to the reader.


As you can see from the three scenarios discussed here, converting a script that accesses no external data, accesses and manipulates file data, or accesses and manipulates database data can be accomplished with a few strategic modifications. More involved scripts, of course, may require additional manipulation. For example, if the SQL query performs complex joins and filtering, along with derived column creation, the user may want to convert this SQL to the corresponding ORE Transparency Layer code, thereby eliminating reliance on SQL. But that’s a topic for another post.

Monday Nov 19, 2012

Join us at BIWA Summit 2013!

Registration is now open for BIWA Summit 2013.  This event, focused on Business Intelligence, Data Warehousing and Analytics, is hosted by the BIWA SIG of the IOUG on January 9 and 10 at the Hotel Sofitel, near Oracle headquarters in Redwood City, California.

Be sure to check out our featured speakers, including Oracle executives Balaji Yelamanchili, Vaishnavi Sashikanth, and Tom Kyte, and Ari Kaplan, sports analyst, as well as the many other internationally recognized speakers.  Hands-on labs will give you the opportunity to try out much of the Oracle software for yourself (including Oracle R Enterprise)--be sure to bring a laptop capable of running Windows Remote Desktop.  There will be over 35 sessions on a wide range of BIWA-related topics.  See the BIWA Summit 2013 web site for details and be sure to register soon, while early bird rates still apply.

