Monday Jul 13, 2015

BIWASummit 2016 "Call for Speakers" is open!

Oracle BIWA Summit is an annual conference that provides attendees a concentrated three days of content focused on Big Data and Analytics. Once again, it will be held at the Oracle Headquarters Conference Center in Redwood Shores, CA. As part of the organizing committee, I invite you to submit session proposals, especially those involving Oracle's R technologies.

BIWA Summit attendees want to hear about your use of Oracle technology. Proposals will be accepted through Monday, November 2, 2015, at midnight EST.

To submit your abstract, click here.

This year's tracks include:


Oracle BIWA Summit 2016 is organized and managed by the Oracle BIWA SIG, the Oracle Spatial SIG, and the Oracle Northern California User Group. The event attracts top BI, data warehousing, analytics, Spatial, IoT and Big Data experts.

The three-day event includes keynotes from industry experts, educational sessions, hands-on labs, and networking events.

Hot topics include:


  • Database, data warehouse and cloud, Big Data architecture

  • Deep dives and hands-on labs on existing Oracle BI, data warehouse, and analytics products

  • Updates on the latest Oracle products and technologies (e.g. Big Data Discovery, Oracle Visual Analyzer, Oracle Big Data SQL)

  • Novel and interesting use cases on everything – Spatial, Graph, Text, Data Mining, IoT, ETL, Security, Cloud

  • Working with Big Data (e.g., Hadoop, “Internet of Things,” SQL, R, Sentiment Analysis)

  • Oracle Business Intelligence (OBIEE), Oracle Big Data Discovery, Oracle Spatial, and Oracle Advanced Analytics—Better Together

I look forward to seeing you there!

Friday Apr 17, 2015

The Intersection of “Data Capital” and Advanced Analytics

We’ve heard about the Three Laws of Data Capital from Paul Sonderegger at Oracle: data comes from activity, data tends to make more data, and platforms tend to win. Advanced analytics enables enterprises to take full advantage of the data their activity produces, ranging from IoT sensors and PoS transactions to social media and image/video. Traditional BI tools also make more data from data by producing summaries, but they provide a view of the past: what did happen. Advanced analytics likewise produces more data from data, but this data is transformative, generating previously unknown insights and providing a view of future behavior or outcomes: what will likely happen. Oracle provides a platform for advanced analytics today through Oracle Advanced Analytics on Oracle Database and Oracle R Advanced Analytics for Hadoop on Big Data Appliance, enabling enterprises to invest their data.

Enterprises need to put their data to work to realize a return on their investment in data capture, cleansing, and maintenance. Investing data through advanced analytics algorithms has repeatedly been shown to increase ROI dramatically. For examples, see customer quotes and videos from StubHub, dunnhumby, CERN, among others. Too often, data centers are perceived as imposing a “tax” instead of yielding a “dividend.” If you cannot extract new insights from your data and use data for revenue-enhancing actions such as predicting customer behavior, understanding root causes, and reducing fraud, the costs to maintain large volumes of historical data may feel like a tax. How do enterprises convert data centers to dividend-yielding assets?

One approach is to reduce “transaction costs.” Typically, these costs involve moving data into environments where predictive models can be produced, or sampling data so it is small enough to fit existing hardware and software architectures. Then, there is the cost of putting those models into production. These transaction costs lead to multi-step, labor-intensive efforts that make enterprises postpone investing their data and deriving value from it. Oracle has long recognized the origins of these high transaction costs and produced tools and a platform to eliminate or dramatically lower them.

Further, consider the data scientist or analyst as the “data capital manager,” the person or persons striving to extract the maximum yield from data assets. To achieve high dividends with low transaction costs, the data capital manager needs tools and a platform that automate these activities, making them more productive and ultimately more heroic within the enterprise: doing more with less because the work is faster and easier. Oracle removes much of the grunt work from the advanced analytics process: data is readily accessible, data manipulation and model building / data scoring are scalable, and deployment is immediate. To learn more about how to increase dividends from your data capital, see Oracle Advanced Analytics and Oracle R Advanced Analytics for Hadoop.

Tuesday Dec 23, 2014

Pain Point #4: “Recoding R (or other) models into SQL, C, or Java takes time and is error prone”

In the previous post in this series, Addressing Analytic Pain Points, I focused on some issues surrounding production deployment of advanced analytics solutions. One specific aspect of production deployment involves how to get predictive model results (e.g., scores) from R or leading vendor tools into applications based on programming languages such as SQL, C, or Java. In certain environments, one way to integrate predictive models involves recoding them into one of these languages. Recoding involves identifying the minimal information needed for scoring, i.e., making predictions, and implementing that in a language compatible with the target environment. For example, consider a linear regression model with its coefficients: it can be fairly straightforward to write a SQL statement or a function in C or Java that produces a score using these coefficients. This translated model can then be integrated with production applications or systems.
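
To make the recoding step concrete, here is a minimal sketch using base R's lm function and the longley data set; the model, column choices, and hand-rolled scoring function are purely illustrative. The arithmetic extracted below is exactly what would then have to be re-implemented by hand in SQL, C, or Java.

# Illustrative only: fit a small linear model and "recode" its scoring logic by hand
fit <- lm(Employed ~ GNP + Population, data = longley)
b <- coef(fit)    # intercept and coefficients a developer would transcribe

# Hand-rolled scoring function -- the logic that gets ported to SQL, C, or Java,
# e.g., SELECT b0 + b1*GNP + b2*Population AS score FROM some_table
score <- function(GNP, Population) unname(b[1] + b[2] * GNP + b[3] * Population)

score(GNP = 234.289, Population = 107.608)    # matches predict(fit, ...) for the same row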

While recoding has been a technique used for decades, it suffers from several drawbacks: latency, quality, and robustness. Latency refers to the time delay between the data scientist developing the solution and leveraging that solution in production. Customers recount historic horror stories where the process from analyst to software developers to application deployment took months. Quality comes into play on two levels: the coding and testing quality of the software produced, and the freshness of the model itself. In fast changing environments, models may become “stale” within days or weeks. As a result, latency can impact quality. In addition, while a stripped down implementation of the scoring function is possible, it may not account for all cases considered by the original algorithm implementer. As such, robustness, i.e., the ability to handle greater variation in the input data, may suffer.

One way to address this pain point is to make it easy to leverage predictive models immediately (especially open source R and in-database Oracle Advanced Analytics models), thereby eliminating the need to recode them. Since enterprise applications normally know how to interact with databases via SQL, as soon as a model is produced, it can be placed into production via SQL access. R models can be scored in parallel using Oracle R Enterprise embedded R execution via ore.rowApply, and, for select model types, the ore.predict capability automatically translates native R models for execution inside the database. In the case of the native SQL Oracle Advanced Analytics interface algorithms, as found in Oracle Data Mining and exposed through an R interface in Oracle R Enterprise, users can perform scoring directly in Oracle Database. This capability minimizes or even eliminates latency, dramatically increases quality, and leverages the robustness of the original algorithm implementations.
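
As a rough sketch of what this looks like from R, assuming an established Oracle R Enterprise session (the model and data are illustrative, and exact arguments may vary by ORE version):

# Hedged sketch: score an open source R model against database data without recoding
fit <- lm(Employed ~ GNP + Population, data = longley)   # model built in open source R
LONGLEY <- ore.push(longley)                             # ore.frame proxy for a database table

# For supported model types, ore.predict translates the R model for in-database scoring
pred1 <- ore.predict(fit, LONGLEY)

# Alternatively, score row chunks in parallel via embedded R execution
pred2 <- ore.rowApply(LONGLEY,
                      function(dat, mod) data.frame(pred = predict(mod, dat)),
                      mod = fit, rows = 100,
                      FUN.VALUE = data.frame(pred = numeric(0)))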

Sunday Dec 14, 2014

Pain Point #3: “Putting R (or other) models and results into production is ad hoc and complex”

Continuing in our series Addressing Analytic Pain Points, another concern for data scientists and analysts, as well as enterprise management, is how to leverage analytic results in production systems. These production systems can include (i) dashboards used by management to make business decisions, (ii) call center applications where representatives see personalized recommendations for the customer they’re speaking to or how likely that customer is to churn, (iii) real-time recommender systems for customer retail web applications, (iv) automated network intrusion detection systems, and (v) semiconductor manufacturing alert systems that monitor product quality and equipment parameters via sensors – to name a few.

When a data scientist or analyst begins examining a data-based business problem, one of the first steps is to acquire the available data relevant to that problem. In many enterprises, this involves having it extracted from a data warehouse and operational systems, or acquiring supplemental data from third parties. They then explore the data, prepare it with various transformations, build models using a variety of algorithms and settings, evaluate the results, and after choosing a “best” approach, produce results such as predictions or insights that can be used by the enterprise.

If the end goal is to produce a slide deck or report, aside from those final documents, the work is done. However, reaping financial benefits from advanced analytics often needs to go beyond PowerPoint! It involves automating the process described above: extract and prepare the data, build and select the “best” model, generate predictions or highlight model details such as descriptive rules, and utilize them in production systems.

One of the biggest challenges enterprises face involves realizing in production the benefits that the data scientist achieved in the lab. How do you take that cleverly crafted R script, for example, and put all the necessary “plumbing” around it to enable not only the execution of the R script, but also the movement of data and delivery of results where they are needed, parallel and distributed script execution across compute nodes, and execution scheduling?

In a production deployment, care needs to be taken to safeguard against potential failures in the process. Further, more “moving parts” result in greater complexity. Since the plumbing is often custom implemented for each deployment, it needs to be reinvented and thoroughly tested for each project. Unfortunately, code and process reuse is seldom realized across an enterprise even for similar projects, which results in duplication of effort.

Oracle Advanced Analytics (Oracle R Enterprise and Oracle Data Mining) with Oracle Database provides an environment that eliminates the need for a separately managed analytics server, the corresponding movement of data and results between such a server and the database, and the need for custom plumbing. Users can store their R and SQL scripts directly in Oracle Database and invoke them through standard database mechanisms. For example, R scripts can be invoked via SQL, and SQL scripts can be scheduled for execution through Oracle Database’s DBMS_SCHEDULER package. Parallel and distributed execution of R scripts is supported through embedded R execution, while the database kernel supports parallel and distributed execution of SQL statements and in-database data mining algorithms. In addition, using the Oracle Advanced Analytics GUI, Oracle Data Miner, users can convert “drag and drop” analytic workflows to SQL scripts for ease of deployment in Oracle Database.
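
As a rough sketch of this pattern from the R side (assuming an established Oracle R Enterprise session; the script and object names are illustrative), a script stored in the R Script Repository can be invoked by name through embedded R execution, and the same stored script can also be invoked from SQL or scheduled with DBMS_SCHEDULER:

# Hedged sketch: store an R script in the database R Script Repository and invoke it by name
LONGLEY <- ore.push(longley)                     # stand-in for an existing database table
ore.scriptCreate("build_employment_model",
                 function(dat) lm(Employed ~ ., data = dat))
mod <- ore.tableApply(LONGLEY, FUN.NAME = "build_employment_model")
class(ore.pull(mod))                             # "lm" -- built in a database-spawned R engine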

By making solution deployment a well-defined and routine part of the production process and reducing complexity through fewer moving parts and built-in capabilities, enterprises are able to realize and then extend the value they get from predictive analytics faster and with greater confidence.

Monday Sep 22, 2014

Oracle R Enterprise 1.4.1 Released

Oracle R Enterprise, a component of the Oracle Advanced Analytics option to Oracle Database, makes the open source R statistical programming language and environment ready for the enterprise and big data. Designed for problems involving large data volumes, Oracle R Enterprise integrates R with Oracle Database.

R users can execute R commands and scripts for statistical and graphical analyses on data stored in Oracle Database. R users can develop, refine, and deploy R scripts that leverage the parallelism and scalability of the database to automate data analysis. Data analysts and data scientists can use open source R packages and develop and operationalize R scripts for analytical applications in one step – from R or SQL.

With the new release of Oracle R Enterprise 1.4.1, Oracle enables support for Multitenant Container Database (CDB) in Oracle Database 12c and pluggable databases (PDB). With support for CDB / PDB, enterprises can take advantage of new ways of organizing their data: easily taking entire databases offline and easily bringing them back online when needed. Enterprises, such as pharmaceutical companies, that collect vast quantities of data across multiple experiments for individual projects immediately benefit from this capability.

This point release also includes the following enhancements:

• Certified for use with R 3.1.1 and Oracle R Distribution 3.1.1.

• Simplified and enhanced script for install, upgrade, and uninstall of ORE Server and the creation and configuration of ORE users.

• New supporting packages: arules and statmod.

• ore.glm accepts offset terms in the model formula and can fit negative binomial and Tweedie families of GLM.

• New ore.sync argument, query, creates an ore.frame object from a SELECT statement without creating a view. This effectively gives users a view of the data without requiring the CREATE VIEW privilege (see the sketch after this list).

• Global option for serialization, ore.envAsEmptyenv, specifies whether referenced environment objects in an R object, e.g., in an lm model, should be replaced with an empty environment during serialization to the ORE R datastore. This is used by (i) ore.push, which for a list object accepts envAsEmptyenv as an optional argument, (ii) ore.save, which has envAsEmptyenv as a named argument, and (iii) ore.doEval and the other embedded R execution functions, which accept ore.envAsEmptyenv as a control argument.
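
As a rough sketch of the new query argument (the ore.frame name, table, and SQL are purely illustrative; an established ORE session is assumed):

# Hedged sketch: create an ore.frame from a SELECT statement without creating a database view
ore.sync(query = c("EMP_SUBSET" = "SELECT empno, ename, sal FROM emp WHERE sal > 2000"))
ore.attach()
head(EMP_SUBSET)    # behaves like a view of the data; no CREATE VIEW privilege required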

Oracle R Enterprise 1.4.1 can be downloaded from OTN here.

Thursday Jul 24, 2014

Are you experiencing analytics pain points?

At the useR! 2014 conference at UCLA in early July, a stimulating and well-attended event, I spoke about Oracle’s R Technologies during the sponsor talks. One of my slides focused on examples of analytics pain points we often hear from customers and prospects. For example:

“It takes too long to get my data or to get the ‘right’ data”
“I can’t analyze or mine all of my data – it has to be sampled”
“Putting R models and results into production is ad hoc and complex”
“Recoding R models into SQL, C, or Java takes time and is error prone”
“Our company is concerned about data security, backup and recovery”
“We need to build 10s of thousands of models fast to meet business objectives”

After the talk, several people approached me, remarking that these are exactly the problems they encounter in their organizations. One person even asked if I’d interviewed her for my talk, since she is experiencing every one of these pain points.

Oracle R Enterprise, a component of the Oracle Advanced Analytics option to Oracle Database, addresses these pain points. Let’s take a look one by one.

If it takes too long to get your data, perhaps because you’re moving it from the database where it resides to your external analytics server or laptop, the ideal solution is: don’t move it. Analyze it where it is. This is exactly what Oracle R Enterprise allows you to do using the transparency layer and in-database predictive analytics capabilities. With Oracle R Enterprise, R functions normally performed on data.frames are translated to SQL for execution in the database, taking advantage of query optimization, indexes, parallel-distributed execution, etc. With the advent of the Oracle Database In-Memory option, this has even more advantages, but that’s a topic for another post. The second part of this pain point is getting access to the “right” data. Allowing your data scientist to have a sandbox with access to the range of data necessary to perform his/her work avoids the delay of requesting flat file extracts via the DBA, only to realize that more or different data is required. The cycle time in getting the “right” data impedes progress, not to mention annoying some key individuals in your organization. We’ll come back to the security aspects later.
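
As a rough sketch of analyzing data in place through the transparency layer (assuming an established ore.connect session; the table and column names are illustrative):

# Hedged sketch: the ore.frame proxy keeps data in the database; the filter and
# summary below are translated to SQL and executed there
ore.sync(table = "SALES")                   # hypothetical table name
SALES <- ore.get("SALES")                   # ore.frame proxy, no data movement
big <- SALES[SALES$AMOUNT > 1000, c("REGION", "AMOUNT")]
summary(big$AMOUNT)                         # summary statistics computed in-database
head(big, 10)                               # only these rows are returned to the client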

Increasingly, data scientists want to avoid sampling data when analyzing data or building predictive models. Minimally, they at least want to use much more data than may fit in typical analytics servers. Oracle R Enterprise provides an R interface to powerful in-database analytic functions and data mining algorithms. These algorithms are designed to work in a parallel, distributed manner whether the data fits in memory or not. In other cases, sampling is desired, if not required, but this results in a chicken-and-egg problem: the data need to be sampled since they won’t fit in memory, but the data are too big to fit in memory to sample! Users have developed home-grown techniques to chunk the data and combine partial samples; however, they shouldn’t have to. When sampling is desired or required, Oracle R Enterprise can leverage row indexing and in-database sampling to extract only the database table rows that are in the sample, using standard R syntax or Oracle R Enterprise-based sampling functions.
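
A rough sketch of the row-indexing approach, reusing the illustrative SALES ore.frame from above (exact behavior may vary by ORE version):

# Hedged sketch: sample by row index so that only the sampled rows ever leave the database
N <- nrow(SALES)                  # row count computed in the database
idx <- sample(N, size = 10000)    # sampled row indices chosen on the client
SAMP <- SALES[idx, ]              # ore.frame restricted to the sampled rows
dat <- ore.pull(SAMP)             # materialize just the sample in R memory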

Our next pain point involves production deployment. Many good predictive models have gone to waste for lack of integration with, or because of complexity introduced by, production environments. Enterprise applications and dashboards often speak SQL and know how to access data. However, crafting a solution that extracts data, invokes an R script in an external R engine, and places batch results back in the database requires a lot of manual coding, often leveraging ad hoc cron jobs. Oracle R Enterprise enables the execution of R scripts on the database server machine, in local R engines under the control of Oracle Database. This can be done from R and SQL. Using the SQL API, R scripts can be invoked to return results in the form of table data, images, and XML. In addition, data can be moved to these R engines more efficiently, and powerful database hardware, such as Exadata machines, can be leveraged for data-parallel and task-parallel R script execution.

When users don’t have access to a tight integration between R and SQL as noted above, another pain point involves using R only to build the models and relying on developers to recode the scoring procedures in a programming language that fits with the production environment, e.g., SQL, C, or Java. This has multiple downsides: it takes time to recode, manual recoding is error prone, and the resulting code requires significant testing. When the model is refreshed, the process repeats.

The pain points discussed so far also raise concerns about security, backup, and recovery. If data is being moved around in flat files, what security protocols or access controls are placed on those flat files? How can access be audited? Oracle R Enterprise enables analytics users to leverage an Oracle Database secured environment for data access. Moving on, if R scripts, models, and other R objects are stored and managed as flat files, how are these backed up? How are they synced with the deployed application? By storing all these artifacts in Oracle Database via Oracle R Enterprise, backup is a normal part of DBA operations with established protocols. The R Script Repository and Datastore simplify backup. With ad hoc solutions involving third-party analytic servers, there is also the issue of recovery, or resilience to failures. Fewer moving parts mean lower complexity. Programming for failure contingencies in a distributed application adds significant complexity to an application. Allowing Oracle Database to control the execution of R scripts in database server-side R engines reduces complexity and frees application developers and data scientists to focus on the more creative aspects of their work.

Lastly, users of advanced analytics software – data scientists, analysts, statisticians – are increasingly pushing the barrier of scalability: not just in the volume of data processed, but in the number and frequency of their computations and analyses, e.g., predictive model building. Where only a few models are involved, it may be tractable to manage a few files to store predictive models on disk (although, as noted above, this has its own complications). When you need to build thousands or hundreds of thousands of models, managing them becomes a challenge in its own right.
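
A rough sketch of using an ORE datastore to manage a collection of models instead of flat files (the grouping, model, and datastore names are illustrative; an established ORE session is assumed):

# Hedged sketch: build one model per group and persist them together in a named datastore
groups <- split(longley, cut(longley$Year, 4))                   # illustrative partitioning
models <- lapply(groups, function(d) lm(Employed ~ GNP, data = d))
ore.save(models, name = "employment_models", overwrite = TRUE)   # stored in Oracle Database
ore.datastore(name = "employment_models")                        # list it; ore.load() restores it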

In summary, customers are facing a wide range of pain points in their analytics activities. Oracle R Enterprise, a component of the Oracle Advanced Analytics option to Oracle Database, addresses these pain points allowing data scientists, analysts, and statisticians, as well as the IT staff who supports them, to be more productive, while promoting and enabling new uses of advanced analytics.

Tuesday Jul 22, 2014

StubHub Taps into Big Data for Insight into Millions of Customers’ Ticket-Buying Patterns, Fraud Detection, and Optimized Ticket Prices

What can you use for a comprehensive platform for real-time analytics?
How do you drive company growth to leverage actions of millions of customers?
How can you process big data volumes for near-real-time recommendations and dramatically reduce fraud?

These questions, and others, reflect challenges faced by StubHub. Read what StubHub achieved with Oracle R Enterprise from the Oracle Advanced Analytics option to Oracle Database.

Mike Barber, Senior Manager of Data Science at StubHub, said:

“Big data is having a tremendous impact on how we run our business. Oracle Database and its various options—including Oracle Advanced Analytics—combine high-performance data-mining functions with the open source R language to enable predictive analytics, data mining, text mining, statistical analysis, advanced numerical computations, and interactive graphics—all inside the database.”

Yadong Chen, Principal Architect, Data Systems at StubHub, said:

“We considered solutions from several other vendors, but Oracle Database was a natural choice for us because it enabled us to run analytics at the data source. This capability, together with the integration of open source R with the database, ensured scalability and enabled near-real-time analytics capabilities."

Read the full press release here.

Monday May 19, 2014

Model cross-validation with ore.CV()

In this blog post we illustrate how to use Oracle R Enterprise for performing cross-validation of regression and classification models. We describe a new utility R function ore.CV that leverages features of Oracle R Enterprise and is available for download and use.

Predictive models are usually built on given data and verified on held-aside or unseen data. Cross-validation is a model improvement technique that avoids the limitations of a single train-and-test experiment by building and testing multiple models via repeated sampling from the available data. Its purpose is to offer better insight into how well the model will generalize to new data and to avoid over-fitting and deriving wrong conclusions from misleading peculiarities of the seen data.

In a k-fold cross-validation the data is partitioned into k (roughly) equal size subsets. One of the subsets is retained for testing and the remaining k-1 subsets are used for training. The process is repeated k times with each of the k subsets serving exactly once as testing data. Thus, all observations in the original data set are used for both training and testing.
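
As a small illustration of the fold assignment in plain R (the choice of k and the data set are arbitrary here):

# Minimal sketch of k-fold assignment: each row receives a fold label in 1..k
k <- 5
n <- nrow(iris)
fold <- sample(rep(seq_len(k), length.out = n))
# in iteration i, rows with fold == i form the test set and the remaining rows the training set
table(fold)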

The choice of k depends, in practice, on the size n of the data set. For large data, k=3 could be sufficient. For very small data, in the extreme case where k=n, leave-one-out cross-validation (LOOCV) uses a single observation from the original sample as testing data and the remaining observations as training data. Common choices are k=10 or k=5.

For a select set of algorithms and cases, the function ore.CV performs cross-validation for models generated by ORE regression and classification functions using in-database data. ORE embedded R execution is leveraged to also support cross-validation for models built with vanilla R functions.

Usage

ore.CV(funType, function, formula, dataset, nFolds=<nb.folds>, fun.args=NULL, pred.args=NULL, pckg.lst=NULL)
  • funType - "regression" or "classification"
  • function - ORE predictive modeling functions for regression & classification or R function (regression only)
  • formula - object of class "formula"
  • dataset - name of the ore.frame
  • nFolds - number of folds
  • fun.args - list of supplementary arguments for 'function'
  • pred.args - list of supplementary arguments for 'predict'. Must be consistent with the model object/model generator 'function'.
  • pckg.lst - list of packages to be loaded by the DB R engine for embedded execution.
The set of functions supported for ORE includes:
  • ore.lm
  • ore.stepwise
  • ore.neural
  • ore.glm
  • ore.odmDT
  • ore.odmSVM
  • ore.odmGLM
  • ore.odmNB
The set of functions supported for R includes:
  • lm
  • glm
  • svm
Note: The 'ggplot' and 'reshape' packages are required on the R client side for data post-processing and plotting (classification CV).

Examples

In the following examples, we illustrate various ways to invoke ore.CV using some datasets we have seen in previous posts. The datasets can be created as ore.frame objects using:
 
IRIS <- ore.push(iris)
LONGLEY <- ore.push(longley)
library(rpart)
KYPHOSIS <- ore.push(kyphosis)
library(PASWR)
TITANIC3 <- ore.push(titanic3)
MTCARS <- ore.push(mtcars)
(A) Cross-validation for models generated with ORE functions.
 
# Basic specification
ore.CV("regression","ore.lm",Sepal.Length~.-Species,"IRIS",nFolds=5)
ore.CV("regression","ore.neural",Employed~GNP+Population+Year,
            "LONGLEY",nFolds=5)

#Specification of function arguments
ore.CV("regression","ore.stepwise",Employed~.,"LONGLEY",nFolds=5,
            fun.args= list(add.p=0.15,drop.p=0.15))
ore.CV("regression","ore.odmSVM",Employed~GNP+Population+Year,
             "LONGLEY",nFolds=5, fun.args="regression")

#Specification of function arguments and prediction arguments
ore.CV("classification","ore.glm",Kyphosis~.,"KYPHOSIS",nFolds=5,
             fun.args=list(family=binomial()),pred.args=list(type="response"))
ore.CV("classification","ore.odmGLM",Kyphosis~.,"KYPHOSIS",nFolds=5,
            fun.args= list(type="logistic"),pred.args=list(type="response"))
 
(B) Cross-validation for models generated with R functions via the ORE embedded execution mechanism.

ore.CV("regression","lm",mpg~cyl+disp+hp+drat+wt+qsec,"MTCARS",nFolds=3)
ore.CV("regression","svm",Sepal.Length~.-Species,"IRIS",nFolds=5,
             fun.args=list(type="eps-regression"), pckg.lst=c("e1071")) 


Restrictions

  • The signature of the model generator ‘function’ must be of the following type: function(formula, data, ...). For example, functions like ore.stepwise, ore.odmGLM, and lm are supported, but the R step(object, scope, ...) function for AIC model selection via the stepwise algorithm does not satisfy this requirement.
  • The model validation process requires the prediction function to return a (1-dimensional) vector with the predicted values. If the (default) returned object is different, the requirement must be met by providing an appropriate argument through ‘pred.args’. For example, for classification with ore.glm or ore.odmGLM, the user should specify pred.args=list(type="response").
  • Cross-validation of classification models via embedded R execution of vanilla R functions is not supported yet.
  • Remark: Cross-validation is not a technique intended for large data, as the cost of multiple model training and testing can become prohibitive. Moreover, with large data sets it is possible to produce effective sampled train and test data sets. The current ore.CV does not impose any restrictions on the size of the input, and users working with large data should use good judgment when choosing the model generator and the number of folds.

    Output

    The function ore.CV provides output on several levels: datastores to contain model results, plots, and text output.

    Datastores

    The results of each cross-validation run are saved into a datastore named dsCV_funTyp_data_Target_function_nFxx where funTyp, function, nF(=nFolds) have been described above and Target is the left-hand-side of the formula. For example, if one runs the ore.neural, ore.glm, and ore.odmNB-based cross-validation examples from above, the following three datastores are produced:
    
    R> ds <- ore.datastore(pattern="dsCV")
    R> print(ds)
    datastore.name object.count size creation.date description
    1 dsCV_classification_KYPHOSIS_Kyphosis_ore.glm_nF5 10 4480326 2014-04-30 18:19:55 <NA>
    2 dsCV_classification_TITANIC3_survived_ore.odmNB_nF5 10 592083 2014-04-30 18:21:35 <NA>
    3 dsCV_regression_LONGLEY_Employed_ore.neural_nF5 10 497204 2014-04-30 18:16:35 <NA>
    
    Each datastore contains the models and prediction tables for every fold. Every prediction table has 3 columns: the fold index together with the target variable/class and the predicted values. If we consider the example from above and examine the most recent datastore (the Naive Bayes classification CV), we would see:
    
    R> ds.last <- ds$datastore.name[which.max(as.numeric(ds$creation.date))]
    R> ore.datastoreSummary(name=ds.last)
    object.name class size length row.count col.count
    1 model.fold1 ore.odmNB 66138 9 NA NA
    2 model.fold2 ore.odmNB 88475 9 NA NA
    3 model.fold3 ore.odmNB 110598 9 NA NA
    4 model.fold4 ore.odmNB 133051 9 NA NA
    5 model.fold5 ore.odmNB 155366 9 NA NA
    6 test.fold1 ore.frame 7691 3 261 3
    7 test.fold2 ore.frame 7691 3 262 3
    8 test.fold3 ore.frame 7691 3 262 3
    9 test.fold4 ore.frame 7691 3 262 3
    10 test.fold5 ore.frame 7691 3 262 3
    
    

    Plots

    The following plots are generated automatically by ore.CV and saved in an automatically generated OUTPUT directory:

  • Regression: ore.CV generates plots comparing predicted vs. target values, and boxplots of root mean square error (RMSE) and relative error (RERR) per fold. The example below is based on 5-fold cross-validation with the ore.lm regression model for Sepal.Length ~ .-Species using the ore.frame IRIS dataset.
  • Classification: ore.CV outputs a multi-plot figure for classification metrics such as precision, recall, and F-measure. Each metric is captured per target class (side-by-side barplots) and per fold (groups of barplots). The example below is based on 5-fold CV of the ore.odmSVM classification model for Species ~ . using the ore.frame IRIS dataset.
  • Text output
    For classification problems, the confusion tables for each fold are saved in an output file residing in the OUTPUT directory, together with a summary table displaying the precision, recall, and F-measure metrics for every fold and predicted class.
    file.show("OUTDIR/tbl_CV_classification_IRIS_Species_ore.odmSVM_nF5")
    
    Confusion table for fold 1 :           
                 setosa versicolor virginica
      setosa          9          0         0
      versicolor      0         12         1
      virginica       0          1         7
    Confusion table for fold 2 :            
                 setosa versicolor virginica
      setosa          9          0         0
      versicolor      0          8         1
      virginica       0          2        10
    Confusion table for fold 3 :           
                 setosa versicolor virginica
      setosa         11          0         0
      versicolor      0         10         2
      virginica       0          0         7
    Confusion table for fold 4 :            
                 setosa versicolor virginica
      setosa          9          0         0
      versicolor      0         10         0
      virginica       0          2         9
    Confusion table for fold 5 :            
                 setosa versicolor virginica
      setosa         12          0         0
      versicolor      0          5         1
      virginica       0          0        12
    Accuracy, Recall & F-measure table per {class,fold}
       fold      class TP  m  n Precision Recall F_meas
    1     1     setosa  9  9  9     1.000  1.000  1.000
    2     1 versicolor 12 13 13     0.923  0.923  0.923
    3     1  virginica  7  8  8     0.875  0.875  0.875
    4     2     setosa  9  9  9     1.000  1.000  1.000
    5     2 versicolor  8  9 10     0.889  0.800  0.842
    6     2  virginica 10 12 11     0.833  0.909  0.870
    7     3     setosa 11 11 11     1.000  1.000  1.000
    8     3 versicolor 10 12 10     0.833  1.000  0.909
    9     3  virginica  7  7  9     1.000  0.778  0.875
    10    4     setosa  9  9  9     1.000  1.000  1.000
    11    4 versicolor 10 10 12     1.000  0.833  0.909
    12    4  virginica  9 11  9     0.818  1.000  0.900
    13    5     setosa 12 12 12     1.000  1.000  1.000
    14    5 versicolor  5  6  5     0.833  1.000  0.909
    15    5  virginica 12 12 13     1.000  0.923  0.960
    
              
    What's next
    Several extensions of ore.CV are possible involving sampling, parallel model training and testing, support for vanilla R classifiers, post-processing and output. More material for future posts.

    Sunday Apr 27, 2014

    Step-by-step: Returning R statistical results as a Database Table


    R provides a rich set of statistical functions that we may want to use directly from SQL. Many of these results can be readily expressed as structured table data for use with other SQL tables, or for use by SQL-enabled applications, e.g., dashboards or other statistical tools.

    In this blog post, we illustrate in a sequence of five simple steps how to go from an R function to a SQL-enabled result. Taken from a recent "proof of concept" customer engagement, our example involves using the function princomp, which performs a principal components analysis on a given numeric data matrix and returns the results as an object of class princomp. The customer actively uses this R function to produce loadings used in subsequent computations and analysis. The loadings object is a matrix whose columns contain the eigenvectors.

    The current process of pulling data from their Oracle Database, starting an R engine, invoking the R script, and placing the results back in the database was proving non-performant and unnecessarily complex. The goal was to leverage Oracle R Enterprise to streamline this process and allow the results to be immediately accessible through SQL.

    As a best practice, here is a process that can get you from start to finish:

    Step 1: Invoke from command line, understand results

    If you're using a particular R function, chances are you are familiar with its content. However, you may not be familiar with its structure. We'll use an example from the R princomp documentation that uses the USArrests data set. We see that the class of the result is of type princomp, and the model prints the call and standard deviations of the components. To understand the underlying structure, we invoke the function str and see there are seven elements in the list, one of which is the matrix loadings.

    mod <- princomp(USArrests, cor = TRUE)
    class(mod)
    mod
    str(mod)


    Results:

    R> mod <- princomp(USArrests, cor = TRUE)
    R> class(mod)
    [1] "princomp"
    R> mod
    Call:
    princomp(x = USArrests, cor = TRUE)

    Standard deviations:
       Comp.1    Comp.2    Comp.3    Comp.4
    1.5748783 0.9948694 0.5971291 0.4164494

    4 variables and 50 observations.

    R> str(mod)
    List of 7
    $ sdev : Named num [1:4] 1.575 0.995 0.597 0.416
    ..- attr(*, "names")= chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ loadings: loadings [1:4, 1:4] -0.536 -0.583 -0.278 -0.543 0.418 ...
    ..- attr(*, "dimnames")=List of 2
    .. ..$ : chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    .. ..$ : chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ center : Named num [1:4] 7.79 170.76 65.54 21.23
    ..- attr(*, "names")= chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    $ scale : Named num [1:4] 4.31 82.5 14.33 9.27
    ..- attr(*, "names")= chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    $ n.obs : int 50
    $ scores : num [1:50, 1:4] -0.986 -1.95 -1.763 0.141 -2.524 ...
    ..- attr(*, "dimnames")=List of 2
    .. ..$ : chr [1:50] "1" "2" "3" "4" ...
    .. ..$ : chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ call : language princomp(x = dat, cor = TRUE)
    - attr(*, "class")= chr "princomp"


    Step 2: Wrap script in a function, and invoke from ore.tableApply

    Since we want to invoke princomp on database data, we first push the demo data, USArrests, to the database to create an ore.frame. Other data we wish to use will also be in database tables.

    We'll use ore.tableApply (for the reasons cited in the previous blog post)  providing the ore.frame as the first argument and simply returning within our function the model produced by princomp. We'll then look at its class, retrieve the result from the database, and check its class and structure once again.

    Notice that we are able to obtain the exact same result we received using our local R engine as with the database R engine through embedded R execution.

    dat <- ore.push(USArrests)
    computePrincomp <- function(dat) princomp(dat, cor=TRUE)
    res <- ore.tableApply(dat, computePrincomp)


    class(res)
    res.local <- ore.pull(res)
    class(res.local)
    str(res.local)
    res.local
    res


    Results:

    R> dat <- ore.push(USArrests)
    R> computePrincomp <- function(dat) princomp(dat, cor=TRUE)
    R> res <- ore.tableApply(dat, computePrincomp)
    R> class(res)
    [1] "ore.object"
    attr(,"package")
    [1] "OREembed"
    R> res.local <- ore.pull(res)
    R> class(res.local)
    [1] "princomp"


    R> str(res.local)
    List of 7
    $ sdev : Named num [1:4] 1.575 0.995 0.597 0.416
    ..- attr(*, "names")= chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ loadings: loadings [1:4, 1:4] -0.536 -0.583 -0.278 -0.543 0.418 ...
    ..- attr(*, "dimnames")=List of 2
    .. ..$ : chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    .. ..$ : chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ center : Named num [1:4] 7.79 170.76 65.54 21.23
    ..- attr(*, "names")= chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    $ scale : Named num [1:4] 4.31 82.5 14.33 9.27
    ..- attr(*, "names")= chr [1:4] "Murder" "Assault" "UrbanPop" "Rape"
    $ n.obs : int 50
    $ scores : num [1:50, 1:4] -0.986 -1.95 -1.763 0.141 -2.524 ...
    ..- attr(*, "dimnames")=List of 2
    .. ..$ : chr [1:50] "1" "2" "3" "4" ...
    .. ..$ : chr [1:4] "Comp.1" "Comp.2" "Comp.3" "Comp.4"
    $ call : language princomp(x = dat, cor = TRUE)
    - attr(*, "class")= chr "princomp"

    R> res.local
    Call:
    princomp(x = dat, cor = TRUE)

    Standard deviations:
       Comp.1    Comp.2    Comp.3    Comp.4
    1.5748783 0.9948694 0.5971291 0.4164494

    4 variables and 50 observations.
    R> res
    Call:
    princomp(x = dat, cor = TRUE)

    Standard deviations:
       Comp.1    Comp.2    Comp.3    Comp.4
    1.5748783 0.9948694 0.5971291 0.4164494


    4 variables and 50 observations.


    Step 3: Determine what results we really need

    Since we are only interested in the loadings and any result we return needs to be a data.frame to turn it into a database row set (table result), we build the model, transform the loadings object into a data.frame, and return the data.frame as the function result. We then view the class of the result and its values.

    Since we do this from the R API, we can simply print res to display the returned data.frame, as the print does an implicit ore.pull.

    returnLoadings <- function(dat) {
                        mod <- princomp(dat, cor=TRUE)
                        dd <- dim(mod$loadings)
                        ldgs <- as.data.frame(mod$loadings[1:dd[1],1:dd[2]])
                        ldgs$variables <- row.names(ldgs)
                        ldgs
                      }
    res <- ore.tableApply(dat, returnLoadings)
    class(res)
    res

    ore.create(USArrests,table="USARRESTS")


    Results:

    R> res <- ore.tableApply(dat, returnLoadings)
    R> class(res)
    [1] "ore.object"
    attr(,"package")
    [1] "OREembed"
    R> res
                 Comp.1     Comp.2     Comp.3     Comp.4 variables
    Murder   -0.5358995  0.4181809 -0.3412327  0.64922780 Murder
    Assault  -0.5831836  0.1879856 -0.2681484 -0.74340748 Assault
    UrbanPop -0.2781909 -0.8728062 -0.3780158  0.13387773 UrbanPop
    Rape     -0.5434321 -0.1673186  0.8177779  0.08902432 Rape


    Step 4: Load script into the R Script Repository in the database

    We're at the point of being able to load the script into the R Script Repository before invoking it from SQL. We can create the function from R or from SQL. In R,

    ore.scriptCreate('princomp.loadings', returnLoadings)

    or from SQL,

    begin
    --sys.rqScriptDrop('princomp.loadings');
    sys.rqScriptCreate('princomp.loadings',
          'function(dat) {
            mod <- princomp(dat, cor=TRUE)
            dd <- dim(mod$loadings)
            ldgs <- as.data.frame(mod$loadings[1:dd[1],1:dd[2]])
            ldgs$variables <- row.names(ldgs)
            ldgs
          }');
    end;
    /


    Step 5: invoke from SQL SELECT statement

    Finally, we're able to invoke the function from SQL using the rqTableEval table function. We pass in a cursor with the data from our USARRESTS table. We have no parameters, so the next argument is NULL. To get the results as a table, we specify a SELECT string that defines the structure of the result. Note that the column names must be identical to what is returned in the R data.frame. The last parameter is the name of the function we want to invoke from the R script repository.

    Invoking this, we see the result as a table from the SELECT statement.

    select *
    from table(rqTableEval( cursor(select * from USARRESTS),
                            NULL,
                           'select 1 as "Comp.1", 1 as "Comp.2", 1 as "Comp.3", 1 as "Comp.4", cast(''a'' as varchar2(12)) "variables" from dual',
                            'princomp.loadings'));


    Results:

    SQL> select *
    from table(rqTableEval( cursor(select * from USARRESTS),NULL,
              'select 1 as "Comp.1", 1 as "Comp.2", 1 as "Comp.3", 1 as "Comp.4", cast(''a'' as varchar2(12)) "variables" from dual','princomp.loadings'));
    2 3
        Comp.1     Comp.2     Comp.3     Comp.4  variables
    ---------- ---------- ---------- ---------- ------------
    -.53589947  .418180865 -.34123273  .649227804 Murder
    -.58318363  .187985604 -.26814843 -.74340748  Assault
    -.27819087 -.87280619  -.37801579  .133877731 UrbanPop
    -.54343209 -.16731864   .817777908 .089024323 Rape

    As you see above, we have the loadings result returned as a SQL table.


    In this example, we walked through the steps of moving from invoking an R function to obtain a specific result to producing that same result from SQL by invoking an R script at the database server under the control of Oracle Database.

    Monday Aug 12, 2013

    Quick! Swap those models – I’ve got a better one

    (or, Why in-database analytics enables real-time scoring and can make model deployment easy)

    Refreshing predictive models is a standard part of the process when deploying advanced analytics solutions in production environments. In addition, many predictive models need to be used in a real-time setting for scoring customers, whether that is for fraud detection, predicting churn, or recommending next likely product. One of the problems with using vanilla R is that real-time scoring often requires starting an R engine for each score, or enabling some ad hoc mechanism for real-time scoring, which can increase application complexity.

    In this blog post, we look at how Oracle R Enterprise enables:

    • Building models in-database on database data from R
    • Renaming in-database models for use by a stored procedure
    • Invoking the stored procedure to make predictions from SQL
    • Building a second model and swapping it with the original
    • Moving a model from development environment to production environment

    Building the model in R

    So let’s start with building a generalized linear model (GLM) in Oracle Database. For illustration purposes, we’ll use the longley data set from R – a macroeconomic data set that provides a well-known example for a highly collinear regression. In R, type ?longley for the full description of the data set.

    Using the following R script, we create the database table LONGLEY_TABLE from the longley data.frame and then build the model using the in-database GLM algorithm. We’re predicting the number of people employed using the remaining variables. Then, we view the model details using summary and look at the auto-generated fit.name, which corresponds to the name of the Oracle Data Mining (ODM) model in the database. Next, we use the model to predict using the original data, just as a confirmation that the model works as expected.

    ore.connect("rquser","my_sid","my_host","rquser_pswd",1521, all=TRUE)

    ore.create(longley, table="LONGLEY_TABLE")

    mod.glm <- ore.odmGLM(Employed ~ ., data = LONGLEY_TABLE)

    summary(mod.glm)

    mod.glm$fit.name

    predict(mod.glm, LONGLEY_TABLE)

    While a user can refer to the ODM model by its name in fit.name, for example, when working with it in SQL or the Oracle Data Miner GUI, this may not be convenient since it will look something like ORE$23_123. In addition, unless the R object mod.glm is saved in an ORE datastore (an ORE feature corresponding to R’s save and load functions using ore.save and ore.load, but in the database), at the end of the session, this object and corresponding ODM model will be removed.

    In addition, we’ll want to have a common name for the model so that we can swap an existing model with a new model and not have to change higher-level code. To rename an ODM model, we can use the PL/SQL statement shown here, invoked from R using ore.exec. Of course, this could also be done from any SQL interface, e.g., SQL*Plus, SQL Developer, etc., just by supplying the explicit SQL.

    ore.exec(paste("BEGIN DBMS_DATA_MINING.RENAME_MODEL(model_name => '", mod.glm$fit.name, "', new_model_name => 'MY_GLM_MODEL'); END;",sep=""))

    So now, we have the ODM model named MY_GLM_MODEL. Keep in mind, after the model is renamed, the original model no longer exists and the R object is invalid – at least from the standpoint of being able to use it in functions like summary or predict.

    Scoring data from a SQL procedure

    As noted above, users can score in batch from R; however, they can also score in batch from SQL. But we’re interested in real-time scoring from the database using the in-database model. This can be done directly in a SQL query by providing the input data in the query itself. This eliminates having to write data to a database table and then doing a lookup to retrieve the data for scoring – making it real-time.

    The following SQL does just this. The WITH clause defines the input data, selecting from dual. The SELECT clause uses the model MY_GLM_MODEL to make the prediction using the data defined by data_in.

    WITH data_in as (select 2013 "Year",

    234.289 "GNP",

    235.6 "Unemployed",

    107.608 "Population",

    159 "Armed.Forces",

    83 "GNP.deflator",

    60.323 "Employed"

    from dual )

    SELECT PREDICTION(MY_GLM_MODEL USING *) "PRED"

    FROM data_in

    While we could invoke the SQL directly, having a stored procedure in the database can give us more flexibility. Here’s the stored procedure version in PL/SQL.

    CREATE OR REPLACE

    PROCEDURE MY_SCORING_PROC (year_in IN NUMBER,

    gnp_in IN BINARY_DOUBLE,

    unemployed_in IN BINARY_DOUBLE,

    population_in IN BINARY_DOUBLE,

    armed_forces_in IN BINARY_DOUBLE,

    gnp_deflator_in IN BINARY_DOUBLE,

    employed_in IN BINARY_DOUBLE,

    pred_out OUT NUMBER) AS

    BEGIN

    WITH data_in as (select year_in "Year",

    gnp_in "GNP",

    unemployed_in "Unemployed",

    population_in "Population",

    armed_forces_in "Armed.Forces",

    gnp_deflator_in "GNP.deflator",

    employed_in "Employed"

    from dual ),

    model_score as (SELECT PREDICTION(MY_GLM_MODEL USING *) "PRED"

    FROM data_in )

    select PRED into pred_out from model_score;

    EXCEPTION

    WHEN OTHERS THEN

    raise_application_error(-20001,

    'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);

    END;

    To invoke the stored procedure, we can do the following:

    SET SERVEROUTPUT ON

    DECLARE

    score NUMBER;

    BEGIN

    MY_SCORING_PROC(1947, 234.289, 235.6, 107.608, 159, 83, 60.323, score);

    DBMS_OUTPUT.PUT_LINE('Score: '|| score);

    END;

    Refreshing the model from R

    Let’s say the model above has been in production for a while, but has become stale – that is, it’s not predicting as well as it used to due to changing patterns in the data. To refresh it, we build a new model. For illustration purposes, we’re going to use the same data (so an identical model will be produced, except for its name).

    mod.glm2 <- ore.odmGLM(Employed ~ ., data = LONGLEY_TABLE)

    summary(mod.glm2)

    mod.glm2$fit.name

    To swap the models, we delete the existing model called MY_GLM_MODEL and rename the new model to MY_GLM_MODEL. Again, we can do this from R using PL/SQL and through ore.exec.

    ore.exec(paste("BEGIN DBMS_DATA_MINING.DROP_MODEL('MY_GLM_MODEL'); DBMS_DATA_MINING.RENAME_MODEL(model_name => '",mod.glm2$fit.name,"', new_model_name => 'MY_GLM_MODEL'); END;",sep=""))

    We can now re-execute the stored procedure and the new model will be used.

    SET SERVEROUTPUT ON

    DECLARE

    score NUMBER;

    BEGIN

    MY_SCORING_PROC(1947, 234.289, 235.6, 107.608, 159, 83, 60.323, score);

    DBMS_OUTPUT.PUT_LINE('Score: '|| score);

    END;

    You may have noticed that this approach can introduce a brief period where no model is accessible - between the DROP_MODEL and RENAME_MODEL. A better approach involves the use of SYNONYMs. In general, synonyms provide both data independence and location transparency, being an alternative name for a table, view, sequence, procedure, stored function, and other database objects. We can use this in conjunction with our stored procedure above. First, create a synonym for the original scoring procedure.

    CREATE or REPLACE SYNONYM MY_SCORING_PROC_SYM for MY_SCORING_PROC;

    When invoking the procedure from your application, use the name MY_SCORING_PROC_SYM in place of MY_SCORING_PROC. Instead of renaming the model, create a second stored procedure with a different name, e.g., MY_SCORING_PROC_2. The new procedure references the name of the newly built model internally.

    When it is time to swap the models, invoke the following to change the procedures.

     

    CREATE or REPLACE SYNONYM MY_SCORING_PROC_SYM for MY_SCORING_PROC_2;

    Another benefit of this approach is that replaced models can still be kept should you need to revert to a previous version. 

    Moving an in-database model from one machine to another

    In a production deployment, there’s often the need to move a model from the development environment to the production environment. For example, the data scientist may have built the model in a development / sandbox environment and now needs to move it to the production machine(s).

    In-database models provide functions EXPORT_MODEL and IMPORT_MODEL as part of the DBMS_DATA_MINING SQL package. See the 11g documentation for details. These calls can be invoked from R, but we’ll show this from SQL just to keep the flow easier to see.

    From a SQL prompt, e.g., from SQL*Plus, connect to the schema that contains the model. Create a DIRECTORY object where the exported model file will be stored. List the model names available to this schema, which should contain MY_GLM_MODEL. Then, export the model:

    CONNECT rquser/rquser_psw

    CREATE OR REPLACE DIRECTORY rquserdir AS '/home/MY_DIRECTORY';

    -- list the models available to rquser

    SELECT name FROM dm_user_models;

    -- export the model called MY_GLM_MODEL to a dump file in same schema

    EXECUTE DBMS_DATA_MINING.EXPORT_MODEL ('MY_GLM_MODEL_out',

    'RQUSERDIR',

    'name = ''MY_GLM_MODEL''');

    At this point, you have the ODM model named MY_GLM_MODEL in the file MY_GLM_MODEL_out01.dmp stored in the file system under /home/MY_DIRECTORY. This file can now be moved to the production environment and the model loaded into the target schema.

    Log into the new schema and invoke IMPORT_MODEL.

    CONNECT rquser2/rquser2_psw

    EXECUTE DBMS_DATA_MINING.IMPORT_MODEL ('MY_GLM_MODEL_out01.dmp',

    'RQUSERDIR', 'name = ''MY_GLM_MODEL''',

    'IMPORT', NULL, 'glm_imp_job', 'rquser:rquser2');

    Summary

    In this post, we’ve highlighted how to build an in-database model in R and use it for scoring through SQL in a production, real-time setting. In addition, we showed how it is possible to swap, or refresh, models in a way that can leave your application code untouched. Finally, we highlighted database functionality that allows you to move in-database models from one database environment to another.

    Users should note that all the functionality shown involving SQL, or being invoked through ore.exec, can be easily wrapped in R functions that could ultimately become part of ORE. If any of our readers are interested in giving this a try, we can post your solution here to share with the R and Oracle community. For the truly adventurous, check out the Oracle Database package DBMS_FILE_TRANSFER to consider wrapping the ability to move model files from R as well.

    Friday Jul 19, 2013

    Oracle R Connector for Hadoop 2.2.0 released

    Oracle R Connector for Hadoop 2.2.0 is now available for download. The Oracle R Connector for Hadoop 2.x series has introduced numerous enhancements, which are highlighted in this article and summarized as follows:

    ORCH 2.0.0

    • Analytic functions: orch.lm, orch.lmf, orch.neural, orch.nmf
    • Oracle Loader for Hadoop (OLH) support
    • CDH 4.2.0
    • ORCHhive transparency layer

    ORCH 2.1.0

    • Analytic functions: orch.cor, orch.cov, orch.kmeans, orch.princomp, orch.sample (by percent)
    • Configurable delimiters in text input data files
    • Map-only and reduce-only jobs
    • Keyless map/reduce output
    • "Pristine" data mode for high performance data access
    • HDFS cache of metadata
    • Hadoop Abstraction Layer (HAL)

    ORCH 2.2.0

    • Analytic functions: orch.sample (by number of rows)
    • CDH 4.3.0
    • Full online documentation
    • Support for integer and matrix data types in hdfs.attach with detection of "pristine" data
    • Out-of-the-box support for "pristine" mode for high I/O performance
    • HDFS cache to improve interactive performance when navigating HDFS directories and file lists
    • HDFS multi-file upload and download performance enhancements
    • HAL for Hortonworks Data Platform 1.2 and Apache Hadoop 1.0

    ORCH 2.0.0

    In ORCH 2.0.0, we introduced four Hadoop-enabled analytic functions supporting linear regression, low rank matrix factorization, neural network, and non-negative matrix factorization. These enable R users to immediately begin using advanced analytics functions on HDFS data using the MapReduce paradigm on a Hadoop cluster without having to design and implement such algorithms themselves.

    While ORCH 1.x supported moving data between the database and HDFS using sqoop, ORCH 2.0.0 supports the use of Oracle Loader for Hadoop (OLH) to move very large data volumes from HDFS to Oracle Database in an efficient and high-performance manner.

    ORCH 2.0.0 supported Cloudera Distribution for Hadoop (CDH) version 4.2.0 and introduced the ORCHhive transparency layer, which leverages the Oracle R Enterprise transparency layer for SQL, but instead maps to HiveQL, a SQL-like language for manipulating HDFS data via Hive tables.

    ORCH 2.1.0

In ORCH 2.1.0, we added several more analytic functions, including correlation and covariance, clustering via K-Means, principal component analysis (PCA), and sampling by specifying the percent of records to return.

ORCH 2.1.0 also brought a variety of features, including configurable delimiters (any ASCII delimiter, not just comma-delimited text files), the ability to specify map-only and reduce-only jobs, and the output of NULL keys in mapper and reducer functions.

    To speed the loading of data into Hadoop jobs, ORCH introduced “pristine” mode where the user guarantees that the data meets certain requirements so that ORCH skips a time-consuming data validation step. “Pristine” data requires that numeric columns contain only numeric data, that missing values are either R’s NA or the null string, and that all rows have the same number of columns. This improves performance of hdfs.get on a 1GB file by a factor of 10.
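To make these requirements concrete, here is a quick sanity check one might run before relying on pristine mode. This is a hypothetical helper written in plain R, not part of ORCH; the numeric.cols argument (the positions of the numeric columns) is something you would supply, and the file is assumed to have no header row.

# Hypothetical helper (not part of ORCH): check a delimited text file against
# the "pristine" requirements before asserting pristine mode.
check.pristine <- function(path, numeric.cols, sep = ",") {
  rows   <- strsplit(readLines(path), sep, fixed = TRUE)
  widths <- vapply(rows, length, integer(1))
  same.width <- length(unique(widths)) == 1     # every row has the same number of columns
  numeric.ok <- all(vapply(rows, function(r) {
    vals <- r[numeric.cols]
    ok <- vals %in% c("NA", "") |               # missing values must be NA or the null string
      !is.na(suppressWarnings(as.numeric(vals)))  # otherwise the value must parse as numeric
    all(ok)
  }, logical(1)))
  same.width && numeric.ok
}

# e.g., check.pristine("myDataFile.csv", numeric.cols = 2:3)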

ORCH 2.1.0 introduced the caching of ORCH metadata, making functions such as hdfs.ls, hdfs.describe, and hdfs.mget between 5x and 70x faster.

The Hadoop Abstraction Layer, or HAL, enables ORCH to work on top of various Hadoop versions and variants, including Apache and Hortonworks distributions as well as Cloudera CDH3 and CDH 4.x with MR1 and MR2.

    ORCH 2.2.0

In the latest release, ORCH 2.2.0, we’ve augmented orch.sample to allow specifying the number of rows in addition to a percentage of rows. CDH 4.3 is now supported, and ORCH functions provide full online documentation via R's help function or ?. The function hdfs.attach now supports integer and matrix data types and can detect "pristine" data automatically. HDFS bulk directory upload and download performance was also improved. Through the caching and automatic synchronization of ORCH metadata and file lists, the responsiveness of HDFS metadata-related functions has improved by 3x over ORCH 2.1.0, which also improves performance of the hadoop.run and hadoop.exec functions. These improvements bring a more interactive experience for the R user when working with HDFS.

    Starting in ORCH 2.2.0, we introduced out-of-the-box tuning optimizations for high performance and expanded HDFS caching to include the caching of file lists, which further improves performance of HDFS-related functions.

The function hdfs.upload now supports uploading multi-file directories in a single invocation, rather than one call per file. When downloading an HDFS directory, hdfs.download is optimized to issue a single HDFS command to download files into one local temporary directory before combining the separate parts into a single file.

The Hadoop Abstraction Layer (HAL) was extended to support Hortonworks Data Platform 1.2 and Apache Hadoop 1.0. In addition, ORCH now allows the user to override the HAL version via system environment variables, for use with unofficially supported Hadoop distributions. This enables testing and certification of ORCH by other Hadoop distribution vendors.

Certification of ORCH on unofficially supported platforms can be done using a separate test kit (available for download upon request: mark.hornick@oracle.com) that includes an extensive set of tests for core ORCH functionality and can be run using ORCH's built-in testing framework. Running the tests pinpoints any failures and verifies that ORCH is compatible with the target platform.

    See the ORCH 2.2.0 Change List and Release Notes for additional details. ORCH 2.2.0 can be downloaded here.


    Tuesday May 28, 2013

    Converting Existing R Scripts to ORE - Getting Started

    Oracle R Enterprise provides a comprehensive, database-centric environment for end-to-end analytical processes in R, with immediate deployment to production environments. This message really resonates with our customers who are interested in executing R functions on database-resident data while seamlessly leveraging Oracle Database as a high-performance computing (HPC) environment. The ability to develop and operationalize R scripts for analytical applications in one step is quite appealing.

One frequently asked question is how to convert existing R code that accesses data in flat files or the database to use Oracle R Enterprise. In this blog post, we talk about a few scenarios and how to begin converting existing R code to use Oracle R Enterprise.

    Consider the following scenarios:

Scenario 1: A stand-alone R script that generates its own data and simply returns a result. Data is not obtained from the file system or database. This may result from performing simulations where data is dynamically generated, or perhaps from data accessed via a URL on the internet.

    Scenario 2: An R script that loads data from a flat file such as a CSV file, performs some computations in R, and then writes the result back to a file.

Scenario 3: An R script that loads data from a database table, via one of the database connector packages like RODBC, RJDBC, or ROracle, and writes a result back to the database using SQL statements or package functions.

    Scenario 1

    A stand-alone R script might normally be run on a user’s desktop, invoked as a cron job, or even via Java to spawn an R engine and retrieve the result, but we’d like to operationalize its execution as part of a database application, invoked from SQL. Here’s a simple script to illustrate the concept of converting such a script to be executed at the database server using ORE’s embedded R execution. The script generates a data.frame with some random columns, performs summary on that data and returns the summary statistics, which are represented as an R table.

    # generate data

    set.seed(1)

    n <- 1000

    df <- 3

    x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n,df=df))

    # perform some analysis

    res <- summary(x)

    #return the result

    res


    To convert this to use ORE, create a function with appropriate arguments and body, for example:

    myFunction1 <- function (n = 1000, df = 3,seed=1) {

    set.seed(seed)

    x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n,df=df))

    res <- summary(x)

    res

    }

Next, load the ORE packages and connect to Oracle Database using the ore.connect function. Setting the all argument to TRUE loads metadata for all the tables and views in that schema. We then store the function in the R script repository and invoke it via ore.doEval.

    # load ORE packages and connect to Oracle Database

    library(ORE)

    ore.connect("schema","sid","hostname","password",port=1521, all=TRUE)

    # load function into R script repository

    ore.scriptDrop("myFunction-1")

    ore.scriptCreate("myFunction-1", myFunction1)

    # invoke using embedded R execution at the database server

    ore.doEval(FUN.NAME="myFunction-1")

    > ore.doEval(FUN.NAME="myFunction-1")
           a                b                  c           
     Min.   :   1.0   Min.   :-3.00805   Min.   : 0.03449  
     1st Qu.: 250.8   1st Qu.:-0.69737   1st Qu.: 1.27386  
     Median : 500.5   Median :-0.03532   Median : 2.36454  
     Mean   : 500.5   Mean   :-0.01165   Mean   : 3.07924  
     3rd Qu.: 750.2   3rd Qu.: 0.68843   3rd Qu.: 4.25994  
     Max.   :1000.0   Max.   : 3.81028   Max.   :17.56720  

Of course, we’re using default values here. To provide different arguments, change the invocation as follows:

    ore.doEval(FUN.NAME="myFunction-1", n=500, df=5, seed=2)

    > ore.doEval(FUN.NAME="myFunction-1", n=500, df=5, seed=2)
           a               b                  c          
     Min.   :  1.0   Min.   :-2.72182   Min.   : 0.1621  
     1st Qu.:125.8   1st Qu.:-0.65346   1st Qu.: 2.6144  
     Median :250.5   Median : 0.04392   Median : 4.4592  
     Mean   :250.5   Mean   : 0.06169   Mean   : 5.0386  
     3rd Qu.:375.2   3rd Qu.: 0.79096   3rd Qu.: 6.8467  
     Max.   :500.0   Max.   : 2.88842   Max.   :17.0367  

    Having successfully invoked this from the R client (my laptop), we can now invoke it from SQL. Here, we retrieve the summary result, which is an R table, as an XML string.

    select *

    from table(rqEval( NULL,'XML','myFunction-1'));

    The result can be viewed from SQL Developer.

    The following shows the XML output in a more structured manner.


What if we wanted the result to appear as a SQL table? Since the current result is an R table (an R object), we need to convert it to a data.frame to return it. We’ll make a few modifications to “myFunction-1” above. Most notable is the need to convert the table object in res to a data.frame. There are a variety of ways to do this.

    myFunction2 <- function (n = 1000, df = 3,seed=1) {

    # generate data

    set.seed(seed)

    x <- data.frame(a=1:n, b=rnorm(n), c=rchisq(n,df=df))

    # perform some analysis

    res <- summary(x)

    # convert the table result to a data.frame

    res.df <- as.matrix(res)

    res.sum <- as.data.frame(matrix(as.numeric(substr(res.df,9,20)),6,3))

    names(res.sum) <- c('a','b','c')

    res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")

    res.sum <- res.sum[,c(4,1:3)]

    res.sum

    }

    # load function into R script repository

    ore.scriptDrop("myFunction-2")

    ore.scriptCreate("myFunction-2", myFunction2)

    We’ll now modify the SQL statement to specify the format of the result.

    select *

    from table(rqEval( NULL,'select cast(''a'' as VARCHAR2(12)) as "statname",

    1 "a", 1 "b", 1 "c" from dual ','myFunction-2'));

    Here’s the result as viewed from SQL Developer.


This type of result could be incorporated into any SQL application accepting table or view input from a SQL query. That is particularly useful in combination with OBIEE dashboards via an RPD.
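As an aside, parsing summary's character output (the substr call above) is just one way to produce the statistics data.frame. A minimal alternative sketch, hypothetical and not from the original function, computes the values directly from the columns:

# Hypothetical alternative: compute the statistics directly rather than
# parsing the character output of summary().
statsAsDataFrame <- function(x) {
  stats <- sapply(x, function(col)
    c(min(col), quantile(col, 0.25), median(col),
      mean(col), quantile(col, 0.75), max(col)))
  res.sum <- as.data.frame(stats)               # one column per input column (a, b, c)
  res.sum$statname <- c("min", "1stQ", "median", "mean", "3rdQ", "max")
  res.sum[, c(ncol(res.sum), 1:(ncol(res.sum) - 1))]   # put statname first
}

Either form returns a data.frame suitable for the rqEval output format shown above.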

    Scenario 2

If you’ve been loading data from a flat file, perhaps a CSV file, your R code may look like the following: it builds a model and writes that model to a file for future use, perhaps in scoring. It also generates a graph of the clusters, highlighting the individual points colored by their cluster id, with the centroids indicated by a star.

    # read data

    setwd("D:/datasets")

    dat <- read.csv("myDataFile.csv")

    # build a clustering model

cl <- kmeans(dat, 2)

    # write model to file

    save(cl, file="myClusterModel.dat")

    # create a graph and write it to a file

    pdf("myGraphFile.pdf")

plot(dat, col = cl$cluster)

    points(cl$centers, col = 1:2, pch = 8, cex=2)

    dev.off()

    The resulting PDF file contains the following image.


    To convert this script for use in ORE, there are several options. We’ll explore two: the first involving minimal change to use embedded R execution, and the second leveraging in-database techniques. First, we’ll want the data we used above in variable dat to be loaded into the database.

    # create a row id to enable ordered results (if a key doesn’t already exist)

    dat$ID <- 1:nrow(dat)

    # remove the table if it exists

    ore.drop("MY_DATA")

    # create the table using the R data.frame, resulting in an ore.frame named MY_DATA

    ore.create(dat,"MY_DATA")

    # assign the ID column as the row.names of the ore.frame

    row.names(MY_DATA) <- MY_DATA$ID

In the first example, we’ll use embedded R execution and pass the data to the function via ore.tableApply. We’ll generate the graph, but simply display it within the function to allow embedded R execution to return the graph as a result. (Note we could also write the graph to a file in any directory accessible to the database server.) Instead of writing the model to a file, which requires keeping track of its location as well as worrying about backup and recovery, we store the model in the database R datastore using ore.save. All this requires minimal change. As above, we could store the function in the R script repository and invoke it by name, both from R and SQL. In this example, we simply provide the function itself as the argument.

    myClusterFunction1 <- function(x) {

    cl <- kmeans(x, 2)

    ore.save(cl, name="myClusterModel",overwrite=TRUE)

    plot(x, col = cl$cluster)

    points(cl$centers, col = 1:2, pch = 8, cex=2)

    TRUE

    }

    ore.tableApply(MY_DATA[,c('x','y')], myClusterFunction1,

    ore.connect=TRUE,ore.png.height=700,ore.png.width=700)

    The ore.tableApply function projects the x and y columns of MY_DATA as input and also specifies ore.connect as TRUE since we are using the R datastore, which requires a database connection. Optionally, we can specify control arguments to the PNG output. In this example, these are the height and width of the image.

For the second example, we convert this to leverage the ORE Transparency Layer. We’ll use the in-database K-Means algorithm and save the model in a datastore named “myClusterModel”, as we did above. Since ore.odmKMeans doesn’t automatically assign cluster ids (the data may be very large, or the assignments may not be required), scoring is done separately. Note, however, that the prediction results also exist in the database as an ore.frame. To ensure ordering, we also assign row.names to the ore.frame pred. Lastly, we create the plot. Coloring the points requires pulling the cluster assignments to the client; the points themselves, however, can be accessed from the ore.frame. The centroid points are obtained from the cl$centers2 component of the cluster model.

    # build a clustering model in-database

    cl <- ore.odmKMeans(~., MY_DATA, 2, auto.data.prep=FALSE)

    # save model in database R datastore

    ore.save(cl,name="myClusterModel",overwrite=TRUE)

    # generate predictions to assign each row a cluster id, supplement with original data

    pred <- predict(cl,MY_DATA,supp=c('x','y','ID'),type="class")

    # assign row names to ensure ordering of results

    row.names(pred) <- pred$ID

    # create the graph

    plot(pred[,c('x','y')], col = ore.pull(pred$CLUSTER_ID))

    points(cl$centers2[,c('x','y')], col = c(2,3), pch = 8, cex=2)

    We can also combine using the transparency layer within an embedded R function. But we’ll leave that as an exercise to the reader.
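For readers who want a head start on that exercise, here is one possible sketch. It is an assumption about how the pieces fit together, not code from this post, and relies on calling ore.sync and ore.get inside the embedded function (with ore.connect=TRUE) to make the table visible there.

# Hypothetical sketch: use the transparency layer inside an embedded R function.
myClusterFunction2 <- function() {
  ore.sync(table = "MY_DATA")     # refresh table metadata in the embedded session
  dat <- ore.get("MY_DATA")       # obtain the ore.frame
  cl  <- ore.odmKMeans(~., dat[, c("x", "y")], 2, auto.data.prep = FALSE)
  ore.save(cl, name = "myClusterModel", overwrite = TRUE)
  TRUE
}

ore.doEval(myClusterFunction2, ore.connect = TRUE)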

    Scenario 3

In this last scenario, the data already exists in the database, and one of the database interface packages, such as RODBC, RJDBC, or ROracle, is used to retrieve data from and write data to the database. We’ll illustrate this with ROracle, but the same holds for the other two packages.

    # connect to the database

library(ROracle)

drv <- dbDriver("Oracle")

    con <- dbConnect(drv, "mySchema", "myPassword")

    # retrieve the data specifying a SQL query

    dat <- dbGetQuery(con, 'select * from MY_RANDOM_DATA where "a" > 100')

    # perform some analysis

    res <- summary(dat)

    # convert the table result to a data.frame for output as table

    res.df <- as.matrix(res)

    res.sum <- as.data.frame(matrix(as.numeric(substr(res.df,9,20)),6,3))

    names(res.sum) <- c('a','b','c')

    res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")

    res.sum <- res.sum[,c(4,1:3)]

    res.sum

    dbWriteTable(con, "SUMMARY_STATS", res.sum)

Converting this to ORE is straightforward. We’re already connected to the database using ore.connect from the previous scenarios, so the existing table MY_RANDOM_DATA is already available as an ore.frame. Executing ore.ls shows the table in the result, so we can just start using it.

    > ore.ls(pattern="MY_RAND")

    [1] "MY_RANDOM_DATA"

    # no need to retrieve the data, use the transparency layer to compute summary

res <- with(MY_RANDOM_DATA, summary(MY_RANDOM_DATA[a > 100,]))

    # convert the table result to a data.frame for output as table

    res.df <- as.matrix(res)

    res.sum <- as.data.frame(matrix(as.numeric(substr(res.df,9,20)),6,3))

    names(res.sum) <- c('a','b','c')

    res.sum$statname <- c("min","1stQ","median","mean","3rdQ","max")

    res.sum <- res.sum[,c(4,1:3)]

    # create the database table

    ore.create(res.sum, "SUMMARY_STATS")

    SUMMARY_STATS


    As we did in previous scenarios, this script can also be wrapped in a function and used in embedded R execution. This too is left as an exercise to the reader.
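As a starting point for that exercise, one possible sketch, again an assumption rather than code from this post, wraps the transparency-layer version in a function, registers it in the R script repository, and invokes it through embedded R execution:

# Hypothetical sketch: wrap the Scenario 3 conversion for embedded R execution.
mySummaryFunction <- function() {
  ore.sync(table = "MY_RANDOM_DATA")
  dat <- ore.get("MY_RANDOM_DATA")
  res <- with(dat, summary(dat[a > 100, ]))
  # convert the table result to a data.frame, as above
  res.df  <- as.matrix(res)
  res.sum <- as.data.frame(matrix(as.numeric(substr(res.df, 9, 20)), 6, 3))
  names(res.sum) <- c("a", "b", "c")
  res.sum$statname <- c("min", "1stQ", "median", "mean", "3rdQ", "max")
  res.sum <- res.sum[, c(4, 1:3)]
  ore.drop("SUMMARY_STATS")            # drop any previous result table
  ore.create(res.sum, "SUMMARY_STATS")
  res.sum
}

ore.scriptDrop("mySummaryFunction")
ore.scriptCreate("mySummaryFunction", mySummaryFunction)
ore.doEval(FUN.NAME = "mySummaryFunction", ore.connect = TRUE)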

    Summary

    As you can see from the three scenarios discussed here, converting a script that accesses no external data, accesses and manipulates file data, or accesses and manipulates database data can be accomplished with a few strategic modifications. More involved scripts, of course, may require additional manipulation. For example, if the SQL query performs complex joins and filtering, along with derived column creation, the user may want to convert this SQL to the corresponding ORE Transparency Layer code, thereby eliminating reliance on SQL. But that’s a topic for another post.

    Monday Nov 19, 2012

    Join us at BIWA Summit 2013!

    Registration is now open for BIWA Summit 2013.  This event, focused on Business Intelligence, Data Warehousing and Analytics, is hosted by the BIWA SIG of the IOUG on January 9 and 10 at the Hotel Sofitel, near Oracle headquarters in Redwood City, California.

Be sure to check out our featured speakers, including Oracle executives Balaji Yelamanchili, Vaishnavi Sashikanth, and Tom Kyte, as well as sports analyst Ari Kaplan and many other internationally recognized speakers.  Hands-on labs will give you the opportunity to try out much of the Oracle software for yourself (including Oracle R Enterprise); be sure to bring a laptop capable of running Windows Remote Desktop.  There will be over 35 sessions on a wide range of BIWA-related topics.  See the BIWA Summit 2013 web site for details and be sure to register soon, while early bird rates still apply.

    About

    The place for best practices, tips, and tricks for applying Oracle R Enterprise, Oracle R Distribution, ROracle, and Oracle R Advanced Analytics for Hadoop in both traditional and Big Data environments.
