Saturday Mar 28, 2015

Use Repository APIs to Manage and Schedule Workflows to run

Data Miner 4.1 ships with a set of repository PL/SQL APIs that allow applications to manage Data Miner projects and workflows directly. The workflow APIs enable applications to execute workflows immediately or schedule them to run at specific time intervals or on defined schedules. The workflow run APIs internally use Oracle Scheduler for the scheduling functionality. Moreover, repository views are provided for applications to query project and workflow information. Applications can also monitor workflow execution status and query generated results using these views.

With the workflow APIs, applications can seamlessly integrate workflow execution into their own processes.  Moreover, all generated results remain accessible to Data Miner, so you can view them using the Data Miner user interface.
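
As a rough illustration, the snippet below sketches how an application might query the repository for workflows and kick off a run. The view and procedure names used here (ODMR_USER_PROJECT_WORKFLOW, ODMR_WORKFLOW_API.WF_RUN) are placeholders for illustration only; refer to the white paper below for the actual repository views and PL/SQL API signatures.

-- Hypothetical sketch only: the view and procedure names are placeholders,
-- not the documented API. See the white paper for the real names and signatures.

-- List the projects and workflows visible to the current Data Miner account.
SELECT project_name, workflow_name, status
FROM   odmr_user_project_workflow;    -- placeholder repository view

-- Run a workflow immediately (placeholder package, procedure, and parameters).
BEGIN
  odmr_workflow_api.wf_run(p_project_name  => 'MyProject',
                           p_workflow_name => 'MyWorkflow');
END;
/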

For more information, please read the white paper Use Repository APIs to Manage and Schedule Workflows to run.

Monday Dec 15, 2014

Use Oracle Data Miner to Perform Sentiment Analysis inside Database using Twitter Data Demo

Sentiment analysis has been a hot topic recently; sentiment analysis, or opinion mining, refers to the application of natural language processing, computational linguistics, and text analytics to identify and extract subjective information from source materials.  Social media websites are a good source of people’s sentiments.  Companies have been using social networking sites to make new product announcements, promote their products, collect product reviews and user feedback, interact with their customers, and so on.  It is important for companies to sense customer sentiment toward their products so they can react accordingly and benefit from customers’ opinions.

In this blog, we will show you how to use Data Miner to perform some basic sentiment analysis (based on text analytics) using Twitter data.  The demo data was downloaded from the developer API console page of the Twitter website.  The data itself originated from the Oracle Twitter page, and it contains about a thousand tweets posted in the past six months (May to Oct 2014).  We will determine the sentiments (highly favored, moderately favored, and less favored) of tweets based on their favorite counts, and assign a sentiment to each tweet.  We then build classification models using these tweets along with their assigned sentiments.  The goal is to predict how well a new tweet will be received by customers.  This may help the marketing department better craft a tweet before it is posted.

The demo (click here to download the demo Twitter data and workflow) will use the newly added JSON Query node in Data Miner 4.1 to import the Twitter data; please review the earlier blog entry “How to import JSON data to Data Miner for Mining” for details.

Workflow for Sentiment Analysis

The following workflow shows the process we use to prepare the Twitter data, determine the sentiments of tweets, and build classification models on the data.

The following describes the nodes used in the above workflow:

  • Data Source (TWITTER_LARGE)
    • Select the demo Twitter data source.  The sample Twitter data is attached to this blog.
  • JSON Query (JSON Query)
    • Select the required JSON attributes used for analysis; we only use the “id”, “text”, and “favorite_count” attributes.  The “text” attribute contains the tweet, and the “favorite_count” attribute indicates how many times the tweet has been favorited.
  • SQL Query (Cleanse Tweets)
    • Remove shortened URLs and punctuation within tweets because they contain no predictive information.
  • Filter Rows (Filter Rows)
    • Remove retweeted tweets because these are duplicate tweets.
  • Transform (Transform)
    • Quantile-bin the “favorite_count” data into three quantiles; each quantile represents a sentiment.  The top quantile represents the “highly favored” sentiment, the middle quantile the “moderately favored” sentiment, and the bottom quantile the “less favored” sentiment.
  • SQL Query (Recode Sentiment)
    • Assign quantiles as determined sentiments to tweets.
  • Create Table (OUTPUT_4_29)
    • Persist the data to a table for classification model build (optional).
  • Classification (Class Build)
    • Build classification models to predict customer sentiment toward a new tweet (how much will customers like this new tweet?).

Data Source Node (TWITTER_LARGE)

Select the JSON_DATA in the TWITTER_LARGE table.  The JSON_DATA contains about a thousand tweets to be used for sentiment analysis.

JSON Query Node (JSON Query)

Use the new JSON Query node to select the following JSON attributes.  This node projects the JSON data to relational data format, so that it can be consumed within the workflow process.

SQL Query Node (Cleanse Tweets)

Use the REGEXP_REPLACE function to remove numbers, punctuation, and shortened URLs inside tweets because these are considered noise and do not provide any predictive information.  Notice we do not treat hashtags inside tweets specially; these tags are treated as regular words.

We specify the number, punctuation, and URL patterns in regular expression syntax and use the database function REGEXP_REPLACE to replace these patterns inside all tweets with an empty string.

SELECT
REGEXP_REPLACE("JSON Query_N$10055"."TWEET", '([[:digit:]*]|[[:punct:]*]|(http[s]?://(.*?)(\s|$)))', '', 1, 0) "TWEETS",
"JSON Query_N$10055"."FAVORITE_COUNT",
"JSON Query_N$10055"."ID"
FROM
"JSON Query_N$10055"

Filter Rows Node (Filter Rows)

Remove retweeted tweets because these are duplicate tweets.  Usually, retweeted tweets start with the abbreviation “RT”, so we specify a row filter condition to filter out those tweets, such as the sketch below.
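
A minimal sketch of such a filter condition, assuming the cleansed tweet column is named TWEETS as in the Cleanse Tweets query above:

"TWEETS" NOT LIKE 'RT%'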

Transform Node (Transform)

Use the Transform node to quantile-bin the “favorite_count” data into three quantiles; each quantile represents a sentiment.  For simplicity, we just bin the count into three quantiles without applying any special treatment first.

SQL Query Node (Recode Sentiment)

Assign the quantiles as the determined sentiments of the tweets; the top quantile represents the “highly favored” sentiment, the middle quantile “moderately favored”, and the bottom quantile “less favored”.  These sentiments become the target classes for the classification model build.  A conceptual SQL equivalent of the binning and recoding steps is sketched below.
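
The following is only a hand-written approximation of what the Transform and Recode Sentiment steps accomplish (using NTILE against a hypothetical CLEANSED_TWEETS input); it is not the SQL that Data Miner generates:

SELECT "ID",
       "TWEETS",
       CASE NTILE(3) OVER (ORDER BY "FAVORITE_COUNT")
         WHEN 3 THEN 'highly favored'
         WHEN 2 THEN 'moderately favored'
         ELSE 'less favored'
       END "SENTIMENT"
FROM CLEANSED_TWEETS  -- placeholder for the output of the Filter Rows node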

Classification Node (Class Build)

Build classification models using the sentiment as the target and the tweet id as the case id.

Since the TWEETS column contains the textual tweets, we change its mining type to Text Custom.

Enable the Stemming option for text processing.

Compare Test Results

After the model build completes successfully, open the Test Viewer to compare the model test results.  The SVM model seems to produce the best prediction for the “highly favored” sentiment (57% correct prediction).

Moreover, the SVM model has a better lift result than the other models, so we will use this model for scoring.

Sentiment Prediction (Scoring)

Let’s score this tweet “this is a boring tweet!” using the SVM model.

As expected, this tweet receives a “less favored” prediction.

How about this tweet “larry is doing a data mining demo now!” ?

Not surprisingly, this tweet receives a “highly favored” prediction.

Last but not least, let’s see the sentiment prediction for the title of this blog.

Not bad!  It gets a “highly favored” prediction, so it seems this title will be well received by the audience.
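
For reference, single-record scoring like the examples above can also be done directly in SQL using the PREDICTION operators, assuming the SVM model embeds the text transformations created during the Class Build step.  The model name below (CLAS_SVM_1_1) is a placeholder; use the SVM model name shown in your own workflow:

SELECT PREDICTION("CLAS_SVM_1_1" USING 'this is a boring tweet' AS "TWEETS") "SENTIMENT",
       PREDICTION_PROBABILITY("CLAS_SVM_1_1" USING 'this is a boring tweet' AS "TWEETS") "PROBABILITY"
FROM DUAL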

Conclusion

The best SVM model only produces 57% accuracy for the “highly favored” sentiment prediction, but that is reasonably better than a random guess.  With a larger sample of tweet data, the model accuracy could be improved.  The new JSON Query node enables us to perform data mining on JSON data, which is the most popular data format produced by prominent social networking sites.

Monday Dec 08, 2014

How to import JSON data to Data Miner for Mining

JSON is a popular, lightweight data format used in Big Data. Increasingly, much of the data produced by Big Data systems is in JSON format. For example, web logs generated in the middle-tier web servers are likely in JSON format. NoSQL database vendors have chosen JSON as their primary data representation. Moreover, the JSON format is widely used in the RESTful-style Web service responses generated by most popular social media websites like Facebook, Twitter, LinkedIn, etc. This JSON data could potentially contain a wealth of information that is valuable for business use. So it is important that we can bring this data over to Data Miner for analysis and mining purposes.

Oracle Database 12.1.0.2 provides the ability to store and query JSON data. To take advantage of the database JSON support, the upcoming Data Miner 4.1 adds a new JSON Query node that allows users to query JSON data in relational format. In addition, the existing Data Source node and Create Table node are enhanced to allow users to specify JSON data in the input data source.

In this blog, I will show you how to specify JSON data in the input data source and use the JSON Query node to selectively query desirable attributes and project the result into relational format. Once the data is in relational format, users can treat it as a normal relational data source and start analyzing and mining it immediately. The Data Miner repository installation installs a sample JSON dataset, ODMR_SALES_JSON_DATA, which I will be using here. Note that Oracle Big Data SQL supports queries against vast amounts of big data stored in multiple data sources, including Hadoop; users can view and analyze data from various data stores together, as if it were all stored in an Oracle database.

Specify JSON Data

The Data Source and Create Table nodes are enhanced to allow users to specify the JSON data type in the input data source.

Data Source Node

For this demo, we will focus on the Data Source node. To specify JSON data, create a new workflow with a Data Source node. In the Define Data Source wizard, select the ODMR_SALES_JSON_DATA table. Notice there is only one column (JSON_DATA) in this table, which contains the JSON data.

Click Next to go to the next step, where it shows the JSON_DATA column selected with the JSON(CLOB) data type. The JSON prefix indicates the data stored is in JSON format; the CLOB is the original data type. The JSON_DATA column is defined with the new “IS JSON” constraint, which indicates that only valid JSON documents can be stored there. The UI can detect this constraint and automatically select the column as JSON type. If there were no “IS JSON” constraint defined, the column would be shown with the CLOB data type. To manually designate a column as a JSON type, click on the data type itself to bring up an in-place dropdown that lists the original data type (e.g. CLOB) and a corresponding JSON type (e.g. JSON(CLOB)), and select the JSON type. Note: only the following data types can be set to JSON type: VARCHAR2, CLOB, BLOB, RAW, NCLOB, and NVARCHAR2.
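
For context, the “IS JSON” check constraint and the JSON query operators are standard Oracle Database 12c SQL. The sketch below uses a hypothetical table name to illustrate the idea:

-- A table whose CLOB column only accepts valid JSON documents
CREATE TABLE my_json_data (
  json_data CLOB CONSTRAINT json_data_is_json CHECK (json_data IS JSON)
);

-- Scalar values can then be queried directly, for example:
SELECT JSON_VALUE(json_data, '$.CUST_ID' RETURNING NUMBER) cust_id
FROM my_json_data;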

Click Finish and run the node now.

Once the node is run successfully, open the editor to examine the generated JSON schema.

Notice the message “System Generated Data Guide is available” at the bottom of the Selected Attributes listbox. What happens here is that when the Data Source node is run, it parses the JSON documents to produce a schema that represents the document structure. Here is what the schema looks like:

PATH                           TYPE
$."CUST_ID"                    NUMBER
$."EDUCATION"                  STRING
$."OCCUPATION"                 STRING
$."HOUSEHOLD_SIZE"             STRING
$."YRS_RESIDENCE"              STRING
$."AFFINITY_CARD"              STRING
$."BULK_PACK_DISKETTES"        STRING
$."FLAT_PANEL_MONITOR"         STRING
$."HOME_THEATER_PACKAGE"       STRING
$."BOOKKEEPING_APPLICATION"    STRING
$."PRINTER_SUPPLIES"           STRING
$."Y_BOX_GAMES"                STRING
$."OS_DOC_SET_KANJI"           STRING
$."COMMENTS"                   STRING
$."SALES"                      ARRAY
$."SALES"."PROD_ID"            NUMBER
$."SALES"."QUANTITY_SOLD"      NUMBER
$."SALES"."AMOUNT_SOLD"        NUMBER
$."SALES"."CHANNEL_ID"         NUMBER
$."SALES"."PROMO_ID"           NUMBER

The JSON Path expression syntax and associated data type info (OBJECT, ARRAY, NUMBER, STRING, BOOLEAN, NULL) are used to represent JSON document structure. We will refer to this JSON schema as Data Guide throughout the product.

Before we look at the Data Guide in the UI, let’s look at the settings that can affect how it is generated. Click the “JSON Settings…” button to open the JSON Parsing Settings dialog.

The settings are described below:

· Generate Data Guide if necessary

o Generate a Data Guide if it is not already generated in the parent node.

· Sampling

o Sample JSON documents for Data Guide generation.

· Max. number of documents

o Specify the maximum number of JSON documents to be parsed for Data Guide generation.

· Limit Document Values to Process

o Sample JSON document values for Data Guide generation.

· Max. number per document

o Specify the maximum number of JSON document scalar values (e.g. NUMBER, STRING, BOOLEAN, NULL) per document to be parsed for Data Guide generation.

The sampling option is enabled by default to prevent long-running parsing of JSON documents; parsing could take a while for a large number of documents. However, users may supply a Data Guide (Import From File) or reuse an existing Data Guide (Import From Workflow) if a compatible Data Guide is available.

Now let’s look at the Data Guide. Go back to the Edit Data Source Node dialog, select the JSON_DATA column, and click the edit icon above to open the Edit Data Guide dialog. The dialog shows the JSON structure in a hierarchical tree view with data type information. The “Number of Values Processed” field shows the total number of JSON scalar values that were parsed to produce the Data Guide.

Users can control whether to enable Data Guide generation or import a compatible Data Guide via the menu under the icon.

The menu options are described below:

· Default

o Use the “Generate Data Guide if necessary” setting found in the JSON Parsing Setting dialog (see above).

· On

o Always generate a Data Guide.

· Off

o Do not generate a Data Guide.

· Import From Workflow

o Import a compatible Data Guide from a workflow node (e.g. Data Source, Create Table). The option will be set to Off after the import (disable Data Guide generation).

· Import From File

o Import a compatible Data Guide from a file. The option will be set to Off after the import (disable Data Guide generation).

Users can also export the current Data Guide to a file via the icon.

Select JSON Data

In Data Miner 4.1, a new JSON Query node is added to allow users to selectively bring over desirable JSON attributes in relational format.

JSON Query Node

The JSON Query node is added to the Transforms group of the Workflow.

Let’s create a JSON Query node and connect the Data Source node to it.

Double click the JSON Query node to open the editor. The editor consists of four tabs, which are described as follows:

· JSON

The Column dropdown lists all available columns in the data source where a JSON structure (Data Guide) is found. It consists of the following two sub tabs:

o Structure: Show the JSON structure of the selected column in a hierarchical tree view.

o Data: Show a sample of the JSON documents found in the selected column. By default it displays the first 2,000 characters (including spaces) of the documents. Users can change the sample size (max. 50,000 chars) and run the query to see more of the documents.

· Additional Output

o Allow users to select any non-JSON columns in the data source as additional output columns.

· Aggregate

o Allow users to define aggregations of JSON attributes.

· Preview

o Output Columns: Show the columns in the generated relational output.

o Output Data: Show the data in the generated relational output.

JSON Tab

Let’s select some JSON attributes to bring over. Skip the SALES attributes because we want to define aggregations for these attributes (QUANTITY_SOLD and AMOUNT_SOLD).

To peek at the JSON documents, go to the Data tab. You can change the Sample Size to look at more JSON data. Also, you can search for specific data within the displayed documents by using the search control.

Additional Output Tab

If you have any non-JSON columns in the data source that you want to carry over for output, you can select those columns here.

Aggregate Tab

Let’s define aggregations (using the SUM function) for the QUANTITY_SOLD and AMOUNT_SOLD attributes (within the SALES array) for each customer group (group by CUST_ID).

Click the icon in the top toolbar to open the Edit Group By dialog, where you can select CUST_ID as the Group-By attribute. Notice the Group-By attribute can consist of multiple attributes.

Click OK to return to the Aggregate tab, where you can see the selected CUST_ID Group-By attribute is now added to the Group By Attributes table at the top.

Click the icon in the bottom toolbar to open the Add Aggregations dialog, where you can define the aggregations for both QUANTITY_SOLD and AMOUNT_SOLD attributes using the SUM function.

Next, click the icon in the toolbar to open the Edit Sub Group By dialog, where you can specify a Sub-Group By attribute (PROD_ID) to calculate quantity sold and amount sold per product per customer.

Specifying a Sub-Group By column creates a nested table; the nested table contains columns with data type DM_NESTED_NUMERICALS.

Click OK to return to the Aggregate tab, where you can see the defined aggregations are now added to the Aggregation table at the bottom.
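
Conceptually, the Group-By aggregation defined here corresponds to a JSON_TABLE query like the sketch below; this is a hand-written approximation against the sample table, not the SQL that the node actually generates:

SELECT jt.cust_id,
       SUM(jt.quantity_sold) quantity_sold_sum,
       SUM(jt.amount_sold) amount_sold_sum
FROM odmr_sales_json_data d,
     JSON_TABLE(d.json_data, '$'
       COLUMNS (
         cust_id NUMBER PATH '$.CUST_ID',
         NESTED PATH '$.SALES[*]'
           COLUMNS (
             quantity_sold NUMBER PATH '$.QUANTITY_SOLD',
             amount_sold NUMBER PATH '$.AMOUNT_SOLD'
           )
       )
     ) jt
GROUP BY jt.cust_id;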

Preview Tab

Let’s go to the Preview tab to look at the generated relational output. The Output Columns tab shows all output columns and their corresponding source JSON attributes. The output columns can be renamed by using the in-place edit control.

The Output Data tab shows the actual data in the generated relational output.

Click OK to close the editor when you are done. The generated relational output is in single-record case format; each row represents a case. If we had not defined the aggregations for the JSON array attributes, the relational output would have been in multiple-record case format. The multiple-record case format is not suitable for building mining models, except for the Association model (which accepts transactional data format with a transaction id and an item id).

Use Case

Here is an example of how the JSON Query node is used to project the JSON data source to relational format, so that the data can be consumed by the Explore Data node for data analysis and the Class Build node for building models.

Conclusion

This blog shows how JSON data can be brought over to Data Miner via the new JSON Query node. Once the data is projected to relational format, it can easily be consumed by Data Miner for graphing, data analysis, text processing, transformation, and modeling.

Wednesday Oct 08, 2014

2014 was a very good year for Oracle Advanced Analytics at Oracle Open World 2014

2014 was a very good year for Oracle Advanced Analytics at Oracle Open World 2014.   We had a number of customer, partner, and Oracle talks that focused on the Oracle Advanced Analytics Database Option.   See below for links to the presentations.  Check back later on the OOW Sessions Content Catalog, as not all presentations have been uploaded yet.  :-(

Big Data and Predictive Analytics: Fiserv Data Mining Case Study [CON8631]

Moving data mining algorithms to run as native data mining SQL functions eliminates data movement, automates knowledge discovery, and accelerates the transformation of large-scale data to actionable insights from days/weeks to minutes/hours. In this session, Fiserv, a leading global provider of electronic commerce systems for the financial services industry, shares best practices for turning in-database predictive models into actionable policies and illustrates the use of Oracle Data Miner for fraud prevention in online payments. Attendees will learn how businesses that implement predictive analytics in their production processes significantly improve profitability and maximize their ROI.

Developing Relevant Dining Visits with Oracle Advanced Analytics at Olive Garden [CON2898]

Olive Garden, traditionally managing its 830 restaurants nationally, transitioned to a localized approach with the help of predictive analytics. Using k-means clustering and logistic classification algorithms, it divided its stores into five behavioral segments. The analysis leveraged Oracle SQL Developer 4.0 and Oracle R Enterprise 1.3 to evaluate 115 million transactions in just 5 percent the time required by the company’s BI tool. While saving both time and money by making it possible to develop the solution internally, this analysis has informed Olive Garden’s latest remodel campaign and continues to uncover millions in profits by optimizing pricing and menu assortment. This session illustrates how Oracle Advanced Analytics solutions directly affect the bottom line.

A Perfect Storm: Oracle Big Data Science for Enterprise R and SAS Users [CON8331]

With the advent of R and a rich ecosystem of users and developers, a myriad of bloggers, and thousands of packages with functionality ranging from social network analysis and spatial data analysis to empirical finance and phylogenetics, use of R is on a steep uptrend. With new R tools from Oracle, including Oracle R Enterprise, Oracle R Distribution, and Oracle R Advanced Analytics for Hadoop, users can scale and integrate R for their enterprise big data needs. Come to this session to learn about Oracle’s R technologies and what data scientists from smart companies around the world are doing with R.


Market Basket Analysis at Dunkin’ Brands [CON6545]

With almost 120 years of franchising experience, Dunkin’ Brands owns two of the world’s most recognized, beloved franchises: Dunkin’ Donuts and Baskin-Robbins. This session describes a market basket analysis solution built from scratch on the Oracle Advanced Analytics platform at Dunkin’ Brands. This solution enables Dunkin’ to look at product affinity and a host of associated sales metrics with a view to improving promotional effectiveness and cross-sell/up-sell to increase customer loyalty. The presentation discusses the business value achieved and technical challenges faced in scaling the solution to Dunkin’ Brands’ transaction volumes, including engineered systems (Oracle Exadata) hardware and parallel processing at the core of the implementation.

Predictive Analytics with Oracle Data Mining [CON8596]

This session presents three case studies related to predictive analytics with the Oracle Data Mining feature of Oracle Advanced Analytics. Service contracts cancellation avoidance with Oracle Data Mining is about predicting the contracts at risk of cancellation at least nine months in advance. Predicting hardware opportunities that have a high likelihood of being won means identifying such opportunities at least four months in advance to provide visibility into suppliers of required materials. Finally, predicting cloud customer churn involves identifying the customers that are not as likely to renew subscriptions as others.

SQL Is the Best Development Language for Big Data [CON7439]

SQL has a long and storied history. From the early 1980s till today, data processing has been dominated by this language. It has changed and evolved greatly over time, gaining features such as analytic windowing functions, model clauses, and row-pattern matching. This session explores what's new in SQL and Oracle Database for exploiting big data. You'll see how to use SQL to efficiently and effectively process data that is not stored directly in Oracle Database.

Advanced Predictive Analytics for Database Developers on Oracle [CON7977]

Traditional database applications use SQL queries to filter, aggregate, and summarize data. This is called descriptive analytics. The next level is predictive analytics, where hidden patterns are discovered to answer questions that give unique insights that cannot be derived with descriptive analytics. Businesses are increasingly using machine learning techniques to perform predictive analytics, which helps them better understand past data, predict future trends, and enable better decision-making. This session discusses how to use machine learning algorithms such as regression, classification, and clustering to solve a few selected business use cases.

What Are They Thinking? With Oracle Application Express and Oracle Data Miner [UGF2861]

Have you ever wanted to add some data science to your Oracle Application Express applications? This session shows you how you can combine predictive analytics from Oracle Data Miner into your Oracle Application Express application to monitor sentiment analysis. Using Oracle Data Miner features, you can build data mining models of your data and apply them to your new data. The presentation uses Twitter feeds from conference events to demonstrate how this data can be fed into your Oracle Application Express application and how you can monitor sentiment with the native SQL and PL/SQL functions of Oracle Data Miner. Oracle Application Express comes with several graphical techniques, and the presentation uses them to create a sentiment dashboard.

Transforming Customer Experience with Big Data and Predictive Analytics [CON8148]

Delivering a high-quality customer experience is essential for long-term profitability and customer retention in the communications industry. Although service providers own a wealth of customer data within their systems, the sheer volume and complexity of the data structures inhibit their ability to extract the full value of the information. To change this situation, service providers are increasingly turning to a new generation of business intelligence tools. This session begins by discussing the key market challenges for business analytics and continues by exploring Oracle’s approach to meeting these challenges, including the use of predictive analytics, big data, and social network analytics.

There are a few other sessions where Oracle Advanced Analytics is included (e.g., Retail GBU, Big Data Strategy), but they are typically more broadly focused.  If you search the Content Catalog for “Advanced Analytics”, you can find other related presentations that involve OAA.

Hope this helps.  Enjoy!

cb

Tuesday Mar 18, 2014

Deploy Data Miner Apply Node SQL as RESTful Web Service for Real-Time Scoring

The free Oracle Data Miner GUI is an extension to Oracle SQL Developer that enables data analysts to work directly with data inside the database, explore the data graphically, build and evaluate multiple data mining models, apply Oracle Data Mining models to new data, and deploy Oracle Data Mining's predictions and insights throughout the enterprise. The product enables a complete workflow deployment to a production system via generated PL/SQL scripts (see Generate a PL/SQL script for workflow deployment). This time I want to focus on the model scoring side, especially single-record real-time scoring. Wouldn't it be nice if the scoring function could be accessed by different systems on different platforms? How about deploying the scoring function as a Web Service? This way any system that can send an HTTP request can invoke the scoring Web Service and consume the returned result as it sees fit. For example, you could have a mobile app that collects customer data and then invokes the scoring Web Service to determine how likely the customer is to buy life insurance. This blog shows a complete demo from building predictive models to deploying a scoring function as a Web Service. However, the demo does not take into account any authentication or security considerations related to Web Services, which are outside the scope of this blog.

Web Services Requirement

This demo uses the Web Services feature provided by Oracle APEX 4.2 and Oracle REST Data Services 2.0.6 (formerly Oracle APEX Listener). Here are the installation instructions for both products:

For 11g Database

Go to the Oracle Application Express Installation Guide and follow the instructions below:

1.5.1 Scenario 1: Downloading from OTN and Configuring the Oracle Application Express Listener

· Step 1: Install the Oracle Database and Complete Pre-installation Tasks

· Step 2: Download and Install Oracle Application Express

· Step 3: Change the Password for the ADMIN Account

· Step 4: Configure RESTful Services

· Step 5: Restart Processes

· Step 6: Configure APEX_PUBLIC_USER Account

· Step 7: Download and Install Oracle Application Express Listener

· Step 8: Enable Network Services in Oracle Database 11g

· Step 9: Security Considerations

· Step 10: About Developing Oracle Application Express in Other Languages

· Step 11: About Managing JOB_QUEUE_PROCESSES

· Step 12: Create a Workspace and Add Oracle Application Express Users


For 12c Database

Go to the Oracle Application Express Installation Guide (Release 4.2 for Oracle Database 12c) and follow the instructions below:

4.4 Installing from the Database and Configuring the Oracle Application Express Listener

· Install the Oracle Database and Complete Preinstallation Tasks

· Download and Install Oracle Application Express Listener

· Configure RESTful Services

· Enable Network Services in Oracle Database 12c

· Security Considerations

· About Running Oracle Application Express in Other Languages

· About Managing JOB_QUEUE_PROCESSES

· Create a Workspace and Add Oracle Application Express Users


Note: APEX is pre-installed with Oracle Database 12c, but you need to configure it in order to use it.

For this demo, create a Workspace called DATAMINER that is based on an existing user account that has already been granted access to the Data Miner (this blog assumes DMUSER is the Data Miner user account). Please refer to the Oracle By Example Tutorials to review how to create a Data Miner user account and install the Data Miner Repository. In addition, you need to create an APEX user account (for simplicity I use DMUSER).

Build Models to Predict BUY_INSURANCE

This demo uses the demo data set, INSUR_CUST_LTV_SAMPLE, that comes with the Data Miner installation. Now, let’s use the Classification Build node to build some models using the CUSTOMER_ID as the case id and BUY_INSURANCE as the target.

Evaluate the Models

A nice thing about the Build node is that it builds a set of models with different algorithms within the same mining function by default, so we can select the best model to use. Let’s look at the models in the Test Viewer; here we can compare the models by looking at their Predictive Confidence, Overall Accuracy, and Average Accuracy values. Basically, the model with the highest values across these three metrics is a good one to use. As you can see, the winner here is the CLAS_DT_3_6 decision tree model.

Next, let’s see what input data columns are used as predictors for the decision tree model. You can find that information in the Model Viewer below. Surprisingly, it only uses a few columns for the prediction. These columns will be our input data requirement for the scoring function; the rest of the input columns can be ignored.


Score the Model

Let’s complete the workflow with an Apply node, from which we will generate the scoring SQL statement to be used for the Web Service. Here we reuse the INSUR_CUST_LTV_SAMPLE data as input data to the Apply node, and select only the required columns as found in the previous step. Also, in the Class Build node we deselect the other models as output in the Property Inspector (Models tab), so that only the decision tree model will be used for the Apply node. The generated scoring SQL statement will use only the decision tree model to score against the limited set of input columns.

Generate SQL Statement for Scoring

After the workflow is run successfully, we can generate the scoring SQL statement via the “Save SQL” context menu off the Apply node as shown below.

Here is the generated SQL statement:

/* SQL Deployed by Oracle SQL Developer 4.1.0.14.78 from Node "Apply", Workflow "workflow score", Project "project", Connection "conn_12c" on Mar 16, 2014 */
ALTER SESSION set "_optimizer_reuse_cost_annotations"=false;
ALTER SESSION set NLS_NUMERIC_CHARACTERS=".,";
--ALTER SESSION FOR OPTIMIZER
WITH
/* Start of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
"N$10013" as (select /*+ inline */ "INSUR_CUST_LTV_SAMPLE"."BANK_FUNDS",
"INSUR_CUST_LTV_SAMPLE"."CHECKING_AMOUNT",
"INSUR_CUST_LTV_SAMPLE"."CREDIT_BALANCE",
"INSUR_CUST_LTV_SAMPLE"."N_TRANS_ATM",
"INSUR_CUST_LTV_SAMPLE"."T_AMOUNT_AUTOM_PAYMENTS"
from "DMUSER"."INSUR_CUST_LTV_SAMPLE" )
/* End of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
,
/* Start of sql for node: Apply */
"N$10011" as (SELECT /*+ inline */
PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PRED",
PREDICTION_PROBABILITY("DMUSER"."CLAS_DT_3_6", PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) USING *) "CLAS_DT_3_6_PROB",
PREDICTION_COST("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PCST"
FROM "N$10013" )
/* End of sql for node: Apply */
select * from "N$10011";

We need to modify the first SELECT SQL statement to change the data source from a database table to a record that can be constructed on the fly, which is crucial for real-time scoring. The bind variables (e.g. :funds) are used; these variables will be replaced with actual data (passed in by the Web Service request) when the SQL statement is executed.

/* SQL Deployed by Oracle SQL Developer 4.1.0.14.78 from Node "Apply", Workflow "workflow score", Project "project", Connection "conn_12c" on Mar 16, 2014 */
WITH
/* Start of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
"N$10013" as (select /*+ inline */
:funds "BANK_FUNDS",
:checking "CHECKING_AMOUNT",
:credit "CREDIT_BALANCE",
:atm "N_TRANS_ATM",
:payments "T_AMOUNT_AUTOM_PAYMENTS"
from DUAL
)
/* End of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
,
/* Start of sql for node: Apply */
"N$10011" as (SELECT /*+ inline */
PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PRED",
PREDICTION_PROBABILITY("DMUSER"."CLAS_DT_3_6", PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) USING *) "CLAS_DT_3_6_PROB",
PREDICTION_COST("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PCST"
FROM "N$10013" )
/* End of sql for node: Apply */
select * from "N$10011";

Create Scoring Web Service

Assuming Oracle APEX and Oracle REST Data Services have been properly installed and configured, we can proceed to create a RESTful Web Service for real-time scoring. The following steps describe how to create the Web Service in APEX:

1. APEX Login

You can bring up the APEX login screen by pointing your browser to http://<host>:<port>/ords. Enter your Workspace name and account info to log in. The Workspace should be based on the Data Miner DMUSER account for this demo to work.

2. Select SQL Workshop

Select the SQL Workshop icon to proceed.

3. Select RESTful Services

Select the RESTful Services to create the Web Service.

Click the “Create” button to continue.

4. Define Restful Services

Enter the following information to define the scoring Web Service in the RESTful Services Module form:

Name: buyinsurance

URI Prefix: score/

Status: Published

URI Template: buyinsurance?funds={funds}&checking={checking}&credit={credit}&atm={atm}&payments={payments}

Method: GET

Source Type: Query

Format: CSV

Source:

/* SQL Deployed by Oracle SQL Developer 4.1.0.14.78 from Node "Apply", Workflow "workflow score", Project "project", Connection "conn_11204" on Mar 16, 2014 */
WITH
/* Start of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
"N$10013" as (select /*+ inline */
:funds "BANK_FUNDS",
:checking "CHECKING_AMOUNT",
:credit "CREDIT_BALANCE",
:atm "N_TRANS_ATM",
:payments "T_AMOUNT_AUTOM_PAYMENTS"
from DUAL
)
/* End of sql for node: INSUR_CUST_LTV_SAMPLE APPLY */
,
/* Start of sql for node: Apply */
"N$10011" as (SELECT /*+ inline */
PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PRED",
PREDICTION_PROBABILITY("DMUSER"."CLAS_DT_3_6", PREDICTION("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) USING *) "CLAS_DT_3_6_PROB",
PREDICTION_COST("DMUSER"."CLAS_DT_3_6" COST MODEL USING *) "CLAS_DT_3_6_PCST"
FROM "N$10013" )
/* End of sql for node: Apply */
select * from "N$10011";

Note: JSON output format is supported.

Lastly, create the following parameters that are used to pass the data from the Web Service request (URI) to the bind variables used in the scoring SQL statement.

The final RESTful Services Module definition should look like the following. Make sure “Requires Secure Access” is set to “No” (HTTPS secure requests are not addressed in this demo).

Test the Scoring Web Service

Let’s create a simple web page using your favorite HTML editor (I use JDeveloper to create this web page). The page includes a form that is used to collect customer data, and then fires off the Web Service request upon submission to get a prediction and associated probability.

Here is the HTML source of the above Form:

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
  <title>score</title>
</head>
<body>
  <h2>Determine if Customer will Buy Insurance</h2>
  <form action="http://localhost:8080/ords/dataminer/score/buyinsurance" method="get">
    <table>
      <tr>
        <td>Bank Funds:</td>
        <td><input type="text" name="funds"/></td>
      </tr>
      <tr>
        <td>Checking Amount:</td>
        <td><input type="text" name="checking"/></td>
      </tr>
      <tr>
        <td>Credit Balance:</td>
        <td><input type="text" name="credit"/></td>
      </tr>
      <tr>
        <td>Number ATM Transactions:</td>
        <td><input type="text" name="atm"/></td>
      </tr>
      <tr>
        <td>Amount Auto Payments:</td>
        <td><input type="text" name="payments"/></td>
      </tr>
      <tr>
        <td colspan="2" align="right">
          <input type="submit" value="Score"/>
        </td>
      </tr>
    </table>
  </form>
</body>
</html>

When the Score button is pressed, the form sends a GET HTTP request to the web server with the collected form data as name-value parameters encoded in the URL.

http://localhost:8080/ords/dataminer/score/buyinsurance?funds={funds}&checking={checking}&credit={credit}&atm={atm}&payments={payments}

Notice the {funds}, {checking}, {credit}, {atm}, {payments} will be replaced with actual data from the form. This URI matches the URI Template specified in the RESTful Services Module form above.

Let’s test out the scoring Web Service by entering some values in the form and hit the Score button to see the prediction.

The prediction along with its probability and cost is returned as shown below. Unfortunately, this customer is less likely to buy insurance.

Let’s change some values and see if we have any luck.

Bingo! This customer is more likely to buy insurance.

Conclusion

This blog shows how to deploy Data Miner-generated scoring SQL as a Web Service, which can be consumed by different systems on different platforms from anywhere. In theory, any SQL statement generated from a Data Miner node could be exposed as a Web Service. For example, we could have a Web Service that returns Model Details info, and that info could be consumed by a BI tool for application integration purposes.
