Wednesday Apr 22, 2015
By Charlie Berger, Advanced Analytics-Oracle on Apr 22, 2015
OpenWorld 2015 Call for Proposals Extended to Wed, May 6th, 11:59 p.m. Submit your Oracle Advanced Analytics stories now: https://www.oracle.com/openworld/call-for-proposals.html
If you’re an Oracle technology expert, conference attendees want to hear it straight from you. So don’t wait—proposals must be submitted by April 29.
Wanted: Outstanding Oracle Experts
The Oracle OpenWorld 2015 Call for Proposals is now open. Attendees at the conference are eager to hear from experts on Oracle business and technology. They’re looking for insights and improvements they can put to use in their own jobs: exciting innovations, strategies to modernize their business, different or easier ways to implement, unique use cases, lessons learned, the best of best practices.
If you’ve got something special to share with other Oracle users and technologists, they want to hear from you, and so do we. Submit your proposal now for this opportunity to present at Oracle OpenWorld, the most important Oracle technology and business conference of the year.
We recommend you take the time to review the General Information, Submission Information, Content Program Policies, and Tips and Guidelines pages before you begin. We look forward to your submissions.
Submit Your Proposal
By submitting a session for consideration, you authorize Oracle to promote, publish, display, and disseminate the content submitted to Oracle, including your name and likeness, for use associated with the Oracle OpenWorld and JavaOne San Francisco 2015 conferences. Press, analysts, bloggers and social media users may be in attendance at OpenWorld or JavaOne sessions.
- Conference location: San Francisco, California, USA
- Dates: Sunday, October 25 to Thursday, October 29, 2015
- Website: Oracle OpenWorld
Key Dates for 2015
|Call for Proposals—Open||Wednesday, March 25|
|Call for Proposals—Closed||Wednesday, April 29, 11:59 p.m. PDT|
|Notifications for accepted and declined submissions sent||Mid-June|
- For questions regarding the Call for Proposals, send an e-mail to firstname.lastname@example.org.
- For technical questions about the submission tool or issues with submitting your proposal, send an e-mail to OpenWorldContent@gpj.com.
- Oracle employee submitters should contact the appropriate Oracle track leads before submitting. To view a list of track leads, click here.
Monday Dec 15, 2014
By Denny Wong-Oracle on Dec 15, 2014
Sentiment analysis has been a hot topic recently. Sentiment analysis, or opinion mining, refers to the application of natural language processing, computational linguistics, and text analytics to identify and extract subjective information from source materials. Social media websites are a good source of people's sentiments. Companies have been using social networking sites to make new product announcements, promote their products, collect product reviews and user feedback, interact with their customers, and so on. It is important for companies to sense customer sentiment toward their products so they can react accordingly and benefit from customers' opinions.
In this blog, we will show you how to use Data Miner to perform some basic sentiment analysis (based on text analytics) using Twitter data. The demo data was downloaded from the developer API console page of the Twitter website. The data itself originated from the Oracle Twitter page and contains about a thousand tweets posted over six months (May to Oct 2014). We will determine the sentiment (highly favored, moderately favored, or less favored) of each tweet based on its favorite count and assign that sentiment to the tweet. We then build classification models using these tweets along with their assigned sentiments. The goal is to predict how well a new tweet will be received by customers, which may help the marketing department better craft a tweet before it is posted.
The demo (click here to download the demo Twitter data and workflow) uses the JSON Query node newly added in Data Miner 4.1 to import the Twitter data; please review the previous blog entry, “How to import JSON data to Data Miner for Mining.”
Workflow for Sentiment Analysis
The following workflow shows the process we use to prepare the twitter data, determine the sentiments of tweets, and build classification models on the data.
The following describes the nodes used in the above workflow:
- Data Source (TWITTER_LARGE)
- Select the demo Twitter data source. The sample Twitter data is attached with this blog.
- JSON Query (JSON Query)
- Select the required JSON attributes used for analysis; we only use the “id”, “text”, and “favorite_count” attributes. The “text” attribute contains the tweet, and the “favorite_count” attribute indicates how many times the tweet has been favorited.
- SQL Query (Cleanse Tweets)
- Remove shortened URLs and punctuation within tweets because these data contain no predictive information.
- Filter Rows (Filter Rows)
- Remove retweeted tweets because these are duplicate tweets.
- Transform (Transform)
- Perform quantile binning of the “favorite_count” data into three quantiles; each quantile represents a sentiment. The top quantile represents the “highly favored” sentiment, the middle quantile the “moderately favored” sentiment, and the bottom quantile the “less favored” sentiment.
- SQL Query (Recode Sentiment)
- Assign quantiles as determined sentiments to tweets.
- Create Table (OUTPUT_4_29)
- Persist the data to a table for classification model build (optional).
- Classification (Class Build)
- Build classification models to predict customer sentiment toward a new tweet (how much will customer like this new tweet?).
Data Source Node (TWITTER_LARGE)
Select the JSON_DATA in the TWITTER_LARGE table. The JSON_DATA contains about a thousand tweets to be used for sentiment analysis.
JSON Query Node (JSON Query)
Use the new JSON Query node to select the following JSON attributes. This node projects the JSON data to relational data format, so that it can be consumed within the workflow process.
SQL Query Node (Cleanse Tweets)
Use the REGEXP_REPLACE function to remove numbers, punctuation, and shortened URLs inside tweets because these are considered noise and provide no predictive information. Notice we do not treat hash tags inside tweets specially; these tags are treated as regular words.
We specify the number, punctuation, and URL patterns in regular expression syntax and use the database function REGEXP_REPLACE to strip these patterns from all tweets.
REGEXP_REPLACE("JSON Query_N$10055"."TWEET", '([[:digit:]]|[[:punct:]]|(http[s]?://(.*?)(\s|$)))', '', 1, 0) "TWEETS",
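As a rough illustration of the same cleansing step outside the database, here is a minimal Python sketch (the tweet below is a made-up example; in the workflow this is done in-database by REGEXP_REPLACE):

```python
import re

# Same idea as the REGEXP_REPLACE call above: strip URLs, digits, and
# punctuation, which carry no predictive signal, then tidy whitespace.
NOISE = re.compile(r"https?://\S*|\d|[^\w\s]")

def cleanse_tweet(text):
    """Remove URLs, digits, and punctuation, then collapse whitespace."""
    return re.sub(r"\s+", " ", NOISE.sub("", text)).strip()

print(cleanse_tweet("Try Oracle Data Miner 4.1 now! http://bit.ly/abc123"))
# → Try Oracle Data Miner now
```

Note that a hash tag such as `#OAA` simply loses its `#` and survives as the word `OAA`, matching the "treated as regular words" behavior described above.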
Filter Rows Node (Filter Rows)
Remove retweeted tweets because these are duplicate tweets. Usually, retweeted tweets start with the abbreviation “RT”, so we specify the following row filter condition to filter out those tweets.
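The filter reduces to a simple prefix check; a small Python sketch with invented rows (the workflow applies the equivalent condition in-database):

```python
# Hypothetical tweet rows; retweets conventionally start with "RT".
tweets = [
    {"id": 1, "text": "RT @oracle: Data Miner 4.1 is out"},
    {"id": 2, "text": "Trying the new JSON Query node today"},
]

# Equivalent of the row filter condition: drop rows whose tweet
# text starts with the "RT" marker.
originals = [t for t in tweets if not t["text"].startswith("RT")]
print(originals)
```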
Transform Node (Transform)
Use the Transform node to perform quantile binning of the “favorite_count” data into three quantiles; each quantile represents a sentiment. For simplicity, we just bin the count into three quantiles without applying any special treatment first.
SQL Query Node (Recode Sentiment)
Assign the quantiles to tweets as their determined sentiments; the top quantile represents the “highly favored” sentiment, the middle quantile the “moderately favored” sentiment, and the bottom quantile the “less favored” sentiment. These sentiments become the target classes for the classification model build.
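The binning-plus-recode logic can be sketched as equi-size tertiles by rank (the counts below are invented, and the in-database quantile binning may break ties differently):

```python
def assign_sentiments(favorite_counts):
    """Rank tweets by favorite count and label each third of the ranking."""
    order = sorted(range(len(favorite_counts)), key=lambda i: favorite_counts[i])
    n = len(favorite_counts)
    labels = [None] * n
    for rank, i in enumerate(order):
        if rank < n / 3:
            labels[i] = "less favored"        # bottom tertile
        elif rank < 2 * n / 3:
            labels[i] = "moderately favored"  # middle tertile
        else:
            labels[i] = "highly favored"      # top tertile
    return labels

# Hypothetical favorite counts for nine tweets, already ascending
# so the three bins are easy to see.
print(assign_sentiments([0, 1, 1, 2, 3, 5, 8, 13, 21]))
```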
Classification Node (Class Build)
Build classification models using the sentiment as the target and the tweet id as the case id.
Since the TWEETS column contains the textual tweets, we change its mining type to Text Custom.
Enable the Stemming option for text processing.
Compare Test Results
After the model build completes successfully, open the test viewer to compare the model test results. The SVM model seems to produce the best prediction for the “highly favored” sentiment (57% correct prediction).
Moreover, the SVM model has a better lift result than the other models, so we will use this model for scoring.
Sentiment Prediction (Scoring)
Let’s score this tweet “this is a boring tweet!” using the SVM model.
As expected, this tweet receives a “less favored” prediction.
How about this tweet “larry is doing a data mining demo now!” ?
Not surprisingly, this tweet receives a “highly favored” prediction.
Last but not least, let’s see the sentiment prediction for the title of this blog.
Not bad: it gets a “highly favored” prediction, so it seems this title will be well received by the audience.
The best SVM model produces only 57% accuracy for the “highly favored” sentiment prediction, but that is reasonably better than a random guess. With a larger sample of tweet data, the model accuracy could be improved. The new JSON Query node enables us to perform data mining on JSON data, the most popular data format produced by prominent social networking sites.
Monday Dec 08, 2014
By Denny Wong-Oracle on Dec 08, 2014
JSON is a popular lightweight data structure used in Big Data. Increasingly, much of the data produced by Big Data systems is in JSON format. For example, web logs generated in middle-tier web servers are likely in JSON format. NoSQL database vendors have chosen JSON as their primary data representation. Moreover, the JSON format is widely used in the RESTful-style web service responses generated by most popular social media websites like Facebook, Twitter, and LinkedIn. This JSON data could potentially contain a wealth of information that is valuable for business use, so it is important that we can bring this data over to Data Miner for analysis and mining purposes.
Oracle Database 12c (12.1.0.2) provides the ability to store and query JSON data. To take advantage of the database JSON support, the upcoming Data Miner 4.1 adds a new JSON Query node that allows users to query JSON data in relational format. In addition, the current Data Source node and Create Table node are enhanced to allow users to specify JSON data in the input data source.
In this blog, I will show you how to specify JSON data in the input data source and use the JSON Query node to selectively query desirable attributes and project the result into relational format. Once the data is in relational format, users can treat it as a normal relational data source and start analyzing and mining it immediately. The Data Miner repository installation installs a sample JSON dataset, ODMR_SALES_JSON_DATA, which I will be using here. Note that Oracle Big Data SQL supports queries against vast amounts of big data stored in multiple data sources, including Hadoop, so users can view and analyze data from various data stores together, as if it were all stored in an Oracle database.
Specify JSON Data
The Data Source node and Create Table node are enhanced to allow users to specify the JSON data type in the input data source.
Data Source Node
For this demo, we will focus on the Data Source node. To specify JSON data, create a new workflow with a Data Source node. In the Define Data Source wizard, select the ODMR_SALES_JSON_DATA table. Notice there is only one column (JSON_DATA) in this table, which contains the JSON data.
Click Next to go to the next step, where it shows the JSON_DATA column selected with the JSON(CLOB) data type. The JSON prefix indicates the data stored is in JSON format; the CLOB is the original data type. The JSON_DATA column is defined with the new “IS JSON” constraint, which indicates that only valid JSON documents can be stored there. The UI detects this constraint and automatically selects the column as JSON type. If no “IS JSON” constraint were defined, the column would be shown with the CLOB data type. To manually designate a column as a JSON type, click on the data type itself to bring up an in-place dropdown that lists the original data type (e.g. CLOB) and a corresponding JSON type (e.g. JSON(CLOB)), then select the JSON type. Note: only the following data types can be set to JSON type: VARCHAR2, CLOB, BLOB, RAW, NCLOB, and NVARCHAR2.
Click Finish and run the node now.
Once the node is run successfully, open the editor to examine the generated JSON schema.
Notice the message “System Generated Data Guide is available” at the bottom of the Selected Attributes listbox. When the Data Source node is run, it parses the JSON documents to produce a schema that represents the document structure. Here is what the schema looks like:
The JSON Path expression syntax and associated data type info (OBJECT, ARRAY, NUMBER, STRING, BOOLEAN, NULL) are used to represent JSON document structure. We will refer to this JSON schema as Data Guide throughout the product.
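Conceptually, the Data Guide is a mapping from JSON Path expressions to the types observed in sampled documents. A minimal Python sketch of that idea (the document shape below is invented, loosely modeled on the sales dataset; Oracle's actual generation runs in-database):

```python
import json

def json_type(value):
    """Map a parsed JSON value to the Data Guide type names."""
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return "BOOLEAN"
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return "NUMBER"
    if isinstance(value, str):
        return "STRING"
    if isinstance(value, list):
        return "ARRAY"
    return "OBJECT"

def data_guide(doc, path="$", guide=None):
    """Walk a parsed JSON document, recording JSON Path -> type."""
    if guide is None:
        guide = {}
    guide[path] = json_type(doc)
    if isinstance(doc, dict):
        for key, value in doc.items():
            data_guide(value, f"{path}.{key}", guide)
    elif isinstance(doc, list):
        for value in doc:
            data_guide(value, f"{path}[*]", guide)
    return guide

doc = json.loads('{"CUST_ID": 101, "SALES": [{"PROD_ID": 5, "AMOUNT_SOLD": 9.99}]}')
for path, typ in sorted(data_guide(doc).items()):
    print(path, typ)
```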
Before we look at the Data Guide in the UI, let’s look at the settings that can affect how it is generated. Click the “JSON Settings…” button to open the JSON Parsing Settings dialog.
The settings are described below:
· Generate Data Guide if necessary
o Generate a Data Guide if it is not already generated in parent node.
· Limit Documents to Process
o Sample JSON documents for Data Guide generation.
· Max. number of documents
o Specify maximum number of JSON documents to be parsed for Data Guide generation.
· Limit Document Values to Process
o Sample JSON document values for Data Guide generation.
· Max. number per document
o Specify maximum number of JSON document scalar values (e.g. NUMBER, STRING, BOOLEAN, NULL) per document to be parsed for Data Guide generation.
The sampling option is enabled by default to prevent long-running parsing of JSON documents; parsing could take a while for a large number of documents. However, users may supply a Data Guide (Import From File) or reuse an existing Data Guide (Import From Workflow) if a compatible Data Guide is available.
Now let’s look at the Data Guide. Go back to the Edit Data Source Node dialog, select the JSON_DATA column, and click the icon above to open the Edit Data Guide dialog. The dialog shows the JSON structure in a hierarchical tree view with data type information. “Number of Values Processed” shows the total number of JSON scalar values that were parsed to produce the Data Guide.
Users can control whether to enable Data Guide generation or import a compatible Data Guide via the menu under the icon.
The menu options are described below:
· Default
o Use the “Generate Data Guide if necessary” setting found in the JSON Parsing Settings dialog (see above).
· On
o Always generate a Data Guide.
· Off
o Do not generate a Data Guide.
· Import From Workflow
o Import a compatible Data Guide from a workflow node (e.g. Data Source, Create Table). The option will be set to Off after the import (disable Data Guide generation).
· Import From File
o Import a compatible Data Guide from a file. The option will be set to Off after the import (disable Data Guide generation).
Users can also export the current Data Guide to a file via the icon.
Select JSON Data
In Data Miner 4.1, a new JSON Query node is added to allow users to selectively bring over desirable JSON attributes as relational format.
JSON Query Node
The JSON Query node is added to the Transforms group of the Workflow.
Let’s create a JSON Query node and connect the Data Source node to it.
Double click the JSON Query node to open the editor. The editor consists of four tabs, which are described as follows:
The Column dropdown lists all available columns in the data source where JSON structure (Data Guide) is found. It consists of the following two sub tabs:
o Show the JSON structure of the selected column in a hierarchical tree view.
o Show a sample of the JSON documents found in the selected column. By default it displays the first 2,000 characters (including spaces) of the documents. Users can change the sample size (max. 50,000 chars) and rerun the query to see more of the documents.
· Additional Output
o Allow users to select any non-JSON columns in the data source as additional output columns.
· Aggregate
o Allow users to define aggregations of JSON attributes.
· Preview
o Output Columns: show the columns in the generated relational output.
o Output Data: show the data in the generated relational output.
Let’s select some JSON attributes to bring over. Skip the SALES attributes because we want to define aggregations for these attributes (QUANTITY_SOLD and AMOUNT_SOLD).
To peek at the JSON documents, go to the Data tab. You can change the Sample Size to look at more JSON data. Also, you can search for specific data within the displayed documents by using the search control.
Additional Output Tab
If you have any non-JSON columns in the data source that you want to carry over for output, you can select those columns here.
Let’s define aggregations (use SUM function) for QUANTITY_SOLD and AMOUNT_SOLD attributes (within the SALES array) for each customer group (group by CUST_ID).
Click the icon in the top toolbar to open the Edit Group By dialog, where you can select CUST_ID as the Group-By attribute. Notice the Group-By attribute can consist of multiple attributes.
Click OK to return to the Aggregate tab, where you can see the selected CUST_ID Group-By attribute is now added to the Group By Attributes table at the top.
Click the icon in the bottom toolbar to open the Add Aggregations dialog, where you can define the aggregations for both QUANTITY_SOLD and AMOUNT_SOLD attributes using the SUM function.
Next, click the icon in the toolbar to open the Edit Sub Group By dialog, where you can specify a Sub-Group By attribute (PROD_ID) to calculate quantity sold and amount sold per product per customer.
Specifying a Sub-Group By column creates a nested table; the nested table contains columns with data type DM_NESTED_NUMERICALS.
Click OK to return to the Aggregate tab, where you can see the defined aggregations are now added to the Aggregation table at the bottom.
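Logically, the Group-By/Sub-Group-By aggregation reduces to a SUM over (customer, product) pairs; here is a small Python sketch with invented rows (the real computation happens in-database over the JSON documents):

```python
from collections import defaultdict

# Hypothetical rows shaped like the sales JSON documents.
docs = [
    {"CUST_ID": 101, "SALES": [{"PROD_ID": 1, "QUANTITY_SOLD": 2, "AMOUNT_SOLD": 20.0},
                               {"PROD_ID": 2, "QUANTITY_SOLD": 1, "AMOUNT_SOLD": 5.0}]},
    {"CUST_ID": 101, "SALES": [{"PROD_ID": 1, "QUANTITY_SOLD": 1, "AMOUNT_SOLD": 10.0}]},
    {"CUST_ID": 202, "SALES": [{"PROD_ID": 2, "QUANTITY_SOLD": 4, "AMOUNT_SOLD": 20.0}]},
]

# SUM(QUANTITY_SOLD) and SUM(AMOUNT_SOLD), grouped by CUST_ID and
# sub-grouped by PROD_ID: one nested row per product per customer.
totals = defaultdict(lambda: [0, 0.0])
for doc in docs:
    for sale in doc["SALES"]:
        key = (doc["CUST_ID"], sale["PROD_ID"])
        totals[key][0] += sale["QUANTITY_SOLD"]
        totals[key][1] += sale["AMOUNT_SOLD"]

for (cust, prod), (qty, amount) in sorted(totals.items()):
    print(cust, prod, qty, amount)
```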
Let’s go to the Preview tab to look at the generated relational output. The Output Columns tab shows all output columns and their corresponding source JSON attributes. The output columns can be renamed by using the in-place edit control.
The Output Data tab shows the actual data in the generated relational output.
Click OK to close the editor when you are done. The generated relational output is in single-record case format; each row represents a case. If we had not defined the aggregations for the JSON array attributes, the relational output would have been in multiple-record case format. The multiple-record case format is not suitable for building mining models except for the Association model (which accepts transactional data format with a transaction id and item id).
Here is an example of how the JSON Query node is used to project the JSON data source to relational format, so that the data can be consumed by the Explore Data node for data analysis and the Class Build node for building models.
This blog shows how JSON data can be brought over to Data Miner via the new JSON Query node. Once the data is projected to relational format, it can easily be consumed by Data Miner for graphing, data analysis, text processing, transformation, and modeling.
Wednesday Oct 08, 2014
By Charlie Berger, Advanced Analytics-Oracle on Oct 08, 2014
2014 was a very good year for Oracle Advanced Analytics at Oracle OpenWorld 2014. We had a number of customer, partner, and Oracle talks that focused on the Oracle Advanced Analytics Database Option. See below for links to the presentations. Check back later in the OOW Sessions Content Catalog, as not all presentations have been uploaded yet. :-(
- Julia Minkowski - Risk Manager, Fiserv, Inc.
- Miguel Barrera - Director, Risk Analytics, Fiserv
- Charles Berger - Senior Director, Product Management, Data Mining and Advanced Analytics, Oracle
Moving data mining algorithms to run as native data mining SQL functions eliminates data movement, automates knowledge discovery, and accelerates the transformation of large-scale data to actionable insights from days/weeks to minutes/hours. In this session, Fiserv, a leading global provider of electronic commerce systems for the financial services industry, shares best practices for turning in-database predictive models into actionable policies and illustrates the use of Oracle Data Miner for fraud prevention in online payments. Attendees will learn how businesses that implement predictive analytics in their production processes significantly improve profitability and maximize their ROI.
Olive Garden, traditionally managing its 830 restaurants nationally, transitioned to a localized approach with the help of predictive analytics. Using k-means clustering and logistic classification algorithms, it divided its stores into five behavioral segments. The analysis leveraged Oracle SQL Developer 4.0 and Oracle R Enterprise 1.3 to evaluate 115 million transactions in just 5 percent of the time required by the company’s BI tool. While saving both time and money by making it possible to develop the solution internally, this analysis has informed Olive Garden’s latest remodel campaign and continues to uncover millions in profits by optimizing pricing and menu assortment. This session illustrates how Oracle Advanced Analytics solutions directly affect the bottom line.
A Perfect Storm: Oracle Big Data Science for Enterprise R and SAS Users [CON8331]
- Marcos Arancibia Coddou - Product Manager, Oracle Advanced Analytics, Oracle
- Mark Hornick - Director, Advanced Analytics, Oracle
With the advent of R and a rich ecosystem of users and developers, a myriad of bloggers, and thousands of packages with functionality ranging from social network analysis and spatial data analysis to empirical finance and phylogenetics, use of R is on a steep uptrend. With new R tools from Oracle, including Oracle R Enterprise, Oracle R Distribution, and Oracle R Advanced Analytics for Hadoop, users can scale and integrate R for their enterprise big data needs. Come to this session to learn about Oracle’s R technologies and what data scientists from smart companies around the world are doing with R.
The need for speed could not be greater—not speed of processing but time to market. The problem is driven by the long journey data takes before evolving into insight. Insight, however, is always relative to assumption. In fact, analytics is often seen as a battle between assumption and data. Assumptions can be classified into three types: related to distributions, ratios, and relations. In this session, you will see how the most-valuable business insights can come in the matter of hours, not months, when assumptions are challenged with data. This is made possible by the integration of Oracle Big Data Appliance, enabling transparent access to in-database analytics from the data warehouse and avoiding the traditional long journey of data to insight.
With almost 120 years of franchising experience, Dunkin’ Brands owns two of the world’s most recognized, beloved franchises: Dunkin’ Donuts and Baskin-Robbins. This session describes a market basket analysis solution built from scratch on the Oracle Advanced Analytics platform at Dunkin’ Brands. This solution enables Dunkin’ to look at product affinity and a host of associated sales metrics with a view to improving promotional effectiveness and cross-sell/up-sell to increase customer loyalty. The presentation discusses the business value achieved and technical challenges faced in scaling the solution to Dunkin’ Brands’ transaction volumes, including engineered systems (Oracle Exadata) hardware and parallel processing at the core of the implementation.
This session presents three case studies related to predictive analytics with the Oracle Data Mining feature of Oracle Advanced Analytics. Service contracts cancellation avoidance with Oracle Data Mining is about predicting the contracts at risk of cancellation at least nine months in advance. Predicting hardware opportunities that have a high likelihood of being won means identifying such opportunities at least four months in advance to provide visibility into suppliers of required materials. Finally, predicting cloud customer churn involves identifying the customers that are not as likely to renew subscriptions as others.
SQL has a long and storied history. From the early 1980s till today, data processing has been dominated by this language. It has changed and evolved greatly over time, gaining features such as analytic windowing functions, model clauses, and row-pattern matching. This session explores what's new in SQL and Oracle Database for exploiting big data. You'll see how to use SQL to efficiently and effectively process data that is not stored directly in Oracle Database.
- Pirama Arumuga nainar - Senior Software Engineer, Oracle
- Ekine Akuiyibo - Software Engineer, Oracle
- Debabrata Sarkar - Senior Engineering Manager, Oracle
Traditional database applications use SQL queries to filter, aggregate, and summarize data. This is called descriptive analytics. The next level is predictive analytics, where hidden patterns are discovered to answer questions that give unique insights that cannot be derived with descriptive analytics. Businesses are increasingly using machine learning techniques to perform predictive analytics, which helps them better understand past data, predict future trends, and enable better decision-making. This session discusses how to use machine learning algorithms such as regression, classification, and clustering to solve a few selected business use cases.
- Roel Hartman - Director, Logica
- Brendan Tierney - Consultant, DIT & Oralytics.com (Author, Predictive Analytics using Oracle Data Miner book)
Have you ever wanted to add some data science to your Oracle Application Express applications? This session shows you how you can combine predictive analytics from Oracle Data Miner into your Oracle Application Express application to monitor sentiment analysis. Using Oracle Data Miner features, you can build data mining models of your data and apply them to your new data. The presentation uses Twitter feeds from conference events to demonstrate how this data can be fed into your Oracle Application Express application and how you can monitor sentiment with the native SQL and PL/SQL functions of Oracle Data Miner. Oracle Application Express comes with several graphical techniques, and the presentation uses them to create a sentiment dashboard.
Transforming Customer Experience with Big Data and Predictive Analytics [CON8148]
Delivering a high-quality customer experience is essential for long-term profitability and customer retention in the communications industry. Although service providers own a wealth of customer data within their systems, the sheer volume and complexity of the data structures inhibit their ability to extract the full value of the information. To change this situation, service providers are increasingly turning to a new generation of business intelligence tools. This session begins by discussing the key market challenges for business analytics and continues by exploring Oracle’s approach to meeting these challenges, including the use of predictive analytics, big data, and social network analytics.
There are a few other sessions that include Oracle Advanced Analytics (e.g. Retail GBU, Big Data Strategy), but they are typically more broadly focused. If you search the Content Catalog for “Advanced Analytics,” you can find other related presentations that involve OAA.
Hope this helps. Enjoy!
Wednesday Aug 06, 2014
By Charlie Berger, Advanced Analytics-Oracle on Aug 06, 2014
Great New Book Now Available: Predictive Analytics Using Oracle Data Miner, by Brendan Tierney, Oracle ACE Director
If you have an Oracle Database and want to leverage that data to discover new insights, make predictions and generate actionable insights, this book is a must read for you! In Predictive Analytics Using Oracle Data Miner: Develop & Use Oracle Data Mining Models in Oracle Data Miner, SQL & PL/SQL, Brendan Tierney, Oracle ACE Director and data mining expert, guides the user through the basic concepts of data mining and offers step by step instructions for solving data-driven problems using SQL Developer’s Oracle Data Mining extension. Brendan takes it full circle by showing the reader how to deploy advanced analytical methodologies and predictive models immediately into enterprise-wide production environments using the in-database SQL and PL/SQL functionality.
Definitely a must read for any Oracle data professional!
Sunday May 18, 2014
By Charlie Berger, Advanced Analytics-Oracle on May 18, 2014
Oracle Data Miner and Oracle R Enterprise Integration - Watch Demo
The Oracle Advanced Analytics (Database EE) Option turns the database into an enterprise-wide analytical platform that can quickly deliver predictive analytics and actionable insights. Oracle Advanced Analytics comprises the Oracle Data Mining SQL data mining functions; Oracle Data Miner, an extension to SQL Developer that exposes the data mining SQL functions to data analysts; and Oracle R Enterprise, which integrates the R statistical programming language with SQL. Fifteen powerful in-database SQL data mining functions, the SQL Developer/Oracle Data Miner workflow GUI, and the ability to integrate open source R within an analytical methodology make the Oracle Database + Oracle Advanced Analytics Option an ideal platform for building and deploying enterprise-wide predictive analytics applications and solutions.
In Oracle Data Miner 4.0 we added a new SQL Query node that allows users to insert arbitrary SQL scripts within an ODMr analytical workflow. Additionally, the SQL Query node allows users to leverage registered R scripts to extend Oracle Data Miner's analytical capabilities. For applications that are mostly based on OAA/Oracle Data Mining SQL data mining functions but require additional analytical techniques found in the R community, this is an ideal method for integrating the power of in-database SQL analytical and data mining functions with the flexibility of open source R. For applications built entirely in the R statistical programming language, it may be more practical to stay within the R console or RStudio environments, but for SQL-centric in-database predictive methodologies, this integration may be just what satisfies your needs.
Watch this Oracle Data Miner and Oracle R Enterprise Integration YouTube video to see the demo.
There is an excellent related white paper on this topic, Oracle Data Miner: Integrate Oracle R Enterprise Algorithms into workflow using the SQL Query node (pdf, companion files), which includes examples and is available on the Oracle Technology Network in the Oracle Data Mining pages.
Tuesday May 06, 2014
By Charlie Berger, Advanced Analytics-Oracle on May 06, 2014
Oracle Data Miner 4.0 New Features
Oracle Data Miner/SQLDEV 4.0 (for Oracle Database 11g and 12c)
- New Graph node (box, scatter, bar, histograms)
- SQL Query node + integration of R scripts
- Automatic SQL script generation for deployment
Oracle Advanced Analytics 12c New SQL data mining algorithms/enhancements features exposed in Oracle Data Miner 4.0
- Expectation Maximization Clustering algorithm
- PCA & Singular Value Decomposition algorithms
- Decision Trees can also now mine unstructured data
- Improved/automated Text Mining, Prediction Details and other algorithm improvements
- SQL Predictive Queries—automatic build, apply within simple yet powerful SQL query
Wednesday Sep 04, 2013
Oracle Data Miner (Extension of SQL Developer 4.0) Integrate Oracle R Enterprise Mining Algorithms into workflow using the SQL Query node
By Charlie Berger, Advanced Analytics-Oracle on Sep 04, 2013
I posted a new white paper authored by Denny Wong, Principal Member of Technical Staff, User Interfaces and Components, Oracle Data Mining Technologies. You can access the white paper here and the companion files here. Here is an excerpt:
Oracle Data Miner (Extension of SQL Developer 4.0)
Integrate Oracle R Enterprise Mining Algorithms into workflow using the SQL Query node
Oracle R Enterprise (ORE), a component of the Oracle Advanced Analytics Option, makes the open source R statistical programming language and environment ready for the enterprise and big data. Designed for problems involving large amounts of data, Oracle R Enterprise integrates R with the Oracle Database. R users can develop, refine and deploy R scripts that leverage the parallelism and scalability of the database to perform predictive analytics and data analysis.
Oracle Data Miner (ODMr) offers a comprehensive set of in-database algorithms for performing a variety of mining tasks, such as classification, regression, anomaly detection, feature extraction, clustering, and market basket analysis. One of the important capabilities of the new SQL Query node in Data Miner 4.0 is a simplified interface for integrating R scripts registered with the database. This makes it possible for R developers to supply useful mining scripts for use by data analysts. The synergy yields many additional benefits, as noted below.
· R developers can further extend ODMr mining capabilities by incorporating the extensive R mining algorithms from open source CRAN packages, or by leveraging user-developed custom R algorithms, via the SQL interfaces provided by ORE.
· Since the SQL Query node can be part of a workflow process, R scripts can leverage functionality provided by other workflow nodes, which simplifies the overall effort of integrating R capabilities within the database.
· R mining capabilities can be included in the workflow deployment scripts produced by the new SQL script generation feature, so R functionality is easily deployed within the context of a Data Miner workflow.
· Data and processing are secured and controlled by the Oracle Database. This alleviates much of the risk incurred with other providers, where users must export data out of the database in order to perform advanced analytics.
Oracle Advanced Analytics saves analysts, developers, database administrators and management the headache of trying to integrate R and database analytics. Instead, users can quickly gain the benefit of new R analytics and spend their time and effort on developing business solutions instead of building homegrown analytical platforms.
Monday Jul 15, 2013
Oracle Data Miner GUI, part of SQL Developer 4.0 Early Adopter 1 is now available for download on OTN
By Charlie Berger, Advanced Analytics-Oracle on Jul 15, 2013
The NEW Oracle Data Miner GUI, part of SQL Developer 4.0 Early Adopter 1 is now available for download on OTN. See link to SQL Developer 4.0 EA1.
The Oracle Data Miner 4.0 New Features are applicable to Oracle Database 11g Release 2 and Oracle Database 12c. See the Oracle Data Miner Extension to SQL Developer 4.0 Release Notes for EA1 for additional information.
· Workflow SQL Script Deployment
o Generates SQL scripts to support full deployment of workflow contents
· SQL Query Node
o Integrate SQL queries to transform data or provide a new data source
o Supports running R Language scripts and viewing R-generated data and graphics
· Graph Node
o Generate Line, Scatter, Bar, Histogram, and Box Plots
· Model Build Node Improvements
o Node level data usage specification applied to underlying models
o Node level text specifications to govern text transformations
o Displays heuristic rules responsible for excluding predictor columns
o Ability to control the amount of Classification and Regression test results generated
· View Data
o Ability to drill in to view custom objects and nested tables
These new Oracle Data Miner GUI capabilities expose Oracle Database 12c and Oracle Advanced Analytics/Data Mining Release 1 features:
· Predictive Query Nodes
o Predictive results without the need to build models using Analytical Queries
o Refined predictions based on data
· Clustering Node New Algorithm
o Added Expectation Maximization algorithm
· Feature Extraction Node New Algorithms
o Added Singular Value Decomposition and Principal Component Analysis algorithms
· Text Mining Enhancements
o Text transformations integrated as part of Model's Automatic Data Preparation
o Ability to import Build Text node specifications into a Model Build node
· Prediction Result Explanations
o Scoring details that explain predictive result
· Generalized Linear Model New Algorithm Settings
o New algorithm settings provide feature selection and generation
Wednesday May 08, 2013
By Charlie Berger, Advanced Analytics-Oracle on May 08, 2013
Updated June 15, 2015
Periodically, I've recorded demonstrations and/or presentations on Oracle Advanced Analytics and Data Mining and posted them on YouTube.
Here are links to some of the more recent YouTube postings--sort of an Oracle Advanced Analytics and Data Mining at the Movies experience.
- New - Oracle Academy Webcast: Ask the Oracle Experts Big Data Analytics with Oracle Advanced Analytics - Watch YouTube
- Oracle Data Miner and Oracle R Enterprise Integration via SQL Query node - Watch Demo
- Oracle Data Miner 4.0 (SQL Developer 4.0 Extension) New Features - Watch Demo
- Oracle Business Intelligence Enterprise Edition (OBIEE) SampleAppls Demo featuring integration with Oracle Advanced Analytics/Data Mining
- Oracle Big Data Analytics Demo mining remote sensor data from HVACs for better customer service
- In-Database Data Mining for Retail Market Basket Analysis Using Oracle Advanced Analytics
- In-Database Data Mining Using Oracle Advanced Analytics for Classification using Insurance Use Case
- Fraud and Anomaly Detection using Oracle Advanced Analytics Part 1 Concepts
- Fraud and Anomaly Detection using Oracle Advanced Analytics Part 2 Demo
- Overview Presentation and Demonstration of Oracle Advanced Analytics Database Option
So.... grab your popcorn and a comfortable chair. Hope you enjoy!
Friday Feb 22, 2013
By Charlie Berger, Advanced Analytics-Oracle on Feb 22, 2013
I wanted to highlight a wonderful new resource provided by our partner Vlamis Software. It's extremely easy! Fill out the form, wait a few minutes for the Amazon Cloud instance to start up, and then BAM! You can log in and start using the Oracle Advanced Analytics Oracle Data Miner workflow GUI. Demo data and online Oracle by Example learning tutorials are also provided to ensure your data mining test drive is a positive one. Enjoy!
We have partnered with Amazon Web Services to provide to you, free of charge, the opportunity to work, hands-on, with the latest of Oracle's Business Intelligence offerings. By signing up for one of the labs below, Amazon's Elastic Compute Cloud (EC2) environment will generate a complete server for you to work with.
These hands-on labs work with the actual Oracle software running in the Amazon Web Services EC2 environment. Each takes approximately 2 hours to work through and gives you hands-on experience with the software and a tour of its features. Your EC2 environment will be available to you for 5 hours, at which time it will self-terminate. If, after registration, you need additional time or further instructions, simply reply to the registration email and we will be glad to help you.
This test drive walks through some basic exercises in doing predictive analytics within an Oracle 11g Database instance using the Oracle Data Miner extension for Oracle SQL Developer. You use a drag-and-drop "workflow" interface to build a data mining model that predicts the likelihood of purchase for a set of prospects. Oracle Data Mining is ideal for automatically finding patterns, understanding relationships, and making predictions in large data sets.
Tuesday Jan 01, 2013
By Charlie Berger, Advanced Analytics-Oracle on Jan 01, 2013
Turkcell İletişim Hizmetleri A.S. Successfully Combats Communications Fraud with Advanced In-Database Analytics
[Original link available on oracle.com http://www.oracle.com/us/corporate/customers/customersearch/turkcell-1-exadata-ss-1887967.html]
- Oracle Customer: Turkcell İletişim Hizmetleri A.Ş.
- Location: Istanbul, Turkey
- Industry: Communications
- Employees: 3,583
- Annual Revenue: Over $5 Billion
Turkcell İletişim Hizmetleri A.Ş. is a leading provider of mobile communications in Turkey with more than 34 million subscribers. Established in 1994, Turkcell created the first global system for mobile communications (GSM) network in Turkey. It was the first Turkish company listed on the New York Stock Exchange.
Communications fraud, or the use of telecommunications products or services without intention to pay, is a major issue for the organization. The practice is fostered by prepaid card usage, which is growing rapidly. Anonymous network-branded prepaid cards are a tempting vehicle for money launderers, particularly since these cards can be used as cash vehicles—for example, to withdraw cash at ATMs. It is estimated that prepaid card fraud represents an average loss of US$5 per US$10,000 in transactions. For a communications company with billions of transactions, this could result in millions of dollars lost through fraud every year.
Consequently, Turkcell wanted to combat communications fraud and money laundering by introducing advanced analytical solutions to monitor key parameters of prepaid card usage and issue alerts or block fraudulent activity. This type of fraud prevention would require extremely fast analysis of the company’s one petabyte of uncompressed customer data to identify patterns and relationships, build predictive models, and apply those models to even larger data volumes to make accurate fraud predictions.
To achieve this, Turkcell deployed Oracle Exadata Database Machine X2-2 HC Full Rack so that data analysts can build predictive antifraud models inside the Oracle Database and deploy them to Oracle Exadata for scoring, using Oracle Data Mining, a component of Oracle Advanced Analytics, on Oracle Database 11g. This enabled the company to create predictive antifraud models faster than with any other machine: models can be built using structured query language (SQL) inside the database, and Oracle Exadata can access raw data without summarized tables, achieving extremely fast analyses.
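In-database scoring of this sort might look roughly like the following hedged sketch. PREDICTION_PROBABILITY is a standard Oracle Data Mining SQL function that Exadata smart scan can offload; the model and table names are hypothetical:

```sql
-- Hedged sketch: score call-data records in-database; names illustrative.
SELECT cdr_id,
       subscriber_id,
       PREDICTION_PROBABILITY(fraud_svm, 1 USING *) AS fraud_prob
FROM   call_data_records
WHERE  PREDICTION_PROBABILITY(fraud_svm, 1 USING *) > 0.9;  -- review queue
```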
A word from Turkcell İletişim Hizmetleri A.Ş.
“Turkcell manages 100 terabytes of compressed data—or one petabyte of uncompressed raw data—on Oracle Exadata. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, we can analyze large volumes of customer data and call-data records easier and faster than with any other tool and rapidly detect and combat fraudulent phone use.” – Hasan Tonguç Yılmaz, Manager, Turkcell İletişim Hizmetleri A.Ş.
- Combat communications fraud and money laundering by introducing advanced analytical solutions to monitor prepaid card usage and alert or block suspicious activity
- Monitor numerous parameters for up to 10 billion daily call-data records and value-added service logs, including the number of accounts and cards per customer, number of card loads per day, number of account loads over time, and number of account loads on a subscriber identity module card at the same location
- Enable extremely fast sifting through huge data volumes to identify patterns and relationships, build predictive antifraud models, and apply those models to even larger data volumes to make accurate fraud predictions
- Detect fraud patterns as soon as possible and enable quick response to minimize the negative financial impact
Oracle Product and Services
- Used Oracle Exadata Database Machine X2-2 HC Full Rack to create predictive antifraud models more quickly than with previous solutions by accessing raw data without summarized tables and providing unmatched query speed, which optimizes and shortens the project design phases for creating predictive antifraud models
- Leveraged SQL for the preparation and transformation of one petabyte of uncompressed raw communications data, using Oracle Data Mining, a feature of Oracle Advanced Analytics to increase the performance of predictive antifraud models
- Deployed Oracle Data Mining models on Oracle Exadata to identify actionable information in less time than traditional methods—which would require moving large volumes of customer data to third-party analytics software—and achieved an average gain of four hours or more, in addition to avoiding the system crashes during data import that occurred in the previous environment
- Achieved extreme data analysis speed with in-database analytics performed inside Oracle Exadata, through a row-wise information search—including day, time, and duration of calls, as well as number of credit recharges on the same day or at the same location—and query language functions that enabled analysts to detect fraud patterns almost immediately
- Implemented a future-proof solution that could support rapidly growing data volumes that tend to double each year with Oracle Exadata’s massively scalable data warehouse performance
“We selected Oracle because in-database mining to support antifraud efforts will be a major focus for Turkcell in the future. With Oracle Exadata Database Machine and the analytics capabilities of Oracle Advanced Analytics, we can complete antifraud analysis for large amounts of call-data records in just a few hours. Further, we can scale the solution as needed to support rapid communications data growth,” said Hasan Tonguç Yılmaz, data warehouse/data mining developer, Turkcell Teknoloji Araştırma ve Geliştirme A.Ş.
Oracle Partner: Turkcell Teknoloji Araştırma ve Geliştirme A.Ş.
All development and test processes were performed by Turkcell Teknoloji. The company also made significant contributions to the configuration of numerous technical analyses which are carried out regularly by Turkcell İletişim Hizmetleri's antifraud specialists.
- Turkcell İletişim Hizmetleri Uses Engineered System to Analyze 10 Billion Daily Call-Data Records and Service Logs and to Generate 100,000 Monthly Reports
- Turkcell Deploys Oracle Data Integrator to Drive Efficiency
- Turkcell Accelerates Reporting Tenfold, Saves on Storage and Energy Costs with Consolidated Oracle Exadata Platform
- Turkcell Superonline Transforms Its Order Management and Service Fulfillment with Oracle Communications Solutions
- Technologist of the Year
- Turkcell is an Exemplary Oracle Cross Stack Customer
- Turkcell Gets Three 10X Improvements with Oracle
- Oracle Exadata Changes the Rules of the Game for Turkcell
- Customers Discuss Benefits of Oracle Exadata
- Turkcell Technology Uses Oracle Complex Event Processing for Extreme Scale Mobile Networks
- Turkcell Technology Research & Development Inc. Achieves Substantial Savings with Fault Prevention
- Turkcell iletisim Hizmetleri A.S. Processes Mobile Network Data of 33 Million Subscribers in Real Time
- Kcell Boosts Business Intelligence with Data Warehouse Solution
- Turkcell Gets the Benefits of Oracle Exadata
- Turkcell Eliminates Manual Updates with Oracle IDM
Tuesday May 29, 2012
By Charlie Berger, Advanced Analytics-Oracle on May 29, 2012
I've created and recorded another YouTube-like presentation and "live" demos of Oracle Advanced Analytics Option, this time focusing on Fraud and Anomaly Detection using Oracle Data Mining. [Note: It is a large MP4 file that will open and play in place. The sound quality is weak so you may need to turn up the volume.]
Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers. Predictive analytics leverages data to discover patterns, relationships and to help you even make informed predictions. Oracle Data Mining (ODM) automatically discovers relationships hidden in data. Predictive models and insights discovered with ODM address business problems such as: predicting customer behavior, detecting fraud, analyzing market baskets, profiling and loyalty. Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to the Oracle Database EE, embeds 12 high performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability and maintains security.
But how do you find these very important needles, or possibly fraudulent transactions, in huge haystacks of data? Oracle Data Mining’s 1-Class Support Vector Machine (SVM) algorithm is specifically designed to identify rare or anomalous records. The 1-Class SVM anomaly detection algorithm trains on records considered “normal,” builds a descriptive and predictive model, and then flags records that, on a multi-dimensional basis, appear not to fit in. Combined with clustering techniques to sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis, and with Oracle Business Intelligence, enterprise applications, and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems. With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims, and save huge amounts of money lost to fraudulent claims and abuse.
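The approach described above can be sketched in SQL as follows. The DBMS_DATA_MINING.CREATE_MODEL procedure and the NULL-target convention for one-class models are the documented Oracle Data Mining interfaces; the table, settings table, and model names are hypothetical:

```sql
-- Hedged sketch of 1-Class SVM anomaly detection; names illustrative.
BEGIN
  DBMS_DATA_MINING.CREATE_MODEL(
    model_name          => 'CLAIMS_ANOMALY',
    mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
    data_table_name     => 'CLAIMS',
    case_id_column_name => 'CLAIM_ID',
    target_column_name  => NULL,             -- NULL target => one-class SVM
    settings_table_name => 'SVM_SETTINGS');  -- ALGO_SUPPORT_VECTOR_MACHINES
END;
/

-- Records with a high probability of class 0 do not fit the "normal" model
SELECT claim_id,
       PREDICTION_PROBABILITY(claims_anomaly, 0 USING *) AS anomaly_prob
FROM   claims
ORDER  BY anomaly_prob DESC;
```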
This presentation and several brief demos will show Oracle Data Mining's fraud and anomaly detection capabilities.
Thursday May 10, 2012
Oracle Virtual SQL Developer Days DB May 15th - Session #3: 1Hr. Predictive Analytics and Data Mining Made Easy!
By Charlie Berger, Advanced Analytics-Oracle on May 10, 2012
Oracle Data Mining's SQL Developer-based Oracle Data Miner GUI + ODM is being featured in this upcoming Virtual SQL Developer Day online event next Tuesday, May 15th. Several thousand people have already registered, and registration is still growing. We recorded and uploaded presentations/demos so anyone can view them "on demand" at the specified date/time per the SQL DD event agenda. Anyone can also download a complete 11gR2 Database with SQL Developer 3.1 & Oracle Data Miner GUI extension VM installation for the hands-on labs and follow our 4 ODM Oracle by Example e-training lessons. We moderators monitor the online chat and answer questions.
Session #3: 1Hr. Predictive Analytics and Data Mining Made Easy!
We're also included in the June 7th physical event in NYC and future virtual and physical events. Great event(s) and great "viz" for OAA/ODM.
Oracle Data Mining, a component of the Oracle Advanced Analytics database option, embeds powerful data mining algorithms in the SQL kernel of the Oracle Database for problems such as customer churn, predicting customer behavior, up-sell and cross-sell, detecting fraud, market basket analysis (e.g. beer & diapers), customer profiling and customer loyalty. Oracle Data Miner, SQL Developer 3.1 extension, provides data analysts a “workflow” paradigm to build analytical methodologies to explore data and build, evaluate and apply data mining models—all while keeping the data inside the Oracle Database. This workshop will teach the student the basics of getting started using Oracle Data Mining.
By Charlie Berger, Advanced Analytics-Oracle on May 10, 2012
Two Oracle Data Mining Virtual Classes are now scheduled. Register for a course in 2 easy steps.
Step 1: Select your Live Virtual Class options
|Live Virtual Class
Course ID: D76362GC10
Course Title: Oracle Database 11g: Data Mining Techniques NEW
Duration: 2 Days
Price: US$1,300
Step 2: Select the date and location of your Live Virtual Class
Please select a location below then click on the Add to Cart button
|Location||Duration||Class Date||Class Start Time||Class End Time||Course Materials||Instruction Language||Seats||Audience|
|Live Virtual Class||2 Days||09-Aug-2012||04:00 AM EDT||12:00 PM EDT||English||English||Available||Public|
|Live Virtual Class||2 Days||18-Oct-2012||04:00 AM EDT||12:00 PM EDT||English||English||Available||Public|
Wednesday Apr 04, 2012
By Charlie Berger, Advanced Analytics-Oracle on Apr 04, 2012
Ever want to just sit and watch a YouTube-like presentation and "live" demos of Oracle Advanced Analytics/Oracle Data Mining? Then click here! (plays large MP4 file in a browser)
This 1+ hour long session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option and is tied to the Oracle SQL Developer Days virtual and onsite events. I cover:
- Big Data + Big Data Analytics
- Competing on analytics & value proposition
- What is data mining?
- Typical use cases
- Oracle Data Mining high performance in-database SQL based data mining functions
- Exadata "smart scan" scoring
- Oracle Data Miner GUI (an Extension that ships with SQL Developer)
- Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards
- Applications "powered by" Oracle Data Mining for factory-installed predictive analytics methodologies
- Oracle R Enterprise
Please contact email@example.com should you have any questions. Hope you enjoy!
Charlie Berger, Sr. Director of Product Management, Oracle Data Mining & Advanced Analytics, Oracle Corporation
Friday Mar 23, 2012
By Charlie Berger, Advanced Analytics-Oracle on Mar 23, 2012
A NEW 2-Day Instructor Led Course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database. To register interest in attending the class, click here and submit your preferred format.
- Explain basic data mining concepts and describe the benefits of predictive analysis
- Understand primary data mining tasks, and describe the key steps of a data mining process
- Use Oracle Data Miner to build, evaluate, and apply multiple data mining models
- Use Oracle Data Mining's predictions and insights to address many kinds of business problems, including: Predict individual behavior, Predict values, Find co-occurring events
- Learn how to deploy data mining results for real-time access by end-users
Five reasons why you should attend this 2-day Oracle Data Mining Oracle University course. With Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to:
- Go beyond simple BI and dashboards about the past. This course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage
- Take advantage of your data and investment in Oracle technology
- Leverage all the data in your data warehouse: customer data, service data, sales data, customer comments and other unstructured data, and point of sale (POS) data, to build and deploy predictive models throughout the enterprise
- Learn how to explore and understand your data and find patterns and relationships that were previously hidden
- Focus on solving strategic challenges to the business, for example, targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments and gaining customer insight.
UPDATED for Oracle Database 12c & SQLDEV 4.0: Evaluating Oracle Data Mining Has Never Been Easier - Evaluation "Kit" Available
By Charlie Berger, Advanced Analytics-Oracle on Mar 23, 2012
UPDATED (July 2014) for ORACLE DATABASE 12c & SQL DEVELOPER 4.0 (with ORACLE DATA MINER 4.0) Extension
The Oracle Advanced Analytics Option turns the database into an enterprise-wide analytical platform that can quickly deliver enterprise-wide predictive analytics and actionable insights. Oracle Advanced Analytics empowers data and business analysts to extract knowledge, discover new insights and make predictions—working directly with large data volumes in the Oracle Database. Oracle Advanced Analytics, an Option of Oracle Database Enterprise Edition, offers a combination of powerful in-database algorithms and integration with open source R algorithms accessible via SQL and R languages and provides a range of GUI (Oracle Data Miner) and IDE (R client, RStudio, etc.) options targeting business users, data analysts, application developers and data scientists.
Now you can quickly and easily get set up to start using Oracle Data Mining, the SQL API & GUI component of the Oracle Advanced Analytics Database Option, for evaluation purposes. Just go to the Oracle Technology Network (OTN) and follow these simple steps.
Oracle Data Mining Evaluation "Kit" Instructions
- Anyone can download and install the Oracle Database for free for evaluation purposes. Read OTN web site http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html for details.
- Oracle Database 12c is the latest release and contains many new features. See the Oracle Advanced Analytics 12c Documentation's New Features and this recent Oracle Data Mining Blog posting. NOTE: A major new feature of the 12c Oracle Database is multi-tenancy and the ability to set up multiple Container databases. However, to keep things simpler, UNCHECK the "create as Container database" option. This makes your SQLDEV database connections simpler, and you can then use the simpler-case Oracle Data Miner tutorials. If you create Container database(s), your connection details get a bit more complicated.
- For Oracle Database Release 11g, 11.2.0.1 is the minimum, 11.2.0.3 is better, and naturally 11.2.0.4 is best if you are a current customer and on active support.
- Either 32-bit or 64-bit is fine. 4GB of RAM or more works fine for SQL Developer and the Oracle Data Miner GUI extension.
- Downloading the database and then installing it should take just about an hour or so at most, depending on your network and computer.
- For more instructions on setting up Oracle Data Mining see: http://www.oracle.com/technetwork/database/options/odm/dataminerworkflow-168677.html
- When you install the Oracle Database, the Oracle Data Mining Examples including sample data is available as part of the total Database installation. See link.
Step 2: Install SQL Developer 4.0 (the Oracle Data Miner GUI extension installs automatically, but additional post-installation set up is required. See Setting Up Oracle Data Miner)
- Setting Up Oracle Data Miner 4.0 This tutorial covers the process of setting up Oracle Data Miner for use within Oracle SQL Developer 4.0.
- Using Oracle Data Miner 4.0 This tutorial covers the use of Oracle Data Miner 4.0 to perform data mining against Oracle Database 12c. In this lesson, you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface (GUI). The Oracle Data Miner GUI is included as an extension of Oracle SQL Developer, version 4.0.
- Using Feature Selection and Generation with GLM This tutorial covers the use of Oracle Data Miner 4.0 to leverage enhancements to the Oracle implementation of Generalized Linear Models (GLM) for Oracle Database 12c. These enhancements include support for Feature Selection and Generation.
- Text Mining with an EM Clustering Model This tutorial covers the use of Oracle Data Miner 4.0 to leverage new text mining enhancements while applying a clustering model. In this lesson, you learn how to use the Expectation Maximization (EM) algorithm in a clustering model.
- Using Predictive Queries With Oracle Data Miner 4.0 This tutorial covers the use of Predictive Queries against mining data by using Oracle Data Miner 4.0.
- Using the SQL Query Node in a Data Miner Workflow This tutorial covers the use of the new SQL Query Node in an Oracle Data Miner 4.0 workflow.
That’s it! Easy, fun and the fastest way to get started evaluating Oracle Advanced Analytics/Oracle Data Mining. Enjoy!
Note: There are also four (4) additional Oracle Data Miner 3.2 Tutorials that are similar that may be helpful to review.
- Setting Up Oracle Data Miner 11g Release 2 This tutorial covers the process of setting up Oracle Data Miner 11g Release 2 for use within Oracle SQL Developer 3.0.
- Using Oracle Data Miner 11g Release 2 This tutorial covers the use of Oracle Data Miner to perform data mining against Oracle Database 11g Release 2. In this lesson, you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface (GUI).
- Star Schema Mining Using Oracle Data Miner This tutorial covers the use of Oracle Data Miner to perform star schema mining against Oracle Database 11g Release 2.
- Text Mining Using Oracle Data Miner This tutorial covers the use of Oracle Data Miner to perform text mining against Oracle Database 11g Release 2.
Wednesday Feb 08, 2012
By Charlie Berger, Advanced Analytics-Oracle on Feb 08, 2012
Monday Sep 19, 2011
By Charlie Berger, Advanced Analytics-Oracle on Sep 19, 2011
Example Predictive Analytics Applications (partial list)
- Oracle Communications & Retail Industry Models —factory installed data mining for specific industries
- Oracle Spend Classification
- Oracle Fusion Human Capital Management (HCM) Predictive Workforce
- Oracle Fusion Customer Relationship Management (CRM) Sales Prediction
- Oracle Adaptive Access Manager real-time Security
- Oracle Complex Event Processing integrated with ODM models
- Predictive Incident Monitoring Service for Oracle Database customers
Pretty cool stuff if you or your customers are interested in analytics. Here's the link to the ppt slides.
Tuesday Aug 09, 2011
By Charlie Berger, Advanced Analytics-Oracle on Aug 09, 2011
America's Cup: Oracle Data Mining supports crew and BMW ORACLE Racing
BMW ORACLE Racing won the 33rd America’s Cup yacht race in February 2010, beating the Swiss team, Alinghi, decisively in the first two races of the best-of-three contest.
BMW ORACLE Racing’s victory in the America’s Cup challenge was a lesson in sailing skill, as one of the world’s most experienced crews reached speeds as fast as 30 knots. But if you listen to the crew in their postrace interviews, you’ll notice that what they talk about is technology.
'The story of this race is in the technology,' says Ian Burns, design coordinator for BMW ORACLE Racing.
Learning by Data
'One of the problems we faced at the outset was that we needed really high accuracy in our data because we didn’t have two boats,' says Burns. 'Generally, most teams have two boats, and they sail them side by side. Change one thing on one boat, and it’s fairly easy to see the effect of a change with your own eyes.'
With only one boat, BMW ORACLE Racing’s performance analysis had to be done numerically by comparing data sets. To get the information needed, says Burns, the team had to increase the amount of data collected by nearly 40 times what they had done in the past.
The team's boat, USA, carries 250 sensors to collect raw data: pressure sensors on the wing; angle sensors on the adjustable trailing edge of the wing sail to monitor the effectiveness of each adjustment, allowing the crew to ascertain the amount of lift it's generating; and fiber-optic strain sensors on the mast and wing to allow maximum thrust without overbending them.
But collecting data was only the beginning. BMW ORACLE Racing also had to manage that data, analyze it, and present useful results. The team turned to Oracle Data Mining in Oracle Database 11g.
Peter Stengard, a principal software engineer for Oracle Data Mining and an amateur sailor, became the liaison between the database technology team and BMW ORACLE Racing. 'Ian Burns contacted us and explained that they were interested in better understanding the performance-driving parameters of their new boat,' says Stengard. 'They were measuring an incredible number of parameters across the trimaran, collected 10 times per second, so there were vast amounts of data available for analysis. An hour of sailing generates 90 million data points.'
After each day of sailing the boat, Burns and his team would meet to review and share raw data with crewmembers or boat-building vendors using a Web application built with Oracle Application Express. 'Someone in the meeting would say, "Wouldn't it be great if we could look at some new combination of numbers?" and we could quickly build an Oracle Application Express application and share the information during the same meeting,' says Burns. Later, the data would be streamed to Oracle's Austin Data Center, where Stengard and his team would go to work on deeper analysis.
Because BMW ORACLE Racing was already collecting its data in an Oracle database, Stengard and his team didn’t have to do any extract, transform, and load (ETL) processes or data conversion. 'We could just start tackling the analytics problem right away,' says Stengard. 'We used Oracle Data Mining, which is in Oracle Database. It gives us many advanced data mining algorithms to work with, so we have freedom in how we approach any specific task.'
Using the algorithms in Oracle Data Mining, Stengard could help Burns and his team learn new things about how their boat was working in its environment. 'We would look, for example, at mast rotations—which rotation works best for certain wind conditions,' says Stengard. 'There were often complex relationships within the data that could be used to model the effect on the target—in this case something called velocity made good, or VMG. Finding these relationships is what the racing team was interested in.'
Stengard and his team could also look at data over time and with an attribute selection algorithm to determine which sensors provided the most-useful information for their analysis. 'We could identify sensors that didn’t seem to be providing the predictive power they were looking for so they could change the sensor location or add sensors to another part of the boat,' Stengard says.
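The attribute-selection step Stengard describes can be sketched in miniature. The snippet below is an illustrative analogue in plain Python, not the Oracle Data Mining implementation: it ranks sensor channels by the absolute correlation of each channel with the target (velocity made good), using invented channel names (`wind_speed`, `mast_rotation`, `hull_noise`) and synthetic data.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def rank_sensors(readings, target):
    """Rank sensor channels by |correlation| with the target (here, VMG).
    A crude stand-in for a real attribute-importance algorithm."""
    scores = {name: abs(pearson(xs, target)) for name, xs in readings.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Synthetic 10 Hz-style channels: one drives the target, one tracks it
# indirectly, one is pure noise and should rank last.
random.seed(7)
n = 500
wind = [random.gauss(12, 3) for _ in range(n)]
mast_rotation = [0.5 * w + random.gauss(0, 0.5) for w in wind]  # tracks the wind
hull_noise = [random.gauss(0, 1) for _ in range(n)]             # uninformative
vmg = [0.8 * w + random.gauss(0, 1) for w in wind]              # target: velocity made good

ranking = rank_sensors(
    {"wind_speed": wind, "mast_rotation": mast_rotation, "hull_noise": hull_noise},
    vmg,
)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

A channel that lands at the bottom of such a ranking is exactly the kind of sensor the team could relocate or supplement, as Stengard notes.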
Burns agrees that without the data mining, they couldn’t have made the boat run as fast. 'The design of the boat was important, but once you’ve got it designed, the whole race is down to how the guys can use it,' he says. 'With Oracle database technology, we could compare our performance from the first day of sailing to the very last day of sailing, with incremental improvements the whole way through. With data mining we could check data against the things we saw, and we could find things that weren’t otherwise easily observable and findable.'
Flying by Data
The greatest challenge of this America’s Cup, according to Burns, was managing the wing sail, which had been built on an unprecedented scale. 'It is truly a massive piece of architecture,' Burns says. 'It’s 20 stories high; it barely fits under the Golden Gate Bridge. It’s an amazing thing to see.'
The wing sail is made of an aeronautical fabric stretched over a carbon fiber frame, giving it the three-dimensional shape of a regular airplane wing. Like an airplane wing, it has a fixed leading edge and an adjustable trailing edge, which allows the crew to change the shape of the sail during the course of a race.
'The crew of the USA was the best group of sailors in the world, but they were used to working with sails,' says Burns. 'Then we put them under a wing. Our chief designer, Mike Drummond, told them an airline pilot doesn’t look out the window when he’s flying the plane; he looks at his instruments, and you guys have to do the same thing.'
A second ship, known as the performance tender, accompanied the USA on the water. The tender served in part as a floating data center and was connected to the USA by wireless LAN.
'The USA generates almost 4,000 variables 10 times a second,' says Burns. 'Sometimes the analysis requires a very complicated combination of 10, 20, or 30 variables fitted through a time-based algorithm to give us predictions on what will happen in the next few seconds, or minutes, or even hours in terms of weather analysis.'
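The simplest form of the short-horizon prediction Burns describes is a rolling linear extrapolation: fit a trend to the most recent window of 10 Hz samples and project it a few seconds ahead. This is a minimal single-variable sketch; the team's actual time-based algorithms combined dozens of variables, and the channel name here is invented for illustration.

```python
def predict_ahead(samples, hz=10.0, horizon_s=2.0, window=50):
    """Least-squares fit a line to the last `window` samples (taken at `hz`
    samples per second) and extrapolate `horizon_s` seconds past the end."""
    xs = [i / hz for i in range(len(samples))][-window:]
    ys = samples[-window:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope * (xs[-1] + horizon_s) + intercept

# Simulated wind-speed channel rising 0.2 knots per second, sampled at 10 Hz.
history = [10.0 + 0.2 * (i / 10.0) for i in range(100)]
forecast = predict_ahead(history)  # value expected two seconds from now
print(f"{forecast:.2f}")
```

On this noiseless ramp the fit recovers the trend exactly; on real sensor data the window length trades responsiveness against noise.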
Like the deeper analysis that Stengard does back at the Austin Data Center, this real-time data management and near-real-time data analysis was done in Oracle Database 11g. 'We could download the data to servers on the tender ship, do some quick analysis, and feed it right back to the USA,' says Burns.
'We started to do better when the guys began using the instruments,' Burns says. 'Then we started to make small adjustments against the predictions and started to get improvements, and every day we were making gains.'
Those gains were incremental and data driven, and they accumulated over years—until the USA could sail at three times the wind speed. Ian Burns is still amazed by the spectacle.
'It’s an awesome thing to watch,' he says. 'Even with all we have learned, I don’t think we have met the performance limits of that beautiful wing.'
Read more about Oracle Data Mining
Hear a podcast interview with Ian Burns
Download Oracle Database 11g Release 2
Story republished from: www.oracle.com/technology/oramag/oracle/10-may/o30racing.html
by Jeff Erickson, 11:41 PM Sat 24 Apr 2010 GMT
Thursday Jul 14, 2011
Oracle Fusion Human Capital Management Application uses Oracle Data Mining for Workforce Predictive Analytics
By Charlie Berger, Advanced Analytics-Oracle on Jul 14, 2011
Oracle's new Fusion Human Capital Management (HCM) Application now embeds predictive models automatically generated by Oracle Data Mining, enriching dashboards and managers' portals with predictions of the likelihood that an employee will voluntarily leave the organization and of the employee's likely future performance. Armed with this new information, based on historical patterns and relationships found by Oracle Data Mining, enterprises can manage their valuable employee assets more proactively and compete more effectively. The integrated Oracle Fusion HCM Application requires the Oracle Data Mining Option to the Oracle Database. With custom predictive models generated from the customer's own data, Oracle Fusion HCM enables managers to better understand their employees, identify the key factors for each individual, and even perform "What if?" analysis to see the likely impact on an employee of adjusting a critical HR factor, e.g., bonus, vacation time, or amount of travel.
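The "What if?" idea is simple to illustrate: score an employee with a trained model, change one HR factor, and score again. The sketch below is a toy analogue, not Oracle Fusion HCM's actual model; the logistic form, the factor names, and the weights are all invented for illustration, whereas the real application builds its models from the customer's own historical data.

```python
import math

def attrition_probability(employee, weights, bias=0.0):
    """Toy logistic scoring function standing in for a trained attrition model."""
    z = bias + sum(weights[k] * v for k, v in employee.items())
    return 1.0 / (1.0 + math.exp(-z))

def what_if(employee, weights, factor, new_value):
    """Re-score the employee with one HR factor changed."""
    scenario = dict(employee, **{factor: new_value})
    return attrition_probability(scenario, weights)

# Invented weights: a bigger bonus lowers attrition risk, heavy travel raises it.
weights = {"bonus_pct": -0.30, "travel_days": 0.05, "vacation_days": -0.02}
emp = {"bonus_pct": 5.0, "travel_days": 40.0, "vacation_days": 15.0}

baseline = attrition_probability(emp, weights)
with_raise = what_if(emp, weights, "bonus_pct", 10.0)
print(f"baseline risk: {baseline:.2f}, after bonus increase: {with_raise:.2f}")
```

Running scenarios like this against a model trained on historical patterns is what lets a manager gauge the likely effect of a bonus or travel change before making it.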
Excerpting from the Oracle Fusion HCM website and collateral: "Every day organizations struggle to answer essential questions about their workforce. How much money are we losing by not having the right talent in place and how is that impacting current projects? What skills will we need in the next 5 years that we don’t have today? How will business be impacted by impending retirements and are we prepared? Fragmented systems and bolt-on analytics are only some of the barriers that HR faces today. The consequences include missed opportunities, lost productivity, attrition, and uncontrolled operational costs. To address these challenges, Oracle Fusion Human Capital Management (HCM) puts information at your fingertips, helps you predict future trends, and enables you to turn insight into action. You will eliminate unnecessary costs, increase workforce productivity and retention, and gain a strategic advantage over your competition. Oracle Fusion HCM has been designed from the ground up so that you can work naturally and intuitively with analytics woven right into the fabric of your business processes."
The Solution Brief http://www.oracle.com/us/products/applications/fusion/fusion-hcm-know-your-people-356192.pdf describes the Predictive Analytics features and benefits in more detail:
"Predictive Analysis: Imagine if you could look ahead and be prepared for upcoming workforce trends. Most organizations do not have the analytic capability to do predictive human capital analysis, yet the worker information needed to make educated forecasts already exists today. Aging populations, shifting demographics, rising and falling economies, and multi-generational issues can have a significant impact on workforce decisions – for employees, managers and HR professionals. Not being able to accurately predict how all the moving parts fit together, and where you really have potential problems, can make or break an organization. Oracle Fusion HCM gives you the ability to finally see into the future, analyzing worker performance potential, risk of attrition, and enabling what-if analysis on ways to improve your workforce. Additionally, modeling capabilities provide you with extra power to bring together information from sources unthinkable in the past. For example, imagine understanding which recruiting agencies are providing the highest-quality recruits by comparing first year performance ratings with sources of hire. Being able to see potential problems before they occur and take immediate action will increase morale, save money, and boost your competitive edge. Result: You are able to look ahead and be prepared for upcoming workforce trends."
There is a great demo of Oracle Fusion HCM Workforce Predictive Analytics that highlights the Oracle Data Mining integration. This is one of the latest examples of Applications "powered by Oracle Data Mining".
When you change your paradigm and move the algorithms to the data rather than the traditional approach of extracting the data and moving it to the algorithms for analysis, it CHANGES EVERYTHING. Keep watching for additional Applications powered by Oracle's in-database advanced analytics.