Oracle Spatial and Graph – technical tips, best practices, and news from the product team

Graph Features

Announcing Oracle RDF Graph Server and Query UI

Oracle Graph Server and Client 20.3 includes a new RDF graph feature: Oracle RDF Graph Server and Query UI. This feature provides REST services and a Query UI for rapid development of a SPARQL endpoint that can be used to query RDF graphs in Oracle Database. The REST API enables communication with Oracle Database to create, update, and delete RDF graphs, rule bases, and entailments managed in Oracle Database. The REST API can also be used for loading data, creating indexes, and other data management operations. RDF Graph Server eliminates the need for third-party libraries (such as Apache Jena) to create a SPARQL endpoint. The RDF Graph Server can be deployed in an application server such as Oracle WebLogic Server.

The Query UI is an Oracle JET-based client for building application web pages that query and display RDF graphs. It supports queries across multiple data sources. A data source can be an RDF graph in Oracle Database, or an external data source accessed by a SPARQL endpoint URL, such as DBpedia (https://dbpedia.org/sparql), enabling federated query across internal and external data sources.

Oracle RDF Graph Server and Query UI is supported for use with RDF graphs in Oracle Autonomous Database, Oracle Database cloud services, Oracle Exadata, and on-premises databases. In Oracle Cloud it can be deployed on a compute node using the OCI Marketplace image for easy installation. The RDF Graph Server can then connect to an Oracle Database service to access RDF graphs in the database. For on-premises systems, visit the Oracle Graph Server and Client downloads page and download 'Oracle Graph Webapps.' This download includes orardf-20.3.war and the rdf-docs directory, which contains the documentation. You will need these resources to deploy Oracle RDF Graph Server and Query UI.
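For readers who want a sense of what such an endpoint executes against the database, RDF graphs in Oracle can also be queried directly in SQL through the SEM_MATCH table function. The following is a minimal sketch of the kind of SPARQL query the server would serve; the model name ARTICLES is a hypothetical example, not part of the release.

-- List ten triples from a hypothetical RDF model named ARTICLES
SELECT s, p, o
FROM TABLE(SEM_MATCH(
  'SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10',
  SEM_MODELS('ARTICLES'),
  null, null, null, null,
  ' PLUS_RDFT=VC '));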

Graph Features

Graph Server and Client 20.3 is now available

A new release of Graph Server and Client (version 20.3) is now available. Download it from the Oracle Software Delivery Cloud or from the Graph page on oracle.com. New features and enhancements include:

- PGQL plugin for SQLcl: issue PGQL queries from SQLcl (see the sketch after this list).
- RDF Graph Server: enables a SPARQL endpoint without dependency on third-party libraries. Issue SPARQL queries against Oracle or other RDF sources, and manage Oracle RDF content.
- Synchronizer API: keep graphs in the Graph Server (PGX) and the database in sync.
- Updated OCI Marketplace image: a WebLogic license is no longer required.
- Graph Server authentication: use the database as an identity provider.
- Updated PL/SQL packages: removes the MAX_STRING_SIZE = EXTENDED requirement.

Further details can be found in the documentation. The user guide for the PGQL plugin for SQLcl is available here.

Note: The new Graph Server (PGX) authentication model is a breaking change and is mandatory for all deployments.

There will be a series of blog posts over the next few weeks covering the functionality and usage of each of the key enhancements and new features:

- The new Graph Server authentication and authorization model
- The PGQL plugin for SQLcl
- The Synchronizer API
- The RDF Graph Server
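As a taste of the SQLcl plugin, here is a minimal sketch of issuing a PGQL query from a SQLcl session. The graph name MY_GRAPH and the Person label are hypothetical, and the exact commands and options are documented in the plugin user guide linked above.

pgql auto on;

-- with PGQL mode on, statements are parsed as PGQL rather than SQL
SELECT p.name
FROM MATCH (p:Person) ON my_graph
LIMIT 5;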

Graph Features

Oracle Graph Server and Client 20.2 is available

By Melliyal Annamalai, Sr. Principal Product Manager, Oracle

Oracle Graph Server and Client 20.2, a software package for use with the Property Graph feature of Oracle Database, has the following new features. Note that Oracle Graph Server and Client works with Oracle Database 12.2 and higher.

- Complete support for Property Graph Query Language (PGQL) 1.3
- OCI Marketplace image for easy installation of Graph Server and Client
- The GraphViz tool can visualize PGQL queries executed in the database (in addition to visualizing PGQL queries executed in the graph server, PGX)
- Updated graph PL/SQL packages that can be installed on user-managed and on-premises databases to remove restrictions when working with prior database versions
- Other features: a .bat script to run the JShell CLI on Windows, and the graph server includes .war files for Oracle WebLogic Server 12.1.x and 12.2.x

PGQL 1.3 highlights include (see the sketch after this list):

- CREATE PROPERTY GRAPH statement to create graphs from database tables, and DROP PROPERTY GRAPH statement to delete a graph (these are also available in Oracle Graph Server and Client 20.1)
- INSERT, UPDATE, and DELETE clauses to modify graphs
- Single CHEAPEST path and TOP-K CHEAPEST path using cost functions
- Graph names can now be schema-qualified

The download is available on edelivery.oracle.com and oracle.com. Download and try it out by following the install instructions and quick start in chapter 1 of the documentation. Oracle Graph Server and Client is typically installed on a separate Linux machine; for Cloud deployment this would be a compute node instance.

The Property Graph feature of Oracle Database supports storage, query, analytics, and visualization of graph data. Graphs can be created from data in database tables, enabling new insights into your data by analyzing the connections and relationships in it. Learn more at oracle.com/goto/propertygraph.
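To give a flavor of the PGQL 1.3 DDL, the sketch below creates a graph from two hypothetical relational tables, PERSONS and FRIENDSHIPS; all table, column, and label names here are illustrative, not taken from the release.

CREATE PROPERTY GRAPH friends_graph
  VERTEX TABLES (
    persons
      KEY ( person_id )
      LABEL person
      PROPERTIES ( name, dob )
  )
  EDGE TABLES (
    friendships
      KEY ( friendship_id )
      SOURCE KEY ( person_a ) REFERENCES persons
      DESTINATION KEY ( person_b ) REFERENCES persons
      LABEL knows NO PROPERTIES
  )

DROP PROPERTY GRAPH friends_graph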

Graph Features

Database Compatibility Matrix for Oracle Graph Server and Client

By Melliyal Annamalai, Sr. Principal Product Manager, Oracle

Oracle Graph Server and Client is a software package required for use with the Property Graph feature of Oracle Database. It is available for separate download from e-delivery and oracle.com. Any version of Oracle Graph Server and Client works with Oracle Database versions 12.2 onward, so you can use the latest Graph Server and Client features with prior database versions. Oracle Autonomous Database, Oracle Database Cloud Service, Oracle Exadata Cloud Service, and Oracle Database on-premises are all supported.

The tables below list some limitations when using Oracle Graph Server and Client with prior Oracle Database versions. Oracle Graph Server and Client 20.2 includes updated PL/SQL packages as part of the download. You can remove the limitations in user-managed databases by re-installing the PL/SQL packages (instructions are in the README files in the PL/SQL component of the kit).

Table 1-1: Limitations in prior Oracle Database versions (without re-installing updated PL/SQL packages)

- Oracle Database 19c:
  - NULL property values when running PGQL queries in the database: use PROJ_NULL_PROPS=T
  - No support for dropping a graph in a different schema using PGQL DDL
- Oracle Database 18c: the 19c restrictions, plus:
  - No support for vertex labels; the workaround is to store the label as a property in the database and use a configuration property flag to load the property as a label into the graph server (PGX)
- Oracle Database 12.2: the 18c and 19c restrictions, plus:
  - No support for INSERT/UPDATE/DELETE in PGQL
  - No support for sharded databases

Table 1-2: Limitations in Oracle Database (after re-installing updated PL/SQL packages)

There are no limitations in Oracle Database 18c and Oracle Database 19c after re-installing the PL/SQL packages.

- Oracle Database 12.2:
  - No support for INSERT/UPDATE/DELETE in PGQL
  - No support for sharded databases

Graph Features

KGC 2020 Tutorial: Modeling Evolving Data in Graphs While Preserving Backward Compatibility

We are giving a tutorial on modeling evolving graph data on May 5, 2020 at the Knowledge Graph Conference (KGC 2020): https://www.knowledgegraph.tech/the-knowledge-graph-conference-kgc/workshops-and-tutorials/#1580503925986-452b8bbf-6757

Tutorial slides are available here. The tutorial will include several live demonstrations. This blog post describes how to set up the environment used during the tutorial for those who want to follow along or try things out themselves. All Oracle software used is free for evaluation purposes.

Setting up VirtualBox and Importing the VM

Download and install VirtualBox: https://www.virtualbox.org/

Download the pre-built DB Developer Day VM (7.3 GB): https://www.oracle.com/database/technologies/databaseappdev-vm.html

Follow the directions at the bottom of the download page to import the VM (~30 GB total with data).

Downloading Software and Data

After importing the Developer Day VM, start the VM, open a Firefox browser window inside the VM (Applications->Favorites->Firefox), and download the following Oracle software.

Download the latest Oracle Jena Adapter from OTN (Oracle Database 19c, 18c, and 12c Support for Apache Jena 3.1.0, Apache Jena Fuseki 2.4, and Protégé Desktop 5.2): https://www.oracle.com/database/technologies/semantic-technologies-downloads.html

Download the latest Oracle Graph Server and Client from OTN (Oracle Graph Server): https://www.oracle.com/database/technologies/spatialandgraph/property-graph-features/graph-server-and-client/graph-server-and-client-downloads.html

Download Java 11, which is needed to run JShell (jdk-11.0.6_linux-x64_bin.tar.gz): https://www.oracle.com/java/technologies/javase-jdk11-downloads.html

This tutorial will use some RDF data about movies. Specifically, it will use the Linked Movie Data Base (LMDB) dataset from a University of Toronto project described in this paper: http://events.linkeddata.org/ldow2009/papers/ldow2009_paper12.pdf. An RDF dump of this dataset is available here: http://www.cs.toronto.edu/~oktie/linkedmdb/ We will use the 3 million triple dataset (linkedmdb-18-05-2009-dump.tar.gz) for this demo. Follow the link and download the dataset using Firefox in your VM.

Configuring the Environment

1. Configuring Support for Apache Jena

Open a terminal (Applications->Favorites->Terminal) and execute the following steps.

[oracle@localhost oracle]$ cd /home/oracle
[oracle@localhost oracle]$ mkdir RDF
[oracle@localhost oracle]$ cd RDF
[oracle@localhost RDF]$ mkdir Jena
[oracle@localhost RDF]$ cd Jena/
[oracle@localhost Jena]$ mv ~/Downloads/Oracle19c_Jena-3.1.0_Build_20191127.zip .
[oracle@localhost Jena]$ ls
Oracle19c_Jena-3.1.0_Build_20191127.zip
[oracle@localhost Jena]$ unzip Oracle19c_Jena-3.1.0_Build_20191127.zip
Archive:  Oracle19c_Jena-3.1.0_Build_20191127.zip
   creating: joseki/
  inflating: joseki/application.xml
  …
   creating: fuseki/run/logs/
   creating: fuseki/run/system_files/
[oracle@localhost Jena]$ ls
bin             jar              protege_plugin
bug_notes.txt   javadoc          README
examples        joseki           release_notes.txt
fuseki          joseki-web-app   sparqlgateway-web-app
fuseki-web-app  Oracle19c_Jena-3.1.0_Build_20191127.zip  version.txt

2. Configuring Oracle Graph Server and Client

2.1 Installing Oracle Graph Server and Client

Execute the following steps in a terminal.

[oracle@localhost Jena]$ cd /home/oracle/RDF/
[oracle@localhost RDF]$ ls
Jena
[oracle@localhost RDF]$ mv ~/Downloads/oracle-graph-20.1.0.x86_64.rpm .
[oracle@localhost RDF]$ ls
Jena  oracle-graph-20.1.0.x86_64.rpm
[oracle@localhost RDF]$ sudo rpm -i oracle-graph-20.1.0.x86_64.rpm
[sudo] password for oracle: oracle
[oracle@localhost RDF]$ ls /etc/oracle/graph
log4j2-server.xml  log4j2.xml  pgx.conf  server.auth.conf  server.conf
[oracle@localhost RDF]$ ls /opt/oracle/graph
bin  COPYRIGHT  data  doc  examples  graphviz  hadoop  lib  pgx  scripts
[oracle@localhost RDF]$ sudo chown oracle /opt/oracle/graph/pgx/tmp_data/
[sudo] password for oracle:

2.2 Setting up Java 11

Execute the following commands from a terminal.

[oracle@localhost RDF]$ cd /home/oracle/RDF/
[oracle@localhost RDF]$ mv ~/Downloads/jdk-11.0.6_linux-x64_bin.tar.gz .
[oracle@localhost RDF]$ ls
jdk-11.0.6_linux-x64_bin.tar.gz  Jena  oracle-graph-20.1.0.x86_64.rpm
[oracle@localhost RDF]$ tar -xzf jdk-11.0.6_linux-x64_bin.tar.gz
[oracle@localhost RDF]$ ls
jdk-11.0.6  Jena  jdk-11.0.6_linux-x64_bin.tar.gz  oracle-graph-20.1.0.x86_64.rpm
[oracle@localhost RDF]$

3. Setting up RDF Knowledge Graph in Oracle Database

Oracle Database 19.3 should already be installed and running when you start the VM. First, we will create a directory in /usr to store a tablespace for our RDF data. Open a terminal and execute the following commands.

[oracle@localhost ~]$ cd /usr/
[oracle@localhost usr]$ sudo mkdir dbs
[sudo] password for oracle:
[oracle@localhost usr]$ ls -l
total 4
dr-xr-xr-x. 1 root root 31422 May 31  2019 bin
drwxr-xr-x. 1 root root     0 May  1 15:01 dbs
drwxr-xr-x. 1 root root     0 Apr 10  2018 etc
drwxr-xr-x. 1 root root     0 Apr 10  2018 games
drwxr-xr-x. 1 root root  2626 May 31  2019 include
dr-xr-xr-x. 1 root root  4488 May 31  2019 lib
dr-xr-xr-x. 1 root root 76110 May 31  2019 lib64
drwxr-xr-x. 1 root root  8672 May 31  2019 libexec
drwxr-xr-x. 1 root root    90 Apr 10  2018 local
dr-xr-xr-x. 1 root root 12296 May  1 09:48 sbin
drwxr-xr-x. 1 root root  4080 May 31  2019 share
drwxr-xr-x. 1 root root    24 May 31  2019 src
lrwxrwxrwx. 1 root root    10 May 31  2019 tmp -> ../var/tmp
[oracle@localhost usr]$ sudo chown oracle dbs
[oracle@localhost usr]$ ls -l
total 4
dr-xr-xr-x. 1 root   root 31422 May 31  2019 bin
drwxr-xr-x. 1 oracle root     0 May  1 15:01 dbs
drwxr-xr-x. 1 root   root     0 Apr 10  2018 etc
drwxr-xr-x. 1 root   root     0 Apr 10  2018 games
drwxr-xr-x. 1 root   root  2626 May 31  2019 include
dr-xr-xr-x. 1 root   root  4488 May 31  2019 lib
dr-xr-xr-x. 1 root   root 76110 May 31  2019 lib64
drwxr-xr-x. 1 root   root  8672 May 31  2019 libexec
drwxr-xr-x. 1 root   root    90 Apr 10  2018 local
dr-xr-xr-x. 1 root   root 12296 May  1 09:48 sbin
drwxr-xr-x. 1 root   root  4080 May 31  2019 share
drwxr-xr-x. 1 root   root    24 May 31  2019 src
lrwxrwxrwx. 1 root   root    10 May 31  2019 tmp -> ../var/tmp
[oracle@localhost usr]$

To use the RDF Knowledge Graph feature, we need to create a database user and an RDF network. Open a text editor (Applications->Accessories->Text Editor) and copy/paste the following SQL script.

set echo on;
set serverout on;
set linesize 160;
set pagesize 10000;
set timing on;

conn sys/oracle as sysdba;
alter session set container=orcl;

-- create rdfuser
grant connect,resource,create view to rdfuser identified by rdfuser;

-- create network and temporary tablespaces
-- using /usr/dbs/ instead of /u01/app/... because /usr is mounted on a different
-- virtual disk that has more free space
create tablespace RDFTBS
  datafile '/usr/dbs/rdftbs.dbf' size 512M reuse
  autoextend on next 128M maxsize 10G
  segment space management auto;

create temporary tablespace TEMPTBS
  tempfile '/usr/dbs/temptbs.dbf' size 128M reuse
  autoextend on next 128M maxsize 5G;

alter database default temporary tablespace TEMPTBS;

-- grant tablespace quotas
alter user rdfuser quota unlimited on RDFTBS;
alter user rdfuser default tablespace RDFTBS;
alter user MDSYS quota unlimited on RDFTBS;

-- create semantic network
conn system/oracle@orcl;
exec sem_apis.create_sem_network('RDFTBS');

-- setup indexes
-- drop PSCGM index
exec sem_apis.drop_sem_index('PSCGM');
-- add SPCGM index
exec sem_apis.add_sem_index('SPCGM');
-- add CSPGM index
exec sem_apis.add_sem_index('CSPGM');

-- create a bulk load event trace table in rdfuser's schema
conn rdfuser/rdfuser@orcl;
CREATE TABLE RDF$ET_TAB (
  proc_sid      VARCHAR2(30),
  proc_sig      VARCHAR2(200),
  event_name    VARCHAR2(200),
  start_time    TIMESTAMP,
  end_time      TIMESTAMP,
  start_comment VARCHAR2(1000) DEFAULT NULL,
  end_comment   VARCHAR2(1000) DEFAULT NULL
);

-- Grant privileges on RDF$ET_TAB to MDSYS
GRANT INSERT, UPDATE on RDF$ET_TAB to MDSYS;

Save the file as rdf_network_setup.sql in /home/oracle/RDF.

Double-click the sqlcl icon on the Desktop to start SQLcl. Use system/oracle for the username and password.

SQLcl: Release 19.1 Production on Fri May 01 15:23:10 2020
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Username? (''?) system
Password? (**********?) ******
Last Successful login time: Fri May 01 2020 15:23:17 -04:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL>

Now execute the rdf_network_setup.sql script that we created.

SQL> @ /home/oracle/RDF/rdf_network_setup.sql
SQL> set serverout on;
SQL> set linesize 160;
SQL> set pagesize 10000;
SQL> set timing on;
SQL> conn sys/oracle as sysdba;
Connected.
SQL> alter session set container=orcl;
Session altered.
Elapsed: 00:00:00.016
SQL> -- create rdfuser
SQL> grant connect,resource,create view to rdfuser identified by rdfuser;
Grant succeeded.
Elapsed: 00:00:00.077
SQL> -- create network and temporary tablespaces
SQL> -- using /usr/dbs/ instead of /u01/app/... because /usr is mounted on a different
SQL> -- virtual disk that has more free space
SQL> create tablespace RDFTBS datafile '/usr/dbs/rdftbs.dbf' size 512M reuse AUTOEXTEND ON NEXT 128M MAXSIZE 10G segment space management auto;
Tablespace created.
Elapsed: 00:00:01.372
SQL> create temporary tablespace TEMPTBS tempfile '/usr/dbs/temptbs.dbf' size 128M reuse AUTOEXTEND ON NEXT 128M MAXSIZE 5G;
Tablespace created.
Elapsed: 00:00:00.069
SQL> alter database default temporary tablespace TEMPTBS;
Database altered.
Elapsed: 00:00:00.030
SQL> -- grant tablespace quotas
SQL> alter user rdfuser quota unlimited on RDFTBS;
User altered.
Elapsed: 00:00:00.017
SQL> alter user rdfuser default tablespace RDFTBS;
User altered.
Elapsed: 00:00:00.010
SQL> alter user MDSYS quota unlimited on RDFTBS;
User altered.
Elapsed: 00:00:00.018
SQL> -- create semantic network
SQL> conn system/oracle@orcl;
Connected.
SQL> exec sem_apis.create_sem_network('RDFTBS');
PL/SQL procedure successfully completed.
Elapsed: 00:00:19.331
SQL> -- setup indexes
SQL> -- drop PSCGM index
SQL> exec sem_apis.drop_sem_index('PSCGM');
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.770
SQL> -- add SPCGM index
SQL> exec sem_apis.add_sem_index('SPCGM');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.472
SQL> -- add CSPGM index
SQL> exec sem_apis.add_sem_index('CSPGM');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.056
SQL> -- create a bulk load event trace table in rdfuser's schema
SQL> conn rdfuser/rdfuser@orcl;
Connected.
SQL> CREATE TABLE RDF$ET_TAB (
  2    proc_sid VARCHAR2(30),
  3    proc_sig VARCHAR2(200),
  4    event_name varchar2(200),
  5    start_time timestamp,
  6    end_time timestamp,
  7    start_comment varchar2(1000) DEFAULT NULL,
  8    end_comment varchar2(1000) DEFAULT NULL
  9  );
Table created.
Elapsed: 00:00:00.032
SQL> -- Grant privileges on RDF$ET_TAB to MDSYS
SQL> GRANT INSERT, UPDATE on RDF$ET_TAB to MDSYS;
Grant succeeded.
Elapsed: 00:00:00.006
SQL>

Loading the Linked Movie Data Base RDF Data

We will use the orardfldr command line utility included with Support for Apache Jena. First, prepare the LMDB data for loading. Open a terminal and execute the following commands.

[oracle@localhost ~]$ cd /home/oracle/RDF
[oracle@localhost RDF]$ mkdir LMDB
[oracle@localhost RDF]$ cd LMDB
[oracle@localhost LMDB]$ mkdir DATA
[oracle@localhost LMDB]$ cd DATA/
[oracle@localhost DATA]$ mv ~/Downloads/linkedmdb-18-05-2009-dump.tar.gz .
[oracle@localhost DATA]$ ls
linkedmdb-18-05-2009-dump.tar.gz
[oracle@localhost DATA]$ tar -xzf linkedmdb-18-05-2009-dump.tar.gz
[oracle@localhost DATA]$ ls
dump.nt  linkedmdb-18-05-2009-dump.tar.gz
[oracle@localhost DATA]$ rm -f linkedmdb-18-05-2009-dump.tar.gz
[oracle@localhost DATA]$ ls
dump.nt

Some URIs in the LMDB dataset have invalid characters that should be URL encoded (", }, space, and `). We will use perl to do this URL encoding before loading the data. Execute the following commands from a terminal.

[oracle@localhost DATA]$ cd /home/oracle/RDF/LMDB/DATA/
[oracle@localhost DATA]$ ls
dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)"([^\>]*)\>/\<$1%22$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)"([^\>]*)\>/\<$1%22$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)\}([^\>]*)\>/\<$1%7D$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*) ([^\>]*)\>/\<$1%20$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)`([^\>]*)\>/\<$1%60$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)`([^\>]*)\>/\<$1%60$2\>/g' dump.nt
[oracle@localhost DATA]$ perl -pi -e 's/\<([^\>]*)`([^\>]*)\>/\<$1%60$2\>/g' dump.nt
[oracle@localhost DATA]$

Now we can set up the orardfldr utility. First, we need to define an $ORACLE_JENA_HOME environment variable and add $ORACLE_JENA_HOME/bin to our PATH. Run the following commands in a terminal.

[oracle@localhost DATA]$ cd /home/oracle/RDF/LMDB
[oracle@localhost LMDB]$ export ORACLE_JENA_HOME=/home/oracle/RDF/Jena
[oracle@localhost LMDB]$ export PATH=$PATH:$ORACLE_JENA_HOME/bin
[oracle@localhost LMDB]$ orardfldr --help
Usage: orardfldr --modelName=NAME --fileDir=DIR --lang=FMT --jdbcUrl=URL --user=USER --password=PASSWORD
                 [--bulkLoadFlags="FLAGS"] [--networkName=NAME] [--networkOwner="OWNER"]
                 [--numThreads=INT] [--rebuildAppTabIdx=BOOLEAN] [--truncateStgTab=BOOLEAN]
Performs an RDF bulk load of all files in a directory into Oracle Database.
Mandatory Arguments:
  --modelName=NAME           Target semantic model for the load
  --fileDir=DIR              Data directory for the load
  --lang=FMT                 Data format (N-TRIPLE, RDF/XML, TURTLE, N-QUADS, TRIG)
  --jdbcUrl=URL              JDBC url for the target database
  --user="USER"              Database user name (case sensitive)
  --password=PASSWORD        Database user password
Optional Arguments:
  --bulkLoadFlags="FLAGS"    Flags string to use with SEM_APIS.BULK_LOAD_FROM_STAGING_TABLE
  --networkName=NAME         Name of the semantic network to load into (default MDSYS network if omitted)
  --networkOwner="OWNER"     Owner of the semantic network to load into (case sensitive, MDSYS is used if omitted)
  --numThreads=INT           Number of threads to use when populating the staging table (default 1)
  --rebuildAppTabIdx=BOOLEAN TRUE (default) to do bottom-up rebuilding of application table indexes
  --truncateStgTab=BOOLEAN   TRUE (default) to truncate pre-existing data in the staging table
[oracle@localhost LMDB]$

Next, invoke orardfldr on the /home/oracle/RDF/LMDB/DATA directory to load the LMDB N-triple file. This will create an RDF model named LMDB and perform a bulk load into the model. Expect this step to take up to 10 minutes depending on your VM resources.

[oracle@localhost LMDB]$ cd /home/oracle/RDF/LMDB
[oracle@localhost LMDB]$ orardfldr --modelName=LMDB --fileDir=./DATA --lang=N-TRIPLE --jdbcUrl=jdbc:oracle:thin:@localhost:1521/orcl --user=RDFUSER --password=rdfuser --bulkLoadFlags="MBV_METHOD=SHADOW PARALLEL=2"
loadRDF: enabling parallel DDL/DML/query for session [0]
loadRDF: truncating staging tables
loadRDF in parallel: PrepareWorker [0] running
PrepareWorker: thread [0] process to 10000
PrepareWorker: thread [0] process to 20000
…
PrepareWorker: thread [0] process to 3560000
PrepareWorker: thread [0] process to 3570000
PrepareWorker: thread [0] done to 0, file = ./DATA/dump.nt in (ms) 126155
loadRDF: preparing for bulk load
loadRDF: setting application table index RDFUSER.LMDB_IDX to UNUSABLE
loadRDF: starting bulk load
loadRDF: bulk load flags="MBV_METHOD=SHADOW PARALLEL=2"
loadRDF: bulk load completed in (ms) 541741
loadRDF: rebuilding application table index RDFUSER.LMDB_IDX
loadRDF: finished rebuilding application table index RDFUSER.LMDB_IDX in (ms) 33639
[oracle@localhost LMDB]$

Next we need to gather statistics for the RDF data we loaded. Open SQLcl by double-clicking the sqlcl icon on the desktop. Use system/oracle for the user and password. Then execute the following commands to gather statistics.

SQLcl: Release 19.1 Production on Fri May 01 17:23:44 2020
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Username? (''?) system
Password? (**********?) ******
Last Successful login time: Fri May 01 2020 17:23:51 -04:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> exec sem_perf.gather_stats(estimate_percent=>20, degree=>2);
PL/SQL procedure successfully completed.
SQL>

Using SQL Developer

Now we can open SQL Developer and configure the RDF Semantic Graph component. Double-click the SQL Developer icon on the Desktop. First, we need to create connections for SYSTEM and RDFUSER. Click the green plus sign on the top left to create a new connection. Fill in the form using system/oracle, localhost, 1521, and orcl. Click Test to verify your information and then click Save. Note that orcl is the Service name, not the SID, which is selected by default. Next, fill in the form with rdfuser/rdfuser, localhost, 1521, and orcl. Click Test to verify your information and then click Save to save the connection.
Again, note that orcl is the Service name, not the SID, which is selected by default.

Before using the RDF Semantic Graph component of SQL Developer, we need to enable some features in the database. Expand the system connection in the connection tree and right-click RDF Semantic Graph. Select Setup RDF Semantic Network, then click Apply in the dialog that opens. Now the RDF Semantic Graph plugin for SQL Developer is ready to use on this database. This operation needs to be performed only once for each database.

Running SPARQL Queries Against the LMDB RDF Dataset

Expand the rdfuser connection in the connection tree of SQL Developer. Then expand RDF Semantic Graph, Networks, MDSYS, and Regular Models. Now click LMDB under Regular Models. This will open a SPARQL editor for LMDB, and we can execute several example SPARQL queries such as the ones below. Click the green triangle to run a query. Also, the Templates drop-down lists several templates for common query patterns that can be loaded into the editor. A complete list of queries used in the tutorial is available here.

# select modifiers
# Distinct genres of movies with Keanu Reeves
PREFIX     rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX    rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX     owl: <http://www.w3.org/2002/07/owl#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX   movie: <http://data.linkedmdb.org/movie/>
SELECT DISTINCT ?gname
WHERE {
  ?movie movie:actor ?actor .
  ?actor movie:actor_name "Keanu Reeves" .
  ?movie movie:genre ?genre .
  ?genre movie:film_genre_name ?gname .
}

# Aggregates
# Find the 10 movie series with the most movies
PREFIX     rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX    rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX     owl: <http://www.w3.org/2002/07/owl#>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX   movie: <http://data.linkedmdb.org/movie/>
SELECT ?sname (COUNT(?movie) AS ?mcnt)
WHERE {
  ?movie movie:film_series ?series .
  ?series movie:film_series_name ?sname .
}
GROUP BY ?sname
ORDER BY DESC(?mcnt)
LIMIT 10

Performing SPARQL Update Operations on LMDB

We will use the SQL interface to perform updates on the LMDB model to show how this data can evolve with movie rating information. Open a SQL worksheet for rdfuser: File->New, then select Database Files and click OK. Pick a file name and location and click OK. Be sure to select the rdfuser connection for this worksheet in the connection drop-down menu on the top right. Now we can perform some SPARQL Update operations using the SEM_APIS PL/SQL package. Some example updates are shown below. The full list of movie review updates is available here, and corresponding queries are available here.

begin
  sem_apis.update_model('LMDB',
    'PREFIX   movie: <http://data.linkedmdb.org/film/>
     PREFIX    foaf: <http://xmlns.com/foaf/0.1/>
     PREFIX dcterms: <http://purl.org/dc/terms/>
     PREFIX    sioc: <http://rdfs.org/sioc/ns#>
     PREFIX  schema: <http://schema.org/>
     PREFIX        : <http://example.com/data/>
     INSERT DATA {
       # John likes Office Space
       :person123 foaf:name "John" ;
                  sioc:likes movie:31916 .
     }'
  );
end;
/

begin
  sem_apis.update_model('LMDB',
    'PREFIX   movie: <http://data.linkedmdb.org/film/>
     PREFIX    foaf: <http://xmlns.com/foaf/0.1/>
     PREFIX dcterms: <http://purl.org/dc/terms/>
     PREFIX    sioc: <http://rdfs.org/sioc/ns#>
     PREFIX  schema: <http://schema.org/>
     PREFIX        : <http://example.com/data/>
     # remove triple
     DELETE DATA { :person123 sioc:likes movie:31916 . };
     INSERT DATA {
       # replace triple with quad assigning :edge1 as id
       GRAPH :edge1 { :person123 sioc:likes movie:31916 . }
       # add edge property for rating
       :edge1 schema:ratingValue 5 .
     }'
  );
end;
/

Graph Analytics with RDF Data

The next portion of the tutorial will illustrate how to extract a subgraph from our LMDB RDF data and use an in-memory graph analytics engine on the extracted subgraph.

Extracting the Subgraph

We will use the SEM_MATCH SQL table function to create some database views that represent vertex and edge tables for our extracted subgraph. SEM_MATCH allows the execution of SPARQL queries within SQL statements. Use a SQL worksheet to create the following relational views in the rdfuser schema. The full script is available here.

-- Create node and edge views for PGX

-- Actor view
CREATE VIEW ACTORS AS
SELECT ACTOR$RDFVID AS ID, 'Actor' AS "label", NAME AS "name"
FROM TABLE(SEM_MATCH(
  'PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   SELECT ?actor ?name
   WHERE { ?actor rdf:type <http://data.linkedmdb.org/movie/actor> ;
                  <http://data.linkedmdb.org/movie/actor_name> ?name }',
  SEM_MODELS('LMDB'),
  null, null, null, null,
  ' PLUS_RDFT=VC DO_UNESCAPE=T ALLOW_DUP=T '));

-- Movie view
CREATE VIEW MOVIES AS
SELECT MOVIE$RDFVID AS ID, 'Movie' AS "label", MTITLE AS "title", MGENRE AS "genre"
FROM TABLE(SEM_MATCH(
  'PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   SELECT ?movie (MAX(STR(?genre)) AS ?mGenre) (MAX(STR(?title)) AS ?mTitle)
   WHERE { ?movie rdf:type <http://data.linkedmdb.org/movie/film> ;
                  <http://purl.org/dc/terms/title> ?title ;
                  <http://data.linkedmdb.org/movie/genre>/<http://data.linkedmdb.org/movie/film_genre_name> ?genre }
   GROUP BY ?movie',
  SEM_MODELS('LMDB'),
  null, null, null, null,
  ' PLUS_RDFT=VC DO_UNESCAPE=T ALLOW_DUP=T '));

-- Edge views
-- Forward edge
CREATE VIEW ACTED_IN AS
SELECT ACTOR$RDFVID AS SOURCE_ID, MOVIE$RDFVID AS DEST_ID, 'acted_in' AS "label"
FROM TABLE(SEM_MATCH(
  'PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   SELECT ?actor ?movie
   WHERE { ?movie <http://data.linkedmdb.org/movie/actor> ?actor }',
  SEM_MODELS('LMDB'),
  null, null, null, null,
  ' PLUS_RDFT=VC DO_UNESCAPE=T ALLOW_DUP=T '));

-- Backward edge
CREATE VIEW HAD_ACTOR AS
SELECT MOVIE$RDFVID AS SOURCE_ID, ACTOR$RDFVID AS DEST_ID, 'had_actor' AS LABEL
FROM TABLE(SEM_MATCH(
  'PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   SELECT ?actor ?movie
   WHERE { ?movie <http://data.linkedmdb.org/movie/actor> ?actor }',
  SEM_MODELS('LMDB'),
  null, null, null, null,
  ' PLUS_RDFT=VC DO_UNESCAPE=T ALLOW_DUP=T '));

Setting up Oracle Graph Server and Client

We will use JShell for this portion of the tutorial. First, we need to open a terminal and set up the environment. Execute the following commands in a terminal.
[oracle@localhost oracle]$ export JAVA_HOME=/home/oracle/RDF/jdk-11.0.6
[oracle@localhost oracle]$ export PATH=$JAVA_HOME/bin:$PATH
[oracle@localhost oracle]$ which java
~/RDF/jdk-11.0.6/bin/java
[oracle@localhost oracle]$

In order to load a graph from Oracle Database, we need to set up a Java keystore file for the database connection password, and we need to create a configuration file for the graph we are loading. Create a file called config.json in the RDF directory with the following contents. The full config.json file is available here.

{
  "name": "lmdb",
  "jdbc_url": "jdbc:oracle:thin:@localhost:1521/orcl",
  "username": "rdfuser",
  "keystore_alias": "database1",
  "vertex_id_strategy": "keys_as_ids",
  "vertex_providers": [
    {
      "name": "Actor",
      "format": "rdbms",
      "database_table_name": "ACTORS",
      "key_column": "ID",
      "key_type": "long",
      "props": [
        { "name": "name", "type": "string" }
      ]
    },
    {
      "name": "Movie",
      "format": "rdbms",
      "database_table_name": "MOVIES",
      "key_column": "ID",
      "key_type": "long",
      "props": [
        { "name": "title", "type": "string" },
        { "name": "genre", "type": "string" }
      ]
    }
  ],
  "edge_providers": [
    {
      "name": "acted_in",
      "format": "rdbms",
      "database_table_name": "ACTED_IN",
      "source_column": "SOURCE_ID",
      "destination_column": "DEST_ID",
      "source_vertex_provider": "Actor",
      "destination_vertex_provider": "Movie"
    },
    {
      "name": "had_actor",
      "format": "rdbms",
      "database_table_name": "HAD_ACTOR",
      "source_column": "SOURCE_ID",
      "destination_column": "DEST_ID",
      "source_vertex_provider": "Movie",
      "destination_vertex_provider": "Actor"
    }
  ]
}

Next, we need to set up the Java keystore file. Execute the following in your terminal.

[oracle@localhost oracle]$ cd /home/oracle/RDF
[oracle@localhost RDF]$ keytool -importpass -alias database1 -keystore keystore.p12 -storetype pkcs12
Enter keystore password:
Re-enter new password:
Enter the password to be stored:
Re-enter password:
[oracle@localhost RDF]$

Enter a password for your keystore file and then enter the password for your rdfuser database account.

Using Oracle Graph Server and Client with JShell

This portion of the tutorial will use JShell to demonstrate parts of the Java API for Graph Server and Client. Run the following commands from your terminal to start the JShell client.

[oracle@localhost RDF]$ unset CLASSPATH
[oracle@localhost RDF]$ /opt/oracle/graph/bin/opg-rdbms-jshell --secret_store keystore.p12
12:26:25,578 INFO Ctrl$1 - >>> start engine
enter password for keystore keystore.p12:
For an introduction type: /help intro
Oracle Graph Server Shell 20.1.0
PGX server version: 19.4.0 type: SM running in embedded mode.
PGX server API version: 3.6.0
PGQL version: 1.2
Variables instance, session, and analyst ready to use.
opg-jshell-rdbms>

Run the following commands to load the extracted graph from LMDB into memory.

opg-jshell-rdbms> var graph = session.readGraphWithProperties("config.json");
graph ==> PgxGraph[name=lmdb,N=51456,E=124888,created=1588523322551]
opg-jshell-rdbms>

Now we can use graph.queryPgql to run some PGQL queries. A full list of example queries is available here.
opg-jshell-rdbms> graph.queryPgql(
   ...>   "SELECT a.name as name "+
   ...>   "MATCH (m:Movie)-[:had_actor]->(a:Actor) "+
   ...>   "WHERE m.title = 'Home Alone'").print();
+------------------+
| name             |
+------------------+
| Catherine O'Hara |
| Daniel Stern     |
| John Candy       |
| Macaulay Culkin  |
| John Heard       |
| Daniel Stern     |
| Joe Pesci        |
| Roberts Blossom  |
+------------------+
$2 ==> PgqlResultSetImpl[graph=lmdb,numResults=8]

opg-jshell-rdbms> graph.queryPgql(
   ...>   "SELECT COUNT(e) AS pathLen, ARRAY_AGG(t.title) AS movie, ARRAY_AGG(t.name) AS coStar "+
   ...>   "MATCH SHORTEST ( (a) ((s)-[e:acted_in]-(t))* (b) ) "+
   ...>   "WHERE a.name = 'Charlie Chaplin' AND b.name = 'Mr. T'").print();
+-----------------------------------------------------------------------------------------------------+
| pathLen | movie                                        | coStar                                      |
+-----------------------------------------------------------------------------------------------------+
| 6       | [Limelight, Home for the Holidays, Spy Hard] | [Geraldine Chaplin, Charles Durning, Mr. T] |
+-----------------------------------------------------------------------------------------------------+
$3 ==> PgqlResultSetImpl[graph=lmdb,numResults=1]

We can also use the Analyst API to run graph algorithms over our extracted graph. Use the commands below to run PageRank against the LMDB graph. Running the algorithm adds a pagerank property to each vertex, which we can access with PGQL queries.

opg-jshell-rdbms> var analyst = session.createAnalyst();
analyst ==> NamedArgumentAnalyst[session=6b26e194-3dfe-40af-9e14-e9e34031b546]
opg-jshell-rdbms> VertexProperty pagerank = analyst.pagerank(graph);
pagerank ==> VertexProperty[name=pagerank,type=double,graph=lmdb]
opg-jshell-rdbms> pagerank.getName();
$6 ==> "pagerank"

opg-jshell-rdbms> graph.queryPgql("select id(a), a.pagerank, a.title match (a:Movie) order by a.pagerank desc limit 10").print();
+-------------------------------------------------------------------------------------------+
| id(a)               | a.pagerank            | a.title                                     |
+-------------------------------------------------------------------------------------------+
| 9203356100102031272 | 4.5957911542206383E-4 | Stranger Than Fiction                       |
| 3043577596047303050 | 3.7362096519635195E-4 | Walk Hard: The Dewey Cox Story              |
| 1551281233598901313 | 3.0906541971418243E-4 | 30 Days of Night                            |
| 4716692856789145745 | 2.9467453955043946E-4 | Talladega Nights: The Ballad of Ricky Bobby |
| 5136965450075342208 | 2.7029926744780766E-4 | Untraceable                                 |
| 8906191549691061753 | 2.366584825237721E-4  | Baby Boy                                    |
| 1924525656707524741 | 2.2347530719121642E-4 | Night at the Museum                         |
| 2044045883429832681 | 2.0619976574073263E-4 | Adventures Into Digital Comics              |
| 3830516857956502081 | 2.057737297912671E-4  | The Other Boleyn Girl                       |
| 1250359656689937498 | 1.920351747698865E-4  | First Sunday                                |
+-------------------------------------------------------------------------------------------+
$7 ==> PgqlResultSetImpl[graph=lmdb,numResults=10]

opg-jshell-rdbms> graph.queryPgql("select id(a), a.pagerank, a.name match (a:Actor) order by a.pagerank desc limit 10").print();
+-----------------------------------------------------------------+
| id(a)               | a.pagerank            | a.name            |
+-----------------------------------------------------------------+
| 6918926442303567142 | 3.652943214921537E-4  | Oliver Hardy      |
| 5108249384479603329 | 3.534905433635076E-4  | Stan Laurel       |
| 2217693998232186748 | 3.5033153856543363E-4 | John Wayne        |
| 1521869729662604452 | 3.266064860979765E-4  | Claudette Colbert |
| 7570309037436615508 | 3.187072825866198E-4  | William Garwood   |
| 4463808826027376555 | 3.147493787789601E-4  | Charlie Chaplin   |
| 1130732531415155834 | 2.7138550552431096E-4 | Harry von Meter   |
| 6706176589560490413 | 2.5675796106955844E-4 | Jackie Chan       |
| 3000787937460459606 | 2.336318740626907E-4  | Vincent Price     |
| 7730444284418262623 | 2.2997707687822115E-4 | Joan Crawford     |
+-----------------------------------------------------------------+
$8 ==> PgqlResultSetImpl[graph=lmdb,numResults=10]

opg-jshell-rdbms>

Spatial and Graph

Options for Visualizing Spatial Data in Oracle (Part 2)

Overview

This blog discusses more options for visualizing your Oracle spatial data. Whether you're a business analyst working with BI dashboards or Excel files who wants to add map views quickly without coding, or an analyst or developer who wants easy access to spatial operations, there's an option for you. These include Oracle Analytics Cloud and Oracle Spatial Studio.

Oracle Analytics Cloud

For users of business intelligence dashboards, Oracle Analytics Cloud (OAC) has built-in mapping features. Spatial visualization in Oracle Analytics Cloud can be accessed via the Answers Dashboard. The Spatial Map Visualization Component is integrated with OAC and allows you to link your geospatial data with OAC data. This requires some expertise in OAC in order to create and configure your map layers. Useful map features available in OAC include:

- Associating a map layer with a data column directly in the data source definition, so every map using that column (such as a city column) automatically uses the correct layer for each map visualization included in your project (available in OAC 5.5)
- The ability to add your own custom map layers
- Creating clusters when mapping large sets of point data, to more easily visualize the data on a map
- Creating heatmap layers on a map visualization to identify the density or concentration of point values or metric values associated with the points, such as identifying the high-rent areas in a specific part of a city

More information on creating maps and visualizing spatial data in OAC can be found here: Visualizing and Building Reports in OAC. In addition, there is a great OAC Blog.

Oracle Spatial Studio

Spatial Studio is a self-service web application for spatial analytics and mapping. It can be used to visualize raw spatial tables in the database, but it is even more useful for visualizing the results of your spatial analyses. It's designed for users with no spatial or GIS knowledge and provides access to a wide range of spatial operations which are typically hidden behind complex SQL syntax. Everything is done through mouse clicks and menu options.

Many tasks and daily operations faced by users are simplified with Spatial Studio. It provides a web interface for loading your geospatial data into Oracle Database: your source can be shapefiles, Excel spreadsheets, or GeoJSON files, and Spatial Studio prepares the data for spatial analysis. You can then easily share the results of your spatial analyses and visualizations with others.

Spatial Studio also has a developer API. When you build a spatial analysis using the menu options, you can see the underlying SQL to understand what is happening behind the scenes. This allows you to take the underlying SQL and extend it if you want to build more complex applications.

More information can be found here: Spatial Studio

Customer example: oracle.com AnD'20 site, Category 5 Analytics: How OUTFRONT Media Maps Hurricane Risk (pdf), Derek Hayden, OUTFRONT Media and Tim Vlamis, Vlamis Software Solutions
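To make the "underlying SQL" point concrete, here is a sketch of the kind of spatial query such a within-distance analysis boils down to; the CUSTOMERS and STORES tables and their GEOM columns are hypothetical, not taken from the product.

-- Hypothetical example: customers within 2 km of any store
SELECT c.name AS customer, s.name AS store
FROM customers c, stores s
WHERE sdo_within_distance(c.geom, s.geom, 'distance=2 unit=km') = 'TRUE';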

Spatial Features

Validating and Rectifying Spatial Data in the Oracle Database

It always amazes me how many of the complaints about unexpected results or errors in spatial queries we hear about can be traced back to invalid geometries. In our documentation, should you ever bother to read it, we are actually very clear about this:

"You should validate all geometry data, and fix any validation errors, before performing any spatial operations on the data. The recommended procedure for loading and validating spatial data is as follows:

1. Load the data, using a method described in Bulk Loading or Transactional Insert Operations Using SQL.
2. Use the SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT function or the SDO_GEOM.VALIDATE_LAYER_WITH_CONTEXT procedure on all spatial data loaded into the database.
3. For any geometries with the wrong orientation or an invalid ETYPE or GTYPE value, use SDO_MIGRATE.TO_CURRENT on these invalid geometries to fix them.
4. For any geometries that are invalid for other reasons, use SDO_UTIL.RECTIFY_GEOMETRY to fix these geometries."

The need to eliminate invalid geometries is probably immediately obvious. If you have, say, a self-crossing polygon, how would you know what is inside and what is outside? What makes this trickier is that across the industry, tools and solutions have varying levels of strictness when it comes to tolerating errors in spatial data. This means that an incorrect geometry may have been loaded into a GIS tool without problems, but once it is saved to the database, it could raise errors when it is used in a subsequent query. Moreover, the Oracle Database has become less forgiving from one release to the next in this regard, so an invalid geometry that went unnoticed in 11gR2 could cause issues in 19c.

So, how can errors in SDO_GEOMETRY objects be identified and subsequently fixed? The simplest approach for a table of geometries is to use the procedure below, which is based on the SDO_UTIL.RECTIFY_GEOMETRY() function. This function tries hard to correct the most common errors such as duplicate points, polygon orientation errors, polygon construction errors, and so on. However, any error may in turn hide other errors, which the function is not designed to correct. Here is what happens behind the scenes:

1. The function validates the geometry.
2. If correct, it returns it unchanged.
3. If it detects one of the known errors, it tries to correct it.
4. If it detects any uncorrectable error, it fails with an exception.
5. It repeats the process until there is no more error or it finds an uncorrectable error.

Therefore, there are two possible outcomes: either a corrected, valid geometry, or an exception. This means you need to catch the exception to detect which of the shapes are unrecoverable. The entire procedure could look like this:

declare
  -- Declare a custom exception for uncorrectable geometries
  -- "ORA-13199: the given geometry cannot be rectified"
  cannot_rectify exception;
  pragma exception_init(cannot_rectify, -13199);
  v_geometry_fixed sdo_geometry;
begin
  -- Process the invalid geometries
  for e in (
    select rowid, shape
    from my_geom_table
    where sdo_geom.validate_geometry_with_context(shape, 0.05) != 'TRUE'
  )
  loop
    -- Try and rectify the geometry.
    -- Throws an exception if it cannot be corrected.
    begin
      v_geometry_fixed := sdo_util.rectify_geometry(e.shape, 0.05);
    exception
      when cannot_rectify then
        v_geometry_fixed := null;
    end;
    if v_geometry_fixed is not null then
      -- Update the base table with the rectified geometry
      update my_geom_table
      set shape = v_geometry_fixed
      where rowid = e.rowid;
      dbms_output.put_line('Successfully corrected geometry rowid='||e.rowid);
    else
      dbms_output.put_line('*** Unable to correct geometry rowid='||e.rowid);
    end if;
    commit;
  end loop;
end;
/

Any shapes that are left uncorrected at this point need further examination, of course.

A word about tolerance is necessary here. In the above example, we specify 0.05 as the tolerance, i.e. 5 cm. You should really use the actual tolerance of your data, which you will find in the metadata of your table (USER_SDO_GEOM_METADATA). Using the correct tolerance is important for both validation and rectification, and it must be the same for both. Errors may exist at certain tolerance levels but not at others. For example, the above tolerance (5 cm) means that two adjacent points which are 3 cm apart will be diagnosed as "duplicate points". Validating at a finer tolerance (say 1 cm) will not report any such error.
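For large tables, it can also be convenient to materialize the validation results up front with SDO_GEOM.VALIDATE_LAYER_WITH_CONTEXT, which writes one row per invalid geometry into a results table. A minimal sketch, assuming the same MY_GEOM_TABLE and SHAPE column as above:

-- Results table: the procedure expects these two columns
create table val_results (
  sdo_rowid ROWID,
  result    VARCHAR2(2000)
);

-- Validate the whole layer; invalid rows land in VAL_RESULTS
begin
  sdo_geom.validate_layer_with_context(
    'MY_GEOM_TABLE',   -- table to validate
    'SHAPE',           -- geometry column
    'VAL_RESULTS');    -- results table
end;
/

-- Inspect what was flagged, then rectify as shown above
select * from val_results;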

Spatial Features

Performing spatial analyses on Latitude, Longitude data in Oracle Database

Many applications use GPS coordinates and other location information that is published as a pair of Longitude and Latitude or X and Y values. A frequent question of late on various forums has been: "We have the locations of our sites in a table with Latitude and Longitude as separate columns. How do we find sites that are, say, within 10 miles of a given site?". This post shows how to do that, and other spatial queries, using function-based indexes and spatial operators in an Oracle database.

First, let's create a table and insert some data into it.

Note: The following SQL statements were executed using SQLcl.

-- create a table of PRESCHOOLS
create table preschools (
  id        number,
  name      varchar2(64),
  longitude number,
  latitude  number
);

-- insert some sample data
insert into preschools values(1,  'CS Montessori',  103.871862, 1.35067551);
insert into preschools values(2,  'Dayspring',      103.898939, 1.41490187);
insert into preschools values(3,  'Divinity',       103.890186, 1.33249633);
insert into preschools values(4,  'Dreamkids',      103.903339, 1.30438467);
insert into preschools values(5,  'Elite Learning', 103.697497, 1.34435137);
insert into preschools values(6,  'Evelyns',        103.701169, 1.3461552);
insert into preschools values(7,  'Grace House',    103.816386, 1.29430568);
insert into preschools values(8,  'Gracefields',    103.824479, 1.30058938);
insert into preschools values(9,  'Hanis',          103.93705,  1.37406387);
insert into preschools values(10, 'Iroha',          103.880829, 1.30800945);
insert into preschools values(11, 'Jenius',         103.883566, 1.37757028);
insert into preschools values(12, 'ACES',           103.953599, 1.37714073);
insert into preschools values(13, 'Artskidz',       103.831883, 1.37880334);
insert into preschools values(14, 'Banyan Tree',    103.87539,  1.37792345);
insert into preschools values(15, 'Brainy Bunch',   103.891524, 1.32313886);
insert into preschools values(16, 'Buttercups',     103.788347, 1.30535122);
insert into preschools values(17, 'Callidus',       103.845963, 1.35760846);
insert into preschools values(18, 'Bethel',         103.886673, 1.32176645);
insert into preschools values(19, 'Bibinogs',       103.810676, 1.32313886);
insert into preschools values(20, 'Barker Road',    103.834489, 1.31918222);

-- commit
commit;

Now we can perform the necessary steps to:

1. Define a function which returns an SDO_GEOMETRY instance constructed using the Longitude and Latitude values for a preschool.
2. Insert the metadata needed for using spatial operators and indexes.
3. Create a function-based index.
4. Execute spatial queries.

Step 1 is to define a function that accepts a preschool name and returns an SDO_GEOMETRY instance which is the location of that school.

-- define the sch_name_to_geom(name) function
create or replace function sch_name_to_geom(aName in varchar2)
  return sdo_geometry deterministic
is
  lon number;
  lat number;
begin
  if aName is not null then
    begin
      execute immediate
        'select longitude, latitude from preschools where name=:1'
        into lon, lat using aName;
      return sdo_geometry(2001, 4326, sdo_point_type(lon, lat, NULL), NULL, NULL);
    exception when OTHERS then
      return NULL;
    end;
  else
    return null;
  end if;
end;
/
show errors;

Step 2 is to insert the necessary metadata into the USER_SDO_GEOM_METADATA view. This is needed for creating spatial indexes and using the spatial operators in queries.

insert into user_sdo_geom_metadata values(
  'PRESCHOOLS',
  'STUDIO_HOL.SCH_NAME_TO_GEOM(name)',
  sdo_dim_array(
    sdo_dim_element('Longitude', -180, 180, 0.5),
    sdo_dim_element('Latitude',   -90,  90, 0.5)),
  4326);
commit;

-- NOTE: This example was done in the STUDIO_HOL schema. Replace that with your
-- specific schema/user, i.e. '<your_schema>.SCH_NAME_TO_GEOM'

Step 3 is to create a function-based spatial index using SCH_NAME_TO_GEOM().

create index sch2geom_sidx on preschools(sch_name_to_geom(name))
  indextype is mdsys.spatial_index_v2;

-- query the spatial index metadata view to verify it was created
select table_name, index_name from user_sdo_index_info;

TABLE_NAME    INDEX_NAME
___________   ______________
PRESCHOOLS    SCH2GEOM_SIDX

Now we can perform some spatial queries (step 4).

Find preschools that are within 5 km of the 'Iroha' school:

select name, id
from preschools
where sdo_within_distance(sch_name_to_geom(name), sch_name_to_geom('Iroha'),
        'distance=5 unit=km')='TRUE'
  and name <> 'Iroha';

NAME             ID
______________   ___
Dreamkids          4
Brainy Bunch      15
Divinity           3
Bethel            18
CS Montessori      1

-- Return the distance in km too
select name, id,
       sdo_geom.sdo_distance(sch_name_to_geom(name), sch_name_to_geom('Iroha'),
         0.5, 'unit=km') DistInKM
from preschools
where sdo_within_distance(sch_name_to_geom(name), sch_name_to_geom('Iroha'),
        'distance=5 unit=km')='TRUE'
  and name <> 'Iroha';

NAME             ID   DISTINKM
______________   ___  __________________
Dreamkids          4  2.53701589337829
Brainy Bunch      15  2.0531435455936
Divinity           3  2.90097770539312
Bethel            18  1.65438174282452
CS Montessori      1  4.82218375759093

Find the 4 nearest schools to the 'Barker Road' school:

select name
from preschools
where sdo_nn(sch_name_to_geom(name), sch_name_to_geom('Barker Road'),
        'sdo_num_res=5 sdo_batch_size=10')='TRUE'
  and name <> 'Barker Road';

NAME
______________
Gracefields
Bibinogs
Grace House
Callidus

Note that 5 preschools are returned by the sdo_nn() operator, but we omit the 'Barker Road' school itself, so the final result is the 4 nearest preschools.

Now let's also get the (as the crow flies) distance to these preschools:

select name, sdo_nn_distance(1) distInKM
from preschools
where sdo_nn(sch_name_to_geom(name), sch_name_to_geom('Barker Road'),
        'sdo_num_res=5 sdo_batch_size=10 unit=km', 1)='TRUE'
  and name <> 'Barker Road';

NAME            DISTINKM
______________  __________________
Gracefields     2.3383253177357
Bibinogs        2.68602143954703
Grace House     3.4096128337165
Callidus        4.43670650377725

And there we have it.
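Finally, returning to the question that opened this post (sites within 10 miles of a given site): the same operator accepts miles as the unit. A minimal sketch against the same table:

-- preschools within 10 miles of 'Iroha'
select name, id
from preschools
where sdo_within_distance(sch_name_to_geom(name), sch_name_to_geom('Iroha'),
        'distance=10 unit=mile')='TRUE'
  and name <> 'Iroha';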

Options for Visualizing Spatial Data in Oracle

Part One (Part Two coming in April)

Overview

You may be aware of the spatial data types and analysis in Oracle Database... but did you know about the many development tools for creating maps to visualize your data? Do you have customer, sales territory, or asset information in your database that you and your end users want to visualize and interact with on a map? Whether you're a seasoned database user or developer, or are just starting to look at using Oracle's spatial features, you have several choices of tools to create the perfect map to fit your needs. This blog discusses Oracle options for visualizing your spatial data in Oracle Database.

Visualization means you see your geospatial data displayed on a map. There are different options you can use to visualize your data, such as Oracle products, tools, and APIs. In addition, you can use open source tools and APIs. Although all of these options are different, the basic architecture and data flow is the same for each one.

Oracle Visualization Options

Oracle provides products, tools, and APIs for visualizing spatial data stored in Oracle Database. These include Map Builder, the Spatial Map Visualization Component (SMVC), Oracle Analytics Cloud, Spatial Studio, and map visualization APIs. (Note that the Spatial Map Visualization Component was formerly known as MapViewer.)

Map Tools and Products

Map Builder

Map Builder is a standalone Java-based desktop application. The latest version is 20c and supports Oracle Autonomous Database wallet connections. It has a simple UI for you to pan, zoom, or identify features on a map. It utilizes public Oracle spatial Java libraries such as SDOAPI, which can transform database geometries or GeoRaster objects into corresponding objects in Java. It is a useful tool for quickly visualizing any kind of geospatial data stored in your Oracle spatial database tables. You can run simple and ad hoc SQL queries (via JDBC) and visualize the spatial data result sets inside Map Builder. It is typically used as a companion tool to the Spatial Map Visualization Component, which is discussed next.

Spatial Map Visualization Component

SMVC is a Java middleware component that is deployed and run inside your Java EE containers, including WebLogic Server and Tomcat. It is used by many customers and is integrated into a number of Oracle products, such as Oracle Analytics Cloud. You can think of SMVC as a single, large server that is running in your WebLogic Server instance. It provides enterprise-level mapping and data services. It can be used as a raster or image tile server and can generate or stream a high volume of vector data on the fly. SMVC can also generate GeoJSON data out of your tables or queries on the fly. You can use SMVC to publish your geospatial data using the OGC-standard Web Map Service (WMS) and Web Map Tile Service (WMTS). Depending on your application needs, you may be using one or more of these services.

Raster/Image Tile Server

The Raster/Image Tile Server generates and caches map tile images. Tiles are displayed using the Oracle Maps JavaScript API (more below). Tiles can also be displayed using any open source mapping API (Leaflet, OpenLayers, etc.) when the tiling scheme conforms to the standard World Mercator tile scheme.

GeoJSON Data Server

The GeoJSON Data Server generates GeoJSON on the fly. GeoJSON is the spatial flavor of JSON, a standard, lightweight data interchange format popular for web applications. The server supports simplifying geometries while generating GeoJSON. GeoJSON data can be displayed using the Oracle Maps JS API or any open source mapping API (Leaflet, OpenLayers, etc.).

Vector Tile Server

The Vector Tile Server generates vector tiles on the fly. Each tile includes a single SMVC-defined layer/theme only. Vector tiles are a format for encoding map and business data in a single file. This is a very compact binary format that enables large volumes of geometry data to be transmitted to the client. Web map applications using vector tiles are versatile (since attributes are included along with the geometries, which users can view with tooltips) and highly performant. Vector tiles can be displayed using open source mapping APIs (OpenLayers, the Mapbox GL JS API, etc.).

Map Visualization APIs

Java APIs

In addition to the Spatial ready-to-use tools, Oracle provides map visualization APIs. If you are a Java developer and you want to develop your own Java-based applications or services, then you probably want to start with these Java packages. They allow you to take your native spatial data types stored in the database (geometries, GeoRaster, network model, and topology) and bring them over to Java, where you can manipulate and process that data. These Java packages aren't specifically for visualization; they are used to process your business and geospatial objects, or simply to display them. Visualization is typically done using the standard Java2D graphics API. See the Oracle Spatial Java API.

Oracle Maps JavaScript API

The Oracle Maps JavaScript API is also known as the V2 API, or HTML5 API. It is designed to work with the Spatial Map Visualization Component as the backend. It can display both image tiles and GeoJSON data and provides a lot of interactivity. The V2 API can also be used in standalone web applications; you don't have to use SMVC. For example, you can load your spatial data from a GeoJSON service and use the V2 API to create your web map application. The V2 API has some advanced geometry-editing functions, including splitting and merging polygons.

Map Visualization downloads: https://www.oracle.com/database/technologies/spatial-studio/spatial-graph-map-vis-downloads.html

Map Visualization with Oracle Spatial and Graph, a presentation from Analytics and Data Summit 2020

Oracle Spatial & Graph Map Visualization Developer's Guide

Visualizing Spatial Data (Part Two) will include easy-to-use, low-code options such as Oracle Analytics Cloud, Spatial Studio, and open source tools and APIs. Stay tuned.


Visualizing Oracle Spatial Map Visualization Component (formerly MapViewer) themes using the Open Source Mapbox GL JS API

This guest post is by Liujian Qian.

Overview

Since the release of MapViewer 12.2.1.3.0 and Oracle Spatial Map Visualization Component 19.1.0 (which contains all the essential functions of Oracle FMW MapViewer), there has been a lesser-known but powerful feature called the “Vector Tile Service”. This service can be used to generate and stream vector tiles that contain data for a single pre-defined SMVC (Spatial Map Visualization Component) theme on the fly. Vector tiles, in case you are wondering, are a compact binary data format where each vector tile (divided geographically according to the same popular World Mercator tiling scheme used by many world-map services) can contain geometry and attribute data for one or more map layers. For more details about vector tiles, check out the Mapbox blog here. Note that SMVC currently does not cache previously generated vector tiles, unlike image map tiles, which are typically cached to disk.

So how do you visualize large volumes of Oracle Spatial managed spatial data using vector tiles? Currently SMVC does not provide any tool out of the box for visualizing the vector tiles it generates, but there are a number of open source API options, including the OpenLayers API and the Mapbox GL JS API. In this blog we will show an example of using the Mapbox API, but the same concept applies when using the OpenLayers API.

How to run the example

The above screenshot illustrates what you will see when you run this demo. Note that the demo has been tested with SMVC version 19.1. All the files and source code for the demo are located here: https://github.com/ljqian/oraclemaps. To run this example, you must already have SMVC 19.1 or MapViewer 12.2.1.3+ installed and running, with the “mvdemo” sample data schema defined as an SMVC data source named “mvdemo”. If you don’t have this data schema or data source defined, you can still run the demo by changing the name of the data source and the theme, as well as the properties used in the example for showing a popup. Also, it is best that your own theme contains point-type geometries, which are what the demo is designed to show. It is trivial to modify the demo code so that other types of geometries, such as linestrings or polygons, can be displayed. The vector tiles generated by SMVC can contain any type of geometry.

If your SMVC or MapViewer server is up and running, the next step is to copy the demo files to your application’s web server document folder so you can open the demo.html page in a browser. The demo code itself is entirely embedded in the demo.html file, with detailed comments to help you understand the code. If you are already familiar with the Mapbox GL JS API in general, then the demo should be fairly easy to understand.

Stylesheet for the background world map

The demo starts by creating a new Mapbox Map instance. Its background map uses the image map tiles hosted on Oracle Maps Cloud Service. The image tiles are part of an SMVC tile layer. Note that Oracle Maps Cloud Service can only be used for demo purposes and cannot be incorporated into an end-user application. In the demo’s map-styles sub-folder you will find the JSON stylesheet for the world map as well as the accompanying sprites and fonts, which are not used in this demo but might be useful down the road in a more sophisticated application.
If you want to use your own SMVC-defined tile layer as your application map’s background, just make sure the tile layer is created using the Google-like Web Mercator tiling scheme, as any other tile scheme will not work with Mapbox GL JS. To use your own SMVC-defined tile layer as a background, you will need to change the JSON stylesheet so that it references tiles generated from your local SMVC instance instead of the one running on Oracle Maps Cloud Service. Do note that while this demo displays a background map that is based on an SMVC (image) tile layer, you can easily switch to a Mapbox- or OpenStreetMap-hosted background (world) map, which can be either image or vector tile based. If you already have a subscription account with Mapbox and custom maps, then those are likely what you will be using as background maps.

Displaying a Theme and Showing a Pop-up

The rest of the demo code is clearly documented as inline comments and should be self-explanatory. The key steps (see the sketch at the end of this post) are:

1. Defining a Mapbox vector tile source. In this case it includes a single URL for fetching the SMVC-generated vector tiles for the given SMVC theme “CUSTOMERS”.
2. Defining a Mapbox map layer (out of the vector tile source), including its rendering details or “paint”. Since our vector tile source contains only one layer, this is pretty straightforward as well.
3. Adding a “click” listener so that when you click on a customer feature on the map, a pop-up is displayed. In the event listener we simply add all the attributes for the clicked customer to a very simple pop-up. Note that the attribute names correspond to the names of the “Info” columns defined in your SMVC theme.

We hope this simple demo provides enough information to get you started developing your own SMVC vector tile applications using the Mapbox GL JS API.

Final Note

While any given vector tile generated by SMVC can only contain one pre-defined theme, that does not mean your application's map visualization is limited to showing only one such theme at a time. Using the Mapbox API (or OpenLayers) you are free to add as many data layers as needed, as long as you define the source for each and add the theme contained in it as a layer to the Mapbox Map object. The below screenshot shows such a map, where three different SMVC themes (customers, sales territories, and inter-state highways) are displayed on top of a single background map. Here the "customers" layer is displayed using the Mapbox API's heatmap rendering type, while the territories and highways are displayed using the fill and line rendering types. At run time, for each area on the map that falls inside a tile's bounding box, 4 separate tile requests will be made: one to get the background map's tile, and one each for a vector tile containing the customers, sales territories, and highways data.
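For reference, here is a minimal sketch (in TypeScript) of the three steps described above, using the Mapbox GL JS API. The vector tile URL is a hypothetical placeholder for your SMVC instance’s vector tile endpoint, and the stylesheet path, source-layer, and attribute names assume the “CUSTOMERS” theme from the demo; see demo.html in the GitHub repository for the exact working code.

```typescript
import mapboxgl from "mapbox-gl"; // assumes mapbox-gl and its typings are installed
// mapboxgl.accessToken = "<token>"; // needed only if you use Mapbox-hosted resources

const map = new mapboxgl.Map({
  container: "map",                   // id of the page's map <div>
  style: "map-styles/world-map.json", // hypothetical path to the background stylesheet
  center: [-122.4, 37.7],
  zoom: 9,
});

map.on("load", () => {
  // Step 1: define a vector tile source pointing at the SMVC vector tile
  // service (hypothetical URL pattern; check the demo/SMVC docs for the real one).
  map.addSource("customers", {
    type: "vector",
    tiles: ["http://localhost:8080/mapviewer/vt/mvdemo/CUSTOMERS/{z}/{x}/{y}.mvt"],
  });

  // Step 2: define a layer that renders the single theme contained in the source.
  map.addLayer({
    id: "customers-points",
    type: "circle",
    source: "customers",
    "source-layer": "CUSTOMERS", // layer name inside each vector tile
    paint: { "circle-radius": 5, "circle-color": "#c74634" },
  });

  // Step 3: show a simple pop-up with all attributes of the clicked feature.
  map.on("click", "customers-points", (e) => {
    const props = e.features?.[0]?.properties ?? {};
    new mapboxgl.Popup()
      .setLngLat(e.lngLat)
      .setHTML(`<pre>${JSON.stringify(props, null, 2)}</pre>`)
      .addTo(map);
  });
});
```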


Spatial Features

Deploying Oracle Spatial Map Visualization Component 19.3 to Tomcat 9

Note: This guest post is by Liujian Qian.

This post describes how to deploy the Oracle Spatial Map Visualization Component (map viz) that is shipped with Oracle Database 19.3 to a Tomcat 9 instance. The Map Viz Component ships as a zip file, mapviewer.ear.zip, which can be found in an Oracle Database 19.3 instance’s <oracle_home>/md/jlib directory. Note that the bundled mapviewer.ear.zip file is intended for and tested with Oracle WebLogic Server 12.2.1.3.0+ only. It needs additional jar files and edits to web.xml in order to work with other containers such as Tomcat 9. Also note that, for simplicity, it’s best to define MapViewer data sources directly in its config file instead of trying to define them in Tomcat. The detailed steps for Tomcat 9 are given below.

Preparing the WAR file

Step 1: Unzip the file mapviewer.ear.zip into a folder named mapviewer.ear.

Step 2: Locate the web.war sub-folder in the mapviewer.ear folder. Rename this sub-folder to “mapviewer”.

Step 3: Use a text editor to open the web.xml inside the “mapviewer/WEB-INF” folder. Locate the login-config, which looks like this:

  <login-config>
    <auth-method>CLIENT-CERT,FORM</auth-method>
    <form-login-config>
      <form-login-page>/login.jsp</form-login-page>
      <form-error-page>/login_failed.jsp</form-error-page>
    </form-login-config>
  </login-config>

Remove the “CLIENT-CERT” and the comma, leaving only “FORM” as the content of the <auth-method> tag, to make it <auth-method>FORM</auth-method>. Save the changes.

Step 4: Download common dependent third-party jar files that are included in WebLogic. The mapviewer.ear shipped with Oracle Database 19.3 does not bundle a number of required JAR files that exist in a WebLogic server environment. So, in order for it to work in Tomcat, manually download those missing JAR files and add them to the “mapviewer/WEB-INF/lib” folder. There are three sets of JAR files (Jersey, Jackson parser, JDBC) that need to be downloaded.

Java REST API Jersey2:
Download URL: https://repo1.maven.org/maven2/org/glassfish/jersey/bundles/jaxrs-ri/2.25.1/jaxrs-ri-2.25.1.zip
Unzip, then copy each jar file in each sub-folder (api, ext, lib) and paste into the “mapviewer/WEB-INF/lib” folder.

Jersey2 companion jars:
Download URLs:
https://repo1.maven.org/maven2/org/glassfish/jersey/media/jersey-media-multipart/2.13/jersey-media-multipart-2.13.jar
https://repo1.maven.org/maven2/org/jvnet/mimepull/mimepull/1.9.12/mimepull-1.9.12.jar
These two jars should also be copied into the “mapviewer/WEB-INF/lib” folder.

Jackson JSON parser:
Download URLs:
https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.9.10/jackson-core-2.9.10.jar
https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.9.10.2/jackson-databind-2.9.10.2.jar
https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-annotations/2.9.10/jackson-annotations-2.9.10.jar
Copy and paste these three JAR files into the “mapviewer/WEB-INF/lib” folder.

Oracle JDBC Driver:
Finally, go to the Oracle Database 19c JDBC Driver & UCP Downloads page (https://www.oracle.com/database/technologies/appdev/jdbc-ucp-19c-downloads.html), and download the 19.3 JDBC Thin driver (ojdbc8.jar), the Universal Connection Pool (ucp.jar), and the companion jars:
Download URL: https://download.oracle.com/otn/utilities_drivers/jdbc/193/ojdbc8-full.tar.gz
Unzip or untar the downloaded archive, then copy and paste all the files with the extension “.jar”, in each sub-folder, to the “mapviewer/WEB-INF/lib” folder.
The next step is deploying the mapviewer.ear to Tomcat. But first we need to make sure Tomcat is set up to allow users to log in as a MapViewer administrator. This is typically done by editing the Tomcat server config file as shown below.

Preparing a Tomcat user as the Map Viz Component administrator

Locate and open the file <apache-tomcat-home>/conf/tomcat-users.xml in a text editor. Add these two lines before the </tomcat-users> tag:

<role rolename="map_admin_role"/>
<user username="admin" password="<chosen_admin_pwd>" roles="map_admin_role"/>

This creates a new Tomcat role named “map_admin_role”, and a Tomcat user named “admin” with the newly created role (needed by the Map Viz Component at runtime to recognize a logged-in user as a MapViewer administrator). If necessary, designate other existing Tomcat users as Map Viz Component admins by adding “map_admin_role” to that user’s roles list.

Note: A user pointed out that an additional set of changes needs to be made for Tomcat. Add the following at the end of the catalina.properties file:

javax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl
javax.xml.transform.TransformerFactory=com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl
javax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl
javax.xml.datatype.DatatypeFactory=com.sun.org.apache.xerces.internal.jaxp.datatype.DatatypeFactoryImpl

That got rid of an "Invalid CSRF token" error that they experienced. You may need to restart Tomcat for the above to take effect. Now we are finally ready to deploy and run the Map Viz Component in Tomcat 9.

Deploying and Testing the Map Viz Component

To deploy the Map Viz Component, simply copy the folder “mapviewer” and paste it inside Tomcat’s web apps folder: <apache-tomcat-home>/webapps/. Your folder structure should now look like the screenshot below. Once copied, Tomcat should automatically notice the new web application and start it up. Alternatively, you may need to log into Tomcat’s admin web application and manually start the newly deployed application “/mapviewer”. To test that the Map Viz Component is working, simply access its URL from a browser (see the sketch below for a scripted check):

http://localhost:8080/mapviewer

Log in as a Tomcat user with the “map_admin_role” role when prompted.
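If you prefer a scripted smoke test over a browser check, the following is a minimal sketch in TypeScript (runnable under Node 18+, where fetch is built in) that simply verifies the deployed web application responds. The URL assumes the default Tomcat port and context path used above.

```typescript
// Minimal smoke test: confirm the mapviewer web app is up and responding.
// Assumes Node 18+ (global fetch) and the default URL from the steps above.
const url = "http://localhost:8080/mapviewer";

async function checkDeployment(): Promise<void> {
  try {
    const resp = await fetch(url);
    // Any HTTP response (even a login redirect) means the app is deployed;
    // a connection error means Tomcat or the application is not running.
    console.log(`${url} responded with HTTP ${resp.status}`);
  } catch (err) {
    console.error(`Could not reach ${url}:`, err);
    process.exitCode = 1;
  }
}

checkDeployment();
```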


Graph Features

RDF# - Extending RDF to Support Named Triples

Named Graph component (in RDF quads) can be used to name triples, but what if we want to name both graphs and triples? Letting the predicate carry the triple name is one way to do it!

Oracle Database has provided support for the Resource Description Framework (RDF) since 2005, evolving with the standard over time. The early versions of W3C RDF – published in 1999 and 2004 – supported RDF triples only. The latest version of RDF – published in 2014 – allows RDF quads as well. The fourth component in a quad is usually referred to as the named graph, because of its envisioned use for partitioning the content of an RDF graph into multiple named subsets. The RDF 1.1 Recommendation, however, noted that “RDF does not place any formal restrictions on what resource the graph name may denote... .”

In a recent blog article [2], we showed that the fourth component of a quad can be used to name a triple. This approach is well known to many in the RDF community. We then went on to explain how the ability to name RDF triples (via “quadification”) allows maintaining backward compatibility (specifically, pre-existing queries remain valid) even in the face of complex additions to an RDF graph. The question that arises, however, is the following: if we use the fourth component of an RDF quad for naming the triple contained in the quad, what happens if we also need to make that triple belong to a named graph (i.e., a named subset of the RDF graph)?

Note: This question is not relevant in the context of a comparison with Property Graph or Labeled Property Graph, because those graph models do not support partitioning a graph into arbitrary named subsets. In a purely RDF context, however, providing the ability to name the triples ideally should not preclude the ability to create named subsets of a graph.

Motivating Example: Consider the rather mundane everyday content shown in the first row and its evolution to accommodate the addition of the somewhat sensational assertion in the second row of the table below:

Type of Addition to a Graph (Cumulative) | Content to model as graph
Vertex, Edge, Vertex-Property | John donated to Top University. Mary, a child of John, got admitted to Top University.
Edge as Endpoint of an Edge | … John’s donation helped Mary’s admission.

RDF Graph: Initially, three edges (triples) – with labels (predicates) “donatedTo”, “admittedTo”, and “childOf” – make up a named graph :factGraph to represent the first-row data in the above table. To reflect the changed data in the second row, the RDF graph is augmented by creating a new named graph :gossipGraph to store a new assertion that the “donatedTo” triple helped the “admittedTo” triple. The “before and after” diagram below shows how this can be accomplished if we had the ability to assign names to the relevant triples, even when they belong to named graphs. The nodes with dotted outlines shown adjacent to edges represent the names assigned to the corresponding triples. (The diagram does not explicitly show the named graphs :factGraph and :gossipGraph.)

RDF Data: The table below shows the relevant “before and after” quads (in TriG serialization) with the proposed syntax extension for naming RDF triples described below. It is worth noting that each named triple is fully self-contained, that is, each named triple includes its name inline. This makes it fully aligned with line-oriented serializations such as N-Triples and N-Quads and allows creating duplicate triples (quads) with identical subject, predicate, object, (graph,) but with distinct names.
The ability to assign a custom name to each triple (explicit naming) further enables assigning the same name to a group of triples, not necessarily just one, if needed.

Original RDF quads:

graph :factGraph {
  :v1    :donatedTo    :v2  .
  :v3    :admittedTo   :v2  .
  :v3    :childOf      :v1  .
}

If we could name triples, we could represent “donation helped admission”:

graph :factGraph {
  :v1    [:v4]:donatedTo    :v2  .
  :v3    [:v5]:admittedTo   :v2  .
  :v3    :childOf           :v1  .
}
graph :gossipGraph {
  :v4    :helped    :v5 .
}

RDF# – A Proposal to Extend RDF to Support Named Triples

This article presents a proposal for extending RDF to support named triples without affecting its support for named graphs. The extension, which would maintain the validity of any existing RDF data (in all serializations such as N-Triples, Turtle, N-Quads, TriG) and SPARQL queries, can be stated simply as follows: allow the predicate component of an RDF triple to optionally hold the name of the triple. For a SPARQL query, this applies to the predicate component of a triple-pattern.

Syntax:

RDF/SPARQL component: predicate
Grammar (extended): ( ‘[’ ( IRI | blankNodeLabel | var )? ‘]’ )? predicate
Examples:
  <http://ex/donatedTo>
  [<http://ex/donTriple>]<http://ex/donatedTo>
  [_:donTriple]<http://ex/donatedTo>
  []<http://ex/donatedTo>
  [?tname]<http://ex/donatedTo>
  [?tname]?pred

Example 1: RDF data with named or unnamed triples:

Unnamed triple, unnamed graph (shown as N-Triples):
  :v1  :donatedTo  :v2 .
Unnamed triple, named graph (shown as TriG):
  graph :factGraph {  :v1  :donatedTo  :v2  }
Named triple, unnamed graph:
  :v1  [:e12]:donatedTo  :v2 .
  :e12  :year    2010 .
  :e12  :amount  1000000 .
Named triple, named graph:
  graph :factGraph {  :v1  [:e12]:donatedTo  :v2 .
                      :e12  :year    2010 .
                      :e12  :amount  1000000  }

Example 2: Named or unnamed triple-pattern in a SPARQL query (to match RDF triples in a dataset):

Unnamed triple-pattern, default graph:
  ?x  :donatedTo  ?y .
Unnamed triple-pattern, named graph:
  graph :factGraph {  ?x  :donatedTo  ?y  }
Named triple-pattern, default graph:
  ?x  [?t]:donatedTo  ?y .
  ?t  :year    ?yr .
  ?t  :amount  ?amt .
Named triple-pattern, named graph:
  graph :factGraph {  ?x  [?t]:donatedTo  ?y .
                      ?t  :year    ?yr .
                      ?t  :amount  ?amt  }

Example 3: Turtle shortcuts for RDF data with named triples (each example below is independent):

Example 3a: v1 donated an amount of 1 million dollars to v2 in the year 2010.
Shortcut:
  [ :v1  [:e12]:donatedTo  :v2 ]  :year  2010 ;  :amount  1000000 .
...is shorthand for:
  :v1   [:e12]:donatedTo  :v2 .
  :e12  :year    2010 .
  :e12  :amount  1000000 .
Note: [:e12] could be eliminated (or replaced with []) to have an auto-generated name for the triple:
  [ :v1  :donatedTo  :v2 ]  :year  2010 ;  :amount  1000000 .

Example 3b: v1’s donation to v2 helped v3 get admitted to v2.
Shortcut:
  [ :v1  [:e12]:donatedTo  :v2 ]  :helped  [ :v3  [:e32]:admittedTo  :v2 ] .
...is shorthand for:
  :v1   [:e12]:donatedTo   :v2 .
  :v3   [:e32]:admittedTo  :v2 .
  :e12  :helped  :e32 .
Note: If auto-generated triple names are acceptable instead of :e12 and :e32, we could use the following:
  [ :v1  :donatedTo  :v2 ]  :helped  [ :v3  :admittedTo  :v2 ] .
Example 3c: v7 suspects that v1’s donation to v2 helped v3 get admitted to v2.
Shortcut:
  :v7  :suspects  [ [ :v1  [:e12]:donatedTo  :v2 ]  [:e1232]:helped  [ :v3  [:e32]:admittedTo  :v2 ] ] .
...is shorthand for:
  :v1   [:e12]:donatedTo   :v2 .
  :v3   [:e32]:admittedTo  :v2 .
  :e12  [:e1232]:helped    :e32 .
  :v7   :suspects  :e1232 .
Note: We could have auto-generated triple names instead of :e12, :e32, and :e1232, to rewrite as follows:
  :v7  :suspects  [ [ :v1  :donatedTo  :v2 ]  :helped  [ :v3  :admittedTo  :v2 ] ] .

Related Work

Our March 2014 EDBT paper [1] discussed, while staying within the capabilities of the W3C RDF 1.1 Recommendation, three approaches for representing property graphs in RDF: reification, named graphs, and subproperty (i.e., rdfs:subPropertyOf) based. Use of the fourth component, usually referred to as the “named graph”, for representing the names of triples was shown to achieve the goal and was proposed as the preferred approach. The disadvantage, although not relevant for comparison with Property Graph due to its lack of support for graph partitioning, was that the use of the “named graph” component for naming the triples inhibited its use for partitioning an RDF graph into named subsets. Unlike [1], this article focuses on a proposal, RDF#, for a minimal extension to RDF that allows naming a triple without making any use of the “named graph” component.

In [2] we showed that, by using the fourth component (“named graph”) in an RDF quad to name the triples (as suggested in [1]), any type of relationship allowed directly in property graphs and far more (such as edges with edges, edge-properties, and/or vertex-properties as endpoints) can be represented in RDF, and ongoing data changes in a graph can be managed in a backward-compatible way such that pre-existing queries continue to remain valid. In the Appendix section below, we show similar power for RDF#, by working out a complete example involving changing graph data.

A prominent RDF extension proposal, RDF*, first published in June 2014 [3], extends the RDF graph model by allowing use of an RDF triple (identified by specifying its subject, predicate, and object inside << … >>: for example, << :v1 :donatedTo :v2 >>) as the subject or object component of another triple. Nested use of this capability allows expressing compound facts such as that shown in Example 3c above as follows:

  :v7 :suspects << << :v1 :donatedTo :v2 >> :helped << :v3 :admittedTo :v2 >> >> .

RDF# too supports these, by allowing the user to assign a name to (i.e., do explicit naming of) a triple and then use that name as the subject or object of another triple, which itself can be assigned a name, and so on. RDF# allows implicit naming too, in case the to-be-named triple does not need to be referred to more than once. For example, as we showed above in Example 3c, the same compound fact may be expressed using implicit naming in RDF# as follows:

  :v7 :suspects [ [ :v1  :donatedTo  :v2 ]  :helped [ :v3  :admittedTo  :v2 ] ] .

The main difference between RDF* and RDF# becomes apparent when the same triple needs to be referred to more than once from arbitrary contexts. The option of using explicit naming of triples in RDF#, and the lack of it in RDF*, creates this difference, rooted in name-based equivalence vs. value-based equivalence.
Unlike RDF* and another approach, RDF+ [4], which use value-based equivalence, RDF# uses name-based equivalence, which allows the simultaneous presence (even within the same named graph, if applicable) of multiple distinctly named triples with the same subject-predicate-object value: for example, the following two value-identical triples with distinct (explicitly specified) names:

  :v1  [:e12]:donatedTo    :v2 .
  :v1  [:e12-2]:donatedTo  :v2 .

References

[1] Das, S., Srinivasan, J., Perry, M., Chong, E., Banerjee, J. A Tale of Two Graphs: Property Graphs as RDF in Oracle. In Proc. of the 17th International Conference on Extending Database Technology (Athens, Greece, March 24–28, 2014), EDBT’14, on OpenProceedings.org.
[2] Das, S. Modeling Evolving Data in Graphs: The Power of RDF Quads. Posted January 14, 2020. Online: https://blogs.oracle.com/oraclespatial/modeling-evolving-data-in-graphs%3a-the-power-of-rdf-quads
[3] Hartig, O., Thompson, B. Foundations of an Alternative Approach to Reification in RDF. CoRR, abs/1406.3399, June 2014. Latest revision online: http://arxiv.org/abs/1406.3399
[4] Schueler, B., Sizov, S., Staab, S., Tran, D. T. Querying for Meta Knowledge. In Proceedings of the 17th International Conference on World Wide Web (Beijing, China, April 21–25, 2008), WWW’08.

Appendix: A Complete Example

To illustrate representing and handling change in graph data using RDF#, we show below the steps involved in handling the cumulative content additions listed in the table below. The changes for each step are shown in red in the diagram.

# | Type of Addition to a Graph (Cumulative) | Content to model as graph
1 | Vertex, Edge, Vertex-Property | John, whose net worth is $1 billion, donated to Top University. Mary, a child of John, got admitted to Top University.
2 | Duplicate Edge | John … donated twice to Top University. …
3 | Edge-Property | John … donated twice to Top University, in the years 2010 and 2012, respectively. Mary … got admitted to Top University in 2011.
4 | Edge as Endpoint of an Edge | … Bob suspects that John’s 2010 donation helped Mary’s admission.
5 | Edge-Property as Endpoint of an Edge | John … donated twice to Top University, in the years 2010 and 2012 (the year 2012 was confirmed by Alex) …
6 | Vertex-Property as Endpoint of an Edge | John, whose net worth is $1 billion (according to Dan), …

Shorthand Notations (to make the descriptions below more objective and concise):

# | Notation | Vertex representing … (RDF terminology) | Vertex representing … (graph terminology)
1 | R-vertex | resource: <http://John> | vertex
2 | V-vertex | value: “John” | VALUE: not shown as a vertex in the diagrams (see below)
3 | T-vertex | triple: <http://John> <http://name> “John” | edge, edge-property, or vertex-property

# | Notation | Represents triple … (RDF terminology) | Represents … (graph terminology)
4 | RpR triple | < Resource, predicate, Resource > | edge: vertex to vertex
4 | RpV triple | < Resource, predicate, VALUE > | vertex-property: shown in a box attached to the vertex in the diagram
4 | RpT triple | < Resource, predicate, Triple > | edge: vertex to edge
4 | TpR triple | < Triple, predicate, Resource > | edge: edge to vertex
4 | TpV triple | < Triple, predicate, VALUE > | edge-property: shown in a box attached to the edge in the diagram
4 | TpT triple | < Triple, predicate, Triple > | edge: edge to edge

Using RDF# for Adding Data to an RDF Graph (in six steps)

STEP 1: Add Vertices, Edges, and Vertex-Properties
John, whose net worth is $1 billion, donated to Top University. Mary, a child of John, got admitted to Top University.
unnamed triples:
  :v1  :name  “John” .            # RpV triple (vertex-property)
  :v1  :worth  “1 Bil” .          # RpV triple (vertex-property)
  :v2  :name  “TopUniv” .         # RpV triple (vertex-property)
  :v3  :name  “Mary” .            # RpV triple (vertex-property)
  :v1  :donatedTo  :v2 .          # RpR triple (edge)
  :v3  :admittedTo  :v2 .         # RpR triple (edge)
  :v3  :childOf  :v1 .            # RpR triple (edge)

STEP 2: Add Duplicate Edge
John … donated twice to Top University. …
Add a named triple with e12-2 as the name to reflect the second donation. (Note: This needs a named triple to distinguish it from the unnamed triple representing the first donation and thus avoid automatic de-duplication in RDF.)
named triple:
  :v1  [:e12-2]:donatedTo  :v2 .  # named RpR triple (edge)

STEP 3: Add Edge-Properties
John … donated twice to Top University, in the years 2010 and 2012, respectively. Mary … got admitted to Top University in 2011.
There are three target triples to hang the edge-properties from. One of these is already a named triple, with e12-2 as its name. Name the remaining two target triples e12 and e32. Next, using as subject each of the T-vertices corresponding to the three named target triples, simply add a TpV triple to reflect the desired edge-property.
named triples: (uses delete-triple followed by insert-named-triple)
  :v1  :donatedTo  :v2 .          # RpR triple (edge) – deleted
  :v3  :admittedTo  :v2 .         # RpR triple (edge) – deleted
  :v1  [:e12]:donatedTo  :v2 .    # named RpR triple (edge)
  :v3  [:e32]:admittedTo  :v2 .   # named RpR triple (edge)
unnamed triples:
  :e12  :year  2010 .             # TpV triple (edge-property)
  :e12-2  :year  2012 .           # TpV triple (edge-property)
  :e32  :year  2011 .             # TpV triple (edge-property)

STEP 4: Add Edge with Edges as Endpoints
… Bob suspects that John’s 2010 donation helped Mary’s admission.
Create a named TpT triple e1232 connecting T-vertex e12 (for the “donatedTo” triple) to T-vertex e32 (for the “admittedTo” triple) via the “helped” predicate. Create an RpV triple (vertex-property) for the “name = Bob” attribute-value pair of the new R-vertex v7. Create an unnamed RpT triple connecting R-vertex v7 to T-vertex e1232 (created above) via the “suspects” predicate.
named triple:
  :e12  [:e1232]:helped  :e32 .   # named TpT triple (edge)
unnamed triples:
  :v7  :name  “Bob” .             # RpV triple (vertex-property)
  :v7  :suspects  :e1232 .        # RpT triple (edge)

STEP 5: Add Edge with Edge-Property as Endpoint
John … donated twice to Top University, in the years 2010 and 2012 (the year 2012 was confirmed by Alex) …
Name as e12-2year the TpV triple that was created in STEP 3 above assigning the “year = 2012” attribute-value pair to the T-vertex e12-2. Create a new R-vertex v10 and then create a new RpV triple assigning the “name = Alex” attribute-value pair to v10. Finally, create a new TpR triple connecting the e12-2year T-vertex (created above) to the R-vertex v10 (created above) via the “confBy” predicate.
named triples: (uses delete-triple followed by insert-named-triple)
  :e12-2  :year  2012 .                 # TpV triple (edge-property) – deleted
  :e12-2  [:e12-2year]:year  2012 .     # named TpV triple (edge-property)
unnamed triples:
  :v10  :name  “Alex” .                 # RpV triple (vertex-property)
  :e12-2year  :confBy  :v10 .           # TpR triple (edge)
STEP 6: Add Edge with Vertex-Property as Endpoint
John, whose net worth is $1 billion (according to Dan), …
Name as v1worth the RpV triple that was created in STEP 1 above assigning the “worth = 1 Bil” attribute-value pair to the R-vertex v1. Create a new R-vertex v12 and then create a new RpV triple assigning the “name = Dan” attribute-value pair to v12. Finally, create a new TpR triple connecting the v1worth T-vertex (created above) to the R-vertex v12 (created above) via the “accTo” predicate.
named triples: (uses delete-triple followed by insert-named-triple)
  :v1  :worth  “1 Bil” .                # RpV triple (vertex-property) – deleted
  :v1  [:v1worth]:worth  “1 Bil” .      # named RpV triple (vertex-property)
unnamed triples:
  :v12  :name  “Dan” .                  # RpV triple (vertex-property)
  :v1worth  :accTo  :v12 .              # TpR triple (edge)


Using graphs to understand relationships with Oracle Database 20c

With all the information that resides in our databases and operational systems, you would think that it would be easy to answer questions about who our most important customers and suppliers are, where the critical points in a supply chain are, which business or financial transactions may be suspicious, and what patterns are visible in sales data. But these kinds of questions, and others like them that are based on how data elements are connected to others, can be difficult to express and analyze using traditional methods.

Graphs let you model, explore, and perform analysis on your data based on how it is related to other information. Whether it is a hierarchy structure in a human capital management schema, more complex relationships across sales data, consumer data, and demographic data, or near-real-time analysis of online payment patterns, it is more natural to model these connections and relationships as a graph.

Oracle Database 20c makes it simpler to install, configure, and deploy property graph features with the introduction of the Graph Server and Client Kit. This simplified packaging makes it easier for application developers to start developing applications.

How to use Graphs in Oracle Database

Model data as a Graph

The first step in performing graph analysis is to model the data as a graph. Data from tables can be modeled as a graph, stored in the property graph schema, and loaded into the in-memory analytics engine (PGX) to run the high-speed analytics algorithms. Alternatively, data can be instantiated as a graph in PGX in memory directly from standard tables. If the data is in files in a known graph format (GraphML, GraphSON, GML, or the Oracle flat file format), it can be loaded directly into the property graph schema. Once data is represented as a graph, in the property graph schema or in PGX, you can query it, run analyses, and visualize the results.

Querying and visualizing graph data

Oracle Database has a graph query language, PGQL, that simplifies the expression of traversal queries commonly used to retrieve data from graphs. These might be queries to answer questions like “What path did the funds in this account take before they were deposited?” or “How close are the personal relationships of our purchasing managers to our suppliers?”. PGQL is a SQL-like query language for property graph data structures. The language is based on the concept of graph pattern matching, which allows you to specify patterns that are matched against vertices and edges in a data graph; for example, a pattern along the lines of (a)-[:transferred_to]->(b) in a PGQL MATCH clause finds every pair of vertices connected by a “transferred_to” edge.

Oracle Database 20c now supports a rich set of graph visualization tools that let you interactively explore the graph, customize layouts, and highlight interesting relationships in your data. Seeing graph data and relationships visually lets analysts, data scientists, and developers quickly understand and explore clusters, outliers, anomalies, patterns, communities, and critical connections in their data.

Graph Algorithms to Perform Analysis

One of the benefits of graphs is that they give you the ability to apply algorithms to analyze the relationships in the data. These go beyond pattern matching and traversal and allow you to assess things like the strength of relationships, the importance of data elements, and whether something is in a critical path or influences other people or processes.
Oracle Database property graph includes over 50 out-of-the-box graph analytics algorithms that let you do things like find communities and influencers, discover anomalies and fraud, recommend products, discover complex patterns, and analyze paths. These prebuilt, parallel algorithms enable you to perform churn risk analysis, targeted marketing, HR turnover analysis, spam and fraud detection, product recommendation, and path and reachability analysis.

In Oracle Database 20c, you can create or extend graph algorithms using Java syntax, in addition to the dozens of pre-built graph analytics that come with the product. These user-defined algorithms will execute as fast as native algorithms in the product because they are compiled with the same optimizations. For unique and specialized use cases, customizing graph algorithms lets you add analysis that analysts and data scientists design specifically for your applications.

Graph in Oracle Database 20c

Oracle wants every developer, every data scientist, and anyone who uses Oracle Database to be able to use graph analytics, graph models, and graph querying of their data. To enable this, Oracle is including all of the database graph capabilities, as well as the spatial and machine learning features, in Oracle Database at no additional cost.

In addition to the simplified packaging, graph visualization, and user-defined algorithms mentioned earlier, Oracle Database 20c includes an optimized representation of a property graph that uses less memory in the in-memory analytics server (PGX). PGX can execute complex graph queries in milliseconds on graphs containing billions of nodes and edges. The optimized graph representation gives you faster performance and is transparent: existing applications will run faster with no change.


Spatial and Graph

New Spatial features in Oracle Database 20c

New Spatial features in Oracle Database 20c

Oracle Database 20c is now available for preview on Oracle Cloud (Database Cloud Service Virtual Machine). As with every major release, it includes new features and enhancements that further extend its multi-model capabilities. This post outlines a few of the new (and now free) Spatial features in this release, which address ease of use, developer productivity, and performance.

Spatial Studio is a self-service, low-code web application. It simplifies the use of the database’s vast array of spatial analysis and query functionality. It makes it easy to create and publish interactive maps to display the results of these queries and analyses. This video shows how to use it with Oracle Analytics Cloud.

Newly added support for queries on spatial data stored in the OCI Object Storage service provides more flexibility for developers. They no longer need to load data into Oracle Database before performing spatial analysis on ad hoc data sets. They can keep these data sets in place (as GeoJSON or shapefiles) and run spatial queries directly against them.

Enhanced Spatial support for Database In-Memory (DBIM) in this release addresses both ease of use and performance. Spatial filter (sdo_filter) operations can be performed on tables stored in DBIM. Users do not have to create and maintain spatial indexes when spatial tables are stored using Database In-Memory for faster query performance.

The Network Data Model now supports contraction hierarchies. A contraction hierarchy is a precomputed in-memory structure that can significantly speed up path computations. Shortest-path, drive-time polygon analysis, and traveling salesperson computations may be 10 to 100 times faster with this approach. It also enables the execution of more network analyses and concurrent requests with fewer CPUs as compared to previous releases.

In keeping with the Oracle mission to help people see data in new ways, discover insights, and unlock endless possibilities, Oracle Database 20c makes it even easier for users to include spatial capabilities in their applications and business workflows. Watch this introductory video to appreciate just how easy it is to integrate spatial data and analytics into database applications.


Graph Features

Oracle Graph Server and Client 20.1 is now available for download

By Melliyal Annamalai, Senior Principal Product Manager, Oracle

Oracle Graph Server and Client 20.1, which includes property graph features for Oracle Database, is now available for download. Oracle Graph Server and Client 20.1 is a software package that works with Oracle Database. It includes the in-memory analytics server (PGX) and the client libraries required to work with the Property Graph feature in Oracle Database. The package simplifies installation and provides access to the latest graph features and updates.

New Property Graph features in Oracle Graph Server and Client 20.1 include:

- Support for working with Autonomous Database
- New PGQL DDL to create a Property Graph from data in database tables
- GraphViz: a lightweight, single-page web application to visualize graphs
- New PGQL query features and algorithms: TOP k CHEAPEST path queries, Compute High Degree Vertices, Create Distance Index, Limited Shortest Path (Hop Distance), All Reachable Vertices/Edges
- In-memory graph representation optimized for reduced memory usage and faster performance
- Improved software packaging

Oracle Graph Server and Client works with Oracle Database 12.2 onward. Downloads are available on edelivery.oracle.com and oracle.com. Download and try it out by following the install instructions in section 1.2 of the documentation. Oracle Graph Server and Client is typically installed on a separate Linux machine; for Cloud deployment this would be a compute node instance. Sections 1.6 and 1.7 include additional useful information for getting started, and section 1.9 includes a quickstart.

The Property Graph feature supports storage, modeling, and analysis of data as graphs in Oracle Database. Graphs are a different way of looking at data. They enable data to be modeled based on relationships and connections between data entities. Graph algorithms are designed specifically to analyze relationships in data, making possible additional data insights not easily discovered using other methods. With graph analytics you can explore and discover connections and patterns in social networks, IoT, big data, data warehouses, and complex transaction data for applications such as fraud detection in banking, customer 360, and smart manufacturing. Learn more at oracle.com/goto/propertygraph.


Modeling Evolving Data in Graphs: The Power of RDF Quads

Adding Data to Your Graph? Use of RDF Quads Guarantees Your Old Queries Remain Valid and Backward-Compatible with Your New Schema!

Oracle Database has support for a Resource Description Framework (RDF) quadstore and for Property Graph (PG). We compare, from a schema evolution and backward compatibility point of view, the handling of evolving real-world data modeled as a graph using the following graph models:

- RDF Graphs (W3C RDF 1.1 Recommendation, 25-FEB-2014):
  - RDF-T: Triples-Only
  - RDF-TQ: Triples+Quads
  - RDF-Q: Quads-Only
- Property Graphs (PG)

We will do a comparative analysis of the ways in which the different graph data models handle the types of additions listed in the table below in an evolving graph, and their implications on pre-existing queries and the schema for the graph.

# | Type of Addition to a Graph (Cumulative) | Content to model as graph
1 | Vertex, Edge, Vertex-Property | John, whose net worth is $1 billion, donated to Top University. Mary, a child of John, got admitted to Top University.
2 | Duplicate Edge | John … donated twice to Top University. …
3 | Edge-Property | John … donated twice to Top University, in the years 2010 and 2012, respectively. Mary … got admitted to Top University in 2011.
4 | Edge as Endpoint of an Edge | … Bob suspects that John’s 2010 donation helped Mary’s admission.
5 | Edge-Property as Endpoint of an Edge | John … donated twice to Top University, in the years 2010 and 2012 (the year 2012 was confirmed by Alex) …
6 | Vertex-Property as Endpoint of an Edge | John, whose net worth is $1 billion (according to Dan), …

The main finding that we will try to motivate and illustrate here is that the following abilities – relevant to RDF triples, and for PG, to edges, edge-properties, and vertex-properties – are of critical importance for handling graph data addition in a query-preserving, backward-compatible manner:

- Assign a unique id to a relationship (and property-value association).
- Make use of that id as an endpoint of a relationship.
- Do the above without affecting pre-existing queries, and while maintaining backward compatibility of the evolving schema (used for query design purposes).

Motivation: Consider the rather mundane everyday content shown in the first row and its evolution to accommodate the addition of the somewhat sensational assertion in the second row of the table below:

Type of Addition to a Graph (Cumulative) | Content to model as graph
Vertex, Edge, Vertex-Property | John donated to Top University. Mary, a child of John, got admitted to Top University.
Edge as Endpoint of an Edge | … John’s donation helped Mary’s admission.

Types of Transformation: We will make use of the following two types of transformation in the illustration below.

Vertexification of an edge, edge-property, vertex-property, or RDF triple:

- Edge: A new vertex – an “edge-vertex” – is created to represent the edge. A vertex-property reflecting the edge-type of the original edge is added to the edge-vertex. Two new edges are created connecting the edge-vertex to the two endpoints of the original edge. The original edge is deleted.
- Vertex-Property: A new vertex – a “vp-vertex” – is created to represent the vertex-property. The target property-value pair is removed from the original vertex and added to the vp-vertex. A new edge is created connecting the original vertex to the vp-vertex.
- Edge-Property: A new vertex – an “ep-vertex” – is created to represent the edge-property. The original edge is replaced with two edges connecting the ep-vertex to its two endpoints.
The target property-value pair is removed from the original edge and added to the ep-vertex. Finally, the steps used for vertexification of a vertex-property (as noted above) are followed for this newly added vertex-property of the ep-vertex.
- RDF triple: (In RDF terminology, this step is essentially the same as reification.) A new resource (or vertex) – a “triple-vertex” – is created to represent the original triple. The original triple is replaced with three new triples created with the triple-vertex as the subject, recording respectively the subject, predicate, and object components of the original triple.

Quadification of an RDF triple (relevant for RDF-TQ only): A new resource (or vertex) – a “triple-vertex” – is created to represent the original triple. A quad is created with the same subject, predicate, and object components as the original triple and the triple-vertex as the “named graph” component. The original triple is deleted. (Quadification is essentially the conversion of a triple, or unnamed edge, to a quad, or named edge.)

Vertexification in PG and RDF-T: The representation of the original content in PG is shown on the left side of the diagram below, and that of the extended content on the right side. (It would be quite similar for the RDF Triples-Only model too.) It is easy to see that a lot has changed in the schema to accommodate the small additional content. Two edge-types – “donatedTo” and “admittedTo” – have disappeared, and they have been replaced with two pairs of new edge-types, <“donor” and “receiver”> and <“student” and “university”>, respectively. A new vertex-property, “event”, with a value reflecting the original edge-type, has been added. (In addition, a new edge-type, “helped”, has been added, but this is just an expected expansion of the original schema.)

What are the implications of this significant change in schema? Many original queries (e.g., “Who donated to Top University?”) that were designed based on the original schema would not work anymore. Query designers have to familiarize themselves with the new schema and then rewrite the queries accordingly.

Ideally, in the case of PG, adding a new edge of edge-type “helped” connecting the edge “e12: donatedTo” to the edge “e32: admittedTo” should have been enough. Similarly, for RDF-T, a new triple connecting the “donatedTo” triple (as subject) to the “admittedTo” triple (as object) with “helped” as the predicate should have been enough.

However, that is not allowed in either PG or RDF-T. In PG, an edge cannot be an endpoint of an edge (and the same restriction applies to vertex-properties and edge-properties as well). In RDF-T, a triple cannot be the subject or object of another triple. In PG, even though every edge has a unique edge id, the edge id cannot be used for anything other than hanging edge-properties from the edge, and in RDF-T, there is no id for a triple.

These restrictions lead to the following: only vertices are allowed to be endpoints of an edge. The workaround that was used, therefore, to arrive at the right-side diagram above was to “vertexify” the relevant edges in PG to create new vertices (edge-vertices) to represent those edges, and then use those as the endpoints. The same workaround works for RDF-T too: to use an RDF triple as an endpoint of an edge, i.e., as the subject and/or object of a new triple, we must vertexify the triple to create a resource (triple-vertex) that will represent the triple and then use it in the new triple.
(Note that vertexification is really similar to “reification”, but the word “vertexification” is probably more intuitive in the graph context.)

Quadification in RDF-TQ: The representation of the original content in RDF-TQ is shown on the left side of the diagram below, and that of the extended content on the right side. (For simplicity, the value triples – the equivalent of vertex-properties – are shown as boxes, attached to the subject node, containing the predicate-value pair.)

The change in this case looks much simpler. For each of the two relevant RDF triples that need to be connected (that is, used as the endpoints of an edge, or in RDF terminology, used as the subject or object component of a new triple), we can simply “quadify” it, that is, replace the triple with a quad that has the same subject, predicate, and object components as the original triple and has a newly created resource or vertex (triple-vertex) as its “named graph” component. The two triple-vertices created for the quadification of the two RDF triples are shown above as v4 and v5, respectively. We show only the relevant triples and quads (generated upon quadification) in the table below.

(relevant) triples before change:
  :v1  :donatedTo   :v2 .
  :v3  :admittedTo  :v2 .
(relevant) triples/quads after change:
  graph :v4 {  :v1  :donatedTo   :v2  }
  graph :v5 {  :v3  :admittedTo  :v2  }
  :v4  :helped  :v5 .

The most important thing to note here is that the original schema was not modified, but only expanded to include the new relationship type “helped”. The two relationship types “donatedTo” and “admittedTo” still continue to exist – they did not disappear. The triples that were converted to quads are still present – now as part of the quads – and pre-existing queries (such as “Who donated to Top University?”) would continue to work without any change. A query designer does not need to re-learn a new version of the original schema to rewrite the original queries. (If interested in designing new queries, the query designer may want to learn about the newly added “helped” relationship type.)

RDF-Q does NOT need vertexification or quadification: Note that in an RDF Quads-Only model (that contains quads only, no triples), if the “named graph” component of a quad is used as a name that represents the triple portion of the quad, neither vertexification nor quadification is relevant. Since RDF does not impose any restrictions on the use of such names of triples, the complex situations identified above – using triples as subject and/or object components of other triples – do not pose a problem in the RDF-Q model.

Note: Although the use of the “named graph” component of a quad in RDF-TQ and RDF-Q for representing an edge-id conflicts with its use for partitioning RDF data into a single unnamed and multiple named graphs, we do not address that here because PG also does not allow such partitioning of graphs. Nonetheless, we are currently exploring mixed use of the RDF “named graph” component – both as an edge-id and for graph partitioning.

Related Work

Our March 2014 EDBT paper [1] discussed three approaches for representing property graphs in RDF: reification, named graphs, and subproperty (i.e., rdfs:subPropertyOf) based. The two types of transformation discussed here – vertexification and quadification – map directly to the methods utilizing reification and named graphs, respectively.
The subproperty-based approach, very similar to the singleton-property-based approach [2] introduced around the same time, is not discussed here in the interest of brevity because, with respect to schema backward-compatibility in an evolving graph, it has drawbacks similar to the reification-based (and hence, vertexification) approach, unless one (redundantly) stores the same triple in two different forms to allow old queries to continue to work.

Unlike [1], this article focuses mainly on the schema evolution and backward-compatibility problem for evolving graphs and explains with examples (see the Appendix section below) how the quadification (named graph) based approach is able to guarantee backward compatibility even as the schema evolves due to new data getting added to an existing graph.

Summary and Conclusion

First, based on the discussion above about the two types of transformation – vertexification and quadification – we can summarize their implications in the following table:

Transformation | All pre-existing queries remain valid? | New schema backward-compatible with original?
vertexification | No | No
quadification | Yes | Yes

Also, the following table shows when to use which of these two types of transformation. We used the case of adding an “edge as endpoint of an edge” (listed in row #4 in the table below) earlier to show why and how vertexification is used for the RDF Triples-Only and Property Graph models, whereas use of the (much simpler) quadification transformation was sufficient for the RDF Triples+Quads model. The Appendix section below goes through a complete example to illustrate each of the types of addition listed in the table below for the Property Graph and RDF Triples+Quads models.

# | Type of Addition to a Graph | RDF Triples-Only | Property Graph | RDF Triples+Quads | RDF Quads-Only
1 | Vertex, Edge, Vertex-Property | - | - | - | -
2 | Duplicate Edge | Vertexify | - | Add quad | -
3 | Edge-Property | Vertexify | - | Quadify | -
4 | Edge as Endpoint of an Edge | Vertexify | Vertexify edge | Quadify | -
5 | Edge-Property as Endpoint of an Edge | Vertexify | Vertexify (edge and) edge-property | Quadify | -
6 | Vertex-Property as Endpoint of an Edge | Vertexify | Vertexify vertex-property | Quadify | -

Based on the discussions above (summarized in the two tables here), we posit that, with respect to maintaining backward compatibility of the evolving schema in the face of data additions to an existing graph, the following is a reasonable ranking of the different graph models:

RDF Triples-Only < Property Graph < RDF Triples+Quads < RDF Quads-Only

or, simply: RDF Triples < Property Graph < RDF Quads

References

[1] Das, S., Srinivasan, J., Perry, M., Chong, E., Banerjee, J. A Tale of Two Graphs: Property Graphs as RDF in Oracle. In Proc. of the 17th International Conference on Extending Database Technology (Athens, Greece, March 24–28, 2014), EDBT’14, on OpenProceedings.org.
[2] Nguyen, V., Bodenreider, O., Sheth, A. Don’t Like RDF Reification? Making Statements About Statements Using Singleton Property. In Proc. of the 23rd International Conference on World Wide Web, WWW’14, April 2014.

Appendix: A Complete Example

To illustrate the handling of change in graph data using both PG and RDF-TQ, we show below the steps involved in handling the cumulative content additions in the table shown at the beginning of this article. The changes for each step are shown in red in the diagram.

(Note: To avoid clutter in the diagrams below, RDF “value” triples, which connect subject resources to values, are shown like vertex-properties in PG.
If the subject resource (vertex), due to quadification, represents a triple, then the value triples for that subject are shown like edge-properties in PG.)

STEP 1: Add Vertices, Edges, and Vertex-Properties
John, whose net worth is $1 billion, donated to Top University. Mary, a child of John, got admitted to Top University.

Property Graph: Three vertices – v1, v2, and v3 – represent the three entities, namely John, TopUniv, and Mary, respectively. Vertex-properties represent the names and, in the case of John, his net worth too. Three edges – e12, e32, and e31 – connect the ordered pairs (v1, v2), (v3, v2), and (v3, v1), reflecting the “donatedTo”, “admittedTo”, and “childOf” edge-types, respectively.

RDF Triples+Quads: Similar to the PG case, but with no edge-ids. RDF triples only; no quads. (The diagram shows the “value” triples in the same way as PG vertex-properties.)
added triples:
  :v1  :name  “John” .
  :v1  :worth  “1 Bil” .
  :v2  :name  “TopUniv” .
  :v3  :name  “Mary” .
  :v1  :donatedTo  :v2 .
  :v3  :admittedTo  :v2 .
  :v3  :childOf  :v1 .

STEP 2: Add Duplicate Edge
John … donated twice to Top University. …

Property Graph: Add a new edge e12-2 with edge-type “donatedTo” with the same endpoints as the first donation edge.

RDF Triples+Quads: Add a quad with e12-2 as the triple-vertex (named graph) to reflect the second donation. (Note: This needs a quad to distinguish it from the triple representing the first donation and thus avoid automatic de-duplication in RDF.)
added quad:
  graph  :e12-2  {  :v1  :donatedTo  :v2  }

STEP 3: Add Edge-Properties
John … donated twice to Top University, in the years 2010 and 2012, respectively. Mary … got admitted to Top University in 2011.

Property Graph: Hang the edge-properties from the three target edges e12, e32, and e12-2, respectively.

RDF Triples+Quads: There are three target edges (triples) to hang the edge-properties from. One of these is already in quad form, with e12-2 as the triple-vertex (named graph). Quadify the remaining two edges (triples) with e12 and e32 as the triple-vertices (named graphs). Next, for each of the three target edges, simply add a (value) triple with the triple-vertex of the edge as the subject to reflect the desired edge-property.
quadified triples: (uses delete-triple followed by insert-quad)
  :v1  :donatedTo  :v2 .                      (deleted)
  :v3  :admittedTo  :v2 .                     (deleted)
  graph  :e12  {  :v1  :donatedTo  :v2  }
  graph  :e32  {  :v3  :admittedTo  :v2  }
added triples:
  :e12  :year  2010 .
  :e12-2  :year  2012 .
  :e32  :year  2011 .

STEP 4: Add Edge with Edges as Endpoints
… Bob suspects that John’s 2010 donation helped Mary’s admission.

Property Graph: Two new edges need to be created: a “helped” edge and a “suspects” edge. The “helped” edge would have two edges as its source and destination endpoints: the first “donatedTo” edge e12 and the “admittedTo” edge e32. The “suspects” edge would have a new “Bob” vertex as its source and the “helped” edge as its destination. Vertexify edge e12: a new edge-vertex v4 gets created with appropriate connections and vertex-properties, and edge e12 is deleted. Vertexify edge e32: a new edge-vertex v5 gets created with appropriate connections and vertex-properties, and edge e32 is deleted. Assume that we create a hypothetical new edge, “e45: helped”, connecting v4 to v5 at this point. Vertexify the (hypothetical) edge e45: since the “suspects” edge needs to have this e45 edge as an endpoint, we need to vertexify it; a new edge-vertex v6 gets created with appropriate connections and vertex-properties.
Finally, create the "e76: suspects" edge connecting the new "Bob" vertex (v7) to the edge-vertex (v6) representing the "helped" edge.

RDF Triples+Quads: Two new edges need to be created: a "helped" edge and a "suspects" edge. The "helped" edge would have two edges as its source and destination endpoints: the first "donatedTo" edge and the "admittedTo" edge. Since both edges were already quadified (as edge-vertices :e12 and :e32) when adding edge-properties, simply connect those edge-vertices to create the "helped" edge as a triple. The "suspects" edge would have the newly created "helped" edge as the destination and a new vertex for "Bob" as the source. Quadify the "helped" edge, as edge-vertex :e1232, and then create the "suspects" edge as a triple with the new "Bob" vertex (v7) as the subject and the edge-vertex :e1232 as the object.

added quad:
graph  :e1232  {  :e12  :helped  :e32  }

added triples:
:v7  :name  "Bob" .
:v7  :suspects  :e1232 .

STEP 5: Add Edge with Edge-Property as Endpoint

John ... donated twice to Top University, in the years 2010 and 2012 (the year 2012 was confirmed by Alex) ...

Property Graph: Vertexify the e12-2 edge to create the edge-vertex v8 with event="donatedTo" and year=2012 as vertex-properties. Vertexify the year=2012 vertex-property created above to create the vp-vertex v9, connected from v8 by a new edge "e89: year", with year=2012 as a vertex-property (transferred from v8). Next, create the desired "e910: confBy" edge by connecting the above vp-vertex (v9) to a new "Alex" vertex (v10).

RDF Triples+Quads: Quadify the "value" triple that was created in STEP 3 above: ":e12-2  :year  2012". The new triple-vertex, e12-2year, that gets created here represents this year=2012 edge-property for the second donation edge. Next, create the desired "confBy" edge as a triple with the triple-vertex e12-2year as the subject and a new "Alex" vertex (v10) as the object.

quadified triples: (uses delete-triple followed by insert-quad)
:e12-2  :year  2012 .
graph  :e12-2year  {  :e12-2  :year  2012 }

added triples:
:v10  :name  "Alex" .
:e12-2year  :confBy  :v10 .

STEP 6: Add Edge with Vertex-Property as Endpoint

John, whose net worth is $1 billion (according to Dan), ...

Property Graph: Vertexify the worth="1 Bil" vertex-property that was created in STEP 1 above. The newly created vp-vertex is v11. Add worth="1 Bil" as a vertex-property to the vp-vertex (v11). Next, create the desired "e1112: accTo" edge by connecting the vp-vertex (v11) to a new "Dan" vertex (v12).

RDF Triples+Quads: Quadify the (value) triple, worth="1 Bil", that was created in STEP 1 above. The newly created triple-vertex is v1worth. Next, create the desired "accTo" edge as a triple with the triple-vertex v1worth as the subject and a new "Dan" vertex (v12) as the object.

quadified triples: (uses delete-triple followed by insert-quad)
:v1  :worth  "1 Bil" .
graph  :v1worth  { :v1  :worth  "1 Bil" }

added triples:
:v12  :name  "Dan" .
:v1worth  :accTo  :v12 .
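To see the "delete-triple followed by insert-quad" mechanics end to end, here is a minimal sketch of the STEP 3 quadification of the "admittedTo" edge, written as a SPARQL 1.1 Update and executed through the SEM_APIS.UPDATE_MODEL procedure used elsewhere on this blog. The model name M1 and the ex: prefix are illustrative assumptions, not part of the example above.

-- Illustrative only: model M1 and prefix ex: are assumptions
BEGIN
  sem_apis.update_model('M1',
    'PREFIX ex: <http://example.org/>
     # delete the plain triple ...
     DELETE DATA { ex:v3 ex:admittedTo ex:v2 . };
     # ... re-insert it as a quad in named graph ex:e32, and hang the
     # year edge-property off the triple-vertex ex:e32
     INSERT DATA {
       GRAPH ex:e32 { ex:v3 ex:admittedTo ex:v2 . }
       ex:e32 ex:year 2011 .
     }');
END;
/

Per the argument above, pre-existing triple queries remain valid because the statement itself is unchanged; the named graph merely makes it addressable as a subject for edge-properties.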


Graph Features

Graph Database and Analytics for everyone

In keeping with the Oracle mission to help people see data in new ways, discover insights, and unlock endless possibilities, customers wishing to use the Machine Learning, Spatial, and Graph features of Oracle Database are no longer required to purchase additional licenses. Oracle wants every developer, every data scientist, and anyone who uses Oracle Database to be able to use graph analytics, graph models, and graph querying of their data.

For decades, Oracle Database has included industry-leading multi-model and analytic capabilities. Oracle's converged database architecture supports multiple data types and data models (e.g. spatial, graph, JSON, XML), algorithms (e.g. machine learning, graph, and statistical functions), and workload types (e.g. operational and analytical) within a single database. While many of these capabilities are included in Oracle Database products and cloud services today, it is our goal that all developers have the ability to use these advanced development APIs.

What you can do with Graphs

Graphs let you model data based on relationships in a more natural, intuitive way. They let you explore and discover connections and patterns in social networks, IoT, big data, data warehouses, and complex transaction data for applications such as fraud detection in banking, customer 360, and smart manufacturing. Graph algorithms – operations specifically designed to analyze relationships and behaviors among data in graphs – make it possible to understand things that are difficult to see with other methods. For example, graph algorithms can identify which individual or item is most connected to others in social networks or business processes. They can identify communities, anomalies, common patterns, and paths that connect individuals or related transactions. Every Oracle Database now includes both property graph and RDF graph data models as well as algorithms, query languages, and visualization tools.

Property graphs are often used to identify potentially fraudulent transactions. Companies like Paysafe, a payment processing system, use Oracle property graph technologies to identify fraud patterns while ensuring a flawless customer experience and real-time money transfer. With traditional data models, it is almost impossible to see beyond individual accounts to the connections between them. Paysafe has implemented Oracle property graph, including its fast, built-in, in-memory graph analytics, to perform fast graph queries that identify patterns of fraud.

Public safety and intelligence agencies use Oracle property graph analysis to analyze suspicious travel patterns for early detection of potential threats. A European police force models the Integrated Operations Management System and the Advance Passenger Information System data in Oracle property graph to look at passenger relationships and determine whether individuals are traveling with, or have traveled with, known persons of interest.

RDF graphs are widely used by statistics bureaus, pharmaceutical research, and EU Linked Open Data programs for knowledge graphs and to create a unified metadata layer for disparate applications that facilitates identification, integration, and discovery. RDF graphs are central to knowledge management, publishing, and social network applications common in the healthcare and life sciences, finance, media, and intelligence communities.
All graph features are included

All Graph features are now included with Oracle Database licenses:

Property Graph database
PGX in-memory graph engine
PGQL graph query language
50+ graph algorithms
Support for graph visualization

RDF Graph database
SPARQL graph query language
Java APIs via open source Apache Jena
W3C standards support for semantic data, ontologies, and inferencing
RDF Graph views of relational tables

Developers and data scientists can use notebooks, a shell UI, and a rich Java API to work with graphs. Watch this introductory video to see just a few of the things you can do with Oracle's graph capabilities. A minimal SQL sketch for creating a graph of each kind follows below.
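As a quick, hedged illustration of getting started with each model from SQL — the graph name, table name, and storage parameters here are assumptions for illustration only:

-- Property graph: create the backing tables for a graph named mypg
-- (OPG_APIS package; parallelism, hash partitions, and tablespace are
-- illustrative values)
exec opg_apis.create_pg('mypg', 4, 8, 'USERS');

-- RDF graph: create an application table, then a semantic model over it
create table m1_tab (triple sdo_rdf_triple_s);
grant insert on m1_tab to mdsys;
exec sem_apis.create_sem_model('m1', 'M1_TAB', 'TRIPLE');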


Spatial and Graph

Spatial now included with all editions of Oracle Database

In keeping with the Oracle mission to help people see data in new ways, discover insights, and unlock endless possibilities, customers wishing to use the Machine Learning, Spatial, and Graph features of Oracle Database are no longer required to purchase additional licenses.

For decades, Oracle Database has included industry-leading multi-model and analytic capabilities. Oracle's converged database architecture supports multiple data types and data models (e.g. spatial, graph, JSON, XML), algorithms (e.g. machine learning, graph, and statistical functions), and workload types (e.g. operational and analytical) within a single database. While many of these capabilities are included in Oracle Database products and cloud services today, it is our goal that all developers have the ability to use these advanced development APIs. The core spatial features in Oracle Spatial have been included in the Oracle Database Locator feature for over a decade. Now, all Oracle Spatial capabilities are available with both Enterprise Edition and Standard Edition 2.

All Spatial features are included

Spatial features now available for development and deployment with Oracle Database include:

2D and 3D geometry data types to represent locations, regions, and other geometries
All spatial functions and operators
Map authoring tools
Sensor and imagery data support with Point Cloud, LiDAR, and GeoRaster data types and operators
Networks and parcel data management with Linear Referencing, Network, and Topology data models
Geocoder, Routing Server, Tracking Server, Map Server
Open Geospatial Consortium web service APIs and Spatial Studio, a self-service, no-code/low-code map canvas and spatial analysis tool

What you can do with Spatial

At the simplest level, spatial analysis enables better understanding of complex interactions based on geographic relationships. This can be important to virtually every application; it can help provide better customer service, optimize the workforce, easily locate retail and distribution centers, and evaluate sales and marketing campaigns, just to name a few. Every Oracle Database now includes comprehensive spatial analytics and data models. For traditional business analysis, you can perform queries based on proximity (how near or far something may be) and containment (whether something is within or outside a given region); a short SQL sketch of both appears below. In fact, there are hundreds of functions and operations to filter data, measure distance relationships, and combine and/or transform geometries. Powerful visualization and self-service map-based analysis tools and APIs let developers, data scientists, and analysts easily benefit from these capabilities.

For example, Neustar, a real-time provider of cloud-based information systems and data analytics, gives clients spatially informed marketing insights for retail site analysis, market potential sizing, marketing campaign creation, and more to acquire and retain customers and improve site location strategies. For more specialized analysis and applications, often referred to as GIS or geographic information systems, the spatial features in Oracle Database also support large-scale and complex geospatial applications. For example, a global construction and engineering firm uses Oracle Database spatial features to manage survey data and ensure the data are consistent and properly aligned. These precise measurements, based on weather sensor data, are collected and validated for renewable solar energy projects.
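To make the proximity and containment queries mentioned above concrete, here is a minimal sketch using the SDO_WITHIN_DISTANCE and SDO_CONTAINS operators. The customers, stores, and regions tables and their geometry columns are hypothetical; only the operators themselves are part of Oracle Spatial.

-- Proximity: customers within 5 km of a given store
-- (assumes spatial indexes on the hypothetical geom columns)
SELECT c.name
FROM   customers c, stores s
WHERE  s.store_id = 101
AND    SDO_WITHIN_DISTANCE(c.geom, s.geom, 'distance=5 unit=KM') = 'TRUE';

-- Containment: stores inside a sales region
SELECT s.name
FROM   stores s, regions r
WHERE  r.region_name = 'NORTHEAST'
AND    SDO_CONTAINS(r.geom, s.geom) = 'TRUE';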
Other spatial use cases include:

The US Census Bureau ensures that every land parcel and geographic feature, such as roads, railroads, geographic areas, landmarks, waterways, and other relevant geographic information, is accurately captured, seamlessly merging geospatial data (with associated feature attributes) with non-spatial residential and business address datasets.

Ordnance Survey Ireland (OSi), the Irish national mapping agency, maintains national spatial data sets and creates mapping products for government agencies and the public. This includes remotely sensed data, point clouds, aerial photos, and satellite imagery, along with the need to automate complex workflow processes and share data widely with the public. OSi addressed these challenges with Oracle Cloud Infrastructure (OCI), Oracle Spatial and Graph, and the Oracle Exadata platform.

Oracle Database now makes it easy for developers to add spatial capabilities to applications using modern application frameworks, with standards-based SQL and Java APIs, JSON and REST support, and integration with database tools, analytics, and applications. Watch this introductory video to appreciate just how easy it is to integrate spatial data and analytics into database applications.


Spatial and Graph

Spatial Studio on Oracle Cloud Marketplace

Oracle Spatial Studio is now available on the Oracle Cloud Marketplace. The Oracle Cloud Marketplace is an online store offering hundreds of apps and services from both Oracle and Oracle Partners. More info here. The following walks through the steps to create an instance.

Locate Spatial Studio on the Marketplace

If you have an Oracle Cloud account, you access the Marketplace from the main menu. If you do not have an Oracle Cloud account, go to https://cloudmarketplace.oracle.com/marketplace, click Sign In and then Create Account. You may also create a free tier account at https://www.oracle.com/cloud/free

Find Spatial Studio by filtering on Type: "Stack" and Publisher: "Oracle". Click on Spatial Studio and then the Usage Instructions tab to see important prerequisite steps.

Perform Prerequisite Steps

The installation process needs networking, compute, and storage resources, access to the Marketplace, and privileges to invoke and run jobs. These steps are outlined in the Usage Instructions and are appropriate for an Oracle Cloud Infrastructure (OCI) admin. For those unfamiliar with OCI: Overviews: Intro to OCI Compute (video), Overview of Compute Service. Specific topics: Compartments, Virtual Cloud Networks (VCNs), Security Lists, Groups, Policies.

Configure VCN

We first create a Virtual Cloud Network (VCN). Access VCNs from the main menu under Networking. Select your Compartment on the bottom left. I am working in a Compartment named "MyCompartment". Then click Networking Quickstart. Leave the default selection, VCN with Internet Connectivity, and click Start Workflow. Under Basic Information, enter a name for your VCN and select your Compartment. Under Configure VCN and Subnets, enter the example blocks provided. Then click Next. Once all VCN resources are created, click View Virtual Cloud Network.

Next we will configure our VCN's security list with ingress rules for the ports used to access Studio (port 8001) and the WebLogic Admin Console (port 7002). Click Security Lists on the left, and then click "Default Security List for [your VCN name]". Next click Add Ingress Rules. In SOURCE CIDR enter "0.0.0.0/0" and in DESTINATION PORT RANGE enter "8001,7002" without spaces. Then click Add Ingress Rules at the bottom. You should now see the rules applied.

Configure Group Policy

Next we add Policies that allow management of the required resources in your compartment. Best practice is to apply policies to Groups and not directly to Users. In this example I have already created a group called MyGroup. You may access and create Groups from the main menu under Identity. We will now apply the set of Policies required to deploy and run the Spatial Studio stack. Access the Policies page from the main menu under Identity. Select your Compartment on the lower left, and then click Create Policy. Enter a name for the Policy; in this example it's called "MyPolicy". Next paste in the Policy statements from the Usage Instructions one by one, replacing "MyGroup" and "MyCompartment" with your actual Group and Compartment names. After each statement, click the "+" button to add a new empty row for the next statement.
Repeat until all of the following statements are entered:

allow group MyGroup to manage instance-family in compartment MyCompartment
allow group MyGroup to manage virtual-network-family in compartment MyCompartment
allow group MyGroup to manage volume-family in compartment MyCompartment
allow group MyGroup to manage load-balancers in compartment MyCompartment
allow group MyGroup to manage orm-stacks in compartment MyCompartment
allow group MyGroup to manage orm-jobs in compartment MyCompartment
allow group MyGroup to manage app-catalog-listing in compartment MyCompartment

When all statements are entered, click Create at the bottom. You will now have the Policy listed on the Policies page.

Create SSH key pair

Finally, we must have an existing SSH key pair or generate a new one. An SSH key is used by OCI to authenticate a remote user and is required during installation. For those unfamiliar, please see the general information here. To create your SSH key pair you may use PuTTY on a PC or ssh-keygen on the command line on a Mac. Have your public key handy for the following steps.

Identify Database for Spatial Studio Repository

A database is not required for the installation of the Spatial Studio application. However, on first launch Spatial Studio does require connecting to an Oracle 12.2+ database accessible from the Studio deployment, as described here. Therefore, before proceeding with the Studio install, it is recommended that you identify the database to be used.

Install Studio

Now that prerequisites are complete, we can begin the install. Return to the Spatial Studio page on the Marketplace, select the Studio version and Compartment, and click Launch Stack. Leave the defaults and click Next. Enter the required Compute and Network info. Descriptions of shapes are here (scroll down to Virtual Machine Shapes). Review the summary and click Create. You will see the stack creation job move from a status of "Accepted" to "In Progress". Once the status is "Succeeded", an "Application Information" tab will be available showing login info. Note the Public IP. The WebLogic Console will be available at https://[public IP]:7002/console and Spatial Studio at https://[public IP]:8001/spatialstudio. As noted in the instructions, you should change the default "weblogic" user password from "welcome1" to something secure. Instructions are here. After changing the password, you may need to clear your browser's cache in order to log in.

Access Spatial Studio

Access Spatial Studio at https://[public IP]:8001/spatialstudio, or using the link on the Application Information page. Spatial Studio is configured to use the https:// protocol, so unless you have configured a valid security certificate in WebLogic, you will see a security warning when opening Spatial Studio. If this is the case, you may click the option to "Proceed to [IP address]" and then accept the option to permanently store this certificate exception. This issue will be handled more smoothly in a future release. Once the security exception is accepted, you will see the Spatial Studio login page. Spatial Studio logins are based on WebLogic users. The default login is weblogic/welcome1, but as mentioned above, this default password should be changed.

Create schema for the Spatial Studio repository

On first login, Spatial Studio will prompt for database connection info for its repository. The repository schema holds metadata as described here, and the required tables are created automatically by Spatial Studio upon first connection.
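If you prefer a dedicated database user for the repository rather than reusing an existing schema, a minimal sketch follows. The user name, password, and quota are illustrative assumptions, and your security policy may call for narrower grants.

-- Illustrative only: a minimal schema to hold the Spatial Studio repository
CREATE USER studio_repo IDENTIFIED BY "UseAStrongPassword1";
GRANT CONNECT, RESOURCE TO studio_repo;
ALTER USER studio_repo QUOTA UNLIMITED ON users;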
The database version must be 12.2+, accessible from your OCI compute node, and can be traditional or Autonomous.

Uninstalling the Marketplace Deployment

To remove a Cloud Marketplace installation of Spatial Studio, go to the main menu, click "Resource Manager" and then click the sub-item "Stacks". Click on the Spatial Studio stack to drill to details. Then click the "Terraform" menu and select "Destroy". It is important that you perform the Terraform "Destroy". The "Delete Stack" option might seem like the way to uninstall, but it only removes the Terraform stack creation instructions. If you want to free up the resources, make sure you "Destroy".

Please note: A known issue with the first release of Spatial Studio on the Cloud Marketplace requires the following steps to start Spatial Studio after a reboot of the system. (This has since been fixed, so new deployments are not affected.)

1. SSH to the Compute node
    See https://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsg/accessing-oracle-linux-instance-using-ssh.html

2. Configure the boot.properties file (only needs to be done once)
    Check for the existence of boot.properties using
$ ls /u01/app/oracle/domains/spatialDomain/servers/AdminServer/security/boot.properties
    If the file does not exist, create it:
$ echo "username=weblogic" > /u01/app/oracle/domains/spatialDomain/servers/AdminServer/security/boot.properties
$ echo "password=[your password]" >> /u01/app/oracle/domains/spatialDomain/servers/AdminServer/security/boot.properties
    (where [your password] is the default welcome1, or a custom value if you updated it)
    If the file does exist, verify it includes the encrypted username and password:
$ grep 'username\|password' /u01/app/oracle/domains/spatialDomain/servers/AdminServer/security/boot.properties

3. Copy the boot.properties file to the managed server (only needs to be done once)
$ cp /u01/app/oracle/domains/spatialDomain/servers/AdminServer/security/boot.properties /u01/app/oracle/domains/spatialDomain/servers/ms_1/security/

4. Run the following to start all servers (needs to be run after every system reboot)
$ nohup /u01/app/oracle/domains/spatialDomain/bin/startNodeManager.sh > /dev/null 2>&1 &
$ nohup /u01/app/oracle/domains/spatialDomain/bin/startWebLogic.sh > /dev/null 2>&1 &
$ nohup /u01/app/oracle/domains/spatialDomain/bin/startManagedWebLogic.sh ms_1 > /dev/null 2>&1 &

These can be combined into a shell script for convenience.

Now it's your turn

We hope you give Spatial Studio a try! Please feel free to post feedback on our newly created Spatial Studio forum, and we'll update this info as the process evolves.


Spatial Studio Best Practice - Spatial Filters

We will be providing blog posts on best practices with Spatial Studio, and we begin here with spatial filters. But before we begin, a quick comment: understanding best practices such as the info below will become less and less important with future releases, because we plan to enhance Spatial Studio with an increasingly guided UX to avoid pitfalls. Now on with the info...

Spatial Studio supports multi-step analyses, where the result of one analysis is input to another. In these cases the steps can be accomplished in more than one possible order of operations. However, the order of operations can make a huge difference in performance, particularly with spatial filters. The rules to follow for spatial filters are:

Apply spatial filters to datasets that are based on tables. Do not apply a spatial filter to an analysis result.
The layer to use as the spatial filter (i.e. the search area) can be based on a table or an analysis.
Use analyses to create the final search area, and use it to filter a dataset based on a table.

Here's an example to illustrate this. In a previous article we demonstrated a spatial filter to identify Urban Areas located in the British Isles (UK and Ireland). That spatial filter operation was a one-step process involving datasets based on tables (uploaded from Shapefiles), so it worked just fine and there was no ambiguity in the order of operations. As a multi-step example, we'll identify Airports located in an Urban Area and in the British Isles. We begin by adding AIRPORTS from Natural Earth using the same steps as shown in the previous article linked above. We have a few options to accomplish our scenario and need to understand which is the right way to go.

Option 1: Filter for Airports in the British Isles, and then filter that result for items in Urban Areas. We reject this option since it applies a spatial filter to an analysis result.

Option 2: Filter for Airports in Urban Areas, and then filter that result for items in the British Isles. Again, we reject this option since it applies a spatial filter to an analysis result.

Option 3: Filter for Urban Areas in the British Isles, and then filter Airports using the result of the first analysis as the search area. This follows the rule of creating the overall spatial filter using analyses, and applying the spatial filter to a layer based on a table.

We now proceed with the scenario using the preferred Option 3 above. We encourage you to follow this approach to ensure performant spatial filter analyses. If you run into performance issues with a spatial filter analysis, the first thing to check is whether this best practice has been followed.
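One plausible intuition for why the order matters: a dataset based on a table can be filtered using that table's spatial index, while an analysis result is computed on the fly and has no index to use. Roughly, and assuming hypothetical AIRPORTS, URBAN_AREAS, and BRITISH_ISLES tables with indexed geometry columns, the preferred Option 3 corresponds to SQL along these lines:

-- Analyses build the search area (urban areas in the British Isles);
-- the spatial filter then runs against the indexed AIRPORTS table
SELECT a.*
FROM   airports a
WHERE  EXISTS (
  SELECT 1
  FROM   urban_areas u, british_isles b
  WHERE  SDO_ANYINTERACT(u.geom, b.geom) = 'TRUE'   -- analysis: search area
  AND    SDO_ANYINTERACT(a.geom, u.geom) = 'TRUE'   -- filter: indexed table
);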


Spatial and Graph

Spatial Studio - Hello World!

The newly released Oracle Spatial Studio ("Spatial Studio") simplifies the spatial analysis and visualization capabilities in Oracle Database. This is the first installment of blog posts providing how-to, tips, best practices, and overall guidance in the use of Spatial Studio. We look forward to your feedback and creative uses of Spatial Studio! For an overview of Oracle Spatial Studio, please visit the product page at https://www.oracle.com/database/technologies/spatial-studio.html.

Spatial Studio is provided as both a deployable Java EE application and a pre-deployed, self-contained Quick Start. The Quick Start is an easy way to get up and running in just a few minutes. You just need an Oracle Database with Spatial and Graph, version 12.2 or newer. For instructions on setting up Spatial Studio, please refer to https://www.oracle.com/database/technologies/spatial-studio/get-started.html.

Once you have Spatial Studio running, follow along with the basic workflow steps below. In this workflow we will load country and urban area polygons and then determine which urban areas are in a country of interest.

Download Shapefiles:
Go to https://www.naturalearthdata.com/downloads/10m-cultural-vectors/
Download "Admin 0 - Countries" and "Urban Areas" and unzip the files
You should now have two folders containing the Shapefiles

Create datasets:
Log into Spatial Studio (if using the Quick Start, log in as the admin user; if using a WebLogic deployment, log in as the WebLogic admin user)
Click Create > Dataset
Create Datasets from the Shapefiles

Create Project:
Click Create > Project
Add datasets
Create a map and style the layers

Perform Spatial Analysis:
We will apply a spatial filter to identify Urban Areas in the British Isles
Select the United Kingdom and Ireland; this will be our spatial filter
Open Spatial Analysis for Urban Areas
Configure and run the spatial filter
Drag the analysis result into the map and set its style

There's lots more to cover, such as:
Spatial Analysis best practices
Feeding the results of a spatial analysis into another analysis
Saving our project so that we can revisit our work later
Publishing our project for others to consume
Exporting our results for upload to other applications such as Oracle Data Visualization
Exposing our results as a GeoJSON REST endpoint for application integration

These and other topics will be covered in subsequent posts, so please stay tuned.


Spatial and Graph

Announcing Oracle Spatial Studio

Oracle Spatial Studio, a self-service application for creating interactive maps and performing spatial analysis on your data, is now available for download on Oracle Technical Resources (formerly Oracle Technology Network) and the Oracle Software Delivery Cloud. It has been designed for business users who do not have any GIS or spatial knowledge. Now business users and non-GIS developers have a simple user interface to access the spatial features in Oracle Database and incorporate spatial analysis in applications and workflows. Users can easily get their data into Spatial Studio, prepare it, visualize and perform spatial analysis on it, and publish the results.

Spatial Studio is included as an application with all Autonomous Database subscriptions and offerings that include the Oracle Spatial and Graph option. These include Autonomous Database, Autonomous Data Warehouse, Autonomous Transaction Processing, Oracle Database Cloud Service High Performance Edition, Oracle Database Cloud Service Extreme Performance Edition, Oracle Database Exadata Cloud, Oracle Database Exadata Cloud at Customer, and Oracle Database Enterprise Edition with the Spatial and Graph option. For more information on how to get started and to view an Oracle Spatial Studio demo, go to Oracle Spatial Studio.


Graph Features

A simple Customer 360 analytics example with Oracle Property Graph

Customer 360 example

Note: This is based on Ryota Yamanaka's (Senior Solutions Consultant, Big Data & Analytics, APAC) demos. The original content is here in his GitHub repository.

Introduction

This example shows how integrating multiple datasets, using a graph, facilitates additional analytics and can lead to new insights. We will use three small datasets for illustrative purposes. The first contains accounts and account owners. The second is purchases by the people who own those accounts. The third is transactions between these accounts.

The combined dataset is then used to perform the following common graph queries and analyses: pattern matching (e.g. a path meeting specified criteria), detection of cycles (e.g. circular payments), finding weakest or strongest links (e.g. influencer identification), community detection (i.e. connected components), and recommendation (aka link prediction).

The datasets

The data can come from multiple sources, including database tables, web services, or files. We use a simple property graph descriptive format to list each one. The file lists the nodes, edges, and their properties. Each line consists of IDs (one for nodes; source and destination for edges), the type (or label), and key:value pairs of properties.

Accounts and account owners:

# Nodes
101 type:customer name:"John" age:15 location:"Boston"
102 type:customer name:"Mary" gender:"F"
103 type:customer name:"Jill" location:"Boston"
104 type:customer name:"Todd" student:"true"
201 type:account account_no:"xxx-yyy-201" open:"2015-10-04"
202 type:account account_no:"xxx-yyy-202" open:"2012-09-13"
203 type:account account_no:"xxx-yyy-203" open:"2016-02-04"
204 type:account account_no:"xxx-yyy-204" open:"2018-01-05"

# Edges
201 -> 101 :owned_by
202 -> 102 :owned_by
203 -> 103 :owned_by
204 -> 104 :owned_by
103 -> 104 :parent_of

The resulting subgraph (for these nodes and edges) is shown in the figure below. The PGQL query for it is  SELECT * MATCH ()-[e:parent_of|owned_by]->()  which selects all edges (and their nodes) that have a label of owned_by or parent_of.

Purchases:

# Nodes
301 type:merchant name:"Apple Store"
302 type:merchant name:"PC Paradise"
303 type:merchant name:"Kindle Store"
304 type:merchant name:"Asia Books"
305 type:merchant name:"ABC Travel"

# Edges
201 -> 301 :purchased amount:800
201 -> 302 :purchased amount:15
202 -> 301 :purchased amount:150
202 -> 302 :purchased amount:20
202 -> 304 :purchased amount:10
203 -> 301 :purchased amount:350
203 -> 302 :purchased amount:20
203 -> 303 :purchased amount:15
204 -> 303 :purchased amount:10
204 -> 304 :purchased amount:15
204 -> 305 :purchased amount:45

The resulting subgraph (for these nodes and edges) is shown in the figure below. The PGQL query for it is  SELECT * MATCH ()-[e:purchased]->()  which selects all edges (and their nodes) that have a label of purchased.

Transactions:

# Nodes
211 type:account account_no:xxx-zzz-001
212 type:account account_no:xxx-zzz-002

# Edges
201 -> 202 :transfer amount:200 date:"2018-10-05"
211 -> 202 :transfer amount:900 date:"2018-10-06"
202 -> 212 :transfer amount:850 date:"2018-10-06"
201 -> 203 :transfer amount:500 date:"2018-10-07"
203 -> 204 :transfer amount:450 date:"2018-10-08"
204 -> 201 :transfer amount:400 date:"2018-10-09"
202 -> 203 :transfer amount:100 date:"2018-10-10"
202 -> 201 :transfer amount:300 date:"2018-10-10"

The resulting subgraph (for these nodes and edges) is shown in the figure below.
The PGQL query for it is  SELECT * MATCH ()-[e:transfer]->()  which selects all edges (and their nodes) that have a label of transfer. And the full graph, once created, is as shown below.

Getting started

The sample data, and some config files, are available from Ryota's GitHub pgx-training repository under the c360/2019-05-06 directory. PGX is part of the supported Big Data Spatial and Graph and Database Spatial and Graph releases and is included in those downloads. A standalone, unsupported version of the PGX server package is also available from the Oracle Labs PGX site on the Oracle Technology Network. Download and install the PGX server and the required Groovy, set the GROOVY_HOME environment variable to point to the Groovy installation, and verify that the PGX shell works. e.g.

echo $GROOVY_HOME
/opt/groovy-2.5.6
cd ~/Examples/pgx-19.1.0
./bin/pgx
[WARNING] PGX shell is using Groovy installed in /opt/groovy-2.5.6. It is the responsibility of the user to use the latest version of Groovy and to protect against known Groovy vulnerabilities.
PGX Shell 19.1.0
PGX server version: 19.1.0 type: SM
PGX server API version: 3.3.0
PGQL version: 1.2
type :help for available commands
variables instance, session and analyst ready to use

Load the C360 sample data. Let's assume the pgx-training repository was downloaded to ~/Examples/pgx-training-master.

pgx> c360graph = session.readGraphWithProperties("~/Examples/pgx-training-master/c360/all.pgx.json")
==> PgxGraph[name=all.pgx,N=15,E=19,created=1556742411188]

Performing Graph Analyses

Now let's use PGQL to perform some pattern matching and graph analyses, like detecting cycles or finding communities and influencers.

Pattern Matching

Let's say we're looking for accounts that are mostly, or only, used as a temporary holding area and thus have inbound and outbound transfers within a short time period. For example, we might look for accounts that had an inbound and an outbound transfer, of over 500, on the same day. The PGQL query for this is:

SELECT a.account_no, t1.amount, t2.amount, t1.date
MATCH (a) <-[t1:transfer]-(a1),
      (a)- [t2:transfer]->(a2)
WHERE t1.date = t2.date AND t1.amount > 500 AND t2.amount > 500

We can execute this in the PGX shell using (note: the triple quotes """ mark the start and end of a multiline statement)

c360graph.queryPgql("""
SELECT a.account_no, t1.amount, t2.amount, t1.date
MATCH (a) <-[t1:transfer]-(a1),
      (a)- [t2:transfer]->(a2)
WHERE t1.date = t2.date AND t1.amount > 500 AND t2.amount > 500
""").print()

and the result is

+---------------------------------------------------+
| a.account_no | t1.amount | t2.amount | t1.date    |
+---------------------------------------------------+
| xxx-yyy-202  | 900       | 850       | 2018-10-06 |
+---------------------------------------------------+

In this particular instance the transfers are from and to an external entity, and hence this account may be unprofitable depending on its fee structure.

Detect Cycles

Next we use PGQL to find a series of transfers that start and end at the same account, such as A to B to A, or A to B to C to A.
The first query could be expressed as:

SELECT a1.account_no, t1.date, t1.amount, a2.account_no, t2.date, t2.amount
MATCH (a1)-[t1:transfer]->(a2)-[t2:transfer]->(a1)
WHERE t1.date < t2.date

which, when executed in the PGX shell using

c360graph.queryPgql("""
SELECT a1.account_no, t1.date, t1.amount, a2.account_no, t2.date, t2.amount
MATCH (a1)-[t1:transfer]->(a2)-[t2:transfer]->(a1)
WHERE t1.date < t2.date
""").print();

gives the following result.

+---------------------------------------------------------------------------------+
| a1.account_no | t1.date    | t1.amount | a2.account_no | t2.date    | t2.amount |
+---------------------------------------------------------------------------------+
| xxx-yyy-201   | 2018-10-05 | 200       | xxx-yyy-202   | 2018-10-10 | 300       |
+---------------------------------------------------------------------------------+

The second query just adds one more transfer to the pattern and could be expressed as:

SELECT a1.account_no, t1.date, t1.amount, a2.account_no, t2.date, t2.amount, a3.account_no, t3.amount
MATCH (a1)-[t1:transfer]->(a2)-[t2:transfer]->(a3)-[t3:transfer]->(a1)
WHERE t1.date < t2.date AND t2.date < t3.date

which, when executed in the PGX shell using

c360graph.queryPgql("""
SELECT a1.account_no, t1.date, t1.amount, a2.account_no, t2.date, t2.amount, a3.account_no, t3.amount
MATCH (a1)-[t1:transfer]->(a2)-[t2:transfer]->(a3)-[t3:transfer]->(a1)
WHERE t1.date < t2.date AND t2.date < t3.date
""").print();

gives the following result.

+-------------------------------------------------------------------------------------------------------------+
| a1.account_no | t1.date    | t1.amount | a2.account_no | t2.date    | t2.amount | a3.account_no | t3.amount |
+-------------------------------------------------------------------------------------------------------------+
| xxx-yyy-201   | 2018-10-07 | 500       | xxx-yyy-203   | 2018-10-08 | 450       | xxx-yyy-204   | 400       |
+-------------------------------------------------------------------------------------------------------------+

Finding Communities (or connected components)

Let's find which subsets of accounts form communities. That is, there are more transfers among accounts in the same subset than there are between those and accounts in another subset. We'll use the built-in Kosaraju strongly connected components algorithm. The first step is to create a subgraph that only has the accounts and the transfers among them. This is done by creating and applying an edge filter (for edges with the label "transfer") to the c360graph.

acctsSubgraph = c360graph.filter(new EdgeFilter("edge.label()='transfer' "))
==> PgxGraph[name=sub-graph_13,N=6,E=8,created=1556744669838]

Then run sccKosaraju on that subgraph.

result = analyst.sccKosaraju(acctsSubgraph)
==> ComponentCollection[name=compproxy_14,graph=sub-graph_13]

Check the results; i.e. iterate through the result and print the number of vertices in each component (or community).

result.eachWithIndex {
  it, index -> println "Partition ${index} has ${it.size()} vertices"
}
Partition 0 has 1 vertices
Partition 1 has 4 vertices
Partition 2 has 1 vertices

Setting this result as vertex properties allows it to be retrieved by PGQL queries.
// store each vertex's component index as a vertex property
// (starting from acctsSubgraph; the built graph is then named sg)
cs = acctsSubgraph.createChangeSet()
rs = acctsSubgraph.queryPgql("SELECT DISTINCT a WHERE (a)-[:transfer]-()")
for (r in rs) {
  v = r.getVertex(1)
  i = result.getPartitionIndexOfVertex(v)
  cs.updateVertex(v.getId()).setProperty("component", i)
}
sg = cs.build()
sg.queryPgql("""
  SELECT a.component, COUNT(a.account_no), MAX(a.account_no)
  MATCH (a)
  GROUP BY a.component
  ORDER BY a.component
""").print()

+-------------------------------------------------------+
| a.component | COUNT(a.account_no) | MAX(a.account_no) |
+-------------------------------------------------------+
| 0           | 1                   | xxx-zzz-001       |
| 1           | 4                   | xxx-yyy-204       |
| 2           | 1                   | xxx-zzz-002       |
+-------------------------------------------------------+

If we look at the initial subgraph of accounts, we see that these correspond to the red (4 vertices), pink (1), and orange (1) vertices. Lastly, let's use Personalized PageRank to find stores that John may purchase from, given that people he is connected to have made purchases from those stores.

Recommendation

Create a subgraph with customers and merchants, i.e. filter the original graph based on the edge label "purchased".

customersAndMerchants = c360graph.filter(new EdgeFilter("edge.label()='purchased' "))

The algorithms require bi-directional edges, so we add them programmatically. The first step is to create a change set with the required updates, i.e. added reverse edges for the purchases. Then we build a new graph using that change set.

purchases_changeSet = customersAndMerchants.createChangeSet()
// get the vertices and add a reverse edge
purchases_resultSet = customersAndMerchants.queryPgql("SELECT a, x MATCH (a)-[:purchased]->(x)")
for (r in purchases_resultSet) {
  a = r.getVertex(1).getId()
  x = r.getVertex(2).getId()
  purchases_changeSet.addEdge(x, a).setLabel("purchased_by")
}
// build the graph
purchases_Graph = purchases_changeSet.build()
// query it to test that the build worked
purchases_Graph.queryPgql("SELECT x.name, label(r), a.account_no MATCH (x)-[r:purchased_by]->(a)")

Now let's compute the personalized PageRank (PPR) for John (vertex 101). A PPR is biased towards a specified set of vertices, so first we create and populate that vertex set, and then we compute the PPR.

ppr_vertexSet = purchases_Graph.createVertexSet();
ppr_vertexSet.addAll("101");
ppr = analyst.personalizedPagerank(purchases_Graph, ppr_vertexSet);

Now query the graph to get the results (i.e. the computed PPR values, which are stored in a new property named "pagerank"). The query "find merchants that John has not purchased from and list them ordered by their PPR" is expressed as follows.

purchases_Graph.queryPgql("""
SELECT ID(x), x.name, x.pagerank
MATCH (x)
WHERE x.type = 'merchant'
AND NOT EXISTS (
  SELECT *
  MATCH (x)-[:purchased_by]->(a)
  WHERE ID(a)='101')
ORDER BY x.pagerank DESC
""").print()

which gives the results shown below.

+--------------------------------------------+
| ID(x) | x.name       | x.pagerank          |
+--------------------------------------------+
| 303   | Kindle Store | 0.04932640133302745 |
| 304   | Asia Books   | 0.04932640133302745 |
| 305   | ABC Travel   | 0.01565535511504672 |
+--------------------------------------------+
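The introduction also listed influencer identification (finding the strongest links). As a hedged sketch in the same PGX shell style, plain PageRank on the accounts subgraph ranks the most connected accounts; this assumes, as with the PPR example above, that the algorithm stores its result in a vertex property named "pagerank".

// rank accounts by (non-personalized) PageRank as a simple influence measure
rank = analyst.pagerank(acctsSubgraph)
acctsSubgraph.queryPgql("""
  SELECT a.account_no, a.pagerank
  MATCH (a)
  ORDER BY a.pagerank DESC
""").print()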


Graph Features

RDF Graph – Standards and Open Source Support

In the RDF graph area, Oracle supports leading open source technologies and actively participates in the W3C standards body to meet our customers' business-critical application needs. Here is a list of some of the technologies and standards organizations we support.

W3C — Semantic Web and the Semantic Web Activity: RDF, RDF Schema, RIF, OWL, SPARQL, SKOS, URI, RDF/XML
Jena — Java framework for building Semantic Web applications. It provides a programmatic environment for RDF, RDFS, OWL, and SPARQL, and includes a rule-based inference engine.
Fuseki — an HTTP engine that supports the SPARQL Protocol and the SPARQL RDF query language.
ARQ — a query engine for Jena that supports the SPARQL RDF query language.
TDB — a Jena component that provides large-scale storage and query of RDF datasets using a pure Java engine and SPARQL.
SDB — a Jena component that provides scalable storage and query with SPARQL of RDF datasets, using conventional SQL databases, for use in standalone applications, J2EE, and other application frameworks.
Sesame — an open source framework for storage, inferencing, and querying of RDF data.
Pellet — an OWL 2 reasoner for Oracle Database 11g R2, providing scalable and correct OWL reasoning.
D2RQ — a declarative language to describe mappings between relational database schemata and OWL/RDFS ontologies; it enables applications to access an RDF view of a non-RDF database through the Jena and Sesame APIs, as well as over the Web via the SPARQL Protocol and as Linked Data.
Jetty — provides an HTTP server, HTTP client, and javax.servlet container.
Cytoscape — an open source visualization tool for graph data.
GATE — an entity extraction and text analysis engine.
Protégé — an open source ontology editor.


Spatial and Graph

Oracle’s Spatial Map Visualization Component: Getting Started and Resources

Here are resources to help you get started using the map visualization component of Oracle Spatial and Graph.

Overview

The Oracle Spatial and Graph map visualization component is a programmable tool for rendering maps using spatial data managed by Oracle Spatial and Graph. The map visualization component provides tools that hide the complexity of spatial data queries and cartographic rendering, while providing customizable options for more advanced users. These tools can be deployed in a platform-independent manner and are designed to integrate with map-rendering applications.

Location of kits, documentation, and other collateral

The spatial visualization EAR file ships with the database software media. It can be found in the ORACLE_HOME/rdbms/md/jlib directory. The EAR file can be deployed to a supported JEE container. The following blog post explains how to deploy and use it in the Oracle Public Cloud: https://blogs.oracle.com/oraclespatial/deploy-spatial-and-graph-map-visualization-component-to-oracle-cloud

Documentation is part of the database documentation library; see the 18c version of the Spatial and Graph Map Visualization Developer's Guide. The component for Oracle Database 12.2 and 18c is compatible with the Fusion Middleware MapViewer release 12.2.1.3, so the quickstart kit, the map authoring tool (aka MapBuilder), and the demos/tutorial (i.e. the mvdemo war file and dataset), which are available at its download page, can be used as an alternate way to get started with this component.


Graph Features

Executing Federated SPARQL Queries with Oracle Spatial and Graph – RDF Knowledge Graph

Linked (Open) Data is all about publishing data in a standard way so that it can be integrated and linked with other data, which leads to much more useful data. The W3C has defined several standards as part of its Data Activity to enable linked data, and thanks to adoption of W3C standards, there are currently over 1,200 datasets listed in the Linked Open Data Cloud. Much of this data can be accessed by standard SPARQL endpoints. The folks at https://lod-cloud.net/ have created an image to nicely illustrate the LOD Cloud.

The SPARQL 1.1 standard defines a SERVICE keyword for executing federated SPARQL queries that combine data from a local database and one or more remote SPARQL endpoints. Oracle Spatial and Graph has supported federated SPARQL queries since version 12.1.0.2 (see the user guide for more information). In this blog post, we show how to configure and execute federated queries against public SPARQL endpoints provided by the Italian National Institute of Statistics and the Japanese Statistics Bureau.

We will be using a 12c Release 2 database, and our examples assume that a semantic network and a user named RDFUSER have been created. We built this environment using the Oracle Database Developer Day VM available here. See a previous post for instructions on how to create a user and semantic network using SQL Developer.

First, create an RDF model owned by RDFUSER with the following SQL commands:

create table atab(tri sdo_rdf_triple_s) compress;
grant insert on atab to mdsys;
exec sem_apis.create_sem_model('M1','ATAB','TRI');

RDFUSER needs some additional privileges to run federated queries in the database. We need to grant execute privileges on MDSYS.SPARQL_SERVICE and use DBMS_NETWORK_ACL_ADMIN to give RDFUSER privileges to connect to external websites. Run the following SQL commands as a privileged user.

grant execute on mdsys.sparql_service to rdfuser;

begin
  dbms_network_acl_admin.create_acl (
    acl         => 'rdfuser.xml',
    description => 'Allow rdfuser to query SPARQL endpoints',
    principal   => 'RDFUSER',
    is_grant    => true,
    privilege   => 'connect'
  );
  dbms_network_acl_admin.assign_acl (
    acl  => 'rdfuser.xml',
    host => '*'
  );
end;
/

You can use UTL_HTTP.SET_PROXY for any proxy settings. Now that RDFUSER has appropriate privileges, we can try a simple SEM_MATCH query that uses the SERVICE keyword to execute a portion of the query remotely on the Italian National Institute of Statistics' SPARQL endpoint located at http://datiopen.istat.it/sparql/oracle. More specifically, the portion of the query inside SERVICE <http://datiopen.istat.it/sparql/oracle> { } is sent to the remote endpoint.

select s$rdfterm, p$rdfterm, o$rdfterm
from table(sem_match(
'SELECT *
 WHERE {
   SERVICE <http://datiopen.istat.it/sparql/oracle> {
     SELECT * WHERE { ?s ?p ?o } LIMIT 3
   }
 }'
,sem_models('M1')
,null,null,null,null
,' PLUS_RDFT=VC '));

That's it! Thanks to adoption of W3C SPARQL protocol standards, we just sent a SPARQL query pattern to a remote endpoint in Italy, where the remote endpoint evaluated the query and sent the result back to our local query processor. We will use the Japanese Statistics Bureau's SPARQL endpoint located at https://data.e-stat.go.jp/lod/sparql/alldata/query for our next example.
select s$rdfterm, p$rdfterm, o$rdfterm
from table(sem_match(
'SELECT *
 WHERE {
   SERVICE <https://data.e-stat.go.jp/lod/sparql/alldata/query> {
     SELECT * WHERE { ?s ?p ?o } LIMIT 3
   }
 }'
,sem_models('M1')
,null,null,null,null
,' PLUS_RDFT=VC '));

Note that this endpoint uses HTTPS instead of HTTP, so we will need to use some certificates to properly connect to the endpoint. Without certificates, we get an HTTP request failed error. To properly connect to this HTTPS endpoint, we need to use the orapki utility to create a wallet with the certificates for this site.

The first step is to download the certificates from the site. Open a web browser to https://data.e-stat.go.jp/lod/sparql/alldata/query (we are using Firefox) and click on the green lock icon to the left of the address bar. Then click the right arrow next to data.e-stat.go.jp in the popup. Click More Information at the bottom of the next screen. Then click the View Certificate button on the right. Next, click the Details tab. Now we need to export the two non-leaf certificates (DigiCert Global Root CA and GeoTrust RSA CA 2018). Highlight the certificate name and click Export. Save the certificate to the filesystem as a .crt file. Repeat this process to save the other non-leaf certificate.

Next, we will use the orapki utility to create a wallet with these certificates. Open a UNIX command prompt in the directory where you saved the certificates and execute the following commands. Note that you will need to enter a password for your wallet during the wallet create command.

mkdir wallet
$ORACLE_HOME/bin/orapki wallet create -wallet <path_to_your_wallet_directory>
$ORACLE_HOME/bin/orapki wallet add -wallet <path_to_your_wallet_directory> -trusted_cert -cert "<path_to_your_certificate>/DigiCertGlobalRootCA.crt" -pwd <passwd>
$ORACLE_HOME/bin/orapki wallet add -wallet <path_to_your_wallet_directory> -trusted_cert -cert "<path_to_your_certificate>/GeoTrustRSACA2018.crt" -pwd <passwd>
$ORACLE_HOME/bin/orapki wallet display -wallet <path_to_your_wallet_directory> -pwd <passwd>

Now that the wallet is created, we need to load it into our database session with UTL_HTTP.SET_WALLET. Execute the following command in your SQL*Plus session.

EXEC UTL_HTTP.set_wallet('file:<path_to_your_wallet_directory>', '<passwd_for_wallet>');

Now we can execute a federated query against the Japanese Statistics Bureau's HTTPS SPARQL endpoint from this session. So far, we have only done simple queries against these remote endpoints where our whole SPARQL query is just a SERVICE clause. Let's now do a query where we join some local data with a remote endpoint. To illustrate this, we'll insert a triple locally using one of the subject URIs from the data.e-stat.go.jp endpoint.

begin
  sem_apis.update_model('M1',
  'INSERT DATA {
     <urn:a> <urn:p> <http://data.e-stat.go.jp/lod/dataset/gridCode/dm012015405/obsTD7WVMQBYGBXALAVG4CWDNNO675M64S4> .
   }');
end;
/

Now, we'll execute the query below that does a simple object-subject join with the remote endpoint.

select s$rdfterm, p$rdfterm, o$rdfterm
from table(sem_match(
'SELECT *
 WHERE {
   <urn:a> <urn:p> ?s
   SERVICE <https://data.e-stat.go.jp/lod/sparql/alldata/query> {
     SELECT * WHERE { ?s ?p ?o }
   }
 } LIMIT 3'
,sem_models('M1')
,null,null,null,null
,' PLUS_RDFT=VC SERVICE_JPDWN=T '));

Note that we are using a very important option in our SEM_MATCH query: SERVICE_JPDWN=T. This option tells the query engine to push the join down into the SERVICE clause.
The SPARQL standard specifies a bottom-up query execution, which means that the fully unbound { ?s ?p ?o } pattern should execute on the remote endpoint first and return all of its intermediate results back to our query processor, which would then join them with the <urn:a> <urn:p> ?s triple pattern in the top level of the query. The data.e-stat.go.jp endpoint contains over 1 billion triples, so this bottom-up evaluation will obviously be problematic: we don't want to send 1 billion triples back to our local database over HTTP. With join push down, we follow a top-down evaluation and evaluate the <urn:a> <urn:p> ?s triple pattern first. Then we execute one or more remote queries in a loop, but we add constraints on ?s to the remote query ( { ?s ?p ?o FILTER (?s = <http://data.e-stat.go.jp/lod/dataset/gridCode/dm012015405/obsTD7WVMQBYGBXALAVG4CWDNNO675M64S4>) } in this case). In general, the SERVICE_JPDWN=T option should be used any time you have a selective local query pattern and an unselective remote query pattern.

Alright, this is great if we are running SQL SEM_MATCH queries, but what if we want to run these queries as pure SPARQL queries from a Fuseki endpoint connected to our database? One simple way to do this is to create a database logon trigger for the database user in our Fuseki connection. In this case, we are using RDFUSER for the Fuseki connection, so a privileged database user can create the following trigger.

create or replace trigger rdfuser_fedq
after logon on database
declare
begin
  if (user = 'RDFUSER') then
    -- set proxy
    UTL_HTTP.SET_PROXY('www-proxy.com:80');
    -- set wallet
    UTL_HTTP.set_wallet('file:<path_to_wallet_directory>', '<wallet_passwd>');
  end if;
  exception when others then null;
end;
/

Now our database session for Fuseki will be properly configured to connect to the data.e-stat.go.jp endpoint, and we can run the same query through Fuseki. Note how we used the special ORACLE_SEM_HT_NS prefix to pass the SERVICE_JPDWN=T hint along to the database server. See a previous blog post for more information about how to set up Fuseki. That's it. Now have fun running queries against all the SPARQL endpoints out there in the Linked Open Data Cloud!


Graph Features

Accessing PGX REST Endpoint using C++

This post is based on a recent collaboration with Yogesh Purohit (primary contributor), a passionate expert from one of the largest industrial manufacturing companies. In this collaboration, we used C++ to access a PGX REST endpoint and also compared the performance of C++/REST and Oracle's built-in Java/REST calls. Below we focus on illustrating how one can access a PGX REST endpoint using C++.

Requirements: First, we need the latest Oracle Spatial and Graph Property Graph patch, and you may also want to check this blog to see some details of the underlying REST calls.

The following code snippet uses Poco for the HTTP calls. It starts by getting a CSRF token and then uses it to establish a new PGX session.

    // 1. Get a CSRF token from the PGX server
    unsigned short oraclePort = 7007;
    std::string pgxHostName = "localhost";

    session = new Poco::Net::HTTPClientSession(pgxHostName, oraclePort);
    session->setKeepAlive(true);
    std::string oraclePath = "/token";
    std::string finalBody = "";

    // send request
    req = new Poco::Net::HTTPRequest(Poco::Net::HTTPRequest::HTTP_GET, oraclePath, Poco::Net::HTTPMessage::HTTP_1_1);
    req->setContentType("application/json; charset=UTF-8; stream=true");
    req->setContentLength(finalBody.length());
    std::ostream &os = session->sendRequest(*req);
    os << finalBody.c_str();
    os.flush();

    // the CSRF token comes back as a cookie
    std::istream &is1 = session->receiveResponse(res);
    std::vector<HTTPCookie> cookies;
    res.getCookies(cookies);
    std::string csrfToken = cookies[0].getValue();

    // 2. Create a new PGX session, passing the CSRF token both in the
    //    request body and as a cookie
    oraclePath = "/core/v1/sessions";
    req->setMethod(Poco::Net::HTTPRequest::HTTP_POST);
    req->setURI(oraclePath);
    finalBody = "{                              \
        \"source\" : \"graph-session\",        \
        \"idleTimeout\": null,                 \
        \"taskTimeout\":null,                  \
        \"timeUnitName\":null,                 \
        \"_csrf_token\" :'" + csrfToken + "'}";
    req->setContentLength(finalBody.length());
    NameValueCollection reqCookies;
    reqCookies.add("_csrf_token", csrfToken);
    req->setCookies(reqCookies);
    std::ostream &os2 = session->sendRequest(*req);
    os2 << finalBody.c_str();
    os2.flush();

    // the new session id also comes back as a cookie
    std::istream &is2 = session->receiveResponse(res);
    std::vector<HTTPCookie> sid;
    res.getCookies(sid);
    std::string sessionId = sid[0].getValue();

Thanks, Zhe


Graph Features

Finding Relevant REST APIs for Your Graph Operations

From time to time, I get requests asking for details on the REST APIs for the PGX component of Oracle Spatial and Graph (OSG) and Oracle Big Data Spatial and Graph (BDSG). In this blog post, I am going to show a simple way, demonstrated to me by my colleague Korbi Schmid, to find details on REST API calls using the most recent property graph patch. It is actually very simple: just enable the trace-level log4j setting for Groovy (or your own Java program), connect to a PGX instance, run some Java APIs, and you will find the relevant REST calls in the trace (a sample log4j setting is sketched at the end of this post). The following trace came from a flow of establishing a PGX session, loading a graph from Oracle Database, and running a PGQL query. It is annotated for better readability. As a convention, "<<" denotes an incoming response from a REST call, while ">>" denotes an outgoing REST request.

-- Assume we have a graph config as follows, and the PGX endpoint used is http://127.0.0.1:7007/

{"attributes":{},"username":"pg","error_handling":{},"db_engine":"RDBMS","vertex_id_type":"long","format":"pg","jdbc_url":"jdbc:oracle:thin:@127.0.0.1:1521:orcl122","max_num_connections":2,"loading":{"load_edge_label":false},"vertex_props":[{"type":"timestamp_with_timezone","name":"ts"},{"type":"string","default":"default_name","name":"name"}],"name":"g1","password":"*******","edge_props":[{"type":"double","default":"1000000","name":"cost"}]}

** Get a CSRF token and PGX server version info

15:35:48.650 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 >> "GET /token HTTP/1.1[\r][\n]" 15:35:48.656 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 201 [\r][\n]" 15:35:48.656 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 << "SET-COOKIE: _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc;Version=1; HttpOnly[\r][\n]" 15:35:48.656 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 << "Content-Length: 0[\r][\n]" 15:35:48.656 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 << "Date: Thu, 16 Aug 2018 22:35:48 GMT[\r][\n]" 15:35:48.656 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "GET /version?extendedInfo=true HTTP/1.1[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "Host: 127.0.0.1:7007[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "Connection: Keep-Alive[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "Cookie: _csrf_token=18bf8ab2-c1af-49ce-9a33-190ab1208647[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:35:48.684 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 >> "[\r][\n]" 15:35:48.691 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "HTTP/1.1 200 [\r][\n]" 15:35:48.691 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:35:48.691 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "Content-Length: 130[\r][\n]" 15:35:48.691 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "Date: Thu, 16 Aug 2018 22:35:48 GMT[\r][\n]" 15:35:48.691
[pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "[\r][\n]" 15:35:48.691 [pgx-client-thread-1] DEBUG org.apache.http.wire - http-outgoing-1 << "{"version":"3.1.0","commit":"cb648f2da0cdf006f10588a69bb1beca81b65190","server_type":"sm","built":"2018-08-06T22:29:43.011-07:00"}" * * Create a new PGX session. Need to pass in the CSRF token * 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "POST /core/v1/sessions HTTP/1.1[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Content-Length: 132[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Content-Type: application/json; charset=UTF-8[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Host: 127.0.0.1:7007[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Cookie: _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "[\r][\n]" 15:35:48.704 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 >> "{"source":"session1","idleTimeout":null,"taskTimeout":null,"timeUnitName":null,"_csrf_token":"a838d0ad-e165-4e79-928f-139628db11fc"}" 15:35:48.748 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 << "HTTP/1.1 201 [\r][\n]" 15:35:48.748 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 << "SET-COOKIE: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12;Version=1; HttpOnly[\r][\n]" 15:35:48.748 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 << "Content-Length: 0[\r][\n]" 15:35:48.749 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 << "Date: Thu, 16 Aug 2018 22:35:48 GMT[\r][\n]" 15:35:48.749 [pgx-client-thread-2] DEBUG org.apache.http.wire - http-outgoing-0 << "[\r][\n]" * * Issue a load graph command. Need to pass in session ID, CSRF token, and the graph config itself. 
* 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "POST /core/v1/loadGraph HTTP/1.1[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Content-Length: 503[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Content-Type: application/json; charset=UTF-8[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 15:50:12.799 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "{"graphConfig":{"username":"pg","db_engine":"RDBMS","format":"pg","jdbc_url":"jdbc:oracle:thin:@127.0.0.1:1521:orcl122","attributes":{},"error_handling":{},"max_num_connections":2,"loading":{"load_edge_label":false},"vertex_props":[{"type":"timestamp_with_timezone","name":"ts"},{"type":"string","default":"default_name","name":"name"}],"name":"g1","password":"pg","edge_props":[{"type":"double","default":"1000000","name":"cost"}]},"graphName":null,"_csrf_token":"a838d0ad-e165-4e79-928f-139628db11fc"}" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 202 [\r][\n]" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Location: http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status[\r][\n]" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 51[\r][\n]" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Thu, 16 Aug 2018 22:50:12 GMT[\r][\n]" 15:50:12.901 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 15:50:12.902 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "{"futureId":"4cb75a56-8acc-4958-bc67-eab12c67e513"}" * * Wait for the async task (future) to complete * 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "GET /core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status HTTP/1.1[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:50:12.908 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 15:50:12.930 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 200 [\r][\n]" 15:50:12.931 [pgx-client-thread-4] DEBUG 
org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:50:12.931 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 577[\r][\n]" 15:50:12.931 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Thu, 16 Aug 2018 22:50:12 GMT[\r][\n]" 15:50:12.931 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 15:50:12.931 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "{"id":"4cb75a56-8acc-4958-bc67-eab12c67e513","links":[{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513","rel":"abort","method":"DELETE","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status","rel":"canonical","method":"GET","interaction":["async-polling"]}],"progress":"processing","completed":false,"intervalToPoll":1}" * * The above has not completed, yet. Try again. * 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "GET /core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status HTTP/1.1[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:50:18.066 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 200 [\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 733[\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Thu, 16 Aug 2018 22:50:18 GMT[\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 15:50:18.070 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "{"id":"4cb75a56-8acc-4958-bc67-eab12c67e513","links":[{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513","rel":"abort","method":"DELETE","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/status","rel":"canonical","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/value","rel":"related","method":"GET","interaction":["async-polling"]}],"progress":"succeeded","completed":true,"intervalToPoll":1000}" * * Graph is now read into memory. 
Get some meta data of the graph * 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "GET /core/v1/futures/4cb75a56-8acc-4958-bc67-eab12c67e513/value HTTP/1.1[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:50:19.074 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 201 [\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Location: http://127.0.0.1:7007/core/v1/graphs/g1[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 2175[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Thu, 16 Aug 2018 22:50:19 GMT[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 15:50:19.108 [pgx-client-thread-4] DEBUG org.apache.http.wire - http-outgoing-2 << "{"id":"g1","links":[{"href":"http://127.0.0.1:7007/core/v1/graphs/g1","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/graphs/g1","rel":"canonical","method":"GET","interaction":["async-polling"]}],"nodeProperties":{"name":{"id":"name","links":[{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/name","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/name","rel":"canonical","method":"GET","interaction":["async-polling"]}],"dimension":0,"name":"name","entityType":"vertex","type":"string","transi ent":false},"ts":{"id":"ts","links":[{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/ts","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/ts","rel":"canonical","method":"GET","interaction":["async-polling"]}],"dimension":0,"name":"ts","entityType":"vertex","type":"timestamp_with_timezone","transient":false}},"metaData":{"id":null,"links":null,"numVertices":482372,"numEdges":241186,"memoryMb":34,"dataSourceVersion":"21321357","config":{"vertex_props":[{"name":"ts","type":"timestamp_with_timezone"},{"name":"name","default":"default_name","type":"string"}],"error_handling":{},"edge_props":[{"name":"cost","default":"1000000","type":"double"}],"password":"pg","format":"pg","attributes":{}, 
"name":"g1","max_num_connections":2,"jdbc_url":"jdbc:oracle:thin:@127.0.0.1:1521:orcl122","db_engine":"RDBMS","loading":{"load_edge_label":false},"username":"pg"},"creationRequestTimestamp":1534459812913,"creationTimestamp":1534459816597,"vertexIdType":"long","edgeIdType":"long","directed":true},"vertexLabels":null,"edgeLabel":null,"graphName":"g1","edgeProperties":{"cost":{"id":"cost","links":[{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/cost","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/graphs/g1/properties/cost","rel":"canonical","method":"GET","interaction":["async-polling"]}],"dimension":0,"name":"cost","entityType":"edge", "type":"double","transient":false}},"ageMs":0,"transient":false}" * * Get ready to run PGQL. Need to pass in SID, CSRF token, and PGQL body + graph name etc. * Here I am running a very simple PGQL: SELECT n.ts WHERE (n), n.name = 'unique' 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "POST /core/v1/pgql/run HTTP/1.1[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Content-Length: 180[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Content-Type: application/json; charset=UTF-8[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:54.766 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "{"pgqlQuery":"SELECT n.ts WHERE (n), n.name = 'unique'","semantic":"HOMOMORPHISM","graphName":"g1","schemaStrictnessMode":true,"_csrf_token":"a838d0ad-e165-4e79-928f-139628db11fc"}" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 202 [\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Location: http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status[\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 51[\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:54 GMT[\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:54.784 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "{"futureId":"1e6c3163-d549-4357-8924-f539001a1640"}" * * Wait for the above async PGQL call to complete. Not yet. 
* 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "GET /core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status HTTP/1.1[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:54.789 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 200 [\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 577[\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:54 GMT[\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:54.793 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "{"id":"1e6c3163-d549-4357-8924-f539001a1640","links":[{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640","rel":"abort","method":"DELETE","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status","rel":"canonical","method":"GET","interaction":["async-polling"]}],"progress":"processing","completed":false,"intervalToPoll":1}" * * Ping again to see if PGQL completes * 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "GET /core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status HTTP/1.1[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:55.355 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 200 [\r][\n]" 15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 732[\r][\n]" 
15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:54 GMT[\r][\n]" 15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:55.359 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "{"id":"1e6c3163-d549-4357-8924-f539001a1640","links":[{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640","rel":"abort","method":"DELETE","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/status","rel":"canonical","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/value","rel":"related","method":"GET","interaction":["async-polling"]}],"progress":"succeeded","completed":true,"intervalToPoll":512}" * * Fetch PGQL results. * 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "GET /core/v1/futures/1e6c3163-d549-4357-8924-f539001a1640/value HTTP/1.1[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:55.873 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 201 [\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Location: http://127.0.0.1:7007/core/v1/pgql/run[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 587[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:55 GMT[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:55.878 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "{"id":"pgql_1","links":[{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1","rel":"self","method":"GET","interaction":["sync"]},{"href":"http://127.0.0.1:7007/core/v1/pgqlResultProxies/pgql_1/elements","rel":"related","method":"GET","interaction":["sync"]},{"href":"http://127.0.0.1:7007/core/v1/pgqlResultProxies/pgql_1/results","rel":"related","method":"GET","interaction":["sync"]},{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1","rel":"canonical","method":"GET","interaction":["async-polling"]}],"exists":true,"graphName":"g1","resultSetId":"pgql_1","numResults":1}" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "GET /core/v1/pgqlProxies/pgql_1/elements 
HTTP/1.1[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:55.890 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 200 [\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 488[\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:55 GMT[\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:55.900 [pgx-client-thread-5] DEBUG org.apache.http.wire - http-outgoing-3 << "{"id":"/core/v1/pgqlProxies/pgql_1/elements","links":[{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1/elements","rel":"self","method":"GET","interaction":["sync"]},{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1/elements","rel":"canonical","method":"GET","interaction":["async-polling"]}],"count":1,"totalItems":1,"items":[{"elementType":"TIMESTAMP_WITH_TIMEZONE","varName":"n.ts","vertexEdgeIdType":null}],"hasMore":false,"offset":0,"limit":1,"showTotalResults":true}" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "GET /core/v1/pgqlProxies/pgql_1/results?start=0&size=2048 HTTP/1.1[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Host: 127.0.0.1:7007[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Connection: Keep-Alive[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Cookie: SID=e2b2ead4-10b6-41e0-a7f4-4e960859da12; _csrf_token=a838d0ad-e165-4e79-928f-139628db11fc[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "Accept-Encoding: gzip,deflate[\r][\n]" 15:57:55.924 [main] DEBUG org.apache.http.wire - http-outgoing-3 >> "[\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "HTTP/1.1 200 [\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Type: application/json;charset=utf-8[\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Content-Length: 482[\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "Date: Thu, 16 Aug 2018 22:57:55 GMT[\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << "[\r][\n]" 15:57:55.931 [main] DEBUG org.apache.http.wire - http-outgoing-3 << 
"{"id":"/core/v1/pgqlProxies/pgql_1/results","links":[{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1/results","rel":"self","method":"GET","interaction":["sync"]},{"href":"http://127.0.0.1:7007/core/v1/pgqlProxies/pgql_1/results","rel":"canonical","method":"GET","interaction":["async-polling"]}],"count":1,"totalItems":1,"items":[[{"TIMESTAMP_PART_OF_TS_WITH_TZ":1534430004000,"TZ_PART_OF_TS_WITH_TZ":-25200}]],"hasMore":false,"offset":0,"limit":1,"showTotalResults":true}" opg-oracledb> pgxGraph.close()   // JAVA Call // Corresponding REST APIs to close the in-memory graph 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "DELETE /core/v1/graphs/g2?_csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181&ignoreNotFound=false&retention=DESTROY_IF_NOT_USED HTTP/1.1[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=f83eb140-fef0-4a46-bef0-031a09f77f9e; _csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 17:20:07.314 [main] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 202 [\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Location: http://127.0.0.1:7007/core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/status[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 51[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Wed, 22 Aug 2018 00:20:07 GMT[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 17:20:07.320 [main] DEBUG org.apache.http.wire - http-outgoing-2 << "{"futureId":"f93147d3-9556-4245-b60a-a51b0fea2488"}" 17:20:07.321 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "GET /core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/status HTTP/1.1[\r][\n]" 17:20:07.321 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 17:20:07.321 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 17:20:07.321 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 17:20:07.321 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=f83eb140-fef0-4a46-bef0-031a09f77f9e; _csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181[\r][\n]" 17:20:07.322 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 17:20:07.322 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 200 [\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - 
http-outgoing-2 << "Content-Length: 730[\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Wed, 22 Aug 2018 00:20:07 GMT[\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 17:20:07.324 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "{"id":"f93147d3-9556-4245-b60a-a51b0fea2488","links":[{"href":"http://127.0.0.1:7007/core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/status","rel":"self","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488","rel":"abort","method":"DELETE","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/status","rel":"canonical","method":"GET","interaction":["async-polling"]},{"href":"http://127.0.0.1:7007/core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/value","rel":"related","method":"GET","interaction":["async-polling"]}],"progress":"succeeded","completed":true,"intervalToPoll":1}" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "GET /core/v1/futures/f93147d3-9556-4245-b60a-a51b0fea2488/value HTTP/1.1[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=f83eb140-fef0-4a46-bef0-031a09f77f9e; _csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 17:20:07.327 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 200 [\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 2[\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Wed, 22 Aug 2018 00:20:07 GMT[\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 17:20:07.328 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "{}" opg-oracledb> pgxSession.destroy()   // Java CALL  // Corresponding REST calls to destroy the current PGX session 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "DELETE /core/v1/session?_csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181 HTTP/1.1[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Host: 127.0.0.1:7007[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Connection: Keep-Alive[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "User-Agent: Apache-HttpClient/4.5.4 (Java/1.8.0_144)[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Cookie: SID=f83eb140-fef0-4a46-bef0-031a09f77f9e; 
_csrf_token=86f245d0-c4bf-45ca-aa7c-3b3b8b3ec181[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "Accept-Encoding: gzip,deflate[\r][\n]" 17:20:38.587 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 >> "[\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "HTTP/1.1 200 [\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Type: application/json;charset=utf-8[\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Content-Length: 2[\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "Date: Wed, 22 Aug 2018 00:20:38 GMT[\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "[\r][\n]" 17:20:38.592 [pgx-client-thread-3] DEBUG org.apache.http.wire - http-outgoing-2 << "{}"   Cheers, Zhe
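As referenced at the top of this post, the wire-level trace above is produced by Apache HttpClient's logging. A minimal log4j.properties sketch that turns it on (the appender details here are illustrative; the essential part is the last line):

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss.SSS} [%t] %-5p %c - %m%n
log4j.logger.org.apache.http.wire=DEBUG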


Graph Features

Better support for Timestamp with Time Zone in the latest Oracle Spatial and Graph Property Graph Patch

Good news: we just released a patch for the Oracle Spatial and Graph Property Graph feature. It contains, among many other things, PGX 3.1 and the latest Data Access Layer. One important fix is much better support for timestamps with time zone. Here is a quick example that illustrates the improvement.

cfg = GraphConfigBuilder.forPropertyGraphRdbms().setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl122")
  .setUsername("pg").setPassword("<YOUR_PASSWORD>")
  .setName("test_graph")
  .setMaxNumConnections(2)
  .setLoadEdgeLabel(false)
  .addVertexProperty("ts", PropertyType.TIMESTAMP_WITH_TIMEZONE, null)
  .addVertexProperty("name", PropertyType.STRING, "default_name")
  .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000")
  .build();
opg = OraclePropertyGraph.getInstance(cfg);
v = opg.addVertex(10000l);
v.setProperty("ts", new Date(1000l));
opg.commit()
v = opg.getVertex(10000l);
==>Vertex ID 10000 [vertex] {ts:dat:1969-12-31 16:00:01.0}

Note that the getVertex call above returns a property of type java.util.Date, of which java.sql.Timestamp is a subclass. Now, let's read this graph into a remote PGX endpoint.

pgxSession = Pgx.getInstance("http://127.0.0.1:7007").createSession("session1");
analyst = pgxSession.createAnalyst();
pgxGraph = pgxSession.readGraphWithProperties(opg.getConfig(), true);

With some older versions of OSG PG, you are likely to hit the following exception:

"java.lang.UnsupportedOperationException: loading type time_with_timezone through DAL not yet supported"

With this new patch, the above goes through without a problem. To sanity check, run a simple PGQL query and print the type of the timestamp property ts.

pgxResultSet = pgxGraph.queryPgql("SELECT n.ts MATCH (n)")
elem = pgxResultSet.getResults()
e1 = elem.iterator().next()
twz = e1.getTimestampWithTimezone(0)
==>1969-12-31T16:00:01Z
twz.getClass().getName()
==>java.time.OffsetDateTime

Cheers, Zhe

References: Oracle Spatial and Graph Property Graph Patch (28577866): https://support.oracle.com/epmos/faces/PatchDetail?patchId=28577866
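One extra note on the types involved (plain Java 8, nothing specific to this patch): if downstream code still expects java.util.Date, the java.time.OffsetDateTime returned by getTimestampWithTimezone can be converted back, for example:

d = Date.from(twz.toInstant())   // java.time.OffsetDateTime -> java.util.Date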


Graph Features

Using RDF Knowledge Graphs in the Oracle Public Cloud (Part 3)

This is the third and final installment of the series "Using RDF Knowledge Graphs in the Oracle Public Cloud." In this blog post, we will complete the setup of our RDF triplestore in the Oracle Public Cloud by configuring a W3C-standard SPARQL endpoint. Click the links for previous posts in this series: part 1, part 2.

The W3C defines several standard REST APIs for querying and updating RDF data. You can read more about the standards here. Oracle RDF Semantic Graph leverages Apache Jena Fuseki to provide an implementation of those interfaces. Oracle Support for Apache Jena provides a tight integration between Apache Jena and Oracle RDF Semantic Graph through Oracle-specific implementations of Apache Jena interfaces. This blog post will show how to set up and run Apache Jena Fuseki on our DBCS instance.

Fuseki can run as a standalone server or as a Java web application. In this case, we will run Fuseki as a standalone server on our DBCS instance. You could also set up an Oracle Java Cloud Service instance and deploy the Fuseki Java web application into the included WebLogic Server instance.

The first step is to download the latest Oracle Support for Apache Jena from OTN. Open a web browser to our OTN downloads page. Choose Download Oracle Database 12c Release 12.1.0.2 Support for Apache Jena 3.1, Apache Jena Fuseki 2.4, and Protégé Desktop 5.0. Note that this download works with Oracle Database versions 12.1.0.2 and later, so it is compatible with our 18.1 DBCS instance. After the download completes, transfer the downloaded Oracle Support for Apache Jena file to your DBCS instance. In this example, we copied the file to /home/oracle. Further instructions on how to copy files to/from a DBCS instance can be found in the DBCS user guide or in the detailed HOW-TO available at the end of this post.

Open an SSH connection to your DBCS instance as the oracle user. See the DBCS user guide for more information on how to make an SSH connection to a DBCS instance. Create a directory named Jena in /home/oracle. Then move rdf_semantic_graph_support_for_12c_and_jena310_protege_5.0_2017_01_19.zip to the newly created Jena directory and unzip the file. After the unzip command has finished, you will see several directories and a README file.

We will now configure Fuseki to access the LGD_SPORT semantic model that we created earlier. Change directory to /fuseki and edit the config-oracle.ttl file. Change the following default <#oracle> dataset specification from

<#oracle> rdf:type oracle:Dataset;
    oracle:connection
    [ a oracle:OracleConnection ;
      oracle:jdbcURL "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl)))";
      oracle:User "rdfuser" ;
      oracle:Password "rdfuser"
    ];
    oracle:allGraphs [ oracle:firstModel "TEST_MODEL" ] .

to

<#oracle> rdf:type oracle:Dataset;
    oracle:connection
    [ a oracle:OracleConnection ;
      oracle:jdbcURL "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=PDB1.uspm020.oraclecloud.internal)))";
      oracle:User "rdfuser" ;
      oracle:Password "rdfuser"
    ];
    oracle:allGraphs [ oracle:firstModel "LGD_SPORT" ] .

Note that SERVICE_NAME will be different depending on the settings for your particular DBCS instance.

Next, we will change the default shiro.ini configuration to allow non-localhost connections. First, we need to start Fuseki once to create a /run directory. Simply execute the following command in the current /fuseki directory.

./fuseki-server

Once you see the message that Fuseki has started on port 3030, kill the process with Ctrl-C. Now the /run directory should have been created. Change directory to /run and edit shiro.ini. Replace

/$/** = localhostFilter

with

/$/server = anon
$/** = localhostFilter

Change directory back to /fuseki and start the Fuseki service by running the following command:

nohup ./fuseki-server --config config-oracle.ttl > fuseki_out.log &

Note that we are using nohup to prevent the Fuseki process from terminating if our connection is closed. That's it. A Fuseki SPARQL endpoint is now up and running on our DBCS instance.

Now that the Fuseki server is up and running on port 3030 of our DBCS instance, there are two options for connecting:

1. Create an SSH tunnel to our DBCS instance for port 3030.
2. Create an access rule to open up port 3030 of our DBCS instance to the public internet.

In this blog post, we will illustrate the first method using an SSH tunnel. See the detailed HOW-TO at the end of the post for instructions on the second option. Using an SSH tunnel allows us to securely access port 3030 on our DBCS instance without opening port 3030 to the public internet. First, use PuTTY or a similar tool to create an SSH tunnel that forwards port 3030 on your client computer to port 3030 on your DBCS instance. We are using PuTTY in this example, as shown below. Refer to the DBCS user guide for detailed instructions.

Click Open to open the SSH tunnel and then open a web browser to http://localhost:3030. Click query to open the SPARQL query interface. Click info to see all the available REST endpoints. Now we have used an SSH tunnel to connect to the SPARQL endpoint running on our DBCS instance.

We can also use curl to directly test the SPARQL REST interface over the SSH tunnel. In this example, we are using a Cygwin terminal on a Windows client computer. The following curl command will send the SPARQL query in the file test_query.rq (a sample is sketched at the end of this post) to the Fuseki endpoint running on our DBCS instance and print the result to stdout.

curl -X POST --data-binary "@test_query.rq" -H "Content-Type: application/sparql-query" -H "Accept: application/sparql-results+json" "http://localhost:3030/sparql"

And we're done! We have successfully accessed a W3C-standard SPARQL REST endpoint running on our DBCS instance. This concludes the final post in our "Using RDF Knowledge Graphs in the Oracle Public Cloud" series. In the first post, we set up and configured Oracle Spatial and Graph – RDF Semantic Graph on an 18.1 DBCS instance. In the second post, we loaded some RDF data and used SQL Developer's SPARQL query editor to run some example queries, and, in this final post, we set up a W3C-standard SPARQL endpoint on our DBCS instance to provide a REST interface for our triplestore. Detailed steps for everything covered in this blog series are available in this HOW-TO.
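As referenced above, test_query.rq can contain any valid SPARQL query. A minimal sanity-check sketch (the contents are assumed for illustration, not taken from the original HOW-TO):

SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 10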


Graph Features

Using RDF Knowledge Graphs in the Oracle Public Cloud (Part 2)

This is the second installment of the series "Using RDF Knowledge Graphs in the Oracle Public Cloud." In this blog post, we will use SQL Developer to load some publicly available RDF data into our DBCS instance and execute a few SPARQL queries with SQL Developer's SPARQL query editor. Click here for part 1 of this series.

The data we will use comes from the Linked Geo Data project: http://linkedgeodata.org/About The project provides several downloadable datasets, which are indexed here: https://hobbitdata.informatik.uni-leipzig.de/LinkedGeoData/downloads.linkedgeodata.org/releases/ We will be using a 1.2 million triple dataset of sports facilities: https://hobbitdata.informatik.uni-leipzig.de/LinkedGeoData/downloads.linkedgeodata.org/releases/2015-11-02/2015-11-02-SportThing.node.sorted.nt.bz2

Download SportThing.node.sorted.nt.bz2 to your client computer and use a program such as WinSCP to copy the file to your DBCS instance. Refer to the DBCS user guide for more information on how to copy files to/from your DBCS instance. Detailed instructions for WinSCP can be found in the HOW-TO document available at the end of this blog post. In this example, we have copied the file to /home/oracle on the DBCS instance.

We will use a normal database user with connect, resource, and appropriate tablespace privileges to perform the load operation. Recall that we used the SYSTEM user to create and configure our semantic network in the previous installment. Open a connection for your desired database user in SQL Developer. We are using RDFUSER in this example. Expand the RDFUSER connection and expand the RDF Semantic Graph component.

The first step is to create a semantic model to hold our RDF dataset. Right-click on Models and choose New Model. Enter a Model Name and choose to create a new Application table with a TRIPLE column. For Model tablespace, choose the tablespace you used when creating the semantic network. Click Apply. We have now created a model to hold our RDF data. If you expand Models and Regular Models under RDF Semantic Graph, you should see the LGD_SPORT model we created.

Now we will bulk load the downloaded RDF file. The bulk load process consists of two major steps:

1. Loading the file from the file system into a simple staging table in the database.
2. Loading data from the staging table into our semantic model.

The first step involves loading from an external table, so we need to use the SYSTEM connection to create a DIRECTORY in the database and grant privileges on it to RDFUSER. Expand the SYSTEM connection and right-click on Directories, then choose Create Directory. Enter a Directory Name and the full path for the directory on the database server. Click Apply. Expand Directories and click the directory name to see details. Now we need to grant privileges on this directory to RDFUSER. Click Actions and select Grant. Grant READ and WRITE privileges to RDFUSER. Click Apply. RDFUSER now has access to the /home/oracle directory on our DBCS instance.

Before loading into the staging table, we need to do a few operations from the Unix command line. The downloaded RDF file is compressed, so we will use a Unix named pipe to stream the uncompressed data rather than storing an uncompressed version of the file. Use PuTTY to make an SSH connection to the remote DBCS instance as the oracle user (see the DBCS user guide for more information on how to make an SSH connection). Then execute the following commands to create a named pipe that streams the uncompressed data.

mkfifo named_pipe.nt
bzcat 2015-11-02-SportThing.node.sorted.nt.bz2 > named_pipe.nt &

Now, expand the RDFUSER connection in SQL Developer and expand the RDF Semantic Graph component. Then right-click on Models and select "Load RDF data into staging table (External Table)." Choose a name for the external table to create (we are using LGD_EXT_TAB) and fill in the other fields on the Source External Table tab. Enter the names of the files to load (named_pipe.nt in this example) on the Input Files tab. Finally, use the Staging Table tab to enter a name for the staging table that will be created (we are using LGD_STG_TAB) and choose the appropriate format. Now click Apply to load the data into LGD_STG_TAB. Check the contents of LGD_STG_TAB.

Next, we will load the data from LGD_STG_TAB into the LGD_SPORT semantic model. To bring up the bulk load interface, expand RDF Semantic Graph under the RDFUSER connection. Then right-click on Models and select "Bulk Load into model from staging table". Enter LGD_SPORT for Model and unselect the Create model option, since we have already created this semantic model. Also, choose LGD_STG_TAB for Staging table name. Be careful not to select the external table (LGD_EXT_TAB), as it will also be listed. Consult the user guide for more information on the other options for bulk load. Click Apply, and the load will finish in a minute or so.

Now that we have completed the bulk load, it is a good idea to gather statistics on the whole RDF network. Only a privileged user can gather statistics on the whole RDF network, so we need to use the SYSTEM user or some other DBA user. Expand the RDF Semantic Graph component under the SYSTEM connection, right-click on RDF Semantic Graph and select Gather Statistics. Enter the desired Degree of parallelism and click Apply.

After gathering statistics, the data is ready to query. Now we will use SQL Developer's SPARQL query editor to query our dataset. Go back to the RDFUSER connection and expand Models and Regular Models under RDF Semantic Graph. Clicking LGD_SPORT will open the SPARQL query editor for this semantic model. You can edit and execute SPARQL queries here. In addition, several pre-created templates are available. Click Templates -> Demographics -> COUNT ALL to count all triples in the LGD_SPORT model. Click Yes when the warning comes up. Click the green triangle to run the query.

You can also directly edit SPARQL queries. One example is a SPARQL query to get the top 10 properties and their triple counts (a sketch of such a query appears at the end of this post). In addition to SPARQL SELECT queries, CONSTRUCT and DESCRIBE queries are also supported; a DESCRIBE query can be used, for example, to describe a particular resource in the LGD_SPORT model. Note that any namespace PREFIXes used in the query will also be used to simplify values in the query result, so it is often convenient to add several more PREFIXes.

That's it. We have successfully bulk loaded a publicly available RDF dataset into our DBCS instance and executed some queries with SQL Developer's SPARQL query editor. In the next installment, we will set up a W3C-standard SPARQL endpoint to provide a REST interface. A detailed HOW-TO is available HERE.
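As referenced above, a SPARQL query that returns the top 10 properties and their triple counts would look something like the following sketch (reconstructed from the description; the original post showed the query in a screenshot):

SELECT ?p (COUNT(*) AS ?cnt)
WHERE { ?s ?p ?o }
GROUP BY ?p
ORDER BY DESC(?cnt)
LIMIT 10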


Graph Features

Using RDF Knowledge Graphs in the Oracle Public Cloud (Part 1)

Oracle Database includes an enterprise-grade Resource Description Framework (RDF) triplestore as part of the Spatial and Graph option. RDF is a W3C-standard graph data model for representing knowledge graphs, which are increasingly used to underpin intelligent applications. This is part 1 of a blog series that shows how to set up a fully functioning RDF triplestore on the Oracle Public Cloud. In part 1, we will show how to configure Oracle Spatial and Graph – RDF Semantic Graph on an existing Oracle Database Cloud Service (DBCS) instance. In this example, we will use Oracle Database version 18.1. For more information on how to create a DBCS instance, refer to the tutorial here or download the detailed HOW-TO document at the end of this blog. Subsequent posts in this blog series will show how to load a publicly available RDF dataset and configure a W3C-standard SPARQL endpoint.

We will use Oracle SQL Developer for most of our interaction with the DBCS instance. Starting with version 18.1, SQL Developer includes a nice RDF Semantic Graph plugin that we will use in this blog post. First, use SQL Developer to open a connection to your DBCS instance for the SYSTEM user. There are several ways to connect to a DBCS instance with SQL Developer; refer to the Database Cloud Service user guide for more information on connecting with SQL Developer. In a SQL Worksheet for your SYSTEM connection, execute the following query to check the RDF Semantic Graph installation on your DBCS instance.

SELECT * FROM MDSYS.RDF_PARAMETER;

The query result should show a valid 18.1.0.0.0 installation of RDF Semantic Graph. Next, we will create a Semantic Network to prepare the database for storing RDF data. As a prerequisite, we need a tablespace for the Semantic Network, so run the following SQL statement as SYSTEM.

create bigfile tablespace rdftbs datafile '?/dbs/rdftbs.dat'
size 512M reuse autoextend on next 512M maxsize 10G
extent management local
segment space management auto;

Now we can use the RDF Semantic Graph component of SQL Developer to create the Semantic Network. Expand the system connection by clicking the plus sign next to the connection name, and then scroll down to the RDF Semantic Graph component. Right-click on RDF Semantic Graph and select Create Semantic Network. Use the drop-down menu to select the tablespace that we created earlier and click Apply.

That's it. We have verified the RDF Semantic Graph installation and created all the database objects needed to store RDF data. Next, we will make some changes to the default index configuration for better general-purpose query performance. Expand Network Indexes under RDF Semantic Graph to see index codes for the current network indexes. Each letter in an index code corresponds to a component of an RDF quad: S – subject, P – predicate, C – canonical object, G – graph, M – model. By default, two indexes are created: a mandatory PCSGM unique index and a PSCGM index. This indexing scheme works very well when SPARQL triple patterns have constants in the predicate position, but it may run into performance problems when variables appear in the predicate position. For a more general scheme, the three-index combination of PCSGM, SPCGM and CSPGM works well, so we will drop the PSCGM index and add SPCGM and CSPGM indexes. Right-click on RDF_LNK_PSCGM_IDX, select Drop Semantic Index, and click Apply. Only the PCSGM index should appear under Network Indexes now.

Right-click on Network Indexes and select Create Semantic Index. Enter SPCGM as the index code and click Apply. Then repeat this process using CSPGM as the index code. You should now see CSPGM, PCSGM and SPCGM indexes under Network Indexes. At this point, we have created a Semantic Network on our DBCS instance and set up an indexing scheme for general-purpose SPARQL queries. Our DBCS instance is now ready to load some RDF data. The next blog post in this series will show how to load and query a publicly available RDF dataset. A detailed HOW-TO document for this blog series is available HERE.
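If you prefer to script the network setup rather than click through the SQL Developer wizard, the same steps can be driven through the SEM_APIS PL/SQL package from any client. Below is a minimal JDBC sketch under the assumptions of this post (SYSTEM connection, rdftbs tablespace already created); the connection details are placeholders, and the SEM_APIS calls are the PL/SQL equivalents of the wizard actions described above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SemanticNetworkSetup {
  public static void main(String[] args) throws Exception {
    // Connect as SYSTEM; host and service name are placeholders.
    Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@//<DBCS_HOST>:1521/<SERVICE_NAME>", "system", "<PASSWORD>");
    try (Statement stmt = conn.createStatement()) {
      // Create the semantic network in the rdftbs tablespace.
      stmt.execute("BEGIN SEM_APIS.CREATE_SEM_NETWORK('rdftbs'); END;");
      // Replace the default PSCGM index with SPCGM and CSPGM.
      stmt.execute("BEGIN SEM_APIS.DROP_SEM_INDEX('PSCGM'); END;");
      stmt.execute("BEGIN SEM_APIS.ADD_SEM_INDEX('SPCGM'); END;");
      stmt.execute("BEGIN SEM_APIS.ADD_SEM_INDEX('CSPGM'); END;");
    }
    conn.close();
  }
}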


Graph Features

Effective Graph Visualization with Cytoscape and Oracle's Property Graph Database (3)

In this installment of the Cytoscape visualization for Oracle's Property Graph Database series, I am going to talk about the key steps to make your first connection to a backend property graph (PG) database. Remember that Oracle offers a PG database on Oracle Database as well as a PG database on the big data platform.

First of all, I'd like to make a quick announcement. Very recently, we released a Cytoscape plugin for Oracle Database 18c. It is designed and tested for Oracle Database 18c, and we have fine-tuned and improved performance when running the Cytoscape client against a remote Oracle 18c database. To get this plugin, please click here and choose the one dated May 2018.

Now, let me switch back to making connections to a backend PG database. There are two cases:

1) The backend PG database is an Oracle Database. Start Cytoscape, then click File, Load, Property Graph, and Connect to Oracle Database. Type in the JDBC connection URL, username, and password, and click the magnifier icon. This starts a graph DB connection. Once a connection is established, you will see a drop-down list under "Graph name", and it is then up to you to pick a graph to start your visualization.

2) The backend PG database is either an Oracle NoSQL Database or Apache HBase. The steps are almost identical; only the database connection settings differ among Oracle NoSQL Database, Apache HBase, and Oracle Database.

Hope this helps you get started.

Zhe


Graph Features

Oracle's Property Graph Database in Shining Armor (5)

This is the fifth installment of the series "Oracle's Property Graph Database in Shining Armor." In this blog, I am going to talk about fast property graph data loading using the built-in Java APIs on Oracle Database Cloud Service Bare Metal. Click 1, 2, 3, 4 for previous installments.

The following is a code snippet running in Groovy. You can take it out and embed it in your Java application (with very minor changes, including type declarations) if needed.

// Step 1: connect to the database and create a partitioned graph. 16 partitions are used.
oracle = new Oracle("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=g.subnet201703141927320.graphdbnet1.oraclevcn.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1.subnet201703141927320.graphdbnet1.oraclevcn.com)))", "pg", "<PASSWORD_HERE>");
opg = OraclePropertyGraph.getInstance(oracle, "lj", 16 /*iHashPartitionsNum*/);

// Step 2: create an instance of OraclePropertyGraph from a graph config
cfg = GraphConfigBuilder.forPropertyGraphRdbms()
  .setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=g.subnet201703141927320.graphdbnet1.oraclevcn.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1.subnet201703141927320.graphdbnet1.oraclevcn.com)))")
  .setUsername("pg").setPassword("<PASSWORD_HERE>")
  .setName("lj")
  .setMaxNumConnections(16)
  .setLoadEdgeLabel(false)
  .addEdgeProperty("weight", PropertyType.DOUBLE, "1000000")
  .build();
opg = OraclePropertyGraph.getInstance(cfg);

// Step 3: load the LiveJournal graph (LiveJ) in .opv/.ope format (compressed), in parallel
opg-oracledb> opgdl = OraclePropertyGraphDataLoader.getInstance();
opg-oracledb> vfile = "/home/oracle/liveJ.opv.gz"  // vertex flat file
opg-oracledb> efile = "/home/oracle/liveJ.ope.gz"  // edge flat file
opg-oracledb> lStarttime = System.currentTimeMillis();
opg-oracledb> opgdl.loadDataWithSqlLdr(opg, "pg", "<PASSWORD_HERE>", "orcl122", vfile, efile, 32/*dop*/, true/*bNamedPipe*/, "/u01/app/oracle/product/12.2.0.1/dbhome_1/bin/sqlldr", true, "PDML=T,PDDL=T,NO_DUP=T,");
opg-oracledb> System.out.println("total ms " + (System.currentTimeMillis() - lStarttime));
total ms 308560

opg-oracledb> opg.countEdges()
==>68993773

All done. The above API completed loading the LiveJ graph (with 68.99M edges) in just 308 seconds, and this time included building a few unique key indexes and B*Tree indexes.

Cheers,
Zhe
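For readers new to the data access layer, here is the same loadDataWithSqlLdr call again with each argument annotated. The comments are descriptive paraphrases based on the call above and its inline /*dop*/ and /*bNamedPipe*/ hints, not the formal API parameter names; the purpose of the second-to-last flag is not stated in this post, so it is left unannotated.

opgdl.loadDataWithSqlLdr(
    opg,                  // the OraclePropertyGraph instance created in Step 2
    "pg",                 // database user that owns the graph tables
    "<PASSWORD_HERE>",    // that user's password
    "orcl122",            // database name passed to SQL*Loader
    vfile,                // vertex flat file (.opv, gzipped here)
    efile,                // edge flat file (.ope, gzipped here)
    32,                   // dop: degree of parallelism for the load
    true,                 // bNamedPipe: stream data to SQL*Loader through named pipes
    "/u01/app/oracle/product/12.2.0.1/dbhome_1/bin/sqlldr",  // path to the sqlldr binary
    true,                 // (purpose not stated in this post)
    "PDML=T,PDDL=T,NO_DUP=T,");  // loader options as used above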


Graph Features

Powerful and Effective Graph Visualization with Cytoscape and Oracle's Property Graph Database (2)

In this installment of the Cytoscape visualization for Oracle's Property Graph Database series, I am going to talk about the key steps required to set up Cytoscape visualization for Oracle's Property Graph Database. These steps are the same for Oracle Spatial and Graph (OSG) and Oracle Big Data Spatial and Graph (BDSG). I assume you are using Linux or Mac OS. The major steps are as follows.

0) Make sure you have Oracle JDK 8.

1) Download and install Cytoscape (3.2.1 or above). Assume you install Cytoscape under /Applications/Cytoscape_v3.6.1

2) Start Cytoscape once to initialize it, and make sure the following directory is created: ~/CytoscapeConfiguration   Once that directory exists, quit Cytoscape.

3) cd /Applications/Cytoscape_v3.6.1

4) Unzip the Cytoscape plugin for OSG (or BDSG) in the above directory. A new subdirectory will be created; its name is oracle_property_graph_cytoscape/ if you are using the Cytoscape plugin for Oracle Database.

5) Copy propertyGraphSupport*.jar from the jar/ directory in that subdirectory into ~/CytoscapeConfiguration/3/apps/installed/

6) Copy propertyGraph.properties from the jar/ directory in that subdirectory into ~/CytoscapeConfiguration   To customize this configuration, follow the usage guide (a PDF file you can find in the Cytoscape plugin zip file).

7) Kick off Cytoscape by running the following under /Applications/Cytoscape_v3.6.1

sh ./startCytoscape.sh

NOTE: it is important to use startCytoscape.sh to start the visualization. Do not use the original cytoscape.sh, because you will not see any of the property-graph-related functions.

Cheers,
Zhe

References
[1] http://www.oracle.com/technetwork/database/options/spatialandgraph/downloads/index-156999.html
[2] Oracle Big Data Spatial and Graph Downloads


Graph Features

Oracle's Property Graph Database in Shining Armor (4)

This is the fourth installment of the series "Oracle's Property Graph Database in Shining Armor." In this blog, I am going to talk about starting the PGX server in a few quick steps. Click 1, 2, 3 for previous installments.

Note that if you chose Oracle Database 12.2.0.1 during service creation, you may want to install this patch in order to run PGQL, a powerful graph pattern matching language. If you chose Oracle Database 18.1, install patch 27639357 instead.

To start the PGX server, we first log in to the machine running the Oracle Database instance. Here I am assuming you want to run PGX on the same hardware running Oracle Database.

- Log in.
[rdf@hqgraph1 ~]$ ssh -i <key_filename> opc@<IP_HERE>

- Become oracle.
[opc@g dbhome_1]$ sudo su - oracle
[oracle@g ~]$ cd /u01/app/oracle/product/12.2.0.1/dbhome_1

- Change enable_tls from true to false in the following configuration file.
[oracle@g dbhome_1]$ cd md/
[oracle@g md]$ cd property_graph
[oracle@g property_graph]$ cat pgx/conf/server.conf
{
  "port": 7007,
  "enable_tls": false,
  "enable_client_authentication": false
}

- Set a few options for the number of workers and the on-heap/off-heap memory sizes (explained in this blog).
[oracle@g property_graph]$ export _JAVA_OPTIONS="-Dpgx.num_workers_io=4 -Dpgx.max_off_heap_size=12000 -Dpgx.num_workers_analysis=4 -Xmx12000m "

- Kick off the server.
[oracle@g property_graph]$ cd pgx/bin/
[oracle@g bin]$ ./start-server
Picked up _JAVA_OPTIONS: -Dpgx.num_workers_io=4 ...
...
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
00:05:09,092 INFO Ctrl$2 - >>> PGX engine running.
May 05, 2018 12:05:11 AM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler ["http-nio-7007"]
...

- Verify that the server is listening.
[oracle@g ~]$ netstat -an | grep 7007
tcp        0      0 0.0.0.0:7007                0.0.0.0:*                   LISTEN
[oracle@g ~]$

- Ping the endpoint.
[oracle@g ~]$ curl http://127.0.0.1:7007/version
"2.X.0"

OK. All done.

Cheers,
Zhe
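Besides curl, you can also confirm the endpoint from a PGX client session. A minimal Groovy sketch using the same client API that appears later in this series (the host is a placeholder; if no exception is thrown, the server is up and serving requests):

// Connect a client session to the freshly started PGX server.
pgxServer = "http://<YOUR_HOST>:7007/"
session = Pgx.getInstance(pgxServer).createSession("smoke-test")
System.out.println("Connected: " + session)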


Graph Features

Join AskTOM Office Hours on May 31 - Gain Insights with Graph Analytics

Want to learn how to use powerful Oracle Database graph analysis in your business applications? Get a jumpstart in our free, monthly 1-hour sessions for developers in the AskTOM series. This month's topic:

May 31, 2018    8:00 US PDT | 17:00 CEST
Gain Insights with Graph Analytics

See the magic of graphs in this session. Graph analysis can answer questions like detecting patterns of fraud or identifying influential customers - and do it quickly and efficiently. Our experts Albert Godfrind and Zhe Wu will show you the APIs for accessing graphs and running analytics such as finding influencers, communities, and anomalies, and how to use them from various languages including Groovy, Python, and JavaScript, with Jupyter and Zeppelin notebooks. As always, feel free to bring any graph-related questions you have for us as well.

Learn more and register here: https://devgym.oracle.com/pls/apex/dg/office_hours/3084

If you missed them, view replays from prior AskTOM property graph sessions here:
Introduction to Property Graphs https://youtu.be/e_lBqPh2k6Y
How To Model and Construct Graphs with Oracle Database https://youtu.be/evFTmXWU7Zw


Graph Features

A Trick to Efficiently Build a Config for a Property Graph with Many Properties

If you have used Oracle's property graph database, then you are very likely familiar with the "graph config" concept. Basically, a graph config contains all the information required to read a graph from a backend database and construct an in-memory snapshot of it: the graph name, the underlying graph database information, and the list of vertex/edge properties that need to be included in the in-memory snapshot. An example is as follows.

cfg = GraphConfigBuilder.forPropertyGraphRdbms()
  .setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl122")
  .setUsername("pg").setPassword("pg")
  .setName("lj1")
  .setMaxNumConnections(16)
  .setLoadEdgeLabel(false)
  .addEdgeProperty("weight", PropertyType.DOUBLE, "1000000")
  .build();

Now, it is important that one lists out every single property to be included. If a property P is not declared in the config, then you cannot use P in a follow-up PGQL query. Declaring all the required properties can be a tedious task when a graph happens to have many properties. Fortunately, with a bit of SQL trickery, this can be simplified.

Say we have a graph named myGraph stored in an Oracle Database (either on-premises or in Oracle Database Cloud Service). The following SQL lists all the property keys and the data types of the property values ("1" denotes an integer property data type).

SQL> select distinct k, t from myGraphVT$;

K                                                                                 T
--------------------------------------------------------------------------------  ----------
LOB                                                                                1
Label                                                                              1
Type                                                                               1
...

To generate the necessary config settings, I use a SQL statement like the following, which produces the full list of addVertexProperty calls in one shot.

SQL> set heading off;
SQL> set pagesize 9999
SQL> select distinct '.addVertexProperty("' || k || '", PropertyType.STRING, "null")' from myGraphVT$;

.addVertexProperty("LOB", PropertyType.STRING, "null")
.addVertexProperty("Label", PropertyType.STRING, "null")
.addVertexProperty("Type", PropertyType.STRING, "null")
.addVertexProperty("Address", PropertyType.STRING, "null")
.addVertexProperty("Name", PropertyType.STRING, "null")
.addVertexProperty("Web", PropertyType.STRING, "null")
...

The rest of the work is a simple copy and paste.

Cheers,
Zhe
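If you would rather skip the copy-and-paste step entirely, the same trick can be driven from the Groovy shell: query the distinct keys over plain JDBC and feed them straight into the config builder. This is just a sketch under the same assumptions as above (graph myGraph, every property treated as a string); the connection details are placeholders.

import java.sql.DriverManager

// Fetch the distinct property keys directly from the vertex table.
conn = DriverManager.getConnection("jdbc:oracle:thin:@127.0.0.1:1521:orcl122", "pg", "<PASSWORD>")
builder = GraphConfigBuilder.forPropertyGraphRdbms()
  .setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl122")
  .setUsername("pg").setPassword("<PASSWORD>")
  .setName("myGraph")
  .setMaxNumConnections(16)
  .setLoadEdgeLabel(false)
stmt = conn.createStatement()
rs = stmt.executeQuery('select distinct k from myGraphVT$')
while (rs.next()) {
  // Mirror the generated statements above: every property as a string.
  builder.addVertexProperty(rs.getString(1), PropertyType.STRING, "null")
}
rs.close(); stmt.close(); conn.close()
cfg = builder.build()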


Graph Features

Graph resources - videos and blog posts

Recently, several helpful learning resources on using graph technologies have become available. Check out these videos and blog posts!

Videos and replays

AskTOM Office Hours – Introduction to Property Graphs, February session replay: https://youtu.be/e_lBqPh2k6Y  An intro to property graph use cases and features with a demo, in this developer-oriented series.

Oracle Code Los Angeles session videos:
- Property Graph 101 – Dan Vlamis, Vlamis Software – https://youtu.be/qaQO-mW6lFs  A great introduction to property graphs from Vlamis.
- Analyzing Blockchain and Bitcoin Transaction Data – Zhe Wu, Oracle – https://youtu.be/w8OEVobyhFE  Zhe explains how graph databases can be used to gain insight into blockchain/bitcoin transaction data.

ODTUG Webinar recording: Property Graphs 101: How to Get Started with Property Graphs on the Oracle Database – Arthur Dayton, Vlamis Software – https://youtu.be/QSj0zOjOAWI  Arthur shows step by step how to use Oracle Database to take HR data in standard relational tables, convert it to Oracle's property graph format, and perform graph visualizations and analysis to view relationships in the data.

More Spatial and Graph videos on our YouTube channel: https://www.youtube.com/channel/UCZqBavfLlCuS0il6zNY696w

Blog posts & presentation slides

- Intro to Graphs at Oracle by Michael Sullivan, Oracle A-Team – http://www.ateam-oracle.com/intro-to-graphs-at-oracle/  A great intro to Oracle's graph technologies, covering both RDF semantic graphs and property graphs on Oracle Database and Oracle Big Data platforms.
- GDPR and Analytics – Data Lineage on Steroids, ITOUG presentation by Gianni Ceresa – https://speakerdeck.com/gianniceresa/gdpr-and-you-the-nightmare-of-ba
- Blog post by Gianni Ceresa on https://gianniceresa.com/  An end-to-end example of graphs in Oracle Database using the OE Sample Schema.
- Spatial and Graph Summit at Analytics and Data Summit, March 19-22, 2018 – Slides are available at https://analyticsanddatasummit.org/schedule/ (currently for registered attendees only)


Graph Features

Oracle's Property Graph Database in Shining Armor (3)

This is the third installment of the series "Oracle's Property Graph Database in Shining Armor." In this blog, I am going to talk about configuring the database so that it is ready to run property graph functions. Click here for creating the database on OCI Bare Metal.

Step 1: Log in, sudo to become "oracle", and enable 32K (extended max string) support. Note that I am assuming the PDB is "PDB1" in the following script.

$ ssh -i <your_key_file_here> opc@<YOUR_IP_HERE>
Last login: Mon Mar 19 16:49:28 2018 from <YOUR_IP_HERE>
[opc@g ~]$ sudo su - oracle
[oracle@g ~]$ cat /etc/oratab
# db1:/u01/app/oracle/product/12.2.0.1/dbhome_2:N
[oracle@g dbhome_2]$ tcsh
[oracle@g dbhome_2]$ source bin/coraenv
ORACLE_SID = [oracle] ? db1
The Oracle base has been set to /u01/app/oracle

sqlplus / as sysdba
alter system set max_string_size=extended scope=spfile;
alter session set container=PDB1;
alter system set max_string_size=extended scope=spfile;
shutdown immediate;
conn / as sysdba
shutdown immediate;
startup upgrade
purge recyclebin;
@?/rdbms/admin/utl32k.sql
alter session set container=PDB1;
startup upgrade
purge recyclebin;
@?/rdbms/admin/utl32k.sql
shutdown immediate;
conn / as sysdba
shutdown immediate;
startup

Step 2: Create a user 'PG'.

alter session set container=PDB1;
CREATE bigfile TABLESPACE pgts
  DATAFILE '+DATA/pgts.data' SIZE 2G REUSE
  AUTOEXTEND ON next 128M maxsize unlimited
  EXTENT MANAGEMENT LOCAL;
create user pg identified by <YOUR_PASSWORD_HERE>;
alter user pg default tablespace pgts;
grant connect, resource, alter session to pg;
grant unlimited tablespace to pg;

Step 3: Create a test PG graph using the PL/SQL API.

[oracle@g dbhome_2]$ sqlplus pg/<YOUR_PASSWORD_HERE>@db122
SQL*Plus: Release 12.2.0.1.0 Production on Fri Apr 6 23:50:54 2018
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production

SQL> exec opg_apis.create_pg('my_first_pg',4,4,'PGTS', null);
PL/SQL procedure successfully completed.

SQL> desc my_first_pgVT$;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 VID                                       NOT NULL NUMBER
 K                                                  NVARCHAR2(3100)
 T                                                  NUMBER(38)
 V                                                  NVARCHAR2(15000)
 VN                                                 NUMBER
 VT                                                 TIMESTAMP(6) WITH TIME ZONE
 SL                                                 NUMBER
 VTS                                                DATE
 VTE                                                DATE
 FE                                                 NVARCHAR2(4000)

Note that if you encounter an "ORA-28353: failed to open wallet" exception, you can do the following.

tcsh
setenv ORACLE_UNQNAME <YOUR_DB_UNIQUE_NAME>
sqlplus / as sysdba
alter system set encryption wallet open identified by <YOUR_PASSWORD>;
SQL> conn / as sysdba
alter session set container=PDB1;
administer key management set keystore open identified by <YOUR_PASSWORD>;

Cheers,
Zhe
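As a quick sanity check from the Java side, you can open the graph just created from the built-in Groovy shell. A minimal sketch along the lines of the APIs used in this series; the JDBC URL is a placeholder, and the two-argument getInstance variant for opening an existing graph is an assumption on my part (the series elsewhere shows the variant that also takes a partition count).

// Connect as the new PG user and open the graph created via opg_apis.create_pg.
oracle = new Oracle("jdbc:oracle:thin:@<HOST>:1521:<SID>", "pg", "<YOUR_PASSWORD_HERE>")
opg = OraclePropertyGraph.getInstance(oracle, "my_first_pg")
// A freshly created graph should report zero vertices.
System.out.println("vertices: " + opg.countVertices())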


Graph Features

Oracle's Property Graph Database in Shining Armor (2)

This is the second installment of the series "Oracle's Property Graph Database in Shining Armor." In this blog, I am going to talk about setting up the property graph database on Oracle Cloud, specifically Oracle Database Cloud Service - Bare Metal.

The database creation is straightforward. Just log in to OCI with your credentials, click on the Database tab, and follow the UI. I must say the UI is really intuitive and easy to use; I did not read a single page of documentation and I managed to get a 12.2.0.1 database up and running in a few minutes. Note that in order to use the property graph feature in Oracle Spatial and Graph, you can choose either Oracle Database 12.2 or 18.1.

Once your database is installed, write down the IP address and, of course, the database name. For performance, I selected Bare Metal (BM DenseIO). You can find the Database Unique Name in the Databases section; in my setup, the unique name is graph_phx1hc.

In a follow-up blog, I am going to talk about a few simple configurations required to get the property graph functions ready for business.

Cheers,
Zhe


Using SSD to Speed up Property Graph Operations in Oracle Database (III)

This is the third installment of the series "Using SSD to Speed up Property Graph Operations in Oracle Database." For the first two, click 1st & 2nd. In this blog, I am going to show you how to serialize out an already-loaded in-memory graph into a .PGB format. A PGB format can be viewed as a memory dump of a graph snapshot. Hence it is very efficient to read back into memory. However, there is no easy way to make an update to an existing PGB. ... opg-oracledb> pgxServer="http://127.0.0.1:7007/" opg-oracledb> pgxSession=Pgx.getInstance(pgxServer).createSession("session1"); opg-oracledb> pgxGraph = pgxSession.readGraphWithProperties(opg.getConfig()); ... opg-oracledb> lStarttime=System.currentTimeMillis();  opg-oracledb> pgxGraph.store(Format.PGB, "/tmp/livej.pgb", true); opg-oracledb> System.out.println("total ms " + (System.currentTimeMillis() - lStarttime)); total ms 13306 opg-oracledb> pgxGraph.memoryMb ==>1745 $ ls -l /tmp/livej.pgb -rw-rw-r--. 1 user group 886096199 Mar  2 13:51 /tmp/livej.pgb   From the above results, we can see that it took about 13.3 seconds to write out a PGB of size 886 MB. In the next installment, I am going to talk about how to store this PGB persistently into Oracle Database. Cheers, Zhe Wu  

This is the third installment of the series "Using SSD to Speed up Property Graph Operations in Oracle Database." For the first two, click 1st & 2nd. In this blog, I am going to show you how to...

Graph Features

Announcing Graph Developer Training Day – March 19, 2018

The Oracle Spatial and Graph product team announces Graph Developer Training Day 2018, a free full-day workshop to help you understand and develop property graph applications using Oracle Database Spatial and Graph and Big Data Spatial and Graph. Oracle partners, customers, attendees of Analytics and Data Summit 2018, and Oracle staff are invited. Targeted toward developers, architects, and data scientists; sessions will be delivered by Oracle developers and product managers.

The event will take place at Oracle HQ on Monday, March 19, before Analytics and Data Summit. Please RSVP to marion.smith@oracle.com by March 6 if you're planning to attend! Seating is limited.

Details:
March 19, 2018, 8:15am – 5:00pm
Oracle Conference Center
350 Oracle Pkwy, Redwood City, CA 94065

Agenda (subject to change):

8:15 – 9:00am    Breakfast/Registration

9:00 – 10:30am   Getting Started with Graph Databases
- Welcome and overview of graph technologies
- Provisioning an Oracle Database Cloud Service (DBCS)
- Understanding graph formats and efficient data loading

10:30 – 11:00am  Break

11:00 – 12:30    Generating and Analyzing Graph Data
- Graph generation - how to construct a graph from source data
- Graph analytics using PGX and RDBMS

12:30 – 1:30pm   Networking Lunch

1:30 – 3:00pm    Graph Query and Visualization
- Property Graph Query Language (PGQL) - on PGX and RDBMS
- Graph visualization (Cytoscape)

3:00 – 3:30pm    Break

3:30 – 5:00pm    New Tooling and Functionality + Lightning Round
- Notebook UI
- Graph Studio
- Lightning Round

5:00             Wrap-up and Close


Spatial and Graph

Spatial and Graph Sessions at Analytics and Data Summit 2018

All Analytics. All Data. No Nonsense.
Featuring Spatial and Graph Summit
March 20–22, 2018
Oracle HQ Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065

We've changed our name! Formerly called the BIWA Summit with the Spatial and Graph Summit. Same great technical content – great new name! Announcing 24+ technical sessions, case studies, and hands-on labs around spatial and graph technologies. View the agenda here and register today for best rates.

The agenda for Spatial and Graph Summit at Analytics and Data Summit 2018 is now available. Join us for this premier event for spatial, graph, and analytics and attend:
- Technology sessions from Oracle experts on the latest developments, useful features, and best practices
- Case studies from top customers and partners
- Hands-on labs - ramp up fast on new technologies
- Keynotes by industry experts
- Sessions on machine learning, big data, cloud, and database technologies from Analytics and Data Summit

View the agenda at http://www.biwasummit.org/schedule/

Selected Spatial + Graph sessions include the following. Check out the agenda for more sessions and details.

Spatial technologies for business applications and GIS
- Enriching Business Data with Location – Albert Godfrind, Oracle
- Using GeoJSON in the Oracle Database – Albert Godfrind, Oracle
- Powerful Spatial Features You Never Knew Existed in Oracle Spatial and Graph – Daniel Geringer, Oracle
- 18c Spatial New Features Update – Siva Ravada, Oracle
- Using Spatial in Oracle Cloud with Developer Tools and Frameworks – David Lapp & Siva Ravada, Oracle
- Spatial analytics with Oracle DV, Analytics Cloud, and Database Cloud Service – David Lapp & Jayant Sharma, Oracle
- Spatial Analytics with Spark & Big Data – Siva Ravada, Oracle

Geospatial Industry Use Cases
- Country Scale digital maps data with Oracle Spatial & Graph – Ankeet Bhat, MapmyIndia
- Ordnance Survey Ireland: National Mapping as a Service – Éamonn Clinton, Ordnance Survey Ireland
- Feeding a Hungry World: Using Oracle Products to Ensure Global Food Security – Mark Pelletier, USDA/InuTeq
- 3D Spatial Utility Database at CALTRANS – Donna Rodrick, California Dept of Transportation

Hands On Labs – Get started with property graphs
- Property Graph 101 on Oracle Database 12.2 for the completely clueless – Arthur Dayton and Cathye Pendley, Vlamis Software Solutions
- Using Property Graph and Graph Analytics on NoSQL to Analyze Data on Meetup.com – Karin Patenge, Oracle Germany
- Using R for Big Data Advanced Analytics, Machine Learning, and Graph – Mark Hornick, Oracle

Graph Technical Sessions
- An Introduction to Graph: Database, Analytics, and Cloud Services – Hans Viehmann, Zhe Wu, Jean Ihm, Oracle
- Sneak Preview: Graph Cloud Services and Spatial Studio for Database Cloud – Jim Steiner & Jayant Sharma, Oracle
- Analyzing Blockchain and Bitcoin Transaction Data as Graph – Zhe Wu & Xavier Lopez, Oracle
- Ingesting streaming data into Graph database – Guido Schmutz, Trivadis

Applications of Graph Technologies
- Analyze the Global Knowledge Graph with Visualization, Cloud, & Spatial Tech – Kevin Madden, Tom Sawyer Software
- Graph Modeling and Analysis for Complex Automotive Data Management – Masahiro Yoshioka, Mazda
- Follow the Money: A Graph Model for Monitoring Bank Transactions – Federico Garcia Calabria, Oracle
- Anomaly Detection in Medicare Provider Data using OAAgraph – Sungpack Hong, Mark Hornick, Francisco Morales, Oracle
- Fake News, Trolls, Bots and Money Laundering – Find the truth with Graphs – Jim Steiner and Sungpack Hong, Oracle


Graph Features

Using SSD to Speed up Property Graph Operations in Oracle Database (II)

This is the second installment of the series "Using SSD to Speed up Property Graph Operations in Oracle Database." Click here for the first one. In this blog, I am going to share with you a follow up test after the graph data was loaded into the database. The same hardware and the same Oracle Database version are used as those in the first installment. However, I upgraded the OS version to the latest EL7. The upgrade was not related to this graph database performance series but rather due to a need to run Docker engine on the same hardware system. OK, enough talking, let's see some code snippets. On the PGX side, before I started the PGX server, I set the following options. export _JAVA_OPTIONS="-Dpgx.num_workers_io=8 -Dpgx.max_off_heap_size=12000 -Dpgx.num_workers_analysis=8 -Xmx12000m " Then, I kicked off the following Groovy script (Java code snippets). opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms().setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl122") .setUsername("pg").setPassword("<YOUR_PASSWORD_HERE>") .setName("lj1") .setMaxNumConnections(16) .setLoadEdgeLabel(false).addEdgeProperty("weight", PropertyType.DOUBLE, "1000000")  .build(); pgxServer="http://127.0.0.1:7007/" lStarttime=System.currentTimeMillis(); pgxSession=Pgx.getInstance(pgxServer).createSession("session1"); pgxGraph = pgxSession.readGraphWithProperties(opg.getConfig()); System.out.println("total ms " + (System.currentTimeMillis() - lStarttime)); I did three consecutive runs (cold, warm, hot) and the time took to read this 70M-edges graph were: total ms 117335 total ms 87873 total ms 86983 As you can see, a warm run took a bit less than 90 seconds to read the whole graph into PGX's in-memory data structures. During the graph reading, I did a top to see the system workload and observed the following. It is obvious that both the Java process (PGX) and the database were working hard in parallel.  Tasks: 597 total,   4 running, 593 sleeping,   0 stopped,   0 zombie %Cpu(s): 30.0 us,  2.7 sy,  0.0 ni, 53.2 id, 13.1 wa,  0.0 hi,  1.0 si,  0.0 st KiB Mem : 65698368 total,  5023204 free, 15870304 used, 44804860 buff/cache KiB Swap: 28344760 total, 28342916 free,     1844 used. 37149828 avail Mem   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                        16600 user       20   0 20.575g 3.079g  25348 S 214.2  4.9   1:13.57 java                                           17282 user       20   0 19.236g 275956 269628 D  22.8  0.4   0:02.95 oracle_17282_or                                17256 user       20   0 19.236g 302200 295284 D  21.5  0.5   0:02.93 oracle_17256_or                                17258 user       20   0 19.236g 274808 268480 S  21.2  0.4   0:02.98 oracle_17258_or                                17262 user       20   0 19.236g 276572 270252 S  21.2  0.4   0:02.90 oracle_17262_or                                17264 user       20   0 19.236g 275912 269580 D  21.2  0.4   0:03.00 oracle_17264_or                                17268 user       20   0 19.236g 274812 268460 D  21.2  0.4   0:02.97 oracle_17268_or                                17270 user       20   0 19.236g 269464 263136 D  21.2  0.4   0:02.79 oracle_17270_or                                17278 user       20   0 19.236g 273056 266724 R  21.2  0.4   0:02.92 oracle_17278_or  Cheers,

This is the second installment of the series "Using SSD to Speed up Property Graph Operations in Oracle Database." Click here for the first one. In this blog, I am going to share with you a follow up...

Graph Features

Using Advanced Options When Creating a New Property Graph in Oracle Database

For property graph users who know Oracle Database well, you are likely aware that the PL/SQL API opg_apis.create_pg offers an options parameter. One of the supported options is "INMEMORY=T", which leverages the Database In-Memory feature. A question then arises: can we use this option from Java, or via the graph config? The answer is yes to both.

To create a new property graph with an advanced option, the following static Java method (part of the oracle.pg.rdbms.OraclePropertyGraph class) can be used. One can easily set the option(s) in the last parameter.

public static OraclePropertyGraph getInstance(Oracle oracle, String szGraphName,
    int iHashPartitionsNum, int iDOP, String szTBS, String szOptions)
  throws SQLException;

The following shows, on the other hand, a way to set the advanced option via the graph config. Here we assume the graph name "tg1" is new.

builder = GraphConfigBuilder.forPropertyGraphRdbms()
builder.setJdbcUrl("jdbc:oracle:thin:@<HOST>:1521:<SID>")
  .setUsername("pg").setPassword("pg")
  .setName("tg1")
  .setMaxNumConnections(2)
  .setLoadEdgeLabel(false)
  .addVertexProperty("name", PropertyType.STRING, "default_name")
  .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000");
builder.setOptions("INMEMORY=T");
cfg = builder.build();
opg = OraclePropertyGraph.getInstance(cfg);

Once the above Groovy script (running Java code snippets) completes, you can run a simple SQL query to verify that the in-memory option is enabled. Note that there are 8 rows here because there are 8 partitions in the vertex table TG1VT$.

SQL> select inmemory from user_tab_partitions where table_name='TG1VT$';

INMEMORY
--------
ENABLED
ENABLED
ENABLED
ENABLED
ENABLED
ENABLED
ENABLED
ENABLED

8 rows selected.

Cheers,
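For completeness, the same option can be passed straight to the PL/SQL API over JDBC as well. A minimal sketch following the create_pg call pattern shown in installment (3) of the Shining Armor series; the connection details, the graph name tg2, and the partition counts are placeholders of my own.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CreatePgInMemory {
  public static void main(String[] args) throws Exception {
    Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@<HOST>:1521:<SID>", "pg", "<PASSWORD>");
    // Same call pattern as 'exec opg_apis.create_pg(...)', with the
    // INMEMORY option supplied in the last (options) parameter.
    try (CallableStatement cs = conn.prepareCall(
        "BEGIN opg_apis.create_pg('tg2', 8, 8, 'PGTS', 'INMEMORY=T'); END;")) {
      cs.execute();
    }
    conn.close();
  }
}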


Graph Features

Using SSD to Speed up Property Graph Operations in Oracle Database (I)

First of all, Happy New Year!

At the end of last year, I got a question about property graph data loading performance in Oracle Database. After a while, I realized the performance issue was most likely caused by the database setup: a single SATA disk used in a VM, whose IO performance was obviously poor. To illustrate the importance of sufficiently fast storage to an Oracle Database, I ran an experiment on an old commodity server with 64 GB RAM and 2 quad-core CPUs. The server was assembled from parts 6 years ago, and the total cost was under 1,000 USD. It had a few SATA disks installed; to speed up storage, I added two Samsung SSDs (850 Pro, 1 TB each). After that, I installed Oracle Database release 12.2.0.1 and put the two SSDs under the management of ASM, with external redundancy.

The graph used is LiveJournal, which has around 70M edges. The following code snippet was executed in Groovy. A property graph named lj1 was first created with 16 partitions. I then used the built-in loadDataWithSqlLdr API (part of the Data Access Layer) to load the data into Oracle Database in parallel (DOP=8).

oracle = new Oracle("jdbc:oracle:thin:@127.0.0.1:1521:orcl122", "pg", "<password>");
opg = OraclePropertyGraph.getInstance(oracle, "lj1", 16 /*iHashPartitionsNum*/);
opg-oracledb> opgdl = OraclePropertyGraphDataLoader.getInstance();
opg-oracledb> vfile = "/home/rdf/liveJ.opv.gz"  // vertex flat file
opg-oracledb> efile = "/home/rdf/liveJ.ope.gz"  // edge flat file
opg-oracledb> lStarttime = System.currentTimeMillis();
opg-oracledb> opgdl.loadDataWithSqlLdr(opg, "pg", "pg", "orcl122", vfile, efile, 8/*dop*/, false/*bNamedPipe*/, "/u01/app/rdf/product/12.2.0/dbhome_1/bin/sqlldr", true, "PDML=T,PDDL=T,NO_DUP=T,");
opg-oracledb> System.out.println("total time in ms " + (System.currentTimeMillis() - lStarttime));
total time in ms 710577

In this test, I bulk loaded ~70M edges in 710 seconds. This is at least 15x faster than the performance reported on the system with slow IO. Note that within those 710 seconds, we not only loaded the data into Oracle Database but also built a few key B*Tree indexes on the underlying tables holding vertices and edges.

Cheers,


Graph Features

Deploy In-Memory Parallel Graph Analytics (PGX) to Oracle Java Cloud Service (JCS)

This is the follow-up to a previous blog, "How to enable Oracle Database Cloud Service with Property Graph Capabilities". A PGX server can run in two modes: embedded mode and remote mode. Embedded mode uses PGX's embedded Java container; to run PGX in remote mode, you deploy the PGX web application to a remote Java container, for example Tomcat or WebLogic Server. Oracle Java Cloud Service (JCS) offers a very simple way to host the PGX web application in a WebLogic Server. This blog tells you how to deploy the PGX web application to an existing JCS WebLogic server and connect to it through SSL. For a tutorial on how to create an Oracle JCS WebLogic service, please visit the link, or download the HOW-TO document at the end of this blog.

Download and deploy PGX to the JCS instance

Download Oracle Big Data Spatial and Graph with this link: https://support.oracle.com/epmos/faces/PatchDetail?patchId=27118297&requestId=21722011
Unzip the file and locate the PGX war file, pgx-webapp-wls.war, under the folder md/property_graph/pgx/server/

Then, in your browser:
- Open the WebLogic Server Console log-in page and log in with the user weblogic and the password you defined when creating the instance.
- At the top left, click the button "Lock & Edit".
- On the left, under Domain Structure, click Deployments.
- In the middle, under Deployments, click Install.
- Above the Path textbox, click the link to Upload your file(s).
- Next to Deployment Archive, click "Browse…" and open the "pgx-webapp-<version>-wls.war" file you downloaded earlier.
- Click Next and wait for the upload to complete. The upload percentage is displayed in the lower left corner of your browser; when complete, you will see a message indicating the upload was successful. Click Next.
- Under "Choose installation type and scope", leave the default selection ("Install this deployment as an application") and click Next.
- Under Servers, check JCS1_dom_adminserver; you can also use the clusters option and check all servers in the cluster. Click Next.
- Under General, leave the name as it is and click Next.
- Under Additional Configuration, leave the default ("Yes, take me to the deployment's configuration screen.") and click Finish.
- At the top left, click the button "Activate Changes".
- In the middle, under "Settings for pgx-webapp-name", click the "Control" tab, check the box next to "pgx-webapp-name", click the "Start" menu, and select "Servicing all requests". Click "Yes" when prompted to confirm.
- You should now see that the pgx state is "Active".

Congratulations! The PGX server is now deployed to Java Cloud Service and ready to be used with your PGX client instance for development.

Connect to the PGX server using HTTPS/SSL

Download the server-side certificate. Simply use Firefox, connect to the SSL-based endpoint, and export the certificate. Note the host name; you will need to add it to your client hosts file later.

Add the server-side certificate to your client* key store. First, create an empty keystore. The following example command creates a keystore.jks. It will ask a few questions along the way, but those are very straightforward. You definitely want to use a much stronger password than "changeit".
keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048

Import the server-side certificate into the above keystore:

keytool -import -trustcacerts -alias DemoCertFor_JCS1_domain -file <YOUR_CERTIFICATE_FILE_HERE> -keystore keystore.jks

Specify the keystore for your client. For example, if you are using the built-in Groovy shell, you can add the following to the JAVA_OPTIONS setting in gremlin-opg-rdbms.sh:

-Djavax.net.ssl.trustStore=/home/oracle/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit

Or run the following in your console:

export JAVA_OPTIONS="$JAVA_OPTIONS -Djavax.net.ssl.trustStore=/home/oracle/jcs_final/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit"

Configure the client hosts file for the connection. Because the self-signed certificate uses a host name instead of an IP address, you need to modify the client-side hosts file to translate that host name to an IP address:
1. Log in to the client machine.
2. Open the /etc/hosts file (assuming the client system is Linux based).
3. Add one line like the following to the end of the file and save the change:
<YOUR_IP_ADDRESS>   <YOUR_HOST_NAME>

You may refer to another post on this topic: https://blogs.oracle.com/oraclespatial/four-quick-steps-to-use-a-https-ssl-based-pgx-service-endpoint

Now it is time to test the secure connection to the PGX server from your client.

cd $ORACLE_HOME/md/property_graph/dal/groovy
sh ./gremlin-opg-rdbms.sh

cfg = GraphConfigBuilder.forPropertyGraphRdbms()
  .setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(HOST=YOUR_CLIENT_IP)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=PDB1.uspm020.oraclecloud.internal)))")
  .setUsername("scott").setPassword("<YOUR_PASSWORD_HERE>")
  .setName("connections")
  .setMaxNumConnections(8)
  .setLoadEdgeLabel(false)
  .addVertexProperty("name", PropertyType.STRING, "default_name")
  .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000")
  .build();
opg = OraclePropertyGraph.getInstance(cfg);
opg.clearRepository(); // start from scratch
opgdl = OraclePropertyGraphDataLoader.getInstance();
vfile = "../../data/connections.opv" // vertex flat file
efile = "../../data/connections.ope" // edge flat file
opgdl.loadData(opg, vfile, efile, 2, 10000, true, null);
session = Pgx.createSession("https://YOUR_HOST_NAME:PORT/pgx", "session-id-1231");
analyst = session.createAnalyst();
pgxGraph = session.readGraphWithProperties(opg.getConfig(), true);

// count triangles
analyst.countTriangles(pgxGraph, true);

// PGQL
pgxResultSet = pgxGraph.queryPgql("SELECT n,m WHERE (n) -> (m)")
pgxResultSet.print(10);
+------------------------------------+
| n                | m               |
+------------------------------------+
| PgxVertex[ID=2]  | PgxVertex[ID=1] |
| PgxVertex[ID=3]  | PgxVertex[ID=1] |
| PgxVertex[ID=6]  | PgxVertex[ID=1] |
| PgxVertex[ID=7]  | PgxVertex[ID=1] |
| PgxVertex[ID=8]  | PgxVertex[ID=1] |
| PgxVertex[ID=9]  | PgxVertex[ID=1] |
| PgxVertex[ID=10] | PgxVertex[ID=1] |
| PgxVertex[ID=11] | PgxVertex[ID=1] |
| PgxVertex[ID=12] | PgxVertex[ID=1] |
| PgxVertex[ID=19] | PgxVertex[ID=1] |
+------------------------------------+

*A PGX client can be any machine with the database (12.2 or above) installed.

The HOW-TO document is available HERE.


Use Python with Oracle Spatial and Graph Property Graph (III)

The previous two installments (I, II) covered some basics of using Python with Oracle Spatial and Graph (OSG) Property Graph. In this third installment, I will show how to use BokehJS [1] together with the property graph feature for interactive data visualization.

The graph data I am going to use is a movie graph, created from the movie demo that is bundled with the Oracle Big Data Lite VM. There are two types of vertices in this graph: users and movies, and users and movies are linked by edges denoting click relationships between a user and a movie.

The code snippet assumes we have opg, an instance of OraclePropertyGraph. It starts by reading two properties, "budget" and "gross", from a set of movie vertices, plots a budget vs. gross chart with matplotlib, and finally uses Bokeh to plot the same data in an interactive manner. opg.getVertices() is the API used to fetch detailed information on vertices; here I am reading both movie and user vertices. It would be smarter to add a search criterion "type"="movie" to focus on just the movies, but you get the idea.

With matplotlib, budget and gross are plotted as a static image. Switching to BokehJS, one can easily interact (pan and zoom) with the same chart, for example zooming in on the bottom left corner.

Acknowledgement: many thanks to my teammates Hugo, Gaby, and Ana for their contributions to this blog!

Happy Thanksgiving!

References
[1] https://bokeh.pydata.org/en/latest/docs/dev_guide/bokehjs.html


Four Quick Steps to Use a HTTPS (SSL) based PGX Service Endpoint

In the past, I have shown many examples of connecting a client to a remote PGX service endpoint. For simplicity, the endpoint used was HTTP based. In practice, one may want to use an HTTPS (SSL) based endpoint for better security. Given the recent Equifax security breach, who wouldn't use better security?! If the SSL-enabled endpoint is using a self-signed certificate, then you will most likely see the error "unable to find valid certification path to requested target". In this blog, I am going to show you four easy steps to address that problem.

Step 1: Create an empty keystore. The following example command creates a keystore.jks. It will ask a few questions along the way, but those are very straightforward. You definitely want to use a much stronger password than "changeit".

keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass changeit -validity 360 -keysize 2048

Step 2: Download the server-side certificate. I simply used Firefox, connected to the SSL-based endpoint, and exported the certificate.

Step 3: Add the server-side certificate to the above keystore.

keytool -import -trustcacerts -alias DemoCertFor_JCS2_domain -file <YOUR_CERTIFICATE_FILE_HERE> -keystore keystore.jks

Step 4: Specify the keystore for your client. For example, if you are using the built-in Groovy shell, you can add the following to the JAVA_OPTIONS setting in gremlin-opg-rdbms.sh.

-Djavax.net.ssl.trustStore=/home/oracle/keystore.jks -Djavax.net.ssl.trustStorePassword=changeit

Time to test; the following should go through without any errors. Note that to avoid seeing "javax.net.ssl.SSLException: hostname in certificate didn't match", a hostname instead of an IP address has to be used when specifying <YOUR_PGX_SERVER>.

opg-oracledb> s=Pgx.createSession("https://<YOUR_PGX_SERVER>:<PORT>/pgx", "s1")

Cheers,
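If you would rather set the trust store from code than via JAVA_OPTIONS, the same two system properties can be set before the first HTTPS call. Here is a small, self-contained Java sketch that pings the PGX version endpoint over SSL; the /version path is the one used with curl elsewhere on this blog, and the host and port are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class PgxSslCheck {
  public static void main(String[] args) throws Exception {
    // Equivalent to -Djavax.net.ssl.trustStore=... on the command line.
    System.setProperty("javax.net.ssl.trustStore", "/home/oracle/keystore.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    HttpsURLConnection conn = (HttpsURLConnection)
        new URL("https://<YOUR_PGX_SERVER>:<PORT>/version").openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      System.out.println("PGX version: " + in.readLine());
    }
  }
}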


Use Python with Oracle Spatial and Graph Property Graph (II)

This is the second installment of the "Use Python with Oracle Spatial and Graph Property Graph" series. Click here for the first one.  Once in a while, I got a question "Do you like Java or Python better?" I have used Java for 17+ years so I do like it a lot. In the past few years however I got to use Python more and more, partly because many sales/technical consultants use Python with graph technologies and I need to answer their Python questions, partly because of Python's use in Tensorflow, and partly because I am teaching my kids to write code in Python. There are certainly pros and cons of these two languages. To me, they are, however, just tools along the same line as SQL/NoSQL/MapReduce/Spark, etc. I use whatever tools that suit my purpose, put them aside once I am done. No strings attached. Seriously :) Now, back to business. The property graph feature (in OSG or BDSG) has many Java APIs. The existing Python wrappers do not cover all of them. So what do I do when I need to use a function in one of the Java APIs but not yet exposed by the Python wrapper? This is where JPype come into play. The following Jupyter Notebook snippets reflect a most recent Q/A regarding how to generate a graph out of a relational data source using Python. First, connect to an Oracle Database (12.2.0.1 or higher). Then, I invoke the convertRDBMSTable2OPV Java API from Python using JPype. In this example, a set of vertices is created out of the famous employee "EMP" table. As you can see, I am using JClass to pull in the functions I need. That's it. You should see a .opv file generated under /tmp. cat /tmp/testg.opv 7369,name,1,SMITH,, 7499,name,1,ALLEN,, 7521,name,1,WARD,, 7566,name,1,JONES,, 7654,name,1,MARTIN,, 7698,name,1,BLAKE,, 7782,name,1,CLARK,, 7788,name,1,SCOTT,, 7839,name,1,KING,, 7844,name,1,TURNER,, 7876,name,1,ADAMS,, 7900,name,1,JAMES,, 7902,name,1,FORD,, 7934,name,1,MILLER,, Cheers,

This is the second installment of the "Use Python with Oracle Spatial and Graph Property Graph" series. Click here for the first one.  Once in a while, I got a question "Do you like Java or...

How to enable Oracle Database Cloud Service with Property Graph Capabilities

With Oracle Database release 12.2, Oracle property graph capability is available on the Oracle Database side as well. The property graph feature comes with the Oracle Spatial and Graph product and provides graph data management and analysis. It is also a feature of Oracle Big Data Spatial and Graph; you can refer to Oracle Big Data Spatial and Graph for more details on that product. This blog tells you how to enable property graph capabilities on an existing Oracle Database Cloud Service instance. For a tutorial on how to create an Oracle Database Cloud Service instance, please visit the link, or download the HOW-TO document at the end of this blog.

Enable 32K Varchar2, which is required by property graph

A detailed description of the following steps can be found in: https://docs.oracle.com/database/121/REFRN/GUID-D424D23B-0933-425F-BC69-9C0E6724693C.htm#REFRN10321

Before we do that, we may check the default MAX_STRING_SIZE value:

SQL> show parameters max_string;
SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
Session altered.
SQL> ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.
…
Database opened.
SQL> ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
Pluggable database altered.
SQL> quit
Disconnected from Oracle Database 12c EE High Perf Release 12.2.0.1…

[oracle@GDBCS1 dbhome_1]$ cd $ORACLE_HOME/rdbms/admin
[oracle@GDBCS1 admin]$ mkdir /u01/utl32k_cdb_pdbs_output
[oracle@GDBCS1 admin]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utl32k_cdb_pdbs_output' -b utl32k_cdb_pdbs_output utl32k.sql
…
Enter Password:
catcon.pl: completed successfully

[oracle@GDBCS1 admin]$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup
SQL> ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;
Pluggable database altered.
SQL> quit
Disconnected from Oracle Database 12c EE High Perf Release 12.2.0.1…

[oracle@GDBCS1 admin]$ mkdir /u01/utlrp_cdb_pdbs_output
[oracle@GDBCS1 admin]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utlrp_cdb_pdbs_output' -b utlrp_cdb_pdbs_output utlrp.sql
…
Enter Password:
…
catcon.pl: completed successfully
[oracle@GDBCS1 admin]$

Validate the change made to MAX_STRING_SIZE. To verify, run the following commands; you should see the value of max_string_size changed to "EXTENDED".

[oracle@GDBCS1 admin]$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
Session altered.
SQL> show parameters max_string;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED

Create a schema for later use by unlocking the user 'scott':

SQL> conn / as sysdba
Connected.
SQL> alter session set container=PDB1;
Session altered.
SQL> alter user scott identified by tiger789#_O0 account unlock;
User altered.

Apply a patch to enable database support for PGQL

Download this patch: https://support.oracle.com/epmos/faces/PatchDetail?patchId=25640325
Notes on transferring files to/from DBCS are here: https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/copy-files-node.html
In this exercise, we will use WinSCP to upload the patch file to the DBCS instance with the private key. Follow the readme and apply the patch as described below.
[oracle@GDBCS1 gpatch]$ cd 25640325/
[oracle@GDBCS1 25640325]$ ls
p25640325_osg_pg_update.zip  README.txt
[oracle@GDBCS1 25640325]$ rm -rf $ORACLE_HOME/md/property_graph/*
[oracle@GDBCS1 25640325]$ rm -f $ORACLE_HOME/md/admin/*opg*
[oracle@GDBCS1 25640325]$ cd $ORACLE_HOME
[oracle@GDBCS1 dbhome_1]$ unzip /home/oracle/gpatch/25640325/p25640325_osg_pg_update.zip
Archive:  /home/oracle/gpatch/25640325/p25640325_osg_pg_update.zip
   creating: md/property_graph/examples/
   creating: md/property_graph/pyopg/
…

Re-install the Oracle Spatial and Graph Property Graph schema PL/SQL packages:

cd $ORACLE_HOME/md/admin/
sqlplus / as sysdba
alter session set container=pdb1;
@catopg.sql

Loading a Property Graph in Oracle Database using Groovy

Before you can start building a text index or running graph analytics, you will first need to get the graph data loaded into the database. The following shows an example flow that uses the parallel property graph data loader to ingest a property graph representing a small social network into an Oracle Database (12c Release 2). The property graph data is encoded in flat file format (.opv/.ope). DBCS includes a built-in Groovy shell (based on the Gremlin Groovy shell script); with this command-line shell interface, you can perform graph operations using the Java APIs.

cd $ORACLE_HOME/md/property_graph/dal/groovy
$ sh ./gremlin-opg-rdbms.sh
--------------------------------
opg-oracledb> // Customize JdbcUrl, Username, and Password in the following graph config setting.
opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms()
  .setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(HOST=129.150.84.48)(PORT=1521)(PROTOCOL=tcp))(CONNECT_DATA=(SERVICE_NAME=PDB1.uspm020.oraclecloud.internal)))")
  .setUsername("scott").setPassword("tiger789#_O0")
  .setName("connections")
  .setMaxNumConnections(8)
  .setLoadEdgeLabel(false)
  .addVertexProperty("name", PropertyType.STRING, "default_name")
  .addEdgeProperty("weight", PropertyType.DOUBLE, "1000000")
  .build();
opg-oracledb> opg = OraclePropertyGraph.getInstance(cfg);
opg-oracledb> opg.clearRepository();   // start from scratch
opg-oracledb> opgdl = OraclePropertyGraphDataLoader.getInstance();
opg-oracledb> vfile = "../../data/connections.opv"  // vertex flat file
opg-oracledb> efile = "../../data/connections.ope"  // edge flat file
opg-oracledb> opgdl.loadData(opg, vfile, efile, 2, 10000, true, null);
opg-oracledb> opg.countVertices()
==>78
opg-oracledb> opg.countEdges()
==>164

Run Graph Analysis and PGQL with an embedded PGX

Find the people who have a direct link between them:
// Create an in-memory analytics session and analyst
opg-oracledb> session = Pgx.createSession("session_ID_1");
opg-oracledb> analyst = session.createAnalyst();
// Read graph data from the database into memory
opg-oracledb> pgxGraph = session.readGraphWithProperties(opg.getConfig(), true);
// Count triangles
analyst.countTriangles(pgxGraph, true);
==>22
opg-oracledb> pgxResultSet = pgxGraph.queryPgql("SELECT n,m WHERE (n) -> (m)")
==>PgqlResultSetImpl[graph=connections,numResults=164]
opg-oracledb> pgxResultSet.print(10);
+------------------------------------+
| n                | m               |
+------------------------------------+
| PgxVertex[ID=2]  | PgxVertex[ID=1] |
| PgxVertex[ID=3]  | PgxVertex[ID=1] |
| PgxVertex[ID=6]  | PgxVertex[ID=1] |
| PgxVertex[ID=7]  | PgxVertex[ID=1] |
| PgxVertex[ID=8]  | PgxVertex[ID=1] |
| PgxVertex[ID=9]  | PgxVertex[ID=1] |
| PgxVertex[ID=10] | PgxVertex[ID=1] |
| PgxVertex[ID=11] | PgxVertex[ID=1] |
| PgxVertex[ID=12] | PgxVertex[ID=1] |
| PgxVertex[ID=19] | PgxVertex[ID=1] |
+------------------------------------+

Congratulations! You have now set up your Oracle Database Cloud Service instance with property graph capabilities, loaded sample property graph data, and performed property graph analysis with an embedded PGX. Next, you may follow another HOW-TO tutorial to deploy your own PGX service to a WebLogic server and access it remotely.

The HOW-TO document is available HERE


Graph Features

Property Graph (v2.2) in a Box

To start, download Oracle Big Data Lite Virtual Machine v4.9 from the following page: http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html It is recommended to use Oracle VM VirtualBox 5.0.16 or later to import this virtual machine. Once the import completes successfully, log in as oracle (the default password is welcome1). On the desktop, there is a "Start/Stop Services" icon; double-clicking it opens a console with a list of services. Check Oracle NoSQL Database and hit Enter, and the built-in Oracle NoSQL Database will start automatically. To shut down Oracle NoSQL Database, simply repeat this process.

Next, I am going to show you a simple Groovy-based script that loads a sample property graph representing a small social network, reads out vertices and edges, writes the graph out, and finally counts the number of triangles and runs a simple PGQL query against this network. Open a Linux terminal in this virtual machine and type in the following:

$ cd /opt/oracle/oracle-spatial-graph/property_graph/dal/groovy
$ ./gremlin-opg-nosql.sh
--------------------------------
opg-nosql>
// Connect to the Oracle NoSQL Database in this virtual box
server = new ArrayList<String>();
server.add("bigdatalite:5000");
cfg = GraphConfigBuilder.forPropertyGraphNosql()                 \
      .setName("connections").setStoreName("kvstore")            \
      .setHosts(server)                                          \
      .addEdgeProperty("weight", PropertyType.DOUBLE, "1000000") \
      .addVertexProperty("company", PropertyType.STRING, "NULL") \
      .setMaxNumConnections(2).build();

// Get an instance of OraclePropertyGraph
opg = OraclePropertyGraph.getInstance(cfg);
opg.clearRepository();

// Use the parallel data loader API to load a
// sample property graph in flat file format with a
// degree of parallelism (DOP) of 2
vfile="../../data/connections.opv"
efile="../../data/connections.ope"
opgdl=OraclePropertyGraphDataLoader.getInstance();
opgdl.loadData(opg, vfile, efile, 2);

// Read through the vertices
opg.getVertices();

// Read through the edges
opg.getEdges();

// You can add vertices/edges, change properties, etc. here.
// ...

// Serialize the graph out into a pair of flat files with DOP=2
vOutput="/tmp/mygraph.opv"
eOutput="/tmp/mygraph.ope"
OraclePropertyGraphUtils.exportFlatFiles(opg, vOutput, eOutput, 2, false);

// Create an analyst instance
session=Pgx.createSession("session_ID_1");
analyst=session.createAnalyst();

// Read graph data from database into memory
pgxGraph = session.readGraphWithProperties(opg.getConfig(), true);

// Call the count-triangles API
analyst.countTriangles(pgxGraph, true);
==>22

// Run a PGQL query
pgxResultSet = pgxGraph.queryPgql("SELECT n,m WHERE (n)->(m)->(n), n!=m")

// Print the first 10 records from the result set
pgxResultSet.print(10);
+-------------------------------------+
| n                | m                |
+-------------------------------------+
| PgxVertex[ID=1]  | PgxVertex[ID=2]  |
| PgxVertex[ID=36] | PgxVertex[ID=2]  |
| PgxVertex[ID=37] | PgxVertex[ID=2]  |
| PgxVertex[ID=1]  | PgxVertex[ID=11] |
| PgxVertex[ID=10] | PgxVertex[ID=11] |
| PgxVertex[ID=18] | PgxVertex[ID=16] |
| PgxVertex[ID=15] | PgxVertex[ID=16] |
| PgxVertex[ID=16] | PgxVertex[ID=18] |
| PgxVertex[ID=21] | PgxVertex[ID=23] |
| PgxVertex[ID=25] | PgxVertex[ID=23] |
+-------------------------------------+
==>null

// Get the total number of records in the result set
pgxResultSet.getNumResults()
==>132

Note that the same virtual box has Apache HBase installed as well as Oracle NoSQL Database. Once Apache HBase is configured and started, the same script (except the DB connection initialization part) can be used without a change; a rough sketch of that initialization follows below.
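As a rough sketch, the HBase variant of that connection initialization might look like the following. The builder entry point and ZooKeeper setters below are assumptions modeled on the NoSQL builder above, so verify the exact method names against the Big Data Spatial and Graph Java API reference:

// Hypothetical HBase connection setup, modeled on the NoSQL builder shown earlier.
cfg = GraphConfigBuilder.forPropertyGraphHbase()                 \
      .setName("connections")                                    \
      .setZkQuorum("bigdatalite").setZkClientPort(2181)          \
      .addEdgeProperty("weight", PropertyType.DOUBLE, "1000000") \
      .addVertexProperty("company", PropertyType.STRING, "NULL") \
      .build();
// Everything after OraclePropertyGraph.getInstance(cfg) is unchanged.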


Graph Features

A Few Easy Steps to Start a REST Endpoint for Oracle Spatial and Graph Property Graph

The property graph feature has been part of the Oracle Spatial and Graph option since Oracle Database 12c Release 2 (12.2.0.1). With this feature, one can access and manipulate graph data stored in Oracle Database using Java APIs and Python wrappers. In this blog, I am going to show you a few easy steps to start an example REST endpoint for the property graph feature.

cd ${ORACLE_HOME}/md/property_graph/dal/webapp

# Make sure we have the following jar files under the webapp/ directory.
# You can get the missing jar files from
# http://tinkerpop.com/downloads/rexster/rexster-server-2.3.0.zip
[webapp]$ ls -1 *.jar
blueprints-rexster-graph-2.3.0.jar
commons-cli-1.2.jar
commons-collections-3.2.1.jar
commons-configuration-1.6.jar
commons-lang-2.4.jar
commons-logging-1.0.4.jar
grizzly-core-2.2.16.jar
grizzly-http-2.2.16.jar
grizzly-http-server-2.2.16.jar
grizzly-http-servlet-2.2.16.jar
javassist-3.15.0-GA.jar
javax.servlet-api-3.0.1.jar
jersey-core-1.17.jar
jersey-json-1.17.jar
jersey-server-1.17.jar
jersey-servlet-1.17.jar
jung-algorithms-2.0.1.jar
jung-api-2.0.1.jar
jung-visualization-2.0.1.jar
msgpack-0.6.5.jar
rexster-core-2.3.0.jar
rexster-protocol-2.3.0.jar
rexster-server-2.3.0.jar

# Create rexster.sh under webapp/
$ cat rexster.sh
#!/bin/bash

CP=$( echo `dirname $0`/*.jar . | sed 's/ /:/g')
CP=$CP:$(find -L `dirname $0`/../../lib/ -name "*.jar" | tr '\n' ':')
CP=$CP:$(find -L `dirname $0`/../groovy/ -name "*.jar" | tr '\n' ':')
PUBLIC=`dirname $0`/../public/

# Find Java
if [ "$JAVA_HOME" = "" ] ; then
    JAVA="java -server"
else
    JAVA="$JAVA_HOME/bin/java -server"
fi

# Set Java options
if [ "$JAVA_OPTIONS" = "" ] ; then
    JAVA_OPTIONS="-Xms2G -Xmx4G"
fi

# Launch the application
$JAVA $JAVA_OPTIONS -cp $CP com.tinkerpop.rexster.Application $@ -wr $PUBLIC

# Return the program's exit code
exit $?

# Create and customize rexster.xml.
$ cat rexster.xml
<?xml version="1.0" encoding="UTF-8"?>
<rexster>
    <script-engine-reset-threshold>-1</script-engine-reset-threshold>
    <script-engine-init>data/init.groovy</script-engine-init>
    <script-engines>gremlin-groovy</script-engines>
    <oracle-pool-size>3</oracle-pool-size>
    <oracle-property-graph-backends>
        <backend>
            <backend-name>rdbms_connection</backend-name>
            <backend-type>oracle_rdbms</backend-type>
            <properties>
                <jdbcUrl>jdbc:oracle:thin:@127.0.0.1:1521:orcl</jdbcUrl>
                <user>YOUR_SCHEMA_HERE</user>
                <password>YOUR_PASSWORD_HERE</password>
            </properties>
        </backend>
    </oracle-property-graph-backends>
    <graphs>
        <graph>
            <graph-name>YOUR_GRAPH_NAME_HERE</graph-name>
            <graph-type>oracle.pg.rdbms.OraclePropertyGraphConfiguration</graph-type>
            <properties>
                <jdbcUrl>jdbc:oracle:thin:@127.0.0.1:1521:orcl</jdbcUrl>
                <user>YOUR_SCHEMA_HERE</user>
                <password>YOUR_PASSWORD_HERE</password>
            </properties>
            <extensions>
                <allows>
                    <allow>tp:gremlin</allow>
                </allows>
            </extensions>
        </graph>
    </graphs>
</rexster>

# Start the REST endpoint
sh rexster.sh --start -c ./rexster.xml

# Give it a test: open the following URL in your web browser
http://<YOUR_HOSTNAME>:8182/graphs/

Note: the following set of wget commands can fetch the libraries (jar files) mentioned in this blog post. One can run them from a Linux terminal. Be sure to set a proxy if your machine is behind a firewall.

echo "Downloading dependency blueprints-rexster-graph-2.3.0.jar"
wget http://central.maven.org/maven2/com/tinkerpop/blueprints/blueprints-rexster-graph/2.3.0/blueprints-rexster-graph-2.3.0.jar
echo "Downloading dependency commons-cli-1.2.jar"
wget http://central.maven.org/maven2/commons-cli/commons-cli/1.2/commons-cli-1.2.jar
echo "Downloading dependency commons-collections-3.2.1.jar"
wget http://central.maven.org/maven2/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar
echo "Downloading dependency commons-configuration-1.6.jar"
wget http://central.maven.org/maven2/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar
echo "Downloading dependency commons-lang-2.4.jar"
wget http://central.maven.org/maven2/commons-lang/commons-lang/2.4/commons-lang-2.4.jar
echo "Downloading dependency commons-logging-1.0.4.jar"
wget http://central.maven.org/maven2/commons-logging/commons-logging/1.0.4/commons-logging-1.0.4.jar
echo "Downloading dependency grizzly-core-2.2.16.jar"
wget http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-core/2.2.16/grizzly-core-2.2.16.jar
echo "Downloading dependency grizzly-http-2.2.16.jar"
wget http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-http/2.2.16/grizzly-http-2.2.16.jar
echo "Downloading dependency grizzly-http-server-2.2.16.jar"
wget http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-http-server/2.2.16/grizzly-http-server-2.2.16.jar
echo "Downloading dependency grizzly-http-servlet-2.2.16.jar"
wget http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-http-servlet/2.2.16/grizzly-http-servlet-2.2.16.jar
echo "Downloading dependency javassist-3.15.0-GA.jar"
wget http://central.maven.org/maven2/org/javassist/javassist/3.15.0-GA/javassist-3.15.0-GA.jar
echo "Downloading dependency javax.servlet-api-3.0.1.jar"
wget http://central.maven.org/maven2/javax/servlet/javax.servlet-api/3.0.1/javax.servlet-api-3.0.1.jar
echo "Downloading dependency jersey-core-1.17.jar"
wget http://central.maven.org/maven2/com/sun/jersey/jersey-core/1.17/jersey-core-1.17.jar
echo "Downloading dependency jersey-json-1.17.jar"
wget http://central.maven.org/maven2/com/sun/jersey/jersey-json/1.17/jersey-json-1.17.jar
echo "Downloading dependency jersey-server-1.17.jar"
wget http://central.maven.org/maven2/com/sun/jersey/jersey-server/1.17/jersey-server-1.17.jar
echo "Downloading dependency jersey-servlet-1.17.jar"
wget http://central.maven.org/maven2/com/sun/jersey/jersey-servlet/1.17/jersey-servlet-1.17.jar
echo "Downloading dependency jung-algorithms-2.0.1.jar"
wget http://central.maven.org/maven2/net/sf/jung/jung-algorithms/2.0.1/jung-algorithms-2.0.1.jar
echo "Downloading dependency jung-api-2.0.1.jar"
wget http://central.maven.org/maven2/net/sf/jung/jung-api/2.0.1/jung-api-2.0.1.jar
echo "Downloading dependency jung-visualization-2.0.1.jar"
wget http://central.maven.org/maven2/net/sf/jung/jung-visualization/2.0.1/jung-visualization-2.0.1.jar
echo "Downloading dependency msgpack-0.6.5.jar"
wget http://central.maven.org/maven2/org/msgpack/msgpack/0.6.5/msgpack-0.6.5.jar
echo "Downloading dependency rexster-core-2.3.0.jar"
wget http://central.maven.org/maven2/com/tinkerpop/rexster/rexster-core/2.3.0/rexster-core-2.3.0.jar
echo "Downloading dependency rexster-protocol-2.3.0.jar"
wget http://central.maven.org/maven2/com/tinkerpop/rexster/rexster-protocol/2.3.0/rexster-protocol-2.3.0.jar
echo "Downloading dependency rexster-server-2.3.0.jar"
wget http://central.maven.org/maven2/com/tinkerpop/rexster/rexster-server/2.3.0/rexster-server-2.3.0.jar
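If you prefer to test the endpoint from code rather than a browser, the following Groovy one-liner fetches the same graph listing. Replace YOUR_HOSTNAME with the host running rexster.sh; this is just a smoke test, nothing Rexster-specific:

// Groovy's URL.text GDK extension fetches the JSON listing of configured graphs.
println new URL("http://YOUR_HOSTNAME:8182/graphs/").text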


Graph Features

An Easy Way to Get Graph Config in JSON for Oracle Database

To use the property graph functions provided by Oracle Database 12.2.0.1 (and later), it is important to get the graph configuration right. A typical graph configuration contains critical information such as DB connection info, the graph name, the properties to be read/used, and more. It is straightforward to construct a graph config Java object using the provided Java APIs. In some cases, however, a user may want to generate a graph config in JSON and use it in a web application. Below, I am showing an easy way to get a graph config in JSON for Oracle Database.

First, start the built-in Groovy shell:

[oracle@groovy/]$ sh ./gremlin-opg-rdbms.sh

Next, construct a graph config using the Java APIs:

opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms()
 .setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl")
 .setUsername("scott").setPassword("<your_password>")
 .setName("my_graph")
 .setMaxNumConnections(2)
 .setLoadEdgeLabel(false)
 .addVertexProperty("name", PropertyType.STRING, "default_name")
 .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000")
 .build();
==>{"max_num_connections":2,"username":"scott","db_engine":"RDBMS","loading":{"load_edge_label":false},"error_handling":{},"edge_props":[{"name":"cost","default":"1000000","type":"double"}],"vertex_props":[{"name":"name","default":"default_name","type":"string"}],"name":"my_graph","jdbc_url":"jdbc:oracle:thin:@127.0.0.1:1521:orcl","attributes":{},"format":"pg","password":"<your_password>"}

Hey, the JSON-based graph config is already there!

Cheers,
Zhe
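P.S. The Groovy shell renders results via toString(), so the JSON you see after "==>" is the config object's own serialization. If you want that JSON in a file for a web application, a minimal sketch from the same shell session is shown below (the output path is purely illustrative):

// Capture the config's JSON serialization and write it to a file.
opg-oracledb> jsonConfig = cfg.toString();
opg-oracledb> new File("/tmp/my_graph_config.json").write(jsonConfig);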


Graph Features

Property Graph Query Language (PGQL) support has been added to Oracle Database 12.2.0.1!

Great news! Oracle just released a patch [1] that adds Property Graph Query Language (PGQL) support to the existing property graph functions in Oracle Spatial and Graph in Oracle Database 12c Release 2 (12.2.0.1). At a high level, PGQL is a SQL-like declarative language that allows a user to express a graph query pattern that consists of vertices and edges, and constraints on the properties of those vertices and edges. Once this patch is applied to your database instance, you can run the following quick Groovy/Java-based test to see how it works.

$ cd $ORACLE_HOME/md/property_graph/dal/groovy
$ sh ./gremlin-opg-rdbms.sh
--------------------------------
opg-oracledb> // It is very likely that you need to customize the JDBC URL, username, and password in the following graph config setting.
opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms()
 .setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl")
 .setUsername("scott").setPassword("tiger")
 .setName("connections")
 .setMaxNumConnections(8)
 .setLoadEdgeLabel(false)
 .addVertexProperty("name", PropertyType.STRING, "default_name")
 .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000")
 .build();
opg-oracledb> opg = OraclePropertyGraph.getInstance(cfg);
opg-oracledb> opg.clearRepository();     // start from scratch
opg-oracledb> opgdl=OraclePropertyGraphDataLoader.getInstance();
opg-oracledb> vfile="../../data/connections.opv"  // vertex flat file
==>../../data/connections.opv
opg-oracledb> efile="../../data/connections.ope"  // edge flat file
==>../../data/connections.ope
opg-oracledb> opgdl.loadData(opg, vfile, efile, 2, 10000, true, null);
==>null
opg-oracledb> opg.countVertices()
==>78
opg-oracledb> opg.countEdges()
==>164

opg-oracledb> // Create an in-memory analytics session and analyst
opg-oracledb> session=Pgx.createSession("session_ID_1");
opg-oracledb> analyst=session.createAnalyst();

opg-oracledb> // Read graph data from database into memory
opg-oracledb> pgxGraph = session.readGraphWithProperties(opg.getConfig());
==>PgxGraph[name=connections,N=78,E=164,created=1488415009543]
opg-oracledb> pgxResultSet = pgxGraph.queryPgql("SELECT n,m WHERE (n) -> (m)")
==>PgqlResultSetImpl[graph=connections,numResults=164]
opg-oracledb> pgxResultSet.print(10);
+------------------------------------+
| n                | m               |
+------------------------------------+
| PgxVertex[ID=2]  | PgxVertex[ID=1] |
| PgxVertex[ID=3]  | PgxVertex[ID=1] |
| PgxVertex[ID=6]  | PgxVertex[ID=1] |
| PgxVertex[ID=7]  | PgxVertex[ID=1] |
| PgxVertex[ID=8]  | PgxVertex[ID=1] |
| PgxVertex[ID=9]  | PgxVertex[ID=1] |
| PgxVertex[ID=10] | PgxVertex[ID=1] |
| PgxVertex[ID=11] | PgxVertex[ID=1] |
| PgxVertex[ID=12] | PgxVertex[ID=1] |
| PgxVertex[ID=19] | PgxVertex[ID=1] |
+------------------------------------+

Cheers,

[1] https://support.oracle.com/epmos/faces/PatchDetail?patchId=25640325
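P.S. Once the basic query works, you can layer constraints onto the pattern using the same pre-1.0 PGQL syntax shown above. For example, the following query (the same reciprocal-connection pattern used in the Big Data Lite walkthrough elsewhere on this blog) finds vertex pairs connected in both directions and then retrieves the match count:

opg-oracledb> pgxResultSet = pgxGraph.queryPgql("SELECT n, m WHERE (n) -> (m) -> (n), n != m")
opg-oracledb> pgxResultSet.print(10);
opg-oracledb> pgxResultSet.getNumResults()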


Graph Features

Loading a Property Graph in Oracle Database using Groovy

Before you can start building a text index or running graph analytics, one obvious thing you need to do is get the graph data loaded into the database. The following shows an example flow of using the parallel property graph data loader to ingest a property graph, representing a small social network and encoded in flat file format (.opv/.ope), into an Oracle Database (12c Release 2).

$ cd $ORACLE_HOME/md/property_graph/dal/groovy
$ sh ./gremlin-opg-rdbms.sh
--------------------------------
opg-oracledb> // It is very likely that you need to customize the JDBC URL, username, and password in the following graph config setting.
opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms()
 .setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521:orcl")
 .setUsername("scott").setPassword("tiger")
 .setName("connections")
 .setMaxNumConnections(8)
 .setLoadEdgeLabel(false)
 .addVertexProperty("name", PropertyType.STRING, "default_name")
 .addEdgeProperty("cost", PropertyType.DOUBLE, "1000000")
 .build();
opg-oracledb> opg = OraclePropertyGraph.getInstance(cfg);
opg-oracledb> opg.clearRepository();     // start from scratch
opg-oracledb> opgdl=OraclePropertyGraphDataLoader.getInstance();
opg-oracledb> vfile="../../data/connections.opv"  // vertex flat file
opg-oracledb> efile="../../data/connections.ope"  // edge flat file
opg-oracledb> opgdl.loadData(opg, vfile, efile, 2, 10000, true, null);
opg-oracledb> opg.countVertices()
==>78
opg-oracledb> opg.countEdges()
==>164

Quite simple, isn't it? Note that to use a higher degree of parallelism (DOP), simply customize the 4th argument to the loadData API call above.

Cheers,
Zhe
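P.S. If you are curious about what the loader consumes: each line of a .opv/.ope flat file encodes one key/value property of one vertex or edge. The field layout below is a simplified sketch from memory; consult the flat file format appendix in the product documentation for the exact type codes and escaping rules.

// .opv: one vertex property per line
//   <vertex_id>,<property_name>,<property_type_code>,<value>,<value>,<value>
// .ope: one edge property per line
//   <edge_id>,<source_vertex_id>,<destination_vertex_id>,<edge_label>,<property_name>,<property_type_code>,<value>,<value>,<value>
// The value occupies the column matching the property's type; the other value columns stay empty.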


Graph Features

Graph Database Says Hello from the Cloud (Part III)

In this installment, I am going to show you the steps required to create your first property graph on the Cloud.

- Log in to the newly created database service. You can use PuTTY or SSH or a tool of your choice.

- Create a tablespace as follows:

[oracle@graph122 dbhome_1]$ sqlplus / as sysdba
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production

SQL> alter session set container=PDB1;
SQL> create bigfile tablespace pgts
     datafile '?/dbs/pgts.dat' size 512M reuse autoextend on next 512M maxsize 10G
     EXTENT MANAGEMENT LOCAL segment space management auto;

- Enable 32K VARCHAR2, which is required by the property graph feature in the Oracle Spatial and Graph option. A detailed description of the following steps can be found in: https://docs.oracle.com/database/121/REFRN/GUID-D424D23B-0933-425F-BC69-9C0E6724693C.htm#REFRN10321

SQL> conn / as sysdba
SQL> ALTER SESSION SET CONTAINER=CDB$ROOT;
SQL> ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;
SQL> shutdown immediate;
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.
...
Database mounted.
SQL> ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;
Pluggable database altered.
SQL> exit
Disconnected from Oracle Database 12c EE Extreme Perf Release 12.2.0.1...

[oracle@graph122 dbhome_1]$ cd $ORACLE_HOME/rdbms/admin
[oracle@graph122 admin]$ mkdir /u01/utl32k_cdb_pdbs_output
[oracle@graph122 admin]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utl32k_cdb_pdbs_output' -b utl32k_cdb_pdbs_output utl32k.sql
...
Enter Password:
catcon.pl: completed successfully

[oracle@graph122 admin]$ sqlplus / as sysdba
SQL> shutdown immediate;
SQL> startup
SQL> ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;
Pluggable database altered.
SQL> quit
Disconnected from Oracle Database 12c EE Extreme Perf Release 12.2.0.1...

[oracle@graph122 admin]$ mkdir /u01/utlrp_cdb_pdbs_output
[oracle@graph122 admin]$ $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utlrp_cdb_pdbs_output' -b utlrp_cdb_pdbs_output utlrp.sql
...
Enter Password:
...
catcon.pl: completed successfully

- Validate the change made to MAX_STRING_SIZE. To verify, run the following commands; you should see the value of max_string_size changed to "EXTENDED".

[oracle@graph122 admin]$ sqlplus / as sysdba
SQL> alter session set container=PDB1;
Session altered.
SQL> show parameters max_string;
NAME              TYPE    VALUE
----------------- ------- ---------
max_string_size   string  EXTENDED

- Create a simple property graph and add one vertex with the name "Property Graph", a second vertex with the name "Oracle Database Cloud Service", and an edge with the label "livesIn" linking these two vertices. In addition, this edge has a weight of 1.0.

SQL> conn scott/<password>
SQL> exec opg_apis.create_pg('mypg', 4, 8, 'PGTS');
PL/SQL procedure successfully completed.

SQL> insert into mypgVT$(vid,k,t,v) values(1,'name',1,'Property Graph');
SQL> insert into mypgVT$(vid,k,t,v) values(2,'name',1,'Oracle Database Cloud Service');
SQL> insert into mypgGE$(EID,SVID,DVID,EL,K,T,VN) values(100,1,2,'livesIn','weight',3,1.0);
SQL> commit;
Commit complete.

Cheers,
Zhe
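P.S. To double-check the new graph from the Java API side, you can start the same Groovy shell used in earlier posts and point a graph config at 'mypg'. This is a minimal sketch that reuses the builder pattern from those posts; the JDBC URL and credentials are placeholders you must adjust for your cloud service:

opg-oracledb> cfg = GraphConfigBuilder.forPropertyGraphRdbms()
 .setJdbcUrl("jdbc:oracle:thin:@<your_host>:1521:<your_sid>")
 .setUsername("scott").setPassword("<your_password>")
 .setName("mypg")
 .setMaxNumConnections(2)
 .build();
opg-oracledb> opg = OraclePropertyGraph.getInstance(cfg);
opg-oracledb> opg.countVertices()   // expect 2, matching the two inserts above
opg-oracledb> opg.countEdges()      // expect 1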


Graph Features

Graph Database Says Hello from the Cloud (Part II)

In this installment, I am going to show the steps needed to create a brand new Oracle Database 12.2 instance on the cloud.

- Sign in to Oracle Cloud with your credentials.
- Click the Create Service button (assuming you don't already have an Oracle Database 12c Release 2 service up and running).
- Choose Oracle Database 12c Release 2. You will need to provide an SSH public key, which can be generated with this user interface (clicking the Edit button reveals a few choices). In addition, type in the Service Name and choose the appropriate Subscription Type and Billing Frequency.
- Select the compute shapes, backup/recovery configuration, and more on the following page. Note that it is highly recommended to choose the AL32UTF8 character set. In general, more CPUs and a larger RAM setting lead to better database performance. If you need to deal with a very large graph, then a high storage setting is necessary.
- Confirm your selections before clicking the Create button.
- Wait for the database service creation to finish. Note that you should see an "In Progress" status.
- After the service is created, you will see a "Created On" message on the screen. Click on the newly created database service. You will notice a public IP assigned to this newly created database service.

That is it. Enjoy your new database on the cloud! For screenshots, please refer to this PDF.


Join Oracle Spatial + BIWA Summit 2017 for Innovations in Geospatial Databases, Big Data, Cloud and Analytics

Join Oracle Spatial + BIWA Summit 2017 for Innovations in Geospatial Databases, Big Data, Cloud and Analytics
January 31 – February 2, 2017
Oracle Conference Center, Redwood Shores, CA

The Oracle Spatial and Graph SIG user group and Oracle's Business Intelligence, Warehousing, and Analytics (BIWA) SIG present the Oracle Spatial + BIWA Summit: THE Big Data + Analytics + Spatial + Cloud + IoT + Everything Cool User Conference (http://www.biwasummit.org). We hope you can join us for the major worldwide Spatial event of the year, combined with the leading business intelligence and analytics conference for Oracle users. Register here.

Whether you're just getting started with incorporating maps into business apps or a GIS expert, Spatial + BIWA Summit has something for you. Sessions will explore advances for geospatial applications, and the latest location and map visualization features in BI, big data, and cloud tools and platforms. You'll get to join:
· Technical sessions straight from the Oracle developers
· Keynotes by industry leaders and hands-on labs
· Real-world use cases from customers and partners, from city modeling and airport GIS to insurance risk and fraud detection
· Networking opportunities with the SIG user group
· BIWA Summit: full access to 3 days/4 tracks on business intelligence, data warehousing, analytics, machine learning, IoT

The agenda includes:

Spatial Technologies for Database, Big Data, and Cloud
· Maps, 3-D, Tracking, JSON, and Location Analysis: What's New with Oracle's Spatial Technologies – Siva Ravada, Oracle
· Bringing Location Intelligence To Big Data Applications on Spark, Hadoop, and NoSQL Hands On Lab – Siva Ravada/Eve Kleiman, Oracle
· Deploying Spatial Applications in Oracle Public Cloud – David Lapp, Oracle
· RESTful Spatial services with Oracle Database as a Service and ORDS – Jayant Sharma, Oracle

For developers of BI and business applications
· Getting Started with Maps in OBIEE, BI Cloud Service and Data Visualization Desktop – Wayne Van Sluys, Interrel
· Deploy Custom Maps in OBIEE for Free Hands On Lab – Arthur Dayton, Vlamis Software
· See what's new in Oracle BI: A Dive into DV, Mobile, BICS products – Philippe Lions, Oracle

Geospatial Applications – Industry Use Cases and Best Practices
· Smart 3D Cities and Situational Awareness Applications with Oracle Exadata and JET – Frank Suykens, Luciad
· Spatial Best Practices, Tips, and Tricks with Oracle Spatial and Graph – Daniel Geringer, Oracle
· Capturing Reality with Point Clouds: Applications, Challenges and Solutions – Rico Richter, Hasso Plattner Institute
· Cadastral Management at Italy's National Land Agency – Riccardo Del Frate, SOGEI
· Location-based Risk Management with Oracle Spatial Technologies – Ali Ufuk Peker, Infotech
· Airport GIS at Los Angeles and Munich Airports
· A Routing Solution for Improving Emergency Services Planning – Marc Lazarovici, Institute for Emergency Medicine, LMU Munich

Graph, Big Data, and Advanced Analytics
· Dynamic Traffic Prediction in Road Networks – Ugur Demiryurek, USC
· Analysing the Panama Papers with Oracle Big Data Spatial and Graph – Robin Moffatt, Rittman Mead
· Visualizing Graph Data with Geographical Information – Kevin Madden, Tom Sawyer

Learn more about the Summit and register at http://www.biwasummit.org. We hope to see you there!

Join our social networks:
LinkedIn Oracle Spatial and Graph Group
Google+ Spatial and Graph Group
Oracle Spatial and Graph SIG (join IOUG for free)

Questions? Contact the SIG at oraclespatialsig@gmail.com


Oracle Spatial and Graph for Oracle Database 12c Release 2 (12.2) on Oracle Cloud is available!

Oracle Spatial and Graph for Oracle Database 12c Release 2 (12.2), the latest generation of the world's most popular database, is now available in the Oracle Cloud! With the 12.2 release, Oracle Spatial and Graph introduces powerful new analytics, and features to make spatial and graph operations faster, cloud-ready, and more developer friendly. New capabilities include:
· A powerful new property graph analysis feature to address Big Data use cases such as making recommendations, finding communities and influencers, pattern matching, and identifying fraud and other anomalies
· Native support for GeoJSON
· An HTML5 map visualization component
· Spatial index and partitioning improvements to boost performance
· Location data enrichment services for unstructured text documents
· Location tracking service
· Enhancements for GeoRaster, Network Data Model, and Spatial Web Services
· RDF Semantic Graph enhancements: SPARQL 1.1 update operations for pattern-matching queries, and integration with property graphs

With 12.2, we continue to deliver the most advanced spatial and graph database platform for applications from geospatial and location services to Internet of Things, social media, and big data. And it runs on Oracle Database Cloud – a great option to quickly get your environment up and running, whether you have a sandbox prototype, a small departmental application, or large, mission-critical applications.

Learn More
· Oracle Spatial and Graph / 12.2 features on Oracle Cloud
· Oracle Spatial and Graph Data Sheet
· Spatial and Graph Analytics with Oracle Database 12c Release 2 Technical White Paper
· Oracle Database 12c Release 2 Documentation
· OTN Spatial and Graph product website

Oracle Database 12c Release 2
· Oracle OpenWorld 2016 Keynote 'Transforming Data Management with Oracle Database 12c Release 2'
· White Paper: Transforming Data Management
· 12.2 on Oracle Cloud New Features Guide
· Oracle Database 12c Release 2 on OTN
· Oracle Cloud Database Services
· [blog] Oracle Database 12c Release 2 on Oracle Cloud Now Available

Join the conversation: #DB12c #FutureofDB #OracleSpatial


Benefits of the 12c SDO_POINTINPOLYGON Function

Guest Post By: Nick Salem, Distinguished Engineer, Neustar and Technical Chair, Oracle Spatial SIG

The mdsys.SDO_POINTINPOLYGON function API is a new feature that was released in Oracle Database 12c. There is a nice blog post that explains how this feature can be used to address the challenges of ingesting large amounts of spatial data, where you can handle the loading and querying of large spatial data sets without the overhead associated with creating and maintaining a spatial index. The example shows how SDO_POINTINPOLYGON can really benefit massive scale operations, such as those using Exadata environments.

In this post, I would like to cover some other benefits that the SDO_POINTINPOLYGON feature provides, which can be very helpful especially for applications servicing a large number of concurrent spatial operations. This can greatly improve performance for such applications, whether they run on Exadata or non-Exadata environments. The fact that SDO_POINTINPOLYGON does not use a spatial index means that you can leverage data stored in an external table or a global temporary table to perform spatial point-in-polygon queries. Global temporary tables are great for multi-session environments because every user session has its own version of the data for the same global temporary table, without any contention or row locking conflicts between sessions. Furthermore, in 12c, Oracle introduced some major performance optimizations to global temporary tables that result in substantially lower redo and undo generation. You will need to make sure the system parameter temp_undo_enabled is set to TRUE (for example, ALTER SYSTEM SET temp_undo_enabled = TRUE) to ensure that the 12c global temporary table optimization is fully in effect.

Below is a screenshot from Neustar's ElementOne platform with a map showing a trade area and a set of uploaded customer points. At Neustar, our clients work with a lot of transient work data as part of a multi-step process for various spatial and analytical use cases. Let's put together a quick PL/SQL script that you can use to test drive the power of the SDO_POINTINPOLYGON function. Here, I use a simple polygon in the San Diego area and generate a set of random customer points in and around the polygon. Then, I populate the global temporary table. The script is configurable: you can increase or decrease the number of randomly generated customer points, and how far from the polygon centroid you want to allow points to extend. Once you have the data populated, you can run the SDO_POINTINPOLYGON queries in serial or in parallel, or change some of the optional MASK parameters. Here's a screenshot of the test polygon and a sample of 1,000 randomly generated customer points.

1. Create a global temporary table

First, let's create a global temporary table:

create global temporary table TMP_SPATIAL_POINT (
  x  number,
  y  number,
  id varchar2(512)
) on commit preserve rows;

2. Generate a set of random points and populate the global temporary table

Next, let's run the following script to populate table TMP_SPATIAL_POINT. The script has two variables, maxDistanceInMeters and numberOfPoints, in the PL/SQL declaration section that you can adjust as needed. If you want to generate more points, simply change the value of numberOfPoints to a greater number. In this example, I also have maxDistanceInMeters set to 4000. This ensures that no customer points are generated farther than 4000 meters from the polygon centroid; it can be increased or decreased as needed.

The script loops up to the numberOfPoints variable and uses the SDO_UTIL.POINT_AT_BEARING function to plot points around the centroid of the polygon using randomly generated values. The goal of the script is to quickly create some test data you can play with. Of course, you can also change the test polygon as well.

declare
  polygon  sdo_geometry;
  centroid sdo_geometry;
  newPoint sdo_geometry;
  maxDistanceInMeters number := 4000;
  numberOfPoints      number := 10000;
  type tRecs is table of tmp_spatial_point%rowtype;
  recs tRecs := tRecs();
begin
  polygon := SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
               SDO_ORDINATE_ARRAY(-117.1044,32.680882, -117.08895,32.661808,
                                  -117.06148,32.675102, -117.06045,32.697641,
                                  -117.09753,32.696774, -117.1044,32.680882));
  centroid := SDO_GEOM.SDO_CENTROID( polygon, 0.05 );
  recs.extend(numberOfPoints);
  for i in 1 .. numberOfPoints loop
    newPoint := SDO_UTIL.POINT_AT_BEARING(
                  start_point => centroid,
                  bearing     => dbms_random.value(0,6.283),
                  distance    => dbms_random.value(1,maxDistanceInMeters));
    recs(i).id := i;
    recs(i).x  := newPoint.sdo_point.x;
    recs(i).y  := newPoint.sdo_point.y;
  end loop;
  execute immediate 'truncate table tmp_spatial_point';
  forall i in recs.first .. recs.last
    insert into tmp_spatial_point values ( recs(i).x, recs(i).y, recs(i).id );
  commit;
end;

3. Run SDO_POINTINPOLYGON queries (in serial or parallel)

Now we can start performing queries using the SDO_POINTINPOLYGON function. Here's a sample query that returns the count of points that fall inside the polygon. The params parameter is optional; if omitted, a MASK=ANYINTERACT query is performed.

set timing on
select count(*)
from table(
  SDO_PointInPolygon(
    cur      => cursor(select * from tmp_spatial_point),
    geom_obj => SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
                  SDO_ORDINATE_ARRAY(-117.1044,32.680882, -117.08895,32.661808,
                                     -117.06148,32.675102, -117.06045,32.697641,
                                     -117.09753,32.696774, -117.1044,32.680882)),
    tol      => 0.05,
    params   => 'MASK=INSIDE' ) ) t;

Here's another example of the query, using parallelism and with the params parameter omitted:

select /*+ parallel(8) */ count(*)
from table(
  SDO_PointInPolygon(
    cur      => cursor(select * from tmp_spatial_point),
    geom_obj => SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1),
                  SDO_ORDINATE_ARRAY(-117.1044,32.680882, -117.08895,32.661808,
                                     -117.06148,32.675102, -117.06045,32.697641,
                                     -117.09753,32.696774, -117.1044,32.680882)),
    tol      => 0.05 ) ) t;

The SDO_POINTINPOLYGON function has been built to leverage Oracle's parallel processing capability. To demonstrate the magnitude of the performance gain from parallelism, I modified the point generation script in part 2 to populate a million points with a max distance of 5,000 meters from the center point. I then tested the SDO_POINTINPOLYGON query with no parallelism, and then with parallelism of 2, 4, and 8. Here are the elapsed response times:

Level of parallelism    Elapsed time
None                    13.28 secs
2                        9.62 secs
4                        6.03 secs
8                        3.43 secs

Utilizing parallelism can greatly shorten query processing times. You can use these scripts in your environment to generate different numbers of points, test various levels of parallelism, and compare the response times.


Spatial & Graph Summit Presentations From BIWA Summit ’16 Now Available

Over 24 presentations on spatial, map visualization, and graph technologies for database, big data, and cloud platforms were delivered at BIWA Summit '16 – the Oracle Big Data + Analytics + Spatial + YesSQL Community 3-Day User Conference, Jan. 26-28 at Oracle HQ. Slides and materials from technical sessions, hands-on labs, and customer and partner use cases are now available.

OTN Spatial and Graph Summit at BIWA '16 page: http://www.oracle.com/technetwork/database/options/spatialandgraph/learnmore/oracle-spatial-summit-at-biwa-2016-2881713.html

Topics included best practices, point clouds/city models, rasters, mapping, and big data technologies. New customer use cases came from government, telco, transportation, and energy, featuring Oracle Spatial and Graph/DB 12c deployments. See below for the session list.

For some user tips to get started on 2 new hands-on labs for Big Data Spatial and Graph, view Tracy McLane's blog post here. And for more great presentations from BIWA Summit '16 – from tracks such as Advanced Analytics, BI, Data Warehousing, Big Data, and YesSQL – visit the BIWA Summit page. Many thanks to the BIWA Committee for making the event a success!

Featured BIWA Session

FS | What's New with Spatial and Graph? Technologies to Better Understand Complex Relationships
James Steiner, Oracle
With the emergence of IoT, Cloud services, mobile tracking, social media, and real-time systems, we're finding new ways to incorporate social and location information into business workflows. Learn how Oracle's spatial and graph technologies address these developments. New offerings for the Cloud, NoSQL, and Hadoop platforms are discussed. A BIWA '16 featured talk.

Oracle Technical Sessions

TS | High Performance Raster Database Manipulation and Data Processing with Oracle Spatial and Graph
Qingyun (Jeffrey) Xie, Oracle
Learn about the core features of GeoRaster, the typical workflow for building large scale raster databases, capabilities for in-database raster processing and analysis, and the architecture for building powerful raster data applications.

TS | 3D Data Management - From Point Cloud to City Model
Hans Viehmann, Oracle
3D city models generated from laser scanning or image matching data are finding many applications today. Learn about the latest 3D data support in Oracle Spatial and Graph and how it is applied to city modeling – including how it handles large volumes of data sets such as point clouds, TINs, and 3D solids.

TS | Best Practices, Tips and Tricks with Oracle Spatial and Graph
Daniel Geringer, Oracle
Learn techniques to optimize performance with Oracle Spatial and Graph, such as parallel query best practices, vector performance acceleration in 12c, and other optimizations deployed by real world customers. Techniques covered can apply to deployments on commodity hardware, Oracle engineered systems, or Oracle Database Cloud Service.

TS | Big Data Spatial: Location Intelligence, Geo-enrichment and Spatial Analytics
Siva Ravada, Oracle
Location analysis and map visualization are powerful tools to apply to data sources like social media feeds and sensor data, to uncover relationships and valuable insights from big data. Learn more about Oracle Big Data Spatial and Graph, which offers a set of analytic services and data models that support Big Data workloads on Apache Hadoop and NoSQL database technologies.

TS | Map Visualization in Analytic Apps in the Cloud, On-Premise, and Mobile
LJ Qian, Oracle
Learn about the interactive map visualization capabilities of Oracle analytic and information discovery applications.

TS | The Power of Geospatial Visualization for Linear Assets Using Oracle Enterprise Asset Management
Sudharsan Krishnamurthy, Oracle
Oracle Enterprise Asset Management provides features to manage pipelines, utility transmission and distribution lines, roads, and facilities. Learn how to leverage its geospatial capabilities from Oracle Spatial and Graph, together with tools like Esri and Google Maps.

TS | Massively Parallel Calculation of Catchment Areas in Retail
Albert Godfrind, Oracle
For site planning, targeted marketing, and other analytical processes, analyzing catchment areas for retail points-of-sale is very useful. Learn about a highly scalable approach to associate 1 million customers with their closest retail outlets by drive time in the US, using Oracle Spatial and Graph Network Data Model and Geocoder features.

TS | Build Your Own Maps with the Big Data Discovery Custom Visualization Component
Chris Hughes, Oracle
This presentation provides a working example of a custom map visualization inside of Oracle Big Data Discovery – for deeper spatial context and analysis for users to find, explore, transform, and analyze data in Hadoop. A cookbook and code are provided.

TS | High Speed Video Processing for Big Data Applications
Melliyal Annamalai, Oracle
Applications centered on video, such as surveillance, security, drone video capture, and traffic management, require fast, automated video processing and analysis. Hadoop's support for massive parallelism is uniquely suited to perform high speed video processing. We present an extensible framework in Hadoop that can be customized for many video analysis applications, for easy inclusion in analytics tools and BI dashboards.

Customer and Partner Use Case Sessions

Large-Scale Spatial Analytics and Cloud Deployments

UC | Best Practices for Developing Geospatial Apps for the Cloud
Nick Salem, Neustar
Learn best practices and techniques for building robust, scalable, cloud-based applications using Oracle Database 12c Spatial and Graph, Partitioning, RAC, Advanced Security, and WebLogic 12c. Neustar organized a 2.5TB spatial database for maximum performance, deploying Oracle security, high availability, and rich geospatial analytics. Hear tips from their experiences.

UC | Implementation of LBS Services with Oracle Spatial and Graph and MapViewer in Zain Jordan
Ali Ufuk Peker and Kerem Par, Infotech
Zain is one of the largest operators in the Middle East, active in 7 countries with more than 44 million subscribers. Learn how Infotech's platform provides Zain with services such as location based advertisement, demographics, asset tracking, and fleet management, fully implemented with Oracle Spatial and Graph and MapViewer.

UC | Fast, High Volume, Dynamic Vehicle Routing Framework for E-Commerce and Fleet Management
Ugur Demiryurek, University of Southern California
Fleet management and scheduling are critical to e-commerce, utilities/telco, and military operations. Learn about an efficient, accurate, time-dependent solution for high volume vehicle routing on real world road networks, using Oracle Spatial and Graph Network Data Model.

Location Intelligence

UC | Electoral Fraud Location in Brazilian General Elections 2014
Alex Cordon and Henrique Da Silva, CDS
A solution using Oracle MapViewer 12c and Oracle Business Intelligence Enterprise Edition allowed the Brazilian Electoral Justice to identify the location and geographical distribution of potential fraud and failures in the 2014 Brazilian elections. Data came from 24 million voters using Brazil's biometric voting system, with polling stations and geographic locations associated with each voter.

GIS for Engineering, Telco, and Energy

UC | Oracle Spatial 12c as an Applied Science for Solving Today's Real-World Engineering Problems
Tracy McLane, Bechtel Corporation
Bechtel's large global engineering and construction projects require a vast amount of varied spatial data for real-world problem solving – including earthquake catalogs for seismic analysis, hydraulic data, infrastructure survey data, and indoor tracking locations. This presentation takes an in-depth look at Oracle Spatial technologies used in the world of engineering.

UC | Managing National Broadband Infrastructure at Turk Telekom with Oracle Spatial and Graph
Murat Altiparmak and Murat Hancerogtu, Turk Telekom
Turk Telekom is Turkey's major fixed line and broadband operator, with 18 million+ customers. Learn how they identify investment locations, infrastructure improvement locations, and available services for specific locations through the power of their Oracle Spatial and Graph based GIS system.

UC | ATLAS - Utilizing Oracle Spatial and Graph with Esri for Pipeline GIS and Linear Asset Management
David Ellerbeck, Global Information Systems
Existing transmission pipeline models and data management systems pose challenges in terms of maintenance, data quality, and redundancy. An approach that centralizes spatial data management in the database with Oracle Spatial and Graph addresses those issues. Real-world customer examples from the oil and gas industry are shared.

National Government and Land Management

UC | Using Open Data Models to Rapidly Develop and Prototype a 3D National SDI in Bahrain
Debbie Wilson, Ordnance Survey
Learn about an innovative, large-scale land registration and spatial data management system prototype for Smart Cities. It supports a national 3D data model, and manages diverse data sets including topography, cadastre, urban planning, transport, utilities, and industrial facilities in a single system.

UC | Delivering Smarter Spatial Data Management within Ordnance Survey, UK
Debbie Wilson, Ordnance Survey
Learn how a leading national mapping agency has deployed an integrated geospatial data management system based on Oracle technologies, to increase automation, reduce costs, and deliver higher value content and services.

UC | Assembling a Large Scale Map for the Netherlands Using Oracle 12c Spatial and Graph
Richard Huesken, Transfer Solutions
The Netherlands is combining diverse large scale map data from numerous government agencies into a single, topologically correct dataset for sharing and reuse. The BRAVO system uses Oracle Spatial and Graph 12c features for spatial data operations for validation, cleansing, and high performance.

Spatial Data ETL and the Cloud

UC | Centralizing Spatial Data Management with Oracle Cloud Databases
Steve MacCabe, Safe Software
Cloud databases provide a flexible and cost-effective way for organizations to manage large volumes of data. Learn how to transform data in the cloud to take advantage of Oracle cloud data architecture using Safe Software's FME, the leading platform for spatial extract, transform, and load operations.

Graph Technologies

UC | Graph Databases: A Social Network Analysis Use Case
Xavier Lopez, Oracle, and Mark Rittman, Rittman Mead
In this new presentation, Mark Rittman demonstrates the use of Oracle Big Data Spatial and Graph to perform “sentiment analysis” and “influencer detection” across a real world social network – Twitter feeds from the Rittman Mead blog reader community. Xavier Lopez also introduces graph databases and how they drive social network analysis, IoT, and linked data applications.

UC | Dismantling Criminal Networks with Graph and Spatial Visualization and Analysis
Kevin Madden, Tom Sawyer
Learn about a highly visual situational awareness solution that combines linked data, temporal, and spatial analysis. Using crime incident data for a major city over a two year period, we use these techniques to identify top criminal offenders and predict the location and time of future criminal activity. The application is created with Tom Sawyer Perspectives, and uses Oracle Spatial and Graph for data storage and management.

UC | Deploying a Linked Data Service at the Italian National Institute of Statistics
Monica Scannapieco, Istat, and Giovanni Corcione, Oracle
Italy's National Institute of Statistics has developed a web portal to publish official census data for administrative areas from the municipal to the national level. Learn how they have adopted innovative Linked Data techniques to harmonize data and provide advanced query and information discovery services for policy makers and the public.

UC | Hybrid Cloud Using Oracle DBaaS: How the Italian Workers Compensation Authority Uses Graph Technology
Patrizio Galasso, INAIL Italy, and Giovanni Corcione, Oracle
Organizations are considering more nimble and lower cost cloud services to enhance workers' collaboration, productivity, and business insight. The Italian Government Workers Compensation Authority chose the Oracle Cloud Platform, a public platform as a service (PaaS), to manage insurance coverage for workplace accidents and incidents in ten Italian regions. The platform incorporates W3C RDF linked open data standards, using Oracle Database Cloud with the Oracle Spatial and Graph option.

Hands On Labs and Demo Sessions

Hands On Lab materials on OTN

HOL | Interactive Map Visualization of Large Datasets in Analytic Applications
LJ Qian and David Lapp, Oracle
This session presents an approach to interactive map visualization using an HTML5 based Javascript API, and includes a demonstration of powerful new features in the map visualization components of the Oracle Spatial and Graph product stack.

HOL | Applying Spatial Analysis To Big Data (Hands On Lab)
Siva Ravada and Eve Kleiman, Oracle
In this Hands On Lab, learn how to perform spatial categorization and filtering, create map visualizations on a Twitter dataset, and load, prepare, and process large raster imagery datasets.

HOL | Gain Insight into Your Graph Data – A Hands-On Lab for Oracle Big Data Spatial and Graph (Hands On Lab)
Zhe Wu, Oracle
A set of property graph hands-on lab exercises and demos are available as part of the Big Data Lite VM. Using a real-world social graph, these show how to manipulate graph data in NoSQL DB or HBase, and perform analytics like community detection and connectivity.


Tips for Switching Between Geodetic and Mercator Projections

Guest Post By: Nick Salem, Distinguished Engineer, Neustar and Technical Chair of the Oracle Spatial SIG

Note: Thanks to Nick Salem for this valuable tip on handling multiple coordinate systems to optimize performance and storage!

Oracle Spatial and Graph provides a feature-rich coordinate system transformation and management capability for working with different map projections. This includes utilities that convert spatial data from one coordinate system to another, convert from 2D to 3D projections, create EPSG rules, deal with various input and output formats, and more.

If you deal with geodetic data, you may have run into the need to display your data points and areas on aerial or terrain maps. For this, you could utilize the SDO_CS.TRANSFORM function to dynamically convert your geometries to the destination projection system. The challenge we had at Neustar was that our customers wanted the option to switch frequently back and forth between our MapViewer geodetic base maps and aerial and terrain base maps with the Mercator projection. They wanted to do this in a seamless and responsive manner, and some of our customer datasets are fairly large: the Neustar ElementOne system holds billions of rows of geospatial data. We wanted to provide our customers with the capability to switch projections for any of their geometries, but we also wanted our system to scale and maintain quick responsiveness. Coordinate transformation operations can be expensive, especially when performed on large volumes of geometries. Initially, we tried to perform coordinate transformations dynamically on the fly for customer requests, but this did not give us the best performance, and resulted in some of the same geometries going through the same repetitive transformation over and over again.

The solution for us was to maintain and manage two coordinate systems for all of our customer geometries. For every spatial data record, we have two SDO_GEOMETRY columns: one to store the latitude/longitude geodetic data and the other to store the Mercator projection data. We use the geodetic geometries for queries and all spatial operations, and we use the Mercator projection solely for map visualization. The advantage of this approach is that every geometry goes through only one coordinate transformation, during the data loading or updating process. And for query visualizations, performance is optimal, since the data is already available for display. This results in the best customer experience and snappy response times. Another advantage of visualizing geodetic data using the Mercator projection is that radii appear circular instead of oval-looking. Here's a picture from Neustar's ElementOne platform showing a 3 mile radius trade area.

One obvious disadvantage of this approach is that it requires more storage, as you store and manage two sets of geometry columns. If you take a closer look at the geometries created by the coordinate transformations, the resulting geometry may include a greater amount of precision than your application actually needs. A good rule of thumb is to include only the least amount of precision required to support your needs.

Let's take a quick look at an example of converting a geodetic (8307) latitude/longitude point geometry to the Mercator (3785) projection.

SQL> select to_char(value(t).x) x, to_char(value(t).y) y
     from table(sdo_util.GetVertices(sdo_cs.transform(
            sdo_geometry(2001,8307,sdo_point_type(-117.019493,32.765053,null),null,null),
            0.5, 3785))) t;

X                     Y
--------------------  --------------------
-13026550.373647      3864160.0406267

The 8307 geodetic coordinate system uses degrees for the latitude/longitude coordinates, while the 3785 Mercator projection uses meters as its measure. From the example above, you can see up to 7 decimal places in the coordinates, which is far greater precision than we need for our mapping analysis and visualization needs. You may wonder why we should bother about the numeric precision of spatial geometries. The answer is performance and storage savings. The greater the precision, the more storage it takes. The more storage for your geometries, the more Oracle blocks are needed to store your data. And the more data blocks the database has to fetch to satisfy a query, the longer the query will take.

To illustrate the amount of additional space that transformed geometries can take compared to the original geometries, I created 4 tables, each consisting of 30,532 ZIP Code geometries. Next, I ran a query joining USER_SEGMENTS and USER_LOBS to get the total space consumption of the SDO_ORDINATES for each of the 4 tables. For polygon geometries, Oracle will likely store the geometry outside the table in LOB segments.

SELECT l.table_name, l.column_name, t.bytes/(1024*1024) m_bytes
FROM   user_segments t, user_lobs l
WHERE  t.segment_name = l.segment_name
AND    l.column_name like '%GEOM%SDO_ORDINATES%';

TABLE_NAME                   COLUMN_NAME                 M_BYTES
---------------------------  --------------------------  --------
ZIP_CODE_SRID8307            "GEOM"."SDO_ORDINATES"      120.1875
ZIP_CODE_SRID3785            "GEOM"."SDO_ORDINATES"      216.1875
ZIP_CODE_SRID3785_ROUND0     "GEOM"."SDO_ORDINATES"      120.1875
ZIP_CODE_SRID3785_ROUND1     "GEOM"."SDO_ORDINATES"      136.1875

The original ZIP Code SDO_ORDINATES consumed 120MB. But when we converted the same ZIP geometries to the Mercator projection, we ended up with 216MB: an 80% increase in size. When we truncated the decimals of the Mercator-projected coordinates in table ZIP_CODE_SRID3785_ROUND0, this brought the size back down to 120MB, but we ended up with 41 invalid ZIP boundaries. Rounding to 1 decimal place resulted in 136MB and all valid geometries. The goal is to round the coordinates to the fewest decimal places needed for your application. In our case, we used the Mercator projection geometries only for visualization, so we were not very concerned about geometry validity, and opted for truncating the decimal places, which worked out great for us. In your case, you can experiment with what precision works best for you.

Here's a nice helper function that performs the coordinate transformation and then applies the required rounding, all in one step:

create or replace function transformToSRID (
    pGeometry  in sdo_geometry,
    pTolerance in number,
    pToSRID    in number,
    pRoundPos  in integer )
return sdo_geometry
is
  outGeometry sdo_geometry;
begin
  outGeometry := sdo_cs.transform( geom      => pGeometry,
                                   tolerance => pTolerance,
                                   to_srid   => pToSRID );
  if outGeometry.sdo_point is not null then
    outGeometry.sdo_point.x := round( outGeometry.sdo_point.x, pRoundPos );
    outGeometry.sdo_point.y := round( outGeometry.sdo_point.y, pRoundPos );
  end if;
  if outGeometry.sdo_ordinates is not null then
    for i in outGeometry.sdo_ordinates.first .. outGeometry.sdo_ordinates.last loop
      outGeometry.sdo_ordinates(i) := round( outGeometry.sdo_ordinates(i), pRoundPos );
    end loop;
  end if;
  return outGeometry;
end;

Quick usage example:

SQL> select to_char(value(t).x) x, to_char(value(t).y) y
     from table(sdo_util.GetVertices(transformToSRID(
            sdo_geometry(2001,8307,sdo_point_type(-117.019493,32.765053,null),null,null),
            0.5, 3785, 1))) t;

X             Y
------------  ----------
-13026550.4   3864160

Here is a picture from Neustar's ElementOne platform overlaying a site trade area over a terrain map. Here's another picture from Neustar's ElementOne showing 10 and 15 minute drive time trade areas over an aerial map.

In conclusion, the amount of precision in geometry coordinates matters for performance and storage. If you perform a lot of repetitive coordinate transformations to support your application needs, you may want to consider storing the projected geometries. By default, the SDO_CS.TRANSFORM function may create geometries with coordinates containing more precision than your application requires. You should always check the precision of your geometries and round to the minimum number of decimal places needed to support your application requirements.
