By Sandra Cheevers-Oracle on Jul 26, 2016
Author: Pat Shuff
On 13th July 2016, AWS announced support for converting an Oracle Data Warehouse to Amazon Redshift using its Schema Conversion Tool. In this blog post, we analyze this tool and uncover what’s real and what’s not. Converting a production Oracle Database to a much lighter-weight, less functional database is not recommended, but that is the purpose of this tool. The migration effort and customizations needed to move a production database instance to Amazon RDS typically break support, break functionality, and do not work as expected. We will look at three use cases to analyze how well the tool helps you, and at the tradeoffs required to move your applications from on-premises to Amazon RDS or another cloud service provider.
The idea behind the AWS Schema Conversion Tool is that it will look at your existing database (MySQL, Oracle, SQL Server, etc.) and estimate the effort required to migrate from your source database to Amazon RDS. The tool claims that it can convert an Oracle Database to Aurora, MySQL, PostgreSQL, or MariaDB. To test the tool, we looked at the Application Express 5.0 sample database installed with Oracle Database 12c Enterprise Edition, and at SwingBench, a public domain package that contains server-side code as well as database objects and stored procedures. We could have looked at an Oracle E-Business Suite, Oracle JD Edwards, or Oracle PeopleSoft installation, but those typically need to run on Oracle Database Standard Edition or Enterprise Edition, or in some cases Microsoft SQL Server. The targets supported by the AWS Tool are not valid database targets for these application packages, and the tool does not support translating their repositories from an on-premises system to a supported target database in Amazon RDS.

The tool starts out by asking you to attach to your source Oracle database instance. To do this you need to supply the SID of the database, the Service Name of the instance, the TNS Alias of the instance, or the TNS Connection Identifier. We first verified our connection in Oracle SQL Developer with all of these connection methods; the only one that worked with the AWS Schema Conversion Tool was the SID. All connections to a multi-tenant instance of Oracle Database 12c failed because the AWS Tool could not handle a complex Service Name. In our test database we could connect to the SID ORCL but could not connect to the Service Name pdb1.metacsgse00028.oraclecloud.internal, which is what the lsnrctl status command shows as the Service Name and TNS Connection Identifier.
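To illustrate the difference between the two connection forms, the identifiers involved look roughly like this (the hostname dbhost and the JDBC URLs below are placeholders for illustration, not the actual test instance):

```text
# JDBC thin URL using the SID -- the form that worked in our tests:
jdbc:oracle:thin:@dbhost:1521:ORCL

# JDBC thin URL using a Service Name -- the form that failed:
jdbc:oracle:thin:@//dbhost:1521/pdb1.metacsgse00028.oraclecloud.internal

# Equivalent tnsnames.ora entry using SERVICE_NAME:
PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = pdb1.metacsgse00028.oraclecloud.internal))
  )
```

A pluggable database in a multi-tenant container can only be reached through its Service Name, which is why the SID-only limitation effectively rules out connecting to pluggable databases.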
For the Application Express 5.0 instance the AWS Schema Conversion Tool found 3,299 database storage objects and 1,097 code objects. According to the tool, 45 database storage objects and 180 code objects required manual conversion to get them to work on Amazon RDS for MySQL or Amazon Aurora.
Oracle Database 12c comes with a demo pluggable database, but we could not test it with the AWS tool because the tool does not support connecting to a database using a complex Service Name. We did not reconfigure the listener to use a different SID because few customers do this in practice; most use the Service Name or TNS name to connect to the database, both of which are unsupported in the AWS Tool.
For SwingBench v2.0 the AWS Tool scanned and found 13 database objects and 2 database code objects, and could not convert 1 storage object and 1 code object. The storage object that failed was a check constraint; the code object involved a time zone timestamp conversion. Both require manual conversion, which stops the code from being migrated automatically to Amazon RDS.
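We do not reproduce the SwingBench schema here, but the two failing object types look roughly like this (table and column names are illustrative, not the actual SwingBench definitions):

```sql
-- A check constraint of the kind the tool could not convert
-- (MySQL 5.x parses CHECK clauses but does not enforce them):
ALTER TABLE orders
  ADD CONSTRAINT orders_status_chk
  CHECK (order_status IN ('NEW', 'SHIPPED', 'CANCELLED'));

-- A column using TIMESTAMP WITH TIME ZONE, which has no direct
-- MySQL/Aurora equivalent and is flagged for manual conversion:
CREATE TABLE order_events (
  event_id   NUMBER PRIMARY KEY,
  event_time TIMESTAMP WITH TIME ZONE
);
```

In both cases the behavior must be re-implemented by hand on the target, for example with triggers or application-side checks, rather than translated automatically.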
There are many hidden issues with the AWS Schema Conversion Tool. First, the tool defaults to migrating from an Oracle database to Amazon RDS for MySQL or Amazon Aurora (alongside PostgreSQL and MariaDB). This is not an apples-to-apples conversion, as these targets lack several key enterprise features: no Data Guard support, no Multitenant or pluggable containers, no file system access, no Oracle Label Security, and nothing that requires sys or sysdba rights to install and configure, to name a few.
Second, contrary to the AWS guidance, the manual interventions required to port existing code to Amazon RDS for MySQL or Amazon Aurora are not trivial. The AWS Schema Conversion Tool generates a report with bar charts summarizing the conversion effort but does not detail what needs to be converted; to get the full report you have to export it to CSV or PDF format. The report summarizes the conversions that fail because specific collection types are not supported in Amazon Aurora or Amazon RDS for MySQL, and lists synonyms and sequences that fail to convert. The conversion also does not support function-based indexes, check constraints, user-defined functions, virtual columns, the DATETIME structure with time zone information, hierarchical queries, pseudo columns, and performance hints. In our specific example, the changes recommended by the AWS Schema Conversion Tool are massive and complex and would require a total rewrite of the database logic.
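A few of the unsupported constructs, sketched in Oracle SQL (the table, column, and index names are illustrative and not taken from the APEX or SwingBench schemas):

```sql
-- Function-based index: no direct equivalent in MySQL 5.x.
CREATE INDEX emp_upper_name_idx ON employees (UPPER(last_name));

-- Virtual column: value computed from other columns.
ALTER TABLE employees ADD (full_name VARCHAR2(101)
  GENERATED ALWAYS AS (first_name || ' ' || last_name) VIRTUAL);

-- Hierarchical query using CONNECT BY and the LEVEL pseudo column.
SELECT LEVEL, employee_id, manager_id
  FROM employees
 START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;

-- Optimizer hint embedded in a comment; other databases silently
-- ignore it, changing the query's performance characteristics.
SELECT /*+ INDEX(e emp_upper_name_idx) */ *
  FROM employees e
 WHERE UPPER(last_name) = 'SMITH';
```

Each of these has to be rewritten by hand on the target, for example replacing CONNECT BY with recursive application logic, which is why the summary bar charts understate the real porting effort.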
In summary, the AWS Schema Conversion Tool is not enterprise-ready; it is more of a marketing attempt at getting you to move from an existing on-premises database to Amazon RDS for MySQL or Amazon Aurora. We are making the results of the AWS Schema Conversion Tool for our specific tests available here (APEX database, SwingBench), so you can save time and effort testing the conversion. A better path for your on-premises database instance is to copy it to the Oracle Cloud using Oracle Cloud Database as a Service; all your current features and functionality will be supported without having to change your schema, stored procedures, or applications.