
Learn how businesses like yours can begin to optimize for today and plan for tomorrow with Cloud-Ready IT Infrastructure

Recent Posts

Engineered Systems

How to Build a High Availability Strategy for Oracle SuperCluster

As well as providing a platform that delivers outstanding performance and efficient consolidation for both databases and applications, Oracle SuperCluster M8 offers a solid foundation on which highly available services can be deployed. How can such services best be architected to ensure continuous service delivery with a minimum of disruption from both planned and unplanned maintenance events? The first step in architecting highly available services on Oracle SuperCluster M8 is to understand the building blocks of the platform and the ways in which they support redundancy and high availability (HA).

Hardware Redundancy

Oracle SuperCluster M8 is built around best-of-breed components. The mean time between failures on these components is typically extremely long. Nevertheless, even well-designed and manufactured hardware can fail. With that in mind, Oracle SuperCluster M8 is architected to avoid single points of failure, thereby reducing the likelihood of outage due to hardware failure. The redundancy characteristics of some of the key components of Oracle SuperCluster M8 are described below.

Compute servers. The compute servers used in Oracle SuperCluster M8 are robust SPARC M8-8 servers that boast many features designed to maximize reliability and availability.
Each SPARC M8-8 server in Oracle SuperCluster M8 also includes Physical Domains (PDoms) that are electrically isolated and function as independent servers. Either one or two SPARC M8-8 servers can be configured in an Oracle SuperCluster M8 rack, and each SPARC M8-8 server includes two PDoms. With multiple PDoms always present, it is possible to avoid single points of failure in compute resources.

Exadata storage. Three or more Exadata X7 Storage Servers are configured in every Oracle SuperCluster M8 rack. A minimum of three Exadata Storage Servers allows a choice of normal redundancy (double mirroring) or high redundancy (triple mirroring). It is possible to achieve high redundancy with as few as three Exadata Storage Servers thanks to the included Exadata Quorum Disk Manager software.
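To make the trade-off between the two redundancy levels concrete, here is a minimal sketch of the capacity arithmetic (the per-server raw figure is hypothetical, not a SuperCluster specification, and real usable capacity also depends on free-space reservations):

```python
# Illustrative capacity arithmetic for ASM redundancy levels on
# Exadata storage. Figures are hypothetical, not specifications.

def usable_capacity(raw_tb_per_server, num_servers, redundancy):
    """Approximate usable capacity for a given ASM redundancy level.

    normal redundancy = double mirroring (2 copies of each extent)
    high redundancy   = triple mirroring (3 copies of each extent)
    """
    copies = {"normal": 2, "high": 3}[redundancy]
    return raw_tb_per_server * num_servers / copies

raw_tb = 100  # hypothetical raw TB per storage server
for servers in (3, 6, 11):
    for level in ("normal", "high"):
        print(f"{servers} servers, {level} redundancy: "
              f"~{usable_capacity(raw_tb, servers, level):.0f} TB usable")
```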
Up to eleven Exadata Storage Servers can be accommodated in a rack that hosts a single SPARC M8-8 server, and up to six in a rack that hosts two SPARC M8-8 servers.

Shared storage. A ZFS Storage Appliance (ZFSSA) that delivers 160TB of raw storage capacity is included in every Oracle SuperCluster M8 to provide shared storage, satisfying infrastructure storage needs and also providing limited capacity and throughput for user files such as application binaries and log files. Appliance controllers are delivered in a cluster configuration, with a pair of controllers set up in an active-active configuration. Two equally sized disk pools (zpools) are set up, with one associated with each controller.
Should a controller fail for any reason, the surviving controller takes over both of the disk pools and all services until the failed controller becomes available again. The result is that a controller failure need not lead to a shared storage outage.
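As a conceptual model only (the controller and pool names are invented for illustration and do not reflect the appliance's actual software), the takeover behavior looks like this:

```python
# Conceptual model of ZFSSA active-active controller failover.
# Names and structure are hypothetical, for illustration only.

pool_owner = {"pool-a": "controller-1", "pool-b": "controller-2"}

def fail_controller(failed, ownership):
    """Reassign every pool owned by the failed controller to the survivor."""
    survivor = "controller-2" if failed == "controller-1" else "controller-1"
    return {pool: (survivor if owner == failed else owner)
            for pool, owner in ownership.items()}

# Controller 1 fails: controller 2 takes over both pools, so shared
# storage stays available (clients see only a brief failover pause).
print(fail_controller("controller-1", pool_owner))
# {'pool-a': 'controller-2', 'pool-b': 'controller-2'}
```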
Disks in the shared storage tray of the ZFS Storage Appliance are mirrored to provide redundancy, with hot spares that are automatically swapped into the configuration in the event of disk failure.
It’s worth noting that hardware failures typically result in a service request being raised automatically if Oracle Auto Service Request (ASR) is configured.
On Oracle SuperCluster M8, iSCSI devices are assigned for all types of system disks and for zone root file systems. All iSCSI devices for any specific PDom are stored in the same ZFS Storage Appliance zpool (as already noted, a single zpool is associated with each of the two ZFSSA controllers). The intent is that any ZFSSA controller failure will only affect half of the PDoms (and any effect is of very brief duration thanks to automated failover). All iSCSI devices associated with other PDoms will be unaffected.

InfiniBand switches. All Oracle SuperCluster M8 configurations include two InfiniBand leaf switches for redundancy. Each dual-port InfiniBand HCA is connected to both leaf switches, allowing packet traffic to continue even if a switch outage occurs.
The entry Oracle SuperCluster M8 configuration consists of one CMIOU in each PDom of a single SPARC M8-8 server, plus three Exadata Storage Servers. All larger Oracle SuperCluster M8 configurations also include a third InfiniBand switch, a spine switch. The spine switch, which is connected to each leaf switch, provides an alternative path for InfiniBand packets as well as additional redundancy.

Ethernet networking. Although Oracle SuperCluster M8 does not include 10GbE switches (the customer supplies these switches), 10GbE NICs in SPARC M8-8 servers and on the ZFS Storage Appliance are typically connected to two different 10GbE switches to ensure redundancy in the event of switch or cable failure. The operating system automatically detects any loss of connection, for example due to cable or switch failure, and routes traffic accordingly. Each quad-port 10GbE NIC used in Oracle SuperCluster M8 is configured as two dual-port NICs, allowing redundant connections to be established for each NIC.

Other components. A number of other components, including the SPARC M8-8 Service Processor, power supply units, and fans, are also designed and configured to provide redundancy in the event of component failure.

Software Redundancy

Oracle SuperCluster M8 is not totally reliant on the hardware redundancy outlined in the previous section, extensive as it is. The design of Oracle SuperCluster M8 also allows a number of other mechanisms to be leveraged, providing users with the opportunity to layer software redundancy on top of the hardware redundancy.

Oracle Database Real Application Clusters (RAC) has long provided a robust and scalable mechanism for delivering highly available database instances based around shared storage. On Oracle SuperCluster M8, RAC database nodes can be placed on different PDoms to build highly resilient clusters, with data files located on Exadata Storage Servers. The end result is a database service that need not be impacted by either a PDom or a storage server outage.

Oracle Solaris Cluster, an optional software add-on for Oracle SuperCluster M8, provides a comprehensive HA and disaster recovery (DR) solution for applications and virtualized workloads. On Oracle SuperCluster M8, Oracle Solaris Cluster delivers zone clusters, virtual clusters based on Oracle Solaris Zones, to support clustering across PDoms with fine-grained fault monitoring and automatic failover. Zone clusters are ideal environments for consolidating multiple applications or multitiered workloads onto a single physical cluster configuration, providing service protection through fine-grained monitoring of applications, policy-based restart, and failover within a virtual cluster. In addition, Solaris 10 branded zone clusters can be used to provide high availability for legacy Solaris 10 workloads.

Oracle Solaris Cluster Disaster Recovery framework, formerly known as Solaris Cluster Geographic Edition, supports clustering across geographically separate locations, facilitating the establishment of a disaster recovery solution. It is based on redundant clusters, with a redundant and secure infrastructure between them. When combined with data replication software, this option orchestrates the automatic migration of multitiered applications to a secondary cluster in the event of a localized disaster.

Built-in clustering support is inherently provided with some applications (Oracle’s WebLogic Server Clusters is an example). Such support delivers redundancy without the need for specialized cluster solutions.
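As a quick illustration of verifying such a configuration, a DBA might confirm that RAC instances really are running on separate hosts (and hence can be placed in separate PDoms) by querying gv$instance. This sketch uses the cx_Oracle driver; the credentials and service name are placeholders:

```python
# Sketch: list RAC instances and their host names to confirm the
# cluster spans separate hosts/PDoms. Connection details are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("system", "password", "db-scan-address/orcl")
cur = conn.cursor()
cur.execute(
    "SELECT inst_id, instance_name, host_name, status "
    "FROM gv$instance ORDER BY inst_id"
)
for inst_id, name, host, status in cur:
    # Each instance should report a host in a different PDom.
    print(f"instance {inst_id}: {name} on {host} ({status})")
conn.close()
```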
Note that both RAC and Oracle Solaris Cluster use the redundant InfiniBand links in each domain when setting up cluster interconnects. For example, Oracle Solaris Cluster on Oracle SuperCluster M8 leverages redundant IB partitions, each in a separate IB switch, to configure redundant and independent cluster interconnects.

Architecting a Highly Available Solution for Oracle SuperCluster M8

Although considerable redundancy is provided in hardware components on Oracle SuperCluster M8 (for example, all 10GbE NICs and InfiniBand HCAs include two ports, which are connected to different switches), Oracle does not recommend putting the primary focus on low-level components when considering HA.

For example, Exadata Storage Servers use InfiniBand to send and receive network packets associated with database access. The InfiniBand HCAs used in storage servers have two ports, thereby providing resilience in the event of switch or cable issues. But each storage server has a single HCA, which means that an HCA failure will take the storage server offline. While this might seem like a problem at first glance, there are a number of reasons why this design not only makes sense, but has proven enormously successful:

- Given the long mean time between failures of InfiniBand HCAs, replacement due to failure is extremely rare.
- Building redundancy into every possible failure point would not only add cost, it would increase both hardware and software complexity.
- Exadata Storage Servers are never installed as single entities. The key unit of redundancy is the storage server itself, not its components.

Another key factor is that outages are not solely caused by hardware failures. Planned maintenance, such as applying a Quarterly Full Stack Download Patch (QFSDP), may necessitate an outage of affected components. Other unplanned events, such as shutdowns caused by external issues (such as power or cooling problems), software or firmware errors, and even operator error, can sometimes lead to outages.

It is important to architect solutions that focus at a high level on solving real-world problems, rather than to place the focus on low-level problems that may never occur. This design principle can usefully be applied to every configuration in your data center.

The most effective way to ensure continuous availability of mission-critical services is to set up a configuration that is resilient to component outage, wherever it occurs and whatever the cause. For such a strategy to be effective, it will need to include a disaster recovery element based on offsite replication and failover. An offsite mirror of the production environment is a necessary precaution against both natural and man-made disasters, and a key component in any highly available deployment. The simplest and safest strategy at the disaster recovery site is to deploy the same components that are in use at the primary site. Best practices for disaster recovery with Oracle SuperCluster are addressed in the Oracle Optimized Solution for Secure Disaster Recovery whitepaper, subtitled Highest Application Availability with Oracle SuperCluster.

At the local level, clustering capabilities can be used to deliver automatic failover whenever required. The extensive hardware redundancy of Oracle SuperCluster M8 is not wasted—it will contribute by greatly reducing the likelihood of hardware failure that results in downtime.
Quarterly patches, and in particular the SuperCluster QFSDP, can be applied in the fastest and most efficient manner possible using a disruptive approach of shutting down the system and applying updates in parallel. One benefit of a highly available configuration, though, is that a QFSDP can be applied to the various components of a SuperCluster system in a rolling fashion without loss of service. Rolling updates take longer overall to complete, since components are not updated in parallel, but they are much less disruptive. Speak to an Oracle Services representative to understand whether rolling updates can be applied to your SuperCluster system.

Backup and Recovery
A crucial element of any highly available environment is the ability to perform backups and restores as required. The Oracle Optimized Solution for Backup and Recovery of Oracle SuperCluster whitepaper specifically addresses this requirement, documenting best practices for backup and recovery on Oracle SuperCluster.
Backup and restore must cover infrastructure and configuration metadata as well as customer data, and for SuperCluster, Oracle provides the osc-config-backup tool for this purpose. The tool stores its backups on the included ZFS Storage Appliance. Note, though, that the ZFS Storage Appliance itself and the Exadata Storage Servers must be backed up independently.
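Several of the items the tool captures are ZFS snapshots, and the IO Domain boot LUNs discussed below are protected the same way. As a minimal sketch of the underlying operation, assuming shell access to a host that owns the pool (the pool and volume names are purely illustrative; on the ZFS Storage Appliance itself, snapshots are normally taken through its management interface rather than a local zfs command):

```python
# Sketch: take a dated ZFS snapshot of an iSCSI LUN volume.
# Pool/volume names are illustrative placeholders.
import subprocess
from datetime import date

volume = "pool-a/iodomain1-bootlun"  # hypothetical volume name
snapshot = f"{volume}@backup-{date.today():%Y%m%d}"
subprocess.run(["zfs", "snapshot", snapshot], check=True)
print(f"created {snapshot}")
```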
The SuperCluster platform includes multiple components, of which the following are backed up by osc-config-backup:

- M8 Logical Domains (LDoms) configuration (the older SuperCluster M7, T5-8, M6-32, and T4-4 platforms are also supported)
- GbE management switch
- InfiniBand switches
- iSCSI mirrors of rpool and u01-pool on dedicated domains
- ZFS snapshots of rpool and u01-pool on dedicated domains
- Explorer output from each dedicated domain
- SuperCluster configuration information (OES-CU) data
- ZFS Storage Appliance configuration information

For SuperCluster environments that include Root Domains and IO Domains, Root Domains can be treated like Dedicated Domains and backed up accordingly. IO Domains use iSCSI LUNs located on the included ZFS Storage Appliance for their boot disks, and these LUNs can be backed up simply by creating a ZFS snapshot (as sketched above). Redundancy is provided by the disk mirroring used with the ZFS Storage Appliance.

Applications and Optimized Solutions Using Oracle SuperCluster

For further information about application deployments in a highly available environment on Oracle SuperCluster, refer to the following links:

- SuperCluster for SAP
- Oracle Optimized Solution for Oracle E-Business Suite on Oracle SuperCluster
- Teamcenter on Oracle Engineered System

The hardware components of Oracle SuperCluster M8 provide a key set of ingredients for delivering highly available services. In combination with clustering software such as Oracle Solaris Cluster and Oracle Database RAC, services can continue without interruption during both planned and unplanned outages. An offsite configuration that replicates the main site can ensure that even a disaster need not lead to an extended loss of service.


Data Protection

Oracle Exadata: Can You Trust Yourself to Secure Your System?

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle.

Can you trust yourself with the security of your company’s critical data? At first, this must sound like a ridiculous question; as the old adage says, “If you can’t trust yourself, who can you trust?” But I’m not talking about trusting your own integrity, fearing you will steal your data or sabotage your system. I’m asking if you have enough confidence in your abilities to trust them with the security of your system. After all, do you trust yourself to fly the plane on your next trip, or deliver your next child, or even do your own taxes? Some things are best left to the experts, and securing your database server is clearly in that camp.

It seems as if we hear about a new data breach every few weeks. It could be credit bureaus, hotel chains, social media sites—no one seems immune. But there is no single vulnerability affecting all these victims. That makes it especially hard to avoid the next breach. There’s no checklist you can walk through that is going to guarantee you are secure. Rather, it takes hard work and lots of testing to ensure your system is secure. An IBM study conducted by the Ponemon Institute found the average cost of a data breach in 2018 was $3.86 million. Given the stakes, it makes sense to leave security to the security professionals. Security researchers have years of experience in locking down systems. They understand common vulnerabilities and have developed best practices and methodologies to reduce the risk of break-ins dramatically. Are you a security professional? My guess is “probably not,” and that is another reason to use engineered systems like Oracle Exadata.

So, how does Exadata protect against unauthorized access to data? It uses a defense-in-depth approach to security, starting with giving services and users only the minimal privileges required to operate the system.
Customers following Exadata’s default settings are protected using the following techniques:

- Minimal software install: Exadata does not install any unnecessary packages, eliminating the potential vulnerabilities associated with those packages.
- Oracle Database secure settings: locks down the Oracle database through settings developed through years of testing by Oracle developers.
- Minimum password complexity enforced: greatly reduces the risk that a user on the system chooses an easy-to-crack or easy-to-guess password.
- Accounts locked after too many failed login attempts: prevents someone from programmatically trying passwords to break into the system.
- Default OS accounts locked: prevents login from accounts that need not support login, reducing the password or key management burden.
- Limited ability to use the su command: prevents users from elevating their privileges on the system or from changing their identity.
- Password-protected secure boot: prevents unauthorized changes to the boot loader, or booting the system with unauthorized software images.
- Unnecessary protocols, services, and kernel modules disabled: eliminates threats from vulnerabilities in services not required for operation of the system.
- Software firewall configured on storage cells: prevents anyone from opening additional ports to access storage cells, enabling services that are not required and may present vulnerabilities.
- Restrictive file permissions on key security-related files: prevents accidental or intentional changes to security files that may compromise security.
- SSH listening only on management/private networks: prevents users on the public network from logging into a database server.
- SSH V2 protocol only, with insecure SSH authentication mechanisms disabled: prevents use of version 1 of the SSH protocol, which contains fundamental weaknesses that make sessions vulnerable to man-in-the-middle attacks, as well as other insecure mechanisms.
- Cryptographic ciphers properly configured for best security: prevents improperly configured ciphers from compromising security and uses hardware cryptographic engines to improve performance.
- Fine-grained auditing and accounting: all user activity is monitored and recorded on the system.

Now you might be thinking: these are all database and system configuration settings, and you can do it yourself. That is true, and if you are a security professional, you likely can. But what if you are not—do you know how to secure and harden the system properly? Regardless, there are also many features of Exadata that improve security further—features engineered into Exadata and not available on self-built platforms.

By default, all clusters running in a consolidated environment can access any ASM disks. Exadata tightens that security with ASM-scoped security, which limits access to underlying disk partitions (grid disks) to only authorized clusters. Because a single cluster may host multiple databases, DB-scoped security provides even finer-grained control, limiting access to specific grid disks to only authorized databases.

Exadata also checks for unauthorized access by scanning the machine for changes to files in specific directories. If changes are detected, Exadata will raise software alerts, notifying the administrator of a potential intrusion. Management operations and public data access are segregated to different networks, allowing tighter security on the public network interfaces without compromising manageability.
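Conceptually, the file-change scanning described above is a file-integrity monitor. The sketch below illustrates the general idea only; it is not Exadata’s actual implementation, and the watched directory and baseline path are placeholders:

```python
# Generic file-integrity check: hash files under a directory and
# compare against a stored baseline. Illustrates the concept only.
import hashlib
import json
import pathlib

WATCHED = pathlib.Path("/etc/security")   # illustrative directory
BASELINE = pathlib.Path("baseline.json")  # illustrative baseline store

def scan(root):
    """Return {path: sha256} for every regular file under root."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

current = scan(WATCHED)
if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"ALERT: {path} changed or is new")  # raise an alert
else:
    BASELINE.write_text(json.dumps(current))           # first run: record baseline
```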
VLAN support protects users from unauthorized access to network data by isolating network traffic. Similarly, access to the storage cells from the compute servers is also on an isolated network—one that is InfiniBand partitioned to ensure network traffic from one cluster is not accessible to another, eliminating the chance an attacker can steal data as it transits between compute and storage.

During the development process, security is built in. The Exadata development team routinely runs a variety of industry-standard security scanners to ensure the software deployed on the system is free from known vulnerabilities. If vulnerabilities are detected, monthly software updates quickly provide fixes to ensure your system is protected.

All these features are critical to security, but studies have repeatedly shown the most significant contributor to security vulnerabilities is not keeping up with software updates. Given the complexity and risk of patching today’s critical database systems, it’s not surprising. Many opt not to touch what is not broken, but what’s broken may not always be visible.

Exadata takes the risk and pain out of software updates. Risk is reduced as all database and Exadata software updates are extensively tested in the Exadata environment before shipping. Exadata customers also benefit from a community effect. There is a community of customers running Exadata, and issues are quickly discovered and fixed. If you build your own database environment, it’s possible only you will experience the issue, and you will suffer the associated disruption. The pain of software updates is reduced with Exadata Platinum support. This level of support, exclusive to Exadata, regularly patches your systems on your behalf, eliminating your having to deal with patching altogether. With the risk and pain of software updates reduced, Exadata systems are patched more frequently, kept up to date with security fixes, and overall more secure.

Finally, don’t forget all the database security features. Oracle Database has a rich set of features to protect your data, and all are compatible with Exadata. Oracle Database protects your data with encryption for data at rest and in transit over the network. It can enforce access restrictions for ad hoc data queries by filtering results based on database user or a data restriction label. Databases themselves can be isolated within a rack using virtual machine clusters, within a single VM using OS user-level isolation, or within a container database using the Multitenant database option. You can even protect valuable data from your administrators using Oracle Database Vault, a security feature that prevents DBAs from accessing arbitrary data on the systems they are managing. Lastly, to ensure compliance, Oracle Audit Vault and Database Firewall monitor Oracle and non-Oracle database traffic to detect and block threats and consolidate audit data from databases, operating systems, directories, and other sources.

With the attention to security and the rich set of security features built in or available as options, Oracle Exadata is the world’s most secure database machine. Proven by FIPS compliance and many deployments satisfying PCI DSS compliance, it’s no wonder that hundreds of banks, telecoms, and governments worldwide have evaluated Exadata and found it delivers the extremely high level of security they require. If you truly value security, don’t trust yourself to do it right.
Follow the path of these leading enterprises and protect your data with Oracle Exadata.

This is part 5 in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata. Our next post will focus on Manageability and examine the benefits Engineered Systems bring to managing your database environments.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Cloud Infrastructure Services

How to Achieve Public-Cloud Innovation in Your Own Data Center

Today's guest post comes from John Cooper, Vice President of Oracle Solutions Practice at Cognizant.

Organizations are transitioning from traditional IT delivery models to cloud-based models to better serve customers, improve productivity and efficiency, and rapidly scale their businesses. Despite the benefits of public-cloud platforms, concerns over data sovereignty, compliance, and privacy have deterred many organizations from accelerating the migration of their workloads to the cloud.

Boomeranging IT Deployment Models

In fact, 80% of the 400 IT decision-makers who participated in IDC’s 2018 Cloud and AI Adoption Survey said their organization has migrated either applications or data that were primarily part of a public-cloud environment to an on-premises or private-cloud solution in the last year. The reasons? No. 1 is security, followed by performance and cost control. In what are often referred to as “reverse migrations,” organizations coming back from public cloud are clearly getting smarter about which workloads belong there and which ones do not. This is certainly true in highly regulated industries, such as finance, government, and defense, where major concerns around data security and data placement have traditionally meant that data must stay within the organization’s firewall. Other concerns include limited control over assets and operations, Internet-facing APIs, and privacy.

Privacy in particular has taken on greater significance after passage of the European Union’s General Data Protection Regulation (GDPR). This legislation is intended to safeguard EU citizens by standardizing data privacy laws and mechanisms across industries, regardless of the nature or type of operations. Requirements such as client consent for use of personal data, the right to personal data erasure without outside authorization, and standard protocols in the event of a data breach carry heavy fines if not strictly adhered to.

Even with these challenges, IT leaders are still seeking the types of benefits that public cloud can bring. Foremost among them are reduction of technology total cost of ownership (TCO), integration with DevOps and Agile methodologies, reduced complexity in provisioning and duplication, and dynamic scaling both horizontally and vertically. These leaders want to feel confident that moving to the public cloud won’t wipe out their data or leave it vulnerable to hackers, that customization efforts on mission-critical systems such as ERP won’t be adversely affected, and that service level agreements (SLAs) for high performance and predictability will be met.

Fortunately, Oracle can help organizations take advantage of public-cloud-like innovation in their own data centers with Oracle Exadata Cloud at Customer. Exadata Cloud at Customer provides the same database hardware and software platform that Oracle uses in its own public-cloud data centers and puts it into the customer’s data center. Oracle’s integrated technology stack offers wide-ranging benefits that enable organizations to run database cloud services similar to public cloud, while more easily complying with data regulations and governance. Oracle patches, maintains, and updates the Exadata Cloud at Customer infrastructure remotely and brings best-in-market hardware to your data center for periodic refreshes while maintaining a tight leash on your data. It’s essentially your next step closer to public cloud, but without the risks.
Cognizant works closely with Oracle as a premier partner through our Oracle Cloud Lift offering, which enables enterprises to rapidly obtain value from their Oracle Cloud platform and infrastructure investments. We are an Oracle Cloud Premier Partner in North America, EMEA, APAC, and Latin America. Our offerings include migrating Oracle and non-Oracle enterprise workloads to Oracle Exadata Cloud at Customer and other Oracle Cloud models. Our tested and reliable solution supports clients with application inventory, assessment, code analysis, migration planning and execution, and post-migration support. We start with an in-depth inventory of the client’s current enterprise landscape, collecting data that feeds into our cloud assessment tools. The data is key for calculating the appropriate fit among public, hybrid, and private clouds. This process also helps predict the most appropriate model for migrating the client’s environment to the cloud: Infrastructure as a Service (IaaS) or Platform as a Service (PaaS).

The Benefits of On-Premises Cloud

While public-cloud vendors have built software-defined/automated IT services that are very attractive to developers, it’s important to understand that those technologies are not unique to public cloud. With automated, modernized, and software-defined infrastructure, on-premises cloud solutions are safer, more predictable, and cost-effective. You stay in control of your data and where it’s located, and subscribe only to the infrastructure, platform, and software services you need. Here are the types of benefits you can expect:

- Flexibility: Unlike some cloud providers, Oracle gives you access to the underlying infrastructure in either a dedicated or a multi-tenanted environment. This provides great flexibility and enables you to implement appropriate foundations for your specific needs.
- Performance: Choose an appropriate level of cloud performance for your applications, from standard compute to faster, high-end performance.
- Ease of migration: There is no need to significantly re-architect your applications or platform when you migrate to on-premises cloud, because Oracle’s cloud is specifically engineered to work with Oracle databases, middleware, and applications. If you choose a different cloud provider, it’s more than likely that major re-architecting will be involved.
- Security: Implement your own enterprise-grade security in the same manner as on-premises, thanks to the ability to access the underlying infrastructure.
- Scalability: Scale vertically and horizontally to support large, persistent data workloads.
- Ease of management: Manage your cloud estate with the same enterprise tooling that most organizations already use for their existing ecosystem.

A New Paradigm for Infrastructure Strategy

Having public-cloud-like power in your data center offers a new infrastructure strategy paradigm: Current systems are not disturbed, and organizations can build cloud-native applications or modernize existing ones more rapidly while infrastructure and data reside safely behind the firewall. For organizations interested in embracing digital transformation, on-premises cloud can be a positive initial step. Your organization will have access to a host of cloud services for data management. Why make costly missteps in public cloud, risk your data, or both?
Constellation Research’s 2018 report Next-Gen Computing: The Enterprise Computing Model for the 2020s confirms that Oracle Cloud at Customer offers “the most complete on-premises offering” among leading cloud vendors, with “the ability to move workloads back and forth between cloud and on-premises” as one of its key differentiators. Cognizant and Oracle work together to help clients implement the new operating models, processes, and information systems needed to remain competitive and profitable in today’s digital revolution. If the time to start securely scaling your business is now, then it’s time to take a harder look at Oracle Exadata Cloud at Customer and Cognizant’s Oracle Cloud Lift offering.

About the Author

John Cooper leads the Oracle Solutions Practice in North America. In a career spanning 29+ years, John has worked extensively in G&A (HR, Finance, IT, Procurement) and Consulting. His areas of specialization include design and implementation of G&A organizations, and processes and technology solution enablers. John has led HR functions and served as the operational head of a multi-line consulting company. He has expertise in the alignment of business and information technology strategies. As the practice lead, John is responsible for delivery of services related to consulting, implementation, upgrade, and application management services across the entire Oracle spectrum of products and services.


Engineered Systems

10 New Year’s Resolutions Your IT Organization Should Adopt

IT organizations of all sizes should make a resolution… We are well into 2019 now, but it’s not too late to resolve to make improvements in your IT organization—improvements that can fix your processes, increase your productivity and, most importantly, make your job easier. Here are 10 resolutions you should consider putting into practice this year.

Resolution #1: Refine Your Internal Systems

Are you spending more than 80% of your IT budget just to keep the lights on? Have you considered a hybrid approach to your internal systems? Making strategic decisions about which applications to keep on premises, which to move to the cloud, and which to keep in the Oracle Cloud behind your firewall—with Oracle Cloud at Customer—can help you reallocate your spending to more business-critical projects.

Resolution #2: Know Your Competition

Industries are changing at an unprecedented rate to overcome challenges presented by new entrants, new business models, ever-increasing customer demands, and changing workforce demographics. It is imperative to understand your industry and the trends that you will face in 2019 so that you can anticipate and get ahead of them. We can look at a few examples. In the auto industry, you have to worry about autonomous cars, regulations that may ban gasoline-powered cars, and competition from electric vehicles. In the financial industry, fintechs continue to transform offerings and customer expectations. Will you explore collaboration that can help your organization move into new markets or expand its offerings rapidly? Or will you try to go it alone? If you decide to go it alone, you will need to anticipate how you’ll compete with those nimbler fintech-powered competitors. Retail is another industry that has undergone incredible changes driven by the internet and mobile. The way we shop and purchase everything from fashion to electronics to groceries has been disrupted. How can your IT organization help respond to this radical change?

Resolution #3: Understand How You Can Take Advantage of the Power of AI and Blockchain

New technologies are presenting opportunities to help you succeed in the face of new challenges, fundamentally changing how we interact with technology and data. Emerging technologies can help you overcome many challenges in 2019. Some emerging technologies to consider include apps with built-in artificial or adaptive intelligence (AI), chatbots to automate and improve the user experience, IoT analytics to gain deep insights, blockchain applications that create high-trust transactions, and a self-driving autonomous database that can virtually eliminate many mundane IT processes.

Resolution #4: Reduce Operational Costs

If your main goal for this year is to reduce operational costs, consider leveraging AI, optimizing licensing costs, and deploying newer and better hardware. While there are many ways to do this, Oracle Engineered Systems with a capacity-on-demand feature are one option to consider.

Resolution #5: Deliver More Value to Clients

Staying ahead of the competition is key, and that starts with delivering increasing value to clients by innovating and continually improving solutions. Incorporating components of AI and machine learning can help your organization break free from the status quo and achieve significant gains in business innovation.

Resolution #6: Make Faster Business Decisions

Leveraging advanced analytics and AI can help you make better and faster data-driven decisions.
Data is your biggest commodity, so it makes sense to put it to work for your business so you can serve customers more effectively. If you are evaluating analytics solutions, Oracle combines machine learning with years of database optimization to provide a self-driving database that includes the highest level of security.

Resolution #7: Improve Data Security

Security breaches are no joke. Constantly having to upgrade and patch your environment can be a tedious task and is easily overlooked—with potentially serious business consequences. Using AI embedded into security can help to predict possible breaches. Look for solutions that can deliver this kind of security.

Resolution #8: Consolidate Applications onto the Cloud

Are you considering signing on with best-of-breed cloud providers to move your organization forward? While this seems like a sound strategy, you are going to face some big challenges across cloud providers, including different data, management, service, and security models. Essentially, you are recreating the same challenges that exist today in your data center. And, once again, the burden falls on you to manage all of this complexity and make it all work together. Oracle offers a different option. We have designed the most complete and integrated cloud available in the industry across data-as-a-service, software-as-a-service, platform-as-a-service, and infrastructure-as-a-service layers. Simply put, it’s the only cloud you’ll need to run your entire organization.

Resolution #9: Adopt Modern Solutions

Instead of spending most of your IT budget on legacy systems, consider the cost effectiveness of updating to new, more modern solutions. It will help to reduce your IT footprint and save on power and cooling in your data center. Moreover, you will see improved performance to run your business and keep up with ever-increasing customer demands.

Resolution #10: Consider Oracle’s Flexible Deployment Options

Whether you are tied to on-premises infrastructure, plan to move to the cloud, or have a hybrid infrastructure, Oracle offers a unique approach you may want to explore in 2019. We provide three distinct deployment models: If you’re not ready to move to the cloud, you can take advantage of our engineered systems on-premises in the traditional data center. We’re also the only cloud provider to offer a fully managed cloud in your data center. With Cloud at Customer, we deploy an instance of our cloud in your data center, behind your firewall—giving you the security and compliance you need with the benefits of cloud. And, of course, you can subscribe to our complete services in the Oracle Cloud. We believe that this approach enables us to serve all types and sizes of customers and meet their business needs and maturity levels, no matter where they are today and where they plan to be at the end of 2019 and beyond.

Here’s to a Better, Happier IT Organization in 2019!

It may not be possible to execute on all 10 resolutions this year, but using these as a starting point, you can choose those that will help your IT organization move the needle the farthest and fastest toward your goals. So, from all of us at Oracle, we wish your IT team a happy and productive new year.


Engineered Systems

What Is a Storage Index?

What is a Storage Index? To answer that, let’s first take a look at a database. The purpose of a database is to store information in columns, called fields, and rows, called records.

For example, let’s say you’re looking for all blue shoe purchases. Here’s how it works, starting at the compute layer. You run a query asking for all the transactions between January and March where someone bought blue shoes. That query sends a signal from the compute layer down to what is called the storage layer, where all the blocks of data from your database are stored. You’ve asked for blue shoes, but conventional storage effectively tells the compute layer, “I cannot filter through my blocks of storage to give you just blue shoe purchases.” So the storage reads and sends up all the blocks of data for all shoes. This causes I/O bottlenecks in your storage and network, because too many blocks of data are being pushed up. And now, you sit there waiting for results.

A Better Way: Oracle Exadata Storage Indexes

There is a better way to get faster results and eliminate performance bottlenecks: Oracle Exadata’s storage indexes. Oracle Exadata’s smart storage can actually figure out which blocks of storage definitely cannot contain blue shoes.

How does this work? The idea of storage indexes is that each region of storage holds information, called key statistics, which describes the data in that region. In this example, the key statistics describe colored and non-colored shoes. If the key statistics for a region reveal that it contains colored shoes, that region must be read and processed to find blue shoes. But a region whose key statistics show it holds no colored shoes can be skipped entirely: the storage index tells us the value cannot be there, so that region is never read or processed.

Exadata’s smart storage indexes dramatically increase performance and accelerate the speed of a query. This is because data blocks in regions that cannot contain the value (here, regions with no colored shoes) are automatically ignored, which eliminates all the I/O and CPU needed to search those regions. It’s like having fine-grained partitioning, but on multiple columns instead. Key statistics are kept across many columns: for example, the type of shoe, the size of a shoe, and the brand of a shoe, all at once and automatically.

And the best part? While most indexes incur an overhead expense, particularly on updates, that is borne by the database server, Exadata storage indexes have a near-zero cost. The CPU and memory cost of maintaining a storage index is very small, and it is offloaded to the storage servers.

To learn more, speak to an Oracle Exadata Sales Specialist today.
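To make the region-pruning idea concrete, here is a minimal model of how min/max key statistics let a query skip regions. It is purely illustrative; Exadata maintains its statistics automatically and transparently in storage:

```python
# Minimal model of storage-index pruning: each storage region keeps
# min/max statistics for a column; a query can skip any region whose
# [min, max] range cannot contain the value it is looking for.

regions = [
    {"min": "black", "max": "brown", "rows": ["black", "brown"]},
    {"min": "blue",  "max": "green", "rows": ["blue", "green", "gray"]},
    {"min": "red",   "max": "white", "rows": ["red", "tan", "white"]},
]

def query(value):
    hits, scanned = [], 0
    for region in regions:
        if not (region["min"] <= value <= region["max"]):
            continue              # skip: the value cannot be in this region
        scanned += 1              # only relevant regions incur I/O and CPU
        hits += [row for row in region["rows"] if row == value]
    return hits, scanned

matches, regions_read = query("blue")
print(matches, f"(read {regions_read} of {len(regions)} regions)")
# ['blue'] (read 2 of 3 regions) -- the third region was never touched
```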


Data Protection

New ESG Research Highlights GDPR Data Management and Protection Challenges and How Oracle Engineered Systems May Help Customers Address Them

The European Union General Data Protection Regulation (GDPR) represents a broad new approach to customer privacy. GDPR currently applies to companies that have or process personally identifiable information (PII) of individuals located in the European Union, but it represents a global trend that is already being implemented in other countries. These and similar new laws will have lasting effects on the way global corporations do business.

Regulatory compliance has affected organizations around the world for decades, and in our digital economy, IT is now at the center of the effort. Compliance isn’t easy when access, retention, and deletion of data throughout an enterprise are involved. Indeed, ESG has determined that 65% of organizations that have been subject to regulatory agency audits have failed at least part of one in the past five years due to issues with data access or retention. Past audits, increasing stakeholder pressure, and new data protection regulations are leading to new concerns for IT managers and their teams.

GDPR touches on many different aspects of how an enterprise manages PII, including:

- Personal consent and data management: Since GDPR took effect, businesses must have, in certain instances, their clients’ expressed permission via “opt-in” before logging any data. When requesting consent, firms must outline the purpose for which the data will be collected, and they may need to seek additional consent to share information with third parties. This change in regulation means many businesses must reexamine their CRM and database management systems to ensure they are maintaining the required records in the proper ways. For instance, are a minimum number of data copies being retained for a minimum amount of time, and are all forms of personally identifiable information, including pictures and videos, being anonymized through encryption or other means?
- Data access and the right to be forgotten: GDPR gives consumers significant control over their private data, including the right to access, review, and correct it on demand. Consumers can similarly, under certain circumstances, request the removal of their personal information, a process known as the right to be forgotten.
- Data breaches and notifications: GDPR ups the ante significantly in the case of a data breach. Data controllers must report breaches that are likely to result in a risk to people’s rights and freedoms to the relevant data protection authority within 72 hours, providing details regarding the nature and size of the breach. Additionally, if a breach is likely to result in a high risk to the rights and freedoms of individuals, the GDPR says data controllers must inform those concerned directly and without undue delay. For serious violations, companies may be fined amounts up to the greater of 20 million Euros or 4% of their global annual turnover.
- Processors and vendor management: Enterprises are increasingly relying on outsourced development and support functions, so the private consumer data they maintain is often accessed by external vendors. Whenever a data controller uses a data processor to process personal data on their behalf, a written contract needs to be in place between the parties. Such contracts ensure both parties understand their obligations, responsibilities, and liabilities.
Similarly, non-EU organizations working in collaboration with companies serving EU citizens need to ensure adequate contractual terms and safeguards when sharing data across borders.

How Oracle Engineered Systems May Help Customers Meet GDPR Requirements

There is no “silver bullet” for meeting GDPR requirements. An organization’s internal processes will have as much or more impact on its ability to become GDPR compliant than the hardware and software it uses to process and protect its data. However, software and hardware can play a beneficial role that supports an organization’s compliance efforts.

The Enterprise Strategy Group (ESG), a leading IT analyst, research, validation, and strategy firm, has authored a report that looks at how the combination of Oracle Database, Oracle software, and Oracle Engineered Systems, specifically Exadata and Recovery Appliance, may help customers meet GDPR and similar data protection compliance requirements. ESG examined how the combined capabilities of these software and hardware products may help customers develop and maintain internal processes that simplify their efforts to meet GDPR compliance requirements. Engineered Systems work together to deliver greater efficiency and flexibility to production and data protection environments alike. These solutions give customers powerful tools that can be used to strengthen their compliance efforts.

While a significant portion of GDPR compliance involves improving business processes and ensuring broad participation across an organization, the data-centric nature of GDPR makes it imperative to look at mission-critical databases, because many of them contain PII. The ten ways ESG identified in which Oracle Engineered Systems may help customers create and maintain their compliance processes are:

- Data Discovery
- Data Minimization
- Data Deletion
- Data Masking/Anonymization
- Encryption/Security
- Access Control
- Monitoring/Reporting
- Continuous Protection
- Integrity Checking
- Recoverability

To learn more about how Oracle Engineered Systems can help you address GDPR and meet data regulation requirements, read the full ESG report.
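As one small, hedged illustration of the masking/anonymization category above (a generic sketch, not a specific Oracle feature), pseudonymizing direct identifiers with a salted hash keeps records joinable without exposing the raw PII:

```python
# Generic pseudonymization sketch: replace direct identifiers with
# salted hashes so records remain joinable without exposing PII.
# Not a substitute for the database's own masking/encryption features.
import hashlib
import os

SALT = os.urandom(16)  # keep secret; one salt per dataset

def pseudonymize(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "country": "DE"}
masked = {k: (pseudonymize(v) if k in ("name", "email") else v)
          for k, v in record.items()}
print(masked)  # name and email replaced by opaque tokens
```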


Data Protection

Wikibon Reports PBBA Operating Costs Are 68% Higher than Oracle’s Recovery Appliance

Leading tech influencer Dave Vellante, Chief Research Officer at Wikibon, recently published an enlightening new research report comparing Oracle’s Recovery Appliance with traditional Purpose-Built Backup Appliances (PBBAs). The analysis, titled “Oracle’s Recovery Appliance Reduces Complexity Through Automation,” found Oracle’s Recovery Appliance helped customers reduce complexity and improve both Total Cost of Ownership (TCO) and enterprise value.

Traditionally, the best practice for mission-critical Oracle Database backup and recovery was to use storage-led PBBAs, such as Dell EMC Data Domain, integrated with Oracle Recovery Manager. However, this approach remains a batch process that involves many dozens of complicated steps for backups and even more steps for recovery, which can prolong the backup and recovery processes and cause errors leading to backup and recovery failures.

Oracle’s Recovery Appliance customers report that TCO and downtime costs—lost revenue due to database or application downtime—are significantly reduced due to the simplification and automation of the backup and recovery processes. The Wikibon analysis estimates that over four years, an enterprise with $5 billion in revenue can potentially reduce its TCO by $3.4M and have a positive impact on the business of $370M. Wikibon’s findings indicate that operational costs are 68% higher for PBBAs such as Data Domain relative to Oracle’s Zero Data Loss Recovery Appliance (ZDLRA) for a typical Global 2000 enterprise running Oracle Databases.

Bottom Line

Wikibon has exposed what Oracle clients have known all along: choosing Oracle’s Recovery Appliance results in higher efficiency through automation, an overall reduced TCO, and a positive impact on both an enterprise’s top and bottom line.

Read the full report

Discover more about Oracle’s Recovery Appliance


Engineered Systems

Fast Food, Fast Data: HAVI Is Feeding QSR’s Massive Data Appetite with Cloud-Ready Technology

Quick-service restaurants (QSRs) have always focused on speed, value, and convenience for their competitive advantage, but recent trends have made that mission exponentially more complex for companies in this $539 billion global industry. Consumers increasingly demand greater choice, more customization, and a more personalized marketing experience. They want the ability to order, plan delivery, and pay on their mobile devices. In fact, 25% of consumers report that the availability of tech figures into their decision of whether to visit a specific QSR location.

As a global company providing marketing analytics, supply chain management, packaging services, and logistics for leading brands in food service, HAVI Global Solutions may be a behind-the-scenes player in the QSR arena, but it is on the front lines of technology-driven innovation. For one of its customers, a very large global QSR, HAVI computes 5.8 billion supply forecasts every day, down to the individual ingredient level, for 24,000 restaurants across the globe. With data points and locations continuing to grow, HAVI’s on-premises infrastructure was reaching capacity. “Traditional build-your-own IT hardware infrastructure stacks were not helping us with all our problems,” says Arti Deshpande, Director, Global Data Services at HAVI. “We were always bound by the traditional stack: storage, network, compute—and when our workload is mainly IO bound, that traditional stack was not helping us with all our problems.”

Ensuring the Right Product at the Right Time

“In the QSR business, if you don’t have the right food in the restaurant at the right time, it’s very difficult to meet customer expectations,” says Marc Flood, CIO at HAVI. When Flood joined HAVI as the company’s first global corporate CIO in 2013, he found a complex IT infrastructure environment spread across multiple data centers and co-location providers. “I wanted to establish a common backbone with a partner that would work with our cloud-first strategy,” he recalls. Ultimately, Flood chose to consolidate data operations for the company’s ERP solutions—NetSuite and JD Edwards—onto Oracle Exadata Database Machine running in the primary data centers, with DR in five Equinix data centers around the globe. HAVI chose Equinix not only for its global footprint, which closely matched HAVI’s own, but also because of its dedicated interconnection with the Oracle Cloud via Oracle FastConnect. “One of the crucial capabilities we sought was the ability to leverage Oracle’s cloud solutions to complement our on-premises solution,” he says. “The cross-connect quality is incredible; the latency on the cross-connect is very low.” HAVI consolidated 34 databases onto two racks of Exadata Database Machine X6-2, resulting in 25% to 35% performance gains versus the previous HP infrastructure. Exadata met HAVI’s requirement for elastic scalability without performance degradation to stay ahead of its QSR client’s projected worldwide growth.

Streamlining Disaster Recovery Without Sacrificing Speed

When it came to re-examining the company’s disaster recovery (DR) strategy, HAVI determined that it would need its DR system to achieve 75% to 80% of native performance. “It is essential that we be able to continue to forecast regardless of whether we would have an event in our primary data center while also keeping costs under control,” Flood says.
“That means meeting our DR requirements in the right way, establishing appropriate RPOs (recovery point objectives) and RTOs (recovery time objectives) while being able to maintain capability and the cost model in alignment with our clients’ expectations.” To meet these criteria, HAVI worked with Oracle to create a DR solution using the Oracle Cloud to offload the huge overhead required for the DR system from the primary database servers. The solution not only resulted in a cost savings of approximately 35%, but also exceeded performance requirements. “Almost 95% of our workload ran at 100% performance of Exadata, of which 60% actually ran 200% faster,” Deshpande says. Flood and Deshpande were impressed with the speed with which the custom solution could be developed and implemented. “It was a very fast process—a great example of partnering and then moving quickly from proof of concept (POC) into live production,” Flood says. Together, Oracle and HAVI ran eight POCs over three months and fully deployed the system over the course of another three months.

Preparing for the Future with Cloud-Ready Infrastructure

QSR is hardly the only industry experiencing change, thanks to the proliferation of data. Finance, ecommerce, and healthcare are just some of the other industries evolving as companies learn how to mine the data deluge for competitive advantage. For HAVI, migrating to a cloud-ready environment means removing the barriers to growth for itself and its customers. “We were able to grow the service that we provide without experiencing any reduction in performance to our customer, and we’re able to assure them of continuous service at the level they expect,” Flood concludes.

Learn more about how Oracle Exadata and cloud-ready engineered systems can enable your company to scale and innovate your competitive advantages. Subscribe below if you enjoyed this blog and want to learn more about the latest IT infrastructure news.


5 Exciting Moments at Oracle OpenWorld

OpenWorld is an exciting conference with excellent networking and information on how emerging technologies are affecting the IT industry. With over 2,000 sessions and events, OpenWorld has a lot to offer. We know that not everyone can make it to the conference in person, so if you didn’t get a chance to attend, here are the top 5 exciting things that happened at OpenWorld.

1. Exadata Customers Uncover Their Keys to Success

One of the most insightful moments at Oracle OpenWorld 2018 was listening to Exadata customers describe the amazing performance improvements and better business results that have helped them develop a competitive edge in the market. Wells Fargo and Halliburton both shared the significant cost savings and operational benefits they gained from consolidating their hardware and software onto Oracle Exadata in this session. David Sivick, technology initiatives manager at Wells Fargo, shared how the company leveraged 70 racks of Exadata to replace several thousand Dell servers. Sivick said that the company has “realized a multi-million dollar a year saving…There’s a 78% improvement in wait times, 30% improvement on batch, 36% reduction in space from compression and an overall application speed improvement of 33%.” (Source: diginomica). Shane Miller, senior director of IT at Halliburton, also reported significant cost savings and business results. For instance, Shane mentioned that with Exadata, “we saw a 25% reduction in the time it takes to close at the end of the month… We saw load times [go] from 6 hours to like 15 minutes.” (Source: diginomica).

2. Constellation Research and Key Cloud at Customer Customers Share Stories About Innovation

In the two years since the Cloud at Customer portfolio was announced, customers have seen significant innovation with their Cloud deployments. As an example, Sentry Data Systems’ Tim Lantz shared how Exadata Cloud at Customer allows them to have their cake and eat it too. Kingold Group’s Steven Chang shared how important data sovereignty is to their digital transformation with Exadata Cloud at Customer. And customers in other sessions, including Dialog Semiconductor, Galeries Lafayette, Quest Diagnostics, and more, shared their stories at OpenWorld. To learn more, read Jessica Twentyman’s article in Diginomica.

3. Oracle Database Appliance Customers Shared How They Maximize Availability

During the Oracle OpenWorld customer panel, we heard how Oracle Database customers are driving better outcomes with Oracle Database Appliance (ODA) versus traditional methods of building and deploying IT infrastructure. We covered the business value and customer perspectives on how Oracle Database Appliance has delivered real value for their Oracle software investments while simplifying the life of IT without additional costs. Our special guests operate in education, mining, finance, and real estate development. One of the main topics was using a multi-vendor approach vs. an engineered system. As DBAs managing day-to-day operations, many panelists had faced performance and diagnostic issues that a multi-vendor solution was not helping to solve. With ODA they can manage the entire box, which allows easy patching with one single patch that does it all. David Bloyd of Nova Southeastern University stated: “In the past, we would take our old production SUN SPARC server that was out of warranty to be our dev/test/stage environment when purchasing a new production server to save money.
Now we can test our ODA patches on the same software and hardware as our production environment by having the same ODAs for both environments.” Furthermore, our panelists expressed the need to have 24x7 availability with no downtime. Konstantin Kerekovski of Raymond James stated: “The ODA HA model is key because being in financial services you cannot go down; high availability is key. We have two setups: in dev we are using RAC One Node, and also for DR, we can consolidate many databases on one ODA. In production, we have two instances of RAC running on ODA compute nodes, so no downtime.” As we approach the latest generation of Oracle Database Appliance, we are seeing performance, security, and reliability increase. Rui Saraiva of KGHM International stated: “With the latest implementation of the ODA X7 we were able to significantly increase application performance and thus improve business efficiencies by saving time for the business users when they run reports or execute their business processes.” Are you considering Oracle Database Appliance to run your Oracle Database and applications? Check out this blog by Jérôme Dubar of dbi services on “5 mistakes you should avoid with Oracle Database Appliance.”

4. Oracle Products Demo Floor

The OpenWorld demo grounds featured 100+ Oracle product managers explaining the technical details of each product. This is an excellent opportunity to learn how to get the most out of your Oracle investments from the people who designed the products! In case you missed it, here is a video showing the exciting things that were happening at the Exadata demo booth.

5. Oracle CloudFest Concert

Oracle hosted a private party exclusively for customers! This intimate concert featured Beck, Portugal the Man, and Bleachers. Guests enjoyed a night out at the ballpark with free food, drinks, entertainment, and networking. Overall, the Exadata experience at Oracle OpenWorld was amazing. To learn more, check out the new Exadata System Software 19.1 release, which serves as the foundation for the Autonomous Database.


Engineered Systems

Top 5 Must See Exadata & Recovery Appliance Sessions at Oracle OpenWorld

Are you feeling butterflies in your stomach yet? Oracle OpenWorld 2018 is around the corner, and we want to make sure you’re able to maximize your time at the event from October 22-25. So, we’ve decided to give you a personal guide to the top five sessions you should attend while exploring Oracle Exadata and Oracle Zero Data Loss Recovery Appliance (ZDLRA). What’s more, you can find all of the key Exadata sessions by checking out this Exadata Focus-On-Document, which highlights the top customer case study, product overview, business use case, product roadmap, product training, and tips-and-tricks sessions. As you can imagine, Oracle has some exciting innovations in store for Exadata across the Exadata on-premises, Exadata Cloud at Customer, and Exadata Cloud Service consumption models. You will also want to check out the latest developments happening on Oracle ZDLRA. So, we recommend the five key sessions below on Exadata and ZDLRA to make it easier for you to navigate the event.

Top 5 Exadata and ZDLRA sessions that you can’t miss while at OpenWorld:

Monday Sessions: Exadata Strategy & Roadmap

1. Oracle Exadata: Strategy and Roadmap for New Technologies, Cloud, and On-Premises
Speaker: Juan Loaiza, Senior VP at Oracle
When: Monday, 10/22, 9:00-9:45 am
Where: Moscone West - Room 3008

Many companies struggle to accelerate their online transaction processing and analytics efforts, and as a result they face faltering business performance. Sound familiar? This session is a perfect gateway to understanding how Exadata can help erase this problem and power faster processing of database workloads while minimizing costs. In this session, Oracle’s Senior VP, Juan Loaiza, will explain how Oracle’s Exadata architecture is being transformed to provide exciting cloud and in-memory capabilities that power both online transaction processing (OLTP) and analytics. Juan will uncover how Exadata uses remote direct memory access, dynamic random-access memory, nonvolatile memory, and vector processing to overcome common IT challenges. Most importantly, Juan will give an overview of current and future Exadata capabilities, including disruptive in-memory, public cloud, and Oracle Cloud at Customer technologies. Customers like Starwood Hotels & Resorts Worldwide, Inc. have used the key Exadata capabilities to improve their business. For instance, they have been able to quickly retrieve information about things like customer loyalty, central reservations, and rate-plan reports for efficient hotel management. With Exadata, they can run critical daily operating reports such as booking pace, forecasting, arrivals reporting, and yield management to serve their guests better. Check out this session to see how Exadata helps customers like Starwood Hotels gain these results.

Customer Panel on Exadata & Tips to Migrate to the Cloud

2. Exadata Emerges as a Key Element of Six Journeys to the Cloud: Customer Stories
Speakers: David Sivick, Technology Initiatives Manager, Wells Fargo; Claude Robinson III, Sr. Director Product Marketing, Oracle; Shane Miller, Halliburton
When: Monday, 10/22, 9:00-9:45 am
Where: Moscone South - Room 215

Every company today is trying to build a cloud strategy and make a seamless migration to the cloud without impacting its current, on-premises IT systems. This is a challenging feat, and hard to accomplish in a multi-vendor environment.
The good news is that Oracle has helped more than 25,000 companies transition to the cloud. For these large multinational customers, the journey to the cloud began years ago with Oracle Exadata as a cornerstone. They’ve modernized by ditching commodity hardware for massive database consolidation, saving millions in Oracle Database licensing, and improving the safety and soundness of their data. Wells Fargo’s Technology Initiatives Manager, David Sivick, and Halliburton’s Shane Miller have experienced such transformations. And within the last few years, customers like David and Shane have started to consume Exadata in more flexible ways as part of their digital transformation drive. They will sit down with Oracle’s Sr. Director of Product Marketing, Claude Robinson, to share their Exadata cloud journey stories around how they:

Optimized their database infrastructure
Successfully drove their application and database migration
Achieved application development and data analytics

This interesting session will feature Wells Fargo’s and Halliburton’s stories and tips that you can use as you build a cloud strategy, and will help you understand how Exadata can support your path to the cloud.

Tuesday Sessions: Customer Panel on Exadata, Big Data, & Disaster Recovery

3. Big Data and Disaster Recovery Infrastructure with Equinix and Oracle Exadata
Speakers: Claude Robinson III, Sr. Director Product Marketing, Oracle; Arti Deshpande, Director, Global Data Services, HAVI Global Solutions; Robert Blackburn, Global Managing Director, Oracle Strategic Alliance, Equinix
When: Tuesday, 10/23, 3:45-4:30 pm
Where: Moscone South - Room 214

We think that some of the most powerful sessions are those where customers and partners openly share their experiences, so you can relate to their challenges and see how they achieved IT and business success. So we picked this session, which uncovers how Arti Deshpande, Director of Global Data Services at HAVI Global Solutions, leveraged an Oracle offering to achieve HAVI’s IT success. Arti will give you the inside scoop on how HAVI was able to streamline disaster recovery in the Oracle Cloud without sacrificing speed, and also consolidated dozens of databases onto Exadata to improve performance. Beyond learning from HAVI’s customer experience, you will also hear about the solution architecture created through Oracle’s and Equinix’s partnership. Equinix will share how it partnered with Oracle’s Engineered Systems and Oracle Cloud teams to create a distributed on-premises and cloud infrastructure: a private, high-performance direct interconnection between the Oracle Exadata Database Machine solution and Oracle Cloud, using Oracle Cloud Infrastructure FastConnect on Equinix Cloud Exchange Fabric. Finally, Equinix will share how this combined solution bypasses the public internet, allowing for direct and secure exchange of data traffic between Oracle Exadata and Oracle Cloud services on Platform Equinix, the Equinix global interconnection platform.

Customer Panel on Exadata Cloud at Customer
4. Unleash the Power of Your Data with Oracle Exadata Cloud at Customer
Speakers: Vishal Mehta, Sr. Manager, Architecture, Quest Diagnostics; Maywun Wong, Director of Product Marketing, Cloud Business Group, Oracle; Jochen Hinderberger, Director IT Applications, Dialog Semiconductor; Cyril Charpentier, Database Manager, Galeries Lafayette
When: Tuesday, 10/23, 5:45-6:30 pm
Where: Moscone South - Room 214

If you’re looking for more insight about Exadata, specifically Exadata Cloud at Customer, this is a great session to check out because it features first-hand experiences from customers using the Cloud at Customer consumption service and how it has impacted their businesses. In this interactive customer panel, IT and business leaders from Quest Diagnostics, Dialog Semiconductor, and Galeries Lafayette will discuss their business success with bringing the cloud into their own data centers for their Oracle Database workloads, as well as answer your questions. Vishal Mehta, Sr. Manager, Architecture at Quest Diagnostics, will share how they consolidated dozens of database servers onto Exadata and freed up many of their admins for more strategic tasks. By using Exadata Cloud at Customer, they were able to standardize their database services and configurations to yield benefits across many dimensions. Jochen Hinderberger, Director of IT Applications at Dialog Semiconductor, will discuss the company’s decision to select Exadata Cloud at Customer because it had the capacity and performance needed to support their highly demanding tasks, which include collecting and analyzing complex data to assure product quality for semiconductors and integrated circuits. Cyril Charpentier, Database Manager at Galeries Lafayette, will share their story of selecting Exadata Cloud at Customer to gain cloud-like agility and flexibility while improving database performance. He will also discuss how Exadata Cloud at Customer has helped them offload tedious management and monitoring tasks so they can focus on the real needs of the business. By attending this session, you’ll get an idea of how Oracle Database enterprise customers use Oracle Exadata Cloud at Customer as part of their digital transformation strategies. This is a perfect session to learn how these customers harnessed their data and gained the benefits of a public cloud within their own data centers, behind their firewalls, to improve business performance.

Wednesday Session: ZDLRA Architectural Overview and Tips

5. Zero Data Loss Recovery Appliance: Insider’s Guide to Architecture and Practices
Speakers: Jony Safi, MAA Senior Manager, Oracle; Tim Chien, Director of Product Management, Oracle; Stefan Reiners, DBA, METRO-nom GmbH
When: Wednesday, 10/24, 4:45-5:30 pm
Where: Moscone West - Room 3007

What keeps you up at night when it comes to IT challenges? Security and downtime, no doubt. It is incredibly difficult to improve database performance while making sure the infrastructure is immune to security attacks, database downtime, and performance problems. The good news is that we think long and hard about these challenges at Oracle and have a solution to address them. In this session, you will learn how to mitigate the problems of data loss and improve data recovery for your database workloads with Zero Data Loss Recovery Appliance so you avoid problems around downtime and security.
In this session, you will learn how Zero Data Loss Recovery Appliance (ZDLRA) is an industry-innovating, cloud-scale database protection system that hundreds of customers have deployed globally. ZDLRA’s benefits are unparalleled when compared to other backup solutions in the market today, and you will get a chance to learn why this is the case. Jony, Tim, and Stefan will share how this offering eliminates data loss and backup windows, provides database recoverability validation, and ensures real-time monitoring of enterprise-wide data protection. Attend this session to get an insider’s look at the system architecture and hear the latest practices around management, monitoring, high availability, and disaster recovery. This is a perfect session for learning tips and tricks for backing up to and restoring from the recovery appliance. After this session, you’ll be able to walk away and implement these practices at your organization to fulfill database-critical service level agreements.

Other Sessions You’ll Really Want to Check Out:

That’s it! Those are the top five sessions that you don’t want to miss while attending Oracle OpenWorld this year. However, keep in mind that if you want a deeper exploration of Oracle Exadata and Oracle Zero Data Loss Recovery Appliance, you should check out these additional sessions. Here are three more sessions you should look into and use for brownie points.

Maximum Availability Architecture

1. Oracle Exadata: Maximum Availability Best Practices and Recommendations
Speakers: Michael Nowak, MAA Solutions Architect, Oracle; Manish Upadhyay, DBA, FIS Global
When: Tuesday, 10/23, 5:45-6:30 pm
Where: Moscone West - Room 3008

Exadata Technical Deep Dive & Architecture

2. Oracle Exadata: Architecture and Internals Technical Deep Dive
Speakers: Gurmeet Goindi, Technical Product Strategist, Oracle; Kodi Umamageswaran, Vice President, Exadata Development, Oracle
When: Monday, 10/22, 4:45-5:30 pm
Where: Moscone West - Room 3008

Exadata Cloud Service

3. Oracle Database Exadata Cloud Service: From Provisioning to Migration
Speakers: Nitin Vengurlekar, CTO-Architect-Service Delivery-Cloud Evangelist, Viscosity North America; Brian Spendolini, Product Manager, Oracle; Charles Lin, System Database Administrator, Beeline
When: Thursday, 10/25, 10:00-10:45 am
Where: Moscone West - Room 3008


Customers

Oracle Exadata: Deep Engineering Delivers Extreme Performance

In my previous post, "Yes, Database Performance Matters", I talked about the people I met at Collaborate, and how almost everyone believed Oracle Exadata performance is impressive.  However, every now and then I run into someone who agrees Exadata performance is impressive, but also believes they can achieve the same with a build-your-own solution.  On that one, I have to disagree...

There are a great many performance-enhancing features, not just bolted on, but deeply engineered into Exadata.  Some provide larger impact than others, but collectively they are the secret sauce that makes Exadata deliver extreme performance.

Let’s start with its scale-out architecture.  As you add additional compute servers and storage servers, you grow the overall CPU, IO, storage, and network capacity of the machine.  As you grow a machine from the smallest 1/8th rack to the largest multi-rack configuration, performance scales linearly.  Key to scaling compute nodes is Oracle Real Application Clusters (RAC), which allows a single database workload to scale across multiple servers. While RAC is not unique to Exadata, a great deal of performance work has gone into RAC’s communication protocols specifically for Exadata, making Exadata the most efficient platform for scaling RAC across server nodes. Servers are connected using a high-bandwidth, low-latency 40Gb-per-second InfiniBand network.  Exadata runs specialized database networking protocols using Remote Direct Memory Access (RDMA) to take full advantage of this infrastructure, providing much lower latency and higher bandwidth than is possible in a build-your-own environment.  Exadata also understands the importance of the traffic on the network, and can prioritize important packets.  This, of course, has a direct impact on the overall performance of the databases running on the machine.

It’s common knowledge that IO is often the bottleneck in a database system.  Exadata has impressive IO capabilities.  I’m not going to overwhelm you with numbers, but if you are curious, check out the Exadata data sheet for a full set of specifications.  More interesting is how Exadata provides extreme IO.  The most obvious technique is to use plenty of flash memory.  Exadata storage cells can be fully loaded with NVMe flash, providing extreme IOPS and throughput for any database read or write operation.  This flash is placed directly on the PCI bus, not behind bottlenecking storage controllers.  Perhaps surprisingly, most customers do not opt for all-flash storage.  Rather, they choose a lesser (read: less expensive) flash configuration backed by high-capacity HDDs.  The flash provides an intelligent cache, buffering most latency-sensitive IO operations.  The net result is the storage economics of HDDs, with the effective performance of NVMe flash.

You might be wondering how flash can be a differentiator for Exadata.  After all, many vendors sell all-flash arrays, or front-end caches in front of HDDs.  The key is understanding the database workload.  Only Exadata understands the difference between a latency-sensitive write of a commit record to a redo log and an asynchronous database file update.  Exadata knows to cache database blocks that are very likely to be read or updated repeatedly, but not to cache IO from a database backup or large table scan that will never be re-read.
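To make the idea of workload-aware caching concrete, here is a minimal sketch of that kind of policy decision in Python. The I/O type labels and the function are hypothetical illustrations for this post, not Exadata’s actual internals.

# Toy model: cache latency-sensitive, re-read-likely I/O; bypass one-pass I/O.
CACHE_FRIENDLY = {"single_block_read", "index_lookup", "block_update"}
CACHE_BYPASS = {"backup_read", "large_table_scan"}

def should_cache(io_type):
    """Decide whether a block should occupy space in the flash cache."""
    if io_type in CACHE_FRIENDLY:
        return True   # likely to be re-read or re-written soon
    if io_type in CACHE_BYPASS:
        return False  # one-pass I/O would only evict hotter blocks
    return False      # default: do not pollute the cache

for io in ("single_block_read", "large_table_scan", "backup_read"):
    print(io, "->", "cache" if should_cache(io) else "bypass")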
Exadata provides special handling for log writes, using a unique algorithm that reduces the latency of these critical writes and avoids the latency spikes common in other flash solutions.  Exadata can even store cached data in an optimized columnar format, to speed processing for analytical operations that need only access a subset of columns.  These features require the storage server to work in concert with the database server, something no generic storage array can do.

Flash is fast, but there is only so much you can solve with flash.  You still need to get the data from the storage to the database instance, and storage interconnect technologies have not kept up with the rapid rise in the database server’s ability to consume data.  To eliminate the interconnect as a potential bottleneck, Exadata takes advantage of its unique Smart Scan technology to offload data-intensive SQL operations from the database servers directly to the storage servers.  This parallel data filtering and processing dramatically reduces the amount of data that needs to be returned to the database servers, correspondingly increasing the overall effective IO and processing capabilities of the system.  (A back-of-the-envelope model of this effect appears at the end of this post.)  Exadata’s intelligent storage further improves processing by tracking summary information for data stored in regions of each storage cell.  Using this information, the storage cell can determine whether relevant data may even exist in a region of storage, avoiding unnecessarily reading and filtering that data.  These fast in-memory lookups eliminate large numbers of slow HDD IO operations, dramatically speeding database operations.

While you can run the Oracle database on many different platforms, not all features are available on all platforms.  When run on Exadata, Oracle Database supports Hybrid Columnar Compression (HCC), which stores data in an optimized combination of row and columnar methods, yielding the compression benefits of columnar storage while avoiding the performance issues typically associated with it.  While compression reduces disk IO, it traditionally hurts performance, as substantial CPU is consumed by decompression.  Exadata offloads that work to the storage cells, and once you account for the savings in IO, most analytic workloads run faster with HCC than without.

Perhaps there is no better testimonial to Exadata’s performance than real-world examples.  Four of the top five banks, telcos, and retailers run on Exadata. For example, Target consolidated databases from over 350 systems onto Exadata.  They now enjoy a 300% performance improvement and 5x faster batch and SQL processing.  This has enabled them to extend their ship-from-store option for Target.com to over 1,000 stores, allowing customers to get their orders sooner than before.

I’ve really just breezed over 10 years of performance advancements.  Those interested can find more detail in the Exadata data sheet.  Hopefully, you see it would be impossible to get the same performance from a self-built Exadata or similar system.  In the case of database performance, only deep engineering can deliver extreme performance. This is the third blog in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata.  Our next post, "Oracle Exadata Availability," will focus on high availability.
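One addendum, as promised above: a back-of-the-envelope model, in Python, of why filtering at the storage tier reduces interconnect traffic. The numbers are illustrative assumptions, not measured Exadata results.

# Toy model of storage-side filtering: only qualifying data crosses the wire.
scanned_tb = 10.0   # data scanned at the storage tier (illustrative)
selectivity = 0.02  # fraction of scanned data the query actually needs

without_offload_tb = scanned_tb             # everything ships to the database servers
with_offload_tb = scanned_tb * selectivity  # only filtered results ship

print(f"shipped without offload: {without_offload_tb:.2f} TB")
print(f"shipped with offload:    {with_offload_tb:.2f} TB")
print(f"interconnect traffic reduced {without_offload_tb / with_offload_tb:.0f}x")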
About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Implementing a Private Cloud with Oracle SuperCluster

Oracle SuperCluster is an integrated server, storage, networking, and software platform that is typically used either for full-stack application deployments or for consolidation of applications or databases. Because it incorporates Oracle’s unique and innovative Exadata Storage, Oracle SuperCluster delivers unrivaled database performance. The platform also hosts the huge range of Oracle and third-party applications supported on Oracle’s proven, robust, and secure Oracle Solaris operating environment. Virtualization is a particular strength of Oracle SuperCluster, with Oracle VM Server for SPARC serving up high-performance virtual machines, known as I/O domains, with zero or near-zero virtualization overhead. Further, an additional layer of highly optimized nested virtualization is offered in the form of Oracle Solaris Zones. All of these virtualization capabilities come at no additional license cost. For more information about virtualization on Oracle SuperCluster, refer to the recent blog Is "Zero-Overhead Virtualization" Just Hype? The platform also utilizes a built-in, high-throughput, low-latency InfiniBand fabric for extreme network efficiency within the rack. As a result, Oracle SuperCluster customers enjoy outstanding end-to-end database and application performance, along with the simplicity and supportability featured on all of Oracle’s engineered systems. Can these benefits be realized in a cloud environment, though? Oracle SuperCluster is not available in Oracle’s Cloud Infrastructure, but private cloud deployments have been implemented by a number of Oracle SuperCluster customers, and Oracle Managed Cloud Services also hosts many Oracle SuperCluster racks in its data centers worldwide. In this blog we will consider the building blocks provided by Oracle to simplify deployments of this type on Oracle SuperCluster.

An Introduction to Infrastructure-as-a-Service (IaaS)

In the past, provisioning new compute environments consumed considerable time and effort. All of that has changed with Infrastructure-as-a-Service capabilities in the cloud. Some of the key attractions of cloud environments for provisioning include:

Improved time to value. The period of time that usually elapses before value is realized from a deployment is considerably reduced. Highly capable virtual machines are typically deployed and ready to use almost immediately.

Greater simplicity. Specialized IT skills are no longer required to deploy a virtual machine that encompasses a complete working set of compute, storage, and network resources.

Better scalability. Provisioning ten virtual machines requires little more effort than provisioning a single virtual machine.

IaaS environments typically include the following characteristics:

User interfaces are simple and intuitive. Actions are typically either achieved with a few clicks from a browser user interface (BUI), or automated using a REST interface.

Virtual machines can be created without sysadmin intervention and without the need to understand the underlying hardware, software, or network architecture.

Newly created virtual machines boot with a fully configured operating system, active networks, and pre-provisioned storage.

Virtual machine components are drawn from pools or buckets of resources. Component pools typically deliver a range of resources including CPU, memory, network interfaces, storage resources, IP addresses, and virtual local area networks (VLANs).
Virtual machines can be resized or migrated from one physical server to another as the need arises, without manual sysadmin intervention.

Where costs need to be charged to an end user, the actual resources allocated can be used as the basis for charging. Resource usage can be accounted to specific end users, and optionally tracked for billing purposes. Resource usage may also be optionally restricted per user.

The end user is responsible for managing and patching operating systems and applications, but not for managing the underlying cloud infrastructure.

Oracle SuperCluster IaaS

The virtual machine lifecycle on Oracle SuperCluster is orchestrated by the SuperCluster Virtual Assistant (SVA), a browser-based tool that supports the creation, modification, and deletion of domain-based virtual machines, known as I/O domains. Functionality has progressively been added to this tool over the years, and it has now become a single solution for dynamically deploying and managing virtual machines on SuperCluster, including both I/O domains and database-oriented Oracle Solaris Zones. SVA is a robust tool that is widely used by SuperCluster customers across a range of different environments. The current SuperCluster Virtual Assistant v2.6 release offers a set of capabilities that deliver benefits and features consistent with those outlined in the IaaS introduction above. As an alternative to SVA’s intuitive browser user interface, SVA’s IaaS functionality on Oracle SuperCluster can be managed from other orchestration software using the provided REST interfaces. SVA REST APIs are self-documenting and therefore easier to consume, thanks to the included Swagger UI.
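To give a flavor of what orchestration against such a REST interface might look like, here is a minimal Python sketch. The base URL, resource name, and payload fields below are hypothetical placeholders; the actual endpoints and request schema should be taken from the SVA Swagger UI on your system.

import requests  # third-party HTTP client library

SVA_API = "https://sva.example.com/api/v1"  # hypothetical base URL

# Hypothetical request body for creating an I/O domain; note that SVA
# allocates compute at a granularity of one core and 16GB of memory.
payload = {
    "name": "appdom01",
    "cores": 4,
    "memory_gb": 64,
    "network_recipe": "dual-10gbe",
}

resp = requests.post(f"{SVA_API}/iodomains", json=payload,
                     auth=("svauser", "secret"), timeout=60)
resp.raise_for_status()
print("I/O domain created:", resp.json())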
SuperCluster Virtual Assistant in Action

The following screenshot shows an initial window from the tool listing I/O domains in a range of different states. Both physical domains and I/O domains (virtual machines) are managed, along with their component resources. New I/O domains can be created, and existing I/O domains modified or deleted, with additional cores and memory able to be added dynamically to live I/O domains. Database Zones based on Oracle Solaris can also be managed from the tool, and a future SVA release will allow Oracle Solaris Zones of all types to be managed. I/O domains can be frozen at any time to release their resources, and thawed (reactivated) whenever required. As well as providing a cold migration capability, the freeze/thaw capability allows resources used by non-critical I/O domains to be temporarily freed during peak periods for use by other mission-critical applications. Resources are assigned automatically from component pools that manage CPU, memory, network interfaces, IP addresses, and storage resources. VLANs and other network properties can be pre-defined, allowing access to DNS, NTP, and other services. An integrated resource allocation engine ensures that cores, memory, and network interfaces are optimally assigned for performance and effectiveness. Compute resources are allocated to I/O domains at a granularity of one core and 16GB of memory, or using pre-defined recipes. Network recipes can also be set up to simplify the allocation of network resources, including simultaneous redundant connectivity to different physical networks thanks to quad-port 10GbE adapters. Recipes are illustrated in the screenshot below. A number of SVA policies can be set according to customer requirements. One set of policies relates to users. User roles are supported, allowing both privileged and non-privileged users to be created. A single SVA user can consume all resources. Alternatively, multiple SVA users can be created, with resource usage tracked by user. Resources can be unconstrained, allowing a user to consume any available resource, or limits can be set to ensure that no user consumes more than a pre-defined allowance. The screenshot below illustrates an early step in the process of creating an I/O domain. A comprehensive Health Monitor examines the state of SVA services to ensure that the tool and its resources remain in a consistent and healthy state. SVA functionality continues to be extended, with a number of new features currently under development. Oracle SuperCluster M8 and Oracle SuperCluster M7 customers are typically able to leverage new features simply by installing the latest quarterly patch bundle, which also upgrades the SVA version.

Enjoying the Benefits

Oracle SuperCluster customers can realize cloud benefits in their own data centers, taking advantage of improved time to value, greater simplicity, and better scalability, thanks to the Infrastructure-as-a-Service capabilities provided by the SuperCluster Virtual Assistant. Database-as-a-Service (DBaaS) capabilities can also be instantiated on Oracle SuperCluster using Oracle Enterprise Manager. The end result is that Oracle SuperCluster combines the proven benefits of Oracle engineered systems with IaaS and DBaaS capabilities, allowing customers to reduce complexity and increase return on investment.

About the Author

Allan Packer is a Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.


Customers

Prescription for Long-Term Health: ODA Is Just What the Doctor Ordered

Healthcare providers face so many complex challenges, from a shortage of clinicians to serve an aging population that requires more care, to changing regulations, to evolving patient treatment and payment models. At the same time, these providers struggle to manage the ever-increasing amount of data being generated by electronic health records (EHRs). How can they focus on providing the best possible patient care while keeping costs tightly under control?

Data Drives the Modern Healthcare Organization

One of the most important steps is to manage the data that is the heartbeat of their organization. Data makes it possible to provide quality patient care, streamline operations, manage supply inventories, and build sound long-term organizational strategies, among other things. Perhaps today’s most critical healthcare challenge—outside of the frontline clinician-patient encounter—is efficiently, securely, and affordably managing data. Clinicians need to be able to access patient data in real time, around the clock. In acute-care situations, they can’t afford for systems to go down, or to lose data. Administrators need to ensure the security of patient information to protect privacy, meet regulations, and avoid fines and bad PR. Materials management requires systems to monitor critical supplies and keep them stocked at optimal levels to ensure availability, prevent waste, and reduce costs. Executives need real-time analytics to make day-to-day decisions, plan for the long term, and ensure patients continue to receive the best possible care while the industry experiences seemingly constant change and uncertainty. How do you implement innovative and life-saving procedures and technology, hire the best talent, and expand services without going bankrupt? It all comes down to balancing patient care with controlling costs.

Technology That Performs the Perfect Balancing Act

Healthcare organizations need to manage enormous quantities of data, but they don’t always have the budget for top-of-the-line database solutions. Nor do they always have the resources required to manage these systems day in and day out. For many of these midsize healthcare providers, Oracle Database Appliance offers a realistic, affordable option that optimizes performance for the Oracle Database. The completely integrated package of software, compute, networking, storage, and backup makes setup simple and fast. At the same time, it delivers the performance and the fully redundant high availability so critical to healthcare environments. And it’s cloud-ready, so organizations can migrate to the cloud seamlessly. With all the uncertainty healthcare organizations operate under today, they need IT solutions that can adapt and change as their needs change. Oracle Database Appliance was designed with the flexibility to meet organizations’ changing database requirements: compute capacity can be scaled up on demand to match workload growth.

Protecting Patient Data Must Take Top Priority

Because patient data is so critical to healthcare organizations, they must have reliable, secure backup. Oracle Database Appliance also has an option that makes backup just as simple as system deployment and management. Healthcare organizations can choose between backing up to a local Oracle Database Appliance or to the Oracle Cloud if they don’t want to manually manage backups or maintain backup systems. In healthcare, protecting patient data has to be a top priority.
The backup feature of Oracle Database Appliance offers end-to-end encryption and is specifically designed to include the archive capabilities needed to ensure compliance with the healthcare industry’s stringent regulations. One Brazilian healthcare organization ended a two-year search for a solution when it found Oracle Database Appliance.

Santa Rosa Hospital Takes Good Care of Its Patients—and Its Data

Santa Rosa Hospital in Cuiaba, Brazil, needed a database system that could scale to match its rapid growth in patient procedures—and the accompanying growth in the hospital’s data. Non-negotiable capabilities for a solution included improved performance, uninterrupted access to the database 24/7, a safe and efficient backup process, and expandable storage capacity. According to IT Manager Andre Carrion, Santa Rosa searched for two years for a solution but couldn’t find one that fit its budget, until it found Oracle Database Appliance with cloud backup. The results were impressive:

Ensured full access to the database even when a server crashed, and increased patient data security. Systems now run on the virtual server in the cloud while the physical server is re-established.
Reduced backup time from 24 hours to 2 hours.
Reduced time to retrieve patient information from as much as 3 minutes to 2 seconds.
Reduced average ER consultation time from 15 minutes to 6 minutes.
Replaced 10 servers with 1 server.

As a bonus, everything was installed and ready to go in just a week. Oracle Database Appliance with easy cloud backup was just what the doctor ordered to support Santa Rosa’s growing business without compromising the security of sensitive patient information or breaking the budget.


Engineered Systems

Improving ROI to Outweigh Potential Upgrade Disruption

Today's guest post is by Allan Packer, Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle, with a focus on Oracle SuperCluster.

Hardware upgrades have always been supported on Oracle SuperCluster, but how flexible are they? And will any benefits be outweighed by the disruption to service when a production system is upgraded? Change is an ever-present reality for any enterprise. And with change comes an opportunity cost, unless IT infrastructure is flexible enough to satisfy the evolving demand for resources. From the very first release of Oracle SuperCluster, a key attraction of the platform has been the ability to upgrade the hardware as business needs change. Modifying hardware can be very disruptive. Hardware configuration changes create a ripple effect that penetrates deep into the software layers of a system. For this reason, an important milestone in the upgrade landscape for both Oracle SuperCluster M8 and Oracle SuperCluster M7 has been the development of special-purpose tools to automate the upgrade steps. These tools are able to reduce the downtime associated with an upgrade, and also minimize the opportunity for misconfiguration during what can be a complex operation.

CPU upgrades

Compute resources on both Oracle SuperCluster M8 and Oracle SuperCluster M7 are delivered in the form of CPU, Memory, and I/O Unit (CMIOU) boards. Each SPARC M8 and SPARC M7 chassis supports up to eight of these boards, organized into two electrically isolated Physical Domains (PDoms) hosting four boards each. Each CMIOU board includes:

One processor with 32 cores—a SPARC M8 processor for Oracle SuperCluster M8, or a SPARC M7 processor for Oracle SuperCluster M7. Each core delivers 8 CPU hardware threads, so each processor presents 256 CPUs to the operating system.

Sixteen memory slots, fully populated with DIMMs. Oracle SuperCluster M8 uses 64GB DIMMs, for a total of 1TB of memory. Oracle SuperCluster M7 uses 32GB DIMMs, for a total of 512GB of memory.

Three PCIe slots. One slot hosts an InfiniBand HCA, and another hosts a 10GbE NIC. In the case of Oracle SuperCluster M8, the 10GbE NIC is a quad-port device; Oracle SuperCluster M7 provides a dual-port NIC. The third PCIe slot is empty on all except the first CMIOU in each PDom, where it hosts a quad-port GbE NIC. Optional Fiber Channel HBAs can be placed in empty slots.

Adding CMIOU boards

CMIOU boards can be added to a PDom whenever more CPU and/or memory resource is required. Up to four CMIOU boards can be placed in each PDom. The diagram below illustrates a possible sequence of upgrades in a SPARC M8-8 chassis, from a quarter-populated configuration with two CMIOUs (one per PDom), to a half-populated configuration with four CMIOUs, to a fully populated configuration with eight CMIOUs. PDoms can be populated with as many CMIOUs as required—there is no requirement to use the same number of CMIOU boards in both PDoms on the same chassis. The illustration below shows two SPARC M8-8 chassis with different numbers of CMIOUs in each PDom.

Adding a second chassis

Many Oracle SuperCluster installations are initially configured with a single compute chassis. Every SPARC M8-8 and SPARC M7-8 chassis shipped with Oracle SuperCluster includes two electrically isolated PDoms, so highly available configurations begin with a single chassis.
When the need for additional compute resources exceeds the capacity of a single chassis, a customer can add a second chassis with one or more CMIOUs, thereby allowing total compute resources to be increased by up to two times. Since each CMIOU board in the second chassis comes equipped with its own InfiniBand HCA, additional resources immediately become available on the InfiniBand fabric after the upgrade. Note that both SPARC M8-8 and SPARC M7-8 chassis consume ten rack units. Provided no more than six Exadata Storage Servers have been added to an Oracle SuperCluster rack, sufficient space will be available to add a second chassis.

Memory upgrades

Where memory resources have become constrained, the simplest way to increase memory capacity is to add one or more additional CMIOU boards. Such upgrades come with the extra benefit of additional CPU resources as well as greater I/O connectivity. Note that exchanging existing memory DIMMs for higher-density DIMMs is not supported. Adding additional CMIOUs achieves a similar effect in a more cost-effective manner: the cost of an entire CMIOU populated with lower-density DIMMs, a SPARC processor, an InfiniBand HCA, and a 10GbE NIC compares favorably with the cost of the higher-density DIMMs alone.

Exadata storage upgrades

Exadata Storage Servers can be added to existing Oracle SuperCluster configurations. Even early Oracle SuperCluster platforms can benefit from the addition of current-model Exadata Storage Servers. Customers adding Exadata Storage quickly discover that both the performance and the available capacity of current Exadata Storage Servers far outstrip those of older models. Best practice information is available for such deployments, and should be followed to ensure effective integration of different storage server models into an existing Exadata Storage environment. Note that Oracle SuperCluster racks can host eleven Exadata Storage Servers with one SPARC M8-8 or SPARC M7-8 compute chassis, or six Exadata Storage Servers with two compute chassis. The graphic below illustrates an Oracle SuperCluster M8 rack before and after an upgrade that adds a second M8-8 chassis and three additional Exadata Storage Servers.

External storage upgrades

General-purpose storage capacity can be boosted by adding a suitably configured ZFS Storage Appliance that includes InfiniBand HCAs. This storage can then be made available via the InfiniBand fabric and used for application storage, backups, and other purposes.

Implications for domain configurations

Additional compute resources can be assigned in a number of different ways:

Creating new root domains

Root domains provide the resources needed by I/O domains, which can be created on demand using the SuperCluster Virtual Assistant. I/O domains provide a flexible and secure form of virtualization at the domain level. Although they share I/O devices using efficient SR-IOV, each I/O domain has its own dedicated CPU and memory resources. Oracle Solaris Zones are also supported in I/O domains, providing nested virtualization.
A one-to-one relationship exists between CMIOU boards and root domains, which means that a root domain can be created for each new CMIOU that is added. Each root domain supports up to sixteen additional I/O domains.
Note that creating new I/O domains is not the only way of consuming the extra resources. CPU cores and memory provided by an additional CMIOU board can also be used to increase resources in existing I/O domains.

Creating new dedicated domains
Dedicated domains provide CPU, memory, and I/O resources—specifically an InfiniBand HCA and a 10GbE NIC—that are not shared with other domains (and are therefore dedicated). Virtualization within dedicated domains is provided by Oracle Solaris Zones.
New CMIOU boards can be used to create new dedicated domains. Dedicated domains can be created from one or more CMIOU boards. If two CMIOU boards are added, for example, they could be used together to create a single dedicated domain, or they could be used individually to create two dedicated domains.
When multiple dedicated domains have been created in a PDom, CPU and memory resources do not need to be split evenly between the dedicated domains. These resources can be assigned to dedicated domains at a granularity of one core and 16GB of memory.
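As a quick worked example of this granularity rule, here is a minimal sketch in Python, using the board capacities quoted earlier in this post (32 cores and 1TB of memory per Oracle SuperCluster M8 CMIOU), that checks whether a proposed split of a PDom across dedicated domains is valid:

# Each M8 CMIOU board: 32 cores, 1024GB of memory (figures from this post).
CORES_PER_CMIOU, GB_PER_CMIOU = 32, 1024

def valid_split(cmious_in_pdom, domains):
    """domains is a list of (cores, memory_gb) requests for dedicated
    domains. Totals must fit the PDom, with memory in 16GB increments."""
    total_cores = CORES_PER_CMIOU * cmious_in_pdom
    total_gb = GB_PER_CMIOU * cmious_in_pdom
    ok_units = all(c >= 1 and gb % 16 == 0 for c, gb in domains)
    fits = (sum(c for c, _ in domains) <= total_cores and
            sum(gb for _, gb in domains) <= total_gb)
    return ok_units and fits

# A fully populated PDom (4 CMIOUs = 128 cores, 4096GB) split unevenly:
print(valid_split(4, [(96, 3072), (24, 768), (8, 256)]))  # True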
The largest possible dedicated domain on both Oracle SuperCluster M8 and Oracle SuperCluster M7 contains four CMIOU boards.

Expanding existing dedicated domains
A new CMIOU board can be used to boost the resources of an existing dedicated domain, up to the maximum capacity of four CMIOU boards per dedicated domain. The available upgrade options will depend on the specifics of an existing domain configuration as well as the number of CMIOU boards being added. Customers should consult their Oracle account team to explore possible options. I talk more about Oracle domains in my previous blog, Is "Zero-Overhead Virtualization" Just Hype?

What is the required downtime for hardware upgrades?

Two deployment approaches are available for hardware upgrades:

Rolling upgrades
Rolling upgrades allow service outages associated with a hardware upgrade to be minimized or eliminated, because only one PDom is affected at a time. Provided the Oracle SuperCluster configuration has been set up to be highly available, services need not be affected during a rolling upgrade. High availability can be achieved using clustering software, such as Oracle Real Application Clusters (RAC) for database instances and Oracle Solaris Cluster for applications.
The downside of rolling upgrades is that the overall period of disruption is greater. The reason is that PDoms are only upgraded one at a time, so the upgrade process takes longer.
 Non-rolling upgrades
The benefit of non-rolling upgrades is that the overall period of disruption is shorter, since PDoms are upgraded in parallel. The downside is that all services become unavailable during the upgrade, since a full system outage is required. Before the hardware upgrade process can begin, a suitable Quarterly Full Stack Download Patch (QFSDP) must be applied to the existing system, and backups taken with the osc-config-backup tool. For information about the expected period of time required to complete rolling or non-rolling upgrades for a particular configuration, the customer’s Oracle account team should be consulted.

Hardware upgrades allow the available resources of Oracle SuperCluster to be extended as required to satisfy changing business requirements. Upgrades of varying complexity can be handled smoothly while minimizing downtime, thanks to tool-based automation of the upgrade process. The end result is that customers are able to realize the benefits of hardware upgrades without extended periods of disruption to production systems.

About the Author

Allan Packer is a Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.


Infrastructure

June Database IT Trends in Review

This summer has been an exciting one for converged infrastructure, with lots of announcements this past month! In case you missed it...

Oracle debuted "Oracle Soar" on June 5, an automated enterprise cloud application upgrade offering that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30%. Larry Ellison discussed the details of Oracle Soar, which includes a discovery assessment, a process analyzer, automated data and configuration migration utilities, and rapid integration tools. The automated process is powered by the True Cloud Method, Oracle’s proprietary approach to supporting customers throughout the journey to the cloud.

According to Wikibon, do-it-yourself x86 servers cost 57% more than Oracle Database Appliance over 3 years. The Wikibon research paper also shows that the above-the-line business benefits of improved time-to-value from a hyperconverged full-stack appliance are over 5x greater than the IT operational cost benefits. Wikibon also argues that the traditional enterprise strategy of building and maintaining low-cost x86 white-box piece-part infrastructure is unsustainable in a modern hybrid cloud world.

The experts talk converged infrastructure and AI

We invited Neil Ward-Dutton, one of Europe's most experienced and high-profile IT industry analysts, to discuss how robotic process automation (RPA) and artificial intelligence (AI) have the potential to transform not just routine administrative business processes but also those that have traditionally depended on skilled workers. Read the interview here.

Top fintech influencer and founder of Unconventional Ventures, Theodora Lau, joined us to discuss how AI is transforming banking. To process all the data that modern enterprises create, such as financial data, at speed and scale, enterprises need better infrastructure to support it. Learn more about the interview here.

Internationally recognized analyst and founder of CXOTalk, Michael Krigsman, joined us on the blog to discuss the positive influence of digital disruption. The way we approach business today, he says, is being turned on its head by new demands from internal and external customers. We’re at a crossroads where innovative technologies and new business models are overtaking traditional approaches, creating significant pressure and challenges for tech infrastructure and the people who manage it. Read the interview here.

The future of banking

Srinivasan Ayyamoni, transformation consulting lead at Cognizant focusing on the banking industry, discusses the relentless cycle of innovation, rising consumer expectations, and business disruptions that have created major challenges as well as lucrative opportunities for the banking industry today. Read more here.

Chetan Shroff, Oracle Commercial Leader at Cognizant, discusses why banks must look carefully at their IT infrastructure before they can benefit from new, exciting tech innovations.

Don’t Miss Future Happenings: subscribe to the Cloud-Ready Infrastructure blog today!


Engineered Systems

Oracle Exadata: Ten Years of Innovation

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle.

I recently read some interesting blog posts on the driving forces behind many of today’s IT innovations.  One of the common themes was the realization that sometimes purpose-built engineering is better at solving the toughest problems.  Given that 2018 marks the 10-year anniversary of the introduction of Oracle’s first engineered system, Oracle Exadata, I started thinking about many of the drivers that led to the development of this system in the first place.  Perhaps not surprisingly, I realized Oracle introduced Exadata for the same reason driving other innovations: you can't reliably push the limits of technology using generalized "off-the-shelf" components.

Back in the mid-2000s, the conventional wisdom was that the best way to run mission-critical databases was to use a best-of-breed approach, stitching together the best servers, operating systems, infrastructure software, and databases to build a hand-crafted solution to meet the most demanding application requirements.  Every mission-critical deployment was a challenge in those days, as we struggled to overcome hardware, firmware, and software incompatibilities in the various components in the stack.  Beyond stability, we found it difficult to meet the needs of a new class of extreme workloads that exceeded the performance envelopes of the various components.  What we found was that we were not realizing the true potential of the components, as we were limited by the traditional boundaries of dedicated compute servers, dumb storage, and general-purpose networking.

We revisited the problems we were trying to solve:

Performance: how to optimize the performance of each component in the stack and eliminate bottlenecks when processing our specific workload.
Availability: how to provide end-to-end availability, from the application through the networking and storage layers.
Security: how to protect end-user data from a variety of threats both internal and external to the system.
Manageability: how to reduce the management burden to operate these systems.
Scalability: how to grow the system as customers' data processing demands ballooned.
Economics: how to leverage the economics of commodity components while exceeding the experience offered by specialized mission-critical components.

Reviewing these objectives in light of the limits of best-of-breed technology led to a simple solution: extend the engineering beyond the individual components and across the stack.  In other words, engineer a purpose-built solution to provide extreme database services.  In 2008, the result of this effort, Oracle Exadata, was launched.

The mid-2000s saw explosive growth in compute power, as Intel continually launched new CPUs with greater and greater numbers of cores.  But databases are I/O-hungry beasts, and I/O was stuck in the slow lane.  Organizations were deploying more and more applications on larger and larger SANs, connecting the servers to the storage with shared-bandwidth pipes that were fast becoming a bottleneck for any I/O-intensive application.  The economics and complexity of SANs made it difficult to provide databases the bandwidth they required, and the result was lots of compute power starved for data.  The burning question of the day was, “how can we more effectively get data from the storage array to the compute server?”  The answer, in hindsight, was quite simple, although quite difficult to engineer.
If you can't bring the data to the compute, bring the compute to the data. The difficulty was that you couldn't do this with a commercial storage array—you needed a purpose-built storage server that could work cooperatively with the database to process vast amounts of data, offloading processing to the storage servers and minimizing the demands on the storage network. From that insight, Exadata was born. Over the years, we've built upon this engineered platform, refining the architecture of the system to improve performance, availability, security, manageability, and scalability, all while using the latest technology and components and minimizing overall system cost.

Innovations Exadata has brought to market:

Performance: Pushing work from the compute nodes to the storage nodes spreads the workload across the entire system while eliminating I/O bottlenecks (see the sketch after this list); intelligent use of flash in the storage system provides flash-based performance with hard-disk economics and capacities. The Exadata X7-2 server can scan 350GB/sec, 9x faster than a system using an all-flash storage array.

Availability: Proven HA configurations based on Real Application Clusters running on redundant hardware components ensure maximum availability; intelligent software identifies faults throughout the system and reacts to minimize or mask application impact. Customers routinely run Exadata solutions in 24/7 mission-critical environments with 99.999% availability requirements.

Security: Full-stack patching and locked-down best-practice security profiles minimize attack vulnerabilities. Build PCI DSS compliant systems or easily meet DoD security guidelines via Oracle-provided STIG hardening tools.

Manageability: Integrated systems management and tools specifically designed for Exadata simplify the management of the database system. New fleet automation can update multiple systems in parallel, enabling customers to update hundreds of racks in a weekend.

Scalability: Modular building blocks connected by a high-speed, low-latency InfiniBand fabric enable a small entry-level configuration to scale to support the largest workloads. Exadata is the New York Stock Exchange's primary transactional database platform, supporting roughly one billion transactions per day.

Economics: Building from industry-standard components to leverage technology innovations provides industry-leading price performance. Exadata's unique architecture provides better-than-all-flash performance at HDD capacity and cost.
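To make the performance point concrete, here is a toy Python sketch of the bandwidth argument behind storage offload. This is illustrative only, not Oracle code, and every number in it is invented: it simply shows why filtering rows at the storage tier, instead of shipping every block to the database server, changes the traffic on the storage network by orders of magnitude.

    # Toy model of predicate offload: filter where the data lives.
    # All numbers are hypothetical; this only illustrates the bandwidth argument.

    ROW_BYTES = 200          # average row size
    TOTAL_ROWS = 50_000_000  # rows in the scanned table
    SELECTIVITY = 0.001      # fraction of rows matching the WHERE clause

    def traditional_san_scan() -> int:
        """Dumb storage: every row is shipped to the database server."""
        return TOTAL_ROWS * ROW_BYTES

    def offloaded_scan() -> int:
        """Smart storage: rows are filtered where they live; only matches move."""
        return int(TOTAL_ROWS * SELECTIVITY) * ROW_BYTES

    ship_all = traditional_san_scan()
    ship_matches = offloaded_scan()
    print(f"shipped without offload: {ship_all / 1e9:.1f} GB")      # 10.0 GB
    print(f"shipped with offload:    {ship_matches / 1e6:.1f} MB")  # 10.0 MB
    print(f"reduction factor:        {ship_all // ship_matches}x")  # 1000x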
Customers have aggressively adopted Exadata to host their most demanding and mission-critical database workloads. Chances are you indirectly touch an Exadata every day—by visiting an ATM, buying groceries, reserving an airline ticket, paying a bill, or just browsing the internet. Four of the top five banks, telcos, and retailers run Exadata. Fidelity Investments moved to Exadata and improved reporting performance by 42x. Deutsche Bank shaved 20% off its database costs while doubling performance. Starbucks leveraged Exadata's sophisticated Hybrid Columnar Compression technology to analyze point-of-sale data while saving over 70% on storage requirements. And after adopting Exadata, Korea Electric Power processes load information from its power substations 100x faster, allowing it to analyze load information in real time and ensure the lights stay on.

The funny thing about technology is you must keep innovating. Given today's shift to the cloud, all the great stuff we've done for Exadata could soon be irrelevant—or will it? The characteristics and technology of Exadata have been successful for a reason: that's what it takes to run enterprise-class applications! The cloud doesn't change that. Just as people don't run their mission-critical business databases on virtual machines in the on-premises world (because they can't), customers migrating to the cloud will not magically be able to run those same mission-critical databases in VMs hosted in the cloud. They need a platform that meets their performance, availability, security, manageability, and scalability requirements at a reasonable cost. Our customers have told us they want to migrate to the cloud, but they don't want to forgo the benefits they realize running Exadata on-premises. For these customers, we now offer Exadata in the cloud. Customers get a dedicated Exadata system, with all the characteristics they've come to appreciate, hosted in the cloud with all the benefits of a cloud deployment: pay-as-you-go pricing, simplified management, self-service, and on-demand elasticity, paid for from a predictable operational expense budget with no customer-owned datacenter required.

However, not everyone is ready to move to the cloud. While the economics and elasticity are extremely attractive to many customers, we've repeatedly found customers unwilling to put their valuable data outside their firewalls. It may be because of regulatory issues, privacy issues, data center availability, or just plain conservative tendencies towards IT—they are not able or willing to move to the cloud. For these customers, we offer Exadata Cloud at Customer, an offering that puts the Exadata Cloud Service in your data center, delivering cloud economics with on-premises control.

So, it's been a wild 10 years, and we are continuing to look for ways to innovate with Exadata. Whether you need an on-premises database or a cloud solution, or are looking to bridge the two worlds with Cloud at Customer, Exadata remains the premier choice for running databases. Look for continued innovation as we adopt new fundamental technologies, such as lower-cost flash storage and non-volatile memory, that promise to revolutionize the database landscape. Exadata will continue as our flagship database platform, leveraging these new technologies and making their benefits available to you, regardless of where you want to run your databases.

I hope this post gives you a sense of the history behind Exadata, and of some of the dramatic shifts that will affect your databases in the future. This is the first in a series of blog posts that will examine these technologies. Next, we will look more closely at performance: why it is critical in a database server, and how we've engineered Exadata to provide the best performance for all types of database workloads.

Stay tuned for more:

Oracle Exadata: Ten Years of Innovation
Yes, Database Performance Matters
Deep Engineering Delivers Extreme Performance
Availability: Why Failover Is Not Good Enough
Security: Can You Trust Yourself?
Manageability: Labor is Not That Cheap
Scalability: Plan for Success, Not Failure
Oracle Exadata Economics: The Real Total Cost of Ownership
Oracle Exadata Cloud Service: Bring Your Business to the Cloud
Oracle Exadata Cloud at Customer: Bring the Cloud to Your Business

About the Author
Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Is "Zero-Overhead Virtualization" Just Hype?

At its first release—Oracle SuperCluster T4-4—Oracle claimed zero-overhead virtualization for the domain technology used on Oracle SuperCluster. Was this claim just marketing hype, or was it real? And is the claim still made for current SuperCluster platform releases?

To answer these questions we need to examine the virtual machine implementation used on SuperCluster: Oracle VM Server for SPARC, also known as Logical Domains (LDoms for short). Oracle VM Server for SPARC is a Type 1 hypervisor that is implemented in firmware on all modern SPARC systems. The virtual machines created as a result are referred to as domains.

The diagram below illustrates a typical industry approach to virtualization. In this case, available hardware resources are shared across virtual machines, with the allocation of resources managed by a hypervisor implemented as a software abstraction layer. This approach delivers flexibility, but at the cost of weaker isolation and increased virtualization overheads. Optimal performance is delivered only by "bare metal" configurations that eliminate the hypervisor (and therefore do not support virtualization).

By contrast, Oracle VM Server for SPARC has a number of unique characteristics:

SPARC systems always use the SPARC firmware-based hypervisor, whether or not domains have been configured—there is no "bare metal" configuration on SPARC that eliminates the hypervisor. For this reason, the concept of bare metal that applies to most other platforms has no meaning on SPARC systems. An important implication is that no additional virtualization layer is required on SPARC systems when configuring domains. That means no additional performance overheads are introduced, either.

The SPARC hypervisor partitions CPU and memory resources rather than virtualizing them. That approach is possible because CPU and memory resources are never shared by SPARC domains. Each hardware CPU strand is uniquely assigned to one and only one domain; in other words, each virtual CPU in a domain is backed by a dedicated hardware strand. Further, each memory block is uniquely assigned to one and only one domain (a toy model of this assignment rule follows the list below). This approach has a number of important implications:

Since each domain has its own dedicated CPU resources, no virtualization layer is needed to schedule CPU resources in a domain-based virtual machine. The hardware does the scheduling directly. As a result, the scheduling overheads inherent in most virtualization implementations simply don't apply to SPARC systems.

Memory resources in each domain are also dedicated to that domain. That means domain memory access is not subject to an additional layer of virtualization, either. Memory access operates in the same way on all SPARC systems, whether or not they use domains.

Over-provisioning does not apply to either CPU or memory with SPARC domains.
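Purely as a thought experiment, the assignment rule described above can be captured in a few lines of Python. This is not an Oracle tool or API, and the domain names and strand counts are invented; the sketch just shows why no scheduling layer is needed when every hardware strand has exactly one owner.

    # Illustrative model of LDom-style CPU partitioning.
    # Invariant: each hardware strand is owned by exactly one domain.

    from typing import Dict, Set

    class PartitionError(Exception):
        pass

    def assign(cpu_map: Dict[str, Set[int]], domain: str, strands: Set[int]) -> None:
        """Dedicate hardware strands to a domain, rejecting any sharing."""
        for owner, owned in cpu_map.items():
            overlap = owned & strands
            if overlap:
                raise PartitionError(f"strands {sorted(overlap)} already owned by {owner}")
        cpu_map.setdefault(domain, set()).update(strands)

    # Two domains on a hypothetical 16-strand server:
    cpu_map: Dict[str, Set[int]] = {}
    assign(cpu_map, "primary", set(range(0, 8)))   # strands 0-7, dedicated
    assign(cpu_map, "domain2", set(range(8, 16)))  # strands 8-15, dedicated

    # Over-provisioning is impossible by construction:
    try:
        assign(cpu_map, "domain3", {4, 5})
    except PartitionError as err:
        print("rejected:", err)  # strands [4, 5] already owned by primary

Because ownership is exclusive, there is never a queue of virtual CPUs competing for a physical strand, which is exactly why the scheduling overheads of conventional hypervisors do not arise.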
We have seen that access to CPU and memory resources on SPARC systems used in Oracle SuperCluster does not impose overheads, both because these resources are dedicated to each domain and because the same highly efficient SPARC hypervisor is always in use, whether or not domains are configured.

We've examined CPU and memory. What about I/O? I/O virtualization is a major source of performance overhead in most virtualization implementations. I/O virtualization with Oracle VM Server for SPARC takes one of three forms:

Partitioning at PCIe slot granularity.
In this case one or more PCIe slots, along with any PCIe devices hosted in them, are assigned uniquely to a single domain. The result is that I/O devices are dedicated to that domain. As with CPU and memory, the virtualization in this case is limited to resource partitioning and therefore does not incur the usual overheads inherent in traditional virtualization.
This type of virtualization has been available on every Oracle SuperCluster platform release; indeed, virtualization of this type was the only option available on the original SPARC SuperCluster T4-4 platform. In this implementation, InfiniBand HCAs (which carry all storage and network traffic within SuperCluster) and 10GbE NICs (which carry network traffic between the SuperCluster rack and the datacenter) are dedicated to the domains to which they are assigned. As is true for CPU and memory access, I/O access in this implementation follows the same code path whether or not domains are in use.
Domains of this type are referred to as Dedicated Domains on SuperCluster, since all CPU and memory resources, and InfiniBand and 10GbE devices, are uniquely dedicated to a single domain. Such domains have zero overheads with respect to performance. SuperCluster Dedicated Domains are illustrated in the diagram below.

Virtualization based on SR-IOV.
For Oracle SuperCluster T5-8 and subsequent SuperCluster platform releases, shared I/O has also been available for InfiniBand and 10GbE devices. The resulting I/O Domains leverage SR-IOV technology and feature I/O virtualization with very low, but not zero, performance overheads. The benefit of the SR-IOV technology used in I/O Domains is that InfiniBand and 10GbE devices can be shared between multiple domains, since domains of this type do not require dedicated I/O devices. SuperCluster I/O Domains are illustrated in the diagram below.

Virtualization based on proxies in combination with virtual device drivers.
This type of virtualization has been used on all SuperCluster implementations for functions that are not performance critical, such as console access and virtual disks used as domain root and swap devices.

All Oracle SuperCluster platforms since Oracle SuperCluster T5-8—including the current Oracle SuperCluster M8—support hybrid configurations that deliver InfiniBand and 10GbE I/O virtualization via Dedicated Domains (domains that use PCIe slot partitioning) and/or via I/O Domains (domains that leverage SR-IOV virtualization). An additional layer of virtualization is also supported, with one or more low-overhead Oracle Solaris Zones able to be deployed in domains of any type. An example of a configuration featuring nested virtualization is illustrated in the diagram below.

The Oracle SuperCluster tooling leverages SuperCluster's built-in redundancy, along with both the resource partitioning and the resource virtualization described above, to allow customers to deploy flexible and highly available configurations. High availability will be the subject of a future SuperCluster blog.

In summary, SPARC domains are able to offer efficient and secure isolation with zero or very low performance overheads. The current Oracle SuperCluster M8 platform delivers domain-based virtual machines with zero performance overhead for CPU and memory operations. Oracle SuperCluster M8 virtual machines also deliver I/O virtualization for InfiniBand and 10GbE with either zero performance overhead via Dedicated Domains, or very low performance overhead via I/O Domains. Learn more here.

About the Author
Allan Packer is a Senior Principal Software Engineer working in the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management; developed performance and regression testing tools; published several TPC industry-standard benchmarks as technical lead; and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.


Cloud Infrastructure Services

Mapping a Path to Profitability for the Banking Industry

Just over 20 years ago, a supercomputer named Deep Blue made history by beating the world's best chess player, Garry Kasparov, in a six-game match. It was able to do this using hardware with a little over 11 Gflops of processing speed. In contrast, the iPhone X you might be holding right now is capable of about 346 Gflops. That's enough raw computing power to take on Kasparov plus 30 more grandmasters... at the same time.

Such comparisons remind us that, even by modern technology-industry standards, mobile technology continues to advance at a breakneck pace. The result of this trend—a relentless cycle of innovation, rising consumer expectations, and business disruptions—has created major challenges as well as lucrative opportunities for the banking industry. Today, more banks are discovering that a successful mobile strategy offers a clear path to a profitable future. They are also discovering, however, that the wrong IT infrastructure decisions—especially those involving legacy infrastructure—risk turning this journey into a costly dead end.

Understanding the Mobile Banking Opportunity

There are many reasons why banks increasingly view long-term success through a mobile banking lens. Consider a few examples of the opportunities that an institution can unlock with a successful mobile strategy:

Room to grow: According to the Citi 2018 Mobile Banking Survey, 81 percent of U.S. consumers now use mobile banking at least nine days a month, and 46 percent increased their mobile usage in the past year. Mobile banking apps are now the third most widely used type of app—trailing only social media and weather apps.

A global opportunity: According to the World Economic Forum, 500 million adults worldwide became bank accountholders for the first time—but two billion more remain without banking services. As with access to healthcare and education, easy access to affordable mobile connectivity (with 1.6 billion new mobile subscribers coming online by 2020) will put banking and payment services in front of many people for the first time.

A mobile-banking revenue boost: According to a 2016 Fiserv study, mobile banking customers tend to hold more bank products than branch-only customers—a trend that suggests bigger cross-selling opportunities. As a result, mobile banking customers bring in an average of 72 percent more revenue than branch-only customers.

Millennials are "mobile-first" banking customers: 62 percent of Millennials increased their mobile banking usage last year, and 68 percent of those who use mobile banking see their smartphones replacing their physical wallets.

Second-Rate Mobile Banking Technology Is Risky Business—and Getting Riskier

As mobile technology advances, however, so do the risks associated with a second-rate mobile banking presence. This is especially true for banks that previously settled on a "good enough" mobile strategy—an approach that, in many cases, was designed to work within or around the limitations of a bank's legacy systems.

Two risks stand out for banks that continue to accept a "good enough" approach. First, as competitors invest in cutting-edge mobile technology, they expose the glaring usability, reliability, and capability gaps associated with legacy IT infrastructure. Second, it's clear that technology innovation drives rising consumer expectations.
When a bank's mobile offerings fall short, the consequences can be profound, far-reaching, and extremely difficult to rectify:

Unhappy consumers are ready and willing to abandon their banks: In 2016, about one in nine North American consumers switched banks.

Millennials are even faster to switch: During the same period, about one in five adults age 34 or younger switched banks. Another 32 percent of those surveyed said they would switch in the future if another institution offered easier-to-use digital banking services.

Bad banking apps are a big deal: Seeking a better mobile app experience is now the third most common reason for switching banks—ahead of security concerns and customer-service failures.

Digital lag leaves mobile apps lacking: A recent survey of UK bank customers found just one in four said they were able to do everything they wanted using a bank's mobile app, and only 34 percent found their bank's app easy to use.

There are many reasons why a bank might continue to rely on a lower-caliber mobile presence built on aging legacy infrastructure. It's very difficult, however, to imagine why any of those reasons would justify this level of possibly grievous damage to a bank's customer relationships, brand image, and industry reputation.

It's Not Too Late to Invest in Mobile Banking Success

I know that I have painted a foreboding picture—especially for banks that want to embrace a modern technology infrastructure but haven't yet been able to follow through. That's why it's important to make another point: it's not too late to get ahead of these challenges and to make the investments that enable a truly first-rate mobile banking strategy.

First, bear in mind that traditional banks still hold some very important cards: consumers still consider them more deserving of trust than most businesses; their physical branches (though declining in number) remain important for certain types of advisory and high-value services; and they have the compliance and legal expertise required to navigate the treacherous regulatory waters of the banking mainstream.

Second, it's crucial to recognize that moving away from legacy infrastructure—the sooner the better—may be the single most important move a bank can make to trigger a quick and decisive pivot toward mobile banking success.

4 Keys to Winning with Bank IT Infrastructure

Let's focus now on specifics: four action items that a bank IT leader can use to drive a fast and effective infrastructure modernization program.

1. Embrace the cloud to support global growth. Mobile technology performance is key to creating a good user experience; nobody likes to wait, especially when they want to access their money. Cloud-ready infrastructure is a much better foundation for building robust and reliable mobile offerings—for example, it eliminates the latency problems that arise when on-premises systems try to serve a global customer base.

2. Get and stay ahead with help from integrated, co-engineered systems. Hardware and software designed to work together, and offered in simple pre-configured and pre-optimized packages, deliver better performance and faster deployment than DIY non-optimized alternatives. This can be a bank's most powerful technology weapon for fighting back against the complexity, management, and reliability issues that accompany rapid growth and pressure to scale.

3. Liberate your IT staff to do the things that matter.
Co-engineered systems and cloud infrastructure both contribute to many of the same goals: attacking complexity, enabling growth, and designing scalable and resilient systems. This means less time spent on tedious maintenance tasks—and more time focusing on business goals that drive success.

4. Build infrastructure that's ready to handle today's data and analytics challenges. An entire category of fintech upstarts is focused on reaching new markets through the use of unconventional credit analytics and scoring systems. These firms incorporate everything from educational achievements to call center records and website analytics into models that identify preferences and assess risk for customers who don't yet have—and might never get—conventional credit scores. In many cases, the only way to serve these customers will be over mobile banking apps and systems.

Many banks could pursue similar opportunities, given the massive quantities of customer data at their disposal. But first, they'll have to put systems into place that are capable of pulling this data from dozens of siloed sources, combining it with the masses of data flowing into the organization, and applying the right management, analytical, and storage solutions to unlock the insights within.

Oracle Sets Up Banks for Mobile-Tech Success

Oracle's engineered systems are especially adept at giving banks everything they need for a truly modern, mobile-ready IT infrastructure. First, that's because they are built as fully integrated systems. Engineered systems give banks one dedicated, high-availability environment, such as Oracle Exadata, to run Oracle Database, and another, such as Oracle Exalytics or Oracle Exalogic, to run advanced analytics and other critical business applications.

Second, along with a single, integrated technology stack, Oracle gives banks a single, integrated technology partner to support a modern mobile banking strategy. This is a powerful advantage, combined with Oracle's ability to deliver openness where it matters: compliance with open architectures, open industry standards, and open APIs, and a commitment to interoperability and integration. These are the qualities that truly give an IT team the freedom and flexibility to support innovative mobile banking functions that are money in the bank.

About the Author
Srinivasan Ayyamoni is a Certified Accountant with 20 years of experience in business transformation, technology integration, and establishing finance shared services for large global enterprises. As a transformation consulting lead with Cognizant's Oracle Solution Practice, he manages large digital transformation engagements focused on helping clients establish a high-performance finance function and partnering with them to achieve superior enterprise value.


The Future of Banking: How AI is Transforming the Industry

Today's blog post is a Q&A session with top fintech influencer and founder of Unconventional Ventures, Theodora Lau. Named one of 44 "2017 Innovators to Watch" by Bank Innovation, ranked No. 2 among Top FinTech Influencers 2018 by Onalytica, and named to the list of LinkedIn Top Voices 2017 for Economy and Finance, she's a powerful voice in the industry.

If you probe into the rapid adoption of artificial intelligence (AI) initiatives in the enterprise, it quickly becomes clear what's behind it: big data. In a 2018 NewVantage Partners survey of Fortune 1000 executives, 76.5 percent cite the greater proliferation and availability of data as what is making AI possible. As Randy Bean puts it in an MIT Sloan Management Review article, "For the first time, large corporations report that they have direct access to meaningful volumes and sources of data that can feed AI algorithms to detect patterns and understand behaviors…these companies combine big data, AI algorithms, and computing power to produce a range of business benefits from real-time consumer credit approval to new product offers."

To be able to process all that data, such as financial data, at speed and scale, enterprises need infrastructure to support it. Infrastructure specifically designed for financial and big data applications, with hardware and software that have been co-engineered to work optimally together, can offer better performance and faster analytics. It's definitely helping deliver a better customer experience—and that's especially true in the financial services industry.

We asked fintech influencer Theodora Lau to talk about the major innovations taking place in the traditionally conservative world of financial services. One key driver of this innovation is the infiltration of AI technology into the financial services industry. A second driver is a new era of partnerships between fintech startups and traditional financial institutions. Traditional financial institutions and fintechs have discovered that, by partnering, they can take advantage of each other's strengths to develop innovative, revenue-generating offerings. PwC's Global FinTech Report 2017 found that 82 percent of mainstream financial institutions expect to increase their fintech partnerships in the next three to five years.

Theo, how are fintech startups disrupting the industry, and how are the traditional financial services companies responding to that?

If you'd asked that question a few years ago, most people would have said banks are in trouble and need to defend against fintechs. But, starting sometime around 2017, the industry began to turn around and become more willing to collaborate. It makes sense, because fintech startups are typically more focused on specific use cases: they home in on those and do them really well. They have really good ideas and they tend to be very customer-experience-driven, though they lack scale compared to incumbent banks. And, as much as we talk about how bank infrastructure is aging, banks still have a large customer base and can scale. Traditional financial services companies have existing customers and brand recognition, whereas fintech startups are typically starting from scratch. At the end of the day, it's money that we're talking about, and money is very personal and emotional. How much will a consumer actually go out and trust a company that has no history? While a startup may have the most beautiful customer experience, will I trust it enough to hand over my money?
I see the two of them [traditional banks and fintechs] working together as the best outcome, from a consumer perspective as well as for their own survival.

Is it true that new technology is making more collaboration possible as well?

Yes, exactly—through APIs and open banking. I don't believe that any single bank can offer everything that the consumer wants, and I don't think it's in their best interest to try to be everything for everybody. For instance, ING, a large bank based in Amsterdam, has multiple operating units in different countries. Its German operations formed a partnership with a startup called Scalable Capital, which is an online wealth manager and robo-advisor, to offer a fully digital solution for its customers in Germany. This is a brilliant example of a partnership where the bank extends its product offerings by leveraging the solutions and capabilities that someone else has.

What AI technology is changing the industry?

Open banking. Open banking is the big game changer. One example is Starling Bank in the UK, which does a really good job of being an online marketplace. Using APIs, it acts as a hub through which consumers can get access to different things that traditional banks don't offer, including spending insights, location-based intelligence, and links to retailers' loyalty programs.

Technology companies with banking services. Another example is Tencent and Alibaba in China and the big ecosystem that they've built. Between the two companies, they own over 90 percent of all mobile payments in China. They're not banks, but they're technology companies that put the consumer at the center of everything they do. They view payments and financial services not as an end in themselves, but as a tool to further enhance their offerings.

Voice banking. We can't forget about voice banking. We see more banks trying to get into that space—though we are not quite there yet. Voice is very intuitive. It's just easier to talk than it is to remember how to navigate a menu, which is a challenge in online and mobile banking. Imagine if you could actually say, "Hey, pay my bills," instead of having to remember where you need to go on the menu tree.

Let's go deeper into how AI has changed the customer experience. How has it affected personalization and the omnichannel experience?

When we're talking about AI in customer experience, it's important to remember that banks are not really competing with other banks anymore. When consumers do their "banking," they're comparing the experience to what they get from all the other online businesses. How does banking compare to me getting something from an ecommerce site? Is it quick and easy? Is it available when I want it and where I want it? The threat to banks isn't so much fintech companies as the big tech companies like Apple, Amazon, Alibaba, and Tencent. They are the ones banks should be worried about. Look how many customers they have. Look at the products and services they offer, even payments. It's because of the vast amount of customer information they collect, as well as data analytics and AI, that big tech companies can provide data insights into user behavior and spending habits, allowing them to anticipate your needs and offer contextual, personalized recommendations. That's how payments are supposed to work as well. Consumers shouldn't have to think, "I need to pay something." They have a specific task they want to do, and banking services are just a means to an end.
From a consumer perspective, hopefully, AI can make banking ambient and transparent in our increasingly connected world.

We've been talking a lot about retail banking, but I presume AI is also making similar changes to other areas of financial services.

Marketing is a good example. A big thing is figuring out how to entice people to open an email, because everything is digital now. HSBC ran a trial using AI to figure out whether its members would prefer rewards for travel or merchandise versus rewards in the form of gift cards or cash. It sent emails to 75,000 credit card members using recommendations that were generated by AI, while a control group received emails with rewards from a random category. As it turned out, the emails using AI-generated recommendations had a 40% higher open rate. That's a fascinating business use case, because you don't want to waste your marketing dollars if people are not going to open your emails.
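To see why a result like that matters commercially, here is a back-of-the-envelope Python calculation. Only the 40% uplift and the 75,000-recipient figure come from the interview; the baseline open rate is an assumed placeholder.

    # Rough arithmetic for the A/B test described above.
    # The 40% uplift and 75,000 sends are from the interview;
    # the 20% baseline open rate is a hypothetical placeholder.

    control_open_rate = 0.20
    uplift = 0.40
    ai_open_rate = control_open_rate * (1 + uplift)

    emails_sent = 75_000
    extra_opens = int(emails_sent * (ai_open_rate - control_open_rate))
    print(f"AI group open rate: {ai_open_rate:.0%}")        # 28%
    print(f"Extra opens on 75,000 sends: {extra_opens:,}")  # 6,000

Every one of those additional opens is a marketing email that wasn't wasted, which is the point Lau is making.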
Do traditional financial service companies have the infrastructure in place to fully leverage AI, or even to partner with fintechs? How is AI changing processes within their firms' infrastructure?

Financial institutions have a lot of data, but when it comes to being able to leverage AI, which is very heavily data-dependent, the challenge is being able to access that data. A lot of times, all of these systems are very siloed. So while a bank may have a ton of data about a customer, how well can it actually pull all of that data together to generate insights that are useful and can be leveraged? The other challenge: if you can get the data together in a meaningful way, are the results explainable? If you are using AI to make decisions, such as in lending, are you going to be able to explain what the AI is recommending, and how someone gets qualified for a loan, for example? That's something you need to do.

What's holding the banks back in terms of modernizing their technology?

It's a couple of things. You need to look at the make-up of the people, because it has to start from the top.

Embrace technology. At the upper layer, finance people have been doing the same thing for many years. Until you have leaders, including senior executives and board members, who are passionate about and actually understand technology, it's hard to transform. It goes beyond just having a mobile app: true digital transformation and modernization involve change in culture, mindsets, and processes.

Data security. Of course, it's also a heavily regulated industry. If you're going to be upgrading something, and you already have customers and money and transactions there, you need to be very careful about what you're doing. Privacy and security of data are of paramount importance.

The pain of upgrading infrastructure. It's also a very expensive and lengthy process to upgrade core systems, so money is definitely one part of why financial institutions aren't modernizing their infrastructure. Some of my friends would say that some banks are actually not scared enough yet. Look at their earnings—they're still making good money. So if they're not feeling the pain as much yet, then how urgent is it for them to actually do something drastic? Yet many mid-size financial companies don't have large budgets but still need to modernize their technology solutions to manage the explosion of data. There are banks that are certainly more at the forefront of technology, and they're betting big on it. For example, JPMorgan Chase's technology budget is over $10 billion in 2018, with most of it going toward enhancing mobile and web-based services.

Where do you see AI taking financial services in the future?

What I would like to see in the future in the US is what we see right now in China with their platforms. In India, China, and Africa, mobile adoption is so much higher than in the US, where the mode of doing things is so different. We shouldn't be looking at banking as an entity per se; consumers are looking for banking services. That's what we will be evolving toward in the future, and we'll need AI to be the brain and the engine that offers a deeper, richer, more personalized experience.

Authentication is another interesting area. No one wants to remember passwords or carry those little tokens. That's not customer-friendly at all. So biometrics and voice authentication will be very fascinating, at least for voice banking, which is still in the exploratory stage. Checking balances is not really exciting, but, in the future, AI will let the bank know I got a work bonus and will automatically ask whether I'd like to put aside 10 percent of it into savings. Things like that will enable financial wellness and more value overall for customers. That's where I think AI can help in the future—and that's how we can make banking better.

And behind this future will be the enormous quantities of data that make this customer knowledge possible, and the ability to collect and analyze the data in real time, built on the right infrastructure. Learn more about how machine learning and AI can add substantial value to the financial services ecosystem.


Cloud Infrastructure Services

Why Should CMOs Care About GDPR?

What GDPR means for CMOs: "Is all the hype justified?" As a direct link to customers and their data, marketers will be uniquely affected by GDPR, so we asked Oracle's Marie Escaro, Marketing Operations Specialist for OMC EMEA SaaS, and Kim Barlow, Director, Strategic and Analytic Services for OMC EMEA Consulting, to discuss how GDPR affects marketing teams.

Is all the hype around GDPR justified? How seriously should marketers be taking it?

Kim: European regulators have a clear mandate to tighten controls on the way businesses collect, use, and share data, and the prospect of large fines for non-compliance is enough to make companies err on the side of caution. Marketers should take this very seriously, as a large part of their role is to ensure the organization has a prescriptive approach to acquiring, managing, and using data.

Marie: Businesses increasingly rely on data to get closer to their customers. With data now viewed as the soft currency of modern business, companies have every reason to put the necessary controls in place to protect themselves and their customers.

What does this mean for CMOs and marketing teams?

Marie: Marketing teams need a clear view of what data they have, when they collected it, and how it is being used across the business. With this visibility, they can define processes to control that data. I once worked with a company that stored information in seven different databases without a single common identifier. It took two years to unify all this onto a single database, which should serve as motivation for any business in a similar position to start consolidating its data today. It's equally important to set up processes to prioritize data quality. Encryption is a good practice from a security standpoint, but marketers also need to ensure their teams are working with relevant and accurate data.

What's been holding marketers back?

Kim: There is still a misconception around who is responsible for data protection within the organization. It's easy to assume this is the domain of the IT and legal departments, but every department uses data in some form and is therefore responsible for making sure it does so responsibly. Marketing needs to have a clear voice in this conversation. Many businesses are also stuck with a siloed approach to their channel marketing and marketing data, which makes the necessary collaboration difficult. These channel silos within marketing teams have developed through years of growth, expansion, and acquisitions, and breaking them down must be a priority so everyone in the business can work off a centralized data platform.

Is this going to hamper businesses or prove more trouble than it is worth?

Kim: Protecting data is definitely worth the effort for any responsible business. But GDPR is not just about data protection. It's a framework for new ways of working that will absolutely help businesses modernise their approach to handling data, and benefit them in the long term. If we accept that data is an asset with market value, then it's only natural that customers gain more control over who can access their personal information and how it is used and shared. Giving customers the confidence that their data is safe and being looked after responsibly, while ensuring that data is better structured and of higher quality, will be good for the businesses deriving value from that data.

What should CMOs do to tackle GDPR successfully?

Marie: As with any major project, success will come down to a structured approach and buy-in from employees.
CMOs need to stay close to this issue, but in the interest of their own time they should appoint a strong individual or team as part of an organization-wide approach to compliance. Marketing needs to be part of that collaborative effort and should be working in a joined-up way with finance, IT, operations, sales, and any other part of the business to ensure all data is accounted for and properly protected.

Find out more and discover how Oracle can help with GDPR.

About the Authors
Marie Escaro is at Oracle. She has more than 15 years of experience coordinating partnerships between sales and marketing, using high-performance tools to improve marketing adoption, data quality management in CRM, and automated marketing processes. She specializes in marketing automation, CRM, direct marketing, international localization, and communication; she is an Eloqua Master; and, finally, she enjoys sharing the feeling of having a positive impact and changing the world by working with the best marketers in the industry.

Kim Barlow is currently the Director of Strategic and Analytical Services EMEA at Oracle. She has had an extensive career in tech, and is currently working with a number of clients to help drive their lifecycle and digital strategies using Oracle technology. She loves her life, her family, her friends, and her work colleagues.


Cloud Infrastructure Services

What is IT's Role in Regard to GDPR?

Usually when any sort of new compliance and regulation regarding personal data comes out, it is automatically assumed to be solely "IT's problem," because technology is such a huge component of the data collection and data processing systems. But compliance is in fact an organization-wide commitment. No individual or single department can make the organization compliant. If you've somehow missed the May 25th deadline, don't panic too much; you're not alone. But you do need to move quickly, because there are clear areas where IT can add significant value in helping the organization achieve GDPR compliance a whole lot faster and more methodically.

1. Be a data champion

Organizations know how valuable their data is, but many departments, business units, and even board members may not realize how much data they have access to, where it resides, how it is created, how it could be used, and how it is protected. This is one of the main reasons why organizations are lagging: unclear oversight of where all personally identifiable data (PID) resides.

The IT department can play a clear role in helping organizations understand why data, and by extension GDPR, is so important, and in determining the best way to use and protect it. Helping educate the greater organization on what exactly GDPR is and on the ramifications of non-compliance will instill a sense of urgency across the organization and ensure that everyone is moving quickly to comply. In addition, GDPR is an excellent opportunity for IT to explore integrated infrastructure technology and different approaches to data management that can help unify where and how PID is used and processed. Oracle Exadata is a complete engineered system that is ideal for consolidating, and improving the performance of, the Oracle Databases that handle much of an organization's PID.

2. Ensure data security

GDPR considers protection of PID a fundamental human right, so organizations need to ensure they understand what PID they have access to and put appropriate protective measures in place. IT has a role to play in working with the organization to assess security risks and ensure that appropriate protective measures, such as encryption (see the sketch at the end of this section), access controls, and attack prevention and detection, are in place.

In my previous post on the new regulations that the telecommunications industry is facing, I mentioned that PCI-DSS compliance is being used as a basic guideline to help IT achieve GDPR compliance. GDPR is unfortunately quite broad and not well defined, so many companies are intelligently using PCI-DSS, with its clearer demands on PID security, as a starting point. Engineered systems, including Exadata, have undergone rigorous review to determine their compliance with PCI DSS v3.2, so customers can take care of at least the technological requirements of that regulation.

At a glance, Exadata features extensive database security measures to help customers protect and control the flow of PID:

Perimeter security and defense in depth
Open security by default
DB-scoped security and ASM-scoped security (cellkey.ora: key, asm, realm)
InfiniBand that is open by default, but with the ability to assign particular gateways to segregate the networks
auditd monitoring enabled (/etc/audit/audit.rules)
Cellwall, an iptables firewall
A password-protected boot loader

All of these align well with the industry compliance strategies for GDPR that focus on: 1) authentication, 2) authorization, 3) credential management, and 4) privilege management.
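As a deliberately minimal illustration of the encryption point above, the Python sketch below encrypts a single PID field at the application layer before it is stored. It uses the open-source cryptography package (pip install cryptography); the record, field names, and key handling are invented for the example, and a real deployment would add key management, access control, and auditing on top.

    # Minimal sketch: encrypt a PID field before storage.
    # Illustrative only; production systems need real key management.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, fetch from a key management service
    cipher = Fernet(key)

    record = {"customer_id": "C-1001", "email": "anna@example.com"}

    # Encrypt the sensitive field; keep the non-identifying key in the clear.
    stored = {
        "customer_id": record["customer_id"],
        "email_enc": cipher.encrypt(record["email"].encode()),
    }

    # Only holders of the key can recover the original value.
    assert cipher.decrypt(stored["email_enc"]).decode() == record["email"]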
3. Help the organization be responsive

GDPR requires organizations not only to protect personal data but also to respond to requests from individuals who, among other things, want to amend or delete data held on them. That means that personal data must be collected, collated, and structured in a way that enables effective and reliable control of all this information. This means breaking down internal silos and ensuring an organization has a clear view of its processing activities with regard to personal data.

4. Identify the best tools for the job

GDPR compliance is as much about process, culture, and planning as it is about technology. However, there are products available that can help organizations with key elements of GDPR compliance, such as data management, security, and the automated enforcement of security measures. Advances in automation and artificial intelligence mean many tools offer a level of proactivity and scalability that doesn't lessen the responsibility of people within the organization, but can reduce the workload and put in place an approach that can evolve with changing compliance requirements.

5. See the potential

An improved approach to security and compliance management, fit for the digital economy, can give organizations the confidence to unlock the full potential of their data. If data is more secure, better ordered, and easier to make sense of, it stands to reason an organization can do more with it. It may be tempting to see GDPR as an unwelcome chore. However, companies should also bear in mind that this is an opportunity to seek differentiation and greater value, and to build new data-driven business models, confident in the knowledge that they are using data in a compliant way. Giving consumers the confidence to share their data is also good for businesses.

The IT department will know better than most how the full value of data can be unlocked, and it can help the business stop seeing GDPR as merely a cost of doing business and start seeing it as an opportunity to do business better.

Learn more about GDPR and how Oracle can help


Infrastructure

May Database IT Trends in Review

April and May flew by! Check out the latest database infrastructure happenings you may have missed in the last two months...

In case you missed it...

General Data Protection Regulation (GDPR) took effect last week, on May 25th, and many companies were "unprepared" despite having had two years to plan for it. If you're set, great! Otherwise, check out these posts to get up to speed ASAP:

What is GDPR? Everything You Need to Know.
It's Not Too Late: 5 Easy Steps to GDPR Compliance
Your Future Is Calling: Surprise! There's (Always) More Regulation on the Way

The experts take over

We've recently invited tech luminaries to talk about the intersection of new, emerging technologies and the challenges that organizations are facing now in the digital age.

Welcome to the 'Self-Driving' Autonomous Database, with Maria Colgan, master product manager, Oracle Database
Going Boldly into the Brave New World of Digital Transformation, with internationally recognized analyst and founder of CXOTalk, Michael Krigsman
The Transformative Power of Blockchain: How Will It Affect Your Enterprise?, with blockchain expert and founder of Datafloq, Mark van Rijmenam

How is the telecommunications industry changing?

Your Future Is Calling: How to Turn Data into Value-Added Services
Telcos, Your Future Is Calling: It Wants to Show You What's Possible
Telcos, Your Future Is Calling! Is Your Back Office Holding You Back?
Your Future Is Calling: Get Connected—With Everything

Don't Miss Future Happenings: subscribe here today!


Cloud Infrastructure Services

It's Not Too Late: 5 Easy Steps to GDPR Compliance

GDPR went into effect last week, on May 25th, with, unsurprisingly, many organizations scrambling to make the deadline. If you've been keeping up with this blog, you know that we've been highlighting this topic for months. But don't worry: it's not too late to take control of your data and prepare your organization. Here, we outline five surprisingly simple steps that can help get your organization on the path to compliance.

Step 1: Don't panic!

Seriously! You may have missed the deadline, but you're not the only one. A recent report estimated that 60% of businesses were likely to miss the GDPR compliance deadline, and the articles coming out since the 25th indicate this to be quite true. It might be tempting to hastily implement as many data protection measures as possible, as quickly as possible. While this sense of urgency is warranted, as always a measured and strategic approach is best. Companies first need to understand GDPR, how it applies to them, and exactly what their obligations are. This will give them a clear view of the data management and protection measures they need to address their compliance needs.

Step 2: Centralize your data

GDPR asks that only the absolute minimum of necessary user information be collected and processed, and that users have control over what you do with that data and how you hold it. Thus, having greater visibility into how and where the organization collects data is imperative. To better monitor data, organizations first need to make relevant information easily accessible to all the right people internally. Years of growth and diversification may have left them with disjointed systems and ways of working, making it difficult for individual teams to understand how their data fits in with data from across the organization. This makes customer information almost impossible to track in a cohesive way, which is why it's crucial to centralize data and ensure it is constantly updated (see the sketch below for a toy illustration). This is one of the reasons why a unified Oracle stack is so attractive. The performance, speed, and cost savings of Oracle Engineered Systems and the cloud are great, but it is the consolidation, standardization, and security from chip to cloud that make complying with regulations like PCI-DSS and GDPR so much easier.
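As a toy illustration of this centralization step, the Python sketch below merges records from three hypothetical siloed systems into one view keyed by a single common identifier. All system names and records are invented, and real consolidation also has to handle mismatched or missing identifiers; the point is simply that one unified view makes questions like "what do we hold on this person?" answerable.

    # Toy illustration: centralize customer data from siloed systems
    # into a single view keyed by one common identifier (here: email).
    # All records and system names are invented.

    from collections import defaultdict

    crm     = [{"email": "anna@example.com", "name": "Anna Schmidt"}]
    billing = [{"email": "anna@example.com", "plan": "premium"}]
    support = [{"email": "anna@example.com", "open_tickets": 2}]

    def centralize(*sources):
        """Merge records that share the common identifier."""
        unified = defaultdict(dict)
        for source in sources:
            for record in source:
                unified[record["email"]].update(record)
        return dict(unified)

    view = centralize(crm, billing, support)
    print(view["anna@example.com"])
    # {'email': 'anna@example.com', 'name': 'Anna Schmidt',
    #  'plan': 'premium', 'open_tickets': 2}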
Step 3: Build in data transparency

Once you have a solid grip on your data and data-related processes, the next step is to facilitate the exchange of information between teams. Teams like customer service and sales draw on more customer data from more touchpoints than ever before to help personalize products or services, but this also means the information they collect is spread thinly across the organization. To gain a more accurate view of their data, organizations need to integrate their systems and processes so every team has access to the data it needs.

Step 4: Choose consistency and simplicity over breadth

With businesses collecting such large volumes of data at such a rapid rate, complexity quickly becomes the enemy of governance. Rather than opting for a breadth of technologies to manage this information, your business may want to consider using a single system that sits across the organization and makes data management simple. Cloud-based applications are well suited to this end, as they allow businesses to centralize both data and data-driven processes, making it easier to track where and how information is being used at all times. As I mentioned before, consolidating your Oracle Database infrastructure onto Oracle Engineered Systems like Oracle Exadata delivers the standardization and security needed to help comply with new regulations like GDPR and beyond. With exact equivalents in the cloud, Exadata allows customers to get their systems into compliance today while still keeping an eye on the demands of tomorrow.

Step 5: Put data protection front of mind for employees

New technologies can only go so far in making an organization GDPR compliant. As ever, change comes down to employees, culture, and processes. Data protection must be baked into the organization's DNA, from decisions made in the boardroom down to the way service teams interact with customers.

Much of the focus around GDPR has been on the cost organizations will incur if their data ends up in the wrong hands, but it's worth remembering that above all else the law requires them to show they have the people, processes, and technologies in place to protect their information. By following these simple steps, organizations can put themselves in a better position to take control of their data.

Learn more about how Oracle solutions like Oracle Engineered Systems can help support your response to GDPR.


How to Build a Digital Business Model

Many companies understand the opportunities presented by digital technologies, but lack a common language or framework to transform their organizations. Through extensive interviews and surveys, researchers at the MIT Sloan Center for Information Systems Research (CISR) have developed a framework to guide thinking about digital business models. The framework focuses on business design considerations and aims to discover how much revenue is under threat from digital disruption, and whether the company is focused on transactions or on building a network of relationships to meet customers' life-event needs.

CISR analyzed 144 business transformation initiatives to determine the underlying factors that drive next-generation business models, and it found two common key dimensions:

Customer knowledge. Many companies are launching products and initiatives to learn more about their end customers.
Business design. Many firms are striving to shift from value chains to networks or ecosystems.

CISR took these two dimensions and created a two-by-two matrix that highlights the business models that will be important in the next five to seven years, and beyond. Not every organization transforms in the same way, because there is no one-size-fits-all approach to building digital business models. As companies evaluate their digital business models, they must answer several key questions. For organizations developing new digital business models, the research suggests answering these four key questions is a good starting point:

How much revenue is under threat from digital disruption? It is important to think beyond traditional competitors. What parts of your value chain or business might be attractive to another company?

Is the business at a fork in the road? Key decisions include whether to focus on transactions and become an efficiency play, or to meet customers' life events and build a network of relationships. Investments must be driven by what the company is great at.

What are the buying options for the future? Moving a company's business model is the equivalent of buying options. One path is to buy an option that helps the company evolve a little bit at a time.

What is your digital business model? MIT CISR's Stephanie Woerner recommends focusing on the business model you want to become. It is important to know where you want to go as a company.

Curious about the framework and Woerner's research? Join this Harvard Business Review webinar on Wednesday, May 30th to hear Woerner speak live with Oracle and share her research findings and insights about digital business models. http://ora.cl/oe8SH


Data Protection

GDPR: Too late? Too complicated? Too flexible? Don’t panic.

‘GDPR is coming tomorrow!’ The Wall Street Journal reported today that as many as 60% to 85% of companies say they don’t expect to be in full compliance by Friday’s deadline. Suggested reasons include businesses weighing the cost of compliance against the cost of non-compliance and deciding to accept the risk, while others will simply fail to get their affairs in order in time.

So, as we approach the deadline, what’s next? A great many organizations will be compliant and should find their preparations stand them in good stead. But what about those organizations that miss the deadline tomorrow, whether by delay or design? Should they start panicking now? Should they throw resources and money at the problem in the hope of scrambling over the finish line at the eleventh hour? Is it now riskier to rush a response than to miss the deadline but do so with a deliverable approach in place that demonstrates a commitment to compliance?

If businesses are rushing to compliance, what should they be prioritizing? Part of the problem in answering that question is that the regulation itself doesn’t provide a convenient tick-box guide to compliance. Lori Wizdo, principal analyst at Forrester, has written: “The GDPR is a comprehensive piece of legislation. But even at 261 pages long, with 99 articles, [it] doesn’t provide a lot of specificity.” Wizdo was writing for B2B marketers, but the conclusion is the same for all parties: “In practice this renders the GDPR more flexible than traditional ‘command and control’ frameworks.”

This conclusion is right, of course, but if you’re asking, in a panic, what constitutes best-practice compliance, “it’s flexible” isn’t necessarily the answer you’re looking for. All the more reason to stop panicking, pause and consider an appropriate response. If an organization has only now decided it needs to address GDPR, the one thing it cannot change is when it started. Rather than wishing it could turn back the clock, it should focus on clearly understanding what it wants to achieve and how best to go about it. For example, within GDPR there is a clear focus on security and data protection. But organizations should not develop tunnel vision for those objectives alone. In our recent series on the future of IT infrastructure and the telecommunications industry, we suggested that following PCI DSS guidelines can get businesses closer to GDPR compliance, so that is a great first step.

“A panicked response to GDPR, which focuses almost exclusively on data protection and security, distorts an organization’s data and analytics program and strategy. Don’t lose sight of the fact that implementing GDPR consent requirements is an opportunity for an organization to acquire flexible rights to use and share data while maximizing business value,” says Lydia Clougherty Jones, Research Director at Gartner.

Flexibility again, but this time as a benefit to organizations trying to come to terms with GDPR. And this is an issue, and an inherent contradiction, at the heart of GDPR. The same regulation can be seen as an unwelcome overhead that some organizations try to avoid, put off, or weigh up and dismiss, or it can be seen as an opportunity to modernize and create a data-driven business that also carries less risk. While organizations may not be able to change when they started the process, each remains in control of how effectively it responds. One of the first steps is to educate yourself before you rush into any hasty decisions.


Engineered Systems

Cognizant Guest Blog: Supercharged JIT and How Technology Boosts Benefits

Just-in-time manufacturing (JIT) strategies date back to the 1980s, and manufacturers today continue to embrace JIT as they navigate a fast-changing business and technology landscape. This kind of staying power raises an obvious question: How has JIT adapted and evolved to be as useful today as it was 30 years ago? We recently discussed this question with two experts on modern manufacturing technology: Vinoth Balakrishnan, Associate Director at Cognizant Technology Solutions, and Subhendu Datta Bhowmik, Senior Solution Architect at Cognizant. Their insights reveal the critical role that cloud infrastructure plays in creating a new generation of high-performing, “supercharged” JIT manufacturing organizations.

JIT has a pedigree that dates back to the 1980s. Why do modern manufacturing organizations continue to embrace JIT strategies?

Balakrishnan: The key to understanding JIT is to realize that it is not just a functionality or feature—it is an organization-wide discipline. In addition, there are two distinct pillars of a JIT strategy: one that is focused on organizational and process issues, and another that is more technology-focused. It is the organizational/process pillar of JIT that keeps it relevant even as technology evolves and changes. This is especially true for continuous improvement (CI), which is a core element of any modern JIT strategy. This is a concept that rises above shifting technology and business trends—giving manufacturers a proven and scalable model for building agile, efficient, and highly competitive operations.

Of course, technology plays an important role in JIT, which excels at combining established practices with modern technology innovation. This versatility allows JIT to adapt readily to new manufacturing challenges and competitive pressures, and to meet the demands of global, multi-plant operations with very complex supply chains. This combination also leads to what we think of as “supercharged” JIT strategies that unlock new just-in-time benefits and capabilities. Technology innovation is transforming JIT into a truly frictionless materials-replenishment loop—one that shifts from manual to automated processes, and that enables supply chains linking hundreds or even thousands of companies via strings of real-time, fully automated transactions.

Another way to think of this transformation is to imagine a supply chain that replaces material with information. When you can share reliable, real-time information up and down any supply chain, you enable huge efficiency gains, and drastic cuts in waste and misallocated resources. These benefits are relevant to all types of manufacturers, by the way, but they are especially important in industries where we see the most complex supply chains and the greatest scalability challenges—for example, the aerospace and automotive industries.

Can you discuss a few areas where you have already seen technology innovation combine with JIT strategies to deliver game-changing benefits?

Bhowmik: Two examples come immediately to mind. First, the Industrial Internet of Things (IIoT) has enabled major speed, efficiency, and accuracy gains in key JIT manufacturing practices. The IIoT leverages its core capabilities—machine-to-machine communication and real-time data flows—to elevate JIT performance. Manufacturers gain real-time visibility into manufacturing processes and performance, and they are able to adjust and improve manufacturing processes on the fly.
Value stream mapping—an exercise that identifies waste in a manufacturing process stream—illustrates the value of combining the IIoT with JIT activities. Value stream mapping was previously a manual exercise using individual observations and pencil-and-paper notes. The IIoT enables real-time, fully automated value stream mapping—a much faster and more accurate approach—and allows manufacturers to fix problems on the spot.

Second, cloud services are fueling a transformation in JIT capabilities and performance. One of the best examples involves supply chain management—an area where manufacturers face major challenges dealing with application and data integration, scalability and complexity, among many others. Cloud services allow manufacturers to solve many of these issues by defining a common information-exchange framework—one in which each supplier represents a node in a virtual supply chain. This framework allows manufacturers to adapt and adjust in real time to shifts in demand, supply chain disruptions, time-to-market requirements, and other potential risks to JIT performance.

Looking ahead, which emerging technologies are most likely to have a similar impact on JIT capabilities and performance?

Balakrishnan: Assuming a reasonable time frame—let’s say five years—I would look first at intelligent process automation (IPA). IPA has implications for JIT manufacturing when it combines existing approaches to process automation with cutting-edge machine learning techniques. The resulting IPA applications can learn and adapt to new situations—a key to combining process automation with continuous improvement.

Distributed ledger technology—also known as blockchain—is another important area of innovation. Blockchain has the potential to enable “frictionless” transactions that minimize cost, errors, and business risk, and some firms are already using blockchain to create private trading networks within their enterprise supply chains.

Continuous improvement remains a pillar of a modern JIT strategy. Does CI present any special challenges or opportunities related to technology innovation?

Bhowmik: I think it’s important to answer a question like this one by restating—first and foremost—that JIT is a technology-independent concept. Certainly, this is true of Kanban, 5S and other CI methodologies that play a role in JIT strategy. These methodologies have proven staying power and rely on timeless concepts—qualities that make them even more valuable as strategic tools. At the same time, it’s important to understand that “technology independent” doesn’t mean “technology free.” Instead, it means that manufacturers are free to choose the right technology that complements a chosen CI methodology and meets their business needs.

Fortunately, it is very easy to find examples that illustrate this point. Perhaps the most useful of these involves the ability to shift from physical Kanban cards to “eKanban” signaling systems. These rely on IIoT machine-to-machine communications and data flows to track the movement of materials; to distribute and route Kanban signals; and to integrate Kanban systems with ERP and other enterprise applications. eKanban systems based on IIoT capabilities are fully automated, and they scale to accommodate global manufacturing organizations of any size. They virtually eliminate the risk of manual entry errors and lost cards. Technology doesn’t change the principles that make Kanban useful, but it does radically improve your ability to apply those principles.
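To make the eKanban idea concrete, here is a minimal sketch in Python of the kind of logic such a signaling system automates. It is an illustration only, not part of any Cognizant or Oracle product: the Bin structure, the emit_replenishment_signal function, and the threshold values are hypothetical, while the sizing rule is the classic Kanban formula (daily demand × lead time × (1 + safety factor) ÷ container capacity, rounded up).

import math
from dataclasses import dataclass

@dataclass
class Bin:
    part_number: str
    quantity: int            # current pieces in the bin, from IIoT sensor reads
    reorder_point: int       # when quantity falls to this level, signal replenishment
    container_capacity: int  # pieces per Kanban container

def kanban_containers(daily_demand: float, lead_time_days: float,
                      safety_factor: float, container_capacity: int) -> int:
    # Classic Kanban sizing: N = D * L * (1 + S) / C, rounded up.
    return math.ceil(daily_demand * lead_time_days * (1 + safety_factor)
                     / container_capacity)

def emit_replenishment_signal(bin_: Bin) -> None:
    # Hypothetical stand-in: a real deployment would publish this signal to a
    # message bus or an ERP integration rather than print it.
    print(f"eKanban signal: replenish {bin_.part_number}, "
          f"{bin_.container_capacity} pieces per container")

def on_sensor_update(bin_: Bin, observed_quantity: int) -> None:
    # Called for each machine-to-machine stock reading; this check replaces
    # the physical act of passing a Kanban card.
    bin_.quantity = observed_quantity
    if bin_.quantity <= bin_.reorder_point:
        emit_replenishment_signal(bin_)

# Example: 400 pieces/day demand, 2-day lead time, 10% safety stock,
# 50-piece containers -> ceil(880 / 50) = 18 containers in the loop.
print(kanban_containers(400, 2, 0.10, 50))
on_sensor_update(Bin("BRKT-01", 60, 50, 50), 48)

The point of the sketch is simply that once stock levels arrive as data, the card-passing discipline becomes a small, automatable rule, which is exactly the shift from manual to automated processes described above.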
For a second example, consider the role that machine learning and artificial intelligence can play in upgrading the IT security measures protecting your JIT manufacturing infrastructure. If a cyberattack stops the flow of eKanban signals, it can also stop your manufacturing processes. The benefits of eKanban are real, and they’re incredibly valuable—and it’s worth protecting those benefits with appropriate security technology choices.

These examples are a great lead-in to our final question: How can manufacturers set themselves up for success with their own “supercharged” JIT strategies?

Balakrishnan: My first piece of advice would be to partner with an integrator, or another source of expert advice and technology services. I realize this sounds like self-serving advice coming from a technology integrator. Nevertheless, it’s a valid recommendation, given the sheer number of technology options available to manufacturers.

Most JIT-related technology initiatives, however, are built on the same foundation: cloud-ready infrastructure. It’s very important to understand what it means to be “cloud ready,” especially in a manufacturing context. First, a cloud-ready infrastructure must support easy and efficient integration of the infrastructure (IaaS), platform (PaaS) and application (SaaS) layers of a manufacturing technology stack. It must also facilitate integration with other systems—within and outside of the enterprise—and support interoperability standards such as service-oriented architectures. Second, cloud-ready infrastructure must offer a level of availability that is suitable for business-critical applications. Third, it must support Big Data applications—ingesting, storing, managing, and processing massive quantities of manufacturing and IIoT data. Next, it must be highly scalable—enabling fast and economical hardware upgrades, and adding capacity without adding cost and risk. Finally, cost is always a concern. The most common way to control costs is to use commodity hardware optimized specifically for a cloud-ready manufacturing technology stack.

Bhowmik: We’ve had a great deal of experience assessing and implementing cloud infrastructure solutions, of course, and we find that Oracle Exadata does the best job of satisfying these requirements. This is largely due to Oracle’s use of engineered systems: pre-integrated, fully optimized hardware-software pairings that incorporate the company’s expertise and experience building cloud-ready systems for the manufacturing industry. Oracle Exadata meets our scalability, security, availability, and cost requirements, and it performs exceptionally well in Big Data and IIoT environments. As a result, Oracle Exadata remains our first choice for building cloud-ready infrastructure solutions for our manufacturing clients.

About the Authors

Vinoth Balakrishnan, CPIM (supply chain certified), Six Sigma Black Belt (ASQ certified) and Total Productive Maintenance (Japan) certified, is an Oracle Manufacturing, Supply and Demand Planning Architect with 16+ years of experience in the manufacturing, supply chain and ERP domains across the U.S., Europe, and Asia. He leads the Oracle VCP/OTM practice at Cognizant.

Subhendu Datta Bhowmik, CSCP (supply chain certified), IoT (Internet of Things) and Machine Learning (Stanford) certified, is an Oracle Solution Architect with 20 years of Oracle experience in large program management, supply chain management, product development lifecycle, and digital transformation.
At Cognizant, he’s working on all Oracle Digital Transformation initiatives. 


Engineered Systems

Oracle Database Appliance: Simplicity and Performance Go Hand-in-Hand

Financial transactions are an essential part of life. For retail bank customers, paying monthly bills online helps avoid late fees. For business owners, rapidly processing customer payments keeps the cash flowing. For investors, buying or selling a perfectly priced security helps keep portfolio objectives on target. Given the importance of such matters, seamless service and access to real-time data are critical.

Indeed, when a lapse in data access occurs, the impact on a financial service company’s bottom line can be significant. A Ponemon Institute study estimated that the average cost of an unplanned data center outage in the financial services industry neared $1 million, encompassing:

Damaged or lost data
Reduced productivity
Detection and remediation costs
Legal and regulatory headaches
Tarnished reputation and brand

Downtime-related risks are significant for small and large financial service providers alike. Fortunately, building the infrastructure to help ensure high availability data access can be more budget-friendly than you think.

Customer connections multiplying

Fintech firms are leading the way in developing individual relationships with their customers, according to EY’s 2017 FinTech report. EY found that a third of digitally active consumers in 20 markets around the world use fintech services, and it projects that usage will exceed 50% in the coming years. Traditional financial services companies are now moving aggressively to catch up to and get ahead of these nimble industry disrupters. Interestingly, even as digital channels explode, EY’s 2017 Global Consumer Banking Survey found that between 60% and 100% of retail banking customers worldwide still visit local branches.

While delivery platforms vary from cutting edge to old school, the foundation of all financial services remains data: real-time insights into information such as balances, transactions, and rates, accessible at any time of day. Yet collecting, managing, and analyzing that data must be balanced with controlling costs and maintaining sustainable profit margins.

Keeping it simple

High-end, sophisticated database technology is great, but sometimes isn’t a fit from a cost or business perspective. For example, a large financial service company may operate a broad network of remote or branch offices with small business-type needs, while a smaller firm may contend with a tight budget and limited resources. Increasingly, however, financial service providers have found that the Oracle Database Appliance (ODA) offers a streamlined, cost-effective approach to data management. This purpose-built system is optimized for Oracle Database and Oracle applications, and it can be configured and deployed in 30 minutes or less. Engineered to grow with a firm’s database needs, it leverages standardized, proven configurations that don’t require specialists or a team of installers. Plus, the Oracle Database Appliance eases budget concerns, as clients license only the CPU cores they need (up to a robust 72).

Certainty in an uncertain world

Underlying the simplicity and cost effectiveness of the ODA is Oracle’s tradition of reliability and durability. Full redundancy and high availability rates allow data to be accessed 24/7 while protecting databases from both planned and unplanned downtime. Designed to eliminate single points of failure, the system also reduces the attack surface with a single-system patch feature.
For high-availability solutions, the Oracle Database Appliance may be paired with Oracle Real Application Clusters, Oracle Active Data Guard, and Oracle GoldenGate.

Built-in flexibility

The Oracle Database Appliance works seamlessly with the Oracle Exadata Database Machine to provide unlimited scalability as businesses grow. Better suited for large enterprises, the Exadata system is simply too powerful in some situations. For example, a small and growing financial services company may not need the full Exadata solution at this stage of the business—or have the internal resources to support it. Similarly, for a multinational bank that employs Exadata at a macro level, a new office or branch may have modest database needs as it builds a local footprint. The Oracle Database Appliance is ideal in both situations. Additionally, in the latter case, the branch-level installation will fully integrate with the Exadata system housed at any regional or international base. The two systems were designed to be complementary, with smooth data movement between connected databases and the cloud as well. Ultimately, Exadata has its place, but with the Oracle Database Appliance, you aren’t forced to take on the complexity and cost if it doesn’t fit.

Customer success story: Yuanta Securities keeps it real-time

Taiwan-based Yuanta Securities Company is an investment banking firm that provides assorted brokerage and other investment services across a 176-branch network. To realize the benefits of its merger with Polaris Securities, a popular transaction platform operator, Yuanta Securities needed to ensure seamless, real-time data synchronization between the two firms’ distinct transaction systems without disrupting the customer experience. In addition, it sought to consolidate six databases into a single platform, simplify system management, and rely upon a single support vendor.

To tackle these challenges, Yuanta Securities deployed three Oracle Database Appliance units—one for its production site, a second for its disaster recovery site, and the third for development and testing. While a single Oracle Database Appliance unit required just three hours for installation and configuration, the entire implementation, which included Oracle GoldenGate and Oracle Active Data Guard, was live within 45 days. The disruption to customer transactions was minimal as the company achieved near-real-time, back-end data synchronization with GoldenGate. Furthermore, Yuanta Securities slashed its hardware costs by 70% and saved on licensing costs thanks to Oracle Database Appliance’s flexible, capacity-on-demand licensing model.

Customer success story: Coopenae grows full-speed ahead

Costa Rica-based Coopenae is a credit union that serves 100,000 members through 27 locations nationwide. Founded in 1966, the cooperative offers a full array of financial services aimed at meeting the financial needs of its members and their families and communities. Coinciding with Coopenae’s 50th anniversary, management modernized the company’s systems environment to address existing challenges as well as prepare for future opportunities.
Key requirements of the upgrade included:

Accelerated batch processing times that didn’t affect other business-critical applications such as funds management
A highly efficient and scalable engineered system
A high-performing server-virtualization environment featuring a simplified, cost-effective, single-vendor support approach

Oracle Database Appliance fit the bill on all fronts, along with redundant databases, servers, storage, and networking. In turn, Coopenae reported that its database performance improved three-fold, financial statements and other reports were generated five times faster, and monthly closing processing time dropped from six hours to two hours.

A smart way to fulfill your database needs

As Yuanta Securities and Coopenae discovered, always-on, high-performing database technology doesn’t have to break the bank. Nor does it require debilitating deployment times or complicated support requirements. Instead, the Oracle Database Appliance offers a simple path to improved data performance and the adaptability to align with growing business needs.


Customers

SAP Reveals Much Longer-Term Commitment to Oracle

Today's guest post is from Mustafa Aktas, Head of Oracle on Oracle Solutions, EMEA Region, Engineered Systems for Oracle Applications, ISVs and SAP.

SAP has a much longer-term commitment to Oracle than many think, both on-premises and extending to the cloud. You may have heard that earlier this month, on April 4th, 2018, SAP certified Oracle Database Exadata Cloud Service (ExaCS) to run all the Business Suite stack applications, such as ERP/ECC, HR, SCM and CRM. This is yet another long-term commitment by SAP to support Oracle; we saw initial signals last year with the extension of support for SAP HR/HCM on Oracle (anyDB) from 2025 to 2030, and we expect similar commitments for all other core applications.

The certification follows the availability of Oracle Database 12c and key options for SAP customers who have chosen to run SAP on Oracle infrastructure, such as Real Application Clusters, Automatic Storage Management and Oracle Database In-Memory; what was already certified for on-premises deployments is now available on Exadata Cloud Service as well.

With this certification, similar to what has been available on on-premises Exadata for the last six years with hundreds of live SAP customers (and growing), SAP customers can now deploy the Oracle Databases of SAP Business Suite (whether for development, test/QA, pre-production, production or disaster recovery, on any platform) based on the SAP NetWeaver 7.x technology platform on Exadata Cloud Service. The same Exadata architecture, performance, capabilities and advantages (for example, the ability to build cloud assurance and hybrid deployments, to scale out and fail over between on-premises and cloud, and to deploy rapidly) are now available with this ExaCS certification for SAP, at an optimized cost based on the cloud’s pay-per-use licensing model. The Oracle Cloud Infrastructure portfolio for SAP customers is getting much more interesting with "Bring Your Own License" and other programs that directly benefit our customers.

REMINDER: Don’t forget to download the SAP on OCI Newsletter here to see a cloud with no I/O latency and a much better price/performance ratio compared to Azure and AWS.

CONCLUSION: The value of Oracle’s complete, open, secure, scalable and powerful cloud means a lot to SAP users, who can follow all the development updates, plans and available SAP on Oracle cloud certifications closely on SAP SCN as well.

About the Author

Mustafa Aktas is the Head of Oracle on Oracle Solutions for the EMEA Region, focusing on Engineered Systems for Oracle Applications, ISVs and SAP products. He leads a focused and specialized cross-LOB team, including co-prime sales, business development, presales, consulting and delivery functions, that helps customers build low-cost, optimized, high-performing cloud platforms to achieve substantially higher business benefits. His team's motto is to help clients lower operating cost and risk with high-performing, integrated solutions that also increase user productivity and free up IT resources to focus on the innovations that drive business growth. Feel free to reach him at mustafa.aktas@oracle.com with any questions.


Cloud Infrastructure Services

Your Future Is Calling: Surprise! There’s (Always) More Regulation on the Way

Unless you’ve been hiding out in a cave for the past couple of years, you know that the always-highly-regulated telecommunications industry is about to be hit with even more regulation. The latest salvo from regulators is Europe’s General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. GDPR will add new accountability obligations, stronger user rights, and greater restrictions on international data flows for any organization that stores user data, including telcos, financial services providers, and social networks. These regulations apply to data for individuals within the EU as well as the export of any personal data outside the EU—so they will affect all businesses that collect data from EU citizens. While increasing the compliance requirements, these new regulations also present tremendous opportunities for telecommunications companies to gain greater customer trust through improved data protection, and to expand and refine their service offerings.

Greater Challenge, Greater Opportunity

GDPR dramatically ups the ante both in terms of its data governance requirements and the cost of noncompliance, which could result in legal action or fines of up to €20 million or 4% of worldwide annual revenue, whichever is greater. No matter where they are headquartered, companies holding the confidential data of EU citizens will not only need to comply themselves, but also ensure that their vendors, including SaaS providers, meet the requirements as well. Yet, in a 2017 survey by Guidance Software, 24% of service providers predicted they would not meet the deadline. The survey also identified the top four actions companies must take to become GDPR compliant:

Develop policies and procedures to anonymize and de-identify personal data (25%).
Conduct a full audit of EU personal data manifestation (21%).
Use US cloud repositories that incorporate EU encryption standards (21%).
Evaluate all third-party partners that access personal data transfers (21%).

How to Prepare for GDPR: Build on PCI DSS

The good news is that telcos already have a blueprint for achieving GDPR compliance: the Payment Card Industry Data Security Standard (PCI DSS), the latest version of which has been in place since 2016. While PCI DSS deals with cardholder data (CHD) and GDPR’s focus is on personally identifiable information (PII), both are designed to improve customer data protection. What’s more, the segmentation and security measures required for PCI DSS can be deployed to help meet the less prescriptive GDPR requirements. Take the advice of Jeremy King, international director at the Payment Card Industry Security Standards Council (PCI SSC): “People come to me and say, ‘How do I achieve GDPR compliance?’… Start with PCI DSS.”

Clearly, communications service providers must gain greater control over their data to comply with regulations. But doing so also presents an opportunity to increase customer trust. Furthermore, the data consolidation required to achieve regulatory compliance opens avenues for better insights and analytics—which help telcos provide more revenue-generating services and offer a better customer experience. These goals will require a standardized, integrated infrastructure that provides data security, scalability, agility, resilience, and processing power. Engineered systems like Oracle Exadata, in which every component is co-engineered to optimize performance and security is designed into the platform’s DNA, can help meet the storage and processing needs of telcos handling sensitive personal or payment data.
The built-in ability to isolate storage and compute nodes that must adhere to varying degrees of confidentiality, integrity, and availability requirements meets the PCI DSS v3.2 requirements. This is just one of the reasons why Spanish telecommunications giant Telefónica chose Exadata to consolidate its mission-critical databases, boost database performance by 41x, and optimize operating expenses to ensure business continuity in the face of outages or, worse, potential cyberattacks and security breaches.

Cloud Computing: The Perfect Solution—Except When It Isn’t

Cloud computing offers a flexible, scalable pathway to meeting the growing compliance requirements that organizations face. The public cloud provides full-time, dedicated security monitoring and enhancement, and allows companies to bring new security measures online without costly IT intervention or retooling of legacy, on-premises infrastructure. But relying on the public cloud raises concerns for telcos, which must ensure data sovereignty and governance and may also worry about latency issues. One alternative that solves this dilemma is Cloud at Customer, which offers a full public cloud model delivered as a service, behind the enterprise firewall. This ensures that telcos can maintain their data security and regulatory compliance even as they take advantage of the benefits of cloud computing, such as automatic updates to meet the latest compliance standards, quickly and effortlessly.

AT&T Turns to Cloud at Customer to Enhance Capabilities and Compliance

As the world’s largest telco, AT&T operates a massive private cloud based on proprietary virtualization. But it needed a cloud-based solution to run its 2,000 largest mission-critical Oracle databases, and its private cloud couldn’t deliver the needed performance for the transaction-intensive databases. The company also needed a solution that would keep all the customer data on premises for regulatory, privacy, and security reasons. Cloud at Customer allowed AT&T to take advantage of the same infrastructure platform that Oracle uses in its own data center, but located in AT&T’s facility. Through Cloud at Customer, AT&T runs critical databases up to 100 TB in an Oracle-managed cloud that provides the same flexibility and scalability as the public cloud. This configuration also offers performance benefits, according to AT&T lead principal technical architect Claude Garalde: “For performance, you want the database to be really close to the application and middleware layers,” he notes. “You don’t necessarily want to be going out over a public internet link or even a VPN.”

If Compliance Seems Like a Burden, Try Data Exposure...

Regulation will remain a fact of life for telcos. Getting ahead of the regulatory game, then, is critical. The key is to recognize that regulation represents a growing demand from customers that companies keep their sensitive data safe. With that in mind, telcos need a flexible, scalable data storage and processing solution that ensures compliance while also supporting aggressive business goals. Engineered systems such as Oracle Exadata, combined with a deployment model such as Cloud at Customer, provide the crucial link between data security and data utilization to power transformational innovation.

Learn more about how Oracle Engineered Systems can help you maintain compliance with new regulations while supporting a customer-centric business model.


Cloud Infrastructure Services

4 Trends to Keep an Eye on in 2018

Today's guest blog comes from Andre Carpenter, Principal Sales Consultant at Oracle.

2018 is already shaping up to be an exciting year, and one that looks to be more about new and emerging technologies than about existing platforms and trends. The use of server and desktop virtualization continues to grow, but the focus has now turned to building and delivering microservices through the rapid spin-up of containers to serve the needs of the business. Just in the last three months I have seen more and more interest from customers in how they are tackling the hybrid and multi-cloud adoption challenge. This has led to discussions on how Oracle’s three deployment models give them choice, and how perhaaps the right model for many is a combination of the three. Here are my four technology predictions for the coming year:

1. Data becomes “Smart” Data

We are already seeing strong IoT adoption in the enterprise, driving more and more consumption of data storage and contributing to the phenomenon we know as Big Data (more on this later). But what are companies doing with this data from smart devices once it is ingested into their enterprise data management lifecycle? With considerations such as permissions, sharing, security, compliance, governance and the actual life span of the data, datasets are now being designed to become smarter in the sense that they actually drive these elements (perhaps by way of metadata) themselves. This allows further enhancements and possibilities around self-healing, provisioning, and security permissions, which will make the day-to-day life of the IT operations manager a great deal easier.

Take our self-managing, self-driving database as an example, with its promise to automate the management of the Oracle Database: eliminating the possibility of human error, patching itself, and even backing itself up without any human intervention. This frees up human resources to concentrate on other areas of the IT environment that haven’t quite reached an autonomous state, without worrying about the database driving itself in the background. What is scary is that the next evolution of this prediction will be the ability to learn from these events, making Smart Data even smarter.

2. Big(ger) Data

8K video files and higher frame rates are leading to hungrier storage demands. We have seen this in one of our customers' storage farms, not only from a consumption perspective but from a performance perspective too. The ability to store bigger data faster improves time to market for many and provides the competitive edge that customers are always seeking. Another source of this boom is the Internet of Things (IoT) and edge devices, each with their own consumption requirements. Smart cars, aviation and smart devices are all generating data at an unprecedented rate. This takes me back to the IDC Digital Universe study published in 2012, which aims to measure the "digital footprint" of this phenomenon and estimated that by 2020 we will have generated 40 ZB; the previous year's forecast for 2020 was 35 ZB by comparison.

3. Security is paramount even in the cloud

For many, security is both the reason to go cloud and the reason not to. It can be assumed on one level that public cloud providers host a far more secure and robust platform for cloud users adopting their services.
The main reason for this is that the technology underpinning these public cloud environments is a more modern and robust platform specifically designed for the purpose it serves: hosting multiple workloads in a secure and reliable manner. This contrasts with traditional on-premises environments, where stack technologies have formed and grown organically over the years without any real thought or intent to resemble a cloud hosting environment. The result is incompatible components, poor security measures right through the stack and aging hardware. This makes many IT leaders nervous about moving their workloads to the cloud, as they have created a monster that does not simply lift and shift, and certainly not without security risks.

A study by Barracuda Networks found that 74% of IT leaders said that security concerns were restricting their organizations' migration to the public cloud. This stance is supported by Jay Heiser, research vice president at Gartner, who states: "Security continues to be the most commonly cited reason for avoiding the use of public cloud". Whichever way you look at it, and whichever deployment model is in consideration, security has now taken a front-row seat on the list of things most Chief Digital Officers and Chief Information Officers have to tackle in 2018.

4. Multi-cloud model is the norm!

No surprise here with this one, but I still find a lot of cloud practitioners in the industry believing that customers standardize on just one public cloud provider, when in fact customers are embracing the choice paradigm and using multiple cloud vendors for multiple purposes. This is backed by research from RightScale, whose Cloud Computing Trends: 2018 State of the Cloud Survey found that 81 percent of the enterprises surveyed have a multi-cloud strategy. The market has now become a fierce battleground for these providers, and more scrutiny and demands from end customers are forcing the cloud players to provide more flexibility and simplicity to move workloads on AND off their cloud without any major financial headache or penalty, which was a major consideration initially.

In closing, we are witnessing a massive explosion right across the IT industry, where technology appears to be accelerating ahead of where many enterprises are in their IT roadmap, and the challenge to maintain security, utilise big data smartly, and leverage cloud choice to gain competitive edge remains critical on the CIO's and CDO's agenda. I can’t wait to see how the year pans out and whether these predictions transpire.

About the Guest Blogger

Andre Carpenter is a seasoned IT professional with over 12 years’ experience spanning presales, delivery, and strategic alliances across the APAC region for many large vendors. Prior to joining Oracle, Andre held a number of roles at HPE, including Principal Consulting Architect and Account Chief Technologist, helping customers drive their IT strategy and looking at how new and emerging storage technologies could impact their competitiveness and operations. He also evangelised HPE's converged infrastructure and storage portfolio through product marketing, blogging and speaking at industry conferences. Andre holds a Bachelor of Information degree as well as a Master of Management (Executive Management) from Massey University, New Zealand. You can follow Andre on Twitter: @andrecarpenter and LinkedIn: www.linkedin.com/in/andrecarpenter


Cloud Infrastructure Services

March Database IT Trends in Review

Check out the latest database happenings you may have missed in March...

In Case You Missed It...

Forbes - Larry Ellison: Oracle's Self-Driving Database 'Most Important Thing Company's Ever Done.' Ellison pulled no punches in framing his view of how truly disruptive the Autonomous Database will be: it requires no human labor, delivers huge cost savings, and is more secure because eliminating human labor eliminates human errors. Read the interview here.

GDPR is Just Around the Corner

What is GDPR? Everything You Need to Know. The EU General Data Protection Regulation (GDPR) comes into effect on May 25th, 2018. For those of you who are still coming to terms with what GDPR actually means for you, your team, and your company, we sat down with Alessandro Vallega, security and GDPR business development director for Oracle EMEA, to get answers to frequently asked questions about GDPR. Read the blog.

Time to See GDPR as an Opportunity, Not a Chore. We are very quickly moving to a world driven by all sorts of connected devices and new forms of artificial intelligence and machine learning. In order to succeed with these technologies, businesses will need the public to trust their approach to managing data. By acting now, companies can ensure their approach to data is compliant with the new GDPR rules and gain the confidence to continue delighting customers with better, more personalized services. Learn more.

Addressing GDPR Compliance Using Oracle Security Solutions. This white paper explores how you can leverage Oracle security solutions to help secure data at rest and in transit to databases, organize and control the identity and access of users and IT personnel, and manage every aspect of a complex IT infrastructure to get a leg up on addressing GDPR requirements today. Download the white paper.

How Are Companies Evolving IT to Keep Up with New Demands?

The Gaming Industry Bets on Cloud-Ready Solutions. The gaming industry has seen a remarkable transformation in the past few years; 57.6% of 2017 revenue at major Las Vegas Strip resorts came from non-gaming activities, with customers spending more and more on celebrity restaurants, high-end shopping, shows, and even theme-park-like rides. Ensuring that your IT environment is ready to take on the cloud should be a top priority, if not the #1 priority, for the casino and gaming industry.

MedImpact Optimized Pharmacy Management with Oracle Exadata and ZFS. Pharmacy benefit managers (PBMs) like MedImpact work with customers' health plans to help them get the medication they need. PBMs face a rapidly changing patient landscape that is demanding higher efficiency, rapid response, and improved health care outcomes. MedImpact was able to accelerate database performance by up to 1000% and pull reports and analytics in seconds versus hours with Oracle Engineered Systems. Watch their story here.

Don’t Miss Future Happenings: subscribe to the Oracle Cloud-Ready Infrastructure Blog today!


Cloud Infrastructure Services

What is GDPR? Everything You Need to Know.

The EU General Data Protection Regulation explained...

The EU General Data Protection Regulation (GDPR) comes into effect on May 25th, 2018. For those still getting to grips with what it means, we sat down with Alessandro Vallega, security and GDPR business development director for Oracle EMEA, to get answers to frequently asked questions about GDPR.

What is GDPR?

The EU General Data Protection Regulation (GDPR) will come into effect on 25 May 2018. It applies to all organizations inside the EU, and to any outside the EU that handle and process the data of EU residents. It is intended to strengthen data protection and give people greater control over how their personal information is used, stored and shared by the organizations that have access to it, from employers to companies whose products and services they buy or use. GDPR also requires organizations to have in place technical and organizational security controls designed to prevent data loss, information leaks, or other unauthorized use of data.

Why is GDPR being introduced?

The EU has had data protection laws in place for over 20 years. However, in that time, the amount of personal information in circulation has grown dramatically, and so have the channels through which personal information is collected, shared and handled. As the volume and potential value of data has increased, so has the risk of it falling into the wrong hands, or being used in ways the user hasn’t consented to. GDPR is intended to bring fresh rigor to the way organizations protect the data of EU citizens, while giving citizens greater control over how companies use their data.

What should organizations do to comply with GDPR?

GDPR does not come with a checklist of actions businesses must take, or specific measures or technologies they must have in place. It takes a ‘what’ not ‘how’ approach, setting out standards of data handling, security and use that organizations must be able to demonstrate compliance with. Given the operational and legal complexities involved, organizations may want to consult with their legal adviser to develop and implement a compliance plan. For example, while GDPR strictly speaking does not mandate any specific security controls, it does encourage businesses to consider practices such as data encryption, and more generally requires businesses to have in place appropriate controls regarding who can access data, and to be able to provide assurances that data is adequately protected. It also states businesses must be able to comply with requests from individuals to remove or amend data. But it is up to organizations how they meet these requirements, and ultimately it is up to them to determine the most appropriate level of security required for their data operations.

What are the penalties for not being compliant with GDPR?

If organizations are found to be in breach of GDPR, fines of up to 4% of global annual revenue or €20 million (whichever figure is higher) could potentially be imposed. For a business with, say, €1 billion in global annual revenue, the 4% measure would set the ceiling at €40 million. Furthermore, given how critical personal data is to a great many businesses, the damage to their reputation could be even more significant if the public believes an organization is unfit to control or process personal information.

Who needs to prepare for GDPR?
Any organization based inside or outside the EU that uses personal data from EU citizens, whether as the controller of that data (such as a bank or retailer with customer data) or as a third-party company handling data in the service of a data controller (such as a technology company hosting customer data in a datacentre), depending on their respective roles and control over the data they handle.

What personal information is covered by GDPR?

GDPR is designed to give people greater control over personal information. This may include direct or ‘real world’ identifiers such as name, address or employment details, but may also include indirect or less obvious identifiers, such as geolocation or IP address data, which could make a person identifiable.

Is GDPR bad for businesses?

Complying with any new regulation may bring additional work and expense, but GDPR also gives organizations an opportunity to improve the way they handle data and bring their processes up to speed for new digital ways of working. We are living in a data-driven economy. Organizations need to give consumers the confidence to share data and engage with more online services. Following the requirements of GDPR can help in that regard.

Who should be in charge of GDPR?

GDPR compliance must be a team effort. It is not something that can be achieved in, or by, one part of the organization. Ultimately, its importance is such that CEOs should be pushing their teams and appointed owners across the business to ensure compliance. Almost every part of a business uses and holds data, and it only takes one part of the business to be out of alignment for compliance efforts to fail.

How can Oracle help with GDPR compliance?

Oracle has always been a data company, and we take very seriously our role in helping organizations use their data in more effective, more secure ways. We have more than 40 years of experience in the design and development of secure database management, data protection, and security solutions. Oracle Cloud-Ready Infrastructure and Oracle Cloud solutions are used by leading businesses in over 175 countries, and we already work with customers in many heavily regulated industries. We can help customers better manage, secure and share their data with confidence.

For more information, see:
Helping Address GDPR Compliance Using Oracle Security Solutions
Is compliance being left to chance? How cloud and AI can turn a gamble into a sure thing


Cloud Infrastructure Services

Time to See GDPR as an Opportunity, Not a Chore

When many people think of data-driven businesses, the temptation may be to think of major consumer-facing websites, online retailers or social media companies. But the reality is that organizations of all sizes, across all sectors, are getting closer to their data in order to improve and personalize the customer experience or the way they work, or to transform whole industries or create new opportunities. The UK’s NHS Business Services Authority (NHSBSA) recently uncovered insights in its data that have helped it improve patient care and identify nearly £600 million in savings. In India, a new crop of financial institutions have reimagined credit checks for the country’s unbanked population, assessing people for small business loans based on an analysis of their social media data.

But while the rise of data-driven business models and organizations has made life better for many people, it has also raised concerns about how our data is collected, used and managed. This is the major motivation behind the EU’s General Data Protection Regulation (GDPR), which aims to raise the standard of data protection in modern businesses and provide consumers with greater transparency and control over how their personal details are used. New regulation can feel like a burden, but organizations should see GDPR as an opportunity to put in place processes and protections that give them the ability to make the most of their data, and give consumers the confidence to keep sharing their data with the organization.

To paraphrase TechUK’s Sue Daly, who joined a panel of data experts to discuss GDPR on the Oracle Business Podcast, we are moving to a world driven by connected devices, the Internet of Things, and new forms of artificial intelligence, and to succeed with these technologies businesses will need the public to trust their approach to managing data.

Transparency can also be a valuable differentiator. Telefónica, one of Spain’s largest telecoms operators, provides advertisers and content providers with anonymous audience insights so they can better tailor their content to individual users. In the interest of transparency, the company publishes the customer data it sends to third parties and gives people the option to opt out of sharing their personal details. Telefónica’s data-driven approach has taken it from strength to strength. Despite currency pressures and a difficult market, the company posted a 23% rise in profits at the end of February 2018. The exchange is mutually beneficial, as it allows the operator to curate the right content for its own customers and provide them with a better user experience. Telefónica has now captured 40% of Spain’s lucrative digital media and advertising market. By comparison, most telcos contribute to only roughly 2% of the advertising value chain.

This perfectly illustrates why businesses should not just wait for GDPR to arrive and do the minimum required in the name of compliance. With major changes come major opportunities, but only for organizations that are proactive and look beyond the short-term regulatory burden. Nina Monckton, Chief Insight Officer at the NHSBSA, who also joined the Oracle Business Podcast panel to discuss GDPR, had this to say: “The trick is to help people see how their data helps your business improve their quality of life. For example, when you explain that their anonymized details can help researchers find cures to serious illnesses the benefits become much more tangible.”
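To ground the anonymization idea Monckton describes, here is a minimal sketch in Python of one common building block: pseudonymizing a direct identifier with a keyed hash, so that records can still be linked for research without revealing who they belong to. The field names and the key handling are hypothetical illustrations, not a description of how the NHSBSA or any Oracle product works, and real GDPR-grade pseudonymization also requires key management, access controls, and an assessment of re-identification risk.

import hmac
import hashlib

# Hypothetical secret key; in practice this would live in a managed key
# store, never in source code, and rotating it deliberately breaks linkage.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    # Replace a direct identifier with a stable keyed hash (HMAC-SHA256).
    # The same input always yields the same token, so datasets can still be
    # joined for analysis, but without the key the token cannot be reversed
    # to the original identifier.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"patient_id": "NHS-1234567", "prescriptions": 14}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)   # the analyst sees a stable token, not the raw identifier

It is worth noting that pseudonymized data is still personal data under GDPR, which is why the surrounding controls matter as much as the transformation itself.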
By acting now, companies can ensure their approach to data is compliant and gain the confidence to continue delighting customers with better, more personalized services.


Cloud Infrastructure Services

The Gaming Industry Bets on Cloud-Ready Solutions

Today's guest post comes from Robert Garity, Senior Sales Director for Gaming at Oracle Hospitality.

The gaming industry has seen a remarkable transformation in the past 20 years. At Las Vegas’ major Strip resorts (those grossing more than $1 million in gaming revenue annually), 57.6% of 2017 revenue came from nongaming activities. Some resorts are seeing as much as 65% of revenue from non-gaming sources. Today, it’s fine dining (often at restaurants of celebrity chefs), extravagant hotel accommodations, luxury retail, pampering spas, dazzling shows, and pro-designed golf courses that are the big money-makers. As these nongaming revenue streams expand, they bring about new challenges for the players in the gaming industry. What are these challenges, and how can gaming enterprises meet them by adopting new technology models?

Managing Challenges In-House Can Be a Crapshoot

Traditionally, casinos managed IT systems in-house, but with the growth and direction the industry has taken in recent years, doing so has become increasingly complex. In-house IT teams simply can’t keep up with the increased volume and complexity of the operations. For example, large fluctuations in volume for occasions such as a Thanksgiving weekend, New Year’s Eve, or Christmas week can strain the capacity of the IT stack and slow systems down. Beyond the volume demands, staying current with system versions has become increasingly difficult and costly when managed in-house. That challenge becomes even greater when gaming enterprises can’t find and keep on-site experts—which is especially problematic for casinos in rural and remote locations. Managing the back-of-the-house infrastructure requires considerable expertise. If the enterprise has only one in-house expert and that person leaves the company or is otherwise unavailable, finding a replacement may be extremely difficult.

As if these challenges weren’t enough, gaming environments—including the food and beverage and lodging areas of the business—are a high-value target for payment thieves. Casinos and resorts with gaming hold massive amounts of credit card information, plus customer loyalty program information. For these reasons, they must keep payment and nonpayment customer information absolutely secure or face disastrous consequences. (Watch for a future post that addresses the data security issue for the entire hospitality market.)

Gaming operations struggle to manage a large data center full of servers and the technology experts that go with it. Traditionally, they’ve had separate instances of their systems at each location. Offloading to third-party data centers doesn’t solve the problems of the labor to manage the systems (database and application-specific expertise) or of security; it simply moves them to another location.

Why the Gaming Industry Is Betting on the Cloud

Gaming enterprises see the cloud as an avenue to gain control of the complexity of the modern gaming environment. The bottom line is that the cloud allows these enterprises to centralize infrastructure and have it managed by experts, so that their IT teams can focus on managing the operations that contribute directly to creating a world-class customer experience. How is this possible? Traditional on-premises infrastructure management requires experience with the database, operating system, and applications, and can be challenging for gaming operators to manage. Why?
Because the environment often comprises commodity hardware from multiple vendors, cobbled together by a select few who have the kind of knowledge needed to manage this complexity. It is especially difficult and complicated to upgrade those environments along with all the systems with which they need to interface: casino management, hotel operations, food and beverage management, catering, vendor management, customer loyalty programs, liquor dispensing, surveillance systems, and more. All of these systems need to work together seamlessly and require testing and attention during the upgrade process—which is extremely difficult to pull off with a complex on-premises installation and without outside expertise. If, on the other hand, gaming operators move to the cloud, those integrations are much more manageable, far easier to test, and not nearly as difficult to deploy. The cloud also allows casinos to roll system upgrade costs into the monthly fees for cloud service and have the third-party team of experts implement upgrades for them. Casino operators are also able to bring the POS and property management systems (PMS), as well as other systems, into the cloud to centralize their management—a huge advantage from a security perspective. The cloud resolves the problem of volume fluctuations as well. It allows casinos to increase capacity on an as-needed basis, and then drop back down when volume subsides, based on a monthly subscription fee.

Cloud and Cloud-Ready Solutions May Be the Winning Hand

With Oracle solutions in the cloud, there’s very little on-site hardware to manage and minimal database server-level product, removing the hardest part of systems management from the IT department. In the case of Oracle Hospitality OPERA Cloud Services (lodging) and Oracle Hospitality Simphony (POS), anyone—including food and beverage managers—can have the skill set to manage these systems on-site. The IT staff is freed to focus on the workstations, kitchen display systems, third-party integrations, training on applications, and all the other pieces of an operation that enable it to deliver a customer experience that exceeds expectations. Contrary to what used to be believed, the cloud offers a more secure environment than what can be provided on-premises. Both Nevada and most Native American jurisdictions have come to the realization that casinos cannot provide a sufficiently secure environment on-premises to host their databases and applications. The cloud provides the level of data security that is non-negotiable for these operations. Many casinos have taken the first steps toward the cloud by migrating applications such as email. Comfortable with this, they are now beginning to move their POS (food and beverage and retail) and lodging systems to the cloud. Realistically, it will be some time before they move casino management systems to the cloud; many jurisdictions have regulations around whether and how these systems and some other applications may be deployed in the cloud. What we’ve been doing at Oracle is helping gaming enterprises understand how the cloud benefits them and why they need to make the move. Timing has been the biggest concern. But we are making sure they understand that the cloud is the most secure place to have their data and the best way to have their systems managed to be fault-tolerant and maintain a high degree of uptime.

Engineered Systems Are a Stepping Stone to the Cloud

Large deployments may require a different approach. With some resorts managing multiple large properties in many different geographies, taking those systems to the cloud will take time. But addressing on-premises issues like IT infrastructure complexity, application reliability, database performance, and data security while still keeping an eye on the cloud is possible today. OPERA and Oracle Exadata Database Machine are a winning combination. A resort’s reservation system is its lifeblood; if it were to go down, guest annoyance would be the least of its worries. Entire vacations could be ruined. Fault-tolerant design enables Exadata to deliver 99.99999% reliability for mission-critical applications like OPERA, so guests will never notice if something goes wrong on the backend.
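To put an availability percentage like that in perspective, here is a quick back-of-the-envelope conversion from "nines" to permitted downtime per year. This is simple arithmetic for illustration, not a vendor specification:

```python
# Rough arithmetic: convert an availability percentage into
# the maximum downtime it allows per (365-day) year.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.8, 99.99, 99.99999):
    print(f"{pct}% availability -> {downtime_minutes(pct):.2f} min/year")

# 99.8%     -> ~1051 minutes (about 17.5 hours) per year
# 99.99%    -> ~52.6 minutes per year
# 99.99999% -> ~0.05 minutes (about 3 seconds) per year
```

The jump from two nines to seven nines is the difference between most of a working day offline each year and a few seconds.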
Exadata is able to deliver the performance and availability casinos and resorts need because it was co-engineered with the Oracle Database team, giving it a pretty unfair advantage. You can consolidate hodgepodge commodity systems onto a single Exadata system that was purpose-built to deliver optimal speed, performance, and security for the Oracle stack. Fewer systems to manage means fewer resources required, and a single-stack architecture means more streamlined support and management. Ensuring that your environment is ready to take on the cloud should be a top priority, if not the #1 priority, for the casino and gaming industry. Exadata is available in three consumption models: on-premises, in the cloud, or cloud at customer. With exact equivalents in the cloud, cloud migration is frictionless and happens on your terms. This flexibility allows customers to choose how and when they go to the cloud, because for the gaming industry it all comes down to one thing: the guest experience. And cloud and cloud-ready solutions are changing the game in the gaming industry. Learn more about how you can prepare your infrastructure for the cloud with Exadata and the entire Oracle Engineered Systems stack, systems purpose-built to maximize the performance of on-premises deployments of mission-critical applications like Oracle OPERA and Oracle Simphony.

About the Author

Bob leads a very successful team of sales executives at Oracle Hospitality (formerly MICROS Systems) in the gaming group, working with all of the world's leading casino resort operators. Bob's background includes extensive experience in technology and product sales, management, hospitality, and live entertainment and event production. He was awarded the MICROS Chairman's Award for excellence in enabling digital transformation for many major gaming and resort brands. He is originally from Sioux Falls (Brandon), South Dakota, and currently resides in Henderson, Nevada, with his wife Karmin. Connect with Bob on LinkedIn: https://www.linkedin.com/in/robertgarity.


Cloud Infrastructure Services

February Database IT Trends in Review

February 2018 newsworthy happenings in review: Oracle’s Cloud-Ready Infrastructure emerges as a key element for the database workload demands of modern manufacturing; @Wikibon publishes a new TCO model addressing operational costs for Oracle workloads; your guide to data backup and recovery, and avoiding the blame and shame game; Cloud at Customer momentum showcased at Oracle CloudWorld New York.

In Case You Missed It - Modernizing Manufacturing

Manufacturers turn to cloud-ready infrastructure to tame the unruly Internet of Things. Here’s a tasty snack, based on an early industry study: ’…94% face challenges collecting and analyzing their IoT data…41% say these data challenges top their list of IoT concerns…’ Full story here.

Is virtual reality in manufacturing at a tipping point? Turns out virtual reality and augmented reality (VR/AR) are actually becoming useful on the factory floor: '…More than one-third of all U.S. manufacturers either already use VR/AR or expect to do so by the end of 2018...' Read more.

How does today’s technology improve just-in-time retail manufacturing? Consumers today expect immediate accessibility to goods and near-same-day delivery times. This has increased the need for manufacturers to have real-time visibility into their operations. Read the full article. Key nugget: ‘…Worthington Industries, a North American steel processor and global diversified metals manufacturer (~$3.4B annual revenue), supports 80 manufacturing facilities and 10,000 employees…Worthington needed to streamline its JIT processes…’ Results? ‘…improved forecasting accuracy by 50% and created forecasts three times faster…avoiding shortages and excess inventory across the 11 countries in which it operates…’ Here’s a deeper look at their strategy.

How can manufacturers benefit while transitioning to the cloud? Alamar Foods is a master franchise operator for Domino’s Pizza, with more than 300 locations in the Middle East, North Africa, and Pakistan. How’s their transition going? ‘…Performance of business-critical applications, such as Oracle E-Business Suite, increased by up to 30%, and round-the-clock availability to more than 350 internal users bolstered productivity…’ Here’s how.

Insight to Impress Your Colleagues

David Floyer, CTO & Co-Founder of Wikibon, quantifies the evolution of IT infrastructure management and operational costs: ‘…from a Roll Your Own #RYO model to an integrated, full-stack system where Oracle Exadata Database Machine optimizations reduce operating costs by at least 30%...running Oracle on x86 costs 53% more than Exadata…’ Here’s the model.

New from Morgan Stanley Equity Research: IT Hardware Gets a Second Life—and a Double Upgrade. ‘…several catalysts are converging to give IT Hardware a second life—and drive double-digit earnings growth in 2018. For this reason, our team recently gave the IT Hardware group a double upgrade, shifting our view from cautious to attractive…' Learn more.

Keeping it Real – Data Protection, Avoiding the Blame and Shame Game

Seriously, what’s the point of backing up if you can’t recover? While data backup and recovery may not be the most glamorous job in an organization, have a failure when restoring critical data and you’re suddenly the center of attention—in the worst way. ‘…The optimal solution is designed with recovery in mind and has the recovery process tightly integrated with the database so that database transactions, and not just files, are being backed up…’ Read more.
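That "backups only count if they restore" point can be made concrete with a toy example. The sketch below is plain Python and purely illustrative; it stands in for the transaction-level validation a real database backup tool performs, by backing up a file and then proving the copy restores byte-for-byte via checksums:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to prove the restored copy matches the source."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, backup_dir: Path) -> bool:
    """Copy `source` into `backup_dir`, then verify the copy restores intact.

    A real database backup tool validates at the transaction/block level;
    this file-level checksum is only the simplest form of the same idea:
    never trust a backup you have not verified.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    copy = backup_dir / source.name
    shutil.copy2(source, copy)
    return sha256(source) == sha256(copy)

if __name__ == "__main__":
    src = Path("orders.db")                 # hypothetical data file
    src.write_bytes(b"txn-0001,txn-0002")   # stand-in content for the demo
    assert backup_and_verify(src, Path("backups/")), "restore check failed"
    print("backup verified")
```

The operational lesson scales up: a backup job that never exercises the restore path is the one that fails when you are the center of attention.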
Before You Go

It’s here! Oracle Database 18c is now available on Oracle Cloud and Oracle Engineered Systems. Oracle Database 18c, the latest generation of the world's most popular database, is now available on Oracle Exadata and Oracle Database Cloud. It's the first annual release in Oracle's new database software release model and a core component of Oracle's recently announced Autonomous Database Cloud. Click here for details on OTN.

Spotlight from the recent Oracle CloudWorld New York 2018: Oracle’s Cloud at Customer offering is gaining momentum across industries, including healthcare. In case you didn’t attend, here are the details from the ‘Cloud at Customer with Quest Diagnostics’ session (BRK1124). ‘…Quest Diagnostics is the world’s leading provider of diagnostic information services, with one of the world's largest databases of clinical lab results, with insights revealing new avenues to identify and treat disease, inspire healthy behaviors, and improve health care management…’ That’s so cool. Find details on their deployment of Cloud at Customer here.

Don’t Miss Future Happenings: subscribe here today!


Engineered Systems

Manufacturers Benefit While Transitioning to the Cloud

Manufacturers are always looking for ways to improve the performance, efficiency, and availability of their operations and business processes, so they can accelerate innovation, ramp up product development, and increase product quality without increasing IT overhead. To achieve these goals, manufacturers are moving to cloud technology. During the transition, companies are reaping the benefits of cloud-ready or cloud-based solutions alongside on-premises systems. But some manufacturers aren’t quite ready to make the move to the public cloud, and need on-premises infrastructure and applications that are built to be compatible with cloud-based solutions, so they can easily migrate to the cloud when the time is right. Other manufacturers are required by regulations to always keep certain data on-premises, so a combined cloud-based and on-premises solution that is engineered to work together seamlessly is their best approach.

Do It Yourself (DIY) Approach to Digital Transformation

Traditionally, industrial companies source solutions from multiple vendors and craft a DIY cloud infrastructure. However, this approach tends to be expensive, time-consuming, and difficult to upgrade, especially when third-party applications are upgraded and break the integration with the DIY solutions. DIY solutions are also vulnerable to staff turnover, which can put mission-critical systems at risk. If the engineers who built the DIY solutions leave the organization, there is a loss of institutional knowledge. Recruiting new engineers to work on a DIY system is hard because they won’t have any experience with that system.

Alternative to DIY: Choose the Right Partner

A faster and easier option is working with one partner that offers integrated cloud technology that works with your existing on-premises systems. For example, Oracle Engineered Systems provide a cloud-ready, scalable infrastructure that makes it quick and easy to migrate from your existing systems to the cloud. Oracle’s on-premises infrastructure is identical to the one used with Oracle Cloud solutions, so you get to enjoy benefits such as increased performance, efficiency, and uptime—along with advanced analytics and visibility—while you’re preparing to migrate. Also, because Engineered Systems are co-engineered with Oracle software, they speed up performance beyond what a generic infrastructure can. At the heart of an Engineered System for industrial companies is Oracle Database Appliance, a package of fully integrated hardware and software optimized for peak performance with Oracle Databases and other applications. Automation simplifies installation—the appliance can be up and running in minutes, allowing manufacturers to run databases on a single, centrally managed appliance from one vendor. A single-vendor model streamlines IT support and reduces the amount of time spent managing vendors and getting their solutions to work together, giving your IT team more time to create added value for your company. It can also potentially cut software-licensing costs and reduce the number of databases, thus decreasing support requirements. Let’s take a look at how two industrial companies that replaced legacy solutions with Oracle Engineered Systems benefited.

Food Company Gets Faster Results

Alamar Foods is a master franchise operator for Domino’s Pizza, with more than 300 locations in the Middle East, North Africa, and Pakistan. It also owns Premier Foods—a meat-processing factory—and several Dunkin Donuts franchise locations in Egypt.
Its aging system didn’t have the performance and availability the company needed. The internal IT staff installed Oracle Database Appliance and then had an Oracle partner migrate the data. Performance of business-critical applications, such as Oracle E-Business Suite, increased by up to 30%, and their round-the-clock availability to more than 350 internal users bolstered productivity. The time to finish the monthly close for the 200 stores in Saudi Arabia was reduced from 13 days to 9. The company also gained the ability to initiate new projects, such as data warehousing, human resources, and business intelligence initiatives, that could now be supported by its Oracle Engineered Systems. Infrastructure costs were reduced by using Oracle Database Appliance’s fully integrated software, servers, storage, and networking, so there was no need to buy separate hardware and software components and deal with incompatibility issues.

Company Nixes Downtime

Al Yusur Industrial Contracting Company (AYTB) offers services in the fields of construction and fabrication, operation and maintenance, industrial cleaning, shutdowns and turnarounds, and housing and catering. It is also a major provider of industrial, technical, and logistical support services for businesses in the oil and gas, chemical, petrochemical, power, desalination, and other sectors throughout Saudi Arabia and Qatar. With the company’s old system, it was experiencing excessive downtime. A regional power outage could cause up to 12 hours of system downtime and would require more resources for follow-up investigations to determine the extent of the damage. Performance of its database and applications was subpar. AYTB implemented Oracle Database Appliance and Oracle E-Business Suite with the assistance of two Oracle Partners. It also used Oracle Premier Support to expedite the project and provide ongoing support. Oracle Active Data Guard provided infrastructure stability and data security. As a result, AYTB's database and application performance improved by more than 90%. Maximum availability was provided to all users, with zero planned or unplanned downtime events since implementation.

Manufacturers Benefit While Transitioning to the Cloud

Moving to the cloud is not an all-or-nothing choice. If you’re not ready to move, or can’t move everything due to regulations, combining cloud-ready technology with existing systems offers a good way to transition on your own timeline. Oracle Engineered Systems along with Oracle Database Appliance make it easy and fast to switch to the cloud when you’re ready. Until then, you gain the benefits of cloud technology, including increased speed, efficiency, and uptime for your operations and business processes. Interested in learning more? Visit our website for information on Oracle Database Appliance and Oracle Engineered Systems.


Customers

4 Steps to Enhance Financial Data Security in Your Organization

Did you know... Financial Services Organizations Are Now the #1 Target for Cyberattackers?

The WannaCry ransomware attack that broke out on May 12 hit hundreds of thousands of Windows XP computers and tens of thousands of organizations spanning more than 150 countries. It provided a wake-up call about the vulnerability of organizations and the potential worldwide scope of cyberattacks. Beyond the regulatory and reputation nightmare, the global cost of cybercrime is staggering: it is reported that it will reach $2 trillion within two years. In fact, because of the compliance and regulatory requirements of both the financial services and healthcare industries, the cost per breach is expected to be higher than for almost any other industry group. Even more frightening is the fact that financial services is a top target for cyberattackers. The recent Verizon security report shows that almost one-quarter of data breaches affect financial organizations, with 88% of these occurring through web application attacks and denial-of-service attacks, as well as payment card skimmers.

You Need to Build Resilience Into Your Infrastructure

Recently, Accenture worked jointly with Oracle to provide a roadmap for strengthening business resilience and ensuring business continuity in the face of these ever-greater threats. A key point is that you can mitigate data security risk better by building in security that prevents data breaches in the first place, rather than reacting to an event. Even if you can repel an attack, system performance will degrade while under attack—slowing operations and reducing staff productivity. Network security alone simply won’t do the job. You need to build in security throughout your infrastructure, right to the core. Are you wondering if your financial services business adequately protects sensitive customer data? Here are some questions you need to ask:

Do the IT policies adhere to industry standards with regard to database security?
What measures are in place to protect from unauthorized access or misuse by privileged users?
What measures are in place to protect from data corruption, and from unrecoverable and intentional damage to data?

To ensure Oracle database security, Accenture takes a full-lifecycle approach based on 4 pillars. Let’s look at each briefly now.

Data Security Pillar #1: Discovery

In the discovery phase of the process, you begin by getting an assessment of where your systems are today. This includes an audit of your database architecture and past events; analysis and confirmation of vulnerabilities in critical areas like user access, application security, and patch validations; and then a summary of the findings with recommendations.

Data Security Pillar #2: Engineering a Solution

The next step is to engineer a solution based on your specific business requirements. This should include a security and compliance model; an intrusion detection system; integration of third-party applications; and Oracle Advanced Security solutions that include encryption, masking and redaction, and compliant identity management solutions. Because Oracle Engineered Systems are completely integrated and optimized through every layer of the stack, the data security is already built in.

Data Security Pillar #3: Implementation

Once the solution is developed, it’s time to implement it. But implementing a new solution is not only tedious; it can introduce its own security risks as well.
Sourcing individual components of a new database solution and working with the networking and storage teams to install, configure, and patch can be overwhelmingly complex and time-consuming. Not to mention that taking systems offline for a long period could leave you unnecessarily exposed. Oracle Exadata, Oracle's flagship Engineered System, is an all-in-one database platform. Because servers, networking, and storage are all pre-configured, pre-tuned, and ready to deploy, you can deploy in a matter of days versus weeks or even months. Because of the massive consolidation ratio, applying a pre-tested quarterly patch to a few Exadatas is faster and much easier than having to dedicate resources to patch several disparate machines and ensure compatibility after each update. You're also reducing your attack surface. A smooth implementation is a great indicator of how your security measures will continue to go in the future.

Data Security Pillar #4: Education

Continued training, workshops, and educational materials can help ensure data security doesn’t stop once systems and processes are implemented. Building resilience into your organization extends much further than hardware. Teaching employees, new and old, how to protect their passwords, avoid phishing scams, and develop good workplace habits, such as locking their computers when they step away, is an important measure in ensuring data security across the entire organization.

How a Major Bank Realized Better Data Security and Performance With Engineered Systems

Oracle Engineered Systems are co-engineered with the Oracle Database team to deliver unique security enhancements and stronger end-to-end security for the entire stack. For Chinae Savings Bank of South Korea, security like that is paramount. With a network of 14 bank branches plus internet and mobile banking services, Chinae needed to strengthen security for customers’ personal information, such as bank account details and home addresses, and prevent malicious attacks and data breaches to ensure compliance with stringent Korean Personal Information Protection Act requirements. By combining Oracle Advanced Security and Oracle Exadata Database Machine, Chinae experienced the following results:

Minimizing exposure of sensitive customer information during online transactions and keeping unauthorized users from accessing sensitive information improved data security.
Data encryption and redaction capabilities ensured the bank’s compliance with South Korea’s regulatory requirements.
Data redaction performed directly in the database increased security without affecting system response time or CPU utilization.
The "smart" Exadata features allowed credit-related transactions to be processed 3x faster than before, at 660 transactions per second.
Exadata's "out-of-the-box" pre-tested, pre-configured platform allowed the new retail banking platform to be deployed in just 5 months, accelerating data transfer between Chinae Savings’ core banking and information system and the external system for the Korea Federation of Savings Banks from 20 hours to just four hours—a 5x improvement.

Chinae was able not only to improve data security but also to improve the performance of its risk analysis and credit management, by enabling bank employees to rapidly access customer credit data, such as loan amount and credit rating, and by ensuring timely updates to the account and customer information management systems. (To get a feel for what redaction means in practice, see the minimal sketch below.)
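As promised above, here is a minimal, purely illustrative sketch of the idea behind data redaction: sensitive values are masked on the way out of the data layer so that applications and users see only what they need. This is generic Python, not Oracle's implementation; Oracle Data Redaction applies such policies inside the database itself, and the column names and masking rules here are invented:

```python
import re

def redact_pan(card_number: str) -> str:
    """Mask a payment card number, exposing only the last four digits."""
    digits = re.sub(r"\D", "", card_number)   # strip spaces and dashes
    return "*" * (len(digits) - 4) + digits[-4:]

def redact_row(row: dict) -> dict:
    """Apply column-level redaction before a row leaves the data layer."""
    sensitive = {"card_number": redact_pan,
                 "home_address": lambda _: "[REDACTED]"}
    return {col: sensitive.get(col, lambda v: v)(val)
            for col, val in row.items()}

row = {"name": "J. Kim",
       "card_number": "4111 1111 1111 1234",
       "home_address": "12 Example-ro, Seoul"}
print(redact_row(row))
# {'name': 'J. Kim', 'card_number': '************1234', 'home_address': '[REDACTED]'}
```

The appeal of doing this in the database rather than in every application is that the policy is enforced once, uniformly, no matter which client asks for the data.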
You don’t have to sacrifice performance for data security. By engineering security into your infrastructure from the start, you can get the best of both worlds and avoid becoming a statistic on the growing data-breach-shaming list. Learn more about the 4 pillars of data security in the report published by Accenture and Oracle, "Digital Trust: Securing Data at Its Core," and how Oracle Engineered Systems can help you enhance your financial data security.


Cloud Infrastructure Services

Selling the Future: You’re Not Selling Goods. You’re Selling an Experience

As we saw in part 2 of our "Future of Selling" series, today’s consumers are often digital natives who don’t shop the way their parents did. Expecting an app to do literally anything and everything via a smartphone or tablet, modern shoppers no longer view a trip to the store as a necessity before making a purchase. Some may even see it as a burden: time and energy wasted going to a store only to discover that they could have bought the item cheaper online and had it conveniently shipped wherever they wanted. Therein lies the dilemma for retailers: if a trip to the store is no longer taken for granted, how do retailers get shoppers into the store? Brick-and-mortar retailers are caught on the wrong side of the digital shift in retail, with many stuck in a dangerous cycle of falling foot traffic, declining comparable-store sales, and increasing store closures. More than 8,600 retail stores could close this year in the U.S.—more than the previous two years combined, brokerage firm Credit Suisse reports. Tom Goodwin wisely observed that retailers must decide to make shopping practical or an experience, not both: “Retail is becoming a world of extremes. Brands either need to remove complexity and make the process as simple as possible, or add it in to create a ‘delightful’ experience.” To appeal to new shoppers, the answer is to deliver more than just good, fast service. While material goods may not be as important to millennials and other digital natives, an experience is extremely important.

It’s Not Either/Or but, Rather, Both/And

Contrary to popular belief, millennials actually value the brick-and-mortar store experience more than any other demographic. According to a recent GeoMarketing article, millennials aren’t giving up on stores; they just want an enhanced shopping experience. ROTH Capital Partners’ 2017–2018 Millennial Survey also found that:

43% research online before buying at a physical store.
71% say the right in-store experience would increase visits and purchases.

Rarely without a smartphone, millennials want the option to shop both online and in-store (and sometimes online while in the store). They value being immersed in the brand experience, whereas Gen-Xers and prior generations tend to view shopping more as simple transactions. Millennials are building their identity through the modern shopping experience.

Personalizing the In-Store Experience

If retailers can no longer simply stock the shelves or racks with a variety of products and expect to see sales, how can they create the kind of experience that today’s shoppers want? Personalization is key. And “shopping cart analysis” can provide the components to gain key consumer insights and turn them into a personalized customer experience. Based on a fully unified and integrated infrastructure, Oracle offers a complete, cloud-ready solution that gives retailers one centralized platform to analyze buyer behavior, turn those insights into personalized in-store and online experiences, and ensure that the supply chain delivers purchases as fast as the buyer wants. (For a feel for what shopping cart analysis involves, see the sketch below.)
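Here is a minimal, hedged illustration of the co-occurrence counting at the heart of shopping cart analysis. The baskets and items are invented, and production systems use far richer models (association rules, collaborative filtering), but the core signal, which products sell together, starts exactly like this:

```python
from collections import Counter
from itertools import combinations

# Invented example baskets; in practice these come from POS transaction logs.
baskets = [
    {"coffee", "muffin"},
    {"coffee", "muffin", "juice"},
    {"coffee", "juice"},
    {"muffin", "juice"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common(3):
    support = n / len(baskets)   # share of baskets containing both items
    print(f"{a} + {b}: together in {n} baskets (support {support:.0%})")

# A high-support pair is a candidate for bundling, shelf placement,
# or a targeted offer to shoppers who bought only one of the two.
```

From counts like these, retailers derive the personalized offers and recommendations discussed above.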
Creating a Digital Experience for In-Store Guests

7-Eleven’s entire business model is in-store, so it is a great example of a retailer that has taken bold steps to respond to the demands of a new consumer generation—connecting with its customers through a holistic digital guest experience the minute they walk through the door. Leveraging Oracle Engineered Systems, Oracle Exadata, Oracle Exalogic, and Oracle Enterprise Manager, the convenience store giant launched a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7‑Eleven connects with tens of millions of customers through point-of-sale terminals, websites, and mobile apps to promote customer loyalty, distribute targeted promotions, customize digital coupons, and accept digital payments to deliver the most rewarding customer experience possible. The 7-Eleven app helps the company get to know every single user digitally. 7-Eleven can see what each customer is buying, how often he or she buys it, how offers (like a “buy six, get the seventh drink free” offer) affect buying behavior, and which offers individual store guests prefer. And customers responded well. One year after the launch, app scans more than doubled, and customers’ baskets increased almost 25% on average. The company also discovered that when customers redeemed their free drinks, they spent 30% more on average than they had before the rewards program. What about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE IT Program Manager, reported at Oracle OpenWorld, "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases." Watch what Steve Holland, Chief Technology and Digital Officer for 7-Eleven, has to say about building the powerful infrastructure required to keep an application that serves nearly three billion customers annually up and running 24/7.

What Will the Future Retail Customer Experience Look Like?

No matter how convenient online shopping is, it can’t yet replace the high-touch in-store experience. Layout, stocking, and even the temperature inside the store are important. Beyond these factors, augmented reality and virtual reality can help shoppers literally see what’s possible. RFID research can also be used to personalize the customer experience. RFID tags on individual items help retailers get product stocking right, understand what drives sales, make the fitting room a place for engagement, and handle shipping. Retail industry expert Michael Forhez, Global Managing Director, Consumer Markets Industry Solutions Group at Oracle, gave us a glimpse into the future. According to Michael, the future experience will engage every customer who comes into the store. Confectioner Lolli and Pops has already taken a step toward the future and uses “smart store analytics” to anticipate staffing levels and decide where to place staff within the store. “Further in the future (but not too far away),” says Michael, “consumers may stop at a store to pick up groceries, but go into the store when it comes time to plan a party. Once inside, the future consumer will expect customized service—perhaps meeting with the butcher to select a special cut of meat, then talking to the wine steward about what wines to pair with the meal, and connecting with a decorating professional to get the party ambiance exactly right.” Technology is at the heart of creating a personalized experience. Artificial intelligence and cognitive computing can be used in the online experience to drive traffic to the store. Predictive analytics can help forecast buying behavior and create custom product recommendations.
Timing of offers can also coincide with previous consumer patterns. Offers can even pop up on smartphones while customers are in the store—a kind of 21st-century “Bluelight Special.” For more from Michael, watch his appearance on the special pre-Oracle OpenWorld edition of "Exadata Your Way," where he explored some of the major changes happening in both the retail and financial services industries.

An Offer Customers Can’t Refuse

With 1,807 stores and 38 distribution centers in the U.S., Target realized that it could offer a blended experience for customers, offering competitive same- and next-day delivery services and in-store pickup. Customers see an opportunity to save on wait time and shipping costs, and Target is able to keep stores relevant by turning them into fulfillment centers that pull customers back into stores. A short three months after deploying Exadata, Target was able to build and push “pick up at store” and “ship from store” options to more than a thousand stores just in time for the holiday shopping season. The new infrastructure enables Target to serve modern customers better, ensuring that customers receive their Target.com orders faster and more reliably than ever before. “Now that we are delivering with greater speed and great flexibility, it is changing expectations,” says Tom Kadlec, Senior VP of Infrastructure and Operations at Target.

The Future of Retail Is Now

Are your stores and your IT infrastructure up for the challenge of handling millennial consumer demands? You don’t have to wait until the future to respond to the new world of retail. The fact is, you can’t afford to wait. Luckily, there’s no need to start from scratch—you can build on what you already have to start offering a more personalized customer experience that will transform your business. Oracle Engineered Systems are pre-built, pre-configured, pre-tested database platforms co-engineered with the Oracle Database and application teams at the source-code level to provide a highly unified experience. With on-premises options that have exact equivalents in the cloud, you can build based on your own architecture specifications. The cloud provides the capability to centralize all your data from different legacy systems in different data centers, on-premises or on other cloud sites. Macy’s chose Oracle Engineered Systems and Exadata Cloud Service for this reason. Rather than having to build its IT infrastructure from individual components, the multi-chain retailer was able to quickly deploy a completely integrated solution that it can run on-premises or in the cloud with the exact same experience. Macy’s is just one more example of a retailer that’s not waiting for the future to arrive but running out to meet it. Don’t be left in their dust. Oracle Engineered Systems come cloud-ready, with three consumption models available: on-premises, cloud, and cloud at customer, a revolutionary model that delivers the public cloud behind your firewall, fully managed by Oracle on a subscription basis. You can maintain control and comply with data sovereignty laws by keeping your databases on-premises while leveraging cloud services for burst computing, such as during a busy holiday shopping season. Once the holiday shopping tapers off, you can reduce your capacity—all without capital expenditures, because you purchase capacity on an as-needed basis. It’s an integrated solution that you deploy on your terms. (A minimal sketch of that burst-capacity logic follows below.) Learn more about Oracle Engineered Systems.
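As flagged above, here is a minimal, purely illustrative sketch of burst-capacity logic: scale database capacity up when demand approaches what is provisioned, and release it once the holiday rush subsides. The thresholds and units are invented; real cloud services expose this through their own management consoles and APIs:

```python
def plan_capacity(demand: float, provisioned: float,
                  headroom: float = 0.2, floor: float = 10.0) -> float:
    """Return the capacity to provision for the next period.

    Scale up when demand eats into the headroom buffer; scale back down
    (never below `floor`) once the peak has passed. Units are arbitrary.
    """
    if demand > provisioned * (1 - headroom):
        return demand * (1 + headroom)               # burst up ahead of the peak
    if demand < provisioned * 0.5:
        return max(floor, demand * (1 + headroom))   # release idle capacity
    return provisioned

capacity = 10.0
for month, demand in [("Oct", 8), ("Nov", 14), ("Dec", 18), ("Jan", 6)]:
    capacity = plan_capacity(demand, capacity)
    print(f"{month}: demand={demand:>2} -> provision {capacity:.1f}")
# Capacity rises through November and December, then falls back to the
# baseline in January, which is the whole point of paying as you go.
```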


Cloud Infrastructure Services

GDPR Compliance and the Cloud – Help or Hindrance?

Today's guest post comes from Paul Flannery, Oracle's Senior Director, Business Development, Systems in the Europe, Middle East, and Africa region.

Organizations are currently faced with the question of how to approach the General Data Protection Regulation (GDPR), the new legislation coming into force in May 2018 that sets out to harmonize data protection across the European Union. Rather than being seen as a compliance burden by Europe-based organizations and global entities that do business in the EU, GDPR should be seen as one of the best opportunities to deploy long-term technology investment and unlock true digital transformation. While the regulation itself is limited to the processing of personal data, the EU’s interpretation of what that actually constitutes is broad. Essentially, any data that relates to an identifiable living human, including something as disconnected as an IP address that can identify a specific user’s device, is regarded as within scope. The extended scope of the legislation doesn’t end there. For example, organizations are obliged to take into account the “state of the art” in cybersecurity, yet specific technologies, controls, or processes beyond that phrase remain unmentioned, leaving a high degree of risk assessment and subsequent judgement to be applied by the organization itself. The timescale for addressing compliance is tight too, and any organization of sizable scale will find it difficult even to understand what data it holds in the first place and to assess its sensitivity. The cost of non-compliance is what has brought GDPR to the attention of boardrooms not just in the EU but globally. The potential magnitude of fines is significant (4% of an organization’s global revenue or €20 million, whichever is greater), as is the potential reputational damage that may result from non-compliance with the new mandatory breach notification requirements.

The inevitability of cloud computing

The cloud, whether it’s public or private, Software-, Infrastructure-, or Platform-as-a-Service, can mean different things to different people, and the overall understanding across the majority of industries is somewhat immature, specifically with regard to compliance and security. Yet the journey to the cloud is happening regardless, and without proper security in place, that inevitable shift will arrive in the form of shadow IT, bringing with it unnecessary risk exposure. Generally speaking, there are substantial benefits in moving to the cloud, such as enhanced security capabilities that go beyond what would be affordable for most organizations in an on-premises environment. However, any move to the cloud needs to be carefully planned and properly architected, as with the new legislation approaching, the consequences of getting it wrong are significantly increasing. GDPR compliance is a long-term commitment, and investment in implementing a cost-effective supporting infrastructure will prove valuable in the years ahead. It might even represent one of the biggest opportunities to accelerate digital transformation in recent years. It places focus on good data management, with benefits to organizations ranging from increased security and operational efficiency to improved customer service and corporate reputation. For example, one of the key legislative requirements is to be able to provide any individual with every piece of data an organization holds on them, including all data records and any activity logs that may be stored. On the one hand, this places significant technology requirements on organizations that would only be met through the simplification and standardization of complex IT environments. Yet on the other, the potential of converged data of that quality from a business or marketing perspective is substantial, and brings with it a wealth of possibilities. (The minimal sketch below illustrates what serving such a request implies for a fragmented IT estate.)
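To see why that requirement pushes organizations toward consolidated, well-indexed data stores, consider this minimal, hypothetical sketch of serving a subject access request across several systems. The store names and record shapes are invented; the point is that every system holding personal data must be enumerable and queryable by data subject:

```python
from typing import Dict, List

# Hypothetical per-system record stores, keyed by data subject ID.
# In a real IT estate, each of these is a separate database or log system.
SYSTEMS: Dict[str, Dict[str, List[dict]]] = {
    "crm":        {"u42": [{"email": "jane@example.com"}]},
    "orders":     {"u42": [{"order": "A-1001", "total": 59.0}]},
    "access_log": {"u42": [{"ip": "203.0.113.7", "ts": "2018-02-01T09:30Z"}]},
}

def subject_access_request(subject_id: str) -> Dict[str, List[dict]]:
    """Gather every record held on one person, across every known system.

    GDPR makes this exhaustive: a system that is missing from this registry
    is a compliance gap, which is why fragmented, undocumented IT estates
    make the regulation so hard to satisfy.
    """
    return {name: store.get(subject_id, []) for name, store in SYSTEMS.items()}

print(subject_access_request("u42"))
```

The simpler and more standardized the environment, the shorter (and more trustworthy) that registry of systems becomes.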
Earlier this year, IDC gathered CIOs and CISOs from enterprises across EMEA to gain insight into how they are approaching GDPR in light of current cloud adoption and security requirements. Their resulting report, ‘Does Cloud Help or Hinder GDPR Compliance?’, summarizes discussions from events in France, Italy, Morocco, Spain, South Africa, Sweden, and Switzerland. It not only flags the many potential benefits of compliance but also sets out IDC’s simple but effective technology framework to help organizations focus on the particular requirements of GDPR and select the right technology for the job. The full report is available to download here.

About the Guest Blogger

Paul Flannery is the Senior Director, Business Development, Systems for the EMEA region at Oracle. With more than 30 years of experience in the IT industry as a software developer, IT manager, and pre-sales consultant, Paul brings a 360-degree view of the IT market, having held several sales leadership and general management roles, with over 25 years of experience in global account leadership, partners and alliances, and business development, working with large global corporate customers across a broad range of industries. Paul is well known for his strong strategic thinking, coupled with an execution focus that helps customers and partners deliver quantifiable business value to their key stakeholders. Find Paul on LinkedIn at https://www.linkedin.com/in/paul-flannery-1849262/


Cloud Infrastructure Services

Selling the Future: The Last Days of Retail or the Best Days of Retail?

We’ve been hearing the term “retail apocalypse” ad nauseam. Is it really the last days of traditional retail? Or do the best days of retail still lie ahead? In the end, each retailer can determine its own fate: inevitable decline or enviable success. If the holiday shopping season is any indicator, there are some worrisome trends for traditional retailers that are not taking their futures into their own hands. Consumers continue to shift from brick-and-mortar (down about 1.6% over Thanksgiving Day and Black Friday) to online and mobile apps. In fact, Thanksgiving and Black Friday online sales rose 17.9% this year over last year. Purchases via smartphones were up an astounding 50% this Thanksgiving Day versus last year. While brick-and-mortar stores are holding their own this holiday season, they’re doing it with about 3,800 fewer stores. And the future doesn’t favor the traditional retail model.

What happened to traditional retail?

The industry has always adjusted to cycles in consumer behavior to maintain and grow brand loyalty: new generational nuances, shifts in the distribution of household income, shifts in population and urban footprints, multi-channel support for commerce and mobile devices, and an addiction to discounting and convenience. Fearless industry disruptors have used technology to turn their brands and entire industries upside down, taking incumbents to the brink of extinction in some cases - disruptors like Uber in ride-sharing, Airbnb in lodging, and Zara in retail. And it all happened in little more than two decades. In retail, recently changing consumer behavior - especially millennial shopping habits - along with the rise of online and mobile app shopping, has given consumers more control. In many cases, they don't need to visit stores. They can experience their brand and products digitally, in new ways. Other companies, like clothing retailers, are using emerging digital technologies to maximize their employee base and design processes to stay ahead of consumer trends. One great example is lululemon athletica's creative design and planning approach, which allows its global merchants to share design images with regional buyers and planners to bring new trends to market well ahead of each season. The workflow is supported by lululemon athletica's creative employees. These employees prefer role-based, graphical interfaces and rich visual design experiences, because it keeps them in tune with their customers' buying preferences. Times are changing. It seems apparent that many retailers were a little too complacent and thought that shoppers would continue to come into stores. And it left them flat-footed. A recent Bloomberg article notes that while retailers announced ~3,000 store openings in the first three quarters of 2017, they also reported that ~6,700 stores would close, including about 550 department stores. The article goes on to say that more retail chains have filed for bankruptcy and been rated distressed than during the financial crisis in 2008. (From Matt Townsend, Jenny Surane, Emma Orr, and Christopher Cannon, in “America’s ‘Retail Apocalypse’ Is Really Just Beginning,” Bloomberg.com, November 8, 2017.) But it’s not too late for retailers to adopt technology that offers them the chance to re-emerge in the marketplace—and win. Retail power continues to shift away from them at a much faster pace and into the hands of the consumer. Retailers are responding in a big way, focusing on the art of convenience.
Buyers shop online and get deliveries where they want them, when they want them. The supply chain doesn’t stop at the last mile anymore. Now retailers must consider “the mile after the last mile.” Leading retailers are deploying tools that take all the consumer data that’s available and extract knowledge about purchasing patterns, buying propensity, and more—tools like data analytics, predictive analytics, artificial intelligence (AI), and machine learning (ML). (A toy sketch of a buying-propensity score appears at the end of this section.) Armed with this intelligence, they are staying ahead of customers’ insatiable appetite to interact with their brands digitally, and delivering new services that anticipate their needs in real time. Companies like Target in the US are riding this innovation curve - making purchases made online available for curbside or in-store pickup - and driving loyalty with real-time buyer gratification through mobile couponing. Even this won’t be enough. On top of disruption to the retail model, the pace of change still threatens to leave traditional retailers in the dust - those retailers who make life more convenient for people get this. 7-Eleven is another example of a company that started down the digital path with Oracle early on - using Oracle Engineered Systems' flexible on-premises and cloud-based platforms as a cornerstone of its digital strategy. It appears that the industry remakes itself almost overnight, before some retailers can respond.

Where does it leave traditional retailers today?

In the traditional model, retailers relied on efficient processes and systems for success. They tinkered on the margins rather than implementing wholesale change. The old way won’t work anymore. We see this in the bankruptcy filings and disappearance of retailers that were household names only a few years ago. Just this year, some big names that filed for bankruptcy included The Limited, Hhgregg, Payless ShoeSource, Gymboree, and Toys R Us. Those that have survived still struggle with change. But the savvy retailers are taking a cue from the disruptors and adapting to the new reality.

What lies ahead for retailers? The world belongs to the brave and the bold

The future is about the customer experience (CX)—and that means harnessing the intelligence held in all the data collected in day-to-day operations. The power of information gathering, advanced analytics, and data visualization, coupled with digital marketing and real-time delivery capabilities, has spawned an entirely new breed of fearless competitors who’ve changed the rules of the game. This is not a world for the timid. Retailers must be willing to take risks and look at completely new business models. The smart retailers know this, and they know they can’t do it themselves. In the race to market, they can’t afford to spend months, or even years, developing strategies and building the IT infrastructure to support those strategies piecemeal. To survive, they must partner with companies with the expertise and experience to build a radically new strategy—partners who can help them design the future. In terms of IT infrastructure—which is absolutely critical to this radical operational overhaul—retailers must partner with vendors who have already built integrated IT solutions that can be deployed in weeks and provide performance that simply can’t be achieved with DIY solutions. This infrastructure must be cloud-ready to consolidate data and scale as needed. The retail graveyard is full of retailers who weren’t willing to take risks and move boldly.
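As flagged earlier in this section, here is a minimal, invented sketch of a buying-propensity score of the kind those analytics tools produce. Real retailers train ML models on far richer signals; this recency/frequency/spend heuristic with hand-picked weights only shows the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class Shopper:
    days_since_last_purchase: int
    purchases_last_quarter: int
    avg_basket_value: float

def propensity(s: Shopper) -> float:
    """Crude 0..1 score: recent, frequent, high-value shoppers rank highest.

    The weights below are invented for illustration; a production model
    would be trained on historical outcomes, not hand-tuned.
    """
    recency = max(0.0, 1 - s.days_since_last_purchase / 90)
    frequency = min(1.0, s.purchases_last_quarter / 12)
    value = min(1.0, s.avg_basket_value / 100)
    return 0.5 * recency + 0.3 * frequency + 0.2 * value

regular = Shopper(days_since_last_purchase=5, purchases_last_quarter=9,
                  avg_basket_value=60)
lapsed = Shopper(days_since_last_purchase=80, purchases_last_quarter=1,
                 avg_basket_value=150)
print(f"regular: {propensity(regular):.2f}, lapsed: {propensity(lapsed):.2f}")
# The high scorer gets the real-time in-store offer;
# the low scorer gets a win-back campaign.
```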
Are these your last days, or just the beginning of your best days? As a retailer, you hold the power to determine your future.

Let’s get to the details

This is just the beginning of the discussion. Watch for future posts on this topic. In part two, we’ll look more closely at the consumer-in-control: how retailers can adopt technologies like artificial intelligence (AI) to create the modern customer experience (CX), especially to meet the demanding expectations of the powerful cohort of “digital native” shoppers. In part three, we’ll take a look at how retailers can harness technology to provide a personalized in-store customer experience. It’s no longer if or when retailers need to act boldly; it’s now how—from leveraging personal devices and cloud-based solutions, to employing AI for forecasting and responding to in-store traffic, to comprehensive shopping cart analysis that can provide all the components for gaining key consumer insights and turning them into a personalized CX. In the final installment, we'll turn to technology and the retail supply chain. Automation and blockchain are two more key components helping retailers deliver on changing consumer expectations, whether it's at the store or over the "mile after the last mile". Stay tuned.


Cloud Infrastructure Services

How 2 Manufacturing Companies Are Preparing for the Cloud

When I hear “manufacturing,” my mind immediately shifts to sepia-toned, steam-powered factories. But the adoption of innovative technology is putting that outdated image to rest. Digital technologies that have emerged in the last 5 or so years, like the Internet of Things (IoT), artificial intelligence, machine learning, automation, robotics, and data analytics, have fundamentally changed the manufacturing sector. As Daniel Newman notes in a recent Forbes article about the top 5 digital transformation trends in manufacturing, “Not since Henry Ford introduced mass production has there been a revolution to this scale.” As manufacturers look to make their operations leaner and more competitive, integrated cloud-ready IT systems like Oracle Engineered Systems have become crucial for effective digitization strategies—helping to manage operating costs, improve efficiency, and enable almost instantaneous responses to changing customer and market demands.

Competing in the Digital Age Requires Significantly Better IT Systems

That's a fact. In the digital age, we have high expectations for seamless experiences—making the business environment increasingly competitive. This is especially true for manufacturers, who must manage ever-growing complexities to meet the needs of their customers. Manufacturers are constantly looking for ways to beat the competition by moving faster, cutting costs further, and winning customer loyalty. Modernization doesn't just happen by deploying a new web-based SaaS application. And it doesn't just happen by upgrading your critical systems to the latest version (although that is a place to start). IT systems are the backbone of every company's technological prowess; it's how companies compete now. The two real-world examples below demonstrate how companies can leverage new cloud-ready technologies to adapt their existing IT systems for the digital age: improving application and reporting speeds, enabling real-time analytics, and reducing unplanned downtime of their most important business applications, all while preparing for the cloud shift. Both do this by leveraging Oracle Engineered Systems to enable their current enterprise resource planning (ERP) systems to perform faster at maximum availability. The additional cost savings are just a cherry on top.

Speeding Up Global Growth

Worthington Industries, a rapidly growing diversified metals manufacturer, was experiencing growing pains as a result of global expansion. Worthington Industries had grown sales to US$2.8 billion, with 10,000 employees at more than 80 facilities in 11 countries. The company was looking to consolidate its financial management systems across multiple lines of business and wanted to reduce costs by standardizing operating procedures across the business. Worthington also wanted to improve its manufacturing processes while providing near-real-time reporting to give managers the insights necessary to better manage the business. Higher system capacity, improved availability, and quicker disaster recovery were required to safeguard mission-critical systems. Worthington wanted to be able to do all of this without significantly increasing its IT team. A co-engineered IT system, with storage and applications designed to work together and purpose-built for Oracle Database, was Worthington’s solution to provide a foundation for global growth.
The company upgraded its integrated ERP and supply chain management system on a managed-cloud services platform, implementing Oracle E-Business Suite Release 12 with Managed Cloud Services. Worthington’s new system improves scalability as the business grows and expands through acquisitions. Most impressively, Worthington Industries was able to meet its goals and improve system availability to 99.8%.

Manufacturing Downtime = Penalties and Lost Business

Spain-based CELSA Group, a highly diversified manufacturer of forged, laminated, and processed steel, is the largest steel producer in Spain and one of the largest in Europe. With more than 50 companies operating on five continents, CELSA Group knew it needed to improve its IT infrastructure to support international growth. CELSA Group runs SAP ERP systems and was concerned about excessive downtime, which had delayed shipments of its steel products and exposed the company to late penalties and lost business. CELSA Group needed to provide managers with better and timelier financial reporting and resource planning across its businesses, while optimizing its backup processes to reduce downtime. CELSA implemented a new IT infrastructure based on Oracle SuperCluster and Oracle Exadata Database Machine. With diligent project planning, migration was accomplished seamlessly in less than a day. Additionally, with the new engineered IT system, CELSA was able to improve on-time delivery by eliminating downtime, save more than $650,000 annually in labor costs, optimize financial reporting across 2,000 users in 50 entities, and triple backup speeds.

Ready for the Cloud, Ready for the Future

By upgrading and standardizing their IT infrastructure on integrated, co-engineered technologies that are purpose-built for Oracle Database, these two companies have been able to realize tremendous improvements in efficiency. Because Oracle Engineered Systems have exact equivalents in the cloud (see Oracle Exadata Cloud at Customer and Oracle Exadata Cloud Service), both companies have gained flexibility unique to Oracle, allowing them to scale easily, cut costs, and gain a single view across the business for greater market agility. Learn more about how Oracle Engineered Systems and cloud-ready solutions can address today's problems and prepare you for the shifting market demands of tomorrow. Read the CIO magazine and Oracle collaboration on cloud-ready infrastructure here.

