Learn how businesses like yours can begin to optimize for today and plan for tomorrow with Cloud-Ready IT Infrastructure

Recent Posts

Cognizant Reveals 3 Questions to Begin Your Cloud Infrastructure Journey

By Roshan Subudhi, Cognizant

Cloud plays a central role in our clients’ technology strategies, and there’s no longer any real debate over this point. The advantages of migrating enterprise workloads to the cloud are clear and compelling, and the costs of maintaining legacy approaches are too high. We see evidence of this shift in Gartner’s projection that the market for Infrastructure as a Service (IaaS) will grow more than 35 percent during the 2017-2018 period. This category is a major part of the investments that enterprises are making as a first step in a cloud migration strategy.

Many Paths to the Cloud

The fact that everybody is moving to the cloud doesn’t help a CEO or CIO decide how to plan a successful cloud journey for their enterprise. Most organizations can’t uproot and transplant all of their workloads into the public cloud at once. Aside from the resources required for such a colossal effort, consider the variables that often affect an application’s cloud readiness:

Legacy application architectures
Regulatory compliance requirements
Data sovereignty mandates
Business process alignment
Latency and other performance constraints

These and other variables dictate how different applications often follow very different paths into the cloud: some migrate to the public cloud, while others transition to a private cloud. Many applications will also remain on-premises for varying lengths of time, either to make way for higher-priority applications or because some legacy applications are especially challenging to re-architect for the cloud.

Cloud Infrastructure: Making Choices that Matter Most

The task at hand, then, involves crafting a transition strategy that satisfies three imperatives:

1. Keep legacy applications available and accessible in their on-premises environments, while still making the most of opportunities to run them more efficiently.
2. Choose the simplest and least disruptive cloud migration path when the time comes.
3. Give the organization the runway it needs to execute a migration strategy on its own schedule and on its own terms.

At Cognizant, we help clients craft their own cloud migration strategies. We view an open, modern, cloud-ready IT infrastructure as a key requirement for any successful cloud migration. An organization’s IT infrastructure choices, including its server, storage, backup and recovery, and networking capabilities, can all have a make-or-break impact on enterprise cloud migration projects. As a result, while any organization must take a number of steps to ensure a successful cloud migration, getting its infrastructure house in order is typically the first step, and often the most important one, on this journey.

The standards that define when one organization’s “house is in order” may not cover another’s infrastructure needs. In general, however, a truly cloud-ready infrastructure is one that:

Implements a consistent technology stack and can accommodate all three common deployment models, preferably using a common management interface.
Delivers the right mix of cost- and performance-optimized capabilities, depending on a given workload’s business impact.
Employs open standards and open source applications to ensure that infrastructure components are fully interoperable, today and in the future. These give organizations the advantage of a fully transparent and predictable technology roadmap upon which to base a cloud strategy.

The Open Infrastructure Imperative

The elements of a modern cloud application stack, and the tools used to build and deploy those applications, consist largely of open source software. The ability to pair open infrastructure with open source applications is extremely important today. As quickly as technology changes now, change will be faster still in the near future. Openness ensures that a cloud infrastructure will remain agile and versatile enough to adapt, evolve, and create business value.
Innovation is happening right now around a group of technologies, including artificial intelligence (AI), machine learning, the Internet of Things (IoT), predictive analytics, and edge-to-core, cloud-enabled data storage architectures. It’s hard to overstate the value of these technologies to businesses that view innovation as a competitive advantage, or the reality that you can’t take full advantage of them without cloud infrastructure.

3 Questions to Begin Your Cloud Infrastructure Journey

When developing a cloud migration strategy, the following questions can be useful in your conversations with internal and external experts, and can help you establish benchmarks and assess your progress.

1. What kinds of gains, in terms of KPI performance, should we expect from making cloud infrastructure investments?

At Cognizant, it’s not uncommon for the firms with whom we work to achieve cost savings of 40 percent or more as they adopt a modern cloud infrastructure upon which to implement their migration projects. We also typically see TCO fall by 20 to 30 percent, even as the same firms achieve time-to-market and performance gains in excess of 30 percent. This type of performance is the very definition of a win-win scenario.

There are, of course, many factors that contribute to these gains. They range from the benefits of shifting investments from CapEx to OpEx as an organization’s public cloud footprint grows, to the direct business impact of faster times-to-market for new products and services.

2. How should my organization expect to change to accommodate these infrastructure upgrades?

The good news here is that a modern, open cloud infrastructure actually simplifies the change process for many stakeholders, for example by minimizing disruptions and downtime during cloud migration. Nevertheless, there are groups for whom infrastructure upgrades involve direct changes in how they work.
Bringing in key stakeholders, including your firm’s executive team (or at least the CEO and CIO), as well as product managers and senior members of your software engineering team, is a good start. Also bear in mind that infrastructure modernization will accelerate the need to fill new roles, including engineering managers with DevOps experience, site reliability engineers, and developers with experience using and refining DevOps methodologies.

3. Are there advantages to getting outside consulting or other expert assistance with open cloud infrastructure?

There is value in working with cloud technology professionals who have helped organizations complete similar infrastructure upgrades. Consider the value of partnering with experts who have “seen it all” when it comes to implementing cloud infrastructure: they bring an immense body of accumulated experience to tasks that many IT practitioners have performed rarely, if at all.

In the case of Cognizant, we focus on helping clients rapidly obtain value from their Oracle Cloud platform and infrastructure investments. This includes migrating Oracle and non-Oracle enterprise workloads to Oracle Cloud environments. We also assist clients with application inventory, assessment, code analysis, migration planning and execution, and post-migration support. During a typical engagement, the Cognizant team might start with an in-depth inventory of a client’s current enterprise landscape, collecting data that feeds into our cloud assessment tools. These tools, in turn, help us determine an application’s most appropriate location among public, hybrid, and private clouds. This process also helps predict the most appropriate model for migrating a client’s environment to the cloud, using either Infrastructure as a Service (IaaS) or Platform as a Service (PaaS) offerings.

Selecting a modern, open cloud infrastructure may be the first step on a cloud migration journey, but it certainly isn’t the last one.
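The inventory-then-placement flow just described can be pictured as a simple rule-based triage over an application inventory. This is a hypothetical sketch: the attribute names, rules, and thresholds are illustrative only and do not represent Cognizant's actual assessment tooling.

```python
# Hypothetical sketch of rule-based cloud-placement triage.
# Attribute names and rules are illustrative only, not any vendor's real tool.

def recommend_placement(app):
    """Suggest a deployment target for one application record."""
    if app.get("data_sovereignty_restricted") or app.get("regulated"):
        return "private cloud"        # compliance keeps the data in-house
    if app.get("latency_budget_ms", 1000) < 10:
        return "on-premises"          # tight latency budgets stay local
    if app.get("legacy_architecture"):
        return "on-premises (re-architect later)"
    return "public cloud (IaaS/PaaS)" # default path for portable workloads

inventory = [
    {"name": "payroll", "regulated": True},
    {"name": "web-frontend"},
    {"name": "trading", "latency_budget_ms": 2},
]
for app in inventory:
    print(app["name"], "->", recommend_placement(app))
```

A real assessment would weigh many more variables (code analysis, business-process alignment, migration cost), but the shape, inventory in and per-application recommendation out, matches the engagement flow described above.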
The decisions your organization makes here will have an impact across countless projects, and possibly years into the future.

_________________________________________________________________________

About the Author

Roshan Subudhi is VP and Global Delivery Leader of the Oracle Solutions Practice at Cognizant. He has 20+ years of experience in the consulting and system integration space with a focus on Oracle, and a deep understanding of innovation, solutions, automation, delivery, and execution around Oracle enterprise products. He has driven the Oracle business with industry-leading growth and enabled exceptional market positioning. Prior to joining Cognizant, Roshan was the Global Practice Head for the Oracle Practice at Infosys.


Storage & Tape

Oracle ZFS Storage Appliance ZS7: Fast, Efficient, and Cost Effective

Today Oracle introduced the new Oracle ZFS Storage Appliance ZS7 unified storage systems, which provide exceptional performance and price-performance for data analytics, transaction processing, data sharing, and data protection. These sixth-generation systems build on a breakthrough DRAM-centric system architecture that serves up to 90% of all IOs out of DRAM cache, which is up to one thousand times faster than flash, substantially improving performance and price-performance compared to previous-generation systems. We also announced benchmark results for the software build, database, EDA, and video data acquisition portions of the SPEC SFS2014 benchmark suite, which place the ZFS Storage Appliance among the fastest performers on a per-node basis of any benchmarked systems.

And while the performance of these systems represents the apex of storage performance available today, it’s the business value they bring to an organization that’s really important. In general, we break this value into four categories:

Flexibility to meet modern IT requirements without complexity: Oracle ZFS Storage Appliance ZS7 pairs a highly multi-threaded hardware and software environment with massive DRAM caches that slash IO latency for VMs and help enterprises meet the scalability and performance requirements of modern IT environments.

Oracle Database and Engineered System integration: Oracle ZFS Storage Appliance ZS7 offers unique co-engineering with Oracle Database and Engineered Systems, enabling it to understand which database IOs are most important so it can automate setup, tuning, and performance optimization for these environments, freeing DBAs to spend more time growing the business and less time tuning storage.

Application performance and value: When it comes down to it, the reason anyone buys storage is to enable applications to bring value to the organization. By decreasing application and database latency and increasing throughput for everything from software development to data warehouses, Oracle ZFS Storage Appliance ZS7 makes everyone more productive and increases the value obtainable from existing software licenses.

Faster protection of critical data: While all of us would like to believe that our IT infrastructure is infallible, the reality is that software and hardware break, and there are malicious agents afoot. The speed at which we back up and restore all types of data, from databases to email, is critical to all enterprises. The Oracle ZFS Storage Appliance ZS7-2 offers up to 54 TB per hour of backup and 66 TB per hour of restore performance, significantly faster than traditional PBBAs and at a small fraction of their price. What would you do with the ability to restore data at 66 terabytes per hour?

Taken together, these four forms of business value provided by the Oracle ZFS Storage Appliance ZS7-2 systems enable enterprises of all sizes to operate faster and more efficiently than ever before.
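The headline numbers above lend themselves to quick back-of-the-envelope checks. In this sketch the DRAM and flash latency figures are assumptions chosen to reflect the "up to one thousand times faster" claim, not published Oracle specifications:

```python
# Effective IO latency when 90% of IOs hit DRAM cache.
# Latency values here are illustrative assumptions, not Oracle specs.
dram_latency_us = 0.1     # assumed DRAM-cache service time, microseconds
flash_latency_us = 100.0  # assumed flash service time (~1000x slower)
hit_rate = 0.90           # share of IOs served from DRAM, per the post

effective_us = hit_rate * dram_latency_us + (1 - hit_rate) * flash_latency_us
print(f"effective latency: {effective_us:.2f} us")  # ~10x better than all-flash

# Restore-time arithmetic from the quoted 66 TB/hour figure:
dataset_tb = 100
print(f"restoring {dataset_tb} TB takes ~{dataset_tb / 66:.1f} hours")
```

The weighted-average formula also shows why the hit rate matters so much: at a 90% hit rate, the 10% of misses still account for nearly all of the average latency, so every further point of hit rate pays off disproportionately.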



Rethinking Credit Risk Models with Cloud and Analytics

Banks and other creditors have used formulas to model creditworthiness for decades, a successful strategy centered on effective risk assessment. External trends aren’t going to disrupt this; lending will always be a risk game. But data and technology are refining the art and science of risk assessment in ways that are critically important for banks to understand as they work to maintain and grow market share in an increasingly competitive industry.

More Data Means Better Credit Risk Modeling

The amount of data banks own and use is exploding. This is driving a realization that credit risk modeling needs to adjust in response, because it has a direct impact on compliance, revenue, reputation, and competitiveness.

Compliance: Post-financial-crisis regulations such as Basel II and III and Dodd-Frank have added new forms of credit risk management, such as stress testing, that require more frequent and more detailed reporting.

Revenue: Customer expectations are changing. Customers want banks to evaluate their creditworthiness on a continuous basis and to be proactive in communications, including pre-approved loan amount information.

Reputation: Errors in execution or reporting are more harmful, as customers and society have a diminishing tolerance for mistakes. Adding more contextual data to credit risk modeling improves the efficacy of decisions and the ability to justify them.

Competitiveness: Banking has been historically slow to change, an unhealthy trait in an era dominated by fast-moving innovation. Not modernizing credit risk modeling is a huge risk by itself, considering the value it adds to competitors that do modernize, such as fintechs.

Modernizing Risk Modeling Starts with Effective Data Management

A data management platform is the first requirement of modern credit risk modeling.
We see many banks that have invested in data lakes, which are valuable to the risk function in general because they can hold high volumes of structured, unstructured, and semi-structured data. This allows for near-real-time data ingestion and processing, which is needed more and more across the banking enterprise. Data lakes also add significant processing power for soft analytics and allow for new data processing techniques. They can also save money, because they make it possible to store data at low cost while enabling banks to store more, and more diverse, data. All of that comes together in one single repository.

On top of the data are the analytical capabilities needed to build quantitative models, and a growing number of qualitative models, for risk scoring and default detection in credit risk. Qualitative credit risk modeling is becoming more common to fill the need to serve personalized offers to customers on an ongoing basis and to expand to new markets and customers. For example, in traditional credit scoring for retail loan approval, credit risk is assessed most of the time on a customer's financial history. Adding more data and analytics capabilities lets banks build richer models by factoring in new types of data, e.g., demographic, financial, employment, and behavioral data. This is where data lakes come into play: they feed in contextual data that expands the possibilities, as well as confidence in those possibilities. The same is true for commercial lending. Current models assess things like sales margins, liquidity ratios, and total debts, but there's an opportunity to factor in new data types as well. Things like capacity utilization, social capital, social media, family records, and other archives can be meaningful.
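As an illustration of building "richer models by factoring in new types of data," consider a toy scoring function that blends traditional financial-history features with contextual ones. The feature names and weights here are hypothetical, purely to show the mechanics; a production model would be fitted to data and validated for fairness and compliance:

```python
# Toy credit score blending traditional and contextual features.
# Feature names and weights are hypothetical, for illustration only.

TRADITIONAL = {"payment_history": 0.5, "debt_ratio": -0.3}
CONTEXTUAL = {"employment_stability": 0.15, "utility_payment_record": 0.1}

def credit_score(features, use_contextual=True):
    """Weighted sum of normalized (0..1) features; higher means lower risk."""
    weights = dict(TRADITIONAL)
    if use_contextual:
        weights.update(CONTEXTUAL)
    return sum(w * features.get(name, 0.0) for name, w in weights.items())

applicant = {
    "payment_history": 0.8,
    "debt_ratio": 0.4,
    "employment_stability": 0.9,
    "utility_payment_record": 0.7,
}
print(round(credit_score(applicant, use_contextual=False), 3))  # 0.28
print(round(credit_score(applicant, use_contextual=True), 3))   # 0.485
```

The contextual features shift the applicant's score, which is exactly the effect described above: the same customer looks different, and can be served differently, when the model sees more of their context.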
Two caveats apply. First, while the analytics engines of modern credit risk modeling are automated and could include machine learning (ML), human input via soft analytics is still important in credit risk modeling to make sure credit decisions are transparent and legal. Second, not all data-rich credit risk models are the same; one organization might emphasize one input more than another depending upon strategy and other factors.

The Right Technology: Cloud and Engineered Systems

One of the things that stands in the way of adopting modern credit risk modeling is outdated technology. Cloud is essential to provide the storage and computing power needed for resources such as data lakes and real-time analytics, and cloud resources can be rapidly scaled up or down. Most of the time, current infrastructures can't do this. Using hardware and software that is engineered specifically for the bank’s requirements provides additional advantages. Typically, engineered systems stand out in performance, scalability, fault tolerance, and manageability, all important aspects of modern credit risk modeling. To add to that, using an autonomous system such as Oracle’s autonomous database, which automatically tunes and manages itself with no downtime, could mean faster response times and quicker decisions.

Performance: There's more data and more models, but also more expectations from regulators and customers. From all angles there's the requirement to deliver results faster, so performance is certainly a very important factor.

Scalability: With new types of data constantly being added, the bank’s underlying infrastructure needs to scale up rapidly. Risk modelers will need data lakes to experiment with their models and test out new data sources. The scalability of those environments is getting more and more important for data scientist and data analyst productivity, and the ability to scale very fast at the least cost, in development as well as in production, is becoming more important.

Fault tolerance: Credit risk management is a business-critical function. At any point in time, banks should be able to deliver results or credit positions to customers. Engineered systems help to do that effectively, as well as to support regulatory reporting and stress testing.

Manageability: Modern credit risk modeling systems become more and more complex as time goes by, so manageability of the whole platform is an important factor, and the autonomous capabilities of Oracle Cloud services are an important part of ongoing manageability.

Take a Step Toward Modernized Credit Risk Modeling

Engineered systems can be a good place to start cloud modernization. These systems can be part of a hybrid cloud architecture and/or provide cloud-ready assets when a bank is ready to move more workloads into the cloud. And while it seems that some workloads will remain on-premises for the foreseeable future, banks are taking steps to modernize their on-premises infrastructure to achieve cloud benefits while taking into account the compliance and risk management profiles they need to satisfy their customers. HDFC Bank in India chose Oracle Exadata to help increase daily financial reporting speeds by 4x and liquidity risk reporting speeds by 7x, moving from 52 hours to just under eight hours. Replacing its legacy infrastructure has helped it meet a new, more demanding service-level agreement and improved overall credit risk management.

Learn more about how Oracle can provide ready resources that are engineered for the cloud.

Wiljo van Beek is the Director for Analytics and Big Data, Banking & Insurance, Oracle EMEA.


Data Protection

Wikibon Reports PBBA Operating Costs are 68% Higher than Oracle’s Recovery Appliance

Leading tech influencer Dave Vellante, Chief Research Officer at Wikibon, recently published an enlightening new research report comparing Oracle’s Recovery Appliance with traditional Purpose-Built Backup Appliances (PBBAs). The analysis, titled “Oracle’s Recovery Appliance Reduces Complexity Through Automation,” found that Oracle’s Recovery Appliance helped customers reduce complexity and improve both Total Cost of Ownership (TCO) and enterprise value.

Traditionally, the best practice for mission-critical Oracle Database backup and recovery was to use storage-led PBBAs, such as Dell EMC Data Domain, integrated with Oracle Recovery Manager. However, this approach remains a batch process involving many dozens of complicated steps for backups and even more steps for recovery, which can prolong the backup and recovery processes and cause errors that lead to backup and recovery failures.

Oracle’s Recovery Appliance customers report that TCO and downtime costs (lost revenue due to database or application downtime) are significantly reduced thanks to the simplification and automation of the backup and recovery processes. The Wikibon analysis estimates that over four years, an enterprise with $5 billion in revenue can potentially reduce its TCO by $3.4M and see a positive business impact of $370M. Wikibon’s findings indicate that operational costs are 68% higher for PBBAs such as Data Domain relative to ZDLRA for a typical Global 2000 enterprise running Oracle Databases.

Bottom Line

Wikibon has confirmed what Oracle clients have known all along: choosing Oracle’s Recovery Appliance results in higher efficiency through automation, an overall reduced TCO, and a positive impact on both an enterprise’s top and bottom line.

Read the full report

Discover more about Oracle’s Recovery Appliance


Engineered Systems

Fast Food, Fast Data: Havi is Feeding QSR’s Massive Data Appetite with Cloud-Ready Technology

Quick-service restaurants (QSRs) have always focused on speed, value, and convenience for their competitive advantage, but recent trends have made that mission exponentially more complex for companies in this $539 billion global industry. Consumers increasingly demand greater choice, more customization, and a more personalized marketing experience. They want the ability to order, plan delivery, and pay on their mobile devices. In fact, 25% of consumers report that the availability of tech figures into their decision of whether to visit a specific QSR location.

As a global company providing marketing analytics, supply chain management, packaging services, and logistics for leading brands in food service, HAVI Global Solutions may be a behind-the-scenes player in the QSR arena, but it is on the front lines of technology-driven innovation. For one of its customers, a very large global QSR, HAVI computes 5.8 billion supply forecasts every day, down to the individual ingredient level, for 24,000 restaurants across the globe. With data points and locations continuing to grow, HAVI’s on-premises infrastructure was reaching capacity. “Traditional build-your-own IT hardware infrastructure stacks were not helping us with all our problems,” says Arti Deshpande, Director, Global Data Services at HAVI. “We were always bound by the traditional stack: storage, network, compute. When our workload is mainly IO-bound, that traditional stack was not helping us.”

Ensuring the Right Product at the Right Time

“In the QSR business, if you don’t have the right food in the restaurant at the right time, it’s very difficult to meet customer expectations,” says Marc Flood, CIO at HAVI. When Flood joined HAVI as the company’s first global corporate CIO in 2013, he found a complex IT infrastructure environment spread across multiple data centers and co-location providers.
“I wanted to establish a common backbone with a partner that would work with our cloud-first strategy,” he recalls. Ultimately, Flood chose to consolidate data operations for the company’s ERP solutions, NetSuite and JD Edwards, onto Oracle Exadata Database Machine, running production in the primary data centers and disaster recovery in five Equinix data centers around the globe. HAVI chose Equinix not only for its global footprint, which closely matched HAVI’s own, but also because of its dedicated interconnection with the Oracle Cloud via Oracle FastConnect. “One of the crucial capabilities we sought was the ability to leverage Oracle’s cloud solutions to complement our on-premises solution,” he says. “The cross-connect quality is incredible; the latency on the cross-connect is very low.” HAVI consolidated 34 databases onto two racks of Exadata Database Machine X6-2, resulting in 25% to 35% performance gains versus the previous HP infrastructure. Exadata met HAVI’s requirement for elastic scalability without performance degradation to stay ahead of its QSR client’s projected worldwide growth.

Streamlining Disaster Recovery Without Sacrificing Speed

When it came to re-examining the company’s disaster recovery (DR) strategy, HAVI determined that it would need its DR system to achieve 75% to 80% of native performance. “It is essential that we be able to continue to forecast regardless of whether we have an event in our primary data center, while also keeping costs under control,” Flood says. “That means meeting our DR requirements in the right way, establishing appropriate RPOs (recovery point objectives) and RTOs (recovery time objectives) while being able to maintain capability and a cost model in alignment with our clients’ expectations.” To meet these criteria, HAVI worked with Oracle to create a DR solution using the Oracle Cloud to offload the huge overhead required for the DR system from the primary database servers.
The solution not only resulted in cost savings of approximately 35%, but also exceeded performance requirements. “Almost 95% of our workload ran at 100% of Exadata performance, of which 60% actually ran 200% faster,” Deshpande says. Flood and Deshpande were impressed with the speed with which the custom solution could be developed and implemented. “It was a very fast process—a great example of partnering and then moving quickly from proof of concept (POC) into live production,” Flood says. Together, Oracle and HAVI ran eight POCs over three months and fully deployed the system over the course of another three months.

Preparing for the Future with Cloud-Ready Infrastructure

QSR is hardly the only industry experiencing change thanks to the proliferation of data. Finance, ecommerce, and healthcare are just some of the other industries evolving as companies learn how to mine the data deluge for competitive advantage. For HAVI, migrating to a cloud-ready environment means removing the barriers to growth for itself and its customers. “We were able to grow the service that we provide without experiencing any reduction in performance to our customer, and we’re able to assure them of continuous service at the level they expect,” Flood concludes.

Learn more about how Oracle Exadata and cloud-ready engineered systems can enable your company to scale and innovate your competitive advantages. Subscribe below if you enjoyed this blog and want to learn more about the latest IT infrastructure news.


5 Exciting Moments at Oracle OpenWorld

OpenWorld is an exciting conference with excellent networking and information on how emerging technologies are affecting the IT industry. With over 2,000 sessions and events, OpenWorld has a lot to offer, but we know that not everyone can make it to the conference in person. If you didn’t get a chance to attend, here are the top 5 exciting things that happened at OpenWorld.

1. Exadata Customers Uncover Their Keys to Success

One of the most insightful moments at Oracle OpenWorld 2018 was listening to Exadata customers describe amazing performance improvements and better business results that have helped them develop a competitive edge in the market. Wells Fargo and Halliburton both shared their significant cost savings as well as operational benefits from consolidating their hardware and software onto Oracle Exadata in this session. David Sivick, technology initiatives manager at Wells Fargo, shared how the company leveraged 70 racks of Exadata to replace several thousand Dell servers. Sivick said that the company has “realized a multi-million dollar a year saving…There’s a 78% improvement in wait times, 30% improvement on batch, 36% reduction in space from compression and an overall application speed improvement of 33%.” (Source: diginomica.) Shane Miller, senior director of IT at Halliburton, also reported significant cost savings and business results. For instance, Miller mentioned that with Exadata, “we saw a 25% reduction in the time it takes to close at the end of the month… We saw load times from 6 hours to like 15 minutes.” (Source: diginomica.)

2. Constellation Research and Key Cloud at Customer Customers Share Stories About Innovation

In the two years since the Cloud at Customer portfolio was announced, customers have seen significant innovation with their cloud deployments. As an example, Sentry Data Systems’ Tim Lantz shared how Exadata Cloud at Customer allows them to have their cake and eat it too.
Kingold Group’s Steven Chang shared how important data sovereignty is to their digital transformation with Exadata Cloud at Customer. Other customers in other sessions, including Dialog Semiconductor, Galeries Lafayette, Quest Diagnostics, and more, shared their stories at OpenWorld. To learn more, read Jessica Twentyman’s article in Diginomica.

3. Oracle Database Appliance Customers Shared How They Maximize Availability

During the Oracle OpenWorld customer panel, we heard how Oracle Database customers are driving better outcomes with Oracle Database Appliance versus traditional methods of building and deploying IT infrastructure. We covered the business value and customer perspectives on how Oracle Database Appliance has delivered real value for their Oracle software investments while simplifying the life of IT without additional costs. Our special guests operate in education, mining, finance, and real estate development.

One of the main topics was using a multi-vendor approach vs. an engineered system. As DBAs managing day-to-day operations, many panelists faced performance and diagnostic issues that a multi-vendor solution was not helping. With ODA they can manage the entire box, which provides easy patching with one single patch that does it all. David Bloyd of Nova Southeastern University stated: “In the past, we would take our old production SUN SPARC server that was out of warranty to be our dev/test/stage environment when purchasing a new production server to save money. Now we can test our ODA patches on the same software and hardware as our production environment by having the same ODAs for both environments.”

Furthermore, our panelists expressed the need to have 24x7 availability with no downtime. Konstantin Kerekovski of Raymond James stated: “The ODA HA model is key because being in financial services you cannot go down; high availability is key. We have two setups in dev where we are using RAC One Node, and also for DR, we can consolidate many databases on one ODA. In production, we have two instances of RAC running on ODA compute nodes, so no downtime.”

With the latest generation of Oracle Database Appliance, we are seeing performance, security, and reliability increase. Rui Saraiva of KGHM International stated: “With the latest implementation of the ODA X7 we were able to significantly increase application performance and thus improve business efficiencies by saving time for the business users when they run reports or execute their business processes.” Are you considering Oracle Database Appliance to run your Oracle Database and applications? Check out this blog by Jérôme Dubar, dbi services, on “5 mistakes you should avoid with Oracle Database Appliance.”

4. Oracle Products Demo Floor

The OpenWorld demo grounds featured 100+ Oracle product managers explaining the technical details of each product. This is an excellent opportunity to learn how to get the most out of your Oracle investments from the people who designed the products! In case you missed it, here is a video showing the exciting things that were happening at the Exadata demo booth.

5. Oracle CloudFest Concert

Oracle hosted a private party exclusively for customers! This intimate concert featured Beck, Portugal. The Man, and Bleachers. Guests enjoyed a night out at the ballpark with free food, drinks, entertainment, and networking.

Overall, the Exadata experience at Oracle OpenWorld was amazing. To learn more, check out the new Exadata System Software 19.1 release, which serves as the foundation for the Autonomous Database.

OpenWorld is an exciting conference with excellent networking and information on how emerging technologies are affecting the IT industry. With over 2000 sessions and events, OpenWorld has a lot to...

Engineered Systems

Oracle Exadata: Why Failover Is Not Good Enough

Back in the 1990s, I was responsible for a high-availability product at one of the leading server vendors in Silicon Valley. Our product relied on two servers and two copies (or a shared copy) of an application’s data. We pinged the application, and if we determined it was non-responsive, we failed over to the other server. This proven technology has formed the basis for database high availability ever since, and while it does the job, is it sufficient to meet the availability needs of today’s mission-critical databases? That’s exactly the question we were pondering ten years ago as we introduced Oracle Exadata to the public. We examined the causes of downtime and quickly realized we needed more than simple failover if we were to sell our product as a mission-critical database solution. While failover will automatically recover from software and hardware faults that take a database instance down, there turned out to be a great many scenarios where it just didn’t do the job.
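The ping-and-failover approach just described can be sketched as a simple heartbeat monitor. This is only an illustration of the general technique, not Oracle's implementation; the class name, probe callable, and three-check timeout window are hypothetical:

```python
class FailoverMonitor:
    """Minimal ping-and-failover sketch: probe the primary server, and only
    after enough consecutive missed heartbeats promote the standby."""

    def __init__(self, probe, timeout_checks=3):
        self.probe = probe              # callable: True if the primary responds
        self.timeout_checks = timeout_checks
        self.missed = 0
        self.active = "primary"

    def check(self):
        if self.probe():
            self.missed = 0             # healthy heartbeat resets the counter
        else:
            self.missed += 1
            # Only after a full timeout window do we conclude the primary is
            # down. This detection delay, plus the restart/mount that follows,
            # is where the minutes of downtime described in the post come from.
            if self.missed >= self.timeout_checks and self.active == "primary":
                self.active = "standby"
        return self.active

# Simulated outage: the primary stops answering after two good checks.
responses = iter([True, True, False, False, False])
monitor = FailoverMonitor(lambda: next(responses))
states = [monitor.check() for _ in range(5)]
print(states)  # the primary stays "active" until the timeout window elapses
```

Note how the monitor keeps reporting "primary" for two full check intervals after the fault occurs: timeout-based detection is inherently slow, which is the core limitation the rest of this post addresses.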
To see why, let’s look at some of the causes of downtime and examine how well a failover solution protects from each:

Server (hardware) failure: failover detects the failure after a timeout and fails over the database. Impact: 5-30 minutes of database downtime.
Database (software) failure: failover detects the failure immediately or after a timeout, then restarts the database or fails over. Impact: 5-30 minutes of database downtime.
Hardware maintenance: the database instance or entire VM is moved to another physical server. Impact: 0-30 minutes of downtime, depending on the technology used to move it.
Quarterly software maintenance: no protection; the system is down during maintenance. Impact: 2-3 hours per event.
Disk drive failure: protected by RAID. Impact: no downtime.
Disk drive slowdown: often not detected until complete failure, and it slows the entire system. Impact: the system slows down and fails to meet performance SLAs.
Disaster/data center outage: no protection; the system is down until the data center or backups are restored. Impact: hours to days of database downtime.
Human error: no protection; the system is down until the problem is resolved. Impact: hours of database downtime.

Now, I’ll be the first to admit the above table is a little simplistic, but it conveys my general point. When we looked to Exadata to become a mission-critical database server, we pretty quickly found that database failover was not going to work as our primary means of protecting from downtime. Luckily for us, we didn’t need to start from scratch. For over twenty years, Oracle development has had a Maximum Availability Architecture (MAA) team chartered to provide Oracle Database high-availability blueprints and solutions to minimize downtime for all unplanned outages and planned maintenance activities. Through a lot of testing, bug fixes, enhancements, occasional workarounds, and dedication to providing the best possible database availability, the MAA team has developed many white papers and collateral, and delivered a lot of solid advice directly to customers.
Most importantly, they developed a deep understanding of the principles of high availability, and these principles have been fully baked into every Exadata since its inception. We can unequivocally state that Exadata is Oracle's best MAA database platform: it is engineered to provide MAA benefits far superior to those of any custom configuration, and every generation includes significantly more engineering effort and full-stack testing (firmware, OS, database, GI, ASM, network, Exadata software, etc.) to ensure the engineering delivers as promised. Our MAA experience taught us the importance of Oracle Real Application Clusters (Oracle RAC), the industry-leading high-availability solution for databases. Exadata was architected from the ground up to use Oracle RAC. Oracle RAC doesn’t rely on failover, but rather runs instances of the database on two or more servers in the Exadata Database Machine. In the event of a failure, another instance is already running and ready to process work. Connections from the failed instance are redirected to the surviving instance, and work continues. Because the instance and database service are already running on the surviving servers, there is no need to restart the database and mount the database files, maintaining continual database service availability. Recovery after a failure is fast, as existing connections stay connected and failed connections can automatically be notified and reconnect. You may be thinking: so what? Oracle RAC is available for use in a custom-built solution, so why do you need Exadata? Oracle RAC is further enhanced in an Exadata environment. We can take advantage of tight integration with the hardware to provide instant failure detection, reducing the dependence on lengthy timeouts. This requires enormous amounts of software, hardware, and full-stack testing focused on minimizing application workload impact.
Rather than relying on SCSI or Clusterware timeouts, Exadata integrates lightweight checks and communications between Oracle Clusterware, Exadata, and its internal network. Customers typically experience less than 2 seconds of disruption due to a hardware or software fault. More importantly, Oracle RAC provides a mechanism to maintain full database service availability during periodic software updates, including critical security fixes and database patch set updates (PSUs). Instead of requiring up to 2-3 hours of downtime per quarter for software updates, Oracle RAC can enable zero-downtime maintenance for almost all software updates. Key to this is the ability for work to cleanly migrate from one server to another before that server is shut down. Since not all work migrates at the same rate, the database must support active-active connections to both servers. Can you do the same by failing over a database instance between servers? Not in practice: without active-active connections, the act of migrating will most likely disrupt connected applications, and recovering from that disruption can be rather time consuming. Oracle Exadata relies on ASM storage mirroring to provide double or triple storage redundancy to protect from disk and storage cell failures. While such technology is common in third-party storage arrays, Exadata uniquely checks blocks for corruption as they propagate to the storage subsystem, detecting data issues and even auto-repairing them before they affect availability. Key features exclusive to Exadata include Exadata HARD, Exadata ASM corruption detection and repair, implicit smart Exadata disk scrubbing and repair, and smart storage or component failure and repair. Perhaps the most vexing problems in storage subsystems are intermittent problems or slowdowns: issues that do not trigger fault detection and correction, but can have a dramatic effect on a system’s ability to meet its SLAs.
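The zero-downtime maintenance pattern described above, where work cleanly migrates from one active node to its active peer before that node is shut down, can be sketched as follows. This is a simplified illustration of the general idea, not Oracle RAC's actual mechanism; the node names, session counts, and batch size are hypothetical:

```python
class Cluster:
    """Two active-active nodes. Maintenance drains one node's sessions to its
    peer before shutdown, so the database service never stops."""

    def __init__(self):
        self.sessions = {"node1": 40, "node2": 40}   # active connections per node
        self.up = {"node1": True, "node2": True}

    def drain(self, node, peer, batch=10):
        """Migrate sessions in batches; because the peer instance is already
        running, migrated work resumes immediately with no restart."""
        while self.sessions[node] > 0:
            moved = min(batch, self.sessions[node])
            self.sessions[node] -= moved
            self.sessions[peer] += moved
        self.up[node] = False   # only now is it safe to patch this node

    def service_available(self):
        # The service stays available as long as any node is up and serving.
        return any(self.up.values())

cluster = Cluster()
cluster.drain("node1", "node2")   # rolling maintenance on node1
print(cluster.sessions, cluster.service_available())
```

The key point mirrors the text: this only works because both nodes accept connections simultaneously. A failover-only design has no second active instance to drain into, so the same maintenance forces an outage.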
Exadata has developed sophisticated technology, including I/O latency capping and redirection, under-performing disk confinement, and database-side read latency capping, that mutes the impact of these intermittent issues on end-to-end database performance by redirecting requests to redundant components and, when necessary, isolating the offending components to ensure they do not affect performance in the future. Of course, almost all of the Oracle Database's MAA features can be leveraged in an Exadata environment. One of the most popular, Oracle Data Guard, can protect from disasters and data center outages by maintaining a standby environment locally or thousands of miles away. Others just work better in an Exadata environment due to its scale-out and high-throughput end-to-end architecture. Exadata customers experience faster object reorganization, instance recovery, flashback rates, backup/restore rates, and Active Data Guard and GoldenGate apply and replication rates. Exadata also has enhanced HA management tools, including holistic exachk health checks, sophisticated EM and ASR alerts with sick-component detection, and Exadata AWR reports enhanced to provide comprehensive network and storage statistics for the entire Exadata stack. When in an MAA configuration, Exadata is recognized by IDC as a five-nines availability platform. Our last cause of downtime has today become one of the most common: human error. Exadata supports all advanced features of Oracle Database, including sophisticated role management and the ability to surgically undo human errors at the database, table, or even row level using Flashback technology built directly into the database. Platinum Support takes the pain out of patching, putting it in the hands of experienced Oracle support personnel and reducing the likelihood that a mistake will affect your availability. And I'd like to make one final, and perhaps most important, point: for best availability, stay on a well-traveled road.
I can't overstate the importance of running a standardized configuration, one that is tested end-to-end by Oracle's testing and MAA teams. This additional layer of testing detects problems in the full stack that are not uncovered with less holistic testing. For a given customer, this eliminates having to find issues unique to their configuration. If an issue is found by one customer, the fix can be incorporated into the platform and propagated very quickly to other Exadata customers before they ever encounter the problem. All this technology together makes Exadata one of the industry's most reliable and robust platforms. All Nippon Airways (ANA) manages airline operations, Bank of Georgia and other banks run their core banking, and Sprint processes over 15 billion transactions per day in a 24x7 environment, all on Exadata. What do airlines, banks, and telcos have in common? Downtime equals lost revenue or financial penalties. Exadata and the MAA architecture provide the best high-availability platform to support mission-critical workloads across every industry and every continent. This is part 4 in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata. Our next post will focus on Security and examine the benefits Engineered Systems bring to protecting your data. Subscribe below if you enjoyed this blog and want to learn more about the latest IT infrastructure news. Stay tuned for more:

Oracle Exadata: Ten Years of Innovation
Yes, Database Performance Matters
Deep Engineering Delivers Extreme Performance
Availability: Why Failover Is Not Good Enough
Security: Can You Trust Yourself?
Manageability: Labor Is Not That Cheap
Scalability: Plan for Success, Not Failure
Oracle Exadata Economics: The Real Total Cost of Ownership
Oracle Exadata Cloud Service: Bring Your Business to the Cloud
Oracle Exadata Cloud at Customer: Bring the Cloud to Your Business

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience in the information technology industry. With experience at both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Beware of the Frankenstack! Do you mix software and hardware vendors?

If your technology stack is a ghoulish mish-mash of hardware and software, the performance effects on your business could be frightening. In the classic 1931 film Frankenstein, the brilliant young Dr. Henry Frankenstein, accompanied by his hunchbacked assistant Fritz, cries at the moment of “creation” of his monster, “It’s alive!” Dr. Frankenstein had spent countless hours working in his castle-tower laboratory assembling body parts to bring the monster to life. After the monster destroys everybody Dr. Frankenstein loves, he laments, “Remorse extinguished every hope.” While we generally do not refer to the data center as a castle-tower laboratory (though some C-suite executives may beg to differ), it’s true that some scary manifestations of technology have been known to emerge from the depths of organizations’ archaic infrastructure. We call it a “Frankenstack.”

The Frankenstack Within

What is a Frankenstack? Much like the monster in the movie, a Frankenstack is a piecing together of DIY hardware and software in your organization’s enterprise systems architecture. Even if you select best-of-breed middleware, database, operating systems, virtual machines, servers, and storage, this mish-mash of components is not optimized to work together efficiently, so it hampers database performance, threatens security, and can block your path to the cloud.

Performance: If your database isn’t running on an integrated technology stack, it’s running in degraded mode. This has costly implications for speed: database queries take longer than they would on an engineered system purpose-built for efficiency. Delivery cycles for new applications will take longer as well, especially for larger applications. No business can afford the extra operational effort and lower availability that stem from degraded performance.

Security: Every layer in your Frankenstack leaves you exposed to new security threats.
You’ll need to patch each layer of the stack separately, which increases complexity and downtime: another set of risks no business can afford. The average data breach costs a company $3.9 million, and most breaches occur through vulnerabilities for which a patch is already available. And if you hit any snags along the way, you’ll have to juggle multiple vendors to diagnose and fix the problem, which is more time and effort wasted.

Path to the cloud: By settling for a Frankenstack, your business bypasses an easy path to the cloud, whether to a public cloud or within your own data center. A roll-your-own (RYO) technology stack often comprises apps running on legacy operating systems, and these applications are often little monsters unto themselves: either heavily customized off-the-shelf applications or custom apps developed to meet particular business needs. This leaves organizations grounded, as many don’t have the time, budget, or skills to rewrite their applications. Frankenstacks can even disguise themselves as friendly spirits when it comes to multicloud. While deciding to work with a whole host of cloud providers and cloud software may seem like a reasonable way to transform your business, it presents its own challenges. Different data models, management models, and service models only serve to recreate the complexity of a RYO technology stack: a horror show you can’t afford to revisit. Sid Nag, research director at Gartner, reminds us that organizations need to be cautious about IaaS providers potentially gaining unchecked influence over customers and the market.
He says that in response to multicloud adoption trends, “Organizations will increasingly demand a simpler way to move workloads, applications and data across cloud providers’ IaaS offerings without penalties.” While cloud computing was initially intended to simplify IT through standardization, consolidation, and centralization, today’s enterprises are operating in a more fragmented IT landscape that must integrate both on-premises resources and a variety of private and public cloud environments.

A Less Scary Model: Oracle Engineered Systems

Luckily, your story doesn’t have to end with a monstrous Frankenstack bringing down the enterprise. Oracle Engineered Systems offer an alternative: an integrated technology stack in which the components are purpose-built to work together, one that is pre-built, pre-tuned, and optimized, so you don’t have to waste time and resources building the stack yourself and you won’t risk building a monster. One of the most important advantages of an engineered system is the performance improvement. Co-engineered components operate more efficiently and with greater speed, which supports better customer experiences through faster access to websites and applications, and lets employees leverage data faster to perform better in their roles. An integrated IT stack also minimizes the maintenance requirements on IT staff and maximizes system security. IT can patch the entire stack at once, and if any service or diagnostics become necessary, the IT team works with just one vendor that can pinpoint and address the issue. Furthermore, Oracle Engineered Systems have exact cloud equivalents, making a move to the cloud a seamless and simple process. At the heart of Oracle Engineered Systems is Exadata, which is optimized for running Oracle Database. It achieves higher performance and availability and lower cost by moving database algorithms and intelligence into storage and networking, bypassing the traditional processing layer.
Originally designed for use in corporate data centers or deployed as private clouds, Exadata became available in 2015 in the Oracle Cloud as a subscription service. In early 2017, a third Exadata deployment choice became available: Exadata Cloud at Customer, Exadata Cloud Service technology deployed on-premises (behind the corporate firewall) and managed by Oracle Cloud experts. Now in its seventh generation, Exadata has added a substantial number of unique database capabilities that just can’t be matched with generic server-storage approaches. In other words, stick a Frankenstack under your Oracle Database and expect some side effects. Chief among them are lower performance, longer delivery cycles for new applications (especially larger ones), more operational effort, and less-than-optimal security and availability. Exadata, by contrast, reduces complexity: its security patching has been automated by Oracle, and its predictable results have helped customers lower the investment in keeping things running so they can focus on innovating. No tricks, just treats, when it comes to a real future-proof investment.

Allied Bank Banishes IT Complexity and Speeds Up Key Processes

Allied Bank Limited (ABL) is a perfect case in point. One of Pakistan’s leading banks, ABL reduced its critical close-of-business (COB) processing time by as much as 50% with the successful deployment of Oracle Exadata Database Machine. Although the bank had previously optimized its core banking applications at the software level to improve OLTP and COB performance, the system still needed a boost. Exadata was the only solution that allowed the bank to shrink the time for critical reporting processes at a reduced infrastructure cost. It also helped the bank improve its data protection capabilities, simplify its IT, and accelerate its expansion plans thanks to Exadata’s ability to scale.
Your IT Infrastructure Doesn’t Have to Be Frightful

The moral of the story is that digital transformation shouldn’t be stopped by a Frankenstack. Cloud-ready engineered systems mean the barriers and penalties that once existed for moving workloads, applications, and data to the cloud have been eliminated, and the new world order of choice and control to shape your organization’s cloud journey is alive! And unlike how Dr. Frankenstein felt about his creation, we can now say that hope extinguishes any remorse. Happy Halloween!


Oracle OpenWorld & The Future of Blockchain and Infrastructure

According to a new Gartner report, blockchain’s business value-add could reach $3.1 trillion by 2030. But enterprises need to think beyond simply deploying distributed ledger technology to choosing the right deployment platform: the kind of infrastructure and blockchain technology required to drive business value. Oracle provides that robust infrastructure and blockchain technology with its Oracle Blockchain Cloud. If you’re attending Oracle OpenWorld this week, you’ll want to check out this Tuesday session, “Making Enterprise Blockchain a Reality: Oracle Blockchain Cloud Use Cases [BUS4591],” to learn how your enterprise can simplify and accelerate blockchain adoption. Oracle also continues to be a leader in providing infrastructure engineered to optimize performance across a broad range of data workloads. For example, Exadata enables Oracle Database to deliver the highest levels of performance, scalability, flexibility, availability, and security. So make sure to check out this session at Oracle OpenWorld, “An Overview of Oracle Infrastructure Technologies in Oracle Cloud [PRO5904],” which uncovers details about Oracle’s cloud infrastructure and how it supports massive data and database workloads to drive business transformation. Oracle Blockchain Cloud and Oracle’s powerful Exadata engineered systems are both exciting technologies driving innovation and business value, so it is worth your time to check out both solutions and how they can improve your IT and business performance today. When it comes to blockchain, one thing is clear: blockchain has moved beyond theory and into practical application. Blockchain expert Mark van Rijmenam believes that the new technology is starting a revolution that is dramatically changing the way people do business with each other and interact online. He even makes the case that blockchain could someday help eliminate world poverty.
In an earlier post, we spoke to van Rijmenam about what’s happening in blockchain technology today. In this post, we’re following up with him about future uses of blockchain technology and what enterprise companies need to know to stay on top. Van Rijmenam is the founder of Imagin and Datafloq, a faculty member at the Blockchain Research Institute, and author of the best-selling book Think Bigger. He also recently published a new book on blockchain for social good called Blockchain: Transforming Your Business and Our World.

In a recent interview, you said that blockchain is the biggest invention since the Internet. What characteristics will enable it to change the future of transactions?

Inside a blockchain, data is immutable, verifiable, and traceable. These characteristics create a single version of truth that makes data more trustworthy. It also makes all kinds of processes more flexible and efficient. The fact that you can now really trace the life of products and data creates new levels of transparency which, in turn, creates trust. This impacts every industry, especially supply chain and retail. But you also see the effect in energy and fair trade, for example. Another game changer is the elimination of intermediaries, as with money transfers. Currently, if I wanted to transfer money to someone in a different country, and our banks didn’t know each other, they would need an intermediary to complete the transaction. With blockchain, you don’t have that problem anymore. Another killer application of the blockchain is smart contracts. If I sell you my house using a normal contract, there are ways that I could just take your money and run away with it. But with smart contracts, when you transfer money to me, there’s nothing I can do to stop it. The moment it hits my bank account, you automatically become the legal owner of the property.
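The immutability and verifiability van Rijmenam describes come from a simple structural property: each block records a cryptographic hash of its predecessor, so editing any historical record breaks every link after it. A minimal sketch of that idea (illustrative only; real blockchains add consensus, digital signatures, and Merkle trees on top):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block records its data plus the hash of the previous block."""
    body = {"data": data, "prev_hash": prev_hash}
    block = dict(body)
    # Deterministic serialization so the same content always hashes the same.
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and link; any edited block fails verification."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"data": block["data"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False                       # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                       # the chain link was broken
    return True

# Build a three-block chain, then tamper with the middle block.
chain = [make_block("genesis", "0")]
chain.append(make_block("transfer: A -> B", chain[-1]["hash"]))
chain.append(make_block("transfer: B -> C", chain[-1]["hash"]))
print(verify(chain))   # True: the chain is intact
chain[1]["data"] = "transfer: A -> Mallory"
print(verify(chain))   # False: the tampering is detected
```

This is why tampering is detectable rather than impossible: anyone holding the chain can recompute the hashes and see exactly where history was rewritten, which is the "single version of truth" described above.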
It sounds like, on a high level, the structure of businesses is going to change because roles that used to be necessary are going to disappear. For instance, we might not need an escrow company anymore when we close on houses.

Online, people don't trust each other because they don't know each other. With blockchain, there’s a new possibility for individuals to collaborate with other individuals, organizations, and even with things, in a seamless way where everyone is happy. That’s exactly what we're trying to achieve with my new company. Our goal is to change how people collaborate. In addition to that, we also see that competitors all of a sudden can share their proprietary data with each other while remaining in full control over who gets access to the data, when, for how much, for how long, how, etc. That offers all kinds of new possibilities as well, but it also requires organizations to completely rethink how they deal with data.

Can you tell us more about this new company you’re creating and what kind of collaboration might take place?

We are creating a decentralized collaboration platform, using blockchain technology, which will allow individuals, organizations, and things to collaborate with each other on any kind of trade. Our initial focus is written content, for example, writing an article or piece of news with another person. But as we evolve over time, the platform will include the ability to create software or games: basically, any type of collaboration you can imagine. The platform gives full ownership and full control to the content creators. Whatever they create, they will be able to earn a fair reward for fair work. We use a reputation protocol, so the idea is that the higher the quality of your content, the more reliable and reputable you are, and the more money you can make. The Internet was envisaged as a decentralized global network, but we’ve seen it come to be controlled by a few very powerful, centralized companies.
Blockchain can shift that control back to the individual. It allows secure, reliable, and direct information transfer between individuals, organizations, and things, so that we can manage, verify, and control the use of our own data. When the creators of the Internet initially developed it, they did a lot of things very well, developing standards around HTTP and domain name servers, etc. But, unfortunately, they left out a few things, especially around the idea of identity. That is, how can we use our offline identity in the online world? A lot of companies are working on a concept called self-sovereign identity to fix this issue. I think the creators of the Internet also forgot a reputation protocol. Online, you can pretend to be anyone, and that can be a good thing. But it can also be a bad thing if you can't trust anyone because you don't know how reputable they are. That's basically what we're trying to add to the web: a reputation protocol.

What you’re talking about is building trust in shared data. What about security? How is blockchain changing data security? How does the focus change?

Security should always be at the top of the agenda for any company. If it’s not, that's just unacceptable. One thing that enterprises need to be aware of is quantum computing, which is coming our way very quickly. Organizations need to look into requiring quantum-resistant encryption. Many organizations don’t have security high on the agenda, let alone quantum resistance, and that’s a massive challenge that organizations need to be aware of. Security depends on understanding this new technology. For instance, not everyone is aware that there are different kinds of blockchain. Yes: you have public and private blockchains, permissioned and permissionless blockchains, and these allow you to create a whole variety of different blockchains. The Bitcoin blockchain is a public permissionless blockchain.
Public and permissionless means anyone can participate, and you don't need to be verified by anyone before you join. You also have, for example, public permissioned blockchains, which everyone can access as long as they get verified. And then private blockchains are, by default, permissioned. These are only accessible to a small group of participants, for example, four to six banks. In the end, blockchains are just databases, and the differences lie in who gets access to read, write, and add to them.

So permissioned blockchains are the most secure?

Absolutely. If you have to verify who you are to access something, that's a good thing. That’s one approach that we are using in the company that we're building. Everyone can access the blockchain, but we want to know who you are, and that's it. Then you have access to it. That creates more trust as well. For a private permissioned blockchain, you can have an extremely high level of security.

How do you see enterprises diving into blockchain?

One thing I’ve noticed is that the entire financial services industry is investigating blockchain quite heavily because they recognize that they have to do something about it. They see how easy it becomes to transfer money all around the world. With blockchain, you don’t need a bank to do this. In other industries, things are happening within supply chain and retail. Organizations are experimenting with what this could mean for their industry, but it's all in the experimental phase. The financial services industry is definitely at the forefront. You also see the most action coming from startups, which are trying to build or reinvent existing services and products.

What can we expect from blockchain in the near future, and why should we be excited about it?

Blockchain helps us redevelop and recreate a society in a fairer and more decentralized way.
I think that's a fantastic opportunity that we all have to create a better world and help us take back control of our lives. Rather than being overly controlled by these huge organizations that collect and share our information, we can decide who has access, when, and how. For that to happen, though, a lot of technology needs to be developed. We’re still in the very early stages. I also think it’s the responsibility of government to build more regulation around the technology, especially cryptocurrencies. We shouldn’t forbid cryptocurrency. Instead, we should regulate it in a sensible way that protects investors and enables individuals and organizations to innovate and create these better products and services. I think we are really at the start of a revolution: a completely new way of how we run our society. Of course, it takes time before we get there, but the possibilities and opportunities are absolutely endless.

If you’re attending Oracle OpenWorld this week and want to dive deeper into the world of blockchain as well as key infrastructure solutions that enable powerful processing of database and application workloads, here are some sessions you can still catch:

Tuesday, October 23, 3:45-4:30 p.m.: Delivering Private Database as a Service Using Oracle Exadata Cloud at Customer [CAS2084]
Tuesday, October 23, 5:45-6:30 p.m.: Unleash the Power of Your Data with Oracle Exadata Cloud at Customer [CAS3961]
Wednesday, October 24, 11:15 a.m.-12 p.m.: Blockchain: A Killer App for Enterprise Digital Transformation [PRO5856]
Wednesday, October 24, 4:45-5:30 p.m.: Zero Data Loss Recovery Appliance: Insider’s Guide to Architecture and Practices [TIP4218]
Thursday, October 25, 10-10:45 a.m.: Expanding a Distributor’s Business on the Cloud Through Oracle Blockchain Cloud Service [BUS2790]
Thursday, October 25, 11-11:45 a.m.: Will Blockchain Transform Your Digital Supply Chain? [BUS6496]


The Three Layers of Defense with Oracle Cloud at Customer Solutions

An enterprise can receive up to 17,000 security alerts each week but investigate only a fraction of them. Companies are finding it nearly impossible for their security teams to keep up, and they’ve realized that throwing more people at the problem isn’t the answer. Companies want security that’s built into their cloud products so they can rest assured their data is protected. While a Gartner study estimates that more than half of all enterprises will implement an all-in-cloud strategy by 2025, not all companies are ready or able to move to a public cloud environment. For instance, many companies need to maintain data in their own data center for regulatory or latency reasons. For these businesses, the traditional public cloud is not the only option. Oracle’s Cloud at Customer portfolio is a unique cloud delivery model that offers the benefits and built-in security processes, expertise, and technology of Oracle’s public cloud while allowing you to stay in control of data security behind your own firewall. Among the most important benefits of any Oracle Cloud deployment is data security. Oracle Cloud operates under a shared responsibility model that builds security in at every layer. All cloud solutions come with extensive, continual security measures so that you can focus on extracting value from your cloud-based data instead of on how to protect it. Because all Oracle Cloud platforms provide the same security assurances and continued protections, Cloud at Customer users realize the same level of security as public Oracle Cloud customers. Let’s look at some of the security measures you should consider if you’re planning a move to the cloud, and how Oracle approaches cloud security to maintain the highest level of protection—for private and public cloud users alike. Your First Layer of Defense: Keep Patches Up to Date Without the Upkeep With so many security alerts, it’s little wonder internal security teams are struggling.
In our own research on cloud threats, we found that 86% of firms felt unable to “collect and analyze” the vast majority of their security event data at scale. Meanwhile, 85% of security breaches occur where a patch was available but not implemented. Security teams need a patching strategy that ensures patches are implemented on a regular basis. Under Oracle’s shared responsibility approach, Cloud at Customer is maintained, patched, and upgraded by Oracle through our Patch Update Program. We deploy patches quarterly along with critical software updates. Your Second Layer of Defense: Take a Hybrid Approach to Your Security Solutions When it comes to cloud deployments, enterprises are increasingly maintaining a mix of public cloud, private cloud, and on-premises infrastructure for their databases, applications, and workloads. But all these workloads must be able to communicate with each other and be protected as one integrated system. Oracle Cloud Security Solutions allow you to manage your hybrid environment under one security umbrella. This suite of four tools prevents, detects, responds to, and predicts threats across public and private cloud and on-premises databases: Cloud Access Security Broker (CASB) is a cloud-based security broker and automation tool that works across your entire technology stack to provide increased visibility, detect threats, and automate responses to enhance the security of corporate data. Oracle’s Identity Cloud Service offers a secure single sign-on solution for on-premises, Oracle Cloud, and Cloud at Customer networks. Our Security Monitoring and Analytics (SMA) cloud service works 24/7 to detect, investigate, and remediate security threats across your networks. Configuration and Compliance Service is especially useful for Cloud at Customer users to monitor and address compliance issues using industry benchmarks and your own compliance rules.
Available as separate products or as a suite, these solutions work alongside the native security functions built into all Oracle applications and infrastructure solutions. Your Third Layer of Defense: Take a Holistic Approach to Cloud Security At Oracle, we believe security should be a holistic and continuous process involving four tiers: physical, technical, process, and people. Physical and Access Control: One of the benefits of Cloud at Customer is that you control your data’s location and physical security within your own data center. But this isn’t the end of the story for physical cloud security. Because Cloud at Customer is an extension of Oracle’s public cloud, the cloud operations are managed the same way as in the Oracle data center, but remotely. Therefore, it is important that cloud environments like Oracle’s undergo regular maintenance of their security configurations. A well-managed environment ensures that authorized people have access to sensitive data, and unauthorized people do not. As a Cloud at Customer user, you benefit from the remote Oracle Cloud Operations team’s physical security access, as well as your ability to control security in your own data center.  Technology: Security can’t be an afterthought when designing a database or application. All technology that touches the cloud needs to be built thoughtfully to safeguard against common security loopholes. Like the rest of Oracle Cloud, Cloud at Customer was built using strict secure coding standards designed to push security down your stack across your IaaS, PaaS, and SaaS tools. It can all be connected under a single dashboard by integrating your current systems with Oracle Security solutions. Process: Cloud security isn’t just about configuring a database or designing a tool—it’s a continuous process. With more cyberattacks occurring each year, security monitoring is a rising issue for most enterprises. 
We help you protect your data using continuous security measures, such as scheduled patching and 24/7 monitoring. People: One of the most common causes of security breaches is a lack of training on cybersecurity issues. In our most recent cyber threats survey, we discovered that only 43% of organizations could identify the most common IaaS shared responsibility model. At Oracle, all of our cloud service employees are certified through OSSA and use industry-specific best practices to develop and maintain our solutions. But we can also train your employees to be OSSA certified. Choosing between on-premises and public cloud infrastructures shouldn’t be a matter of security. Oracle builds all of its products so that you can focus on the benefits of each solution instead of how to protect it. The security available with Oracle’s Cloud at Customer offerings allows you to adopt all the security best practices of Oracle Cloud while maintaining the security within your own data center.  Discover more about how Oracle Cloud at Customer and Oracle Cloud Security Solutions give you security and control. And follow us at @Infrastructure, @Exadata, and @OracleSecurity for all the latest announcements and insights.
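The quarterly patch cadence described above comes down to one question: has any system gone longer than a patch cycle without an update? As a toy illustration, the check below flags such systems. The host names and the 90-day threshold are hypothetical examples, not part of Oracle's Patch Update Program or tooling:

```python
from datetime import date, timedelta

# Hypothetical illustration: flag hosts that have gone more than one
# quarterly patch cycle (~90 days) without an update.
QUARTER = timedelta(days=90)

def overdue_hosts(last_patched, today):
    """Return host names whose last patch is older than one quarter."""
    return sorted(h for h, d in last_patched.items() if today - d > QUARTER)

# Example fleet with made-up hostnames and patch dates.
fleet = {
    "db-prod-01": date(2018, 9, 15),
    "db-prod-02": date(2018, 4, 1),   # missed the last quarterly cycle
    "app-test-01": date(2018, 8, 30),
}

print(overdue_hosts(fleet, today=date(2018, 10, 22)))  # ['db-prod-02']
```

A report like this is the kind of bookkeeping that a managed patching program takes off an internal security team's plate.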



Five Cloud at Customer Sessions You Don’t Want to Miss

Oracle OpenWorld is only a week away! OpenWorld features over 2,000 sessions and events to choose from. We suggest you use the schedule builder to help you prioritize the Cloud at Customer sessions you would like to attend. To book all of the relevant sessions, use the Focus on Documents for the Cloud at Customer portfolio and the Oracle Cloud at Customer product lists to see all of the Cloud at Customer-related sessions at the show. Here are the top 5 Cloud at Customer sessions to put in your schedule during the week of October 22-25: 1. Finding Success with Oracle Cloud at Customer: Key Customer Stories and Best Practices Tuesday, Oct 23, 4:45 p.m. - 5:30 p.m. | Moscone South - Room 207 Hear leading analyst Holger Mueller from Constellation Research share his view of the Cloud at Customer portfolio compared to other market offerings. He will be presenting with Sentry Data Systems, Solinftec, and Kingold at the session Finding Success with Oracle Cloud at Customer: Key Customer Stories and Best Practices [CAS3959] 2. Governments Adopt Oracle Cloud at Customer in Their Journey to the Cloud Monday, Oct 22, 5:45 p.m. - 6:30 p.m. | Moscone South - Room 159 Are you a government institution? You have probably thought about where your data resides. Join the City of Las Vegas and Clark County Water Reclamation District at the session Governments Adopt Oracle Cloud at Customer in Their Journey to the Cloud [CAS3960] 3. Unleash the Power of Your Data with Oracle Exadata Cloud at Customer Tuesday, Oct 23, 5:45 p.m. - 6:30 p.m. | Moscone South - Room 214 Thought about database residency in your IT network? Join us at OpenWorld to hear how Quest Diagnostics, Dialog Semiconductor, and Galeries Lafayette Unleash the Power of Your Data with Oracle Exadata Cloud at Customer [CAS3961] 4. Oracle Cloud at Customer: Have It Your Way Tuesday, Oct 23, 5:45 p.m. - 6:30 p.m.
| Moscone South - Room 206 Are you a banking institution that wants to move to the cloud? Hear how ICICI is adopting the cloud its own way at the session Oracle Cloud at Customer: Have It Your Way [CAS1468] 5. Oracle Cloud at Customer: Hear from Customers [CAS1470] Thursday, Oct 25, 10:00 a.m. - 10:45 a.m. | Moscone South - Room 206 Hear directly from our customers Riverland Reply GmbH and CITIC about how they are adopting Cloud at Customer. Come see them speak at the session Oracle Cloud at Customer: Hear from Customers [CAS1470] Hope to see you in San Francisco! Even if you can’t be at OpenWorld in person, follow us on Twitter to see live updates as sessions and keynotes occur.



Key Storage and Server Sessions at Oracle OpenWorld 2018

The Oracle server and storage teams are looking forward to meeting you all at Oracle OpenWorld and sharing best practices for managing your server and storage systems. Here are the top 5 sessions you don't want to miss: Zero Data Loss Recovery Appliance: Insider’s Guide to Architecture and Practices TIP4218 Wednesday, Oct 24, 4:45 p.m. - 5:30 p.m. | Moscone West - Room 3007 Jony Safi, MAA Senior Manager, Oracle Tim Chien, Director of Product Management, Oracle Stefan Reiners, DBA, METRO-nom GmbH Zero Data Loss Recovery Appliance is an industry-innovating, cloud-scale database protection system deployed at hundreds of customers around the world. Its benefits are unparalleled when compared to other backup solutions in the market today, offering the elimination of data loss and backup windows, database recoverability validation, and real-time monitoring of enterprise-wide data protection status. In this session, get an insider’s look at the system architecture. Hear the latest practices for management, monitoring, high availability, and disaster recovery. Learn tips and tricks for backing up to and restoring from the appliance. Implement these practices at your organization to fulfill database-critical service level agreements. Maximize Database Performance and Efficiency on Oracle ZFS Storage Appliance BUS376 Thursday, Oct 25, 9:00 a.m. - 9:45 a.m. | Moscone South - Room 214 Scott Ledbetter, Oracle In this session, learn why Oracle ZFS Storage Appliance is uniquely positioned to serve as backup and data storage for Oracle's engineered systems. With a DRAM-driven architecture and support for high-speed network interconnectivity, it can offer extreme I/O performance for backup and restore speeds.
With integrated support for Oracle Hybrid Columnar Compression and uniquely co-engineered features including Direct NFS Client, Oracle Intelligent Storage Protocol, and industry-acclaimed DTrace analytics, Oracle ZFS Storage Appliance is ideally suited for Oracle Database workloads both on-premises and in the cloud. Oracle Solaris and SPARC Update: Security, Simplicity, Performance PRM3358 Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone South - Room 206 Masood Heydari, SVP, Hardware Development, Oracle Brian Bream, Chief Technology Officer, Vaske Computer, Inc. Bill Nesheim, Senior Vice President, Oracle Solaris Development, Oracle Attend this session to learn how SPARC/Solaris systems deliver continuous innovation and investment protection with advanced features that secure application data, simplify management of virtual machines, and accelerate performance for Oracle Database and middleware. Also, learn about Oracle's strategy for future enhancements for Oracle Solaris and SPARC systems. Maximizing Oracle Workloads with Oracle’s Compute Infrastructure PRO4788 Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone South - Room 207 Subban Raghunathan, VP, Product Management In this session, learn why Oracle software runs best on Oracle hardware and how you can improve the security, reliability, and performance of your most demanding workloads. Oracle’s x86 and SPARC servers turbocharge the most important Oracle Database features and come with advanced security capabilities built into each server to keep your data safe. How To Implement End-to-End Security In Your Cloud Infrastructure Today TIP3337 Thursday, Oct 25, 12:00 p.m. - 12:45 p.m. | Moscone South - Room 207 Renato Ribeiro, Director, SPARC Systems Products Michael Ramchand, Distinguished Solution Engineer, Oracle Today, securing your infrastructure and data is an absolute must.
If you want engineering insight on how to easily build an end-to-end encrypted infrastructure throughout web tier, Java app tier, and database tier, all with near-zero performance impact, you will want to attend this session.



Exadata Powers the Oracle Autonomous Database

The future of cloud is autonomous. And the Oracle Autonomous Database Cloud is powered by Exadata. The Autonomous Cloud Frees Humans from Tedious Database Tasks Last year, Oracle introduced the Oracle Autonomous Database, and in February, we delivered the first production version to handle data warehousing and optimize data queries for analytics. On August 7, CTO Larry Ellison made a major announcement with Oracle Autonomous Transaction Processing, for optimized transaction processing and mixed workloads—a significant step up in the Autonomous Cloud. Everything is automated: the infrastructure, the database, and the data center. There’s nothing to learn and nothing to do. The Autonomous Database Cloud delivers automatic provisioning, automatic scaling, automatic tuning, automatic security, automatic fault-tolerant failover, automatic backup and recovery—everything—enabling users to cut costs, reduce risk, and focus on innovation. This machine learning-based technology now optimizes itself not only for queries for data warehouses and data marts, but also for transactions. So it can handle all your database workloads. It Runs on the Same Exadata Hardware as Your On-Premises Infrastructure Oracle Autonomous Cloud marks the culmination of four decades of technology innovation. It delivers to market today something that our competitors simply cannot do. Exadata, together with Oracle Cloud Infrastructure, the Oracle Autonomous Cloud Platform, Oracle Autonomous Applications, and many other Oracle innovations, incorporates machine learning to eliminate human intervention once policies are set. These emerging autonomous technologies are reshaping our customers’ approach to IT, allowing them to free up their budgets and staff and reduce risk while focusing on business growth and innovation. Oracle has continued to add capabilities to the underlying software as well as to the underlying hardware—which is based on Oracle Exadata.
Exadata not only forms part of the basis of the Autonomous Database, but it also is part of the underlying infrastructure and the cloud management. It is this interconnectivity that allows the Autonomous Database to deliver queries 10X faster and transaction processing and mixed workload handling up to 100X faster than competing solutions. Exadata's Unique Structure Makes True Elasticity Possible When you create an Oracle database using Autonomous Database, the system automatically provisions itself. It allocates storage, network capacity, compute capacity, and memory. The beauty of the Autonomous Database is that if you run your application and the load is low, the Autonomous Database will start de-allocating servers. When the application isn’t running at all, there is no server—that is, zero servers allocated to the database. It’s a serverless cloud. As you need capacity, it will automatically add servers while the system is running. So as demand on the database grows with more transaction requests, servers will be added automatically, along with additional network and I/O capacity. The Same Exadata Platform Makes Moving to the Cloud Easy With the Exadata platform as the foundation of the Oracle Cloud and the Autonomous Database, your business investment in Exadata is protected. In addition, your transition from on-premises to the cloud will be smooth so that you can focus on modernizing your infrastructure on-premises even as you prepare for migration to the cloud. ClubCorp, the largest owner and operator of private clubs nationwide, with 200+ country, city, athletic, and alumni clubs, has migrated its IT deployment from Exadata on-premises to Oracle Exadata Cloud Service in addition to other Oracle SaaS and PaaS services.
The organization is excited about the possibilities of Autonomous Data Warehouse and the ability to remove some of the administrative burdens of maintaining the database so that its staff can spend more time focusing on key business objectives. Still, some organizations will want or need to keep everything behind their firewalls. In that case, Oracle Exadata Cloud at Customer puts the Oracle Cloud inside your data center. As recently announced, Oracle will support Autonomous Database on Exadata Cloud at Customer as well, which includes all the automatic patching, reliability, security, and availability of Autonomous Database. But you don’t buy the hardware—you subscribe to the hardware and to the service. Exadata Provides the Infrastructure Behind the Autonomous Cloud For whatever environment meets your needs, Oracle Autonomous Cloud can handle all your database workloads. With end-to-end automation, you get a more reliable, more secure, more available system that eliminates all the manual processes involved in creating and managing databases. The future of database cloud is autonomous, and the Autonomous Database Cloud is powered by Oracle Exadata. Learn more about how Oracle Autonomous Database, Oracle Exadata, and Oracle Exadata Cloud at Customer can take care of all your database workloads. And follow us at @Exadata, @Infrastructure, and @OracleCloud for all the latest announcements and insights.
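The scale-to-zero elasticity described above can be sketched as a toy sizing function: the server pool tracks demand and drops to zero when the application is idle. The per-server capacity and the request rates below are invented for illustration and are not Oracle's actual provisioning logic:

```python
import math

# Toy sketch of scale-to-zero elasticity: size the server pool to demand,
# and deallocate everything when there is no load. All numbers are invented.
CAPACITY_PER_SERVER = 100.0  # hypothetical requests/sec one server can absorb

def next_server_count(requests_per_sec):
    """Return how many servers the pool should run for the current load."""
    if requests_per_sec <= 0:
        return 0  # no load: zero servers allocated ("serverless")
    return math.ceil(requests_per_sec / CAPACITY_PER_SERVER)

print(next_server_count(0))    # 0 -> fully deallocated while the app is idle
print(next_server_count(250))  # 3 -> capacity added automatically as demand grows
```

The key design point the article makes is the first branch: unlike traditional provisioning, an idle database holds no servers at all, so you pay nothing for standby capacity.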



Top 5 Must See Oracle Database Appliance Sessions at OpenWorld

Oracle OpenWorld 2018 is just around the corner and we want to make sure you’re able to maximize your time at the event from October 22-25. Below is a personal guide to the top five sessions that you should plan to attend while exploring Oracle Database Appliance. #1: Customer Perspectives on Business Value Gains with Oracle Database Appliance [CAS3962] Thursday, Oct 25, 1:00 p.m. - 1:45 p.m. | Moscone South - Room 215 Join this session to hear how Oracle Database customers are driving better outcomes with Oracle Database Appliance versus traditional methods of building and deploying IT infrastructure. Hear use cases, business value, and customer perspectives on how Oracle Database Appliance has delivered real value for their Oracle software investments while simplifying the life of IT without additional costs. Come hear from customers in education, healthcare, and retail in this informative session. #2: Oracle Database Appliance: A Technical Deep Dive [TIP4123] Monday, Oct 22, 10:30 a.m. - 11:15 a.m. | Moscone West - Room 3008 Oracle Database Appliance provides the lowest entry cost of all Oracle’s engineered systems. It is architected with Oracle Database best practices built in and takes less than two hours to deploy a 2-node Oracle RAC cluster—the complete storage and software stack. How can Oracle Database Appliance accomplish this? Attend this session to gain deeper insight into the design and discover how it can reduce your risk when implementing highly available database environments, so you can focus on the business instead of chasing down patches and storage topologies. #3: Oracle Database Appliance vs. Commodity x86 Hardware: The REAL Differences [PRO4122] Wednesday, Oct 24, 11:15 a.m. - 12:00 p.m. | Moscone West - Room 3008 Database Appliance is the smallest engineered system from Oracle.
It offers tightly integrated hardware and software stacks to provide simple deployment and patching, database best practices baked into the provisioning, and capacity-on-demand licensing that allows you to control your costs. But isn’t Oracle Database Appliance just an x86 box? Join this session and see how Oracle Database Appliance is architected to run mission-critical workloads on-premises, serve as your cornerstone for a hybrid cloud solution architecture, and add value to your business. #4: Hands-on Lab: Oracle Database Appliance [HOL6321] In this hands-on lab, get access to a simulated Oracle Database Appliance environment where you can provision the system, create and delete databases, configure backups, and more. Discover the ease-of-use capabilities of Oracle Database Appliance, and see how simple it is to manage your database estate with both a full web-based GUI and (for those of you who love scripting) a full-featured CLI. Tips and Tricks Sessions Monday, Oct 22, 5:15 p.m. - 6:15 p.m. | Marriott Marquis (Yerba Buena Level) – Salon 3/4 Tuesday, Oct 23, 12:45 p.m. - 1:45 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4 Wednesday, Oct 24, 3:45 p.m. - 4:45 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4 Thursday, Oct 25, 10:30 a.m. - 11:30 a.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4 #5: Deploy Database/Applications in Kernel-Based Virtual Machine on Oracle Database Appliance [TIP4121] Are you looking for a way to consolidate your databases and applications onto fewer systems while still isolating workloads and saving money on software licensing? Virtualization provides a way to utilize resources efficiently, run different workloads, and easily move VMs to another on-premises system or even the cloud. Kernel-based Virtual Machine (KVM), one of the most popular hypervisors, is supported on Oracle Database Appliance to run your databases and applications on a single platform.
Attend this session to learn how to deploy KVM to provision and manage your databases and applications while controlling your software licensing costs on Oracle Database Appliance and the cloud. In addition to the top 5 sessions, check out other Database Appliance activities at OOW. Monday Activity: Systems Technology, Data Warehouse, Big Data & ODA Customer Appreciation Event Monday, October 22, 06:30 PM - 10:30 PM Hosted by Oracle Senior Executives in Server Technology. Join Product Management & Development and customers from all over the world to network and exchange experiences. As this is an invitation-only event, please email: martina.keippel@oracle.com Thursday Activity: ODA Customer Advisory Board Thursday, October 25, 02:00 PM - 05:00 PM If you are interested in participating, please email: martina.keippel@oracle.com



Top 5 Must See Exadata & Recovery Appliance Sessions at Oracle OpenWorld

Are you feeling butterflies in your stomach yet? Oracle OpenWorld 2018 is around the corner and we want to make sure you’re able to maximize your time at the event from October 22-25. So, we’ve decided to give you a personal guide to the top five sessions that you should attend while exploring Oracle Exadata and Oracle Zero Data Loss Recovery Appliance (ZDLRA). What’s more, you can find all of the key Exadata sessions in this Exadata Focus-On Document, which highlights the top customer case study, product overview, business use case, product roadmap, product training, and tips and tricks sessions. As you can imagine, Oracle has some exciting innovations in store for Exadata across the Exadata on-prem, Exadata Cloud at Customer, and Exadata Cloud Service consumption models. You should also check out the latest developments on Oracle ZDLRA. So, we recommend the five key sessions below on Exadata and ZDLRA to make it easier for you to navigate the event. Top 5 Exadata and ZDLRA sessions that you can’t miss while at OpenWorld: Monday Sessions: Exadata Strategy & Roadmap 1. Oracle Exadata: Strategy and Roadmap for New Technologies, Cloud, and On-Premises Speaker: Juan Loaiza, Senior VP at Oracle When: Monday, 10/22, 9:00-9:45 am Where: Moscone West - Room 3008 Many companies struggle to accelerate their online transaction processing and analytics efforts, and business performance falters as a result. Sound familiar? This session is a perfect gateway to understanding how Exadata can help erase this problem and power faster processing of database workloads while minimizing costs. In this session, Oracle’s Senior VP, Juan Loaiza, will explain how Oracle’s Exadata architecture is being transformed to provide exciting cloud and in-memory capabilities that power both online transaction processing (OLTP) and analytics.
Juan will uncover how Exadata uses remote direct memory access, dynamic random-access memory, nonvolatile memory, and vector processing to overcome common IT challenges. Most importantly, Juan will give an overview of current and future Exadata capabilities, including disruptive in-memory, public cloud, and Oracle Cloud at Customer technologies. Customers like Starwood Hotels & Resorts Worldwide, Inc. have used the key Exadata capabilities to improve their business. For instance, they have been able to quickly retrieve information about things like customer loyalty, central reservations, and rate-plan reports for efficient hotel management. With Exadata, they can run critical daily operating reports such as booking pace, forecasting, arrivals reporting, and yield management to serve their guests better. Check out this session to see how Exadata helps customers like Starwood Hotels gain these results. Customer Panel on Exadata & Tips to Migrate to the Cloud 2. Exadata Emerges as a Key Element of Six Journeys to the Cloud: Customer Stories Speakers: David Sivick, Technology Initiatives Manager, Wells Fargo; Claude Robinson III, Sr. Director of Product Marketing, Oracle; Shane Miller, Halliburton When: Monday, 10/22, 9:00-9:45 am Where: Moscone South - Room 215 Every company today is trying to build a cloud strategy and make a seamless migration to the cloud without impacting its current, on-premises IT systems. This is a challenging feat, and it’s even harder in a multi-vendor environment. The good news is that Oracle has helped more than 25,000 companies transition to the cloud. For these large multinational customers, the journey to the cloud began years ago with Oracle Exadata as a cornerstone. They’ve modernized by ditching commodity hardware for massive database consolidation, saving millions in Oracle Database licensing, and improving the safety and soundness of their data.
Wells Fargo’s Technology Initiatives Manager, David Sivick, and Halliburton’s Shane Miller have experienced such transformations. And within the last few years, customers like David and Shane have started to consume Exadata in more flexible ways in their digital transformation drive. They will sit down with Oracle’s Sr. Director of Product Marketing, Claude Robinson, to share their Exadata cloud journey stories: how they optimized their database infrastructure, how they successfully drove their application and database migration, and how they advanced application development and data analytics. This session will feature Wells Fargo’s and Halliburton’s stories and tips that you can use as you build a cloud strategy, and will help you understand how Exadata can support your path to the cloud. Tuesday Sessions: Customer Panel on Exadata, Big Data, & Disaster Recovery 3. Big Data and Disaster Recovery Infrastructure with Equinix and Oracle Exadata Speakers: Claude Robinson III, Sr. Director of Product Marketing, Oracle; Arti Deshpande, Director, Global Data Services, Havi Global Solutions; Robert Blackburn, Global Managing Director, Oracle Strategic Alliance, Equinix When: Tuesday, 10/23, 3:45-4:30 pm Where: Moscone South - Room 214 We think some of the most powerful sessions are those where customers and partners openly share their experiences, so you can relate to their challenges and see how they achieved IT and business success. So we picked this session, which uncovers how Arti Deshpande, Director of Global Data Services at Havi Global Solutions, leveraged an Oracle offering to achieve Havi’s IT success. Arti will give you the inside scoop on how they were able to streamline disaster recovery in the Oracle Cloud without sacrificing speed, and also consolidated dozens of databases onto Exadata to improve performance.
Beyond learning from Havi’s customer experience, you will also hear about the solution architecture created through Oracle’s and Equinix’s partnership. Equinix will share how it partnered with Oracle’s Engineered Systems and Oracle Cloud teams to create a distributed on-premises and cloud infrastructure. The company will reveal how it built an on-premises and cloud infrastructure consisting of a private, high-performance direct interconnection between the Oracle Exadata Database Machine solution and Oracle Cloud, using Oracle Cloud Infrastructure FastConnect on Equinix Cloud Exchange Fabric. Finally, Equinix will share how this combined solution bypasses the public internet, allowing for direct and secure exchange of data traffic between Oracle Exadata and Oracle Cloud services on Platform Equinix, the Equinix global interconnection platform. Customer Panel on Exadata Cloud at Customer 4. Unleash the Power of Your Data with Oracle Exadata Cloud at Customer Speakers: Vishal Mehta, Sr. Manager, Architecture, Quest Diagnostics; Maywun Wong, Director of Product Marketing, Cloud Business Group, Oracle; Jochen Hinderberger, Director IT Applications, Dialog Semiconductor; Cyril Charpentier, Database Manager, Galeries Lafayette When: Tuesday, 10/23, 5:45-6:30 pm Where: Moscone South - Room 214 If you’re looking for more insight into Exadata, specifically Exadata Cloud at Customer, this is a great session to check out because it features first-hand experiences from customers using the Cloud at Customer consumption service and how it has impacted their businesses. In this interactive customer panel, IT and business leaders from Quest Diagnostics, Dialog Semiconductor, and Galeries Lafayette will discuss their business success with bringing the cloud into their own data centers for their Oracle Database workloads, as well as answer your questions. Vishal Mehta, the Sr.
Manager, Architecture at Quest Diagnostics, will share how they consolidated dozens of database servers onto Exadata and freed up many of their admins to drive more strategic tasks. By using Exadata Cloud at Customer, they were able to standardize their database services and configurations to yield benefits across many dimensions. Jochen Hinderberger, the Director of IT Applications at Dialog Semiconductor, will share the company’s decision to select Exadata Cloud at Customer because it had the capacity and performance needed to support their highly demanding tasks which included collecting and analyzing complex data to assure product quality for semiconductors and integrated circuits. Cyril Charpentier, the Database Manager at Galeries Lafayette will share their story around selecting Exadata Cloud at Customer to gain the cloud-like capabilities of agility and flexibility while improving their database performance. The customer will also discuss how Exadata Cloud at Customer has helped them offload tedious management and monitoring tasks while focusing on the real needs of the business. By attending this session, you get an idea of how Oracle’s Database enterprise customers use Oracle Exadata Cloud at Customer as part of their digital transformation strategy. This is a perfect session to learn how these customers harnessed their data and the benefits of a public cloud within their own data center behind their firewall to improve business performance. Wednesday Session: ZDLRA Architectural Overview and Tips 5. Zero Data Loss Recovery Appliance: Insider’s Guide to Architecture and Practices Speaker: Jony Safi, MAA Senior Manager, Oracle Tim Chien, Director of Product Management, Oracle Stefan Reiners, DBA, METRO-nom GmbH When: Wednesday, 10/24, 4:45- 5:30 pm Where: Moscone West - Room 3007 What keeps you up at night when it comes to IT challenges? Security and downtime no doubt. 
It is incredibly difficult to improve database performance while keeping the infrastructure immune to security attacks, downtime, and performance problems. The good news is that we think long and hard about these challenges at Oracle and have a solution that addresses them. In this session, you will learn how Zero Data Loss Recovery Appliance (ZDLRA), an industry-innovating, cloud-scale database protection system that hundreds of customers have deployed globally, can mitigate data loss and improve data recovery for your database workloads, helping you avoid problems around downtime and security. ZDLRA's benefits are unparalleled compared to other backup solutions on the market today, and you will get a chance to learn why. Jony, Tim, and Stefan will share how this offering eliminates data loss and backup windows, provides database recoverability validation, and ensures real-time monitoring of enterprise-wide data protection. Attend this session to get an insider's look at the system architecture and hear the latest practices around management, monitoring, high availability, and disaster recovery. This is a perfect session for learning tips and tricks for backing up to and restoring from the Recovery Appliance. After this session, you'll be able to walk away and implement these practices at your organization to fulfill database-critical service level agreements.

Other Sessions You'll Really Want to Check Out

That's it! Those are the top five sessions that you don't want to miss while attending Oracle OpenWorld this year. However, keep in mind that if you want a deeper exploration of Oracle Exadata and Oracle Zero Data Loss Recovery Appliance, you should check out these additional sessions. Here are three more sessions you should look into for extra brownie points.
Maximum Availability Architecture

1. Oracle Exadata: Maximum Availability Best Practices and Recommendations
Speakers: Michael Nowak, MAA Solutions Architect, Oracle; Manish Upadhyay, DBA, FIS Global
When: Tuesday, 10/23, 5:45 - 6:30 pm
Where: Moscone West - Room 3008

Exadata Technical Deep Dive & Architecture

2. Oracle Exadata: Architecture and Internals Technical Deep Dive
Speakers: Gurmeet Goindi, Technical Product Strategist, Oracle; Kodi Umamageswaran, Vice President, Exadata Development, Oracle
When: Monday, 10/22, 4:45 - 5:30
Where: Moscone West - Room 3008

Exadata Cloud Service

3. Oracle Database Exadata Cloud Service: From Provisioning to Migration
Speakers: Nitin Vengurlekar, CTO-Architect-Service Delivery-Cloud Evangelist, Viscosity North America; Brian Spendolini, Product Manager, Oracle; Charles Lin, System Database Administrator, Beeline
When: Thursday, 10/25, 10:00 - 10:45 am
Where: Moscone West - Room 3008


Cloud Infrastructure Services

Three Business Challenges. One Solution. Cloud at Customer.

For one company, it's moving to cashless credit card transactions. For another, it's detecting fraud on identity documents. For a third, it's providing exceptional lifestyles to customers. All three businesses turned to Oracle Cloud at Customer to overcome the obstacles holding them back from realizing their business goals. Rakuten Card, BrScan Tecnologia, and Kingold Group all saw Cloud at Customer as the way to remove the barriers to cloud and unleash the opportunities.

Going Cashless Puts New Demands on Rakuten Card's Business Agility

As a financial technology company and one of the biggest credit card service providers in Japan, Rakuten Card has set its sights on continually improving the customer experience with more and better online services. At the same time, it needed to respond to the Japanese government's initiative to promote cashless transactions. This migration to a cashless environment was fueling an annual growth rate of 20%, mostly from a sharp increase in credit card transactions. From an infrastructure standpoint, Rakuten needed to move off outdated systems and find a solution that could manage peak workloads, like month-end credit card payment transactions, and deliver a faster, better, more reliable customer experience. With Cloud at Customer, Rakuten was able to meet all these objectives and process credit card transactions 40% faster. Because the on-premises and cloud infrastructure share an identical architecture, the company gained the flexibility to move data and workloads between on-premises and the cloud without requiring fixed configurations to process peak workloads. An added challenge was the need to migrate fast. With Cloud at Customer, Rakuten was able to move data from its legacy system in one day, and the entire system within its strict three-day window. Cloud at Customer also gave Rakuten the ability to meet tight government data security regulations by putting the Oracle Cloud inside its data center.
And speaking of fast…

With 300% YoY Growth, BrScan Needs a Scalable Solution

BrScan Tecnologia is in the business of risk management. One of the Brazil-based technology company's services is helping major Brazilian telecommunications companies and banks detect fraud on identity documents with its BrSafe solution. And demand has been growing at a tremendous rate: 300% year-over-year, in fact. That demand, driven not only by more transactions but also by more users, was placing a lot of pressure on the company's existing infrastructure. It was obvious that BrScan needed a more robust, scalable, flexible, and secure solution. Also important was the need to keep the data in Brazil. The answer for the growing business was a hybrid cloud environment incorporating Cloud at Customer, where the data was not only inside the country but inside BrScan's data center. BrScan was so pleased with the solution that it is already considering moving to Oracle Exadata Cloud at Customer to boost performance even more, and using Oracle features like analytics and chatbots in the future. The path to the cloud takes many forms, depending on the needs of the enterprise. For BrScan, the need for flexibility led to a hybrid environment. For Kingold Group, moving all its core applications off legacy infrastructure and into the cloud was a non-negotiable goal.

For Kingold, Nothing Less than Exceptional Is Acceptable

The Kingold Group mission statement says the real estate, property development, and financial services company provides "exceptional lifestyles to people who lead exceptional lives." It should be no surprise, then, that the Guangzhou, China-based organization puts data sovereignty and customer data security at the top of its priority list. Like most companies, Kingold Group had concerns about making sure customer data was safe in the cloud. Cloud at Customer provided the confidence that the data was secure inside the Kingold data center.
CIO Steven Chang gave his CEO and management team an analogy: Imagine you put all your money into a particular bank; that bank sees you as a valued customer and puts an ATM in your house, managed by them. That's the strategy behind Cloud at Customer. With the security issue addressed, one of the company's goals was to move from a legacy architecture to a cloud architecture. With Oracle, it was able to lift and shift all its core business systems into Cloud at Customer. Now Oracle takes care of the day-to-day systems management, freeing Chang and his team to focus on delivering that exceptional lifestyle to its customers.

Only Oracle Offers Cloud at Customer

All three customer success stories demonstrate why Cloud at Customer is attractive to enterprises that have legitimate concerns about hosting business-critical data and processes in the cloud. Cloud at Customer gives these enterprises the best of both worlds: the benefits of the cloud and the control of on-premises infrastructure. Because Cloud at Customer is built on the same platform as the Oracle public cloud, enterprises have the flexibility to deploy applications wherever it makes the most sense, without compromising on performance. With Cloud at Customer, businesses can take advantage of the agility and subscription-based pricing of Oracle Cloud while meeting data-residency requirements. And all day-to-day systems management is provided by Oracle, freeing IT staff to focus on business innovation and other business-critical goals. Plus, businesses benefit from the ongoing innovation built into every Oracle solution. See how Oracle's Cloud at Customer offerings can help your business.


Cloud Infrastructure Services

Increasing Revenue and Competitive Trade Finance Innovation with Blockchain

The European Oracle Fintech Innovation Program publishes articles offering insights from curated Fintech partners. This article is based on an interview with Rik De Deyn, Senior Innovation Director at the Oracle European Fintech Innovation Program, and Rob Barnes, CEO of TradeIX. The full webinar can be accessed here.

Rik De Deyn, Oracle: Welcome, Rob, to the Oracle Innovation Program in Europe. I've had the pleasure of working with you and your company TradeIX on several opportunities, and I've been inspired by your focus on making innovation in trade finance real and tangible. Let me start by asking how you see the current state of the trade finance market, and what are some of the challenges corporates face today?

Rob Barnes, TradeIX: From the perspective of corporate treasury, there is still a very strong need and focus on enhancing working capital, looking at the entire cash conversion cycle, including receivables, payables, and inventory. Today, there are many options and solutions in the market. However, one of the main challenges we hear from corporates is the need to integrate with multiple applications, different solution providers, and financial institutions when looking at working capital and trade finance solutions. In addition, the underlying trade data is transmitted between each party through customized communication channels or by email, which is neither scalable nor efficient going forward. Corporate treasury has no overview and no common platform to centrally manage its trade finance solutions, such as availability of credit limits, volumes, or pricing, and its trade data is stored on multiple proprietary databases managed by external parties.

Rik De Deyn, Oracle: That's interesting. I can imagine that must create inefficiencies that corporate treasurers need to overcome.
Rob Barnes, TradeIX: I agree. Managing invoices and payments can be time-consuming and inefficient for a corporate's treasury and its group entities. They must deal with different currencies and jurisdictions, with unique contract terms and payment requirements. Because of this, companies often establish multiple local trade finance programs with several financial partners. This often introduces duplication, inefficiencies, and a lack of standardization across the organization. Most of the trade finance systems currently used to manage working capital are siloed and highly manual, causing a lack of visibility and overhead costs. There is a clear need in the market for change, and for a new approach to managing working capital.

Rik De Deyn, Oracle: Currently, how is data exchanged between corporates and financial institutions, and what does the future look like?

Rob Barnes, TradeIX: We are really at the beginning of a new era. Today, 90 percent of all trade finance transactions are still based on the exchange of CSV (comma-separated values) files sent via email or uploaded to an sFTP server, such as the trade portals operated by banks, which makes the process very time-consuming and inefficient for clients and financial institutions. The next stage in trade finance, already used by a few corporates, is the use of open communication channels, or APIs (application programming interfaces), allowing the automatic extraction of trade data from the ERP system into the trade finance application operated by the bank. However, the underlying data is still stored on each party's proprietary database, which can cause issues in terms of information silos, data consensus, and security. To avoid that, the next step in this journey is the introduction of distributed ledger technology (DLT), which stores data and allows it to be shared via APIs with a clear consensus mechanism, creating full transparency and provenance of the data.
This makes the information available and visible only to parties that have access rights to it. This really starts to open things up to a variety of value-added providers, such as B2B networks, insurance companies, and logistics providers, enhancing the level of data, which can give banks and their corporate clients a greater understanding of the lifecycle of each underlying trade finance transaction.

Rik De Deyn, Oracle: Applying blockchain and DLT technologies to trade finance transactions is a fascinating idea. How do you plan to realize this?

Rob Barnes, TradeIX: Well, the ultimate stage in this journey is what we are discussing with Oracle today. We're talking about making trade finance applications accessible from within the corporate's ERP system, which is the system of record. A trade finance app integrated with the ERP system allows the company and the bank to interface with one another in a much more fluid way and to connect with other players and value-added providers in the trade ecosystem. Instead of using different, separate trade finance solutions for your receivables or payables, the application will allow you to manage all trade finance transactions within just one platform and Oracle ERP environment.

Rik De Deyn, Oracle: What are some of the advantages of your Trade Finance App, linked with the Oracle ERP environment?

Rob Barnes, TradeIX: There are numerous advantages for the corporate, as well as its funding partners. With the new Trade Finance App, we can exchange data directly with the ERP system. We can then extract it automatically, making integration into existing trade finance systems more efficient and faster. This will allow you to manage trade finance solutions within one application, available through the Oracle Marketplace.
And, by giving permissioned access to trade data previously siloed in back-office systems, we can expand potential funding options, reducing the risk and cost of trade finance for corporates. The platform allows you to manage your working capital centrally, without the need to change existing processes. In addition, by leveraging blockchain technology, the platform provides corporates and financial institutions with secure, distributed data storage and bookkeeping options. That way, the treasury department can track transactions and transfers of value between trade participants. With ERP systems, we are used to sharing trusted data across various departments within the organization through a single point of contact. The trade finance application offers the same, leveraging blockchain technology, but in a wider sense: it centralizes trade finance processes, integrating and optimizing operations and workflows across multiple organizations, and shares a trusted version of trade data across participants.

Rik De Deyn, Oracle: At what stage of development are you with the TradeIX Trade Finance App?

Rob Barnes, TradeIX: We are currently building and testing the end-to-end trade finance application, focusing first on receivables financing from within the cloud-based Oracle ERP environment. We will make the application available on the ERP's Marketplace and will allow corporates to directly connect with multiple banks and other financial institutions to implement and manage their trade finance programs, leveraging the Corda distributed ledger technology from R3 on the Oracle Cloud. In a second stage, we will focus on other trade finance modules, such as payables financing or asset-based lending solutions. The selection of the upcoming modules depends on market demand, but the feedback from corporates as well as financial institutions is very promising.
We are confident that, through our partnership with Oracle, we will offer a unique product that is in high demand in the current trade finance ecosystem.

Rik De Deyn, Oracle: Thanks for this, Rob. As a fintech, how have you engaged with Oracle to achieve this collaboration?

Rob Barnes, TradeIX: We have been happy with the engagement with the Fintech Innovation Program in Europe and globally so far. We met through a common opportunity and realized that we could achieve more by partnering than alone. The Oracle Fintech Innovation Program enables corporates and banks to be more innovative and to shorten the time to monetize their new products and solutions within the Oracle ecosystem, and the Oracle Cloud is a perfect fit for our trade finance platform, powered by distributed ledger technology and a growing catalog of APIs. Oracle is helping us with technical mentorship, resources, and introductions to customers.



Oracle Exadata: Deep Engineering Delivers Extreme Performance

In my previous post, "Yes, Database Performance Matters", I talked about those I met at Collaborate, and how most everyone believed Oracle Exadata performance is impressive. However, every now and then I run into someone who agrees Exadata performance is impressive, but also believes they can achieve this with a build-your-own solution. I think on that one, I have to disagree... There are a great many performance-enhancing features, not just bolted on, but deeply engineered into Exadata. Some provide larger impact than others, but collectively they are the secret sauce that makes Exadata deliver extreme performance. Let's start with its scale-out architecture. As you add compute servers and storage servers, you grow the overall CPU, IO, storage, and network capacity of the machine. As you grow a machine from the smallest 1/8th rack to the largest multi-rack configuration, performance scales linearly. Key to scaling compute nodes is Oracle Real Application Clusters (RAC), which allows a single database workload to scale across multiple servers. While RAC is not unique to Exadata, a great deal of performance enhancement has been done on RAC's communication protocols specifically for Exadata, making Exadata the most efficient platform for scaling RAC across server nodes. Servers are connected using a high-bandwidth, low-latency 40 Gb per second InfiniBand network. Exadata runs specialized database networking protocols using Remote Direct Memory Access (RDMA) to take full advantage of this infrastructure, providing much lower latency and higher bandwidth than possible in a build-your-own environment. Exadata also understands the importance of the traffic on the network, and can prioritize important packets. This, of course, has a direct impact on the overall performance of the databases running on the machine. It's common knowledge that IO is often the bottleneck in a database system. Exadata has impressive IO capabilities.
I'm not going to overwhelm you with numbers, but if you are curious, check out the Exadata data sheet for a full set of specifications. More interesting is how Exadata provides extreme IO. The most obvious technique is to use plenty of flash memory. Exadata storage cells can be fully loaded with NVMe flash, providing extreme IOPS and throughput for any database read or write operation. This flash is placed directly on the PCI bus, not behind bottlenecking storage controllers. Perhaps surprisingly, most customers do not opt for all-flash storage. Rather, they choose a lesser (read that as less expensive) flash configuration backed by high-capacity HDDs. The flash provides an intelligent cache, buffering the most latency-sensitive IO operations. The net result is the storage economics of HDDs with the effective performance of NVMe flash. You might be wondering how flash can be a differentiator for Exadata. After all, many vendors sell all-flash arrays, or front-end caches in front of HDDs. The key is understanding the database workload. Only Exadata understands the difference between a latency-sensitive write of a commit record to a redo log and an asynchronous database file update. Exadata knows to cache database blocks that are very likely to be read or updated repeatedly, but not to cache IO from a database backup or large table scan that will never be re-read. Exadata provides special handling for log writes, using a unique algorithm that reduces the latency of these critical writes and avoids the latency spikes common in other flash solutions. Exadata can even store cached data in an optimized columnar format to speed processing on analytical operations that need only access a subset of columns. These features require the storage server to work in concert with the database server, something no generic storage array can do. Flash is fast, but there is only so much you can solve with flash.
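One rough way to see how well an intelligent flash cache is absorbing a workload is to compare cached read hits against total read requests. The sketch below is purely illustrative: the statistic names in the comment refer to the cumulative counters Oracle exposes through v$sysstat, but the helper function itself is hypothetical, not part of any Exadata tooling.

```python
def flash_cache_hit_ratio(flash_read_hits: int, total_read_requests: int) -> float:
    """Fraction of read requests served from the flash cache.

    Both inputs are cumulative counters, e.g. the 'cell flash cache read
    hits' and 'physical read total IO requests' statistics visible in
    v$sysstat on an Exadata system (this helper is just an illustration).
    """
    if total_read_requests == 0:
        return 0.0
    return flash_read_hits / total_read_requests

# A workload where 9,200 of 10,000 read requests were absorbed by flash:
print(f"{flash_cache_hit_ratio(9_200, 10_000):.0%}")  # prints "92%"
```

A ratio close to 1.0 means the HDDs are rarely touched for latency-sensitive reads, which is the effect the caching behavior described above is designed to produce.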
You still need to get the data from the storage to the database instance, and storage interconnect technologies have not kept up with the rapid rise in the database server's ability to consume data. To eliminate the interconnect as a potential bottleneck, Exadata takes advantage of its unique Smart Scan technology to offload data-intensive SQL operations from the database servers directly to the storage servers. This parallel data filtering and processing dramatically reduces the amount of data that needs to be returned to the database servers, correspondingly increasing the overall effective IO and processing capabilities of the system. Exadata's intelligent storage further improves processing by tracking summary information for data stored in regions of each storage cell. Using this information, the storage cell can determine whether relevant data may even exist in a region of storage, avoiding unnecessarily reading and filtering that data. These fast in-memory lookups eliminate large numbers of slow HDD IO operations, dramatically speeding database operations. While you can run the Oracle database on many different platforms, not all features are available on all platforms. When run on Exadata, Oracle Database supports Hybrid Columnar Compression (HCC), which stores data in an optimized combination of row and columnar methods, yielding the compression benefits of columnar storage while avoiding the performance issues typically associated with it. While compression reduces disk IO, it traditionally hurts performance, as substantial CPU is consumed by decompression. Exadata offloads that work to the storage cells, and once you account for the savings in IO, most analytic workloads run faster with HCC than without. Perhaps there is no better testimonial to Exadata's performance than real-world examples. Four of the top five banks, telcos, and retailers run on Exadata. For example, Target consolidated databases from over 350 systems onto Exadata.
They now enjoy a 300% performance improvement and 5x faster batch and SQL processing. This has enabled them to extend their ship-from-store option for Target.com to over 1,000 stores, allowing customers to get their orders sooner than before. I've really just breezed over 10 years of performance advancements. Those interested can find more detail in the Exadata data sheet. Hopefully, you see it would be impossible to get the same performance from a build-your-own system assembled from similar components. In the case of database performance, only deep engineering can deliver extreme performance. This is the third post in a series celebrating the 10th anniversary of the introduction of Oracle Exadata. Our next post, "Oracle Exadata Availability," will focus on high availability.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Implementing a Private Cloud with Oracle SuperCluster

Oracle SuperCluster is an integrated server, storage, networking, and software platform that is typically used either for full stack application deployments or consolidation of applications or databases. Because it incorporates Oracle’s unique and innovative Exadata Storage, Oracle SuperCluster delivers unrivaled database performance. And the platform also hosts the huge range of Oracle and third-party applications supported on Oracle’s proven, robust, and secure Oracle Solaris operating environment. Virtualization is a particular strength of Oracle SuperCluster, with Oracle VM Server for SPARC serving up high performance virtual machines with zero or near zero virtualization overhead. These virtual machines are known as I/O domains. Further, an additional layer of highly optimized nested virtualization is offered in the form of Oracle Solaris Zones. All of these virtualization capabilities come at no additional license cost. For more information about virtualization on Oracle SuperCluster, refer to the recent blog Is "Zero-Overhead Virtualization" Just Hype? The platform also utilizes a built in high throughput, low latency InfiniBand fabric for extreme network efficiency within the rack. As a result, Oracle SuperCluster customers enjoy outstanding end-to-end database and application performance, along with the simplicity and supportability featured on all of Oracle’s engineered systems. Can these benefits be realized in a cloud environment, though? Oracle SuperCluster is not available in Oracle’s Cloud Infrastructure, but private cloud deployments have been implemented by a number of Oracle SuperCluster customers, and Oracle Managed Cloud Services also hosts many Oracle SuperCluster racks in their data centers worldwide. In this blog we will consider the building blocks provided by Oracle to simplify deployments of this type on Oracle SuperCluster. 
An Introduction to Infrastructure-as-a-Service (IaaS)

In the past, provisioning new compute environments consumed considerable time and effort. All of that has changed with Infrastructure-as-a-Service capabilities in the cloud. Some of the key attractions of cloud environments for provisioning include:

Improved time to value. The period of time that usually elapses before value is realized from a deployment is considerably reduced. Highly capable virtual machines are typically deployed and ready to use almost immediately.

Greater simplicity. Specialized IT skills are no longer required to deploy a virtual machine that encompasses a complete working set of compute, storage, and network resources.

Better scalability. Provisioning ten virtual machines requires little more effort than provisioning a single virtual machine.

IaaS environments typically include the following characteristics:

User interfaces are simple and intuitive. Actions are typically either achieved with a few clicks from a browser user interface (BUI), or automated using a REST interface.

Virtual machines can be created without sysadmin intervention and without the need to understand the underlying hardware, software, or network architecture.

Newly created virtual machines boot with a fully configured operating system, active networks, and pre-provisioned storage.

Virtual machine components are drawn from pools or buckets of resources. Component pools typically deliver a range of resources including CPU, memory, network interfaces, storage resources, IP addresses, and virtual local area networks (VLANs).

Virtual machines can be resized or migrated from one physical server to another as the need arises, without manual sysadmin intervention.

Where costs need to be charged to an end user, the actual resources allocated can be used as the basis for charging. Resource usage can be accounted to specific end users, optionally tracked for billing purposes, and optionally restricted per user.

The end user is responsible for managing and patching operating systems and applications, but not for managing the underlying cloud infrastructure.

Oracle SuperCluster IaaS

The virtual machine lifecycle on Oracle SuperCluster is orchestrated by the SuperCluster Virtual Assistant (SVA), a browser-based tool that supports the creation, modification, and deletion of domain-based virtual machines, known as I/O domains. Functionality has progressively been added to this tool over the years, and it has now become a single solution for dynamically deploying and managing virtual machines on SuperCluster, including both I/O domains and database-oriented Oracle Solaris Zones. SVA is a robust tool that is widely used by SuperCluster customers across a range of different environments. The current SuperCluster Virtual Assistant v2.6 release offers a set of capabilities that deliver benefits and features consistent with those outlined in the IaaS introduction above. As an alternative to SVA's intuitive browser user interface, IaaS functionality on Oracle SuperCluster can be managed from other orchestration software using the provided REST interfaces. SVA's REST APIs are self-documenting, and therefore easier to consume, thanks to the included Swagger UI.

SuperCluster Virtual Assistant in Action

The following screenshot shows an initial window from the tool listing I/O domains in a range of different states. Both physical domains and I/O domains (virtual machines) are managed, along with their component resources. New I/O domains can be created, and existing I/O domains modified or deleted, with additional cores and memory able to be added dynamically to live I/O domains. Database Zones based on Oracle Solaris can also be managed from the tool, and a future SVA release will allow Oracle Solaris Zones of all types to be managed.
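To give a flavor of driving SVA from external orchestration software rather than the browser UI, here is a minimal sketch of building a REST call. The host name and the /api/iodomains resource path are placeholders of my own invention; the actual endpoints for your release are listed in SVA's bundled Swagger UI.

```python
from urllib import request

def build_list_domains_request(sva_host: str) -> request.Request:
    """Build a GET request for the I/O domain inventory.

    The '/api/iodomains' path is hypothetical; consult the Swagger UI
    shipped with your SVA release for the real resource paths.
    """
    return request.Request(
        f"https://{sva_host}/api/iodomains",
        headers={"Accept": "application/json"},
        method="GET",
    )

req = build_list_domains_request("sva.example.com")
print(req.full_url)      # https://sva.example.com/api/iodomains
print(req.get_method())  # GET

# Sending it (and parsing the JSON body) would then be roughly:
#   import json
#   with request.urlopen(req) as resp:
#       domains = json.load(resp)
```

Because the APIs are self-documenting, the same pattern extends naturally to creating, modifying, or deleting I/O domains from whatever orchestration tooling a site already runs.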
I/O domains can be frozen at any time to release their resources, and thawed (reactivated) whenever required. As well as providing a cold migration capability, freeze/thaw allows resources used by non-critical I/O domains to be temporarily freed during peak periods for use by other mission-critical applications. Resources are assigned automatically from component pools that manage CPU, memory, network interfaces, IP addresses, and storage resources. VLANs and other network properties can be pre-defined, allowing access to DNS, NTP, and other services. An integrated resource allocation engine ensures that cores, memory, and network interfaces are optimally assigned for performance and effectiveness. Compute resources are allocated to I/O domains at a granularity of one core and 16GB of memory, or using pre-defined recipes. Network recipes can also be set up to simplify the allocation of network resources, including simultaneous redundant connectivity to different physical networks thanks to quad-port 10GbE adapters. Recipes are illustrated in the screenshot below. A number of SVA policies can be set according to customer requirements. One set of policies relates to users. User roles are supported, allowing both privileged and non-privileged users to be created. A single SVA user can consume all resources; alternatively, multiple SVA users can be created, with resource usage tracked by user. Resource usage can be unconstrained, allowing a user to consume any available resource, or limits can be set to ensure that no user consumes more than a pre-defined allowance. The screenshot below illustrates an early step in the process of creating an I/O domain. A comprehensive Health Monitor examines the state of SVA services to ensure that the tool and its resources remain in a consistent and healthy state. SVA functionality continues to be extended, with a number of new features currently under development.
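The one-core/16GB allocation granularity and recipe-based sizing described above can be sketched as a small helper. The recipe names and numbers are invented for illustration; on a real system, recipes are defined by the administrator.

```python
CORE_MEMORY_GB = 16  # base granularity: each core comes with 16GB of memory

# Invented recipe names and sizes, for illustration only.
RECIPES = {"small": (2, 32), "large": (8, 128)}

def size_io_domain(recipe=None, cores=None):
    """Return a (cores, memory_gb) sizing, either from a pre-defined
    recipe or at the one-core / 16GB base granularity."""
    if recipe is not None:
        return RECIPES[recipe]
    if not cores or cores < 1:
        raise ValueError("at least one core must be requested")
    return (cores, cores * CORE_MEMORY_GB)
```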
Oracle SuperCluster M8 and Oracle SuperCluster M7 customers are typically able to leverage new features simply by installing the latest quarterly patch bundle, which also upgrades the SVA version. Enjoying the Benefits Oracle SuperCluster customers can realize cloud benefits in their own data centers, taking advantage of improved time to value, greater simplicity, and better scalability, thanks to the Infrastructure-as-a-Service capabilities provided by the SuperCluster Virtual Assistant. Database-as-a-Service (DBaaS) capabilities can also be instantiated on Oracle SuperCluster using Oracle Enterprise Manager. The end result is that Oracle SuperCluster combines the proven benefits of Oracle engineered systems with IaaS and DBaaS capabilities, allowing customers to reduce complexity and increase return on investment. About the Author Allan Packer is a Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001.  Allan is currently the technical lead and architect for Oracle SuperCluster.  



Prescription for Long-Term Health: ODA Is Just What the Doctor Ordered

Healthcare providers face so many complex challenges, from a shortage of clinicians to serve an aging population that requires more care, to changing regulations, to evolving patient treatment and payment models. At the same time, these providers struggle to manage the ever-increasing amount of data being generated by electronic health records (EHRs). How can they focus on providing the best possible patient care while keeping costs tightly under control? Data Drives the Modern Healthcare Organization One of the most important steps is to manage the data that’s the heartbeat of their organization. Data makes it possible to provide quality patient care, streamline operations, manage supply inventories, and build sound long-term organizational strategies, among other things. Perhaps today’s most critical healthcare challenge—outside of the frontline clinician-patient encounter—is efficiently, securely, and affordably managing data. Clinicians need to be able to access patient data in real time, around the clock. In the case of acute-care situations, they can’t afford for systems to go down, or to lose data. Administrators need to ensure the security of patient information to protect privacy, meet regulations, and avoid fines and bad PR. Materials management requires systems to monitor critical supplies and keep them stocked at optimal levels to ensure availability, prevent waste, and reduce costs. Executives need real-time analytics to make day-to-day decisions, plan for the long term, and ensure patients continue to receive the best possible care while the industry experiences seemingly constant change and uncertainty. How do you implement innovative and life-saving procedures and technology, hire the best talent, and expand services without going bankrupt? It all comes down to balancing patient care with controlling costs. 
Technology That Performs the Perfect Balancing Act

Healthcare organizations need to manage enormous quantities of data, but they don't always have the budget for top-of-the-line database solutions. Nor do they always have the resources required to manage these systems day in and day out. For many of these midsize healthcare providers, Oracle Database Appliance offers a realistic, affordable option that optimizes performance for Oracle Database. The completely integrated package of software, compute, networking, storage, and backup makes setup simple and fast. At the same time, it delivers the performance and fully redundant high availability so critical to healthcare environments. And it's cloud-ready, so organizations can migrate to the cloud seamlessly. With all the uncertainty healthcare organizations operate under today, they need IT solutions that can adapt as their needs change. Oracle Database Appliance was designed with the flexibility to meet organizations' changing database requirements. Compute capacity can be scaled up on demand to match workload growth.

Protecting Patient Data Must Take Top Priority

Because patient data is so critical to healthcare organizations, they must have reliable, secure backup. Oracle Database Appliance also has an option that makes backup just as simple as system deployment and management. Healthcare organizations that don't want to manually manage backups or maintain backup systems can choose between backing up to a local Oracle Database Appliance or to the Oracle Cloud. In healthcare, protecting patient data has to be a top priority. The backup feature of Oracle Database Appliance offers end-to-end encryption and is specifically designed to include the archive capabilities needed to ensure compliance with the healthcare industry's stringent regulations. One Brazilian healthcare organization ended a two-year search for a solution when it found Oracle Database Appliance.
Santa Rosa Hospital Takes Good Care of Its Patients—and Its Data

Santa Rosa Hospital in Cuiaba, Brazil, needed a database system that could scale to match its rapid growth in patient procedures—and the accompanying growth in the hospital's data. Non-negotiable capabilities for a solution included improved performance, uninterrupted 24/7 access to the database, a safe and efficient backup process, and expandable storage capacity. According to IT Manager Andre Carrion, Santa Rosa searched for two years for a solution but couldn't find one that fit its budget, until it found Oracle Database Appliance with cloud backup. The results were impressive:

Ensured full access to the database even when a server crashed, and increased patient data security. Systems now run on the virtual server in the cloud while the physical server is re-established.

Reduced backup time from 24 hours to 2 hours.

Reduced time to retrieve patient information from as much as 3 minutes to 2 seconds.

Reduced average ER consultation time from 15 minutes to 6 minutes.

Replaced 10 servers with 1 server.

As a bonus, everything was installed and ready to go in just a week. Oracle Database Appliance with easy cloud backup was just what the doctor ordered to support Santa Rosa's growing business without compromising the security of sensitive patient information or breaking the budget.



Oracle Exadata and Oracle 18c: Pushing the Efficient Frontier of Infrastructure

Innovation is the driver behind every new iteration of Oracle Database and Oracle Exadata. And the driver behind the innovation is giving businesses more performance, availability, scalability, and cost-effectiveness. In the words of Exadata Product Manager Gurmeet Goindi, Oracle Database 18c running on the Exadata platform continues to push the efficient frontier of IT infrastructure. Not one to make a statement without backing it up, Gurmeet provides plenty of evidence for why Oracle Database 18c and this engineered system are such a powerful combination.

Why the combination of Oracle Database 18c and Exadata?

Goindi: Exadata is our flagship platform to run the Oracle Database. After more than a decade, it's still, by far, the most efficient, best-performing, and most highly available platform to run Oracle Database. While Exadata has massive and successful on-premises deployments in many leading enterprises, it is also the cornerstone of Oracle Cloud. Exadata runs our software-as-a-service (SaaS) applications and provides database-as-a-service (DBaaS).

How has the increased focus on DevOps factored into enhancements to 18c running on Exadata?

Goindi: Indeed, developers are increasingly dictating how the database should run, and we have to respond to that. One of the ways we've done that is to fully embrace a fast-provision, multitenant architecture—enabling up to 4,000 pluggable databases on the same system. Second, we added a feature that allows us to refresh or maintain a second copy of these pluggable databases and switch them over instantaneously—a kind of pluggable database failover feature. Say you have hundreds of developers working off the same machine, and the data on one of those pluggable databases becomes corrupted. You need a quick way to switch over that one particular database and not disturb anybody else. This architecture enables the customer to do that—a very useful feature when running in a DevOps environment.
Also, in this shared environment, you need granular data access control. We upgraded the security feature so that the pluggable databases support native encryption keys to isolate different developers or apps. We also support Docker containers on Exadata, which allows developers to spin up new containers, stick their app or a new version of the database in, and get it going.

What types of memory innovations have you introduced in Exadata that enhance 18c performance?

Goindi: As a company, we believe the future of analytics is in-memory. In-memory columnar store delivers real-time analytics, because nearly every business needs to run as a real-time organization today. The first phase, which came out in our last Exadata release, used the same server architecture for storage that we have in the database tier, which makes the in-memory technology go very fast at the compute tier and at the storage tier as well. With this latest release, we store the data in flash in the same format as an in-memory columnar store, and we use the same SIMD technology. The hottest portion of the dataset remains in compute memory, and the rest spills over into the flash tier; the entire dataset now benefits from the columnar format. You still have datasets that need the core in-memory performance. With Database 18c on Exadata, we introduced a feature we call Automatic In-Memory that basically eliminates tuning and, instead, automatically evicts less-accessed data to flash and keeps the active dataset in DRAM. These two innovative features, in-memory flash formatting and Automatic In-Memory, provide the performance of DRAM and the capacity of flash. One important thing I want to note is that Exadata is the only platform on which you can run Oracle Database In-Memory on a standby database. This allows you to, for example, offload reports to a standby database and use the active database for ingesting new records.
We even have a function that allows you to lazily update your standby database to keep it in sync with your active database, so that you don't have to wait until the end of the day to upload billions of records to your data warehouse.

Would you talk about the latest automation built into 18c and Exadata, and how it affects management and security?

Goindi: With each new feature or function we add, we strive to simplify management by automating processes. First of all, the system comes pre-configured. Some features may have to be enabled, but they work in an automatic, pre-configured fashion. Customers can tune if they want to, but the default is that they don't have to. In the latest 18c release, we've added a seamless, automatic software update feature. What we've done is create what is really an expert system in which you specify the servers you want to update. You specify a time period for the updates and the software version to update to. You might specify, for example, "Update my Exadata to the latest version on Saturday between 10 p.m. and midnight." This tool will wake up at 10 p.m. It will fetch the bits from wherever you have specified, and it will start updating the servers, one at a time in a rolling fashion, until midnight. When midnight comes, it will stop, wherever it is. Then, next Saturday, it will resume where it left off. If you're running on a non-Exadata system and something like the Spectre and Meltdown vulnerabilities occurs, to protect your system you have to wait for every vendor to put its security update out. They all have to be lined up on the same day, and then all these teams have to work together to implement the patches, because if even one component is not updated, you are not secure. In Exadata, one software update patches and secures the whole stack top to bottom, and with only one vendor.

How does the latest functionality facilitate cloud-scale deployment?
Goindi: Since we run the same Exadata platform in the cloud, the business investment in Exadata is protected, and your transition from on-premises to the cloud is smooth, no matter how much you want to move to the cloud or how quickly (or slowly). At the same time, we make our applications run faster and in a more available fashion. Built-in functionality allows native database sharding across many geographies. It also provides the management efficiencies of cloud scale, enabling management of all your systems as one.

Isn't Exadata perceived as being the high-end, expensive platform that's out of reach for many businesses?

Goindi: Exadata aims to address all customer requirements. Running Oracle Database on Exadata actually provides tremendous cost efficiency while delivering the best combination of performance, availability, and scalability. Because the customer gains all this productivity and reduces staff resources for day-to-day management, the cost-effectiveness keeps increasing as more and more innovation is incorporated. That's why we call it the efficient frontier of IT infrastructure.

About the Author

Gurmeet Goindi is the Master Product Manager for Oracle Exadata at Oracle.



Prescription for Long-Term Health: What Will the Future of Healthcare Tech Look Like?

You can hardly go a day without hearing or reading about new technologies that are changing our world: Internet of Things (IoT), blockchain, adaptive intelligence (AI), and machine learning (ML). Do they have a role in healthcare? They do. In fact, they have the power to reshape patient care and organizational operations dramatically. And while each one can have an impact on its own, they become even bigger game-changers when combined.

IoT's Impact Is Already Being Felt

We're surrounded today by the Internet of Things. Sensors gathering real-time data from manufacturing equipment, smart appliances in our homes, and wearable fitness devices are just a few examples. Now there's another term: the Internet of Healthcare Things (IoHT). IoHT connects all devices and applications to healthcare IT systems, where they can be used in clinical settings as well as in support operations. Frost & Sullivan predicts as many as 30 billion IoHT devices by 2020. One area where IoHT is being applied is remote patient monitoring. Clinicians can use sensors to monitor the vital signs of recently discharged patients and help catch complications early, reducing risk and hospital readmissions. Devices can also be used to help patients self-monitor and self-manage disease, or to remember to take their prescription medications on schedule, putting patients more in control of their own health and improving their outcomes. These same sensors can be used for clinical trials. Oracle has been involved in providing technology behind some of these Class II remote patient monitoring devices. Oracle Health Sciences mHealth Connector Cloud Service can acquire and transmit remote monitoring device data to provide real-time views of patient status and progress. On the operations side, mobile medical equipment traceability is one area where IoHT can help identify, in real time, what equipment is being used and where. This information can help reduce costs through more efficient mobile medical device utilization.
That is, resources can be identified, located, and moved to where they are needed quickly. Studies have found that average mobile device utilization rates are as low as 42 percent. IoHT can also be used to monitor expensive medical equipment such as MRI machines to plan and perform scheduled maintenance and detect problems early so they can be fixed before disrupting operations. In overall hospital facilities management, IoT can be used to monitor, analyze, and understand what's happening throughout the facility, including HVAC, security systems, elevators, and more. These are just a few examples of how IoHT is already changing healthcare. What about blockchain?

Blockchain Generates Unalterable Data Records, Holds Tremendous Potential

While the buzz around cryptocurrencies can make blockchain technology seem somewhat mysterious and glamorous, it's nothing more than a distributed-ledger technology in which transactions are recorded in a "chain" that can be shared among members of a network. The data can be written and read, but it can't be edited—meaning records can't be changed once they're recorded. As blockchain expert Mark van Rijmenam describes it, the data in a blockchain is immutable, verifiable, and traceable. The end result is that everyone in the network has access to this "single source of truth." What does this mean for the healthcare industry? The possibilities are enormous. It could streamline the claims process or speed the medical insurance enrollment process. When combined with IoHT, blockchain can be used to trace medical devices and pharmaceuticals from raw material through final destination. This traceability can provide information about the authenticity of a product, whether it was damaged during transport, or whether it has been exposed to extreme environmental conditions or careless handling that might erode its efficacy—for example, with pharmaceuticals.
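The append-only, tamper-evident property described above can be illustrated with a minimal hash chain. This is a toy sketch of the core idea, not a production ledger: each block's hash covers both its record and the previous block's hash, so editing any earlier record invalidates every later link.

```python
import hashlib
import json

def _digest(record, prev):
    # Hash the record together with the previous block's hash,
    # chaining every entry to the full history before it.
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev, "hash": _digest(record, prev)})

def verify(chain):
    prev = "0" * 64
    for block in chain:
        # Any edit to an earlier record breaks this check for that block
        # and every block after it.
        if block["prev"] != prev or block["hash"] != _digest(block["record"], block["prev"]):
            return False
        prev = block["hash"]
    return True
```

A network of participants each holding a copy of such a chain can independently re-verify it, which is what makes the shared ledger a "single source of truth."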
Having this information early can also expedite a recall and keep faulty or harmful products out of the supply chain. Since blockchain eliminates the need for a "middleman," it can help promote faster and more efficient employee credentialing by verifying caregiver records, or make use of smart contracts that include pre-authorizations between payers and providers. And that's just the start.

Adaptive Intelligence and Machine Learning Can Revolutionize Patient Care

Adaptive intelligence (AI) and machine learning (ML) are disruptive technologies with the ability to consolidate the enormous quantities of patient data being generated by electronic medical records (EMRs), monitoring devices, medical imaging, and other sources to gain a deeper understanding and view of patient populations, outcomes, and costs. The more advanced AI/ML capabilities available today can help with early diagnosis of disease; improvement of prescribing effectiveness; and identification of fraud, security threats, and population risk. The insights available by applying AI and ML to medical data can pair appropriate subjects with clinical trials to increase the chances of success, as well as reduce readmission rates and lower healthcare costs. Chatbots, another form of AI, can further process improvement by automating tasks that used to fall on humans. Researchers at Houston Methodist Research Institute in Texas have developed AI software that can predict breast cancer risk 30 times faster than a human physician and with 99 percent accuracy.

Take a Peek into the Future of Healthcare

In a recent Forbes article, Oracle Senior Vice President for Converged Infrastructure Chuck Hollis gives an example of routine healthcare in the future. Wearing a smart sensor would allow healthcare providers to monitor medication intake. AI applications could alert patients to a problem with blood pressure or sugar levels, or even remind them to take their medications.
Healthcare records could easily be shared among providers. And everything could be verified and protected using blockchain technology.  All of these advanced technologies would be supported by infrastructure designed to maximize their value and performance, like fully integrated engineered systems. With the right technology built on a solid foundation, the prognosis for the healthcare industry is excellent. About the Author Michael Walker is the Industry Solutions Group Global Lead for Healthcare and Life Sciences at Oracle with over 25 years of experience in various capacities working across healthcare, medical device, biopharmaceuticals and clinical research. In addition to Oracle, Mike has held positions in management consulting and industry including, Vice President of Supply Chain, Director of Product Strategy, and operations roles. Mike holds a degree in computer science from the University of Pennsylvania with certifications in Six Sigma and APICS. 



Oracle Exadata: Yes, Database Performance Matters

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle.

Wandering the hallways earlier this year at Collaborate 2018, I had the opportunity to speak with quite a few Oracle customers about their thoughts on Oracle Exadata X7-2. At the risk of oversimplifying their thoughts, I think I can break them into two camps—those who see strong value in extreme performance, and those who aren't so sure they need it. Everyone agreed Exadata performance is impressive, with 350GB/sec throughput and almost 6 million read IOPS, but what about workloads that need only a fraction of those numbers? Do they see any benefit? The answer, as always, is "it depends." However, more often than not, even databases with relatively low performance requirements will benefit from Exadata's performance. More specifically, running an application on an extreme-performance platform can reduce the overall resources required for the application, resulting in the need for fewer cores and fewer Oracle Database licenses. In addition, Exadata's overall performance and capacity make consolidation of a greater number of databases a real possibility, and consolidation is a numbers game. The more databases you can consolidate, the greater the efficiencies: the fewer operating systems and servers to purchase, support, host, and manage, and, with Oracle Multitenant, the fewer instances to manage. In the example below, various workloads are consolidated onto a single server, sharing spare capacity. Since workloads peak at different times over the year, consolidating these various workloads saves substantial CPU resources compared to each running on independent hardware. Empirical tests of various workloads have shown that customers can consolidate four times more databases on the same hardware because of the features of the smart Exadata software.
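The "numbers game" is easy to see with made-up figures. Because the workloads below peak at different times, the consolidated system only has to be sized for the peak of the combined demand, which is far below the sum of the individual peaks (all numbers here are invented for illustration).

```python
# Hour-by-hour core demand for three illustrative workloads that
# peak at different times of day (invented numbers).
payroll = [2, 2, 2, 12, 2, 2]
reports = [3, 10, 3, 3, 3, 3]
web_app = [4, 4, 4, 4, 4, 11]

# Sized independently, each server must be provisioned for its own peak.
independent_cores = max(payroll) + max(reports) + max(web_app)

# Consolidated on one system, only the peak of the combined demand matters.
consolidated_cores = max(p + r + w for p, r, w in zip(payroll, reports, web_app))

print(independent_cores, consolidated_cores)  # prints: 33 19
```

Even in this tiny example, consolidation needs barely more than half the cores, and the gap widens as more workloads with offset peaks share the same hardware.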
Pushing work down to the storage cells dramatically reduces the amount of data the database servers need to process. The ability of the storage servers to scan immense quantities of data in parallel eliminates the need for many database indexes, which also eliminates the need to maintain those indexes. Hybrid Columnar Compression not only reduces the amount of data that must be stored by a factor of ten, but also makes analytics on that data more efficient, as filtering can be done without reading the entire row. Exadata's high-speed InfiniBand communications and generous flash cache ensure the database spends little time waiting on I/O, even when storing data on cost-effective, high-capacity, but relatively slow, HDDs. Unlike a standard server and storage, Exadata is database-aware: it knows what work is latency-sensitive and can prioritize resources for that work to optimize performance. All these performance features reduce the overall time the database server, and connected applications, spend idle, waiting for data.

Swiss Post highlighted the benefits of consolidating onto Exadata at a recent Oracle OpenWorld. They took 480 databases running on 100 physical and 30 virtual servers and migrated them to three Exadata half-racks. They also took 96 SAP databases running on 47 physical and 15 virtual servers and moved them to two Exadata quarter-racks. They realized large benefits in patching and overall management of the system. They increased their management efficiency from 85 databases per DBA to 125 databases per DBA, eliminated dozens of servers, and reduced their overall costs. Perhaps one of the biggest benefits was the single point of accountability—they went from 10 hardware and software vendors to one, eliminating the multiple teams involved in troubleshooting and allowing them to more rapidly resolve issues. The impact of extreme performance, however, can be much larger than enhancing consolidation.
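Why a columnar layout makes filtering cheaper can be shown with a toy example in plain Python lists; this is an illustration of the general row-versus-column idea, not Exadata's actual storage format. A predicate on one column scans only that column, and only the matching positions of other columns are ever read.

```python
# The same tiny table in row format and in column format.
rows = [
    {"id": 1, "region": "EU", "amount": 40},
    {"id": 2, "region": "US", "amount": 75},
    {"id": 3, "region": "EU", "amount": 90},
]
columns = {
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [40, 75, 90],
}

# Row format: every whole row is read just to test the region predicate.
eu_total_rows = sum(r["amount"] for r in rows if r["region"] == "EU")

# Column format: only the region column is scanned for the predicate,
# and only the matching positions of the amount column are read.
match = [i for i, region in enumerate(columns["region"]) if region == "EU"]
eu_total_cols = sum(columns["amount"][i] for i in match)

assert eu_total_rows == eu_total_cols == 130
```

Columnar storage also compresses better, since values within one column are far more alike than values within one row, which is the intuition behind the compression factors quoted above.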
As summarized by Jonathan Walsh, Head of BI & DW at Morrisons Plc: "With query times dropping from minutes to seconds, Exadata has changed the way people work." Hundreds of customers have adopted Exadata not to speed up existing workloads, but to enable new workloads. Analytic tasks that were impossible or impractical in the past are now becoming routine. Businesses are using Exadata to analyze data in real time, to make better decisions based on huge stockpiles of data. A large financial institution in the US moved to Exadata and found it enabled more effective fraud detection on transactions. The deeper, more sophisticated fraud analysis enabled by Exadata's performance allowed them to detect fraudulent transactions in real time more reliably while reducing false alarms. High performance can also help customers better meet their performance and availability SLAs. Overall database performance can easily be adversely affected, whether by a workload spike, a change in workload patterns, or even a change in the application. Exadata's extreme performance ensures such events will be absorbed by the system, reducing the likelihood that response times or throughput metrics will violate the standards you've negotiated with your end users. So, to me, it's obvious that performance does matter. Think about what you can do with an extreme-performance database system. Look at your opportunities for consolidation and think about what you can save. More importantly, talk to your end users and application developers. Find out what they could do with better performance and stricter SLAs. What strategic value could they derive from their IT systems if they could process data in a fraction of the time it takes today? This is blog 2 in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata.
Our next post on Oracle Exadata will also focus on Performance, but will explore why only an Engineered System can deliver this level of performance. About the Author Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.



Prescription for Long-Term Health: How Data Can Deliver Better Patient Care

You can’t treat what you can’t see. In the healthcare arena, this statement rings true not only for patient care but also for the business performance of healthcare organizations. With the passage of the Affordable Care Act and its associated focus on value-based versus volume-based care, both payers (health plans/insurers) and providers (hospitals/clinics/integrated delivery systems) are finding they must change the way they do business to make their patients healthier—and to manage the costs associated with achieving that goal. Big data and predictive analytics are leading the way in this new business model. However, healthcare organizations still struggle to manage and leverage effectively the massive amounts of data they collect and generate.   Payers Face Pressure to Do More with Data Historically, payers applied business intelligence tools primarily to claims data stored in data warehouses to reduce overall costs, detect fraud, and optimize the operation of their facilities. This data helped payers gain market share within partner networks and cut overall claim costs by detecting fraudulent and wrongful claims early. But today, payers feel growing pressure to make fuller use of data. Specifically, they’re looking to negotiate more favorable, lower-cost contracts with providers; predict high-risk patients for Accountable Care Organizations (ACOs); anticipate how value-driven activities such as tests and procedures will improve clinical outcomes; and quantify healthcare costs and recommended wellness activities.   New Goals Require New Analytics Strategies These are highly sophisticated goals, and payers are turning to highly sophisticated datasets to accomplish them. 
They’re collecting and analyzing a dizzying array of customer experience data (including sentiment, social, website, call center notes, financial and demographic information, and IoT data from wearable devices); supplementary clinical EHR data points and pharmacy records; and additional billing, financial, and provider supply chain data. To draw value from it all, payers are also rapidly evolving their information architecture to support predictive analytics. Forward-thinking organizations are leveraging analytics to understand and meet inventory needs, develop better pricing models, slash fraud and waste, optimize their workforces, manage costs, and improve profitability.

Blue Cross of California, Family Health Network, and Healthcare Services Corporation (HCSC) worked with Oracle to strengthen their foundational systems and reduce costs. These organizations chose Oracle Cloud Applications to help them provide healthcare coverage at a more affordable cost by digitizing and modernizing their financial, planning, budgeting, and business processes. Oracle Enterprise Resource Planning (ERP) Cloud has helped all of these organizations simplify and streamline operations with increased visibility and insight into financial and operational activities. Reducing IT complexity and costs has increased productivity, freeing employees to provide insured members with better, more affordable health insurance plans.

Providers Use Data to Deliver Better Patient Care and a Stronger Bottom Line

Providers constantly face the challenge of managing costs while improving the patient experience and patient health outcomes. Facing the same pressures as payers when it comes to shifting reimbursement models, providers are also looking for new and innovative ways to analyze patient data to improve clinical quality and strengthen their bottom lines.
With patient data going digital, providers have a growing body of information they can consolidate and analyze to improve patient outcomes in the move to value-based care and population health-based business models. And while clinical data is fueling the initial analytics drive, providers are also looking beyond clinical information. Effective population health management requires healthcare providers to rely heavily on additional data derived from their own IT systems. Identifying patients at high risk for chronic diseases, or at risk of failing to follow treatment protocols, is a significant challenge for many organizations, but this capability is quickly becoming essential for value-based care.

Volume and Velocity of Data Are Increasing

To develop a comprehensive portrait of a patient’s clinical, financial, and social risks, healthcare providers must aggregate key data from across the care continuum before they can leverage risk scoring frameworks and target interventions to individuals. Those data sources include not only clinical data from EHRs but also genomic sequencing profiles, billing and financial records, and customer experience and wearables data. To organize and leverage this rising sea of data effectively, organizations need a robust IT infrastructure. Oracle Big Data Appliance and Oracle Database Appliance serve as the foundation for a data solution that can harness clinical and related data and turn it into better patient outcomes.

Wit-Gele Kruis is a home-care organization providing nursing services to more than 150,000 homes across five provinces in Flanders, Belgium. The organization wanted to increase operational efficiency and improve patient care by providing nurses and management staff with timely, accurate, up-to-date business intelligence. Wit-Gele Kruis deployed Oracle Database Appliance and Oracle Business Intelligence Suite with a focus on improving its analytics and automation capabilities.
Now, service department leaders can monitor staff performance in real time rather than having to generate monthly performance reports. New dashboards allow them to detect and respond immediately to changes in demand for nursing services throughout the provinces. In addition, the organization has automated the creation and delivery of customized reports to staff, managers, and employees, which has saved time and helped prevent errors. Says Steven De Block, IT manager of Wit-Gele Kruis, “Our new data warehouse, which is run on Oracle Database Appliance, improved the quality of home care we provide to tens of thousands of patients.”

The Future of Healthcare Is Built on Cloud-Ready IT Infrastructure

The more that payers and providers can leverage the data being generated by a growing ecosystem of sources, the better they will be able to deliver on their goals of providing better care at lower cost. Emerging information sources such as genetic data, digitized clinical data, customer experience data, and information from IoT-powered wearable technologies will drive more personalized care and reduce overall healthcare costs. Payers and providers appreciate the role that predictive analytics will play in their efforts to personalize treatment, manage chronic diseases, and mitigate clinical and financial risks. In fact, 93% of healthcare organizations participating in a 2017 Society of Actuaries survey said they won’t be able to address the financial and clinical challenges of the future without investing in forward-looking big data analytics. These analytics capabilities, however, will not be possible without the scalability of the cloud. Healthcare organizations need cloud-ready IT infrastructure that not only makes it simple to move operations to the cloud but also includes hardware and software engineered to work optimally together for the best possible performance.
Oracle engineered systems deliver out-of-the-box functionality for the high-end analytics that healthcare demands and offer identical on-premises and cloud infrastructure for seamless migration to the cloud.

About the Author

Michael Walker is the Industry Solutions Group Global Lead for Healthcare and Life Sciences at Oracle, with over 25 years of experience working across healthcare, medical devices, biopharmaceuticals, and clinical research. In addition to Oracle, Mike has held positions in management consulting and industry, including Vice President of Supply Chain, Director of Product Strategy, and operations roles. Mike holds a degree in computer science from the University of Pennsylvania, with certifications in Six Sigma and APICS.


One-to-One at Scale: The Confluence of Behavioral Science and Technology and How It’s Changing Business

Consumer and business customers increasingly expect businesses to provide products and services customized for their unique needs. Adaptive intelligence and machine learning technology, combined with insights into behavior, make this customization possible. The financial services industry is moving aggressively to take advantage of these new capabilities. In March 2018, Bank of America launched Erica, a virtual personal assistant—a chatbot—powered by AI. In just three months, Erica surpassed one million users. But achieving personalization at scale requires an IT infrastructure that can handle huge amounts of data and process it in real time. Engineered systems purpose-built for these cognitive workloads provide the foundation that helps make this one-to-one personalization possible.

Bradley Leimer, Managing Director and Head of Fintech Strategy at Explorer Advisory & Capital, provides consulting and investment advisory services to start-ups, accelerators, and established financial services companies. As the former Head of Innovation and Fintech Strategy at Santander U.S., his team connected the bank to the fintech ecosystem. Bradley spoke with us recently about how behavioral science is evolving in the financial services industry and how new technological capabilities, when tied to human behavior, are changing the way organizations respond to customer needs.

I know you’re fascinated by behavioral science. How does it frame what you do in the financial sector?

Behavioral science is fascinating because the study of human behavior itself is so intriguing. One of the books that influenced me early in my career was Paco Underhill’s 1999 book Why We Buy. The science around purchase behavior and how companies leverage our behavior to create buying decisions that fall in their favor—down to where products are placed and the colors used to attract the eye—these are techniques that have been in use since before the Mad Men era of advertising.
I’m intrigued by the psychology behind the decisions we make. People are a massive puzzle to solve at scale. Humans are known to be irrational, but they are irrational in predictable ways. Leveraging behavioral science, along with disciplines like design thinking and human-computer interaction, has been part of building products and customer experiences in financial services for some time. Nudging customers to sign up for a service, take on an additional product, or perform sometimes painful behaviors such as budgeting, saving more, investing, consolidating, or optimizing their use of credit all involves a deep understanding of human behavior.

Student debt reached $1.5 trillion in Q1 2018. Can behavioral analytics be used to help students better manage their personal finances?

What’s driving this intersection between behavioral science and fintech?

Companies have been using the ideas of behavioral science in strategic planning and marketing for some time, but it’s only in the last decade that the technology to act on the massive amount of new data we collect has become available. The type of data we used to struggle to plug into a mainframe through data reels now flies freely within a cloud of shared service layers. So beyond new analytic tools and AI, there are a few other things that are important.

People interact with brands differently now. Becoming a customer in financial services now most often means interacting through an app or a website, not in any physical form. It’s not necessarily how a branch is laid out anymore; it’s how the navigation works in your application, what you can do in how few steps, and how quickly you can onboard. This is what is really driving the future of revenue opportunity in the financial space. At the same time, the competition for customers is increasing.
Investments in the behavioral science area are a must-have now, because the competition gets smarter every day and the applications for understanding human behavior are simultaneously becoming more accessible. We use behavioral science to understand and refine our precious opportunities to build empathy and relationships.

You’ve mentioned the evolution of behavioral science in the financial services industry. How is it evolving, and what’s the impact?

Behavioral science is nothing without the right type of pertinent, clean data. We have entered the era of engagement banking: a marketing, sales, and service model that deploys technology to achieve customer intimacy at scale. But humans are not just 1s and 0s. You need a variety of teams within banks and fintechs to leverage data in the right way, to make sure it addresses real human needs. The real impact of these new tools has only started to be felt. We have an opportunity to broaden the global use of financial services to reduce the number of the underbanked, to open new markets for payments and credit, to optimize every unit of currency for our customers more fully, and to lift up a generation by ending poverty and reducing wealth inequality.

40% of Americans could not come up with $400 for an emergency expense. Behavioral science can help move people out of poverty and reduce wealth inequality.

How does artificial intelligence facilitate this evolution?

Financial institutions are challenged with innovating a century-old service model, and the addition of advanced analytics and artificial intelligence tools, and how they can be used within the enterprise, is still a work in progress. Our metamorphosis has been slowed by the dual weight of digital transformation and the broader implications of ever-evolving customers. Banks have vast amounts of unstructured and disparate data throughout their complicated, mostly legacy systems. We used to see static data modeling efforts based on hundreds of inputs.
That has transitioned to an infinitely more complex set of thousands of variables. In response, we are developing and deploying applications that make use of machine learning, deep learning, pattern recognition, and natural language processing, among other capabilities. Using AI applications, we have seen efficiency gains in customer onboarding and know-your-customer (KYC) processes, automation of credit decisioning and fraud detection, personalized and contextual messaging, supply-chain improvements, highly tailored product development, and more effective communication strategies based on real-time, multivariate data. AI is critical to improving the entire lifecycle of the customer experience.

What’s the role of behavioral analytics in this trend?

Behavioral analytics combines specific user data (transaction histories, where people shop, how they manage their spending and savings habits, their use of credit, historical trends in balances, how they use digital applications, and how often they use different channels like ATMs and branches) with technology usage data such as navigation paths, clicks, social media interactions, and responsiveness to marketing. It takes a more holistic and human view of data, connecting individual data points to tell us not only what is happening, but also how and why it is happening.

You’ve built out these customization and personalization capabilities in banks and fintechs. Tell us about the basic steps any enterprise can take to build these capabilities.

As an organization, you need to clearly define your business goals. What are the metrics you want to improve? Is it faster onboarding, lower cost of acquisition, a quicker turn toward profitable products? And how can a more customer-centric, personalized experience advance those goals? As you develop these, make sure you understand who needs to be in the room. Many banks don’t have a true data science team, or they have a sort of hybrid analytical marketing team that serves many masters.
That’s a mistake. You need a deep understanding of advanced analytics to derive the most efficiency from these projects. Then you need a strong collaborative team that includes marketing, digital banking, customer experience, and representation from the teams that interact with clients. Truly user-centric teams leverage data to create a complete understanding of their users’ challenges. They develop insight into which features their customers use and which they don’t, and they build knowledge of how customers get the most value out of their products. And then they continually iterate and adjust. You also need to look at your partnerships, including those with fintechs. There are several lessons to draw from fintech platforms, such as attention to growth through business model flexibility, devotion to speed-to-market, and a focus on creating new forms of customer value by leveraging these tools to customize everything from onboarding to the new-user experience, as well as how they communicate and customize the relationship over time.

What would be the optimum technology stack to support real-time contextual messages, products, or services?

Choosing the right technology stack for behavioral analytics is not that different from choosing one for any other type of application. You have to find the solution that maps most economically and efficiently to your particular problem set. This means implementing technology that can solve the core business problems, can be maintained and supported efficiently, and minimizes your total cost of ownership. In banking, it also has to reduce risk while maximizing your opportunities for success. The legacy systems that many banks still run were built on relational databases and were not designed for real-time processing, for providing access via RESTful APIs, or for the cloud-based data lakes we see today. Nor did they have the ability to connect and analyze any form of data. The types of data we now have to consider are breathtaking and growing daily.
In choosing technology partners, you want to make sure what you’re buying is built for this new world from the beginning and that the platform is flexible. You have to be able to migrate between on-premises solutions and the cloud, along with the variety of virtual machines being used today.

If I can paraphrase what you’re saying, it’s that financial services companies need a big data solution to manage all these streams of structured and unstructured data coming in from AI/ML and other advanced applications. Additionally, a big data solution that simplifies deployment by offering identical functionality on-premises, in the cloud, and in the Oracle public cloud behind your firewall would also be a big plus. Are there any other must-haves in terms of performance, analytics, and so on, to build an effective AI-based solution?

Must-haves include the flexibility to consume all types of data, especially data gathered from the web and from digital applications. The solution needs to be very good at data aggregation—that is, reducing large data sets down to more manageable proportions that are still representative. It must be good at transitioning from the aggregate to the detail level and back, to optimize different analytical tools. It should be strong at quickly identifying cardinality—how many distinct values a given field can take. Some other things to look for in a supporting infrastructure are direct access through query tools (SQL); support for data transformation within the platform (ETL and ELT tools); a flexible data model or unstructured access to all data; algorithmic data transformation; the ability to add and access one-off data sets simply (for example, through ODBC); and flexible ways to use APIs to load and extract information. A good system also needs to operate in real time to help customers take the most optimized journey within digital applications.
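To make two of those must-haves concrete, here is a minimal sketch in Python of aggregation (reducing event-level data to a representative per-customer summary), cardinality checking, and drilling back from the aggregate to the detail level. The event data, field names, and customer IDs are purely hypothetical illustrations, not drawn from any real banking system:

```python
from collections import defaultdict

# Hypothetical event-level behavioral data: (customer_id, channel, amount).
# All names and values are illustrative assumptions.
events = [
    ("c1", "app", 25.0),
    ("c1", "atm", 100.0),
    ("c2", "app", 12.5),
    ("c2", "branch", 40.0),
    ("c3", "app", 7.5),
]

# Cardinality: how many distinct values a given field can take.
channels = {channel for _, channel, _ in events}
print(f"channel cardinality: {len(channels)}")

# Aggregation: reduce event-level detail to a per-customer summary
# that is smaller but still representative of the underlying behavior.
summary = defaultdict(lambda: {"txn_count": 0, "total_spend": 0.0, "channels": set()})
for customer, channel, amount in events:
    row = summary[customer]
    row["txn_count"] += 1
    row["total_spend"] += amount
    row["channels"].add(channel)

# Transition back from aggregate to detail for one customer, e.g. to
# inspect the individual transactions behind a flagged summary row.
c1_detail = [e for e in events if e[0] == "c1"]
```

In a production system these operations would run in a database or big data platform rather than in application memory, but the pattern of summarize, profile cardinality, and drill back to detail is the same.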
To wrap up our discussion, what three tips would you give the enterprise IT chief about how to incorporate these new AI capabilities to help the organization reach its goals around delivering a better customer experience?

First, realize that this isn’t just a technology problem. It will require engineers, data scientists, system architects, and data specialists, sure, but you also need a collaborative team that involves many parts of the business and builds tools that are accessible. Second, start with simple KPIs to improve. Reducing the cost of acquisition, improving onboarding workflows, improving release time for customer-facing applications, and reducing particular types of unnecessary customer churn are good places to start. They improve efficiency and impact the bottom line, and they help build the case for necessary new technology spend and create momentum. Third, understand that the future of the financial services model is all about the customer—understanding their needs and helping the business meet them. Our greatest source of innovation is, in the end, our empathy.

You’ve given us a lot to think about, Bradley. Based on our discussion, it seems the world of financial services is changing, and banks today will require an effective AI-based solution that leverages behavioral science and personalization capabilities. To sustain a competitive advantage and lead the market, they also need to invest in an effective big data warehousing strategy. Business and IT leaders therefore need a solution that can store, acquire, and process large data workloads at scale and that has cognitive workload capabilities to deliver the advanced insights needed to run the business effectively. It is also important that the technology is tailor-made for advancing a business’s analytical capabilities while leveraging familiar big data and analytics open source tools.
And Oracle Big Data Appliance provides that high-performance, cloud-ready, secure platform for running diverse workloads using Hadoop, Spark, and NoSQL systems.



Prescription for Long-Term Health: Maximizing the Benefits and Minimizing the Risks of the Cloud

Cloud computing has revolutionized the way we live our lives every day, significantly altering how we communicate, shop, travel, and manage our personal data. Healthcare organizations have been more cautious about adopting cloud technology than many other industries, given the stringent regulations they face around patient data protection. Traditionally, they’ve focused their technology investments on compliance with regulations including HIPAA, HITECH, the Affordable Care Act (ACA), and mandates surrounding electronic medical records (EMR).

But their hesitation about the cloud is quickly fading. According to a 2017 survey by HIMSS Analytics, about 65% of participating healthcare organizations currently use the cloud or cloud services within their organizations. Much of the usage leans toward clinical application and data hosting, data recovery and backup, and hosting operational applications.

The surge in digitization of medical information—including the adoption of electronic health records (EHRs) and digital outputs from MRIs, bedside monitors, IoT-powered wearable technology, and genomic testing—makes cloud technology even more vital for healthcare. The scalability of robust cloud solutions lets healthcare providers effectively collect, mine, and analyze this mountain of data to drive better decisions about patient care and greater business efficiencies. That’s because cloud computing makes activities like big data analytics and mobile collaboration possible and information exchange seamless. It’s no surprise, then, that the healthcare industry spent $3.73 billion on cloud services last year, a figure expected to reach $9.48 billion by 2020. Adoption is accelerating as more and more organizations see the value of cloud.

Cloud Adoption Benefits Patients and Providers

Healthcare organizations are realizing that leveraging cloud technology can simplify their IT operations, save them time and money, and allow them to scale ideas faster to drive innovation.
Perhaps more important, they now see the benefits to patients and providers alike.

Patients can expect more control over their personal health information. They can access their own records more easily and have that information shared with other providers through electronic data interchange. Easy access to a core set of health information drives more personalized care and stronger patient engagement.

Providers can streamline their internal procedures and better coordinate care. This includes being able to share information access securely with office employees, insurers, pharmacies, specialists, and other providers. Beyond this, data analytics continues to be one of the fastest-growing segments of the healthcare IT budget, with ongoing investment in value-based care, clinical quality improvements, provider and care team performance analytics, referral patterns, and financial management.

Cloud-Ready Infrastructure Drives Real-World Results

One organization reaping the rewards of a decisive move to the cloud is the National Health Service (NHS) in the United Kingdom. The NHS Business Services Authority (NHSBSA) implemented a set of Oracle solutions—including Oracle Exadata and Oracle Advanced Analytics—to make optimal use of its data and realized over £700 million in savings, while also leveraging data analytics to improve patient care. Since implementing the solutions, NHSBSA has been able to combine billions of data points on prescriptions, medicines, medical exemptions, doctor relationships, and call center services from across the organization and develop insights that reveal potential new efficiencies—all of which enable the wider healthcare system to provide better patient outcomes. In addition to these deep analytics capabilities, NHSBSA was also attracted by Exadata’s robust compliance and security capabilities.
Nina Monckton, Chief Insight Officer for the NHS, says, “The security is a big plus for us—the data centers are in EU locations, and the encryption on the database gives us a sense that everything’s okay.”

Delivering Cloud Services Behind the Firewall

More and more healthcare organizations appreciate and want the benefits that cloud can deliver, but there’s still some hesitation within the industry due to lingering concerns about important issues like compliance, security, application latency, and data control. To address these concerns, healthcare organizations that choose not to move to the public cloud now have the option to bring the cloud into their own data centers. Cloud at Customer provides the same hardware and software platform that Oracle uses in its own cloud data centers and puts it into a “cloud machine” that lives behind the customer’s firewall. “We are essentially stretching out our public cloud to reach the customer’s data center,” says Nirav Mehta, Oracle vice president of product management for Cloud at Customer. To help alleviate fears around data security, healthcare organizations retain full control of their data while getting Oracle enterprise-grade cloud SaaS, PaaS, and IaaS services within their own data centers. Especially important for organizations operating in Europe, this approach keeps them in compliance with regulations that prevent personal health information (PHI) from being stored outside the country.

One healthcare organization seeing the benefits of Oracle’s Cloud at Customer is Hospital de Clínicas de Porto Alegre (HCPA). HCPA slashed costs, simplified IT systems, and improved EMR protection. “We chose Oracle because it has been an extremely reliable partner over the years,” said Valter Ferreira de Silva, CIO at HCPA.
“In a critical environment like ours, all hospital systems, operations, and infrastructure need to be running on a 24x7 basis.”

Ensuring Longevity with Oracle Engineered Systems

One of the major reasons healthcare organizations have chosen Oracle is that all of its technology is co-engineered so that every level of the stack works together for optimized performance and data security. Oracle engineered systems are cloud-ready and include HIPAA attestation, so healthcare organizations can feel confident about data security as they take advantage of the scalability and sheer processing power of the cloud. And by unifying and standardizing on the Oracle stack (engineered systems, Oracle Cloud, and so on), organizations can tackle the hard work of patching, managing, and securing IT infrastructure sprawl from chip to cloud. With completely integrated hardware and software designed for specific workloads, and cloud delivery models like Cloud at Customer that facilitate compliance, healthcare organizations have the infrastructure to finally reap all the benefits of cloud and ensure a healthier future.

About the Author

Michael Walker is the Industry Solutions Group Global Lead for Healthcare and Life Sciences at Oracle, with over 25 years of experience working across healthcare, medical devices, biopharmaceuticals, and clinical research. In addition to Oracle, Mike has held positions in management consulting and industry, including Vice President of Supply Chain, Director of Product Strategy, and operations roles. Mike holds a degree in computer science from the University of Pennsylvania, with certifications in Six Sigma and APICS.



Examining the Current State of Healthcare and Technology

The winds of unprecedented change have buffeted the healthcare industry for more than two decades. Amid the constant battle to keep up with the changes and manage through continual chaos and uncertainty, healthcare organizations are turning to technology as a prescription for long-term health.

An Industry Health Check: Where Does the Industry Stand Today?

Regulatory compliance remains a dominant challenge. Regulatory requirements have dominated the healthcare industry’s focus, especially in the IT arena, as organizations are forced to upgrade systems to meet constantly evolving regulations. Here are some of the major changes that sent ripples through healthcare industry IT departments.

HIPAA (1996): The Health Insurance Portability and Accountability Act established security standards and general requirements to protect the privacy and security of electronic protected health information (PHI) as healthcare organizations moved from paper records to electronic records.

HITECH (2009): Enacted to strengthen enforcement of HIPAA rules, the Health Information Technology for Economic and Clinical Health Act addresses the transmission of electronic health records.

ACA (2010): The Affordable Care Act, aka “Obamacare,” had the goals of making affordable health insurance more widely available, expanding Medicaid to adults with incomes up to 138% of the federal poverty level, and lowering the cost of healthcare in general. The ACA caused a seismic shift in healthcare practices, on both the provider and the payer sides. It required healthcare providers to extend more services to more patients amid changing reimbursement models. All these changes had to be reflected in the electronic systems of healthcare organizations.

ICD-10 (2015): The International Statistical Classification of Diseases and Related Health Problems, 10th revision, created tens of thousands of new diagnosis codes used for medical billing in the United States.
The US edition now contains about 70,000 codes, compared with 14,000 in ICD-9. For FY2019, the Centers for Disease Control and Prevention (CDC) announced another 473 ICD-10 code changes, effective October 1, 2018. Regulatory compliance leads to associated systems changes, which is why so much time and energy has been spent in this area. But regulation isn’t the only challenge facing healthcare.

An aging population is putting increasing demands on the healthcare system

The baby boomer generation (born from 1946 to 1964), still about 75 million strong in the US, is now 54 to 72 years old. As this generation ages, its members consume more healthcare. Members of the cohort naturally have more chronic conditions than younger people. And, as they reach the eligible age, they are using Medicare and Medicaid entitlements rather than private insurance to pay for their care.

The focus is shifting to a more holistic approach to medicine

Another major upheaval is the industry shift to managing health along a continuum rather than treating people at the hospital when a medical event occurs. This perspective starts with preventive medicine to keep people healthier so that they need fewer healthcare services. When they do get sick, this new model incorporates remote patient monitoring after treatment to help them recover more quickly and minimize readmissions. This focus on wellness and care across a continuum is leading healthcare organizations to look at value-based care models that measure outcomes and move away from the traditional fee-for-service model. An offshoot of this is population health management, which looks at outcomes for groups rather than individuals and relies heavily on business intelligence and data analytics to be viable.
Uncertainty is the watchword

In addition to all these major trends, the industry faces doctor shortages, declining reimbursements, soaring drug prices, rising health insurance premiums and out-of-pocket costs for the insured, and uncertainty around how the ACA will evolve under the current administration.

Technology as the Key to a Healthy Long-Term Prognosis

Until recently, healthcare organizations have focused their technology adoption efforts on moving to electronic health records (EHRs) and complying with ongoing regulatory changes and requirements. Now, the focus has shifted to improving patient outcomes and providing a better patient experience, increasing clinician satisfaction, streamlining business processes, implementing innovative practices and programs, improving overall organizational productivity, maximizing revenue, and managing costs. This re-focusing of resources will help organizations adapt effectively to the new world of healthcare.

Some of the specific initiatives organizations are working on include exploring how to implement value-based care and population health models; implementing telemedicine and remote patient monitoring programs; reducing waste by optimizing supply levels with better materials management systems; and identifying the most and least profitable departments within the healthcare organization and outsourcing services when it’s economically beneficial and leads to better patient outcomes.

What Lies Ahead for Healthcare and This Series?

All of these initiatives require consolidation of data and systems to gain a 360-degree view of operations and apply business intelligence and predictive analytics effectively. And that requires a sound IT infrastructure built on integrated systems that are designed to seamlessly manage enormous amounts of data, consolidate that data for complete visibility, and run real-time analytics that lead to smarter decisions and better patient care.
Healthcare organizations need to implement innovative solutions while mitigating risks. Purpose-built for the database with identical infrastructure on-premises and in the cloud, Oracle engineered systems provide a single, integrated infrastructure that can help these organizations scale and adapt cost-effectively and securely. Many healthcare organizations are finding that implementing hybrid cloud solutions such as Oracle Exadata and Oracle Cloud at Customer helps facilitate a smooth, strategic journey to the cloud. In this series, we speak with industry expert Michael Walker, Global Healthcare Lead, Healthcare and Life Sciences at Oracle, to look at how technology is helping build healthier organizations and how Oracle technology is the right prescription to achieve that health. Stay tuned for more.


How are Organizations Capitalizing on Private and Public Clouds?

A recognized thought leader in the consulting domain, PwC Partner Faisal Ghadially is TOGAF 9-certified in the Enterprise Architecture domain and author of Oracle Fusion Applications Administration Essentials. He has deep hands-on experience in Oracle Cloud, Oracle Database, Oracle E-Business Suite, and Oracle Fusion Middleware. Faisal has earned industry awards, is a member of several advisory boards, and is a chairperson of user groups.

As explained by Faisal, cloud computing provides the flexibility needed to help businesses innovate and move quickly. Enterprises realize that cloud computing is now an essential competitive driver for their organizations. In its “8 Trends in Cloud Computing for 2018,” Unfold Labs included these predictions for market leadership: growth in cloud services (SaaS, PaaS, IaaS), increase in hybrid cloud solutions (cloud-to-cloud and cloud-to-on-premises connectivity), and serverless cloud computing. With cloud-ready infrastructure—specifically engineered systems infrastructure that is built to be identical on-premises and in the cloud—the move to the cloud becomes quick and seamless, helping businesses focus on innovation rather than hardware and software. Faisal joins us to explore how enterprises can build the infrastructure and business layer of their systems to facilitate rapid innovation in the cloud.

What’s driving organizations to consider working in the cloud?

A cloud platform lets organizations quickly dominate a market through rapid deployment, because capacity can be spun up for peak workloads and decreased when it’s no longer needed. A cloud platform also delivers new services and on-demand capabilities “as a service,” which allows companies to move much faster than their competitors. What it boils down to is that companies are really looking for the cloud to become a critical driver of their new IT strategy and a fundamental, competitive differentiator.
What is the right way for companies to move to the cloud?

Organizations are eager to realize the benefits of the cloud. They want the agility, elasticity, scalability, security, backup recovery, and so on. But enterprises with well-established applications—ERP, supply chain, HR, etc.—can’t just flip a switch and say, “Okay, I’m now running in the cloud.” For those organizations, going to the cloud is a multi-year journey, typically with four or five major rollouts. They may need to integrate these existing applications by moving to platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) to move from the data center to the cloud. Or they can upgrade with SaaS applications. In either case, some mission-critical applications are likely, and correctly, going to remain on-premises, creating a hybrid approach and the best of both worlds. However, it’s essential that users be able to move seamlessly back and forth between on-premises and cloud applications.

Once it’s been decided what applications you want to move to the cloud, you have to decide which are the best candidates. To do that, we follow the minimal viable product, or MVP, strategy. You look at your business functions and define self-contained capabilities that can be moved to the cloud. Then define the roadmap of moving larger capabilities in increments to the cloud. That’s the minimal viable product. This approach works very well, because application users immediately see the value. Then you can grow to the next, larger division that subsumes the first one. These migrations typically start with finance-related functions. We move finance to the cloud, then look at areas like HR, order management, supply chain, and so on. We step the organization to the cloud through MVP stage gates. Between each gate is a stabilization period. There’s a lot of change involved, so people need to feel confident in their first step before moving to the next.
Adding to your excellent discussion, Faisal, we can also say that mission-critical applications often remain on-premises. So on-premises infrastructure needs to support the transition to the cloud. This allows for a hybrid approach—the best of both worlds. ESG conducted a survey of enterprise customers for Oracle and found that 74% say it is critical or very important that their on-premises environment is equivalent to the public cloud.

Yes, you want infrastructure on-premises that is identical to that in the cloud, so you can support the transition to the cloud. You also want this infrastructure to be co-engineered so that all of the hardware and software are optimized to work perfectly with each other in the cloud and on-premises. This means as enterprises move to the cloud, they can follow this MVP strategy and move as much or as little to the cloud whenever they’re ready—and migrate their workloads between their data center and the cloud as needed.

What is the role of the cloud in encouraging innovation?

The cloud creates a powerful idea incubator by connecting or creating a community of developers. It provides the environment to deliver innovation on tap. People can publish concepts or innovations and others can pick up what makes sense to them. Then somebody else chimes in and says, “That’s great but it doesn’t work in this area … so I built this.” I saw this viral effect in the Oracle Marketplace when people began uploading expense-report templates. There was a flood of them, because everyone has a unique way of submitting expenses. That was great, because no one had to build something from scratch. Additional innovation results from the ability of cloud-based service vendors to analyze how people use their applications and identify patterns in things users do.
Based on that analysis, the community can offer guidance, like, “I see what you’re trying to do—85 percent of people did it this way.” And when organizations migrate functions to the cloud, I’ve seen many cases where they realized new capabilities. They say, “I didn’t know we could do it this way. Let’s adapt our business process to adopt this new capability, because it will save us a lot of time and money.”

What’s the connection between blockchain and the cloud?

The cloud consists of an ecosystem of different enterprise clouds. Blockchain could become the pseudo-integration connecting the different elements in the ecosystem, with blockchain supporting the cloud rather than vice versa. Blockchain becomes a kind of integration platform. The promise is huge. Whenever I say blockchain, the first thing people mention is Bitcoin. Bitcoin is simply one blockchain use case, but it’s a very small sliver of what blockchain can do.

Blockchain provides two things. First is the ability to keep an endless record of interest in a given business object. If my business object is an invoice, I can keep track of everything related to that invoice, including payments made against it, goods received, when they were received, where they are now, etc. An endless lifecycle of the invoice can be maintained with full traceability. Second, blockchain offers me full and complete security related to an activity or event. The blockchain cannot be updated or changed unless all parties involved agree the change is true. When you apply that in the broader enterprise domain, there are tons of use cases. In the supply chain of a cable box manufacturer, for example, I can treat that cable box as an object in a blockchain. The object could include all the parts in the box, contractual agreements assigned to it, customers who bought it, service or quality issues related to it. I know everything that ever touched that box. That is extremely powerful.
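The two properties described here (an append-only history of a business object, and tamper evidence when anyone alters a past record) can be sketched with a minimal hash chain. This is a toy illustration only, not any vendor's blockchain platform; the `InvoiceLedger` class and the event names are invented for the example.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class InvoiceLedger:
    """Append-only, hash-chained history for one business object."""
    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record: dict):
        prev = self.chain[-1][1] if self.chain else "genesis"
        self.chain.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "genesis"
        for record, stored in self.chain:
            if block_hash(record, prev) != stored:
                return False
            prev = stored
        return True

ledger = InvoiceLedger()
ledger.append({"event": "invoice_issued", "invoice": "INV-100", "amount": 5000})
ledger.append({"event": "goods_received", "invoice": "INV-100"})
ledger.append({"event": "payment", "invoice": "INV-100", "amount": 5000})
assert ledger.verify()

# Rewriting history is detectable, because every later hash depends on it:
ledger.chain[0][0]["amount"] = 1
assert not ledger.verify()
```

A real platform adds the multi-party agreement Faisal mentions (consensus) on top of this basic tamper evidence.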
Apply the same concept to journals or general ledger. You suddenly have the ability for all parties involved to approve each transaction. The whole concept of what financial departments currently do at month-end close goes away because you are doing a real-time close every second with every transaction. You’re 100-percent reconciled with all your trading partners at the point when a change happens.

My opinion is that blockchain can have an impact similar to the internet’s. We evolved from client-server to internet-based and then e-commerce, and everything that’s happened since. Blockchain has the same scale of potential. Our ability to comprehend the uses of blockchain technology is the only limiting factor. There will be a tipping point where a large vendor or customer is going to say, “I will now transact with you over a blockchain.” That would drive an entire ecosystem to change. For example, Walmart may say, “I can do my B2B transactions using EDI but am also offering a blockchain platform, and if you use the blockchain platform, I will give you a half-percent discount.” I think it will evolve into private blockchains, where entire organizations adopt it internally for different areas of their business. Your employee records could become a blockchain, for example. That will lead to public blockchains where you’re essentially doing bank transactions and sharing health information outside the walls of an organization on a regular, public basis.

What other breakthroughs do you see on the horizon related to the cloud?

A breakthrough I hope to see is a level of standardization across clouds. Organizations that move to the cloud typically feel they’re siloed and can’t move or change clouds. They want to have confidence that they could move from one cloud to another, just like you can move from an iPhone to an Android device with little effort. That might be a tipping point for more adoption.
What tips would you give enterprise IT leaders to help them develop an effective cloud-computing strategy?

First, see migration to the cloud as a journey, not a destination. It isn’t “Yes, I’m on the cloud,” and you’re done. The journey will ultimately lead to a lot of good things, but appreciate each milestone. And build that journey on an infrastructure that simplifies and speeds the process.

Second, do not forget your people. Don’t just impose the change and say, “We’re doing this.” It will require a big cultural change, so bring your folks in on it. Let them participate in the vision and direction of the change, and involve them in bringing about the change.

Third, embrace technology to communicate about the change. Engage people in a conversation comprising small bits of information delivered via social media. Everyone can see what’s being said and how the different parties respond to it. Encourage interaction and discussion regarding the change to ensure an effective implementation.


Engineered Systems

Improving ROI to Outweigh Potential Upgrade Disruption

Today's guest post is by Allan Packer, Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle with a focus on Oracle SuperCluster.

Hardware upgrades have always been supported on Oracle SuperCluster, but how flexible are they? And will any benefits be outweighed by the disruption to service when a production system is upgraded?

Change is an ever-present reality for any enterprise. And with change comes an opportunity cost, unless IT infrastructure is flexible enough to satisfy the evolving demand for resources. From the very first release of Oracle SuperCluster, a key attraction of the platform has been the ability to upgrade the hardware as business needs change.

Modifying hardware can be very disruptive. Hardware configuration changes create a ripple effect that penetrates deep into the software layers of a system. For this reason, an important milestone in the upgrade landscape for both Oracle SuperCluster M8 and Oracle SuperCluster M7 has been the development of special-purpose tools to automate the upgrade steps. These tools are able to reduce the necessary downtime associated with an upgrade, and also minimize the opportunity for misconfiguration during what can be a complex operation.

CPU upgrades

Compute resources on both Oracle SuperCluster M8 and Oracle SuperCluster M7 are delivered in the form of CPU, Memory, and I/O Unit (CMIOU) boards. Each SPARC M8 and SPARC M7 chassis supports up to eight of these boards, organized into two electrically isolated Physical Domains (PDoms) hosting four boards each.

Each CMIOU board includes: One processor with 32 cores—a SPARC M8 processor for Oracle SuperCluster M8, or a SPARC M7 processor for Oracle SuperCluster M7. Each core delivers 8 CPU hardware threads, so each processor presents 256 CPUs to the operating system. Sixteen memory slots, fully populated with DIMMs.
Oracle SuperCluster M8 uses 64GB DIMMs, for a total of 1TB of memory. Oracle SuperCluster M7 uses 32GB DIMMs, for a total of 512GB of memory. Three PCIe slots. One slot hosts an InfiniBand HCA, and another hosts a 10GbE NIC. In the case of Oracle SuperCluster M8, the 10GbE NIC is a quad-port device. Oracle SuperCluster M7 provides a dual-port NIC. The third PCIe slot is empty on all except the first CMIOU in each PDom, where it hosts a quad-port GbE NIC. Optional Fibre Channel HBAs can be placed in empty slots.

Adding CMIOU boards

CMIOU boards can be added to a PDom whenever more CPU and/or memory resource is required. Up to four CMIOU boards can be placed in each PDom. The diagram below illustrates a possible sequence of upgrades in a SPARC M8-8 chassis, from a quarter-populated configuration with two CMIOUs (one per PDom), to a half-populated configuration with four CMIOUs, to a fully populated configuration with eight CMIOUs.

PDoms can be populated with as many CMIOUs as required—there is no requirement to use the same number of CMIOU boards in both PDoms on the same chassis. The illustration below shows two SPARC M8-8 chassis with different numbers of CMIOUs in each PDom.

Adding a second chassis

Many Oracle SuperCluster installations are initially configured with a single compute chassis. Every SPARC M8-8 and SPARC M7-8 chassis shipped with Oracle SuperCluster includes two electrically isolated PDoms, so highly available configurations begin with a single chassis. When the need for additional compute resources exceeds the capacity of a single chassis, a customer can add a second chassis with one or more CMIOUs, thereby allowing total compute resources to be increased by up to two times. Since each CMIOU board in the second chassis comes equipped with its own InfiniBand HCA, additional resources immediately become available on the InfiniBand fabric after the upgrade. Note that both SPARC M8-8 and SPARC M7-8 chassis consume ten rack units.
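The per-board figures quoted above (32 cores, 8 hardware threads per core, 16 DIMM slots) pin down what each CMIOU and each PDom contributes. A quick sketch; the constants simply restate the post's numbers rather than anything queried from hardware:

```python
# Constants restate the figures from the post; nothing is queried from hardware.
CORES_PER_PROCESSOR = 32
THREADS_PER_CORE = 8
DIMM_SLOTS = 16
DIMM_SIZE_GB = {"M8": 64, "M7": 32}
MAX_CMIOUS_PER_PDOM = 4

def cmiou_resources(model):
    """CPU threads and memory contributed by a single CMIOU board."""
    return {
        "cpus": CORES_PER_PROCESSOR * THREADS_PER_CORE,  # as seen by the OS
        "memory_gb": DIMM_SLOTS * DIMM_SIZE_GB[model],
    }

def pdom_resources(model, cmious):
    """Totals for a PDom holding `cmious` boards (at most four)."""
    assert 1 <= cmious <= MAX_CMIOUS_PER_PDOM
    per_board = cmiou_resources(model)
    return {key: value * cmious for key, value in per_board.items()}

print(cmiou_resources("M8"))    # {'cpus': 256, 'memory_gb': 1024}
print(pdom_resources("M7", 4))  # {'cpus': 1024, 'memory_gb': 2048}
```

So a fully populated M8 PDom presents 1,024 CPUs and 4TB of memory to its domains.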
Provided no more than six Exadata Storage Servers have been added to an Oracle SuperCluster rack, sufficient space will be available to add a second chassis.

Memory upgrades

Where memory resources have become constrained, the simplest way to increase memory capacity is to add one or more additional CMIOU boards. Such upgrades come with the extra benefit of additional CPU resources as well as greater I/O connectivity.

Note that it is not supported to exchange existing memory DIMMs for higher-density DIMMs. Adding additional CMIOUs achieves a similar effect in a more cost-effective manner: the cost of a CMIOU populated with lower-density DIMMs, a SPARC processor, an InfiniBand HCA, and a 10GbE NIC compares favorably with the cost of the higher-density DIMMs alone.

Exadata storage upgrades

Exadata Storage Servers can be added to existing Oracle SuperCluster configurations. Even early Oracle SuperCluster platforms can benefit from the addition of current-model Exadata Storage Servers.

Customers adding Exadata Storage quickly discover that both the performance and available capacity of current Exadata Storage Servers far outstrip those of older models. Best practice information is available for such deployments, and should be followed to ensure effective integration of different storage server models into an existing Exadata Storage environment.

Note that Oracle SuperCluster racks can host eleven Exadata Storage Servers with one SPARC M8-8 or SPARC M7-8 compute chassis, or six Exadata Storage Servers with two compute chassis.

The graphic below illustrates an Oracle SuperCluster M8 rack before and after an upgrade that adds a second M8-8 chassis and three additional Exadata Storage Servers.

External storage upgrades

General-purpose storage capacity can be boosted by adding a suitably configured ZFS Storage Appliance that includes InfiniBand HCAs.
This storage can then be made available via the InfiniBand fabric and used for application storage, backups, and other purposes.

Implications for domain configurations

Additional compute resources can be assigned in a number of different ways:

Creating new root domains

Root domains provide the resources needed by I/O domains, which can be created on demand using the SuperCluster Virtual Assistant. I/O domains provide a flexible and secure form of virtualization at the domain level. Although they share I/O devices using the efficient SR-IOV standard, each I/O domain has its own dedicated CPU and memory resources. Oracle Solaris Zones are also supported in I/O domains, providing nested virtualization.
A one-to-one relationship exists between CMIOU boards and root domains, which means that a root domain can be created for each new CMIOU that is added. Each root domain supports up to sixteen additional I/O domains.
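Combining the one-to-one CMIOU-to-root-domain relationship with the sixteen-I/O-domain limit gives a simple upper bound on fan-out. A back-of-the-envelope sketch; the function name is invented for illustration:

```python
# Limits restated from the post.
MAX_CMIOUS_PER_PDOM = 4
IO_DOMAINS_PER_ROOT_DOMAIN = 16

def max_io_domains(cmious_per_pdom, pdoms=2):
    """Upper bound on I/O domains if every CMIOU backs a root domain."""
    assert 1 <= cmious_per_pdom <= MAX_CMIOUS_PER_PDOM
    root_domains = cmious_per_pdom * pdoms  # one root domain per CMIOU board
    return root_domains * IO_DOMAINS_PER_ROOT_DOMAIN

# A fully populated chassis (four CMIOUs in each of its two PDoms):
print(max_io_domains(4))  # 128
```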
Note that creating new I/O domains is not the only way of consuming the extra resources. CPU cores and memory provided by an additional CMIOU board can also be used to increase resources in existing I/O domains.

Creating new dedicated domains
Dedicated domains provide CPU, memory, and I/O resources—specifically an InfiniBand HCA and a 10GbE NIC—that are not shared with other domains (and are therefore dedicated). Virtualization within dedicated domains is provided by Oracle Solaris Zones.
New CMIOU boards can be used to create new dedicated domains. Dedicated domains can be created from one or more CMIOU boards. If two CMIOU boards are added, for example, they could be used together to create a single dedicated domain, or they could be used individually to create two dedicated domains.
When multiple dedicated domains have been created in a PDom, CPU and memory resources do not need to be split evenly between the dedicated domains. These resources can be assigned to dedicated domains at a granularity of one core and 16GB of memory.
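As a sanity check on that granularity, here is a toy validator. It is invented purely for illustration; real assignments are performed through SuperCluster tooling, not code like this:

```python
# Granularity figures restated from the post.
CORE_STEP = 1        # cores are assigned one at a time
MEMORY_STEP_GB = 16  # memory is assigned in 16GB increments

def valid_assignment(cores, memory_gb):
    """True if a dedicated-domain assignment lands on the stated boundaries."""
    return (cores >= CORE_STEP and
            memory_gb >= MEMORY_STEP_GB and
            memory_gb % MEMORY_STEP_GB == 0)

# Uneven splits between dedicated domains are allowed, as long as each
# individual assignment respects the granularity:
assert valid_assignment(cores=12, memory_gb=256)
assert not valid_assignment(cores=12, memory_gb=250)  # not a 16GB multiple
```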
The largest possible dedicated domain on both Oracle SuperCluster M8 and Oracle SuperCluster M7 contains four CMIOU boards.

Expanding existing dedicated domains
A new CMIOU board can be used to boost the resources of an existing dedicated domain, up to the maximum capacity of four CMIOU boards per dedicated domain.

The available upgrade options will depend on the specifics of an existing domain configuration as well as the number of CMIOU boards being added. Customers should consult their Oracle account team to explore possible options.

I talk more about Oracle domains in my previous blog, Is "Zero-Overhead Virtualization" Just Hype?

What is the required downtime for hardware upgrades?

Two deployment approaches are available for hardware upgrades:

Rolling upgrades
Rolling upgrades allow service outages associated with a hardware upgrade to be minimized or eliminated, because only one PDom is affected at a time. Provided the Oracle SuperCluster configuration has been configured to be highly available, services need not be affected during a rolling upgrade. High availability can be achieved using clustering software, such as Oracle Real Application Clusters (RAC) for database instances and Oracle Solaris Cluster for applications.
The downside of rolling upgrades is that the overall period of disruption is greater. The reason is that PDoms are only upgraded one at a time, so the upgrade process takes longer.
Non-rolling upgrades
The benefit of non-rolling upgrades is that the overall period of disruption is shorter, since PDoms are upgraded in parallel. The downside of non-rolling upgrades is that all services become unavailable during the upgrade, since a full system outage is required. Before the hardware upgrade process can begin, a suitable Quarterly Full Stack Download Patch (QFSDP) must be applied to the existing system, and backups taken with the osc-config-backup tool.

For information about the expected period of time required to complete rolling or non-rolling upgrades for a particular configuration, the customer’s Oracle account team should be consulted.

Hardware upgrades allow the available resources of Oracle SuperCluster to be extended as required to satisfy changing business requirements. Upgrades of varying complexity can be handled smoothly while minimizing downtime, thanks to tool-based automation of the upgrade process. The end result is that customers are able to realize the benefits of hardware upgrades without the need for extended periods of disruption to production systems.

About the Author

Allan Packer is a Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.


Containers Provide the Key to Simpler, Scalable, More Reliable App Development

Jon Mittelhauser, VP of the Container Native Group at Oracle, is a serial entrepreneur who is considered one of the founders of the World Wide Web. Today, his focus is on defining, architecting, and shipping (on time!) software and hardware used by millions of people. At Oracle, his team works to make it easy for developers to build hyper-scale cloud applications using container-native technologies.

Jon, for our readers to get to know you a little better, tell us about your role in creating the World Wide Web and what you’re doing today.

As a graduate student at the University of Illinois, I worked on the team that developed NCSA Mosaic, which was the first widely used Web browser. It was the focus of my master’s thesis. After that, I was one of the founding engineers of Netscape Communications, which was the first real web company, back in May ’94. Currently, I run what is called the Container Native Group within Oracle Cloud Infrastructure (OCI). All of the modern, container-focused application development here at Oracle, in terms of our public cloud, falls under me. That includes things like Kubernetes and Container Registry and, something we’ve announced that we’re going to do, functions-as-a-service.

Looking at the world of software development, what are the big trends you’re seeing?

The main thing that’s happened over the last decade is the growth of software-as-a-service for these large, distributed applications. People also use the term web-scale applications, and it’s fundamentally a different application development paradigm. The old, classic apps were written in such a way that, if you wanted one to be more powerful, you basically had to run it on a bigger server. The way web-scale applications work is they scale out rather than scale up. So you’re not scaling up servers, you’re scaling out the number of servers you run them on.

Where, exactly, do the applications run in this new paradigm?
The answer is typically both on-premises and in the cloud. It depends on the needs of the application. One of the things that we at Oracle are very good at is handling both those cases plus a third case, Oracle Cloud at Customer. There are a lot of benefits around running inside the Oracle Cloud, but one of the things Oracle Cloud offers is also direct, fast connect back into your data center so that applications can run in either place. For instance, you may be more comfortable with having your applications that contain sensitive data run on-premises in your data center behind your firewall so you have assurance of data security. Or you may want to avoid any latency challenges by relying on an on-premises infrastructure environment from Oracle. On the flip side, you may want to subscribe to Oracle Cloud for your compute and storage, determine which remaining applications you want to move to the cloud, and then move all those applications to run on Oracle’s public cloud infrastructure. So you can get the same cloud-like capabilities by running some of your applications on on-premises infrastructure and having your other applications run in the Oracle public cloud. With the third option, Cloud at Customer, you can have any application run in your data center, behind your firewall, but still have everything managed by Oracle; you get all the security of on-premises along with all the benefits of cloud.

It’s fascinating how Oracle gives you a seamless experience as you migrate back and forth between on-premises and the cloud. Why are containers such a big game-changer for application development?

It used to be that you would run your application on a server, and that application would have to know everything about that server. You’d program differently for a Windows server and a Linux server, and SGI versus Sun. Everything was very specific to that server. You couldn’t move it at all.
Then virtual machines came along, and they allowed you to have flexibility where it still mapped to a particular type of server. The application knew it was running on Linux or knew it was running on Windows, but you could run a bunch of those on the same pieces of physical hardware, and you could move them from one piece of physical hardware to another piece of physical hardware that had the same underlying characteristics.

Containers move to an even greater level of abstraction where, fundamentally, the application programmer doesn’t know and doesn’t really care at all about the underlying operating system or capabilities. So it allows you to have a very small container that’s just focused on your application, and that’s portable. You can move it around, run a bunch of them in one place, and use other technologies to orchestrate what’s running where. It’s really about having the flexibility of how and where I run those applications and abstracting away some of the complexity that developers used to need to know, but really had nothing to do with the application itself.

What are container-native technologies, and how do they fit into this picture?

Container-native technologies are simply tools and platforms for developers to use as they create cloud-native applications around containers. There are tools and technologies around container orchestration which include monitoring, tracing, security, and a whole suite of things that you need to build containerized applications. They all can be built on our IaaS and PaaS in the Oracle Cloud.

It sounds like these container-native technologies do a lot of things in the background so that the developer can focus on application development. And how does Kubernetes facilitate containerized workloads?

Kubernetes originally came out of Google, and it’s basically an orchestration layer around containers.
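At its core, the orchestration Jon describes reduces to a control loop that continually converges the running state toward a declared desired state. A simplified sketch of that reconcile pattern; this is not the Kubernetes API, and the replica records are invented for the example:

```python
import itertools

_ids = itertools.count()  # stand-in for unique replica names

def reconcile(desired, replicas):
    """One pass of a Kubernetes-style reconcile loop: drop crashed
    replicas, then scale the healthy set to the desired count."""
    replicas = [r for r in replicas if r["healthy"]]
    while len(replicas) < desired:                  # scale up / replace failures
        replicas.append({"id": f"replica-{next(_ids)}", "healthy": True})
    del replicas[desired:]                          # scale down
    return replicas

# One replica has crashed; the loop replaces it, then honors a
# reduced desired count when demand drops.
state = [{"id": "a", "healthy": True}, {"id": "b", "healthy": False}]
state = reconcile(3, state)
assert len(state) == 3 and all(r["healthy"] for r in state)
state = reconcile(1, state)
assert len(state) == 1
```

The developer only ever states the desired count; replacement and scaling fall out of rerunning the same loop.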
For example, if I’m writing a containerized application, I can run it on top of Kubernetes, and Kubernetes will handle a lot of the underlying infrastructure orchestration—specifically, things like scaling up to meet demand or scaling down when demand is light. If servers crash, it will spin up more. The application developer simply says, “Hey, here are my containers. This is what they look like. Run them,” and then Kubernetes manages and orchestrates all of the underlying capacity. Kubernetes works whether you’re developing an application for three people or a global enterprise. What you’re doing is applying good architectural structure around a large-scale application whether you need it or not. So, you’re getting inherent reliability and scaling abilities along with capabilities to address and handle failures. For example, let's say I deploy a cluster within an on-prem or cloud infrastructure region and it is spread across three different physical availability domains. Even if one of the availability domains had a catastrophic failure, my application would continue to run using the other domains. What's more, when I build it on top of managed Kubernetes in Oracle Cloud Infrastructure, it’s automatically highly available and automatically scalable.

Taking it down to a more concrete level, how does this new development paradigm change the software architecture?

It’s a different sort of fundamental software architecture. It’s not dependent on the underlying infrastructure, so you’re really focusing on the application. You can focus on whether it is working the way it’s supposed to by monitoring metrics around your application and assessing if it’s behaving the way you expect it to behave under different conditions.

And how do these container-native technologies support the development of hyperscale cloud applications?

Hyperscale is the ability to scale on demand. With Netflix, for example, when they put out a new release of Stranger Things, demand goes crazy.
That demand is met by their cloud being able to scale up and scale down. Netflix had to write a bunch of its own technologies to do this. Now, you can take advantage of technologies like Kubernetes to get that same capability, and of cloud infrastructure, like Oracle Cloud Infrastructure, to develop hyperscale applications.

In light of all these changes, how should IT architect its environments to support hyperscale cloud application development?

First, you need to break the direct ties between the application development and the infrastructure itself. Your infrastructure could be running on Oracle Cloud, on-prem with Oracle Cloud at Customer, or on your own dedicated hardware. The infrastructure for modern applications is simply running Kubernetes with containers on top, and then you’re monitoring things through software. In a modern world, they’re all just running the base Kubernetes layer, and containers are getting distributed based on other criteria: load, security, which servers are up, and so on. Your infrastructure should be designed to provide the platform on top of which you’re running your new applications. Now, what we find is that most enterprise customers aren’t all one way or the other. Many customers will want to move entire servers into the cloud, and that’s basically what we mean when we talk about lifting and shifting workloads to bare metal. Then customers will have suites of virtual machines that they want to run on raw Oracle Cloud compute. And then, many will also have modern, container-based applications running on top of Kubernetes. At the end of the day, each of our customers has unique IT requirements and business challenges to navigate, and we make sure that Oracle provides a customized path to success for each one.
For instance, the security, reliability, and predictable performance characteristics of Oracle Cloud Infrastructure (OCI) are there because we recognize the existence of all of these legacy applications that need to be migrated over. Our cloud was designed from the ground up to be a bare metal cloud: to let you lift and shift your existing servers into our infrastructure, integrate them with Oracle databases and the cloud, and provide techniques for moving your data from on-premises into the cloud. And then my layer, the Container Native Group, is building these open source, modern application services and container-based services on top of that infrastructure, so we get all of the infrastructure benefits automatically. This allows us to provide the same enterprise-grade characteristics in our container and Kubernetes offerings.

These are all exciting developments happening now, but can we take a peek into the future? What do you see as the next big thing in application development?

The other half of what my team is working on is Oracle Functions, a functions-as-a-service offering based on open source. It’s a different, event-based programming paradigm. The idea is that all a programmer needs to do is provide functions that do a certain amount of work, and those functions get called based on events happening in the real world or in other programs. The developers don’t need to worry about any of the stuff we have been talking about in terms of building applications. They’re just providing functions. This is the new wave of how things can get written, and we believe that applications will be a mix of container-based applications running in platforms like Kubernetes combined with functional programming through things like Oracle Functions. We think Functions need to be done in a way that is cross-platform the way containers are, meaning that I can write a function once and run it anywhere. We’re also supporting something called Cloud Events.
This is a way of having a standard format for events across various cloud and SaaS providers. It will allow you to fire events from, say, Oracle SaaS systems and then program functions around them, as well as integrate with third-party SaaS systems. You can imagine a situation where there’s an Oracle HR system, and when somebody gets added to that system, it fires an event that says, “Oh, there’s a new employee.” Now, there’s a bunch of other systems looking for that event within the enterprise. For example: “Oh, a new employee got added. Let’s print them a badge. Let’s give them access to the right buildings in the access control system.” The function simply says, “When I see the event ‘new employee,’ go do this, this, and this.” That’s a powerful thing. I can just add new things, look for events, including new types of events, and integrate all these systems together. And since we are building on top of an open source event standard that all the other major providers have also agreed to support, customers will be able to tie together systems that could never be integrated before. Legacy applications were built as a single huge application that did everything; modern applications are built as a combination of small services, often termed microservices. These small services are orchestrated using technologies like Kubernetes, along with functions provided through functions-as-a-service (FaaS). This is definitely where application development is going, and it is what my group is focused on providing, in addition to all the core benefits that Oracle Cloud Infrastructure provides.

Thank you, Jon, for this primer on container-native technologies and for this window into the future of development. It seems like the bottom line is that enterprises can take advantage of all these application development technologies whether their infrastructure is on-premises, in the cloud, or in any hybrid environment.
They just need to take the right steps to invest in the right kind of infrastructure and the right kinds of application development technologies.
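The “new employee” event flow Mittelhauser describes can be sketched as a tiny publish/subscribe program. This is an illustration of the pattern only, not the CloudEvents or Oracle Functions API: the event type and handler names are invented, and a real CloudEvent also carries required fields such as `source` and `id`.

```python
# Toy event bus: functions subscribe to an event type, and firing one event
# triggers every subscriber -- badge printing, building access, and so on.
subscribers = {}

def on(event_type):
    """Register a function to be invoked when events of this type fire."""
    def register(fn):
        subscribers.setdefault(event_type, []).append(fn)
        return fn
    return register

def fire(event):
    """Deliver an event to every subscriber registered for its type."""
    return [fn(event["data"]) for fn in subscribers.get(event["type"], [])]

@on("com.example.hr.employee.created")   # hypothetical event type
def print_badge(data):
    return f"badge printed for {data['name']}"

@on("com.example.hr.employee.created")
def grant_building_access(data):
    return f"building access granted to {data['name']}"

# The HR system fires one event; both downstream systems react independently.
results = fire({"type": "com.example.hr.employee.created",
                "data": {"name": "Ada"}})
print(results)  # ['badge printed for Ada', 'building access granted to Ada']
```

Adding a new downstream system is just one more decorated function; neither the HR system nor the existing subscribers change, which is the loose coupling the interview describes.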

Jon Mittelhauser, VP of the Container Native Group at Oracle, is a serial entrepreneur who is considered one of the founders of the World Wide Web. Today, his focus is on defining, architecting, and...

Today’s Big Data Strategy Requires an Open Ecosystem: A Primer for Enterprises

Jean-Pierre Dijcks began his career in data integration and data warehouse consulting before he took on product management roles at Oracle for Oracle Warehouse Builder and Oracle Database Parallel Execution. Currently, he is Master Product Manager for Oracle Big Data Cloud Service and Oracle Big Data Appliance and plays a leading role in Oracle's Big Data platform strategy. Businesses face an avalanche of data coming from every direction: traditional business data sources, sensors that make up the Internet of Things, emails, the internet, medical imaging, social media, videos, and more. Unfortunately, traditional data warehousing solutions have limited what businesses can glean from this potentially rich source of intelligence. To explain how your organization can gain valuable insights and visibility with a cloud-ready open data lake environment, we have our own data expert, Jean-Pierre Dijcks.

Let’s start with the big picture. Can you share with us a brief history of data warehousing?

If we look back about 20 years, enterprises had multiple implementations of different applications to run basic business processes like finance or HR. These packaged and custom apps had no built-in business intelligence (BI) functionality. As a result, business users had to wait (and wait) for IT to work through the series of steps called extract, transform, and load (ETL) to put standardized, structured data into the first data warehouses. As data warehousing became more popular and customers began to outgrow their hardware systems, Oracle came out with its first-generation engineered system, Oracle Exadata. For the enterprise, this integrated system combining compute and storage was designed to optimize database performance. Still, business users wanted to incorporate more sophisticated analytics to improve fraud detection, risk management, competitive market analysis, and so on.
As a result, many businesses turned to third-party vendors like SAS in conjunction with their Oracle Exadata data warehouse.

That brings us to the present. What’s different today? How can enterprises use an optimized data and analytics platform that allows them to take full advantage of data from their database and other sources?

We still use packaged and custom apps that go through the ETL process. Many customers have a mix of older and newer BI tools, including Oracle Business Intelligence Foundation Suite. And a company’s data warehousing environment is likely still on an Exadata system, although a newer generation. A critical difference today is that there’s been a proliferation of semistructured and unstructured data. Most of what enterprises need to add depth to their analytics is semistructured data: data that has tags or other markers but doesn’t necessarily conform to the formal structure defined in a relational database. Email, JSON, and log files are all good examples. Even unstructured data with metadata, like documents and images, can often be classified as semistructured, meaning that it can be catalogued, searched, queried, and analyzed based on the metadata alone. This proliferation of data has led to the emergence of data lakes: repositories that store structured, semistructured, and unstructured data in a raw, unmanipulated, unprocessed form until it’s needed. Hadoop has emerged as the primary open source platform for processing these large data sets across many clustered machines. Co-developed with Cloudera’s full distribution of Apache Hadoop, Oracle Big Data Appliance provides full access to the data lake and its capabilities. This engineered system can host all your data in Hadoop (your data lake), processing diverse data at speed and scale. It’s important to note that when we talk about building this data warehousing ecosystem based on Oracle Engineered Systems, we’re talking about an open ecosystem.
The Oracle Big Data Appliance integrates with BI tools, ETL vendors, and more. Because it’s co-engineered with Cloudera, it can deliver the Hadoop data lake environment along with key security, monitoring, and management capabilities. It really gives customers an out-of-the-box appliance with all the components they need to make use of the Hadoop environment. One thing that hasn’t changed is the importance of SQL for accessing all of the data across both the big data platform and the relational database. To ensure rich and secure access to the data lake, Oracle introduced a high-performance data virtualization solution, Oracle Big Data SQL.

Given this new environment that allows businesses to collect and gain deep insights into all their data in real time, what are the key takeaways for enterprises when it comes to developing their data strategies?

Data warehousing has evolved from DIY systems, built from individual components from different vendors, into integrated systems that are much more powerful. The advent of the big data era has driven the need for cloud-ready solutions that can consolidate and transform structured and semistructured data to optimize the intelligence that can be derived from it. Oracle’s role has grown to offer engineered solutions that integrate many of the components of the ecosystem so they work together to provide greater productivity and savings than DIY solutions. With the Big Data Appliance’s third-party solution integrations, customers don’t have to lift and shift entire systems. They can build an effective, cloud-ready platform that takes advantage of their existing systems while integrating the best of engineered systems and third-party tools.
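The point about semistructured data in the interview is that each record carries its own structure, so it can be queried without a schema declared up front. Here is a minimal Python sketch, with invented field names and toy data, of filtering JSON records on their embedded tags — the kind of operation a data lake SQL engine performs at vastly larger scale.

```python
import json

# Raw, unprocessed records as they might land in a data lake: each line is a
# self-describing JSON document, with no relational schema declared up front.
raw_lines = [
    '{"type": "image", "meta": {"camera": "X100", "tags": ["receipt"]}}',
    '{"type": "log",   "level": "ERROR", "msg": "timeout"}',
    '{"type": "image", "meta": {"camera": "X100", "tags": ["invoice"]}}',
]

records = [json.loads(line) for line in raw_lines]

# Query on the embedded metadata alone: find images tagged "invoice".
# The structure travels with each record, so mixed record types coexist.
invoices = [r for r in records
            if r["type"] == "image" and "invoice" in r["meta"]["tags"]]
print(len(invoices))  # 1
```

This is why even binary content (the image itself) can sit in the lake untouched: as long as its metadata is tagged, it can be catalogued, searched, and queried like the rest.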



June Database IT Trends in Review

This summer has been an exciting one for converged infrastructure, with lots of announcements this past month! In case you missed it... Oracle debuted "Oracle Soar" on June 5, an automated enterprise cloud application upgrade offering that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30%. Larry Ellison discussed the details of Oracle Soar, which includes a discovery assessment, a process analyzer, automated data and configuration migration utilities, and rapid integration tools. The automated process is powered by the True Cloud Method, Oracle’s proprietary approach to supporting customers throughout the journey to the cloud. According to Wikibon, do-it-yourself x86 servers cost 57% more than Oracle Database Appliance over three years. The Wikibon research paper also shows that the above-the-line business benefits of improved time-to-value from a hyperconverged full-stack appliance are more than 5x greater than the IT operational cost benefits. Wikibon also concludes that the traditional enterprise strategy of building and maintaining low-cost x86 white-box, piece-part infrastructure is unsustainable in a modern hybrid cloud world.

The experts talk converged infrastructure and AI

We invited Neil Ward-Dutton, one of Europe's most experienced and high-profile IT industry analysts, to discuss how robotic process automation (RPA) and artificial intelligence (AI) have the potential to transform not just routine administrative business processes but also those that have traditionally depended on skilled workers. Read the interview here. Top fintech influencer and founder of Unconventional Ventures Theodora Lau joined us to discuss how AI is transforming banking. To process all the data that modern enterprises create, such as financial data, at speed and scale, enterprises need better infrastructure to support it. Learn more about the interview here.
Internationally recognized analyst and founder of CXOTalk Michael Krigsman joined us on the blog to discuss the positive influence of digital disruption. The way we approach business today, he says, is being turned on its head by new demands from internal and external customers. We’re at a crossroads where innovative technologies and new business models are overtaking traditional approaches, creating significant pressure and challenges for tech infrastructure and the people who manage it. Read the interview here.

The future of banking

Srinivasan Ayyamoni, transformation consulting lead at Cognizant focusing on the banking industry, discusses the relentless cycle of innovation, rising consumer expectations, and business disruptions that have created major challenges as well as lucrative opportunities for the banking industry today. Read more here. Chetan Shroff, Oracle Commercial Leader at Cognizant, discusses why banks must look carefully at their IT infrastructure before they can benefit from new, exciting tech innovations.

Don’t miss future happenings: subscribe to the Cloud-Ready Infrastructure blog today!


Automate and Assist: How AI and RPA Are Changing Businesses

Today's blog post features Neil Ward-Dutton, one of Europe's most experienced and high-profile IT industry analysts. Ward-Dutton co-founded MWD Advisors and advises many leading technology vendors across industries as diverse as financial services, retail, utilities, and government. Robotic process automation (RPA) and artificial intelligence (AI) have the potential to transform not just routine administrative business processes but also those that have traditionally depended on skilled workers. To maximize the value of these processes, an optimized infrastructure can help businesses collect, manage, and analyze the data behind them, at scale and in real time. This ideal infrastructure is best epitomized by engineered systems in which all the components are co-built to optimize performance and security. To understand how RPA and AI impact the enterprise and how they can work together, we spoke with Ward-Dutton.

Oracle: So that we are all on the same page, can you define robotic process automation and artificial intelligence as they relate to business processes?

Neil Ward-Dutton: RPA is a technology that enables you to integrate systems non-invasively. It automates interactions with systems by mimicking the actions that you or I might use to drive an application on the screen, such as copying and pasting, or looking up information in one system and typing it into other systems. RPA is often used where information needs to be read from or written to multiple legacy systems. An example would be a big bank that has grown through acquisition, where customer records are kept in multiple systems.
When a customer requests an address update, an RPA system might be a great way to handle that update, rather than relying on a person to do it all manually. That’s RPA’s sweet spot. AI services can be complementary to RPA. In the address example, where the customer makes a written request, AI might read the communication, understand that it is an address change request, and send the information to the RPA system. Similarly, AI services can be used to scan ID documents, check the ID against a banned list, and detect fraud. AI services can also provide “smart assists” for work. For example, in insurance claims processing, such an assist might suggest next steps in an investigation or recommend a particular decision.

Oracle: What industries are aggressively adopting these technologies, and why?

Ward-Dutton: Industries are rapidly adopting these technologies to address increasing pressure to reduce cost, improve the customer experience, and achieve regulatory compliance. Industries with high-turnover workforces, such as retail, transport, leisure, and call centers, are adopting the technologies at the fastest rate to reduce costs and improve the customer experience. As a more specific example, telcos, with their huge product catalogs, different service plans, and different handsets, need automation to arm their call center employees with the information to resolve problems or make the right recommendation. Again, that knowledge is hard to maintain with a fast-changing workforce. Financial services have been big adopters to date because, despite banks’ huge historical investments in IT, a lot of their administrative back-office processes are still highly manual.

Oracle: Will these technologies replace human interactions?

Ward-Dutton: Not necessarily. With today’s automation technologies, the result is not black and white. It’s not that work is automated or it isn’t. Some processes can be automated end-to-end.
But in the real world, almost all processes will require a mixture of approaches. Automated assistants will also work alongside people, giving them recommendations and warnings and predicting situations. Automation and assistance are complementary ways of using these technologies.

Oracle: How can you use the data you’re collecting as a result of the automated processes?

Ward-Dutton: All these automations are creating data that you can use to provide insight in real time. You can see the patterns in invoicing, the patterns in fraud, the trends in interest in products or activity in particular customer segments. Once digital technology is more embedded in work, that work is instrumented in ways that have never happened before. If you can leverage that data effectively, then you have a superpower, because you can start to do some quite advanced reasoning around optimization, or around signals that suggest new customer products or even new markets.

Oracle: What you’re really talking about is harnessing big data with the right hardware and software. Processing large data workloads at speed and scale requires an “out-of-the-box” solution: completely streamlined infrastructure that becomes the backbone for real-time, granular analytics. From a business standpoint, what’s the best way to approach adopting these technologies?

Ward-Dutton: I advise people to separate their processes into prescriptive, transactional, and exploratory work when looking at automation opportunities, because different types of work fit best with particular automation approaches:

Prescriptive work is highly routine and rules-based. As we said before, this might include dealing with simple requests from customers, partners, or suppliers.

Transactional work is based on a process you can define up front, but you still need humans to drive some or all of the work. This might include customer onboarding.
Exploratory work is work where you know what kind of outcome you’re aiming for, but you don’t necessarily know ahead of time what will need to be done. This is more investigative work, usually done by people with significant, specific training; examples include fraud investigation and complaints management.

You want to create a template architecture that can be applied to different areas of the business. Instead of saying, “We’re going to use these technologies to automate the call center,” you look at the patterns of work that need to be done everywhere in the business. You want to make sure that people use the right technology in the right way, so you’re not spending twice.

Oracle: Speaking of spending, how expensive are these solutions?

Ward-Dutton: This technology is much better packaged, and frankly, it’s cheaper than it was before. There’s a lot of fierce competition between the vendors and some aggressive pricing models. It’s certainly possible to get started, explore proofs of concept, and build business cases with very little investment.

Oracle: Who should be driving the adoption of these technologies?

Ward-Dutton: The initial catalyst typically comes from business teams. It might come from operations or customer service. The point where you need to think about architecture for the entire organization is when you get questions like, “This RPA stuff is cool, and it’s not expensive, but do we also need to do business process modeling?” That’s when you say, “Okay, timeout. Let’s think about how this all fits together.” You don’t need to stop everything for a year, but you do need to get a couple of people together to look at the different kinds of automation technology you already have in play and map out the ways they can work together. Although the energy will come from the businesspeople, IT should be involved.
Oracle: What do you suggest for getting the architecture right so you can capture, store, and organize the massive amounts of data sitting in data lakes like Hadoop?

Ward-Dutton: Rather than having architects gather loads of information, run away into a tool shed, and come back with a finished “ta-da, here it is,” you need a very open, collaborative process. You want the entire organization to become much more educated about what’s possible and much more aligned around how technology can serve the business. This is also the time to consider cloud-ready architecture that will help the organization make its own choices about where and how to deploy systems.

Oracle: Are there functions or features that are non-negotiable when exploring automation tools?

Ward-Dutton: If you are looking at automating customer onboarding in a telco, and one of your competitive differentiators is that you are committed to activating the customer within one hour, you’ll be thinking about latency and processing speed. You might need to look at doing it in the cloud or a hybrid cloud model. For data collection for a quarterly report, shaving time won’t give you incremental value.

Oracle: How will AI and RPA evolve?

Ward-Dutton: I see a lot more AI technology embedded in the tools themselves. You see this in architecture and development tools that help make architects and developers more productive. You’re seeing smart assists in which the tool says, “I think what you want to do is connect the database to that form or that logic in a particular way.” The tools use the knowledge of the way others have built systems to suggest how to configure functions. A tool might tell you, “Based on this architecture, the volume of transactions going through the system means that it’s going to fail in about three weeks if you don’t retune the system.” You can see this in integration and development platforms and some architecture solutions. It’s pretty mind-blowing.
Oracle: This has been a fascinating discussion, Neil. I would just like to add that from my Oracle perspective, RPA has great potential, especially paired with AI, but, ultimately, it needs to be powered by big data. Something like Oracle’s Big Data Appliance can provide the kind of high-performance, cloud-ready infrastructure that is critical to taking advantage of automation. You need to be able to aggregate and store large amounts of data from multiple sources, including social media, sensors, and machines with ease in an on-premises environment. It’s important that the entire technology stack be co-engineered so that everything works in concert to provide the optimal performance needed for these data-intensive applications that need to run in real time. And Oracle’s engineered systems provide this integrated infrastructure on-premises while preparing you for the cloud.
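As a concrete illustration of the AI-plus-RPA hand-off Ward-Dutton describes (the written address-change request at the acquisitive bank), here is a minimal Python sketch. The keyword check is a crude stand-in for a real natural-language AI service, and the `systems` dictionary stands in for the bank's multiple legacy record stores; all names are invented.

```python
def classify_request(text):
    """Stand-in for the AI service that reads the customer's message."""
    if "address" in text.lower():
        return "address_change"
    return "unknown"

def rpa_update_address(customer_id, new_address, systems):
    """Mimic the copy-and-paste-into-each-system work a person would do."""
    for records in systems.values():
        records[customer_id]["address"] = new_address

# Two legacy systems from past acquisitions, each holding its own copy
# of the same customer record.
systems = {
    "retail_bank":  {42: {"address": "1 Old Rd"}},
    "mortgage_arm": {42: {"address": "1 Old Rd"}},
}

# AI step: read and classify the written request; RPA step: apply it.
request = "Hi, please update my address to 9 New St."
if classify_request(request) == "address_change":
    rpa_update_address(42, "9 New St", systems)

print({name: recs[42]["address"] for name, recs in systems.items()})
```

The division of labor matches the interview: the AI component understands the unstructured input, and the RPA component replays the same routine update non-invasively across every system that holds the record.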


Hyperconverged vs. Engineered Systems: What to Look for When You’re Looking for a Solution to a Business Problem

As Oracle's senior vice president of database systems technology, Juan Loaiza helped develop Oracle Exadata and is responsible for developing the mission-critical capabilities of Oracle Database.

Let’s start with some definitions. How would you define hyperconverged infrastructure?

Hyperconverged came out of the idea that IT had generic servers that could be used for both compute and storage. It grew up as an alternative to traditional compute servers with separate storage arrays. With hyperconverged architecture, you could now have a set of servers that could also serve local storage to remote servers. But the hyperconverged software that allows this is not application-aware; it’s generic rather than specialized software. That is, it’s not designed for specific workloads. The downside is that because you’re using the same servers for two purposes, every time you buy more compute power, you also have to buy more storage.

How are engineered systems different from hyperconverged infrastructure?

Engineered systems also use servers for compute and storage, but engineered systems are built for a specific purpose. For instance, Exadata is built for the specific purpose of running database workloads, so it has functionality built in specifically for those workloads. The software is application-specific. And since we don’t put the compute and the storage in the same servers, we can expand either compute or storage independently. While the compute and storage are kept separate, all engineered systems are designed with software and hardware together; that’s why we say the products are co-engineered to work best with each other. This co-engineering offers greater simplicity of design, plus it’s all supported by a single vendor.

Tell us more about what happens when you put intelligence inside storage.

In the case of Exadata, we put database intelligence directly into the storage.
We’re moving data-intensive database functions into the storage so that the bulk of the data-intensive work happens in the storage server. Instead of moving all the data across to the main database server, it takes the lowest level of database processing and runs it in storage, filtering out unnecessary data before it is sent across the network. As a result, the data flow is reduced by orders of magnitude.

What are the components of an engineered system?

You have the same basic components as in hyperconverged infrastructure. The difference is that an engineered system is designed as a single, coherent unit, with specific software functionality and applications based on the workload. For a database, the components would include compute, storage, and networking. On top of that, you have the OS, the virtualization, all the firmware, and then the software package itself, like the Oracle database or the Oracle backup products or the Oracle analytics product. You then add a lot of specialized software, like the Exadata storage software that performs data processing in storage. Specialized algorithms are built at every layer. It’s the whole stack that’s co-engineered, and it’s put together from the top down, not from the bottom up. It starts with the workloads you want to run and is built to deliver the best performance for those workloads.

How do the architectures of hyperconverged infrastructure and engineered systems provide a cloud-readiness strategy for enterprises?

The cloud has converged toward an on-demand consumption model that’s basically a cloud of storage and a cloud of servers. The reason is simple: an application needs “X” number of gigabytes or terabytes of storage, and users only want to pay for that storage. They don’t want to allocate a bunch of servers they don’t need. Because it doesn’t separate compute from storage, the hyperconverged model isn’t really used in the cloud.
Conversely, engineered systems are ideally suited to this cloud architecture. And because we design the architecture to be identical on-premises and in the cloud, if you have Exadata on-premises, you can move seamlessly to the cloud whenever you’re ready, and you can move as much or as little to the cloud as you want.

What are the top reasons to choose engineered systems versus hyperconverged?

The number one reason is that engineered systems are designed for a specific purpose, with hundreds of specialized software algorithms to support that purpose, so they’re going to give you much better performance, at a lower cost and with higher availability. A second reason is that Oracle tests everything completely, top to bottom. A software update includes everything from the firmware to the OS to the networking to the switch software to the database software; it’s all integrated and tested together. A third reason is security. The system is hardened as a whole, so you don’t have to worry about securing the gaps between different parts of it. I would add one more thing: what Oracle ships on-premises is essentially the same as what we have in the cloud. That makes it super easy to migrate from on-premises to the cloud. Other vendors may have an on-premises solution, but when you try to move to the cloud, it’s a completely different environment. More importantly, everything in the software layer is also completely different. It’s like two different worlds. With our engineered systems, you have the exact same infrastructure on-premises as in the cloud, so you get a seamless, super-easy migration. We offer different sales models, too. You can buy it outright. You can subscribe to it in the public cloud. We also have Cloud at Customer, where you get a subscription and Oracle manages it for you even though it still resides inside your datacenter. All have the exact same architecture.
What kind of differences in performance can you expect?

Engineered systems can easily be orders of magnitude faster than hyperconverged, depending on the workload. A lot of that has to do with the specialized software features that come with engineered systems, and also the optimization for the specific workload from the firmware to the OS to the database. It’s not just a generic system that runs whatever you put on top of it.

What are the big trends you’re seeing in the industry?

The big trend is still the cloud. But now, people are looking for a solution, not a platform. If they just want a database, they don’t really want all those servers and storage and networking to deal with. The move is away from do-it-yourself. It’s the same trend we see on-premises, where people traditionally wanted to put everything together themselves and manage it. Now, they want an engineered system that works better and is easier to manage. The next trend is toward autonomous solutions; it’s a step beyond automation. Oracle now has the Autonomous Database, where the database basically manages itself. From a customer point of view, the entire infrastructure and everything else around the database vanishes. Only the database is visible to the customer, so worrying about hardware architectures like hyperconverged no longer matters.

What are the critical considerations that our readers should keep in mind if they’re looking to build or buy their infrastructure?

First, they need to think about the long-term architecture they want to move toward. Most people know they need to move to the cloud at some point. But the cloud model can be public cloud, private cloud, public cloud in their own datacenter with Cloud at Customer—or any combination. They need a well-thought-out plan to get to the cloud model they want.
They also should start thinking about moving toward a more autonomous model because that’s going to bring a lot of benefits including cost reductions, better performance, and better security. Finally, they need to think about security. The sophistication of attacks is increasing like crazy. It used to be that attackers were lone hackers. Now, we have nation-states and large criminal organizations attacking. When you put all the parts together yourself, you’re basically saying, “I’m going to take on these nation-states myself.” That’s not sustainable. Most corporations don’t have the expertise and the sophistication to go to war with a nation-state in cyberspace. Instead, they should partner with a vendor who has the resources and expertise to provide that security. And, with our engineered systems, the Oracle security experts are building integrated security—along with all the other benefits—into the whole stack.

As Oracle's senior vice president of database systems technology, Juan Loaiza helped develop Oracle Exadata and is responsible for developing the mission-critical capabilities of Oracle Database.

Cloud Infrastructure Services

An Ounce of Prevention: Real-Time Weapons in the War on Financial Fraud

Bank executives have learned from experience that most news related to financial fraud is likely to be bad news. They also know that after decades spent fighting criminal activity against their institutions and their customers, this is a war that never ends—it just moves to a different battlefield. Today, however, there’s actually cause for optimism that banks can get—and keep—the upper hand against many common types of financial fraud. Why the optimism? The answer lies in a new generation of real-time, self-learning analytical systems. They’re allowing banks to recognize payment fraud and other crimes as they’re happening, and to intervene before money changes hands. Banks can now invest in smarter and more agile approaches to fighting fraud. Before banks can benefit from these innovations, however, they’ll have to look carefully at the IT infrastructure required to implement and deploy them successfully.

A Seemingly Unrelenting Wave of Financial Fraud

Admittedly, it’s hard to believe the tide is turning when banks appear to be immersed in a sea of criminal activity—all of it searching relentlessly for cracks in an institution’s anti-fraud defenses. The mainstream arrival of chip-and-PIN payment cards in the U.S. market offers a case in point. Criminals who found card counterfeiting more difficult didn’t give up, but simply changed tactics. The results of a survey of losses related to consumer identity fraud offer insight:

Total losses in this area climbed to $16 billion in 2016—up $1 billion from 2015.
Account takeover losses jumped by 61 percent.
New account fraud increased 20 percent.
Card-not-present fraud increased 40 percent.

Finally, according to the American Bankers Association, banks stopped fraud attempts worth a record $17 billion in 2016. But, in exchange, banks almost certainly faced a record number of fraud attempts—and, in fact, might have lost ground in terms of the proportion of fraud attempts they stopped.
A Better Approach: Beat the Criminals to the Scene of the Crime

The answer to this challenge lies in technology that can analyze transactions in real time and spot financial fraud as it’s happening. Once you have this level of real-time fraud detection and can apply it to every transaction, it becomes possible to intervene and halt transactions flagged as fraudulent. This is a game-changing innovation. Real-time intervention can stop a transaction before money or goods change hands. There’s nothing to clean up, no funds to recover, and no losses to document. The criminal and the crime are still there, but now they’re walking away empty-handed.

Real-Time Fraud Prevention: A Demanding Formula for Success

Of course, it’s one thing to describe such technology. It’s quite another to develop and implement the capabilities required to make it work. In general terms, there are four areas where a real-time fraud prevention system has to step up its game.

First, a real-time fraud prevention system requires the ability to assess a transaction in real time, across any transaction channel: POS, ATM, online, mobile, ACH, SEPA, wires, SWIFT, and so on.

Second, the fraud prevention approach requires the use of highly sophisticated, adaptive, self-learning models with an unprecedented ability to assess and to make decisions about a transaction. There’s no room for human “backup” in this process—either the system makes good decisions about the transactions it analyzes, or it doesn’t and forces the bank to deal with the consequences.

Third, such a system must be capable of handling thousands of simultaneous transactions per second during peak periods. It should measure the latency it adds to a typical transaction in milliseconds and ensure the same low latency during peak periods that it delivers under ideal conditions.

Finally, and perhaps most important, a real-time fraud prevention system must be remarkably accurate and unfailingly consistent.
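The four requirements above can be made concrete with a minimal sketch. Everything below is hypothetical: the rules, their weights, and the 0.8 threshold are invented for illustration, and a production system would use adaptive, self-learning models rather than static rules. The point is the shape of the loop: score each transaction inline, decide, and keep per-transaction latency in the low milliseconds.

```python
# Minimal sketch of inline, per-transaction fraud scoring.
# All rules, weights, and the threshold are hypothetical examples.
import time

RULES = [
    (lambda t: t["amount"] > 5_000, 0.5),                 # unusually large amount
    (lambda t: t["channel"] == "card_not_present", 0.3),  # riskier channel
    (lambda t: t["country"] != t["home_country"], 0.4),   # geographic anomaly
]

def score(txn):
    """Sum the weights of all rules the transaction trips."""
    return sum(w for rule, w in RULES if rule(txn))

def decide(txn, threshold=0.8):
    """Score one transaction inline and report the latency this step added."""
    start = time.perf_counter()
    verdict = "block" if score(txn) >= threshold else "approve"
    latency_ms = (time.perf_counter() - start) * 1000  # must stay in low ms at peak
    return verdict, latency_ms

txn = {"amount": 9_000, "channel": "card_not_present",
       "country": "RO", "home_country": "US"}
verdict, ms = decide(txn)
print(verdict)  # "block": score 1.2 exceeds the 0.8 threshold
```

Because the decision happens before the transaction completes, a "block" verdict here corresponds to the real-time intervention described above: no money moves, so there is nothing to recover.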
A single false-positive result that blocks a transaction is bad enough, with a significant risk of losing a customer and creating a vocal “anti-champion” who shares his or her experience with others. A series of false positives, however, is exponentially worse: a massive and very expensive blot on a bank’s reputation and relationships.

The Right Platform Demands the Right Infrastructure

These capabilities are no longer just a technology wish list. The Oracle Financial Services Fraud platform illustrates the state of the art of real-time financial fraud prevention: a seamless integration of a cutting-edge financial crime data model, behavior detection, inline processing, real-time decisions, advanced analytics, and supporting functions. The resulting system is accurate and reliable enough for banks to stake their reputations and customer relationships on the results. That level of trust, in turn, is critical to putting the technology where it can do the greatest good. There’s another important angle to this issue: IT infrastructure. With the right approach to infrastructure, a solution like this can deliver some truly amazing anti-fraud capabilities. The details ultimately depend a great deal on a bank’s unique technology and business needs; modern infrastructure is, among other things, extremely versatile and adaptable. Yet there are some key issues that indicate where we expect most banks to focus their infrastructure budgets and attention—especially when the goal is to take full advantage of a solution like Oracle Financial Services Fraud. Two infrastructure issues consistently stand out: ensuring cost-effective anti-fraud solutions, and taking the right approach to a bank’s analytical infrastructure needs.

Taking a platform approach to application infrastructure

IT costs related to fraud, risk, and compliance are rising.
Next year, for example, 60 percent of banks say they plan to increase spending on fraud management IT systems, and that is just one area among several related to risk, fraud, and compliance capabilities. As these costs continue to climb, controlling them has become a major organizational imperative. There simply won’t be enough money, time, or resources to go around if banks treat fraud prevention as a separate activity with its own set of infrastructure needs. As it happens, fraud prevention has quite a bit in common with the infrastructure requirements for applications addressing AML/BSA compliance, KYC regulations, compliance reporting, trading/broker compliance, and the like. This includes, for example, shared access to data sources, collation and preparation, analysis, and reporting needs—all of which point to the importance of finding approaches that allow infrastructure to be leveraged across current and future requirements.

Deploying a commonly available—but uncommonly advanced—analytical infrastructure

If you drill down a bit into the pros and cons of a platform approach, you discover quickly that there’s a lot riding on its analytical capabilities. In short, it’s not enough to be good today. You also need an architecture that enables you to stay good next week, next month, and next year as capabilities evolve and the power of underlying analytical models continues to increase. To meet these two requirements, banks need an infrastructure that’s specifically designed for advanced analytical functions. Oracle Engineered Systems are optimized for that specific workload all the way up and down the stack. Everything from the firmware to the OS to the database is completely optimized for that business application. It’s going to deliver much better performance at a lower cost and with much higher availability than DIY solutions.
Oracle Exadata, for example, is designed to process enormous amounts of data in real time with low latency because Exadata includes intelligent Storage Server Software that enables Smart Scan—that is, offloading query execution to the storage servers, closer to the data, and passing only the desired results back to the user. Smart Scan works without any changes to existing Oracle Database code, and banks have seen 10-100x performance improvements. Banks can embrace a winning technology combination—real-time fraud prevention and advanced application infrastructure—to gain a hard-earned advantage in the fight against payment fraud and other crimes. Unfortunately, there is no end to the fight against financial fraud. Today, a bank’s ability to adopt advanced fraud prevention systems, and to set up these systems for success with the right application infrastructure, is its best hope for hanging onto the advantage and keeping criminals from hitting the payment-fraud jackpot.

About the Author

Chetan Shroff, Oracle Commercial Leader at Cognizant, is a Chartered Accountant and holds an MBA in Finance. He has 17+ years of experience across process studies, defining IT roadmaps, conducting process, technology, and usability assessments, program management for large transformational engagements, solution architecture, delivery management, end-to-end implementation, global template definition, roll-outs, upgrade services, and establishing centers of excellence. Chetan has worked within multiple collaborative models involving onsite-offshore, multi-vendor, and multi-stakeholder teams across the U.S., Europe including the U.K., APAC, and Middle East regions.


Engineered Systems

Oracle Exadata: Ten Years of Innovation

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle. I recently read some interesting blog posts on the driving forces behind many of today’s IT innovations. One of the common themes was the realization that sometimes purpose-built engineering is better at solving the toughest problems. Given 2018 marks the 10-year anniversary of the introduction of Oracle’s first engineered system, Oracle Exadata, I started thinking about many of the drivers that led to the development of this system in the first place. Perhaps not surprisingly, I realized Oracle introduced Exadata for the same reason driving other innovations--you can't reliably push the limits of technology using generalized "off-the-shelf" components.

Back in the mid-2000's, the conventional wisdom was that the best way to run mission-critical databases was to use a best-of-breed approach, stitching together the best servers, operating systems, infrastructure software, and databases to build a hand-crafted solution to meet the most demanding application requirements. Every mission-critical deployment was a challenge in those days, as we struggled to overcome hardware, firmware, and software incompatibilities in the various components in the stack. Beyond stability, we found it difficult to meet the needs of a new class of extreme workloads that exceeded the performance envelopes of the various components. What we found was that we were not realizing the true potential of the components, as we were limited by the traditional boundaries of dedicated compute servers, dumb storage, and general-purpose networking.

We revisited the problem we were trying to solve:

Performance: how to optimize the performance of each component in the stack and eliminate bottlenecks when processing our specific workload.
Availability: how to provide end-to-end availability, from the application through the networking and storage layers.
Security: how to protect end-user data from a variety of threats both internal and external to the system.
Manageability: how to reduce the management burden to operate these systems.
Scalability: how to grow the system as customers' data processing demands ballooned.
Economics: how to leverage the economics of commodity components while exceeding the experience offered by specialized mission-critical components.

Reviewing these objectives in light of the limits of best-of-breed technology led to a simple solution--extend the engineering beyond the individual components and across the stack. In other words, engineer a purpose-built solution to provide extreme database services. In 2008, the result of this effort, Oracle Exadata, was launched.

The mid-2000’s saw explosive growth in compute power, as Intel continually launched new CPUs with greater and greater numbers of cores. But databases are I/O-hungry beasts, and I/O was stuck in the slow lane. Organizations were deploying more and more applications on larger and larger SANs, connecting the servers to the storage with shared-bandwidth pipes that were fast becoming a bottleneck for any I/O-intensive application. The economics and complexity of SANs made it difficult to provide databases the bandwidth they required, and the result was lots of compute power starved for data. The burning question of the day was, “How can we more effectively get data from the storage array to the compute server?” The answer, in hindsight, was quite simple, although quite difficult to engineer. If you can’t bring the data to the compute, bring the compute to the data. The difficulty was you couldn’t do this with a commercial storage array—you needed a purpose-built storage server that could work cooperatively with the database to process vast amounts of data, offloading processing to the storage servers and minimizing the demands on the storage network. From that insight, Exadata was born.
Over the years, we’ve built upon this engineered platform, refining the architecture of the system to improve performance, availability, security, manageability, and scalability, all while using the latest technology and components and minimizing overall system cost.

Innovations Exadata has brought to market:

Performance: Pushing work from the compute nodes to the storage nodes spreads the workload across the entire system while eliminating I/O bottlenecks; intelligent use of flash in the storage system provides flash-based performance with hard disk economics and capacities. The Exadata X7-2 server can scan 350GB/sec, 9x faster than a system using an all-flash storage array.
Availability: Proven HA configurations based on Real Application Clusters running on redundant hardware components ensure maximum availability; intelligent software identifies faults throughout the system and reacts to minimize or mask application impact. Customers are routinely running Exadata solutions in 24/7 mission-critical environments with 99.999% availability requirements.
Security: Full-stack patching and locked-down best-practice security profiles minimize attack vulnerabilities. Build PCI DSS-compliant systems or easily meet DoD security guidelines via Oracle-provided STIG hardening tools.
Manageability: Integrated systems management and tools specifically designed for Exadata simplify the management of the database system. New fleet automation can update multiple systems in parallel, enabling customers to update hundreds of racks in a weekend.
Scalability: Modular building blocks connected by a high-speed, low-latency InfiniBand fabric enable a small entry-level configuration to scale to support the largest workloads. Exadata is the New York Stock Exchange’s primary transactional database platform, supporting roughly one billion transactions per day.
Economics: Building from industry-standard components to leverage technology innovations provides industry-leading price performance. Exadata’s unique architecture delivers better-than-all-flash performance at low-cost HDD capacity.

Customers have aggressively adopted Exadata to host their most demanding and mission-critical database workloads. Chances are you indirectly touch an Exadata every day—by visiting an ATM, buying groceries, reserving an airline ticket, paying a bill, or just browsing the internet. Four of the top five banks, telcos, and retailers run Exadata. Fidelity Investments moved to Exadata and improved reporting performance by 42x. Deutsche Bank shaved 20% off their database costs while doubling performance. Starbucks leveraged Exadata’s sophisticated Hybrid Columnar Compression technology to analyze point-of-sale data while saving over 70% on storage requirements. Lastly, after adopting Exadata, Korea Electric Power processes load information from their power substations 100x faster, allowing them to analyze load information in real time to ensure the lights stay on.

The funny thing about technology is you must keep innovating. Given today’s shift to the cloud, all the great stuff we’ve done for Exadata could soon be irrelevant—or will it? The characteristics and technology of Exadata have been successful for a reason—that’s what it takes to run enterprise-class applications! The cloud doesn’t change that. Just as people don’t run their mission-critical business databases on virtual machines on-premises, because they can’t, customers migrating to the cloud will not magically be able to run those same mission-critical databases in VMs hosted in the cloud. They need a platform that meets their performance, availability, security, manageability, and scalability requirements, at a reasonable cost.
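The storage savings mentioned above for Hybrid Columnar Compression rest on a general property of columnar layouts: values repeat far more within a column than within a row, so column-ordered bytes compress better. The sketch below demonstrates only that general principle, using an invented table and generic zlib compression; it is not Oracle's compression format.

```python
# Why columnar layouts compress well: a generic zlib demo on a toy table.
# The table contents are invented; this is not Hybrid Columnar Compression itself.
import zlib

# Each row is (date, store, product); values repeat heavily within a column,
# as they typically do in warehouse data.
rows = [(f"2018-01-{d:02d}", f"STORE_{s:03d}", "LATTE")
        for d in range(1, 29) for s in range(200)]

# Row-major: values from different columns interleave on disk.
row_major = "".join(",".join(r) for r in rows).encode()
# Column-major: each column's values are stored contiguously.
col_major = "".join("".join(col) for col in zip(*rows)).encode()

row_size = len(zlib.compress(row_major))
col_size = len(zlib.compress(col_major))
print(col_size < row_size)  # True: the column-ordered bytes compress tighter
```

Grouping a column's repetitive values together gives the compressor long, cheap matches, which is the intuition behind the large storage reductions cited for columnar compression.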
Our customers have told us they want to migrate to the cloud, but they don’t want to forgo the benefits they realize running Exadata on-premises. For these customers, we now offer Exadata in the cloud. Customers get a dedicated Exadata system, with all the characteristics they’ve come to appreciate, but hosted in the cloud, with all the benefits of a cloud deployment: pay-as-you-go, simplified management, self-service, on-demand elasticity, paid for with a predictable operational expense budget, and no customer-owned datacenter required.

However, not everyone is ready to move to the cloud. While the economics and elasticity are extremely attractive to many customers, we’ve repeatedly found customers unwilling to put their valuable data outside their firewalls. It may be because of regulatory issues, privacy issues, data center availability, or just plain conservative tendencies toward IT—they are not able or willing to move to the cloud. For these customers, we offer Exadata Cloud at Customer, an offering that puts the Exadata Cloud Service in your data center, offering cloud economics with on-premises control.

So, it’s been a wild 10 years, and we are continuing to look for ways to innovate with Exadata. No matter whether you need an on-premises database, a cloud solution, or are looking to bridge the two worlds with Cloud at Customer, Exadata remains the premier choice for running databases. Look for continued innovation as we adopt new fundamental technologies, such as lower-cost flash storage and non-volatile memory, that promise to revolutionize the database landscape. Exadata will continue as our flagship database platform, leveraging these new technologies and making their benefits available to you, regardless of where you want to run your databases. I hope this post gives you a sense of the history behind Exadata, and some of the dramatic shifts that will be affecting your databases in the future.
This is the first in a series of blog posts that will examine these technologies. Next, we will look more closely at performance: why performance is critical in a database server, and how we’ve engineered Exadata to provide the best performance for all types of database workloads. Stay tuned for more:

Oracle Exadata: Ten Years of Innovation
Yes, Database Performance Matters
Deep Engineering Delivers Extreme Performance
Availability: Why Failover Is Not Good Enough
Security: Can You Trust Yourself?
Manageability: Labor is Not That Cheap
Scalability: Plan for Success, Not Failure
Oracle Exadata Economics: The Real Total Cost of Ownership
Oracle Exadata Cloud Service: Bring Your Business to the Cloud
Oracle Exadata Cloud at Customer: Bring the Cloud to your Business

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Is "Zero-Overhead Virtualization" Just Hype?

At its first release—Oracle SuperCluster T4-4—Oracle claimed zero-overhead virtualization for the domain technology used on Oracle SuperCluster. Was this claim just marketing hype, or was it real? And is the claim still made for current SuperCluster platform releases? To answer these questions, we need to examine the virtual machine implementation used on SuperCluster: Oracle VM Server for SPARC, also known as Logical Domains (LDoms for short). Oracle VM Server for SPARC is a Type 1 hypervisor that is implemented in firmware on all modern SPARC systems. The virtual machines created as a result are referred to as Domains.

The diagram below illustrates a typical industry approach to virtualization. In this case, available hardware resources are shared across virtual machines, with the allocation of resources managed by a hypervisor implemented using a software abstraction layer. This approach delivers flexibility, but at the cost of weaker isolation and increased virtualization overheads. Optimal performance is delivered only by “bare metal” configurations that eliminate the hypervisor (and therefore do not support virtualization).

By contrast, Oracle VM Server for SPARC has a number of unique characteristics. SPARC systems always use the SPARC firmware-based hypervisor, whether or not domains have been configured—there is no “bare metal” configuration on SPARC that eliminates the hypervisor. For this reason, the concept of bare metal that applies to most other platforms has no meaning on SPARC systems. An important implication is that no additional virtualization layer is required on SPARC systems when configuring domains. That means no additional performance overheads are introduced, either. The SPARC hypervisor partitions CPU and memory resources rather than virtualizing them. That approach is possible because CPU and memory resources are never shared by SPARC domains. Each hardware CPU strand is uniquely assigned to one and only one domain.
In other words, each virtual CPU in a domain is backed by a dedicated hardware strand. Further, each memory block is uniquely assigned to one and only one domain. This approach has a number of important implications: Since each domain has its own dedicated CPU resources, no virtualization layer is needed to schedule CPU resources in a domain-based virtual machine. The hardware does the scheduling directly. The result is that the scheduling overheads inherent in most virtualization implementations simply don’t apply in the case of SPARC systems. Memory resources in each domain are also dedicated to that domain. That means that domain memory access is not subject to an additional layer of virtualization, either. Memory access operates in the same way on all SPARC systems, whether or not they use domains. Over-provisioning does not apply to either CPU or memory with SPARC domains. We have seen that access to CPU and memory resources on SPARC systems used in Oracle SuperCluster does not impose overheads, both because these resources are dedicated to each domain, and also because the same highly efficient SPARC hypervisor is always in use, whether or not domains are configured.

We’ve examined CPU and memory. What about I/O? I/O virtualization is a major source of performance overhead in most virtualization implementations. I/O virtualization with Oracle VM Server for SPARC takes one of three forms. The first is partitioning at PCIe slot granularity.
In this case one or more PCIe slots, along with any PCIe devices hosted in them, are assigned uniquely to a single domain. The result is that I/O devices are dedicated to that domain. As for CPU and memory, the virtualization in this case is limited to resource partitioning and therefore does not incur the usual overheads inherent in traditional virtualization.
This type of virtualization has been available on every Oracle SuperCluster platform release, and indeed virtualization of this type was the only option available on the original SPARC SuperCluster T4-4 platform. In this implementation, InfiniBand HCAs (which carry all storage and network traffic within SuperCluster), and 10GbE NICs (which carry network traffic between the SuperCluster rack and the datacenter), are dedicated to the domains to which they are assigned. As is true for CPU and memory access, I/O access for this implementation follows the same code path whether or not domains are in use.
Domains of this type are referred to as Dedicated Domains on SuperCluster, since all CPU and memory resources, and InfiniBand and 10GbE devices, are uniquely dedicated to a single domain. Such domains have zero overheads with respect to performance. SuperCluster Dedicated Domains are illustrated in the diagram below. The second form is virtualization based on SR-IOV.
For Oracle SuperCluster T5-8 and subsequent SuperCluster platform releases, shared I/O has also been available for InfiniBand and 10GbE devices. The resulting I/O Domains leverage SR-IOV technology, and feature I/O virtualization with very low, but not zero, performance overheads. The benefit of the SR-IOV technology used in I/O Domains is that InfiniBand and 10GbE devices can be shared between multiple domains, since domains of this type do not require dedicated I/O devices. SuperCluster I/O Domains are illustrated in the diagram below. The third form is virtualization based on proxies in combination with virtual device drivers.
This type of virtualization has been used on all SuperCluster implementations for functions that are not performance-critical, such as console access and virtual disks used as domain root and swap devices. All Oracle SuperCluster platforms since Oracle SuperCluster T5-8—including the current Oracle SuperCluster M8—support hybrid configurations that deliver InfiniBand and 10GbE I/O virtualization via Dedicated Domains (domains that use PCIe slot partitioning), and/or via I/O Domains (domains that leverage SR-IOV virtualization). An additional layer of virtualization is also supported, with one or more low-overhead Oracle Solaris Zones able to be deployed in domains of any type. An example of a configuration featuring nested virtualization is illustrated in the diagram below.

The Oracle SuperCluster tooling leverages SuperCluster’s built-in redundancy, along with both the resource partitioning and resource virtualization described above, to allow customers to deploy flexible and highly available configurations. High Availability will be the subject of a future SuperCluster blog.

In summary, SPARC domains are able to offer efficient and secure isolation with zero or very low performance overheads. The current Oracle SuperCluster M8 platform delivers domain-based virtual machines with zero performance overheads for CPU and memory operations. Oracle SuperCluster M8 virtual machines also deliver I/O virtualization for InfiniBand and 10GbE with either zero performance overheads via Dedicated Domains, or with very low performance overheads via I/O Domains. Learn more here.

About the Author

Allan Packer is a Senior Principal Software Engineer working for the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle.
He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001.  Allan is currently the technical lead and architect for Oracle SuperCluster.


Cloud Infrastructure Services

Mapping a Path to Profitability for the Banking Industry

Just over 20 years ago, a supercomputer named Deep Blue made history by beating the world's best chess player, Garry Kasparov, in a six-game match. It did so using hardware with a little over 11 Gflops of processing speed. In contrast, the iPhone X you might be holding right now is capable of about 346 Gflops. That's enough raw computing power to take on Kasparov plus 30 more grandmasters... at the same time.

Such comparisons remind us that even by modern technology-industry standards, mobile technology continues to advance at a breakneck pace. The result of this trend—a relentless cycle of innovation, rising consumer expectations, and business disruption—has created major challenges as well as lucrative opportunities for the banking industry. Today, more banks are discovering that a successful mobile strategy offers a clear path to a profitable future. They are also discovering, however, that the wrong IT infrastructure decisions—especially those involving legacy infrastructure—risk turning this journey into a costly dead end.

Understanding the Mobile Banking Opportunity

There are many reasons why banks increasingly view long-term success through a mobile banking lens. Consider a few examples of the opportunities that an institution can unlock with a successful mobile strategy:

Room to grow: According to the Citi 2018 Mobile Banking Survey, 81 percent of U.S. consumers now use mobile banking at least nine days a month, and 46 percent increased their mobile usage in the past year. Mobile banking apps are now the third most widely used type of app—trailing only social media and weather apps.

A global opportunity: According to the World Economic Forum, 500 million adults worldwide became bank accountholders for the first time—but two billion more remain without banking services.
As with access to healthcare and education, easy access to affordable mobile connectivity—with 1.6 billion new mobile subscribers coming online by 2020—will put banking and payment services in front of many people for the first time.

A mobile-banking revenue boost: According to a 2016 Fiserv study, mobile banking customers tend to hold more bank products than branch-only customers—a trend that suggests bigger cross-selling opportunities. As a result, mobile banking customers bring in an average of 72 percent more revenue than branch-only customers.

Millennials are "mobile-first" banking customers: 62 percent of Millennials increased their mobile banking usage last year, and 68 percent of those who use mobile banking see their smartphones replacing their physical wallets.

Second-Rate Mobile Banking Technology Is Risky Business—and Getting Riskier

As mobile technology advances, however, so do the risks associated with a second-rate mobile banking presence. This is especially true for banks that previously settled on a "good enough" mobile strategy—an approach that, in many cases, was designed to work within or around the limitations of a bank's legacy systems.

Two risks stand out for banks that continue to accept a "good enough" approach. First, as competitors invest in cutting-edge mobile technology, they expose the glaring usability, reliability, and capability gaps associated with legacy IT infrastructure. Second, it's clear that technology innovation drives rising consumer expectations. When a bank's mobile offerings fall short, the consequences can be profound, far-reaching, and extremely difficult to rectify:

Unhappy consumers are ready and willing to abandon their banks: In 2016, about one in nine North American consumers switched banks.

Millennials are even faster to switch: During the same period, about one in five adults age 34 or younger switched banks.
Another 32 percent of those surveyed said they would switch in the future if another institution offered easier-to-use digital banking services.

Bad banking apps are a big deal: Seeking a better mobile app experience is now the third most common reason for switching banks—ahead of security concerns and customer-service failures.

Digital lag leaves mobile apps lacking: A recent survey of UK bank customers found that just one in four said they were able to do everything they wanted using a bank's mobile app, and only 34 percent found their bank's app easy to use.

There are many reasons why a bank might continue to rely on a lower-caliber mobile presence built on aging legacy infrastructure. It's very difficult, however, to imagine why any of those reasons would justify this level of possibly grievous damage to a bank's customer relationships, brand image, and industry reputation.

It's Not Too Late to Invest in Mobile Banking Success

I know that I have painted a foreboding picture—especially for banks that want to embrace a modern technology infrastructure but haven't yet been able to follow through. That's why it's important to make another point: It's not too late to get ahead of these challenges and to make the investments that enable a truly first-rate mobile banking strategy.

First, bear in mind that traditional banks still hold some very important cards: Consumers still consider them more deserving of trust than most businesses; their physical branches (though declining in numbers) remain important for certain types of advisory and high-value services; and they have the compliance and legal expertise required to navigate the treacherous regulatory waters of the banking mainstream.

Second, it's crucial to recognize that moving away from legacy infrastructure—the sooner the better—may be the single most important move a bank can make to trigger a quick and decisive pivot toward mobile banking success.
4 Keys to Winning with Bank IT Infrastructure

Let's focus now on specifics: four action items that a bank IT leader can use to drive a fast and effective infrastructure modernization program:

1. Embrace the cloud to support global growth. Mobile technology performance is key to creating a good user experience; nobody likes to wait, especially when they want to access their money. Cloud-ready infrastructure is a much better foundation for building robust and reliable mobile offerings—for example, it eliminates the latency problems that arise when on-premises systems try to serve a global customer base.

2. Get and stay ahead with help from integrated, co-engineered systems. Hardware and software designed to work together, offered in simple pre-configured and pre-optimized packages, deliver better performance and faster deployment than DIY, non-optimized alternatives. This can be a bank's most powerful technology weapon for fighting back against the complexity, management, and reliability issues that accompany rapid growth and pressure to scale.

3. Liberate your IT staff to do the things that matter. Co-engineered systems and cloud infrastructure both contribute to many of the same goals: attacking complexity, enabling growth, and designing scalable, resilient systems. This means less time spent on tedious maintenance tasks—and more time focused on the business goals that drive success.

4. Build infrastructure that's ready to handle today's data and analytics challenges. An entire category of fintech upstarts is focused on reaching new markets through the use of unconventional credit analytics and scoring systems. These firms incorporate everything from educational achievements to call center records and website analytics into models that identify preferences and assess risk for customers who don't yet have—and might never get—conventional credit scores. In many cases, the only way to serve these customers will be through mobile banking apps and systems.
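To make the unconventional-analytics idea in the fourth item concrete, here is a toy sketch in Python. The feature names and weights are entirely hypothetical, invented for this example; real alternative-credit models are far more sophisticated and must also satisfy the explainability and fair-lending requirements discussed elsewhere in this blog:

```python
# Toy "alternative data" risk score. Feature names and weights are
# hypothetical, invented for illustration -- not a real scoring model.

def unconventional_risk_score(applicant):
    """Return a risk estimate in [0, 1]; lower means less risky."""
    weights = {
        "years_higher_education": -0.03,   # more education -> slightly lower risk
        "support_calls_per_month": 0.05,   # frequent call-center contact -> higher risk
        "on_time_bill_ratio": -0.40,       # paying utility bills on time -> lower risk
    }
    base = 0.5  # neutral starting point for an applicant with no signals
    score = base + sum(w * applicant.get(k, 0) for k, w in weights.items())
    return max(0.0, min(1.0, score))  # clamp into [0, 1]

applicant = {
    "years_higher_education": 4,
    "support_calls_per_month": 2,
    "on_time_bill_ratio": 0.9,
}
risk = unconventional_risk_score(applicant)  # about 0.12 for this applicant
```

The point is simply that signals from outside a traditional credit file can be combined into a risk estimate for customers who have no conventional score at all.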
Many banks could pursue similar opportunities, given the massive quantities of customer data at their disposal. But first, they'll have to put systems in place that are capable of pulling this data from dozens of siloed sources, combining it with the masses of data flowing into the organization, and applying the right management, analytical, and storage solutions to unlock the insights within.

Oracle Sets Up Banks for Mobile-Tech Success

Oracle's engineered systems are especially adept at giving banks everything they need for a truly modern, mobile-ready IT infrastructure. First, that's because they are built as fully integrated systems. Engineered systems like Oracle Exadata give banks a dedicated, high-availability environment to run Oracle Database, while systems like Oracle Exalytics and Oracle Exalogic run advanced analytics and other critical business applications.

Second, along with a single, integrated technology stack, Oracle gives banks a single, integrated technology partner to support a modern mobile banking strategy. This is a powerful advantage when combined with Oracle's ability to deliver openness where it matters: compliance with open architectures, open industry standards, and open APIs, and a commitment to interoperability and integration.

These are the qualities that truly give an IT team the freedom and flexibility to support innovative mobile banking functions that are money in the bank.

About the Author

Srinivasan Ayyamoni is a Certified Accountant with 20 years of experience in business transformation, technology integration, and establishing finance shared services for large global enterprises. As a transformation consulting lead with Cognizant's Oracle Solution Practice, he manages large digital transformation engagements focused on helping clients establish a high-performance finance function and partnering with them to achieve superior enterprise value.


The Future of Banking: How AI is Transforming the Industry

Today's blog post is a Q&A session with top fintech influencer and founder of Unconventional Ventures, Theodora Lau. Named one of 44 "2017 Innovators to Watch" by Bank Innovation, ranked No. 2 among Top FinTech Influencers 2018 by Onalytica, and named to the list of LinkedIn Top Voices 2017 for Economy and Finance, she's a powerful voice in the industry.

If you probe into the rapid adoption of artificial intelligence (AI) initiatives in the enterprise, it quickly becomes clear what's behind it: big data. In a 2018 NewVantage Partners survey of Fortune 1000 executives, 76.5 percent cited the greater proliferation and availability of data as what is making AI possible. As Randy Bean puts it in an MIT Sloan Management Review article, "For the first time, large corporations report that they have direct access to meaningful volumes and sources of data that can feed AI algorithms to detect patterns and understand behaviors…these companies combine big data, AI algorithms, and computing power to produce a range of business benefits from real-time consumer credit approval to new product offers."

To process all that data, such as financial data, at speed and scale, enterprises need infrastructure to support it. Infrastructure specifically designed for financial and big data applications, with hardware and software that have been co-engineered to work optimally together, can offer better performance and faster analytics. It's definitely helping deliver a better customer experience—and that's especially true in the financial services industry.

We asked fintech influencer Theodora Lau to talk about the major innovations taking place in the traditionally conservative world of financial services. One key driver of this innovation is the infiltration of AI technology into the financial services industry. A second driver is a new era of partnerships between fintech startups and traditional financial institutions.
Traditional financial institutions and fintechs have discovered that, by partnering, they can take advantage of each other's strengths to develop innovative, revenue-generating offerings. PwC's Global FinTech Report 2017 found that 82 percent of mainstream financial institutions expect to increase their fintech partnerships in the next three to five years.

Theo, how are fintech startups disrupting the industry, and how are the traditional financial services companies responding to that?

If you'd asked that question a few years ago, most people would have said banks are in trouble and need to defend against fintechs. But, starting sometime around 2017, the industry began to turn around and become more willing to collaborate. It makes sense, because fintech startups are typically more focused on specific use cases: they home in on those and do them really well. They have really good ideas and they tend to be very customer experience-driven, though they lack the scale of incumbent banks. And, as much as we talk about how bank infrastructure is aging, banks still have a large customer base and can scale. Traditional financial services companies have existing customers and brand recognition, whereas fintech startups are typically starting from scratch.

At the end of the day, it's money that we're talking about, and money is very personal and emotional. How much will a consumer actually go out and trust a company that has no history? While a startup may have the most beautiful customer experience, will I trust it enough to hand over my money? I see the two of them [traditional banks and fintechs] working together as the best outcome from a consumer perspective as well as for their own survival.

Is it true that new technology is making more collaboration possible as well?

Yes, exactly—through APIs and open banking.
I don't believe that any single bank can offer everything that the consumer wants, and I don't think it's in their best interest to try to be everything for everybody. For instance, ING, a large bank based in Amsterdam, has multiple operating units in different countries. Its German operations formed a partnership with a startup called Scalable Capital, an online wealth manager and robo-advisor, to offer a fully digital solution for its customers in Germany. This is a brilliant example of a partnership where the bank extends its product offerings by leveraging the solutions and capabilities that someone else has.

What AI technology is changing the industry?

Open banking

Open banking is the big game changer. One example is Starling Bank in the UK, which does a really good job of being an online marketplace. Using APIs, it acts as a hub through which consumers can get access to different things that traditional banks don't offer, including spending insights, location-based intelligence, and links to retailers' loyalty programs.

Technology companies with banking services

Another example is Tencent and Alibaba in China and the big ecosystem they've built. Between them, the two companies handle over 90 percent of all mobile payments in China. They're not banks, but technology companies that put the consumer at the center of everything they do. They view payments and financial services not as ends in themselves, but as tools to further enhance their offerings.

Voice banking

We can't forget about voice banking. We see more banks trying to get into that space—though we are not quite there yet. Voice is very intuitive. It's just easier to talk than it is to remember how to navigate a menu, which is a challenge in online/mobile banking. Imagine if you could actually say, "Hey, pay my bills," instead of having to remember where you need to go in the menu tree.

Let's go deeper into how AI has changed the customer experience.
How has it affected personalization and the omnichannel experience?

When we're talking about AI in customer experience, it's important to remember that banks are not really competing with other banks anymore. When consumers do their "banking," they're comparing the experience to what they get from every other online business. How does banking compare to getting something from an ecommerce site? Is it quick and easy? Is it available when I want it and where I want it?

The threat to banks isn't so much fintech companies as the big tech companies like Apple, Amazon, Alibaba, and Tencent. They are the ones banks should be worried about. Look how many customers they have. Look at the products and services they offer, even payments. It's because of the vast amount of customer information they collect, as well as data analytics and AI, that big tech companies can provide data insights into user behavior and spending habits, allowing them to anticipate your needs and offer contextual, personalized recommendations.

That's how payments are supposed to work as well. Consumers shouldn't have to think, "I need to pay something." They have a specific task they want to do, and banking services are just a means to an end. From a consumer perspective, hopefully, AI can make banking ambient and transparent in our increasingly connected world.

We've been talking a lot about retail banking, but I presume AI is also making similar changes to other areas of financial services.

Marketing is a good example. A big thing is figuring out how to entice people to open an email, because everything is digital now. HSBC ran a trial using AI to figure out whether its members would prefer rewards for travel or merchandise versus rewards in the form of gift cards or cash. It sent emails to 75,000 credit card members using recommendations generated by AI, while a control group received emails with rewards from a random category.
As it turned out, the emails using AI-generated recommendations had a 40 percent higher open rate. That's a fascinating business use case, because you don't want to waste your marketing dollars if people are not going to open your emails.

Do traditional financial service companies have the infrastructure in place to fully leverage AI or even to partner with fintechs? How is AI changing processes within their firms' infrastructure?

Financial institutions have a lot of data, but when it comes to being able to leverage AI, which is very heavily data-dependent, the challenge is being able to access that data. A lot of times, all of these systems are very siloed. So while a bank may have a ton of data about a customer, how well can it actually pull all of that data together to generate insights that are useful and can be leveraged?

The other challenge: If you can get the data together in a meaningful way, are the results explainable? If you are using AI to make decisions, such as in lending, are you going to be able to explain what the AI is recommending, and how someone qualifies for a loan, for example? That's something you need to do.

What's holding the banks back in terms of modernizing their technology?

It's a couple of things. You need to look at the make-up of the people, because it has to start from the top.

Embrace technology

At the upper layer, finance people have been doing the same thing for many years. Until you have leaders, including senior executives and board members, who are passionate about and actually understand technology, it's hard to transform. It goes beyond just having a mobile app—true digital transformation and modernization involve changes in culture, mindsets, and processes.

Data security

Of course, it's also a heavily regulated industry. If you're going to be upgrading something, and you already have customers and money and transactions there, you need to be very careful about what you're doing.
Privacy and security of data are of paramount importance.

The pain of upgrading infrastructure

It's also a very expensive and lengthy process to upgrade core systems, so money is definitely one part of why financial institutions aren't modernizing their infrastructure. Some of my friends would say that some banks are actually not scared enough yet. Look at their earnings—they're still making good money. So if they're not feeling the pain as much yet, then how urgent is it for them to actually do something drastic?

Yet many mid-size financial companies don't have large budgets, but still need to modernize their technology solutions to manage the explosion of data.

There are banks that are certainly more at the forefront of technology, and they're betting big on it. For example, JPMorgan Chase's technology budget is over $10 billion in 2018, with most of it going toward enhancing mobile and web-based services.

Where do you see AI taking financial services in the future?

What I would like to see in the future in the US is what we see right now in China with their platforms. Mobile adoption is so much higher in India, China, and Africa than in the US, where the mode of doing things is so different. We shouldn't be looking at banking as an entity per se. Consumers are looking for banking services. That's what we will be evolving to in the future, and we'll need AI to be the brain and the engine that offers a deeper, richer, more personalized experience.

Authentication is another interesting area. No one wants to remember passwords or carry those little tokens. That's not customer-friendly at all. So biometrics and voice authentication will be very fascinating, at least for voice banking, which is still in the exploratory stage. Checking balances is not really exciting, but, in the future, AI will let the bank know I got a work bonus and will automatically ask whether I'd like to put aside 10 percent of it in savings.
Things like that will enable financial wellness and more value overall for customers. That’s where I think AI can help in the future—and that’s how we can make banking better. And behind this future will be the enormous quantities of data that make this customer knowledge possible, and the ability to collect and analyze the data in real time, built on the right infrastructure. Learn more about how machine learning and AI can add substantial value to the financial services ecosystem.    


Engineered Systems

Coming to a Cloud Near You: Backup Made Easy

When it comes to business, the only thing you can expect is the unexpected. That's why offsite backups are so important. As more and more of the business comes to rely on data, it's crucial to make sure there's a copy of that data somewhere offsite in case of natural disaster, user error, equipment failure, or a cyberattack. With cloud backup, enterprises of any size can easily write and store their backups in the cloud for better performance, reliability, security, and ease of use compared to traditional, tape-based backup—without the responsibility of manually managing backups or maintaining backup systems.

Oracle recently added cloud backup to Oracle Database Appliance so that organizations won't miss out on the benefits of cloud backup. We spoke with Tammy Bednar, Senior Director of Product Management for Oracle Database Appliance, to learn more about this new feature, why it matters to IT managers, and how it helps businesses get on the path to the cloud.

Tammy, tell us about the easy and integrated Oracle Cloud backup feature.

Cloud backup is a new capability of Oracle Database Appliance that helps safeguard database integrity. Customers can now easily back up to the Oracle Cloud or to a local Oracle Database Appliance, clone a database from a cloud backup, or recover from the backup. A single interface makes it simple to implement a database recovery strategy from any workstation. To implement this strategy, all you need to do is log in to the Oracle Database Appliance web console, store your cloud credentials, create a backup policy, and attach it to a database.

What are the benefits of integrated cloud backup?

Before now, customers had to use a third-party software solution to back up their data to disk or tape, or, alternatively, manually configure their own backup solution. In addition, customers had no way of checking the integrity of a backup to make sure that the data that was backed up can actually be recovered. This left them at risk of a corrupt backup file.
With cloud backup, IT has the ability to monitor the integrity of the backup from on-premises to the cloud, giving them complete visibility for end-to-end confidence. An added benefit is that there are no upfront hardware costs to worry about. Cloud backup is a pay-as-you-go, elastic model, which means you pay for only the storage you use, without having to worry about running out of tapes or local storage. Because it's the cloud, it can scale up as quickly as you require to meet your business needs.

How is data protection enhanced with Oracle Cloud database backup?

First, the network used between the Oracle Database Appliance and the Oracle Cloud is completely encrypted. In addition, each backup file is encrypted before it leaves the on-premises environment, so it's also encrypted in transit. Finally, once the backup is in the Oracle Cloud, it is stored securely, so you have end-to-end security. Plus, your data is better protected with the redundancy inherent in the cloud, so you don't have to worry as you would if your physical backup were damaged.

Is cloud backup an option for regulated industries that require keeping data on-premises?

Absolutely. The cloud backup has been specifically designed to provide the archive capabilities necessary to create a usable backup that meets compliance standards. Not only that, but it is extremely cost effective and secure.

What else makes Database Appliance cloud backup different?

Database Appliance uses Oracle Recovery Manager (RMAN), which enables users to make online backups of an Oracle database without requiring the database to be brought down. To provide better recoverability, the archive logs are automatically backed up to the cloud every 15 minutes by default.

What is the recovery process like?

The customer just logs in to the Oracle Database Appliance web console. They can then choose to recover from the latest backup available or select one from an earlier point in time.
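As a rough sketch of what RMAN does under the covers here (on Oracle Database Appliance, the web console and the backup policy generate and run the actual commands for you, and the channel and credential configuration for the cloud backup device is omitted), an online backup and a point-in-time recovery look something like this:

```
# Illustrative RMAN commands only -- not the exact scripts the
# appliance tooling generates.

RUN {
  # Online ("hot") backup: the database stays open and available
  BACKUP DEVICE TYPE SBT DATABASE PLUS ARCHIVELOG;
}

# Recover to an earlier point in time from an existing backup
RUN {
  SET UNTIL TIME "TO_DATE('2018-06-01 12:00','YYYY-MM-DD HH24:MI')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
```

The 15-minute archive-log backups Tammy mentions are what make fine-grained point-in-time recovery like this possible.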
How does a cloud backup facilitate the path to the cloud?

For customers using on-premises systems, a cloud backup may be their first chance to incorporate the cloud into their IT infrastructure. Oracle's cloud-ready infrastructure provides exact cloud equivalents, using the same framework you're already used to. This makes it simpler to move from one to the other. Once your backup solution is in the cloud, it becomes much easier to use the backup itself for business purposes rather than just as a backup. For example, some customers use their backup to create Dev/Test environments to incorporate into their work, or clone it to another on-premises system.

The cloud is transforming the way enterprises do business for the better. Your backup is no different. With no hardware costs, faster backup and recovery times, and greater reliability, it's easy to see how cloud backup will play an important role in every organization's digital transformation. Learn more about Oracle Cloud Backup and download the "Back Up to Oracle Cloud" eBook to understand the ins and outs of using the Oracle Cloud to back up your Oracle Database Appliance.

About the Author

Tammy has worked in the computer industry for more than 30 years. She started out coding applications in Ada before deciding a change was needed. Oracle hired her into the Database Support Organization 20 years ago, and she has been involved with database releases since version 6.0.36. Tammy began her product management career on the database High Availability team, working on Recovery Manager (RMAN) and database backup and recovery; moved to the Database Security development team, focusing on auditing, Oracle Audit Vault, and Oracle Database Firewall; and now focuses on Oracle Database Appliance.


Cloud Infrastructure Services

Why Should CMOs Care About GDPR?

What GDPR means for CMOs: "Is all the hype justified?"

As a direct link to customers and their data, marketers will be uniquely affected by GDPR, so we asked Oracle's Marie Escaro, Marketing Operations Specialist for OMC EMEA SaaS, and Kim Barlow, Director of Strategic and Analytic Services for OMC EMEA Consulting, to discuss how GDPR affects marketing teams.

Is all the hype around GDPR justified? How seriously should marketers be taking it?

Kim: European regulators have a clear mandate to tighten controls on the way businesses collect, use, and share data, and the prospect of large fines for non-compliance is enough to make companies err on the side of caution. Marketers should take this very seriously, as a large part of their role is to ensure the organization has a prescriptive approach to acquiring, managing, and using data.

Marie: Businesses increasingly rely on data to get closer to their customers. With data now viewed as the soft currency of modern business, companies have every reason to put the necessary controls in place to protect themselves and their customers.

What does this mean for CMOs and marketing teams?

Marie: Marketing teams need a clear view of what data they have, when they collected it, and how it is being used across the business. With this visibility, they can define processes to control that data. I once worked with a company that stored information in seven different databases without a single common identifier. It took two years to unify all this onto a single database, which should serve as motivation for any business in a similar position to start consolidating their data today. It's equally important to set up processes that prioritize data quality. Encryption is a good practice from a security standpoint, but marketers also need to ensure their teams are working with relevant and accurate data.

What's been holding marketers back?

Kim: There is still a misconception around who is responsible for data protection within the organization.
It's easy to assume this is the domain of the IT and legal departments, but every department uses data in some form and is therefore responsible for making sure it does so responsibly. Marketing needs to have a clear voice in this conversation. Many businesses are also stuck with a siloed approach to their channel marketing and marketing data, which makes the necessary collaboration difficult. These channel silos within marketing teams have developed through years of growth, expansion, and acquisitions, and breaking them down must be a priority so everyone in the business can work off a centralized data platform.

Is this going to hamper businesses or prove more trouble than it is worth?

Kim: Protecting data is definitely worth the effort for any responsible business. But GDPR is not just about data protection. It's a framework for new ways of working that will absolutely help businesses modernize their approach to handling data, and benefit them in the long term. If we accept that data is an asset with market value, then it's only natural that customers gain more control over who can access their personal information and how it is used and shared. Giving customers confidence that their data is safe and being looked after responsibly, while ensuring that data is better structured and of higher quality, will be good for the businesses deriving value from that data.

What should CMOs do to tackle GDPR successfully?

Marie: As with any major project, success will come down to a structured approach and buy-in from employees. CMOs need to stay close to this issue, but in the interest of their own time they should at least appoint a strong individual or team as part of an organization-wide approach to compliance. Marketing needs to be a part of that collaborative effort and should be working in a joined-up way with finance, IT, operations, sales, and any other part of the business to ensure all data is accounted for and properly protected.
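Marie's point about a single common identifier, and about knowing what data you hold, when you collected it, and how it is used, can be made concrete with a small sketch. This is a hypothetical, minimal record layout with invented field names, not a prescribed GDPR schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# One record per contact, channel, and purpose -- enough to answer the
# questions Marie raises: what data do we hold, when did we collect it,
# and on what basis are we using it?

@dataclass
class ConsentRecord:
    contact_id: str                      # the single common identifier
    channel: str                         # e.g. "email", "sms"
    purpose: str                         # e.g. "newsletter"
    lawful_basis: str                    # e.g. "consent"
    collected_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        # A record is usable only until the customer withdraws permission
        return self.withdrawn_at is None

rec = ConsentRecord("c-1001", "email", "newsletter", "consent",
                    collected_at=datetime.now(timezone.utc))
```

With records like this consolidated in one place, answering a subject-access or erasure request becomes a simple lookup rather than a hunt across seven disconnected databases.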
Find out more and discover how Oracle can help with GDPR.

About the Authors

Marie Escaro is a Marketing Operations Specialist at Oracle, with more than 15 years of experience coordinating partnerships between sales and marketing, driving adoption of high-performance marketing tools, and managing data quality across CRM and automated marketing processes. She specializes in marketing automation, CRM, direct marketing, international localization and communication, is an Eloqua Master, and enjoys the feeling of having a positive impact and changing the world by working with the best marketers in the industry.

Kim Barlow is currently the Director of Strategic and Analytical Services EMEA at Oracle. She has had an extensive career in tech and is currently working with a number of clients to help drive their lifecycle and digital strategies using Oracle technology. She loves her life, her family, her friends and her work colleagues.



What is IT's Role in Regards to GDPR?

Usually, when any sort of new compliance regulation regarding personal data comes out, it is automatically assumed to be solely “IT’s problem” because technology is such a huge component of data collection and processing systems. But compliance is in fact an organization-wide commitment: no individual or single department can make the organization compliant on its own. If you’ve somehow missed the May 25th deadline, don’t panic too much; you’re not alone. But you do need to move quickly, because there are clear areas where IT can add significant value in helping the organization achieve GDPR compliance faster and more methodically.

1. Be a data champion

Organizations know how valuable their data is, but many departments, business units and even board members may not realize how much data they have access to, where it resides, how it is created, how it could be used and how it is protected. This is one of the main reasons why organizations are lagging: unclear oversight of where all personally identifiable data (PID) resides.

The IT department can play a clear role in helping organizations understand why data, and by extension GDPR, is so important, and determine the best way to use and protect it. Educating the wider organization on what exactly GDPR is, and on the ramifications of non-compliance, will help create a sense of urgency and ensure that everyone is moving quickly to comply. In addition, GDPR is an excellent opportunity for IT to explore integrated infrastructure technology and different approaches to data management that can help unify where and how PID is used and processed. Oracle Exadata is a complete engineered system that is ideal for consolidating, and improving the performance of, the Oracle Databases that handle much of an organization’s PID.

2.
Ensure data security

GDPR considers protection of PID a fundamental human right, so organizations need to ensure they understand what PID they have access to and put appropriate protective measures in place. IT has a role to play in working with the organization to assess security risks and ensure that appropriate protective measures, such as encryption, access controls, and attack prevention and detection, are in place.

In my previous post on the new regulations the telecommunications industry is facing, I mentioned that PCI-DSS compliance is being used as a basic guideline to help IT achieve GDPR compliance. GDPR is unfortunately quite broad and not well defined, whereas PCI-DSS makes much clearer demands on PID security, so many companies are intelligently using it as a starting point. Engineered systems, including Exadata, have undergone rigorous review to determine compliance with PCI DSS v3.2, so customers can take care of at least the technological requirements of that regulation.

At a glance, Exadata features extensive database security measures to help customers protect and control the flow of PID:

Perimeter security and defense in depth
Security open by default
DB-scoped security and ASM-scoped security (cellkey.ora: key, asm, realm)
InfiniBand networking, open by default, with the option to assign particular gateways to segregate networks
auditd monitoring enabled (/etc/audit/audit.rules)
Cellwall (iptables) firewall
A password-protected boot loader

All of these align well with common industry compliance strategies for GDPR that focus on: 1) authentication, 2) authorization, 3) credential management, and 4) privilege management.

3. Help the organization be responsive

GDPR requires organizations not only to protect personal data but also to respond to requests from individuals who, among other things, want to amend or delete data held on them.
That means that personal data must be collected, collated and structured in a way that enables effective and reliable control of all this information. This means breaking down internal silos and ensuring an organization has a clear view of its processing activities with regard to personal data.

4. Identify the best tools for the job

GDPR compliance is as much about process, culture and planning as it is about technology. However, there are products available that can help organizations with key elements of GDPR compliance, such as data management, security and the automated enforcement of security measures. Advances in automation and artificial intelligence mean many tools offer a level of proactivity and scalability that doesn’t lessen the responsibility of people within the organization but can reduce the workload and put in place an approach that can evolve with changing compliance requirements.

5. See the potential

An improved approach to security and compliance management, fit for the digital economy, can give organizations the confidence to unlock the full potential of their data. If data is more secure, better ordered and easier to make sense of, it stands to reason an organization can do more with it. It may be tempting to see GDPR as an unwelcome chore. However, companies should also bear in mind that this is an opportunity to seek differentiation and greater value, and to build new data-driven business models, confident in the knowledge that they are using data in a compliant way. Giving consumers the confidence to share their data is also good for businesses.

The IT department will know better than most how the full value of data can be unlocked, and can help businesses move away from seeing GDPR as a cost of doing business and start seeing it as an opportunity to do business better.

Learn more about GDPR and how Oracle can help
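As a small illustration of point 3 above (a hypothetical sketch, not Oracle tooling): once PID is collated under a common customer identifier, an erasure request can be propagated to every system that holds the subject's data, with an audit trail of what was removed where. All class and store names here are invented.

```python
# Hypothetical sketch of an erasure-request handler. Each "store" is an
# in-memory stand-in for one system that holds personal data, keyed by
# a common customer identifier.

class PidStore:
    def __init__(self, name):
        self.name = name
        self.records = {}  # customer_id -> personal data

    def delete(self, customer_id):
        """Remove the subject's data; report whether anything was held."""
        return self.records.pop(customer_id, None) is not None

def handle_erasure_request(customer_id, stores):
    """Delete a subject's data everywhere and return an audit trail."""
    return {store.name: store.delete(customer_id) for store in stores}

crm = PidStore("crm")
marketing = PidStore("marketing")
crm.records["c-42"] = {"email": "ada@example.com"}
marketing.records["c-42"] = {"preferences": ["newsletter"]}

audit = handle_erasure_request("c-42", [crm, marketing])
assert audit == {"crm": True, "marketing": True}
assert "c-42" not in crm.records
```

The point of the sketch is the shape of the problem, not the storage: without a common identifier across systems, the fan-out in `handle_erasure_request` is impossible to do reliably.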


Supply Chain Management Is Evolving: How Will It Affect Your Enterprise?

Today's blog post is a Q&A session with leading supply chain and ERP influencer Lisa Anderson.

Operational efficiencies, productivity improvements, and cost savings are the top three strategic advantages of cloud-based supply chain management, according to an IDG survey of senior managers and directors around the world. To gain these advantages, enterprises need infrastructure that helps them cost-effectively harness their large data workloads and move to the cloud easily. In fact, the biggest challenge for most companies is figuring out how to engineer their on-premises infrastructure so that it mirrors the capabilities of the cloud. This way, when companies are ready, they can take their supply chain data and make a seamless, fast migration to the cloud. Whether they are manufacturers, retailers, or large corporations, companies looking to gain real-time, complete visibility into their supply chain require integrated infrastructure with scalable data storage, processing, and computing power to get the job done.

To better understand these benefits and how innovation and infrastructure are changing the supply chain, we spoke with Lisa Anderson, president of LMA Consulting Group. As an internationally recognized supply chain expert, she presents at conferences such as the Global Supply Chain & Logistics Summit and the APICS International Conference, and is frequently quoted in the media, including The Wall Street Journal, ABC News, and The CEO Magazine. Ranked number 16 in SAP’s Supply Chain Influencers and recognized as one of the top 1 percent of consultants worldwide, Lisa has deep experience helping businesses maximize value.

You’ve said that the customer experience continues to play a role in the transformation of supply chain management. How is it impacting both B2C and B2B industries?

We’ve all become accustomed to getting whatever we need, whenever we need it, with frequent status updates and easy returns. We’ve raised the bar.
And it leads to a host of challenges for vendors, mainly in the sense that they need a wide breadth of products available to meet customer demand at any time. Even though the vast majority of my clients are not in the retail or B2C world, they're all impacted by this elevated experience.

I was recently talking with a couple of distribution executives who said that, several years ago, there was a small percentage of deliveries that were due on the same day, if any. Now, roughly 80 percent of the orders they receive are expected on the same day. They’ve had to start working on Sundays because customers—including business customers—are expecting these extremely rapid deliveries.

There are several other ecommerce themes that are changing supply chain management. One is 24/7 accessibility: the ability to place orders and look up your order status whenever and wherever you are. Another is rapid customization. One of my clients has become number one in his industry by making sure his company provides not just rapid deliveries, but also quickly customized orders. His company does things like paint on the fly, which doesn’t normally happen in manufacturing.

What is the technology that is making this supply chain management transformation possible?

Blockchain impacts supply chain management by allowing for immediate visibility and transparency of global financial transactions—like electronic data interchange (EDI) on steroids. When products require traceability, such as if you have a recall, you can use blockchain to immediately see where your products are in the supply chain and who paid for what. That traceability can certainly be achieved within ERP software already, but if you require the next layer of complexity and immediate transparency, then blockchain technology could be useful.
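The traceability idea behind blockchain can be sketched in miniature (a toy hash chain, not from the interview and nothing like a production ledger): each supply chain event embeds the hash of the previous one, so tampering with any earlier record invalidates everything after it.

```python
import hashlib
import json

# Toy append-only hash chain for supply chain events. Payload fields
# ("lot", "step", "payer") are invented for illustration.

def event_hash(event: dict) -> str:
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    event = {"payload": payload, "prev": prev}
    event["hash"] = event_hash({"payload": payload, "prev": prev})
    chain.append(event)

def verify(chain: list) -> bool:
    """Recompute every link; any edit to an earlier event breaks it."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != event_hash(
            {"payload": e["payload"], "prev": e["prev"]}
        ):
            return False
        prev = e["hash"]
    return True

chain = []
append_event(chain, {"lot": "A17", "step": "shipped", "payer": "distributor"})
append_event(chain, {"lot": "A17", "step": "received", "payer": "retailer"})
assert verify(chain)

chain[0]["payload"]["payer"] = "someone-else"  # simulate tampering
assert not verify(chain)
```

A real distributed ledger adds consensus among parties on top of this linking, which is what makes the "who paid for what" record trustworthy across company boundaries.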
Big data is another aspect of technology that is changing the supply chain landscape, because companies can better tailor the customer experience when they know more about what the customer wants. IoT comes down to data, because you’re trying to tie the data together between different devices. In manufacturing, IoT shows up in preventive maintenance and anticipating when a machine might break down before it happens. When you see how different elements are working together, you can target what needs to be fixed or maintained, without just following a schedule that may or may not be addressing a real problem. This can reduce waste and improve efficiency.

But data is just as challenging as it is helpful. Before we even get to work every day, we receive lots of messages between emails, texts, videos, billboards, and messages from our cars—everything is connected these days. The biggest challenge that my clients face is that they’re overwhelmed with data, but they also want and need the data to provide a better customer experience and understand what their customers really need. And they also want to figure out how to do that in a scalable and profitable way. The challenge is how to sift through all the data that’s collected, put it together into something meaningful, and provide information at your fingertips. My clients are very interested in solutions like dashboards, and it’s a key ingredient in selecting software; however, getting it implemented correctly is difficult.

It sounds like the right infrastructure, one that can manage multiple data sources and provide actionable insights, can improve the entire supply chain process. What about the role of the ERP system in supply chain management?

We’ve improved supply chain performance significantly by focusing a lot of effort on the demand plan. Instead of using the older perspective of a monthly forecast and whether it’s accurate as is, we’re looking at how we can do this in a more agile, flexible way.
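As a toy illustration (not from the interview) of what an agile, rolling forecast looks like: simple exponential smoothing folds each new demand signal into the forecast immediately, instead of waiting for a monthly planning cycle. The numbers below are made up.

```python
# Minimal sketch of updating a demand forecast "on the fly" with
# simple exponential smoothing.

def update_forecast(forecast: float, actual: float, alpha: float = 0.3) -> float:
    """Blend the latest actual demand into the running forecast.

    alpha controls responsiveness: higher alpha reacts faster to new
    signals, lower alpha smooths out noise.
    """
    return alpha * actual + (1 - alpha) * forecast

forecast = 100.0                  # last month's static forecast
for actual in [120, 130, 125]:    # new demand signals as they arrive
    forecast = update_forecast(forecast, actual)

print(round(forecast, 1))         # → 116.7
```

Real demand-planning engines add seasonality, trend, and exception alerts on top, but the core idea is the same: every new observation nudges the plan rather than invalidating it.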
The ERP system needs predictive analytics to be able to modify a demand forecast on the fly. Also, by using vendor-managed inventory systems, we’ve been able to reduce lead times. We’re able to meet short lead-time orders that we couldn’t previously meet, with the same or slightly lower inventory levels, at a 5 percent margin improvement. It wasn’t solely due to demand planning, but that was the first step. Once you get beyond demand planning, the next element is going to be a more agile production schedule geared to the customer—one that's going to offer suggestions, give you notices, and be exception-based, so that you don't have to put as much manual effort into it. The demand plan flows down into the production schedule, and then capacity analysis is the next key topic.

What steps can enterprises take to modernize their supply chain management?

We’re in the era of the customer, so start with the demand side of the equation. There are ways, regardless of what your tool set is, to improve your view of demand now and your prediction of future demand. You may not have a system in place to do this yet, but regardless, you should be doing more to look at the demand within your supply chain. One other quick tip is to look at what information you are getting out of your system and how you can better utilize it. I find that no matter what client I’m working with, we can always do a better job of accessing information and taking the most relevant information to make better decisions. Even if your system isn’t yet modernized to the point of predictive analytics, you want to move in that direction. You can do this by getting information from multiple sources and creating a simplified database.

What will supply chain management look like five or 10 years from now, and what technology can help take enterprises there?
We’re going to continue seeing the ecommerce effect: the importance of speed, responsiveness, and agility, and the rise of smaller, more frequent orders. All of my clients are interested in managing their vast supply chain networks with lower costs but better service. They’re trying to find technology to support these goals and figure out how to automate using AI and data. One ideal future involves 3D printing, because you can print what you need, where you need it, when you need it, and further extend your supply chain. Even then, distribution is going to have costs associated with it, and the last mile will continue to be one of the biggest challenges. Delivering all these smaller, more frequent orders to both consumers and businesses significantly impacts transportation and your distribution network. You need your inventory strategically located closer to the customer, or flexible manufacturing capabilities that can respond quickly to demand. The system comes into the picture when you want to set up your network to have what you need, where you need it. How to improve delivery metrics will continue to be a key consideration in the future. If we can reduce the cost to manufacture and distribute inventory by leveraging supply chain management tools, we can reduce prices and do something as radical as bringing more manufacturing back to the U.S.

Take a Deeper Dive…

Supply chain management professionals are eager for new ways to leverage data to drive business value. It is important to understand, however, that successfully using big data requires the right infrastructure, designed to manage multiple data sources and provide the computing power to deliver actionable insights across the entire supply chain process. The key to gaining business value from supply chain data is big data infrastructure that can acquire, store, process, and analyze huge data workloads for supply chain insights.



May Database IT Trends in Review

April and May flew by! Check out the latest database infrastructure happenings you may have missed in the last two months.

In case you missed it...

General Data Protection Regulation (GDPR) took effect on May 25th, and many companies were "unprepared" despite having two years to plan for it. If you're set, great! Otherwise, check out these posts to get up to speed ASAP:

What is GDPR? Everything You Need to Know.
It's Not Too Late: 5 Easy Steps to GDPR Compliance
Your Future Is Calling: Surprise! There’s (Always) More Regulation on the Way

The experts take over

We've recently invited tech luminaries to talk about the intersection of new, emerging technologies and the challenges that organizations are facing in the digital age:

Welcome to the ‘Self-Driving’ Autonomous Database, with Maria Colgan, master product manager, Oracle Database
Going Boldly into the Brave New World of Digital Transformation, with internationally recognized analyst and CXOTalk founder Michael Krigsman
The Transformative Power of Blockchain: How Will It Affect Your Enterprise?, with blockchain expert and Datafloq founder Mark van Rijmenam

How is the telecommunications industry changing?

Your Future Is Calling: How to Turn Data into Value-Added Services
Telcos, Your Future Is Calling: It Wants to Show You What’s Possible
Telcos, Your Future is Calling! Is Your Back Office Holding You Back?
Your Future Is Calling: Get Connected—With Everything

Don’t Miss Future Happenings: subscribe here today!



​It's Not Too Late: 5 Easy Steps to GDPR Compliance

GDPR went into effect on May 25th with, unsurprisingly, many organizations scrambling to make the deadline. If you've been keeping up with this blog, you know that we've been highlighting these topics for months. But don't worry: it’s not too late to take control of your data and prepare your organization. Here, we outline five surprisingly simple steps to help get your organization on the path to compliance.

Step 1: Don’t panic!

Seriously! You may have missed the deadline, but you're not the only one. A recent report estimated that 60% of businesses were likely to miss the GDPR compliance deadline, and the articles published since the 25th indicate this to be quite true. It might be tempting to hastily implement as many data protection measures as possible, as quickly as possible. While this sense of urgency is warranted, as always a measured and strategic approach is best. Companies first need to understand GDPR, how it applies to them, and exactly what their obligations are. This will give them a clear view of the data management and protection measures they need to address their compliance needs.

Step 2: Centralize your data

GDPR asks that only the minimum of necessary user information be collected and processed, and that users have control over what you do with that data and how you hold it. Thus, greater visibility into how and where the organization collects data is imperative. To better monitor data, organizations first need to make relevant information easily accessible to all the right people internally. Years of growth and diversification may have left them with disjointed systems and ways of working, making it difficult for individual teams to understand how their data fits in with data from across the organization. This makes customer information almost impossible to track in a cohesive way, which is why it’s crucial to centralize data and ensure it is constantly updated.
This is one of the reasons why a unified Oracle stack is so attractive. The performance, speed, and cost savings of Oracle Engineered Systems and the cloud are great, but it is the consolidation, standardization, and security from chip to cloud that make complying with regulations like PCI-DSS and GDPR so much easier.

Step 3: Build in data transparency

Once you have a solid grip on your data and data-related processes, the next step is to facilitate the exchange of information between teams. Teams like customer service and sales draw on more customer data from more touchpoints than ever before to help personalize products or services, but this also means the information they collect is spread thinly across the organization. To gain a more accurate view of their data, organizations need to integrate their systems and processes so every team has access to the data it needs.

Step 4: Choose consistency and simplicity over breadth

With businesses collecting such large volumes of data at such a rapid rate, complexity quickly becomes the enemy of governance. Rather than opting for a breadth of technologies to manage this information, your business may want to consider using a single system that sits across the organization and makes data management simple. Cloud-based applications are well suited to this end, as they allow businesses to centralize both data and data-driven processes, making it easier to track where and how information is being used at all times. As I mentioned before, consolidating your Oracle Database infrastructure onto Oracle Engineered Systems like Oracle Exadata delivers the standardization and security needed to help comply with new regulations like GDPR and beyond. With exact equivalents in the cloud, Exadata allows customers to get their systems into compliance today while still keeping an eye on the demands of tomorrow.
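In miniature, the centralization idea above comes down to folding records scattered across systems into one view, keyed on a common identifier. A hypothetical sketch (the source systems and field names are invented; here a normalized email serves as the common key):

```python
# Toy consolidation of customer records from several systems into one
# record per customer, keyed on a normalized email address.

def normalize(email: str) -> str:
    return email.strip().lower()

def consolidate(*sources):
    """Merge per-system records into one dict per common identifier."""
    unified = {}
    for source in sources:
        for record in source:
            key = normalize(record["email"])
            unified.setdefault(key, {}).update(
                {k: v for k, v in record.items() if k != "email"}
            )
    return unified

crm = [{"email": "Ada@Example.com", "name": "Ada Lovelace"}]
web = [{"email": "ada@example.com ", "last_visit": "2018-05-20"}]

customers = consolidate(crm, web)
assert customers == {
    "ada@example.com": {"name": "Ada Lovelace", "last_visit": "2018-05-20"}
}
```

The hard part in practice is exactly what the CMO post above describes: real systems rarely share a clean common identifier, which is why record matching and cleanup can take years.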
Step 5: Put data protection front of mind for employees

New technologies can only go so far in making an organization GDPR compliant. As ever, change comes down to employees, culture and processes. Data protection must be baked into the organization’s DNA, from decisions made in the boardroom down to the way service teams interact with customers.

Much of the focus around GDPR has been on the cost organizations will incur if their data ends up in the wrong hands, but it’s worth remembering that above all else the law requires them to show they have the people, processes and technologies in place to protect their information. By following these simple steps, organizations can put themselves in a better position to take control of their data.

Learn more about how Oracle solutions like Oracle Engineered Systems can help support your response to GDPR.
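As a technical footnote to the protection theme in the steps above: pseudonymization is one protective measure GDPR explicitly encourages alongside encryption. A minimal sketch (hypothetical, not an Oracle feature; the key handling is simplified and would live in a secrets vault in practice):

```python
import hashlib
import hmac

# Toy pseudonymization: replace direct identifiers with a keyed hash so
# downstream analytics can still join records on a stable token while
# the raw PID stays out of those systems.

SECRET_KEY = b"rotate-me-and-keep-in-a-vault"  # assumption: managed elsewhere

def pseudonymize(value: str) -> str:
    """Map an identifier to a stable, non-reversible keyed token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def protect_record(record: dict, pid_fields=("name", "email")) -> dict:
    """Return a copy of the record with PID fields pseudonymized."""
    return {
        k: pseudonymize(v) if k in pid_fields else v
        for k, v in record.items()
    }

record = {"name": "Ada Lovelace", "email": "ada@example.com", "country": "UK"}
safe = protect_record(record)

assert safe["name"] != "Ada Lovelace"   # raw identifier is gone
assert safe["country"] == "UK"          # non-PID fields pass through
```

Because the same input always maps to the same token, joins and counts still work on the protected data; unlike plain hashing, the secret key prevents trivial dictionary reversal.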


How to Build a Digital Business Model

Many companies understand the opportunities presented by digital technologies but lack a common language or framework to transform their organizations. Through extensive interviews and surveys, researchers at the MIT Sloan Center for Information Systems Research (CISR) have developed a framework to guide thinking about digital business models. The framework focuses on business design considerations: how much revenue is under threat from digital disruption, and whether the company is focused on transactions or on building a network of relationships to meet customers’ life-event needs.

CISR analyzed 144 business transformation initiatives to determine the underlying factors that drive next-generation business models, and found two key dimensions:

Customer knowledge. Many companies are launching products and initiatives to learn more about their end customers.

Business design. Many firms are striving to shift from value chains to networks or ecosystems.

CISR took these two dimensions and created a two-by-two matrix that highlights the business models that will be important in the next five to seven years, and beyond. No two organizations transform in exactly the same way, because there is no one-size-fits-all approach to building digital business models. For organizations developing new digital business models, the research suggests that answering these four key questions is a good starting point:

How much revenue is under threat from digital disruption? It is important to think beyond traditional competitors. What parts of your value chain or business might be attractive to another company?

Is the business at a fork in the road? Key decisions include whether to focus on transactions and become an efficiency play, or meet customers’ life events and build a network of relationships. Investments must be driven by what the company is great at.
What are the buying options for the future? Moving a company’s business model is the equivalent of buying options. One path is to buy an option that helps the company evolve a little bit at a time.

What is your digital business model? Woerner, one of the CISR researchers, recommends focusing on the business model you want to become. It is important to know where you want to go as a company.

Curious about the framework and Woerner's research? Join this Harvard Business Review webinar on Wednesday, May 30th to hear Woerner speak live with Oracle and share her research findings and insights about digital business models. http://ora.cl/oe8SH



GDPR: Too late? Too complicated? Too flexible? Don’t panic.

‘GDPR is coming tomorrow!’ The Wall Street Journal reported today that as many as 60% to 85% of companies say they don’t expect to be in full compliance by Friday’s deadline. Suggested reasons include businesses weighing the cost of compliance against the cost of non-compliance and deciding to accept the risk, while others will simply fail to get their affairs in order in time.

So, as we approach the deadline, what's next? A great many organizations will be compliant and should find their preparations stand them in good stead. But what about those organizations that miss the deadline tomorrow, whether by delay or by design? Should they start panicking now? Should they throw resources and money at the problem in the hope of scrambling over the finish line at the eleventh hour? Or is it now riskier to rush a response than to miss the deadline but have a deliverable approach in place that demonstrates a commitment to compliance?

If businesses are rushing to compliance, what should they be prioritizing? Part of the problem in answering that question is that the regulation itself doesn’t provide a convenient tick-box guide to compliance. Lori Wizdo, principal analyst at Forrester, has written: “The GDPR is a comprehensive piece of legislation. But even at 261 pages long, with 99 articles, [it] doesn’t provide a lot of specificity.” Wizdo was writing for B2B marketers, but the conclusion is the same for all parties: “In practice this renders the GDPR more flexible than traditional ‘command and control’ frameworks.”

This conclusion is right, of course, but if you’re asking, in a panic, what constitutes best-practice compliance, “it’s flexible” isn’t necessarily the answer you’re looking for. All the more reason to stop panicking, pause, and consider an appropriate response. If an organization has only now decided it needs to address GDPR, then the one thing it cannot change is when it started.
Rather than wishing they could turn the clocks back, organizations should focus on clearly understanding what they want to achieve and how best to go about it. For example, within GDPR there is a clear focus on security and data protection, but organizations should not develop tunnel vision for those objectives alone. In our recent series on the future of IT infrastructure and the telecommunications industry, we suggested that following PCI-DSS guidelines can get businesses closer to GDPR compliance, so that is a great first step.

“A panicked response to GDPR, which focuses almost exclusively on data protection and security, distorts an organization’s data and analytics program and strategy. Don’t lose sight of the fact that implementing GDPR consent requirements is an opportunity for an organization to acquire flexible rights to use and share data while maximizing business value," says Lydia Clougherty Jones, Research Director at Gartner.

Flexibility again, but this time as a benefit to organizations trying to come to terms with GDPR. And this points to an issue, and an inherent contradiction, at the heart of GDPR. The same regulation can be seen as an unwelcome overhead that some organizations try to avoid, put off, or weigh up and dismiss, or it can be seen as an opportunity to modernize and create a data-driven business that also carries less risk. While organizations may not be able to change when they started the process, each remains in control of how effectively it responds. One of the first steps is to educate yourself before you rush into any hasty decisions.



Cognizant Guest Blog: Supercharged JIT and How Technology Boosts Benefits

Just-in-time manufacturing (JIT) strategies date back to the 1980s, and manufacturers today continue to embrace JIT as they navigate a fast-changing business and technology landscape. This kind of staying power raises an obvious question: How has JIT adapted and evolved to be as useful today as it was 30 years ago? We recently discussed this question with two experts on modern manufacturing technology: Vinoth Balakrishnan, Associate Director at Cognizant Technology Solutions, and Subhendu Datta Bhowmik, Senior Solution Architect at Cognizant. Their insights reveal the critical role that cloud infrastructure plays in creating a new generation of high-performing, “supercharged” JIT manufacturing organizations.

JIT has a pedigree that dates back to the 1980s. Why do modern manufacturing organizations continue to embrace JIT strategies?

Balakrishnan: The key to understanding JIT is to realize that it is not just a functionality or feature—it is an organization-wide discipline. In addition, there are two distinct pillars of a JIT strategy: one that is focused on organizational and process issues, and another that is more technology-focused. It is the organizational/process pillar of JIT that keeps it relevant even as technology evolves and changes. This is especially true for continuous improvement (CI), which is a core element of any modern JIT strategy. This is a concept that rises above shifting technology and business trends—giving manufacturers a proven and scalable model for building agile, efficient, and highly competitive operations.

Of course, technology plays an important role in JIT, which excels at combining established practices with modern technology innovation. This versatility allows JIT to adapt readily to new manufacturing challenges and competitive pressures, and to meet the demands of global, multi-plant operations with very complex supply chains.
This combination also leads to what we think of as “supercharged” JIT strategies that unlock new just-in-time benefits and capabilities. Technology innovation is transforming JIT into a truly frictionless materials-replenishment loop—one that shifts from manual to automated processes, and that enables supply chains linking hundreds or even thousands of companies via strings of real-time, fully automated transactions. Another way to think of this transformation is to imagine a supply chain that replaces material with information. When you can share reliable, real-time information up and down any supply chain, you enable huge efficiency gains and drastic cuts in waste and misallocated resources. These benefits are relevant to all types of manufacturers, by the way, but they are especially important in industries where we see the most complex supply chains and the greatest scalability challenges—for example, the aerospace and automotive industries.

Can you discuss a few areas where you have already seen technology innovation combine with JIT strategies to deliver game-changing benefits?

Bhowmik: Two examples come immediately to mind. First, the Industrial Internet of Things (IIoT) has enabled major speed, efficiency, and accuracy gains in key JIT manufacturing practices. The IIoT leverages its core capabilities—machine-to-machine communication and real-time data flows—to elevate JIT performance. Manufacturers gain real-time visibility into manufacturing processes and performance, and they are able to adjust and improve manufacturing processes on the fly. Value stream mapping—an exercise that identifies waste in a manufacturing process stream—illustrates the value of combining the IIoT with JIT activities. Value stream mapping was previously a manual exercise using individual observations and pencil-and-paper notes.
The IIoT enables real-time, fully automated value stream mapping—a much faster and more accurate approach—and allows manufacturers to fix problems on the spot. Second, cloud services are fueling a transformation in JIT capabilities and performance. One of the best examples involves supply chain management—an area where manufacturers face major challenges dealing with application and data integration, scalability, and complexity, among many others. Cloud services allow manufacturers to solve many of these issues by defining a common information-exchange framework—one in which each supplier represents a node in a virtual supply chain. This framework allows manufacturers to adapt and adjust in real time to shifts in demand, supply chain disruptions, time-to-market requirements, and other potential risks to JIT performance.

Looking ahead, which emerging technologies are most likely to have a similar impact on JIT capabilities and performance?

Balakrishnan: Assuming a reasonable time frame—let’s say five years—I would look first at intelligent process automation (IPA). IPA has implications for JIT manufacturing when it combines existing approaches to process automation with cutting-edge machine learning techniques. The resulting IPA applications can learn and adapt to new situations—a key to combining process automation with continuous improvement. Distributed ledger technology—also known as blockchain—is another important area of innovation. Blockchain has the potential to enable “frictionless” transactions that minimize cost, errors, and business risk, and some firms are already using blockchain to create private trading networks within their enterprise supply chains.

Continuous improvement remains a pillar of a modern JIT strategy. Does CI present any special challenges or opportunities related to technology innovation?

Bhowmik: I think it’s important to answer a question like this one by restating—first and foremost—that JIT is a technology-independent concept.
Certainly, this is true of Kanban, 5S, and other CI methodologies that play a role in JIT strategy. These concepts have proven staying power and rely on timeless principles—qualities that make them even more valuable as strategic tools. At the same time, it’s important to understand that “technology independent” doesn’t mean “technology free.” Instead, it means that manufacturers are free to choose the right technology that complements a chosen CI methodology and meets their business needs. Fortunately, it is very easy to find examples that illustrate this point. Perhaps the most useful of these involves the ability to shift from physical Kanban cards to “eKanban” signaling systems. These rely on IIoT machine-to-machine communications and data flows to track the movement of materials; to distribute and route Kanban signals; and to integrate Kanban systems with ERP and other enterprise applications. eKanban systems based on IIoT capabilities are fully automated, and they scale to accommodate global manufacturing organizations of any size. They virtually eliminate the risk of manual entry errors and lost cards. Technology doesn’t change the principles that make Kanban useful, but it does radically improve your ability to apply those principles. For a second example, consider the role that machine learning and artificial intelligence can play in upgrading the IT security measures protecting your JIT manufacturing infrastructure. If a cyberattack stops the flow of eKanban signals, it can also stop your manufacturing processes. The benefits of eKanban are real and incredibly valuable—and it’s worth protecting those benefits with appropriate security technology choices.

These examples are a great lead-in to our final question: How can manufacturers set themselves up for success with their own “supercharged” JIT strategies?
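As an aside, the eKanban signaling loop described above is simple to picture in code. The sketch below is purely illustrative (the class name, part number, and quantities are all hypothetical, not taken from any real eKanban product): when sensor-reported consumption drops a bin to its reorder point, an electronic replenishment signal fires in place of a physical card.

```python
# Illustrative eKanban sketch: an electronic signal replaces a physical card
# when on-hand quantity reaches the reorder point. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EKanbanBin:
    part_no: str
    on_hand: int
    reorder_point: int
    lot_size: int
    signal_handler: Callable[[dict], None]
    _open_signal: bool = field(default=False, repr=False)

    def consume(self, qty: int) -> None:
        """Record material consumption reported by an IIoT sensor."""
        self.on_hand = max(0, self.on_hand - qty)
        if self.on_hand <= self.reorder_point and not self._open_signal:
            self._open_signal = True  # one open signal per replenishment cycle
            self.signal_handler({
                "part_no": self.part_no,
                "replenish_qty": self.lot_size,
            })

    def replenish(self, qty: int) -> None:
        """Close the loop when the supplier delivers."""
        self.on_hand += qty
        self._open_signal = False

signals: List[dict] = []
bin_ = EKanbanBin("BRKT-42", on_hand=10, reorder_point=4, lot_size=20,
                  signal_handler=signals.append)
bin_.consume(3)   # on_hand = 7, above reorder point, no signal
bin_.consume(4)   # on_hand = 3 <= 4, signal fires
bin_.consume(2)   # still below reorder point, but no duplicate signal
```

In a production system the `signal_handler` would publish to a message bus or ERP integration rather than append to a list, but the loop itself is this simple.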
Balakrishnan: My first piece of advice would be to partner with an integrator, or another source of expert advice and technology services. I realize this sounds like self-serving advice coming from a technology integrator. Nevertheless, it’s a valid recommendation, given the sheer number of technology options available to manufacturers. Most JIT-related technology initiatives, however, are built on the same foundation: cloud-ready infrastructure. It’s very important to understand what it means to be “cloud ready,” especially in a manufacturing context. First, a cloud-ready infrastructure must support easy and efficient integration of the infrastructure (IaaS), platform (PaaS), and application (SaaS) layers of a manufacturing technology stack. It must also facilitate integration with other systems—within and outside of the enterprise—and support interoperability standards such as service-oriented architectures. Second, cloud-ready infrastructure must offer a level of availability that is suitable for business-critical applications. Third, it must support Big Data applications—ingesting, storing, managing, and processing massive quantities of manufacturing and IIoT data. Next, it must be highly scalable—enabling fast and economical hardware upgrades, and adding capacity without adding cost and risk at the same rate. Finally, cost is always a concern. The most common way to control costs is to use commodity hardware optimized specifically for a cloud-ready manufacturing technology stack.

Bhowmik: We’ve had a great deal of experience assessing and implementing cloud infrastructure solutions, of course, and we find that Oracle Exadata does the best job of satisfying these requirements. This is largely due to Oracle’s use of engineered systems: pre-integrated, fully optimized hardware-software pairings that incorporate the company’s expertise and experience building cloud-ready systems for the manufacturing industry.
Oracle Exadata meets our scalability, security, availability, and cost requirements, and it performs exceptionally well in Big Data and IIoT environments. As a result, Oracle Exadata remains our first choice for building cloud-ready infrastructure solutions for our manufacturing clients.

About the Authors

Vinoth Balakrishnan is a CPIM (supply chain)-certified, Six Sigma Black Belt (ASQ)-certified, and Total Productive Maintenance (Japan)-certified Oracle Manufacturing, Supply and Demand Planning Architect with 16+ years of experience in the manufacturing, supply chain, and ERP domains in the U.S., Europe, and Asia. He leads the Oracle VCP/OTM practice at Cognizant.

Subhendu Datta Bhowmik is a CSCP (supply chain)-certified, IoT (Internet of Things)-certified, and Machine Learning (Stanford)-certified Oracle Solution Architect with 20 years of Oracle experience in large program management, supply chain management, product development lifecycle, and digital transformation. At Cognizant, he works on all Oracle Digital Transformation initiatives.


The Transformative Power of Blockchain: How Will It Affect Your Enterprise?

Today's blog post is a Q&A session with blockchain expert and founder of Datafloq, Mark van Rijmenam. According to the World Economic Forum, 10 percent of the global gross domestic product (GDP) will be stored in blockchain technology by 2027. It has major implications in every industry: impacting how businesses are governed, how transactions are handled, and who owns the data they produce. To understand how blockchain technology is specifically impacting the enterprise, we spoke with Mark van Rijmenam, widely considered one of the most influential blockchain experts. Van Rijmenam is the founder of dscvr.it and Datafloq, a faculty member at the Blockchain Research Institute, and author of the best-selling book Think Bigger. His book on blockchain for social good, Blockchain: Transforming Your Business and Our World, is due out this August in English and Chinese.

Because our readers may have different definitions of blockchain, how do you define it?

A blockchain is a database in which data can be written and read but not edited; as a result, you get data which is immutable. In addition, hash algorithms make data verifiable, so that you can see it has not been changed. The data on a blockchain also gets timestamped, so that it is traceable throughout the entire ledger. The three characteristics of data within a blockchain—immutable, verifiable, and traceable—really change the game. They enable us to create all kinds of new applications that weren’t possible before.

Can you talk a little bit about what kind of infrastructure would be needed to support blockchain projects?

From an infrastructure perspective, it all depends on the needs of the industry. In banking, for example, one of the challenges at the moment is that blockchains don’t have high transaction speeds.
If you want to have a blockchain among a couple of banks, the institutions need to be able to perform high-speed transactions, and the bitcoin blockchain, for example, is not capable of doing that.

Is there anything enterprises can do to put a technology foundation in place that would specifically support blockchain projects?

We have to find solutions for regulations which prohibit putting data in the cloud. That’s basically the same as the GDPR, which will affect enterprises all over the world. You could say that GDPR is in conflict with blockchain, because it requires you to be able to delete your data and, on a blockchain, you can’t. From an enterprise perspective, I think what’s important for organizations to remember is that blockchain technology is a means to an end. Not every problem requires blockchain. It’s predominantly useful when we collaborate with different entities. When there’s a transaction going on, and there’s a trust issue, that’s when blockchain comes into play. Organizations should look at the technology from that angle to see if it’s relevant to them. Where the problem is processing and analyzing huge quantities of data, blockchain is probably not the answer but, rather, a specific big data solution.

What are some examples of when blockchain technology would be appropriate?

For example, in the healthcare industry, all of a sudden we can have tokenized patient records on the blockchain. Because data on the blockchain is immutable, verifiable, and traceable, we know the records are true and correct. More importantly, these records will be owned by the patients in the future, and they will determine what or who gets access to the data and whether the organization needs to pay for access or not. In the financial industry, you can track and trace where certain money came from. Similarly, in the supply chain industry, you have access to details about shipments.
For instance, if products were moved by ship, you can be certain that the temperature in the container stayed below four degrees Celsius. And if it went above that temperature, you automatically pay less because the products might be damaged. Provenance is an important use case for blockchain, which can result in increased transparency, for example, in the retail industry. You see the same applications of blockchain technology in the finance industry applied to the healthcare industry or the retail industry, and so on. For example, you can trust people or organizations a lot more. The fact that you now can trace the life of a product or data creates transparency and trust, because you know that it was owned by someone at a certain point and that it has not been changed. For every application, there are different companies in each industry working on them, and I think that’s fascinating to see. It’s also amazing how people are collaborating on all these new applications and basically reinventing society.

What do you think the role of IT professionals should be as they’re trying to build out their strategy?

Blockchain is a new technology that all IT professionals need to understand, and cryptography plays a massive role in that. It really depends on what kind of background you have, but you need to be familiar with the different technologies that are being developed, and they’re growing rapidly at the moment. From a data scientist perspective, suddenly you have to work very differently with data to analyze it and use it. From a developer perspective, you should be aware of what’s happening within this world and be ready to change course.

Is there a tie between big data and blockchain?

Yes. In fact, I recently wrote a white paper on the convergence of big data and blockchain. The main thing that will change is who owns data, moving from centralized organizations that are using the data to the person or organization that created the data.
Within this new paradigm, in order to use and analyze data, you need a different type of consent. So the Facebook and Cambridge Analytica problems that we experienced recently will hopefully disappear when the data from social media platforms is owned by the end user, who then determines what others can have access to. For example, users might offer a platform access to certain pieces of data for free and require a platform to pay for access to other types of data.

Blockchain technology has some real implications for governance. What does that mean for enterprises?

With blockchain, governance is embedded in the code, thanks to smart contracts. For example, if you fund an organization and certain milestones are hit, extra funds become available. That can be done in any type of organization. And you have to ensure these contracts are done correctly because, once on the blockchain, they become immutable. You can change parameters, but you can’t change the contract itself. So, from this perspective, blockchain has a real possibility to make governance more transparent.

Are there other future applications for blockchain that are just wild ideas right now, but may actually come to fruition in the not-so-distant future?

One area I’d like to focus on is the use of blockchain for social good. Especially at the moment, a lot of people think of cryptocurrencies as criminal and blockchain as financial, but blockchain also has tremendous possibilities to be used for good. For instance, there are about 1.5 billion people who don’t have an official government-issued identity and 2 billion people who are unbanked. These are problems that blockchain can help solve. If you don’t have an identity, you can’t get a bank account. If you don’t have a bank account, you can’t get a loan or prove the ownership of your house, and you remain in poverty.
We can use the technology to help remove poverty, reduce the effects of climate change, get rid of fraud, improve fair trade, and improve democratic systems. Yes, blockchain will improve our business, but it will also improve our world if we use it correctly, and that is exactly what my next book is all about.

Learn how Oracle’s new platform-as-a-service (PaaS) offering is allowing companies to develop new applications using blockchain technology.
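Van Rijmenam's three properties of blockchain data (immutable, verifiable, and traceable) all fall out of two simple mechanisms: hashing each block's contents and chaining each block to the hash of its predecessor. The sketch below is purely illustrative, a toy hash chain rather than a real blockchain (which adds distribution, consensus, and much more); all function names and data are hypothetical.

```python
# Toy hash chain illustrating immutable, verifiable, traceable data.
# Illustrative only; not a real blockchain implementation.
import hashlib
import json

def _hash(block: dict) -> str:
    # Deterministic serialization so the same block always yields one digest.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str, timestamp: float) -> None:
    chain.append({
        "data": data,
        "timestamp": timestamp,                         # traceable
        "prev_hash": _hash(chain[-1]) if chain else "0" * 64,
    })

def verify(chain: list) -> bool:
    # Verifiable: editing any earlier block breaks every later prev_hash link.
    return all(chain[i]["prev_hash"] == _hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, "patient record v1", timestamp=1_700_000_000.0)
append_block(chain, "patient record v2", timestamp=1_700_000_060.0)
assert verify(chain)

tampered = [dict(b) for b in chain]
tampered[0]["data"] = "edited"   # "immutability" here means tampering is detectable
assert not verify(tampered)
```

Note that immutability in this sense is detectability: nothing physically prevents an edit, but any edit invalidates the chain for every verifier downstream.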


Engineered Systems

Oracle Database Appliance: Simplicity and Performance Go Hand-in-Hand

Financial transactions are an essential part of life. For retail bank customers, paying monthly bills online helps avoid late fees. For business owners, rapidly processing customer payments keeps the cash flowing. For investors, buying or selling a perfectly priced security helps keep portfolio objectives on target. Given the importance of such matters, seamless service and access to real-time data are critical. Indeed, when a lapse in data access occurs, the impact on a financial service company’s bottom line can be significant. A Ponemon Institute study estimated that the average cost of an unplanned data center outage in the financial services industry neared $1 million, encompassing:

- Damaged or lost data
- Reduced productivity
- Detection and remediation costs
- Legal and regulatory headaches
- Tarnished reputation and brand

Downtime-related risks are significant for small and large financial service providers alike. Fortunately, building the infrastructure to help ensure high-availability data access can be more budget-friendly than you think.

Customer connections multiplying

Fintech firms are leading the way in developing individual relationships with their customers, according to EY’s 2017 FinTech report. EY found that a third of digitally active consumers in 20 markets around the world use fintech services, and it projects that usage will exceed 50% in the coming years. Traditional financial services companies are now moving aggressively to catch up to and get ahead of these nimble industry disrupters. Interestingly, even as digital channels explode, EY’s 2017 Global Consumer Banking Survey found that between 60% and 100% of retail banking customers worldwide still visit local branches. While delivery platforms vary from cutting edge to old school, the foundation of all financial services remains data: real-time insights into information such as balances, transactions, and rates, accessible at any time of day.
Yet collecting, managing, and analyzing that data must be balanced with controlling costs and sustaining profit margins.

Keeping it simple

High-end, sophisticated database technology is great, but sometimes isn’t a fit from a cost or business perspective. For example, a large financial service company may operate a broad network of remote or branch offices with small-business-type needs, while a smaller firm may contend with a tight budget and limited resources. Increasingly, however, financial service providers have found that the Oracle Database Appliance (ODA) offers a streamlined, cost-effective approach to data management. This purpose-built system is optimized for Oracle Database and Oracle applications, and it can be configured and deployed in 30 minutes or less. Engineered to grow with a firm’s database needs, it leverages standardized, proven configurations that don’t require specialists or a team of installers. Plus, the Oracle Database Appliance eases budget concerns, as clients only license the CPU cores they need (up to a robust 72).

Certainty in an uncertain world

Underlying the simplicity and cost effectiveness of the ODA is Oracle’s tradition of reliability and durability. Full redundancy and high availability allow data to be accessed 24/7 while protecting databases from both planned and unplanned downtime. Designed to eliminate any single point of failure, the system also reduces the attack surface with a single-system patch feature. For high-availability solutions, the Oracle Database Appliance may be paired with Oracle Real Application Clusters, Oracle Active Data Guard, and Oracle GoldenGate.

Built-in flexibility

The Oracle Database Appliance works seamlessly with the Oracle Exadata Database Machine to provide unlimited scalability as businesses grow. Better suited to large enterprises, the Exadata system is simply too powerful for some situations.
For example, a small and growing financial services company may not need the full Exadata solution at this stage of the business—or have the internal resources to support it. Similarly, for a multinational bank that employs Exadata at a macro level, a new office or branch may have modest database needs as it builds a local footprint. The Oracle Database Appliance is ideal in both situations. Additionally, in the latter case, the branch-level installation will fully integrate with the Exadata system housed at any regional or international base. The two systems were designed to be complementary, with smooth data movement between connected databases and the cloud as well. Ultimately, Exadata has its place, but with the Oracle Database Appliance, you aren’t forced to take on the complexity and cost if it doesn’t fit.

Customer success story: Yuanta Securities keeps it real-time

Taiwan-based Yuanta Securities Company is an investment banking firm that provides assorted brokerage and other investment services across a 176-branch network. To realize the benefits of its merger with Polaris Securities, a popular transaction platform operator, Yuanta Securities needed to ensure seamless, real-time data synchronization between the two firms’ distinct transaction systems without disrupting the customer experience. In addition, it sought to consolidate six databases into a single platform, simplify system management, and rely upon a single support vendor. To tackle these challenges, Yuanta Securities deployed three Oracle Database Appliance units—one for its production site, a second for its disaster recovery site, and the third for development and testing. While one Oracle Database Appliance unit required just three hours for installation and configuration, the entire implementation, which included Oracle GoldenGate and Oracle Active Data Guard, was live within 45 days.
The disruption to customer transactions was minimal as the company achieved near-real-time, back-end data synchronization with GoldenGate. Furthermore, Yuanta Securities slashed its hardware costs by 70% and saved on licensing costs due to Oracle Database Appliance’s flexible, capacity-on-demand licensing model.

Customer success story: Coopenae grows full-speed ahead

Costa Rica-based Coopenae is a credit union that serves 100,000 members through 27 locations nationwide. Founded in 1966, the cooperative offers a full array of financial services aimed at meeting the financial needs of its members and their families and communities. Coinciding with Coopenae’s 50th anniversary, management modernized the company’s systems environment to address existing challenges as well as prepare for future opportunities. Key requirements of the upgrade included:

- Accelerated batch processing times that didn’t affect other business-critical applications such as funds management
- A highly efficient and scalable engineered system
- A high-performing server-virtualization environment featuring a simplified, cost-effective, single-vendor support approach

Oracle Database Appliance fit the bill on all fronts, along with redundant databases, servers, storage, and networking. In turn, Coopenae reported that its database performance improved three-fold, financial statements and other reports were generated five times faster, and monthly closing processing time dropped from six hours to two hours.

A smart way to fulfill your database needs

As Yuanta Securities and Coopenae discovered, always-on, high-performing database technology doesn’t have to break the bank. Nor does it require debilitating deployment times or complicated support requirements. Instead, the Oracle Database Appliance offers a simple path to improved data performance and the adaptability to align with growing business needs.


Going Boldly into the Brave New World of Digital Transformation

Today's blog post is a Q&A session with influencer, internationally recognized analyst, and founder of CXOTalk, Michael Krigsman. Digital innovation disrupts industries and fundamentally changes the way we do business. For instance, in 2018, IT-as-a-service comprised more than a third of IT spending, and Gartner predicts that by 2020, artificial intelligence (AI) will create more jobs than it eliminates. To Michael Krigsman, internationally recognized analyst and founder of CXOTalk, this upheaval means big changes for IT. The way we approach business today, he says, is being turned on its head by new demands from internal and external customers. We’re at a crossroads where innovative technologies and new business models are overtaking traditional approaches, creating significant pressure and challenges for tech infrastructure and the people who manage it. We invited Michael to share more insights about digital transformation, especially how it’s impacting IT infrastructure and roles within IT departments. And he knows his way around the topic; he’s written more than 1,000 articles on tech innovation, has been named the #1 CIO influencer among industry analysts, and ranks among the top four most mentioned IT leaders on Twitter.

Let’s start with the big picture. Can you talk a little bit about the dominant drivers in digital transformation today?

Digital transformation refers to the way an organization responds to changes in its environment. Consumer expectations have changed. Competition has changed. Startups with new business models are challenging established companies in significant ways. And so, digital transformation is an organization's response to what is going on in and around its environment, including the internal changes the company must make. For established organizations, these changes often mean dismantling departmental silos and transforming how staff communicate and share information.
The goal is developing a new relationship with customers and enabling the organization to speak with a single voice. We see varying degrees of digital transformation by industry and company size. In some industries, like retail, external circumstances have forced companies to make dramatic changes. A number of retailers have done it successfully, but others have been slow to respond and, therefore, are really suffering. Look at Toys R Us as an example of a large retailer going out of business.

To get a little more specific, how does AI play into digital transformation?

I recently discussed this question with Paul Daugherty, Chief Innovation and Technology Officer at Accenture. I asked him what's different about AI today. Paul said that you need three things to take full advantage of AI: computing power, lots of data, and meaningful algorithms. As processing speeds increase and the amount of data we collect increases exponentially, the algorithms we create will produce increasingly rich predictions. So, as we digitally transform and collect more data, we accelerate the impact of AI. For example, in marketing, we see personalized offers all the time, such as when an eCommerce site suggests product recommendations. If AI is done right, the offer makes sense and you say, “Oh, that’s interesting. I should look at that.” In another dramatic example, we see important uses of AI in medicine. I recently interviewed one of the pioneers of augmented and virtual reality, who is using these new technologies in surgery and telemedicine to provide access to healthcare and medical education in parts of the world that have traditionally been underserved due to their location and economics.

What is happening in IT, and what do people in IT need in terms of infrastructure to support digital transformation?

The obvious answer is that there’s a set of enabling technologies. But the real question is: Does this technology live on-premises or in the cloud?
Depending on where that data lives, it's going to require different skill sets. If you're building these resources in-house, then you're going to need infrastructure people to build it, manage it, and run it. However, we know that the broader trend is for companies to outsource significant parts of their computing to the cloud, and fewer database administrators are needed in-house. This development leads to an important question for somebody that works in this area of IT: “What should I do?" To answer this question, we need to remember that digital transformation is driven by a relentless focus on the customer; so that's a great place to start. To align more closely with both internal and external customers, IT must become more agile and adaptable than ever before. The bottom line is that IT folks need to develop an entirely new mentality, and that's difficult for many people.

How do they transition into a different role and a different mindset?

There’s no magic bullet. There are going to be people who can’t do that—they just don't have the emotional flexibility. Change is both an emotional and psychological response to the world: I do things one way today, but I need to do things differently tomorrow. The best companies will help guide, mentor, and provide training. For example, I’ve been looking at “citizen development” lately, which is a set of development tools that experienced end users can use to do basic automation. It’s great because it pushes the workload for basic application functions onto the users, which frees up IT’s time, and the users get exactly what they want. You could use folks who already have a technology background to support users’ citizen development projects. That's just one example of repurposing skills to a higher-value function. Instead of doing system or database administration, they're now supporting people. Anytime you have somebody helping business users achieve their goals more directly, that’s going to be a higher-value activity.
We’ve talked a little about it already, but what are the risks to IT from digital transformation?

The real risk for IT is one of relevancy. If users are going around IT and buying all kinds of solutions, what does that say about IT? There’s a big message that IT isn’t giving users what they need. The risk is that if IT is not sufficiently agile, or they don't involve users in the decision-making process, then users may not get what they want. That’s how IT becomes marginalized.

Then how is the role of IT evolving?

When end users buy and use their own tools and systems, things like data integration and security remain the ultimate province of IT. A departmental user who’s buying a SaaS tool most likely won’t have the skills or permissions to integrate corporate data into the platform successfully. So, data integration is certainly one of the key roles that IT needs to retain as its own. As I see it, IT is under pressure because it must simultaneously innovate, sustain operational excellence, and save money—that’s what they’re being told to do. “We need you to keep the lights on, keep the systems running. We want you to innovate and, at the same time, cut your budget and do all this magic with less money than you did before.” These three goals are in conflict and almost mutually exclusive, but the modern CIO mandate involves them all.

Because this process of digital transformation is unique for each business, are many of them looking for help determining a roadmap to get there? Can they do it on their own or do they need external guidance?

To begin with, digital transformation projects are really business transformation projects, and we should think about them that way. These projects relate to business models and customer experience; technology is what enables them. The right infrastructure to support these cloud, mobile, AI, and data initiatives can help us deliver more personalized service and a better-quality experience for our customers.
So, get help if you need it, and you probably will. Sometimes companies hire chief digital officers to take responsibility. Just be aware that the goal of the Chief Digital Officer (CDO) should be to eventually make her or his role obsolete. You don’t have born-in-the-cloud companies with CDOs because they're already digital from the ground up. Once a company is sufficiently down the digital path, it should no longer need a CDO. As markets and consumers continue to change, companies may reach a point of conflict between established operational processes and new business models needed to satisfy evolving consumer expectations. Being committed to digital transformation means making investments that may have low ROI in the short term but will pay off nicely in the future. Many companies find that form of innovation to be hard even though these investments are essential. Learn more about how to manage a business through digital transformation from Oracle CEO Safra Catz.  

Today's blog post is a Q&A session with influencer, internationally recognized analyst, and founder of CXOTalk, Michael Krigsman. Digital innovation disrupts industries and fundamentally changes the...


Telcos, Your Future Is Calling: It Wants to Show You What’s Possible

With more than 107,000 visitors from 205 countries or territories, this year’s Mobile World Congress 2018 in Barcelona provided a glimpse into the future of telecommunications. The big news, of course, is the imminent arrival (finally!) of 5G networks that can power exciting new technologies and end-user solutions. The introduction of high-speed, high-volume data networks opens vast new growth opportunities, thanks to the proliferation of data that telcos can harness to develop new customer offerings. No less exciting are the possibilities that go beyond the needs of businesses and consumers to benefit society more broadly. Here’s a look at some of the trends to come out of MWC 2018.

Innovation Is Helping Solve Societal Problems

The rise of smart cities, while still in the initial stages, is a strong indicator of the enormous potential that ubiquitous data flow has to improve the lives of citizens, even if they are not using the technology themselves. This is part of a larger trend of telcos moving up the value chain, beyond simple connectivity and network services to providing platforms, applications, and managed services to their customers. For example, multiple providers, including China Mobile, T-Mobile, Telefonica, and Orange, are working with cities in China, Germany, France, Spain, Portugal, and Brazil to collect air quality data from connected monitors. These carriers then leverage big data and cloud solutions to create predictive and preventative analytics. Korea Telecom is working with the South Korean government to roll out a similar project nationwide. In all, the global air quality monitoring and control market is forecast to be worth about $20 billion in 2021, and the Internet of Things (IoT) and big data are making this possible. A Chinese telco is using a similar scheme to predict outbreaks of avian flu by placing probes in chicken farms.
More Capacity Is Enabling Value-Added Commercial Offerings

On the commercial front, telcos are advancing the continuing convergence between communications and media by looking for more opportunities to deliver content. This may entail creating the content themselves or collaborating with content providers who are looking into merger and acquisition activities (such as Verizon’s $4.5 billion acquisition of Yahoo!, completed in 2017). Here are some examples we saw of telcos using data to provide targeted services:

Turkey’s Turkcell is creating “smart” billboards in high-traffic urban centers that use data gathered from nearby mobile phones to tailor advertisements to the crowd.

Orange is offering a network-connected console that can control the growing number of “smart home” devices, from thermostats to lighting.

Multiple carriers are exploring new services for cars revolving around assisted or autonomous driving.

New Data Protection Regulations Were Top of Mind

Presenters and attendees alike were concerned with the European Union’s General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. Among other stipulations, GDPR requires that organizations holding confidential data must allow individuals to opt in to have their information shared, as well as provide the ability to have their personal data removed from a system, known as the “right to be forgotten.” The good news is that GDPR does not specify how organizations enact controls, but instead focuses on end results. This means that telcos already in compliance with the more prescriptive PCI DSS v3.2 requirements are in an ideal position to transition to full GDPR compliance. Smart telcos will see the rise in privacy regulations less as a challenge and more as an opportunity to provide customers with differentiating products and services.

Telcos Need the Infrastructure to Support Data-Intensive Services

The imminent 5G revolution means more data moving faster than ever before.
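The two GDPR obligations described above, explicit opt-in consent and erasure on request, boil down to a pair of operations on customer data. The following is a minimal, purely illustrative Python sketch of that flow; the names (CustomerRecord, ConsentStore) and the in-memory store are assumptions for illustration only, not any real compliance product or telco system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class CustomerRecord:
    customer_id: str
    email: str
    marketing_opt_in: bool = False            # GDPR: sharing requires an explicit opt-in
    consent_timestamp: Optional[datetime] = None

class ConsentStore:
    """Tracks opt-in consent and supports erasure (the 'right to be forgotten')."""

    def __init__(self) -> None:
        self._records: Dict[str, CustomerRecord] = {}

    def add(self, record: CustomerRecord) -> None:
        self._records[record.customer_id] = record

    def record_opt_in(self, customer_id: str) -> None:
        rec = self._records[customer_id]
        rec.marketing_opt_in = True
        rec.consent_timestamp = datetime.now(timezone.utc)

    def may_share(self, customer_id: str) -> bool:
        rec = self._records.get(customer_id)
        return rec is not None and rec.marketing_opt_in

    def erase(self, customer_id: str) -> bool:
        # Right to be forgotten: remove the subject's personal data entirely.
        return self._records.pop(customer_id, None) is not None

store = ConsentStore()
store.add(CustomerRecord("c-001", "user@example.com"))
assert not store.may_share("c-001")   # no sharing without an explicit opt-in
store.record_opt_in("c-001")
assert store.may_share("c-001")
assert store.erase("c-001")           # personal data removed on request
assert not store.may_share("c-001")
```

The point of the sketch is the regulation's focus on end results: what must be demonstrable is that data is never shared without recorded consent and that erasure actually removes it, however an organization chooses to implement that.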
The great news is that this capability offers vast opportunities for the future of data-intensive services. The challenge? Legacy infrastructure is not only sluggish, it also creates silos that prevent the free flow of data required to maximize value. Cloud allows enterprises to consolidate and gather data in real time from all sources for 360-degree business insights, real-time, personalized customer offerings, and a friction-free customer experience. A cloud-ready platform with integrated infrastructure can maximize efficiency and speed innovation, all while helping to assure regulatory compliance. Take the example of NTT DOCOMO, Japan’s largest mobile service provider: By consolidating its data onto Exadata Database Machines, the company reduced billing processing time by 90% even as it lowered system costs by up to 30%. Imagine the possibilities that such data consolidation, speed, and processing power could bring to other customer-facing applications.

The Future Holds a Mobile World of Possibility

Telcos understand today’s changing market dynamics and are looking for ways to provide more value-added services to customers. They can accomplish this by leveraging the growing volume of data that they collect from users while complying with increasing privacy regulation. The arrival of 5G networks will supply the power and capacity to deliver innovation like smart cities—as long as telcos can rely on powerful, low-latency, reliable, and secure IT infrastructure. Modern back-office operations built on engineered systems, in which all the components are co-engineered to work together for optimal performance, can help telcos answer the call of the future that we glimpsed at MWC.


Welcome to the ‘Self-Driving’ Autonomous Database

Maria Colgan, master product manager, Oracle Database, started her career at Oracle over 20 years ago as a developer. Today, she creates best practices for incorporating Oracle Database into enterprise environments and feeds customer and partner feedback into future product releases. Maria writes for her popular SQLMaria blog and the Oracle Optimizer blog. Maria, thank you for taking the time to talk with us about the evolution from automated databases to autonomous databases. Would you kick it off by briefly describing the trends that are putting pressure on DBAs dealing with traditional database technology? DBAs today have extremely varied roles in their organizations and wear many hats. Specifically, they face tremendous pressure in three areas: time, security, and speed. Time is probably their most critical resource, as businesses want to become more agile. DBAs are frequently asked to provision new systems or extend existing databases in order to accommodate growing volumes of data coming from multiple sources, like sensor data, IoT data, and unstructured data. The DBA is being asked to get this data into a system quickly, so that the business can run analytics on it earlier and get value out of that data faster. There’s also, of course, the pressure to maintain security. No organization wants to be in the news. The DBAs need to stand up these systems quickly, but also securely. The final element that’s putting them under the microscope is this need to move to the cloud. They’re trying to choose the right cloud services to make sure they can ingest the data quickly and access it quickly. How does the automation of databases help address these challenges? The journey to automate databases, especially here at Oracle, started over 20 years ago when we started to automate some of the most common, mundane tasks that a DBA had to do, for example, managing storage space and memory, and collecting statistics from data.
We all understand this process automation, but now we have introduced the term autonomous. What’s the difference? With process automation, what you get is functionality that is done automatically, but the DBA is still free to go in and override or disable any of that automation. When it’s autonomous, we’re looking at much more comprehensive automation. It’s having the more mundane and repetitious operational tasks happen automatically without the DBAs even being able to intervene once they have set the policies for those processes. It’s shifting the balance of power a little bit, but it’s really freeing up the DBAs from having to worry about these monotonous tasks. With automation, the DBAs can write their own scripts to back up the system. But with the Oracle Autonomous Database, we’ll automatically do it. The DBAs will still ultimately control the process because they are responsible for setting the policies for those processes: For instance, they’ll be able to tell the Autonomous Database, “I want the backup to reside here in this particular cloud service. And I want you to retain the backup for X amount of days, weeks, or months.” They’re able to determine that level and intervene at that point, but the actual backup will be a part of the autonomous service and something they won’t need to change or influence. The term "self-driving database" really does fit the Autonomous Database, doesn’t it? We think so. It does the operations that are necessary to keep a system highly available. By making a database self-driving, we remove any of the accidents that may happen by having somebody behind the wheel. Any time you have manual processes, no matter how careful you are, there’s the possibility of human error. By being self-driving, the Autonomous Database will make sure that the database gets backed up on a nightly basis, patched quarterly, and that the standby is automatically available and maintained. All of these tasks just happen for the customer. 
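The split Colgan describes, where the DBA sets the policy but the service executes it without intervention, can be sketched in a few lines. This is a hedged, purely illustrative model: BackupPolicy and AutonomousBackupService are invented names, and the actual Autonomous Database exposes nothing resembling this code.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: the DBA states the policy; the service runs
# backups on its own. These names are not part of any Oracle API.

@dataclass(frozen=True)
class BackupPolicy:
    destination: str      # e.g., a cloud object-storage location (hypothetical)
    retention_days: int   # how long backups are retained

class AutonomousBackupService:
    """Executes nightly backups automatically under a DBA-set policy."""

    def __init__(self, policy: BackupPolicy) -> None:
        self.policy = policy
        self.backups: List[str] = []

    def nightly_backup(self, night: int) -> None:
        # Runs on a fixed schedule; the DBA does not trigger or skip runs.
        self.backups.append(f"backup-{night} -> {self.policy.destination}")
        # Enforce retention by keeping only the most recent N backups.
        self.backups = self.backups[-self.policy.retention_days:]

# The DBA's only job is to state the policy up front.
policy = BackupPolicy(destination="oci://backup-bucket", retention_days=7)
service = AutonomousBackupService(policy)
for night in range(10):            # ten nights pass...
    service.nightly_backup(night)
assert len(service.backups) == 7                 # ...but only 7 are retained
assert service.backups[0].startswith("backup-3")
```

The design point the sketch captures is that policy (where, how long) and execution (when, how) live on opposite sides of a boundary the DBA cannot cross, which is exactly what distinguishes "autonomous" from ordinary automation.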
It really is self-driving in that respect. This self-driving aspect of the Autonomous Database provides this guaranteed level of availability, security, and performance. And one of the things that folks value more than anything when it comes to a database is this predictability. Now that we have a better understanding of what it does, can you describe the major components of the Autonomous Database and how it’s structured? The first piece is the automated infrastructure. That’s the Exadata platform. Of course, we believe that Oracle Exadata is the absolute best platform for the Oracle Database because it’s been completely optimized for that database. It offers us full, high availability just by the way it’s structured. Then, you get Oracle Database 18c, where we’ve got complete automation in most aspects of the database. That gets layered on top of the infrastructure. The final part is the cloud. It gives us this opportunity to automate the operations that keep a database highly available—things like the standby being in a separate region with a third-party observer. All of those operational tasks that are required to keep the lights on in a database can now be fully automated because the database is running in Oracle Cloud, and we have full control of that. Can enterprises get the Autonomous Database on premise? I get that question often. People will ask, "If I download 18c, will I get an Autonomous Database on premise?" I have to say no. It is really the combination of these three things. Our software running on our hardware in our cloud allows us to give you that end-to-end automation and the full autonomous experience. But there is a way to get the Autonomous Database on premise. And that is with Oracle Cloud at Customer. That gives you Oracle’s hardware—our cloud machine and our Exadata system—in your data center, behind your firewall, but completely managed and operated by Oracle for you. 
The functionality is effectively exactly the same as the cloud service, inasmuch as you simply connect, begin loading your data, and get going, rather than having to do all of the management tasks. What are some of the quantitative performance improvements that customers using the Autonomous Database might see? I have a great example of performance improvement as well. We’ve had a number of folks run tests on the Autonomous Data Warehouse, which is the first member of the Autonomous Database family to become publicly available. For example, Accenture wanted to test how quickly they could load large volumes of data. They needed to load 500 million rows of data into the system. In the tests, it took them three minutes on the Autonomous Data Warehouse Cloud. That was about a 14X performance improvement over what they’d seen on some other cloud options. Data analytics is always a key function that enterprises need. What would a solution bundle look like to get this functionality? For data analytics, the bundle would include the Autonomous Data Warehouse integrated with Oracle Analytics Cloud. What you basically want is to have a platform where you can quickly ingest data and instantly begin to run analytics, and the seamless integration of these cloud services gives you that. Analytics are often driven through a dashboard and reports that visualize the data. Oracle Analytics Cloud (OAC) would give you the capability to instantly ask business questions and get the data back in a visual format—that is, in charts and graphs. Having that integrated with the Oracle Autonomous Database means that you can provision your database, load your data, and actually begin running those analytics all within a couple of minutes, rather than days, weeks, or even months. The key benefit is speed. A business might need to make time-sensitive critical decisions about, let’s say, produce that’s fresh and will spoil quickly: “Where should I ship it?
Where is the demand for this produce across the country?” It’s about being able to have all of the information at your fingertips, so you can make business decisions based on real data. Are there specific indications that an enterprise might look at that and say, “The automation that we’ve got is really not enough—we really need to move to Autonomous Database”? Provisioning, patching, and upgrading database systems on-premises is typically a very human resource-intensive set of operations. The Autonomous Database is really the only solution where you can hand off all of that, have Oracle do it automatically for you, and have those patches applied in a rolling fashion, so that the system isn’t down while we’re still running the maintenance. How would you sum up the advantages of Autonomous Database? With all of the time-suck tasks—provisioning a new system, patching, and backing up existing systems—happening automatically in the autonomous database, the enterprise can focus on agility and innovation. We’re going to give businesses back that one commodity that’s most important—time.


Cognizant Guest Blog: Uncovering the Keys to Success in Today’s Retail Industry

Innovation and Integration

For modern retailers, success remains a moving target... Today’s retailers know from experience that there’s no place for complacency in their business plans or technology investments. Our recent four-part series, "Selling the Future," examined the role of IT technology in modern retail. We had a chance to sit down and talk with Hemalatha Vema, Senior Director of ERP and Analytics at Cognizant, who plays a key role in helping retailers stay ahead of the technology innovations and shifts in consumer behavior that continue to define risk and reward in a fast-changing industry. We spoke with Vema about some of the key challenges retailers face today, in terms of understanding the technology landscape, staying ahead of changing consumer needs and expectations, and investing in the right foundational technologies to stay competitive and relevant. Clearly, the relationship between retailers and consumers has changed dramatically over the past decade or so. Where do you see the most important changes happening today, in terms of how consumers interact with retail technology? There are several areas where we see a very dynamic relationship between technology innovation and consumer needs. The first involves the increasingly critical role of omni-channel retail experiences. Consumers today routinely begin the journey to purchase on one digital device but end it with a purchase on a different device—and they often engage with a retailer in still other ways between those two points. This includes a growing tendency to combine elements of digital and in-store experiences—for example, trying out a product in a retail showroom, purchasing the product online, and then picking up a purchase in person. Obviously, in some ways, consumers expect these interactions to be different—that’s why they’re choosing them. At the same time, consumers clearly expect a seamless and intuitive experience across all of these channels and platforms.
A related challenge involves the desire for personalized experiences like relevant product recommendations, targeted promotions, and so on. This combination of personalization and omni-channel can be very powerful, but it also adds another layer of personalization technology—and thus an additional set of integration and data management requirements—to a retailer’s marketing technology stack.     A third focus area involves rapid-fire innovation around voice technology. Voice apps require major front-end changes in terms of user experience and user interface development. Yet voice also requires even bigger changes to back-end systems, which forces retailers to rethink everything from data management to business intelligence and application integration.  Finally, I think retailers must stay ahead of rising consumer expectations around retail supply-chain transparency. Consumers are asking more sophisticated questions about ethical sourcing and eco-friendly manufacturing practices, and we think they are more likely to penalize retailers that are unwilling or unable to supply appropriate answers.   Where are retailers most likely to find technology gaps that prevent them from meeting their buyers’ expectations and needs in these areas?   The single biggest challenge most retailers face involves a lack of real-time inventory insight.  Let’s begin by laying out a fundamental principle. Selling something—anything—to a customer requires the ability to answer two questions: Do you have the item they want, and how much will it cost? “Do you have what I want” is a very simple question, and a customer usually expects a very simple answer. What happens in between, however, is anything but simple. A typical retail inventory may include hundreds of thousands of SKUs spread across hundreds of retail and digital stores, pickup points, warehouses, and supply-chain nodes. 
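A consolidated view that answers those two questions across every location can be sketched as follows. This is an assumption-laden toy (an in-memory dict standing in for live POS, e-commerce, and warehouse feeds), meant only to show why the simple answer requires aggregating over all inventory nodes; InventoryView and the SKU names are hypothetical.

```python
from collections import defaultdict
from typing import Dict, Optional, Tuple

# Illustrative sketch only: a consolidated in-memory inventory view.
# Real systems would integrate live feeds from stores, warehouses,
# pickup points, and supply-chain nodes.

class InventoryView:
    """Answers 'do you have it, and how much?' across all locations."""

    def __init__(self) -> None:
        # sku -> location -> units on hand
        self._stock: Dict[str, Dict[str, int]] = defaultdict(dict)
        self._prices: Dict[str, float] = {}

    def update_stock(self, sku: str, location: str, units: int) -> None:
        self._stock[sku][location] = units

    def set_price(self, sku: str, price: float) -> None:
        self._prices[sku] = price

    def availability(self, sku: str) -> int:
        # Total units summed across every store, warehouse, and pickup point.
        return sum(self._stock.get(sku, {}).values())

    def answer(self, sku: str) -> Tuple[bool, Optional[float]]:
        # The customer's two questions: in stock anywhere? at what price?
        return (self.availability(sku) > 0, self._prices.get(sku))

view = InventoryView()
view.update_stock("SKU-123", "store-berlin", 2)
view.update_stock("SKU-123", "warehouse-east", 40)
view.set_price("SKU-123", 79.99)
assert view.answer("SKU-123") == (True, 79.99)
assert view.answer("SKU-999") == (False, None)
```

The hard part in practice is not the sum itself but keeping every `update_stock` call current when separate physical and digital systems each own a slice of the data, which is exactly the integration problem described above.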
When different applications govern different elements within this environment—for example, separate systems for physical and digital store inventories—it can be a struggle to maintain any level of inventory insight, in real time or otherwise. This is truly a show-stopping issue for a modern retail operation. Without accurate, real-time inventory visibility, most other modern retail initiatives grind to a halt.  Do these challenges related to inventory control and visibility reflect bigger issues around data access and integration? This is often the case. A retailer’s inventory-related issues often turn out to be just one facet of a much bigger organizational challenge related to a retailer’s ability to access, manage, and leverage its data assets. Keep in mind that a typical retailer may run hundreds of IT systems and applications. Most of these serve horizontal functions: marketing, sales, merchandising, logistics, point of sale, and the like. These systems are typically very well adapted for particular retail roles, with specific priorities and needs. When we look at the keys to modern retail success, however, what we see is a need for seamless, friction-free communication across retail functions. Does an in-store pickup point have real-time access to digital purchasing data? Do call center service reps have a complete view of a customer’s purchases and prior service requests? A modern retailer must be able to answer thousands of similar questions every day, and this means breaking out of these horizontal silos to build a truly integrated, data-driven retail organization. I’ll make a couple of related points here. First, the same data-driven capabilities also flow upward. That is, they support truly cross-functional analytics, reporting, and decision-making processes. Second, these capabilities don’t just allow better decision-making and superior customer experiences. They also support a host of improvements in supply-chain control and efficiency. 
A good example involves blockchain, a key technology for building automated, self-governing, highly responsive supplier networks. Blockchain isn’t possible without first breaking down the data silos and eliminating the manual processes that once separated a retailer’s finance, operations, logistics, and other relevant functions. Once you’ve helped a retailer identify the need to adopt a data-driven technology infrastructure, are there next steps you can recommend in terms of selecting and implementing solutions? Of course, as system integrators, our team at Cognizant is extremely focused on adapting solutions to a retailer’s unique business needs, existing technology infrastructure, and other factors. Yet we also recognize that any modern retailer will rely on certain core capabilities. One of these involves the data-driven functionality we discussed previously. Another involves adopting cloud-based infrastructure that is scalable, easy to use, highly available, and ideally suited for app-integration and automation initiatives. On both counts, what we find is that Oracle’s use of engineered systems—for example, the Oracle Exadata environment—is an excellent match for our retail clients’ needs. By an engineered system, we mean integrated hardware and software that has been optimized for really heavy data workloads and makes the information usable for planning, forecasting, procurement, production, marketing, customer service, and other functions. Oracle Exadata also meets our requirements around scalability, security, availability, and cost. It also performs exceptionally well in big data environments, a significant factor, for example, when we consider the impact of unstructured voice data on existing retail analytics and information systems. And finally, it’s a cloud-ready infrastructure, so at any point, what’s running on an engineered system can easily be moved over to the cloud. Thank you for the wonderful insights, Hemalatha.
I invite our readers to check out the four-part series on IT technology and retail here: Part 1: Selling the Future: The Last Days of Retail or the Best Days of Retail?, Part 2: Selling the Future: Designing Experiences Your Customers Crave, Part 3: Selling the Future: You’re Not Selling Goods. You’re Selling an Experience, and Part 4: Selling the Future: Where Can Your Supply Chain Take You?. Learn more about how Oracle Engineered Systems can help your company thrive in the face of change.

About the Author

Hemalatha Vema is Practice Leader at Cognizant’s Oracle Solution Group. She has more than 20 years of business consulting experience in Oracle ERP, Industry, and Analytics solutions. She has consulted for various global customers, managed large engagements, and incubated new Oracle practices. She is continuously exploring new trends and launching new solution offerings for customers.



SAP Reveals Much Longer-Term Commitment to Oracle

Today's guest post is from Mustafa Aktas, Head of Oracle on Oracle Solutions, EMEA Region, Engineered Systems for Oracle Applications, ISVs and SAP. SAP has a much longer-term commitment to Oracle than many think, both on-premises and extending to the cloud. You may have heard that earlier this month, on April 4, 2018, SAP certified Oracle Database Exadata Cloud Service (ExaCS) to run all the Business Suite stack applications, such as ERP/ECC, HR, SCM, and CRM. This is yet another sign of SAP's long-term commitment to supporting Oracle. We saw the first signals last year, when SAP extended support for SAP HR/HCM on Oracle (anyDB) from 2025 to 2030, and we expect similar commitments for all the other core applications. The certification follows the availability of Oracle Database 12c and of key options, such as Real Application Clusters, Automatic Storage Management, and Oracle Database In-Memory, that are available only to SAP customers who run SAP on Oracle infrastructure; everything already certified for on-premises deployments is now available on Exadata Cloud Service as well. As with on-premises Exadata, which has been certified for the last six years and hosts hundreds of live SAP customers (and growing), SAP customers can now deploy the Oracle databases of SAP Business Suite, based on the SAP NetWeaver 7.x technology platform, on Exadata Cloud Service, whether for development, test/QA, pre-production, production, or disaster recovery, on any platform. The same Exadata architecture, performance, capabilities, and advantages (for example, the ability to build cloud assurance and hybrid deployments, scale out and fail over between on-premises and cloud, and deploy rapidly) are now available with this ExaCS certification for SAP, at an optimized cost based on the cloud's pay-per-use licensing model.
The Oracle Cloud Infrastructure portfolio for SAP customers is getting much more interesting with "Bring Your Own License" and other great programs that directly benefit our customers. REMINDER: Don’t forget to download the SAP on OCI Newsletter here to see a cloud with minimal I/O latency and a much better price/performance ratio compared to Azure and AWS. CONCLUSION: The value of Oracle’s complete, open, secure, scalable, and powerful cloud means a lot to SAP users, who can follow all the development updates and plans, as well as the available SAP on Oracle cloud certifications, on SAP SCN.

About the Author

Mustafa Aktas is the Head of Oracle on Oracle Solutions for the EMEA Region, focusing on Engineered Systems for Oracle Applications, ISVs, and SAP products. He leads a focused, specialized cross-LOB team, spanning co-prime sales, business development, presales, consulting, and delivery functions, that helps customers build low-cost, optimized, high-performing cloud platforms to achieve substantially higher business benefits. His team’s motto is to help clients lower operating cost and risk with high-performing, integrated solutions that also increase user productivity and free up IT resources to focus on the innovations that drive business growth. Feel free to reach him at mustafa.aktas@oracle.com with any questions.



Telcos, Your Future Is Calling! Is Your Back Office Holding You Back?

Telecommunications service providers (telcos) have faced an ongoing assault on their business over the past decade from some unlikely sources. How nimbly their back-office systems support their business strategy can determine whether telcos win the battle against upstarts and keep their businesses relevant in a fast-changing environment.

Back-Office Systems Must Be Able to Handle New Business Models

The introduction of the smartphone a little more than 10 years ago, the entrance of over-the-top (OTT) players like YouTube and Facebook, and now the expansion of IoT and the coming 5G network rollouts all mean that new revenue streams continue to disrupt the market. This offers new opportunities as well as new risks. As evidence of the accelerating pace of change, at the Mobile World Congress 2018 held in Barcelona earlier this year, Sprint announced that it is ready to roll out 5G capabilities this year in six U.S. cities: Atlanta, Chicago, Dallas, Houston, Los Angeles, and Washington, DC. Legacy operations and business software are reaching the limits of what they can do to support this new reality—and it’s not just traditional operations and business support systems. Back-office systems extend to all customer-support and employee software. From the call center to billing to field service workers, existing systems can’t deliver the customer experience expected by today’s digital consumers.

Modernizing Operations Holds the Key to Business Innovation

The key to success for many telcos is providing a strong back-office operation that can support the agility, scalability, and processing needed to compete with the nontraditional telecommunications companies and other competitors that are infiltrating the market. Building the back office on a cloud-based platform with integrated infrastructure can minimize costs, maximize efficiency, and make it easier and quicker to innovate. The cloud paves the way for transformation from traditional to digital, from slow to nimble.
What specifically do traditional telcos need to do to modernize their back offices and compete on a level playing field with the OTTs?

Modernize financial management, planning, and operations to enhance agility and innovation.

Build an end-to-end supply chain platform with cross-functional, demand-driven digital operating models and collaborative planning processes.

Be cloud-ready with on-premises or cloud infrastructure that provides a choice of deployment models, allowing IT flexibility to move seamlessly among different options throughout the digital transformation process.

Build infrastructure on cloud-ready engineered systems to help manage the deluge of data so that telcos can focus on innovation, not slowed-down data management.

NTT DOCOMO Builds Back-Office Operations for Blazing Speed

Telco giant NTT DOCOMO—Japan’s largest mobile service provider, with more than 66 million customers—is one telco that has answered the call to modernize. NTT DOCOMO launched its +d initiative, which offers a unified brand experience for all the company’s services, by consolidating data across the enterprise to introduce new services. To accomplish this total integration and create a seamless environment, it needed a database platform that would support rapid data growth over the next decade, reduce systems maintenance, and enable rapid new application development. By consolidating 350 legacy servers and storage systems on 30 Oracle Exadata Database Machines, it was able to cut maintenance costs in half and installation costs by 25 percent. Perhaps most important, with its integrated engineered systems designed to work together to achieve maximum performance capability, DOCOMO can now process mobile billings 10X faster than before the consolidation. The power of consolidated systems that work in harmony using Oracle Exadata and Oracle Maximum Availability Architecture means that DOCOMO can now give its customers real-time mobile charge calculations for a billion calls a day.
By deploying an integrated engineered system, the company benefits from simpler systems management with greater visibility into performance issues.

A Major Wireless Service Provider Makes Financing Super Simple with SuperCluster

Similarly, a major U.S. telco had a rapidly growing lending and leasing business for post-paid subscriber handsets. But its established business processes were expensive and provided a less-than-optimal customer experience. To overcome these hurdles, the company chose the Oracle Financial Services Lending and Leasing (OFSLL) application and SuperCluster M7. Now, this telco can deliver an end-to-end solution to originate and service the assets, and securitize them to third parties.

Don’t Let the Back Office Hold You Back

Telcos can meet the challenges of this fast-evolving, data-driven industry and become nimble competitors if they ensure that they have a modern back office on an integrated, cloud-ready infrastructure. They can’t do it by sitting back and letting the OTTs take the lead, or by lagging behind in technology adoption. It’s the forward-looking telcos committed to digital transformation that are answering the call of the future. To learn more about how Oracle Engineered Systems can modernize your back office, visit our website or contact your Oracle representative.


Cloud Infrastructure Services

Your Future Is Calling: Surprise! There’s (Always) More Regulation on the Way

Unless you’ve been hiding out in a cave for the past couple of years, you know that the always-highly-regulated telecommunications industry is about to be hit with even more regulation. The latest salvo from regulators is Europe’s General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. GDPR will add new accountability obligations, stronger user rights, and greater restrictions on international data flows for any organization that stores user data, including telcos, financial services providers, and social networks. These regulations apply to data for individuals within the EU as well as the export of any personal data outside the EU—so it will affect all businesses that collect data from EU citizens. While increasing compliance requirements, these new regulations also present tremendous opportunities for telecommunications companies to gain greater customer trust through improved data protection, and to expand and refine their service offerings.

Greater Challenge, Greater Opportunity

GDPR dramatically ups the ante, both in terms of its data governance requirements and the cost of noncompliance, which could result in legal action or fines of up to €20 million or 4% of worldwide annual revenue, whichever is greater. No matter where they are headquartered, companies holding the confidential data of EU citizens will not only need to comply themselves, but also ensure that their vendors, including SaaS providers, meet the requirements as well. Yet, in a 2017 survey by Guidance Software, 24% of service providers predicted they would not meet the deadline. The survey also identified the top four actions companies must take to become GDPR compliant:

Develop policies and procedures to anonymize and de-identify personal data (25%).
Conduct a full audit of EU personal data manifestation (21%).
Use US cloud repositories that incorporate EU encryption standards (21%).
Evaluate all third-party partners that access personal data transfers (21%).
How to Prepare for GDPR: Build on PCI DSS

The good news is that telcos already have a blueprint for achieving GDPR compliance: the Payment Card Industry Data Security Standard (PCI DSS), the latest version of which has been in place since 2016. While PCI DSS deals with cardholder data (CHD) and GDPR’s focus is on personally identifiable information (PII), both are designed to improve customer data protection. What’s more, the segmentation and security measures required for PCI DSS can be deployed to help meet the less prescriptive GDPR requirements. Take the advice of Jeremy King, international director at the Payment Card Industry Security Standards Council (PCI SSC): “People come to me and say, ‘How do I achieve GDPR compliance?’… Start with PCI DSS.”

Clearly, communications service providers must gain greater control over their data to comply with regulations. But doing so also presents an opportunity to increase customer trust. Furthermore, the data consolidation required to achieve regulatory compliance opens avenues for better insights and analytics—which help telcos provide more revenue-generating services and offer a better customer experience.

These goals will require a standardized, integrated infrastructure that provides data security, scalability, agility, resilience, and processing power. Co-engineered systems with security designed into their DNA, like Oracle Exadata, in which every component is engineered to optimize performance, can help meet the storage and processing needs of telcos handling sensitive personal or payment data. The built-in ability to isolate storage and compute nodes that must adhere to varying degrees of confidentiality, integrity, and availability requirements meets the PCI DSS v3.2 requirements.
This is just one of the reasons why Spanish telecommunications giant Telefonica chose Exadata to consolidate its mission-critical databases, boost database performance by 41x, and optimize operating expenses to ensure business continuity in the face of outages or, worse, potential cyberattacks and security breaches.

Cloud Computing: The Perfect Solution—Except When It Isn’t

Cloud computing offers a flexible, scalable pathway to meeting the growing compliance requirements that organizations face. The public cloud provides full-time, dedicated security monitoring and enhancement, and allows companies to bring new security measures online without costly IT intervention or retooling of legacy, on-premises infrastructure.

But relying on the public cloud raises concerns for telcos, which must ensure data sovereignty and governance and may also worry about latency issues. One alternative that solves this dilemma is Cloud at Customer, which offers a full public cloud model delivered as a service, behind the enterprise firewall. This ensures that telcos can maintain their data security and regulatory compliance even as they take advantage of the benefits of cloud computing, such as automatic updates that quickly and effortlessly meet the latest compliance standards.

AT&T Turns to Cloud at Customer to Enhance Capabilities and Compliance

As the world’s largest telco, AT&T operates a massive private cloud based on proprietary virtualization. But it needed a cloud-based solution to run its 2,000 largest mission-critical Oracle databases, and its private cloud couldn’t deliver the needed performance for these transaction-intensive databases. The company also needed a solution that would keep all customer data on premises for regulatory, privacy, and security reasons.

Cloud at Customer allowed AT&T to take advantage of the same infrastructure platform that Oracle uses in its own data centers, but located in AT&T’s facilities.
Through Cloud at Customer, AT&T runs critical databases of up to 100 TB in an Oracle-managed cloud that provides the same flexibility and scalability as the public cloud. This configuration also offers performance benefits, according to AT&T lead principal technical architect Claude Garalde: “For performance, you want the database to be really close to the application and middleware layers,” he notes. “You don’t necessarily want to be going out over a public internet link or even a VPN.”

If Compliance Seems Like a Burden, Try Data Exposure...

Regulation will remain a fact of life for telcos. Getting ahead of the regulatory game, then, is critical. The key is to recognize that regulation represents a growing demand from customers that companies keep their sensitive data safe. With that in mind, telcos need a flexible, scalable data storage and processing solution that ensures compliance while also supporting aggressive business goals. Engineered systems such as Oracle Exadata, combined with a deployment model such as Cloud at Customer, provide the crucial link between data security and data utilization to power transformational innovation.

Learn more about how Oracle Engineered Systems can help you maintain compliance with new regulations while supporting a customer-centric business model:


Data Protection

No Downtime for the Enterprise, Part 2: A Fresh Look at the Value of Oracle’s Maximum Availability Architecture

In a recent blog post, my colleague Andre Carpenter argued that a successful digital transformation requires zero downtime—a goal that can only be achieved with an intelligent and adaptive backup and recovery framework. At Oracle, we call this framework Maximum Availability Architecture (MAA). MAA is Oracle's best-practices blueprint for application and database workloads, based on proven technologies, expert recommendations, and customer experiences. The goal of MAA is to achieve the highest availability at the lowest cost and complexity.

Why Your Legacy System Can't Keep Up

Already have a backup system? Of course you do. But a system that works reasonably well today will be woefully inadequate—and increasingly costly and complex—tomorrow. Consider these three trends that impact your ability to eliminate downtime:

Business processing increases every year: According to Gartner, the top business priority for CIOs is growth and market share. As your organization adds more digital products and services and your market grows, so do your data volume and processing needs.

Tolerance for downtime decreases every year: In 2016, unplanned data center outages cost an average of $740,357, and nearly $9,000 per minute. These costs come from business disruption, lost revenue, and impaired end-user productivity. Large-scale outages quickly become public knowledge and can damage your company’s reputation.

Operational effort increases every year: Keeping up with your growing data using legacy backup and recovery methods introduces greater complexity, higher cost, and increased risk. The reality is that more companies are increasingly exposed to data loss because they're struggling to improve their backup and recovery capabilities as their operational models evolve, including continued migration to the cloud to manage ever-increasing quantities of data. This risk is not just a technology issue: 80% of all unplanned outages are due to people or process issues.
Enter MAA and Oracle Engineered Systems

To safeguard your high-availability (HA) and mission-critical environments, you need a new framework—one designed to facilitate recovery. At its heart, MAA tightly integrates data recovery processes with the database. It backs up database transactions, not just files, so that the standby database stays current with the primary database. As a result, it can deliver a recovery point objective (RPO) of less than one second, mitigating both data loss and downtime. Innovative technologies, including autonomous database platforms capable of managing and optimizing themselves, will continue to lower the risk of human error and make MAA increasingly efficient. Enterprises can choose deployment options designed to meet IT service level agreements (SLAs), based on lessons learned in solving the toughest HA problems while keeping costs down. There are four levels of MAA:

For development, test, departmental, and some production environments:
1. Bronze: Includes automated restart and restore from backup.
2. Silver: Adds active/active database clustering to Bronze capabilities.

For production, mission-critical, and extreme-critical environments:
3. Gold: Adds HA clustering, disaster recovery, and backup using physical replication, resulting in zero data loss and fast failover.
4. Platinum: Achieves zero data loss and zero downtime using advanced capabilities that make outages transparent to users.

Opting for cloud-ready engineered systems can further support your efforts to achieve high availability. Choosing a technology stack in which all layers are co-engineered to work optimally together can not only streamline your IT, but also improve performance.
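The tiered structure above lends itself to a simple "which tier do I need?" lookup. The following sketch is purely illustrative: the tier names and capabilities come from the list above, but the `minimum_tier` helper is a hypothetical example for reasoning about SLA requirements, not an Oracle tool or API.

```python
# Illustrative model of the four MAA tiers described above.
# Capability names paraphrase the text; this is not Oracle software.

MAA_TIERS = {
    "Bronze":   {"automated restart", "restore from backup"},
    "Silver":   {"automated restart", "restore from backup",
                 "active/active clustering"},
    "Gold":     {"automated restart", "restore from backup",
                 "active/active clustering", "disaster recovery",
                 "zero data loss", "fast failover"},
    "Platinum": {"automated restart", "restore from backup",
                 "active/active clustering", "disaster recovery",
                 "zero data loss", "fast failover", "zero downtime"},
}

TIER_ORDER = ["Bronze", "Silver", "Gold", "Platinum"]

def minimum_tier(required_capabilities):
    """Return the lowest MAA tier that covers every required capability."""
    needed = set(required_capabilities)
    for tier in TIER_ORDER:
        if needed <= MAA_TIERS[tier]:
            return tier
    raise ValueError(f"No tier satisfies: {needed}")

print(minimum_tier({"restore from backup"}))  # Bronze
print(minimum_tier({"zero data loss"}))       # Gold
print(minimum_tier({"zero downtime"}))        # Platinum
```

The point of the exercise: a departmental system that only needs restore-from-backup sits comfortably at Bronze, while any workload whose SLA demands zero downtime forces Platinum.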
And should you choose to tap the essentially unlimited data storage and processing power of cloud computing to meet your organization’s current and future needs for high-availability data, deploying cloud-ready engineered systems that have exact cloud equivalents, such as Oracle Engineered Systems, streamlines the transition to the cloud. Organizations can also opt to maintain complete data sovereignty and control through Oracle Cloud at Customer, which replicates the public cloud within their own data centers.

Case Study: Best Practices in Action

A $20 billion transportation and industrial company was using legacy metered backup solutions for its rapidly growing back-office application data. As a result, the company found itself paying high licensing fees even as it placed unnecessary load on its database servers, potentially affecting performance. What’s more, the restore process it used was complex and prone to risk. By deploying MAA and Oracle Zero Data Loss Recovery Appliance (ZDLRA), the company can now perform ongoing incremental backups. Since only changed data is sent to the appliance, the process requires minimal database server and network resources, allowing the company to reclaim disk capacity for usable storage over time. In fact, reclaiming the Exadata Database Machine X7 storage currently used for backup disk groups and standby Data Guard instances will return approximately 500 usable TBs over the next four years. In addition, the company’s IT team can now easily restore the full database to any point in time within the recovery window.

You Have the Blueprints for Success

As your data processing grows in volume and complexity, it becomes increasingly difficult to maintain the high availability your customers and your business demand. Oracle Maximum Availability Architecture, built on engineered systems, is your insurance policy against data loss exposure.
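The "only changed data is sent" idea behind incremental backup can be sketched in a few lines: hash the data in fixed-size blocks, and ship only the blocks whose hashes differ from the last backup. This is a conceptual illustration of the technique, not Oracle's ZDLRA implementation; the block size and data here are toy values.

```python
# Conceptual sketch of incremental backup: after an initial full copy,
# only blocks whose content changed are sent to the backup target.
import hashlib

def block_hashes(data, block_size=4):
    """Split data into fixed-size blocks and hash each one."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def changed_blocks(old_data, new_data, block_size=4):
    """Return indices of blocks that differ and must be re-sent."""
    old_h = block_hashes(old_data, block_size)
    new_h = block_hashes(new_data, block_size)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

full_backup = b"AAAABBBBCCCCDDDD"
current     = b"AAAABXBBCCCCDDDDEEEE"   # block 1 modified, block 4 appended
print(changed_blocks(full_backup, current))  # [1, 4]
```

Only 2 of the 5 current blocks travel to the backup target, which is why this approach keeps database server and network load minimal as data volumes grow.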
To learn more about how Oracle MAA works with Oracle Engineered Systems to meet your HA data needs, visit www.oracle.com/maa.


Cloud Infrastructure Services

4 Trends to Keep an Eye on in 2018

Today's guest blog comes from Andre Carpenter, Principal Sales Consultant at Oracle.

2018 is already shaping up to be an exciting year, one that is proving to be more about new and emerging technologies than about the existing platforms and trends out there. The use of server and desktop virtualization continues to grow, but the focus has now turned to building and delivering microservices through the rapid spin-up of containers to serve the needs of the business. Just in the last three months, I have seen more and more interest from customers in how to tackle the hybrid and multi-cloud adoption challenge. This has led to discussions about how Oracle’s three deployment models give them choice, and how perhaps the right model for many is a combination of the three. Here are my four technology predictions for the coming year:

1. Data becomes “Smart” Data

We are already seeing strong IoT adoption in the enterprise, driving more and more consumption of data storage and contributing to the phenomenon we know as Big Data (more on this later). But what are companies doing with this data from smart devices once it is ingested into their enterprise data management lifecycle? With considerations such as permissions, sharing, security, compliance, governance, and the actual life span of the data, datasets are now being designed to become smarter in the sense that they actually drive these elements (perhaps by way of metadata) themselves. This allows further enhancements and possibilities around self-healing, provisioning, and security permissions, which will make the day-to-day life of the IT operations manager a heck of a lot easier. Take our self-managing, self-driving database as an example, with its promise to automate the management of the Oracle Database to eliminate the possibility of human error, patch itself, and even back itself up without any human intervention.
This frees up human resources to concentrate on other areas of the IT environment that haven’t quite reached an autonomous state, without worrying about the database driving itself in the background. What is scary is that the next evolution of this prediction will be the ability to learn from these events, making Smart Data even smarter.

2. Big(ger) Data

8K video files and higher frame rates are leading to hungrier storage demands. We have seen this in one of our customers' storage farms, not only from a consumption perspective but from a performance perspective too. The ability to store bigger data faster shortens time to market for many and provides that competitive edge that customers are always seeking. Another source of this boom is the Internet of Things (IoT) and edge devices, each with their own consumption requirements. Smart cars, aviation, and smart devices are all generating data at an unprecedented rate. This takes me back to the IDC Digital Universe study published in 2012, which aims to measure the "digital footprint" of this phenomenon, estimating that by 2020 we would have generated 40 ZB. The previous year's forecast for 2020 was 35 ZB, by comparison.

3. Security is paramount even in the cloud

For many, security is both the reason to go cloud and the reason not to. It can be assumed, on one level, that public cloud providers host a far more secure and robust platform for cloud users to adopt their services. The main reason for this is that the technology these public cloud environments provide is a more modern and robust platform, specifically designed for the purpose it is serving: to host multiple workloads in a secure and reliable manner. Contrast this with traditional on-premises environments, where stack technologies have organically formed and grown over the years without any real thought or intent to resemble a cloud hosting environment.
The result is incompatible components, poor security measures right through the stack, and aging hardware. This makes many IT leaders nervous about moving their workloads to the cloud, as they have created a monster that does not simply lift and shift, and certainly not without security risks. A study by Barracuda Networks found that 74% of IT leaders said that security concerns were restricting their organizations' migration to the public cloud. This stance is supported by Jay Heiser, research vice president at Gartner, who states: "Security continues to be the most commonly cited reason for avoiding the use of public cloud." Whichever way you look at it, and whichever deployment model is in consideration, security has now taken a front-row seat on the priority list of most Chief Digital Officers and Chief Information Officers for 2018.

4. Multi-cloud model is the norm!

No surprise with this one, but I still find a lot of cloud practitioners in the industry believing that customers standardize on just one public cloud provider, when in fact customers are embracing the choice paradigm and using multiple cloud vendors for multiple purposes. This is backed by research from RightScale, whose Cloud Computing Trends: 2018 State of the Cloud Survey found that 81 percent of the enterprises surveyed have a multi-cloud strategy. The market has now become a fierce battleground for these providers, and more scrutiny and demands from end customers are forcing the cloud players to provide more flexibility and simplicity to move workloads on AND off their clouds without any major financial headache or penalty, which was a major consideration initially.
In closing, we are witnessing a massive explosion right across the IT industry, where technology appears to be accelerating ahead of where many enterprises are in their IT roadmaps, and the challenges of maintaining security, utilising big data smartly, and leveraging cloud choice to gain competitive edge all remain critical on the CIO's and CDO's agendas. I can't wait to see how the year pans out, and whether and how these predictions transpire.

About the Guest Blogger

Andre Carpenter is a seasoned IT professional with over 12 years' experience spanning presales, delivery, and strategic alliances across the APAC region for many large vendors. Prior to joining Oracle, Andre held a number of roles at HPE, including Principal Consulting Architect and Account Chief Technologist, helping customers drive their IT strategy and looking at how new and emerging storage technologies could impact their competitiveness and operations. He also evangelised HPE's converged infrastructure and storage portfolio through product marketing, blogging, and speaking at industry conferences. Andre holds a Bachelor of Information degree as well as a Master of Management (Executive Management) from Massey University, New Zealand. You can follow Andre on Twitter: @andrecarpenter and LinkedIn: www.linkedin.com/in/andrecarpenter


Cloud Infrastructure Services

Your Future Is Calling: Get Connected—With Everything

Mobile devices play a pivotal role in the technology innovations that continue to transform the business and consumer worlds, as attendees at the recent Mobile World Congress 2018 (MWC) saw and experienced. The cutting-edge applications for artificial intelligence (AI), virtual reality (VR), and a plethora of connected devices (aka the Internet of Things, or IoT) demonstrated at the conference promise tremendous benefits for all industries, but they also create an explosion of data that must be managed before they can realize their full potential. For traditional telecommunications companies, it is the data generated from this connectivity that holds the key to differentiating themselves and maintaining a competitive position in the industry.

Accelerated 5G: It’s All About the Data

There is a growing sense of urgency to complete the deployment of the 5G networks that will bring the potential of IoT to reality. Ericsson and Nokia are reporting that 5G rollouts are ahead of schedule, and telcos like Verizon and AT&T are each hoping to be the first to roll out 5G. This eagerness stems largely from the realization that with a 5G network in place, telcos become uniquely positioned to facilitate and monetize the data they collect. Thanks to 5G and IoT, the data capital of telcos will quickly expand, and the opportunity in front of them is tremendous. To get a sense of the revenue possibilities, note that IHS forecasts that the IoT market will grow from an installed base of 15.4 billion devices in 2015 to 75.4 billion in 2025—a nearly 400% increase over a 10-year period. Considering that every connected device will generate enormous amounts of data, this increase presents huge new potential revenue for telcos that have an IT infrastructure ready to optimize the coming data deluge. One need only look to the smart city revolution occurring in Barcelona to learn both the opportunities and the challenges.
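For readers who want to sanity-check the "nearly 400%" figure, the arithmetic is straightforward (the 15.4 and 75.4 billion device counts are the IHS figures cited above):

```python
# Check the cited IHS growth figure: 15.4B devices (2015) to 75.4B (2025).
installed_2015 = 15.4  # billions of devices
installed_2025 = 75.4

growth_pct = (installed_2025 - installed_2015) / installed_2015 * 100
print(round(growth_pct))  # 390, i.e. "nearly 400%"
```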
Lessons Learned from the Smart City of the Future: Barcelona

In Barcelona, Spain, the current possibilities of IoT are continually being tested. After implementing 500 km of optical fiber, free Wi-Fi routed through street lighting, and sensors that monitor air quality, the city’s leadership is finding ways for IoT to better serve its citizens. Examples of the successes to date include the ability to deliver more reliable bus services with information updates at bus stops and easier ticketing, and sensors in multistory parking garages that help people find vacant spaces. These advances have not come without issues. The most pressing has been how to manage data coming from disparate sources, which poses a challenge for how to integrate that data so that it can be efficiently analyzed and leveraged. Said Barcelona CTO and digital commissioner Francesca Bria, “City Hall ended up with a lot of data, with a lot of dashboards, and yet without any capacity to really use data and information to take better decisions for the public good, or to give ownership of the data to citizens.”

Big Data: The Winning Strategy

There’s no question that big data solutions are needed. There’s also no question that DIY infrastructure is not the answer. The vast potential and real-time impact of IoT make building the needed infrastructure both time-critical and mission-critical. While DIY solutions may be tempting, they also come with risks and challenges. In fact, Gartner estimates that 85% of big data projects fail due to the difficulty of integrating with existing business processes and applications, among other challenges. To realize the potential of monetizing their data, telcos must recognize the vital importance of a seamlessly integrated infrastructure that is capable of supporting massive data workloads. Fortunately, telcos can learn from the success of one of the world’s largest telecommunications companies, Telefonica Spain.
Telefonica Spain serves 320 million customers in 21 countries in Latin America and Europe. The telco giant is investing big in the future of the communities where it operates, with more than $7.5 billion already invested in R&D. With the huge amounts of structured and unstructured data that it must manage, the telecom company’s goals included:

Unifying business intelligence (BI) systems for visibility into market trends and customer preferences
Simplifying and integrating IT, BI, and big data systems to analyze data from multi-device customers
Developing new and better products and services

To accomplish these goals, Telefonica Spain deployed Oracle Engineered Systems, including Big Data Appliance, to aggregate massive amounts of data from multiple sources at speed and scale. Specifically, Telefonica used Big Data Appliance to analyze data coming from its landline service, mobile service, pay-TV, and other digital sources, including market trend data, to gain a better understanding of customers’ use of services and generate customer insights. Telefonica also leveraged Oracle Exadata and Oracle Exalogic to power and unify the company’s mission-critical applications and CRM platform with BI tools, and thereby analyze more than 3,200 TB of data. The results have enabled Telefonica to reduce costs, create seamless integrations between systems and networks, make better business decisions, and offer more personalized services. Telefonica’s ability to analyze real-time data from multiple customer devices has made a significant impact on customer satisfaction in several areas, including call center efficiency, complaint resolution time, and the ability to enhance its cloud TV service with personalized content recommendations. The company tripled the cost efficiency of its BI infrastructure and reduced the time-to-market of new data sources by 90%.
Building the Data Infrastructure to Monetize IoT

We all know that telcos operate within a heavily regulated environment and, as such, typically rely on on-premises infrastructure to maintain control of their data. On-premises environments address compliance regulations, but if massive amounts of data must be retrieved from multiple remote locations in the cloud, latency issues arise. These latency issues can hinder the real-time data processing that will be necessary to truly monetize IoT. What telcos need is an infrastructure that keeps them compliant but can also deliver the benefits of a cloud deployment—for instance, real-time data processing, simplicity, and an on-demand, subscription-based consumption model. Oracle Big Data Cloud at Customer (BDCC), which brings the cloud inside their own data centers, delivers on that promise. BDCC allows telcos not only to gain the ability to monetize real-time data generated by IoT, but also to cut time-to-market and operating costs.

A Pivotal Time for Telcos

IoT will create an explosion of data, and 5G networks will be required to make its potential a reality. Telcos are ideally positioned to lead this charge, but a commitment to invest in integrated infrastructure designed specifically for big data processing will be critical to successfully monetizing this opportunity. Now is the time for telcos not only to reimagine the future of their business, but also to begin going after it. The ability to leverage engineered systems to create the foundation of a highly connected world, upon which business and government can implement IoT innovations, will create a brighter future for people—and a more profitable future for telcos.


Cloud Infrastructure Services

March Database IT Trends in Review

Check out the latest database happenings you may have missed in March...

In Case You Missed It...

Forbes - Larry Ellison: Oracle's Self-Driving Database 'Most Important Thing Company's Ever Done.' Ellison pulled no punches in framing his view of how truly disruptive the Autonomous Database will be: it requires no human labor, delivers huge cost savings, and is more secure because eliminating human labor eliminates human errors. Read the interview here.

GDPR is Just Around the Corner

What is GDPR? Everything You Need to Know. The EU General Data Protection Regulation (GDPR) comes into effect on May 25, 2018. For those of you who are still coming to terms with what GDPR actually means for you, your team, and your company, we sat down with Alessandro Vallega, security and GDPR business development director for Oracle EMEA, to get answers to frequently asked questions about GDPR. Read the blog.

Time to See GDPR as an Opportunity, Not a Chore. We are very quickly moving to a world driven by all sorts of connected devices and new forms of artificial intelligence and machine learning. In order to succeed with these technologies, businesses will need the public to trust their approach to managing data. By acting now, companies will guarantee their approach to data is compliant with the new GDPR rules and gain the confidence to continue delighting customers with better, more personalized services. Learn more.

Addressing GDPR Compliance Using Oracle Security Solutions. This white paper explores how you can leverage Oracle Security Solutions to help secure data at rest and in transit, organize and control identity and access for users and IT personnel, and manage every aspect of a complex IT infrastructure to get a leg up on addressing GDPR requirements today. Download the white paper.

How Are Companies Evolving IT to Keep Up with New Demands?

The Gaming Industry Bets on Cloud-Ready Solutions.
The gaming industry has seen a remarkable transformation in the past few years: 57.6% of 2017 revenue at major Las Vegas Strip resorts came from non-gaming activities, with customers spending more and more on celebrity restaurants, high-end shopping, shows, and even theme-park-like rides. Ensuring that your IT environment is ready to take on the cloud should be a top priority, if not the #1 priority, for the casino and gaming industry.

MedImpact Optimized Pharmacy Management with Oracle Exadata and ZFS

Pharmacy benefit managers (PBMs) like MedImpact work with customers' health plans to help them get the medication they need. PBMs face a rapidly changing patient landscape that is demanding higher efficiency, rapid response, and improved health care outcomes. MedImpact was able to accelerate database performance by up to 1000% and pull reports and analytics in seconds versus hours with Oracle Engineered Systems. Watch their story here.

Don’t Miss Future Happenings: subscribe to the Oracle Cloud-Ready Infrastructure Blog today!


Cloud Infrastructure Services

Your Future Is Calling: How to Turn Data into Value-Added Services

Maybe it’s time to stop calling them smart “phones.” After all, we use them to shop, do our banking, watch TV, take photos and videos, wake us up in the morning, board airplanes, catch a ride, and so much more. Telephony is an ever-smaller part of what we use phones for. Like so many other industries, the telecommunications (telco) industry is experiencing a revolution built on data and technology. On one hand, traditional telco providers have a unique opportunity to take advantage of this revolution. On the other, as over-the-top (OTT) content and app providers like Facebook, Hulu, Netflix, and Amazon engage with customers through compelling content of their own, they can also be seen as competitors by the traditional telco providers. The traditional telcos are struggling to catch up with these nimbler rivals. The secret to gaining an advantage lies in figuring out how to monetize all the data that flows through the telco network. And that means building an entirely new business strategy on a robust infrastructure. Start with a Level Playing Field Mobile World Congress 2018 in Barcelona, which took place February 26 through March 1, provided a lens into the current and future state of the telco industry—and the challenges that lie ahead. Telefónica CEO José María Álvarez-Pallete López, in his kick-off keynote, talked about a “new mindset” that’s needed in the industry. First, carriers should have an investment strategy that funds the rollout of next-generation 5G network technology, which will enable telcos to offer a more robust range of services and “level the playing field with the internet giants.” He also talked about a “digital bill of rights,” and addressed issues surrounding privacy and machine ethics. Where Do the Telcos Stand Today? It’s not news that the telcos’ former key revenue offerings, like voice and text, have diminished.
In fact, “…according to Informa’s World Cellular Revenue Forecasts 2018, global annual SMS revenues will fall from US$120 billion in 2013 to US$96.7 billion by 2018, due to increasing adoption and use of Over-the-Top (OTT) messaging applications.” The OTTs have circumvented the traditional telcos, delivering messaging apps over the operators’ networks without paying for network access. Data, clearly, has become more important than telephony. Telcos must find ways to monetize the data they have in order to provide more personalized services and generally better service for customers. That’s how they can differentiate themselves from competitors. But they also need to move aggressively to beat nimbler competitors. They need to be proactive, and they need to reduce operational costs. Here’s how they can do it. The Road to Value-Added Services Is Called 5G It’s not an overstatement to say that the 5G network will, once again, revolutionize telecom. Not only could telcos offer improved mobile experiences with artificial intelligence and augmented/virtual reality, as well as more machine-to-machine connectivity, but there are potential societal benefits that could arise with 5G. Vodafone CEO Vittorio Colao described a “connected” ambulance that would be tied to the hospital to which it is transporting a patient, transmitting information along the way. And NTT Docomo’s President and CEO Kazuhiro Yoshizawa painted a picture of the future in which telehealth solutions support remote diagnoses where physicians are locally unavailable. Because telcos can track activity via customers’ smartphones, they can capture that data and use it to partner with other businesses to provide better customer experiences. Think about electronic hotel check-in, for example. A telco would know when a traveler’s flight has landed.
It could sync this information with the hotel where the traveler is staying and offer a mobile-based hotel check-in—ready for the customer when he or she arrives. Agile 5G networks also open the door to opportunities provided by the Internet of Things (IoT). These opportunities include smart grid, connected vehicles, in-store marketing, smart buildings, Industry 4.0, and digital spaces. The Road to the Future Needs Solid Infrastructure To become formidable 21st-century competitors, the telcos need to be able to capture, manage, and monetize data in an agile environment. And that means a solid infrastructure upon which to build a data-driven, 5G-network strategy. With the right infrastructure, they can create omnichannel customer experiences, personalize offerings, adapt and respond in real time, go to market faster, streamline their operations, modernize the back office, and turn data into a reusable, revenue-generating asset. Whatever path telcos choose to modernize infrastructure, it must include a strategy for adopting cloud. One option is to optimize on-premises infrastructure with Oracle cloud-ready engineered systems, which provide a clear migration path to the cloud. A second option is to build a hybrid cloud infrastructure in which workloads can be lifted and shifted easily between identical on-premises and cloud architectures. A third option is to bring the public cloud into their data centers and behind their firewalls, with Oracle’s Cloud at Customer. With a fully integrated infrastructure, telcos can speed implementation time, get vastly greater performance, and gain cloud-readiness whether they move to the cloud today or in the near future. Let’s take a look at one telco that has already taken steps to realize its full future potential. Gansu Mobile Hits the Road to the Future Gansu Mobile is a subsidiary of China Mobile Ltd., a mobile network with 628 million 4G LTE mobile users and 100 million broadband customers.
Like many telcos, the company wanted to support new business growth by improving the performance and reliability of its core business processes, such as mobile billing and messaging systems. It was also looking to resolve system problems more quickly, respond to customers faster, and lower the total cost of ownership of its IT infrastructure by moving its databases to a single platform. With Oracle Exadata Database Machine, Gansu integrated five legacy databases onto one machine. By optimizing its database and application performance, Gansu was able to support a 30% increase in broadband internet users. At the same time, it halved its customer response time, resulting in a marked improvement in customer satisfaction. Perhaps even more important, by implementing infrastructure to support the massive growth in data required for innovation, Gansu Mobile was able to offer new services like Internet Protocol television (IPTV), video-on-demand, and interactive network teaching. The Telco That Can Manage the Coming Explosion of Data Can Control Its Destiny You wouldn’t build a highway without a strong foundation. In the same way, the road to the future for telcos must start with a strong foundation: high-performance, cloud-ready, reliable infrastructure that can support the almost-here 5G networks. With that foundation in place, telcos can build the capacity and speed needed to power innovative, data-driven service offerings. Learn more about Oracle offerings for the Telecommunications industry: - Oracle Engineered Systems - Oracle's Cloud at Customer solutions - Oracle Communications Solutions


Cloud Infrastructure Services

What is GDPR? Everything You Need to Know.

The EU General Data Protection Regulation explained... The EU General Data Protection Regulation (GDPR) comes into effect on May 25th 2018. For those still getting to grips with what it means, we sat down with Alessandro Vallega, security and GDPR business development director for Oracle EMEA, to get answers to frequently asked questions about GDPR. What is GDPR? The EU General Data Protection Regulation (GDPR) will come into effect on 25 May 2018. It applies to all organizations inside the EU, and to any outside it that handle and process the data of EU residents. It is intended to strengthen data protection and give people greater control over how their personal information is used, stored, and shared by the organizations that have access to it, from employers to companies whose products and services they buy or use. GDPR also requires organizations to have in place technical and organizational security controls designed to prevent data loss, information leaks, or other unauthorized use of data. Why is GDPR being introduced? The EU has had data protection laws in place for over 20 years. However, in that time, the level of personal information in circulation has grown dramatically, and so have the different channels through which personal information is being collected, shared, and handled. As the volume and potential value of data has increased, so has the risk of it falling into the wrong hands, or being used in ways the user hasn’t consented to. GDPR is intended to bring fresh rigor to the way organizations protect the data of EU citizens, while giving citizens greater control over how companies use their data. What should organizations do to comply with GDPR? GDPR does not come with a checklist of actions businesses must take, or specific measures or technologies they must have in place. It takes a ‘what’ not ‘how’ approach, setting out standards of data handling, security, and use that organizations must be able to demonstrate compliance with.
Given the operational and legal complexities involved, organizations may want to consult with their legal advisers to develop and implement a compliance plan. For example, while GDPR strictly speaking does not mandate any specific security controls, it does encourage businesses to consider practices such as data encryption, and more generally requires businesses to have in place appropriate controls over who can access data, and to be able to provide assurances that data is adequately protected. It also states that businesses must be able to comply with requests from individuals to remove or amend data. But it is up to organizations how they meet these requirements, and ultimately it is up to them to determine the most appropriate level of security required for their data operations. What are the penalties for not being compliant with GDPR? If organizations are found to be in breach of GDPR, fines of up to 4% of global annual revenue or €20 million (whichever is higher) could potentially be imposed. Furthermore, given how critical personal data is to a great many businesses, the damage to their reputation could be even more significant if the public believes an organization is unfit to control or process personal information. Who needs to prepare for GDPR? Any organization, based inside or outside the EU, that uses personal data from EU citizens needs to prepare, whether as the controller of that data, such as a bank or retailer with customer data, or as a third party handling data in the service of a data controller, such as a technology company hosting customer data in a datacentre. Obligations depend on their respective roles and control over the data they handle. What personal information is covered by GDPR?
GDPR is designed to give people greater control over personal information, which may include direct or ‘real world’ identifiers such as name and address, or employment details, but may also include indirect or less obvious identifiers such as geolocation data or an IP address which could make a person identifiable. Is GDPR bad for businesses? Complying with any new regulation may bring additional work and expense, but GDPR also gives organizations an opportunity to improve the way they handle data and bring their processes up to speed for new digital ways of working. We are living in a data-driven economy. Organizations need to give consumers the confidence to share data and engage with more online services. Following the requirements of GDPR can help in that regard. Who should be in charge of GDPR? GDPR compliance must be a team effort. It is not something that can be achieved in, or by, one part of the organization. Ultimately, its importance is such that CEOs should be pushing their teams and appointed owners across the business to ensure compliance. Almost every part of a business uses and holds data, and it only takes one part of the business to be out of alignment for compliance efforts to fail. How can Oracle help with GDPR compliance? Oracle has always been a data company, and we take very seriously our role in helping organizations use their data in more effective, more secure ways. We have more than 40 years of experience in the design and development of secure database management, data protection, and security solutions. Oracle Cloud-Ready Infrastructure and Oracle Cloud solutions are used by leading businesses in over 175 countries, and we already work with customers in many heavily regulated industries. We can help customers better manage, secure, and share their data with confidence. For more information, see: Helping Address GDPR Compliance Using Oracle Security Solutions Is compliance being left to chance? How cloud and AI can turn a gamble into a sure thing
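To make the penalty ceiling described above concrete, here is a minimal sketch of the greater-of rule in a few lines of Python. The function name and revenue figures are invented for illustration; this is not legal advice.

```python
# Illustrative only: the theoretical maximum GDPR fine is the greater of
# 4% of global annual revenue or EUR 20 million.

def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Return the upper bound of a GDPR fine in euros."""
    return max(0.04 * global_annual_revenue_eur, 20_000_000)

# A company with EUR 2 billion in global revenue:
# 4% is EUR 80 million, which exceeds the EUR 20 million floor.
print(max_gdpr_fine(2_000_000_000))  # → 80000000.0

# A smaller company with EUR 100 million in revenue:
# 4% is only EUR 4 million, so the EUR 20 million floor applies.
print(max_gdpr_fine(100_000_000))  # → 20000000
```

The point of the "whichever is higher" construction is that the €20 million floor keeps the penalty meaningful even for organizations whose revenue would make the percentage-based figure small.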


Cloud Infrastructure Services

Time to See GDPR as an Opportunity, Not a Chore

When many people think of data-driven businesses, the temptation may be to think of major consumer-facing websites, online retailers, or social media companies. But the reality is that organizations of all sizes, across all sectors, are getting closer to their data in order to improve and personalize the customer experience or the way they work, or to transform whole industries or create new opportunities. The UK’s NHS Business Services Authority (NHSBSA) recently uncovered insights in its data that have helped it improve patient care and uncover nearly £600 million in savings. In India, a new crop of financial institutions have reimagined credit checks for the country’s unbanked population, assessing people for small business loans based on an analysis of their social media data. But while the rise of data-driven business models and organizations has made life better for many people, it has also raised concerns about how our data is collected, used, and managed. This is the major motivation behind the EU’s General Data Protection Regulation (GDPR), which aims to raise the standard of data protection in modern businesses and provide consumers with greater transparency and control over how their personal details are used. New regulation can feel like a burden, but organizations should see GDPR as an opportunity to put in place processes and protections that give them the ability to make the most of their data, and give consumers the confidence to keep sharing their data with the organization. To paraphrase TechUK’s Sue Daly, who joined a panel of data experts to discuss GDPR on the Oracle Business Podcast, we are moving to a world driven by connected devices, the Internet of Things, and new forms of artificial intelligence, and to succeed with these technologies businesses will need the public to trust their approach to managing data. Transparency can also be a valuable differentiator.
Telefónica, one of Spain’s largest telecoms operators, provides advertisers and content providers with anonymous audience insights so they can better tailor their content to individual users. In the interest of transparency, the company publishes the customer data it sends to third parties and gives people the option to opt out of sharing their personal details. Telefónica’s data-driven approach has taken it from strength to strength. Despite currency pressures and a difficult market, the company posted a 23% rise in profits at the end of February 2018. The exchange is mutually beneficial, as it allows the operator to curate the right content for its own customers and provide them with a better user experience. Telefónica has now captured 40% of Spain’s lucrative digital media and advertising market. By comparison, most telcos contribute to only roughly 2% of the advertising value chain. This perfectly illustrates why businesses should not just wait for GDPR to arrive and do the minimum required in the name of compliance. With major changes come major opportunities, but only for organizations that are proactive and look beyond the short-term regulatory burden. Nina Monckton, Chief Insight Officer at the NHSBSA, who also joined the Oracle Business Podcast panel to discuss GDPR, had this to say: “The trick is to help people see how their data helps your business improve their quality of life. For example, when you explain that their anonymized details can help researchers find cures to serious illnesses the benefits become much more tangible”. By acting now, companies can ensure their approach to data is compliant and gain the confidence to continue delighting customers with better, more personalized services.


Going the Extra Mile: From Fleet Management to Digital Freight Management

In a recent blog post, we talked about the new, consumer-driven supply chain—the mile after the last mile—in which consumers expect order delivery wherever and whenever they want it. To survive in this new reality, businesses need to build demand-driven supply chains. Integral to this process is a move from fleet management toward digital freight management. What’s the difference? Fleet management involves tracking and maintaining vehicles (engine operation, driver performance, etc.) for transportation companies such as common carriers. One of the goals of fleet management is to ensure that freight is handled appropriately and arrives at its destination in good condition. Freight management puts product at the forefront instead of vehicles, and focuses on providing shippers, rather than just fleet owners, visibility into the location and condition of their products during every phase of transit. By sharing information from both these sources—fleet and freight—and integrating it with supply chain and customer demand data, shipping can become more efficient and responsive. The key is to build upon integrated systems that allow data to flow seamlessly across every component. How can businesses get the needed visibility into their supply chains to facilitate the transition from fleet management to digital freight management? Evolving from Fleet Management to Freight Management Not every organization that ships goods owns its own fleet. Rather, these shippers depend on others—sometimes multiple carriers—to transport their products. To ensure that goods arrive safely and on time, companies need visibility and control over shipments, even when those shipments are not moving on their own vehicles. The pharmaceutical industry, for example, ships around $283 billion in high-value products that must be maintained under strict temperature- and humidity-controlled conditions.
What happens when a shipment arrives as scheduled, but has been compromised by an undetected temperature fluctuation? A painstaking failure analysis may be needed to determine the cause of the incident. Imagine the same scenario in a freight-management environment using Internet of Things (IoT) logistics technology. In this new world, components that make up the supply chain are equipped with sensors. The shipper’s asset-monitoring system detects a potential failure of the refrigerated cargo container. It communicates automatically to the fleet owner to provide re-routing information to the nearest repair facility. Meanwhile, the shipper is instantly apprised of the situation and provided with a revised ETA. What brings all these pieces of the supply chain puzzle together is technology infrastructure that can consolidate and manage all the structured and unstructured data involved in the process, to make it available for analysis, reporting, and response. This integrated infrastructure—i.e., engineered systems—must be able to perform all these tasks whether the data reside on-premises or in the cloud, so that information can travel seamlessly across systems. Integrating Supply Chain and Freight Management with Cloud-Ready Infrastructure The next step in logistics is integrating customer demand information with freight management, so that organizations can react dynamically when demand shifts. This is especially urgent in the retail industry, where consumers have come to expect instant gratification. To compete effectively, retailers must operate on digital and data-driven supply chain management systems that use business intelligence (BI) tools to get visibility into the supply chain, and can then incorporate artificial intelligence (AI) and automation to make needed shipment adjustments swiftly and accurately—without waiting for human intervention. But retailers can’t very well transform their entire supply chains overnight. 
The solution is to build upon cloud-ready engineered systems that provide the infrastructure to store, organize, and process a growing mountain of data, no matter the source. Demand isn’t the only driver of supply chain adjustments, however. Consider a shipment of ketchup to a distribution center. It’s running late, but no one knows by exactly how long. Without this information, the truck leaves without the product, causing disruption for both the manufacturer and the retailer. Having basic visibility into the entire supply chain is essential for preventing such inefficiencies from creeping in. Doing this requires an integrated infrastructure like Oracle Big Data Appliance (BDA) that can aggregate all the disparate supply chain data and process these workloads for analysis. Next, BI and analytics tools can take the data and tell companies in real time exactly where a product is and when it will arrive at its next destination. These insights can then be made accessible to all the relevant parties so that decisions can be made immediately based on that single version of truth. Big Data Appliance enables robust analysis because this cloud-ready engineered system helps companies acquire and integrate data as well as provide the storage and compute needed to run these workloads and facilitate decision-making in real time. One example is Croatian IT services provider mStart, which was challenged with optimizing its supply chain processes and inventory management by reducing the financial and environmental impact of transporting goods to more than 2,000 stores. The company was able to reduce retail transportation costs using Oracle Retail Demand Forecasting and Oracle Retail Replenishment Optimization to improve supply chain processes, and Big Data Appliance to optimize inventory levels in near real time. All these solutions were anchored to an engineered systems infrastructure configured to deliver optimal performance.
Ana Svetina, head of marketing for mStart d.o.o., explained why the company chose BDA: “We chose Oracle Big Data Appliance predominantly because of its seamless integration capabilities with the Oracle technologies that underpin our customers’ retail experience.” Achieving the Holy Grail in Moving Goods to the Consumer As technology evolves, we are fast approaching the “holy grail” of integration among consumer demand, supply chain, and logistics management systems. One crucial step toward achieving this goal is to re-think fleet management with a focus on the actual products in transit—in other words, digital freight management. A flexible infrastructure allows organizations to take advantage of each new technology as it comes online, unifying disparate systems on a single platform. Is your technology infrastructure capable of supporting this integrated approach? Learn more about how Oracle Engineered Systems can help your organization get on the path toward a flexible, digital supply chain.    
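The sensor-driven incident flow described earlier in this post (a refrigerated container failure that triggers re-routing for the fleet owner and a revised ETA for the shipper) can be sketched in a few lines. This is a minimal illustration only; the class, threshold, and notification channels are invented and do not represent any real Oracle IoT API.

```python
# Hypothetical sketch of an IoT cold-chain event handler: a sensor reading
# above a safe-temperature threshold fans out alerts to the fleet owner
# (re-routing) and the shipper (revised ETA), as in the scenario above.

from dataclasses import dataclass

MAX_SAFE_TEMP_C = 8.0  # assumed threshold for refrigerated pharma cargo

@dataclass
class SensorReading:
    container_id: str
    temperature_c: float

def handle_reading(reading: SensorReading, notify) -> str:
    """Check one container reading; alert all parties if it is out of range."""
    if reading.temperature_c > MAX_SAFE_TEMP_C:
        # Fleet owner gets re-routing info; shipper gets a revised ETA.
        notify("fleet_owner",
               f"Re-route {reading.container_id} to nearest repair facility")
        notify("shipper",
               f"{reading.container_id} compromised; revised ETA to follow")
        return "alerted"
    return "ok"

alerts = []
status = handle_reading(SensorReading("RC-114", 11.2),
                        lambda who, msg: alerts.append((who, msg)))
print(status, len(alerts))  # → alerted 2
```

The design point the article makes is exactly this fan-out: once the data from fleet and freight flows through one integrated system, a single sensor event can drive responses for every party at once instead of triggering a painstaking failure analysis after delivery.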


Engineered Systems

Big Data Can Power a Superior Bike-Sharing Experience

It should come as no surprise that a superior customer experience (CX) drives growth in customer acquisition as well as in customer revenue, and a recent Forrester report confirms this. What makes a superior customer experience possible, of course, is data. According to a recent Accenture survey, nearly all (98%) of companies considered high performers in CX say they are data-driven around customer experience versus just 55% of all other companies surveyed. And 91% of high performers in CX say that data and analytics are critical to driving customer experience improvement versus 66% of all other companies surveyed. We know that organizations interact with customers in many ways and the data surrounding these interactions can be captured from many sources such as IoT devices, point-of-sale systems, customer relationship management (CRM) systems, websites, and social media. By applying data analytics, predictive analytics, and machine learning practices, businesses can analyze customer data in near-real-time for every customer interaction. This gives companies the ability to get a complete, single view of how customers behave, what they buy or need, and how they will likely interact in the future. Using this information, companies can drive decisions about business functions that have a direct impact on the customer experience; for example, inventory management. However, companies still relying on legacy data systems often struggle to be effectively data-driven. Typically, the volume of customer data generated is so huge that these IT systems simply can’t scale quickly or cost-effectively enough to adequately support big data analysis. Also, as organizations grow—organically or through acquisitions—they often add infrastructure that scatters data across many databases and storage solutions. These siloed legacy systems are not well suited for analyzing large volumes of data coming from multiple sources. 
It can take weeks to pull together and process data from disparate sources into dashboards or reports that provide the insights needed to make key decisions. To be competitive, businesses need to be able to make decisions in real or near-real time. Oracle Big Data Appliance provides that capability by processing large data workloads at speed and scale. This out-of-the-box solution is optimized for the entire portfolio of Engineered Systems products—delivering a completely streamlined infrastructure that becomes the backbone for real-time, granular analytics. In fact, companies can track and predict customer behavior by running their workloads on this integrated infrastructure, where the hardware and software have been co-engineered to work optimally with each other and provide the highest performance and faster analytics. How Citi Bike Could Help New York City Deliver a Superior Two-Wheeled Experience New York City–based Citi Bike is the largest bike-share program in the U.S. with more than 10,000 bikes and 600 stations in Manhattan, Brooklyn, Queens, and Jersey City. Members use an app to find a bike at a nearby Citi Bike station and can return the bike to any station. Imagine a hypothetical situation in which Citi Bike faces an inventory challenge: frustrated users complain on social media that they can’t get bikes and docking spaces—not the customer experience Citi Bike wants to deliver, but exactly the kind of challenge that big data is well suited to help solve. How might big data and analytics solve this problem? Using Oracle Big Data Appliance, for instance, Citi Bike could easily aggregate and store large amounts of streaming data from multiple sources, including social media, sensors, and machines, in an on-premises data lake.
Or it could use Big Data Cloud at Customer to run and process heavy data workloads on a pay-as-you-go basis in its own data center (behind its firewall) without having to worry about securing, patching, and upgrading the hardware, since Oracle would do the IT maintenance. To leverage the power of big data, Oracle’s integrated Big Data analytics solution delivers advanced analytics, dashboards, and business intelligence tools that give organizations a powerful new level of visibility and insight into their aggregated data. So a bike-sharing program like Citi Bike could study historical bike usage patterns and analyze social media posts about any problems with bike and docking station availability, and then use these insights to predict demand more accurately and redistribute bikes to ensure adequate inventory at peak times. As mentioned above, powering these analytics is the infrastructure layer composed of Oracle Engineered Systems such as Oracle Big Data Appliance, which has been purpose-built from the ground up to deliver performance that can’t be easily matched by a DIY (do-it-yourself) solution. Big Data and Analytics = Business Solutions Oracle’s Big Data Appliance delivers on the promise of big data by capturing, storing, and organizing massive amounts of data from diverse sources into Hadoop. Oracle’s Big Data analytics solutions layer on the appliance to provide immediate analysis of the data through dashboards and business intelligence tools so that organizations can study customers’ behaviors in detail and gain insights that they can use to improve their businesses. Big data analytics combined with Big Data Appliance makes it possible to identify and address inventory management problems—which, in turn, can improve the customer experience, a key driver of business success.
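To make the prediction-and-rebalancing idea concrete, here is a toy sketch of how historical usage patterns could flag stations likely to run out of bikes at a given hour. The station names, figures, and threshold are invented for illustration and have nothing to do with Citi Bike's actual data or systems.

```python
# Toy demand model: average the historical net outflow (checkouts minus
# returns) per station and hour, then flag stations whose expected outflow
# exceeds a restocking threshold. All data below is invented.

from statistics import mean

# (station, hour) -> observed net bike outflows on past days
history = {
    ("Penn Station", 8): [14, 17, 15],    # heavy morning checkouts
    ("Battery Park", 8): [-9, -11, -10],  # docks fill up instead
}

def predicted_outflow(station: str, hour: int) -> float:
    """Expected net outflow, using the historical mean as a naive forecast."""
    return mean(history.get((station, hour), [0]))

def stations_to_restock(hour: int, threshold: float = 10.0) -> list[str]:
    """Stations whose expected outflow exceeds the restocking threshold."""
    return [s for (s, h) in history
            if h == hour and predicted_outflow(s, h) > threshold]

print(stations_to_restock(8))  # → ['Penn Station']
```

A real deployment would of course replace the hard-coded dictionary with streaming data aggregated in the data lake and the naive mean with a proper forecasting model, but the decision logic (predict demand per station and hour, then redistribute inventory ahead of peaks) is the same shape.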


Cloud Infrastructure Services

The Gaming Industry Bets on Cloud-Ready Solutions

Today's guest post comes from Robert Garity, Senior Sales Director for Gaming at Oracle Hospitality. The gaming industry has seen a remarkable transformation in the past 20 years. At Las Vegas’ major Strip resorts (those grossing more than $1 million in gaming revenue annually), 57.6% of 2017 revenue came from non-gaming activities. Some resorts are seeing as much as 65% of revenue from non-gaming sources. Today, it’s fine dining (often at restaurants of celebrity chefs), extravagant hotel accommodations, luxury retail, pampering spas, dazzling shows, and pro-designed golf courses that are the big money-makers. As these non-gaming revenue streams expand, they bring new challenges for the players in the gaming industry. What are these challenges, and how can gaming enterprises meet them by adopting new technology models? Managing Challenges In-House Can Be a Crapshoot Traditionally, casinos managed IT systems in-house, but with the growth and direction the industry has taken in recent years, that task has become increasingly complex. In-house IT teams simply can’t keep up with the increased volume and complexity of the operations. For example, large fluctuations in volume for occasions such as a Thanksgiving weekend, New Year’s Eve, or Christmas week can strain the capacity of the IT stack and slow systems down. Beyond the volume demands, staying current with system versions has become increasingly difficult and costly when managed in-house. That challenge becomes even greater when gaming enterprises can’t find and keep on-site experts—which is especially problematic for casinos in rural and remote locations. Managing the back-of-the-house infrastructure requires considerable expertise. If the enterprise has only one in-house expert and that person leaves the company or is otherwise unavailable, finding a replacement can be extremely difficult.
If all these challenges weren’t enough, gaming environments are a high-value target for payment thieves, and that includes the food and beverage and lodging areas of the business. Casinos and resorts with gaming hold massive amounts of credit card information, plus customer loyalty program information. For these reasons, they must keep payment and nonpayment customer information absolutely secure or face disastrous consequences. (Watch for a future post that addresses the data security issue for the entire hospitality market.) Gaming operations struggle to manage a large data center full of servers and technology experts. Traditionally, they’ve had separate instances of their systems at each location. Offloading to third-party data centers doesn’t solve the problems of the labor to manage the systems (database and application-specific expertise) or the security; it simply moves them to another location. Why the Gaming Industry Is Betting on the Cloud Gaming enterprises see the cloud as an avenue to gain control of the complexity of the modern gaming environment. The bottom line is that the cloud allows these enterprises to centralize infrastructure and have it managed by experts, so their IT teams can focus on managing the operations that contribute directly to creating a world-class customer experience. How is this possible? Traditional on-premises infrastructure management requires experience with the database, operating system, and applications, and can be challenging for gaming operators. Why? Because the environment often comprises commodity hardware from multiple vendors, cobbled together by a select few who have the knowledge to manage this kind of complexity.
It is especially difficult and complicated to upgrade those environments alongside all the systems with which they need to interface: casino management, hotel operations, food and beverage management, catering, vendor management, customer loyalty programs, liquor dispensing, surveillance systems, and more. All of these systems need to work together seamlessly and require testing and attention during the upgrade process—which is extremely difficult to pull off with a complex on-premises installation and without outside expertise. If, on the other hand, gaming operators move to the cloud, those integrations are much more manageable, far easier to test, and not nearly as difficult to deploy. The cloud also allows casinos to roll system upgrade costs into the monthly fees for cloud service, and to have a third-party team of experts implement upgrades for them. Casino operators are also able to bring point-of-sale (POS) and property management systems (PMS), as well as other systems, into the cloud to centralize their management—a huge advantage from a security perspective. The cloud resolves the problem of volume fluctuations as well: it allows casinos to increase capacity on an as-needed basis, and then drop back down when volume subsides, based on a monthly subscription fee. Cloud and Cloud-Ready Solutions May Be the Winning Hand With Oracle solutions in the cloud, there’s very little on-site hardware to manage and minimal database and server-level product to maintain, removing the hardest part of systems management from the IT department. In the case of Oracle Hospitality OPERA Cloud Services (lodging) and Oracle Hospitality Simphony (POS), anyone—including food and beverage managers—can have the skill set to manage these systems on-site. The IT staff is freed to focus on the workstations, kitchen display systems, third-party integrations, training on applications, and all the other pieces of an operation that enable it to deliver a customer experience that exceeds expectations.
Contrary to what was once believed, the cloud offers a more secure environment than what can be provided on-premises. Both Nevada and most Native American jurisdictions have come to the realization that casinos cannot provide a sufficiently secure on-premises environment to host their databases and applications. The cloud provides the level of data security that is non-negotiable for these operations. Many casinos have taken the first steps toward the cloud by migrating applications such as email. Now comfortable with this, they are beginning to move their POS (food and beverage and retail) and lodging systems to the cloud. Realistically, it will be some time before they move casino management systems to the cloud: many jurisdictions have regulations governing whether and how these systems, and some other applications, may be deployed in the cloud. What we’ve been doing at Oracle is helping gaming enterprises understand how the cloud benefits them, and why they need to make the move. Timing has been the biggest concern. But we are making sure they understand that the cloud is the most secure place to keep their data, and the best way to have their systems managed for fault tolerance and a high degree of uptime. Engineered Systems Are the Stepping Stone to the Cloud Large deployments may require a different approach. With some resorts managing multiple large properties in many different geographies, taking those systems to the cloud will take time. But addressing on-premises issues like IT infrastructure complexity, application reliability, database performance, and data security, while still keeping an eye on the cloud, is possible today. OPERA and Oracle Exadata Database Machine are a winning combination. A resort’s reservation system is its lifeblood; if it were to go down, guest annoyance would be the least of its worries. Entire vacations could be ruined.
Fault-tolerant design enables Exadata to deliver 99.99999% reliability for mission-critical applications like OPERA, so guests will never notice if something goes wrong on the backend. Exadata is able to deliver the performance and availability casinos and resorts need because it was co-engineered with the Oracle Database team, giving it a pretty unfair advantage. You can consolidate hodgepodge commodity systems onto a single Exadata system that was purpose-built to deliver optimal speed, performance, and security for the Oracle stack. Fewer systems to manage means fewer resource requirements, and a single-stack architecture means more streamlined support and management. Ensuring that your environment is ready to take on the cloud should be a top priority, if not the #1 priority, for the casino and gaming industry. Exadata is available in three consumption models: on-premises, in the cloud, or cloud at customer. With exact equivalents in the cloud, cloud migration is frictionless and happens on your terms. This flexibility allows customers to choose how and when they go to the cloud, because for the gaming industry it all comes down to one thing: the guest experience. And cloud and cloud-ready solutions are changing the game in the gaming industry. Learn more about how you can prepare your infrastructure for the cloud with Exadata and the entire Oracle Engineered Systems stack, systems purpose-built to maximize the performance of on-premises deployments of mission-critical applications like Oracle OPERA and Oracle Simphony. About the Author Bob leads a very successful team of sales executives at Oracle Hospitality (formerly MICROS Systems) in the gaming group, working with all of the world's leading casino resort operators. Bob's background includes extensive experience in technology and product sales, management, hospitality, and live entertainment and event production.
He was awarded the MICROS Chairman's award for excellence in enabling digital transformation for many major gaming and resort brands. He is originally from Sioux Falls (Brandon), South Dakota, and currently resides in Henderson, Nevada, with his wife Karmin. Connect with Bob on LinkedIn at https://www.linkedin.com/in/robertgarity.


Cloud Infrastructure Services

February Database IT Trends in Review

February 2018 newsworthy happenings in review: Oracle’s cloud-ready infrastructure emerges as a key element for the database workload demands of modern manufacturing; @Wikibon publishes a new TCO model addressing operational costs for Oracle workloads; your guide to data backup and recovery, and avoiding the blame-and-shame game; Cloud at Customer momentum showcased at Oracle CloudWorld New York. In Case You Missed It - Modernizing Manufacturing Manufacturers turn to cloud-ready infrastructure to tame an unruly Internet of Things. Here’s a tasty snack, based on an early industry study: ’…94% face challenges collecting and analyzing their IoT data…41% say these data challenges top their list of IoT concerns…’ Full story here. Is virtual reality in manufacturing at a tipping point? It turns out virtual reality and augmented reality (VR/AR) are actually becoming useful on the factory floor: '…More than one-third of all U.S. manufacturers either already use VR/AR or expect to do so by the end of 2018...' Read more. How does today’s technology improve just-in-time retail manufacturing? Consumers today expect immediate accessibility to goods and near-same-day delivery times. This has increased the need for manufacturers to have real-time visibility into their operations. Read the full article. Key nugget: ‘…Worthington Industries, a North American steel processor and global diversified metals manufacturer (~$3.4B annual revenue), supports 80 manufacturing facilities and 10,000 employees…Worthington needed to streamline its JIT processes…’ Results? ‘…improved forecasting accuracy by 50% and created forecasts three times faster…avoiding shortages and excess inventory across the 11 countries in which it operates…’ Here’s a deeper look at their strategy. How can manufacturers benefit while transitioning to the cloud? Alamar Foods is a master franchise operator for Domino’s Pizza, with more than 300 locations in the Middle East, North Africa, and Pakistan.
How’s their transition going? ‘…Performance of business-critical applications, such as Oracle E-Business Suite, increased by up to 30%, and providing round-the-clock availability to +350 internal users bolstered productivity…’ Here’s how. Insight to Impress Your Colleagues David Floyer, CTO and co-founder of Wikibon, quantifies the evolution of IT infrastructure management and operational costs: ‘…from a Roll Your Own #RYO model to an integrated, full-stack system where Oracle Exadata Database Machine optimizations reduce operating costs by at least 30%...running Oracle on x86 costs 53% more than Exadata…’ Here’s the model. New from Morgan Stanley Equity Research: IT Hardware Gets a Second Life—and a Double Upgrade. ‘…several catalysts are converging to give IT Hardware a second life—and drive double-digit earnings growth in 2018. For this reason, our team recently gave the IT Hardware group a double upgrade, shifting our view from cautious to attractive…' Learn more. Keeping It Real – Data Protection: Avoiding the Blame and Shame Game Seriously, what’s the point of backing up if you can’t recover? While data backup and recovery may not be the most glamorous job in an organization, have a failure when restoring critical data and you’re suddenly the center of attention—in the worst way. ‘…The optimal solution is designed with recovery in mind and has the recovery process tightly integrated with the database so that database transactions, and not just files, are being backed up…’ Read more. Before You Go It’s here! Oracle Database 18c is now available on the Oracle Cloud and Oracle Engineered Systems! Oracle Database 18c, the latest generation of the world's most popular database, is now available on Oracle Exadata and the Oracle Database Cloud. It's the first annual release in Oracle's new database software release model, and a core component of Oracle's recently announced Autonomous Database Cloud. Click here for details on OTN.
Spotlight from the recent Oracle CloudWorld New York 2018: Oracle’s Cloud at Customer offering is gaining momentum across industries, including healthcare. In case you didn’t attend, here are the details from ‘Cloud at Customer with Quest Diagnostics’ - Session BRK1124. ‘…Quest Diagnostics is the world’s leading provider of diagnostic information services, with one of the world's largest databases of clinical lab results, with insights revealing new avenues to identify and treat disease, inspire healthy behaviors, and improve health care management….’ That’s so cool. Find details on their deployment of Cloud at Customer here. Don’t Miss Future Happenings: subscribe here today!


Manufacturers Turn to Cloud-Ready Infrastructure to Tame an Unruly Internet of Things

The last thing a manufacturing IT leader needs is another issue to keep them awake at night. But for many, that’s exactly what the Internet of Things (IoT) is turning out to be. Let’s look more closely at why IoT has become a source of anxiety within so many manufacturing firms—and how engineered systems infrastructure can shift the focus back to IoT as a massive source of value for manufacturers of every size, and an essential first step toward smart manufacturing. The Deeper Concerns Behind IoT Worries Like so many manufacturing technology challenges, the issues with IoT seem pretty clear-cut at first, but a closer look reveals a more nuanced story. Three findings from a recent survey of IoT stakeholders explain, in a nutshell, why manufacturers experience IoT-related angst: 92% of businesses with IoT projects say there’s room to improve their data capture and analysis capabilities. 94% face challenges collecting and analyzing their IoT data. 41% say these data challenges top their list of IoT concerns. Why does IoT data cause so many headaches? There are two main issues in play: First, manufacturers are dealing with staggering quantities of data, flowing constantly from hundreds or even thousands of discrete sources. The sheer data volume of even a modestly sized IoT environment can swamp unprepared or underprovisioned systems. Second, the Internet of Things consists largely of devices and data that weren’t designed to play the roles they’re tasked with playing today. Manufacturing systems and other devices have long been designed to generate data used for management, monitoring, and maintenance tasks. These capabilities are the foundation for what is called operations technology (OT): a class of systems designed to monitor physical devices, processes, and events within a manufacturing or other business environment.
Many of the OT systems in use today, however, have pre-internet origins—a time when storage, computing, and other resources were relatively scarce and very expensive, and when closed, proprietary standards and protocols were the norm. These largely on-premises OT systems generate data that tends to live in silos, so it’s isolated and difficult to access, and even more difficult to integrate effectively. Turning IoT Challenges into Opportunity It’s not all bad news: manufacturers do have easy and economical access to technology that cuts IoT challenges down to size. The key is the emergence of engineered systems: a stack of hardware and software infrastructure that is architected, integrated, tested, and optimized to power business applications, technologies, and decisions. For a manufacturer, an engineered system combines the cost and performance advantages of modern, commodity hardware with cutting-edge software capabilities, advances in integration and process automation, and a simple, single-vendor approach to make it all work at peak performance. Let’s consider some key points in a typical IoT scenario where Oracle Engineered Systems in particular can make a difference for a manufacturer: Moving IoT data out of silos and into applications: Oracle Internet of Things (IoT) Cloud Service handles the “dirty work” in a typical IoT environment. This cloud-based, platform-as-a-service (PaaS) offering gives manufacturers a simple and reliable way to connect IoT devices, analyze the resulting data flows in real time, and integrate data with enterprise applications, web services, and other Oracle cloud services. Setting up IoT for success in business-critical applications: Some of the most valuable uses for IoT data involve manufacturing process-improvement gains: more predictable yields, higher quality, reduced downtime, improved equipment reliability, and the like.
Of course, you need the right analytical insights to achieve these gains, but you also need the ability to keep these analytical processes running with the kind of consistency and reliability one expects to see in business-critical applications. Oracle Engineered Systems products such as the Big Data Appliance, along with key big data analytics offerings, allow manufacturers to achieve this level of performance and analytical insight. These combined solutions help manufacturers aggregate massive quantities of data from IoT environments that may encompass thousands of devices with multiple data sources, and billions of discrete data points at any given time. The Big Data Appliance delivers the infrastructure layer to help acquire, aggregate, store, and process data of virtually any volume and any type in an open Hadoop environment. These capabilities, in turn, give the big data analytics offerings a solid, scalable, and reliable foundation for delivering business-critical analytics solutions. IoT Impact: Bigger Benefits for More Manufacturing Applications When used effectively, IoT data and analytical insights supported by powerful, cloud-ready infrastructure can make whole classes of applications far more valuable: Just-in-time (JIT) inventory management applications that combine operational data with internal and external supply chain data sources, giving manufacturers near real-time visibility into current and projected inventory levels. Predictive maintenance applications that look at a device’s historical usage levels, breakdown and repair data, and real-time operational monitoring data. These support the ability to schedule the right maintenance tasks, on the right schedule, and to keep downtime, major repairs, and unplanned equipment replacement to a minimum.
New or upgraded manufacturing process automation capabilities that integrate IoT data and business applications, enabling manufacturers to automate a bigger share of their production activities without sacrificing efficiency or creating safety or product-quality risks. OT Isn’t Going Anywhere OT is still valuable, and indeed an essential tool for maintaining efficient and productive on-premises manufacturing operations. The goal now is to maximize the value of OT by placing it within an integrated set of capabilities—all of which call upon IoT data, analytics applications, and streamlined infrastructure. Manufacturers are now challenged with integrating these applications with OT capabilities and supporting them with modern infrastructure—all while operating in a high-pressure, high-performance environment. Oracle Engineered Systems overcome this challenge with “co-engineered” infrastructure products that are optimized to work with every layer, up to and including the application layer, to improve performance. This approach supports scalability and can be tailored to work with a range of deployment models, from traditional on-premises data centers to public-cloud configurations. Turning IoT Data into Practical Analytical Insights This level of “performance under pressure” is very common for organizations that want to get value from their IoT environments. A recent example, involving the semiconductor division of a multinational electronics manufacturer, illustrates the point. The key challenge for this firm, which maintained R&D centers in the United States and Asia, was a statistical analysis process that simply wasn’t up to the task of getting useful insights from a massive set of IoT resources: 500,000 sensors and 3.5 billion data points spread across facilities on two continents.
The firm was determined to upgrade its IoT data collection and analysis capabilities; to gain the right insights to improve product quality and equipment performance; and ultimately to boost manufacturing yields—potentially a major advantage in a highly competitive industry. An Oracle Engineered System for big data analytics turned out to be an ideal fit for this firm’s IoT needs. The engineered systems approach didn’t just stand up to the massive data volumes involved; it actually enabled near-real-time data analysis that could identify the root cause of equipment failures as they happened. The Oracle solution also integrated the firm’s IoT data streams with analytics tools that revealed previously hidden patterns and trends and helped to predict the results of manufacturing process changes. By capturing and unlocking the insights within its IoT data, the firm achieved its goal of higher manufacturing yields. In the process, it identified some important new methods to enhance product quality, even as it used efficiency gains to reduce operating costs and achieved incremental sales and revenue gains. Airbus Stays Ahead of Flight-Test Data Challenges In many cases, as the previous example suggests, it’s not enough simply to grind through these types of large-scale analytical tasks. Manufacturers are engaged in a perpetual race against the clock; they need solutions that keep them ahead of competitors and that avoid creating chokepoints in existing manufacturing processes. Another case history, this time involving aircraft manufacturer Airbus, drives home the value of using the engineered systems approach to solve analytical problems where time is money—and where even minor performance hiccups can impose unacceptable delays. As Airbus ramps up production—it expects to produce 30,000 new planes over the next two decades—it must also scale and streamline its flight-test processes.
Today, a typical test flight lands with about 2TB of data, providing a source of potentially critical insights into aircraft performance, efficiency, and flight safety. Airbus uses the Oracle NoSQL Database running on the Big Data Appliance to ingest this test data, store and manage it, and make it accessible on demand to Airbus analyst teams. The Big Data Appliance gives Airbus a robust infrastructure that moves test data exactly where and when it needs to go—allowing the company to shave 30% off its average testing time even as it continues to scale its manufacturing and flight-testing processes. Could Airbus build its own big data infrastructure solutions? Of course it could. But the Airbus management team knows its capabilities are best applied where they are most valuable: testing and improving its aircraft systems, not building big data infrastructure. A Better Way to Benefit from IoT Insights We all know just how fast technology innovation moves today. Most of us are also familiar—typically from first-hand experience—with the pain that often results when amazing new capabilities run up against legacy systems and data. It’s a dilemma that is on full display when manufacturers see the potential within their IoT data but experience the realities of dealing with legacy OT environments. It doesn’t have to be this way. Engineered systems, deployed in ways like the ones I discussed here, give manufacturers a simple and affordable way to cut through the confusion and complexity, and to turn IoT data into revenue-impacting insights.
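To make the "near-real-time root-cause" idea above a bit more concrete, here is a minimal, illustrative sketch of the kind of logic such a pipeline might apply to each sensor stream: flag readings that deviate sharply from a rolling baseline. The class name, window size, and threshold are hypothetical choices for illustration only; this is not an Oracle product API, just a generic rolling z-score check.

```python
# Illustrative sketch only: a rolling z-score check of the kind a
# near-real-time equipment-monitoring pipeline might run per sensor.
# Names and thresholds are hypothetical, not any Oracle API.
from collections import deque
import math

class RollingAnomalyDetector:
    """Flags sensor readings that deviate sharply from a rolling baseline."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        is_anomaly = False
        if len(self.window) >= 10:  # need a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            is_anomaly = std > 0 and abs(value - mean) / std > self.threshold
        self.window.append(value)  # the reading joins the rolling baseline
        return is_anomaly

# Usage: feed a stream of steady temperature readings, then a sudden spike.
detector = RollingAnomalyDetector(window=50, threshold=3.0)
readings = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
flags = [detector.observe(r) for r in readings]  # only the spike is flagged
```

In a real deployment this per-sensor check would run across hundreds of thousands of streams, which is exactly the scale argument for purpose-built infrastructure rather than the algorithm itself being hard.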


Virtual Reality in Manufacturing: Technology at a Tipping Point

Virtual reality and augmented reality (VR/AR) technologies are getting a lot more attention lately. Even as VR/AR moves quickly into the consumer mainstream, however, it’s clear that some manufacturers still view these technologies as too “out there” to be useful on the factory floor. Some manufacturers’ perceptions of VR/AR aren’t keeping pace with real-world impacts. More than one-third of all U.S. manufacturers either already use VR/AR or expect to do so by the end of 2018, according to PwC research. In contrast, another one-third of manufacturers don’t see any real value or use for VR/AR and have no plans to implement applications based on the technology. What’s even more interesting is the variety of VR/AR applications already in use. While the same research found that manufacturers most commonly use VR/AR to support product design and development as well as safety and manufacturing skills training, a significant number of VR/AR users reported applications such as virtual assembly, process-design improvement, maintenance and repair tasks, data and information access, remote collaboration, and supply-chain functions. We’ve seen this process play out before with big data and the cloud. Many manufacturers found practical and profitable uses for big data and for cloud-based applications even as others rejected them as too immature, expensive, and impractical for everyday use. We may be approaching the same tipping point with VR/AR technology. As the early adopters’ gains get too big to ignore, the fence-sitters will fall in behind them—and even today’s VR skeptics won’t be very far behind. At the same time, however, VR/AR applications can create some unique challenges for existing IT systems and environments. How a manufacturer addresses these IT challenges can have a major impact on how quickly and how much it benefits from virtual reality investments.  
Virtual Reality Makes Its Move onto the Manufacturing Factory Floor VR/AR plays a role in a wide range of manufacturing applications. Product design and prototyping was one of the first areas where VR infiltrated the manufacturing industry. PwC’s research finds that, among manufacturers using VR/AR, 39% use it for product design and development—the most common application for the technology. Safety and training is another area where VR/AR established an early presence. According to PwC, about 28% of manufacturers today use VR/AR safety and training applications, many of which support realistic and responsive training scenarios that would be too costly or too risky to conduct in a physical environment. More recently, AR/VR technology has proven itself in other areas where it has a direct, real-time impact on manufacturing operations. By setting up a virtual production line, for example, engineers can perform and apply the results of time and motion studies—including real walking, maneuvering on or around equipment, reaching, equipment handling, and other elements that competing simulation methods struggle to replicate in any meaningful way. Today, more manufacturers are turning to AR solutions that use glasses or a heads-up display to overlay up-to-date assembly instructions, visual or video examples, and other resources. These solutions give workers hands-free, voice-controlled access to information while they stay on-station with their hands on the task—a combination with huge implications for assembly-line uptime and productivity. Many of the same AR innovations can also make life easier for maintenance personnel—giving them on-the-spot access to technical manuals, maintenance records, service requests, and other relevant data. Manufacturers Face Real IT Challenges from Virtual Reality Tools VR/AR applications pose multiple challenges for a typical IT organization.
To implement these applications effectively, you’ll want to consider three areas where many manufacturing firms run into infrastructure-related problems, and consider how to address similar issues within your own organization: 1. Capacity. Most manufacturing firms probably have enough aggregate unused compute capacity to support a typical set of VR/AR applications. The problem is that many of these applications—especially those creating immersive experiences within highly complex environments—may involve brief but very intensive processing loads at levels that a manufacturer hasn’t seen before and is ill-prepared to handle. 2. Storage. Many VR/AR applications are data-intensive in multiple ways. Some applications may integrate real-time data flows from a factory floor or a warehouse. Others might require access to large data stores that allow them to recreate detailed and immersive virtual settings. Taken together, a typical set of VR/AR applications is likely to stretch—and often to exceed—a manufacturer’s fast storage networking and related capabilities. 3. Networking. Running VR/AR applications in on-premises environments is often the best way—and in many cases the only way—to achieve acceptable performance metrics, such as throughput and latency. Even then, a manufacturer’s ability to support VR/AR applications often depends on big-picture IT architecture issues, including where and how applications are positioned and integrated within a given set of processing, storage, and networking resources. Engineered Systems: Real Benefits for Virtual Tools A reliable and affordable solution to these IT challenges is Oracle Engineered Systems, because these purpose-built systems integrate, optimize, and deploy a specific set of hardware, software, storage, and networking capabilities for a clearly defined range of uses.
One of the most common implementation scenarios for Engineered Systems involves pairing the Oracle Big Data Appliance (BDA) with Oracle Database—a direct integration accomplished using Oracle’s Big Data Connectors, which efficiently load data from Hadoop (BDA) into Oracle Database. As it turns out, this combination is ideal for illustrating how an Oracle Engineered System supports VR/AR applications in a manufacturing setting. Big Data: Fueling Success with VR/AR Applications VR/AR offerings may not be the first things that come to mind when you think about data-driven applications. Peel back the more visual and immersive elements of a typical VR/AR application, however, and you’ll find that these are, indeed, intensely data-driven and data-dependent manufacturing tools. Let’s look at a specific example: a warehouse application that uses AR glasses to give workers the equivalent of X-ray vision. The AR application can display, in real time, information about a container’s contents; its origin and destination; special handling or hazardous materials alerts; and other manufacturing, supply chain, or logistics insights. The AR tool’s data display might be a user-experience masterpiece: clear, concise, and always relevant to the task at hand. Look under the hood, however, and you’ll see massive volumes of data streaming out of a firm’s ecommerce, ERP, shipping, and other enterprise applications. Within the warehouse itself, hundreds or even thousands of sensors—many of them designed to feed proprietary monitoring tools—pump thousands of additional data points per second into this data reservoir. Manufacturing training and safety applications offer another great example of how big data—usually defined as high-volume, high-velocity, heterogeneous streams of structured and unstructured data—fuels many VR/AR applications.
A realistic production-line simulator might incorporate real-time data streams from production equipment, performance monitoring systems, environmental sensors, and other sources. Some of this data may be structured, but a lot of it may arrive as continuous, high-volume streams of unstructured monitoring data. The Oracle Big Data Appliance (BDA) does the things that traditional database systems generally cannot do: ingest and process big data streams; store them as-is in either structured or unstructured form; and make filtered subsets of this data reservoir available on demand to downstream database systems, analytics tools, and other applications. Manufacturers using BDA can ingest high-volume big data workloads—many of which would overwhelm non-optimized, generic systems—while still relying on a relatively compact and efficient IT environment. In this case, our Engineered System, Oracle Big Data Appliance, owns the infrastructure layer of a two-tier model, where it acquires, stores, processes, and analyzes the flowing data. Oracle Database occupies the second tier, where it provides the compute and performance required to power this data processing and potentially run data-driven applications. Engineered Systems and Virtual Reality: A Formula for Manufacturing Success Advances in VR/AR will likely soon drive manufacturing technology ahead another big step. If VR/AR technologies continue to follow a trajectory similar to big data and the cloud, we’re going to notice a remarkable evolution in attitudes: manufacturers that today question whether VR/AR has any meaningful role to play will soon be rushing to catch up. At the same time, we’re going to hear concerns about just how much these applications demand from IT environments that already run business-critical applications and data systems.
These issues point to another source of value associated with Engineered Systems: their ability to offer a cost-effective, largely self-contained solution to problems that might otherwise take a huge toll in cost, complexity, and risk. By creating Engineered Systems that are purpose-built to work seamlessly together, and by architecting these systems around sets of pre-configured components optimized to work with one another, we double down on this approach and amplify the benefits these systems provide to manufacturers.

For manufacturers, all of these capabilities boil down to a simple idea: Engineered Systems allow manufacturers to capitalize on the opportunities of VR/AR while largely avoiding the risks and costs. If you’re looking for a way to build a lasting competitive edge, that is a pretty good combination.
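To make the first tier’s filtering role concrete, here is a minimal sketch in Python. The sensor schema, field names, and threshold are illustrative assumptions, not Oracle APIs; in a real deployment the BDA’s Hadoop cluster and Oracle’s Big Data Connectors would handle this at far greater scale.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class SensorReading:
    # Hypothetical raw reading from a shop-floor sensor.
    sensor_id: str
    metric: str
    value: float

def filter_for_downstream(stream: Iterable[SensorReading],
                          metric: str,
                          threshold: float) -> Iterator[dict]:
    """Keep only the readings a downstream database cares about,
    reshaped into structured rows."""
    for r in stream:
        if r.metric == metric and r.value >= threshold:
            yield {"sensor": r.sensor_id, "metric": r.metric, "value": r.value}

# A tiny stand-in for a high-volume stream.
raw = [SensorReading("line-1", "vibration", 0.2),
       SensorReading("line-2", "vibration", 0.9),
       SensorReading("line-1", "temp", 71.0)]
rows = list(filter_for_downstream(raw, "vibration", 0.5))
```

The point of the two-tier split is exactly this reduction: the first tier absorbs everything, and only the structured, relevant subset reaches the database tier.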



Digital Transformation and the Future of Manufacturing - How Today’s Technology Drives Improved Just-In-Time Manufacturing

Unified commerce (in which multiple retail channels are connected to one real-time version of information), mergers and acquisitions, and new retail formats underlie a continuing retail industry transformation. It’s not online versus brick and mortar; it’s online and brick and mortar. It’s not about one channel versus the other, but serving customers across all channels seamlessly.

One result of the transformation happening in the retail sector is pressure on manufacturers to deliver faster, better, and less expensively. Consumers today expect immediate accessibility to goods and near-same-day delivery times. This has increased the need for manufacturers to have real-time visibility into their operations—to respond quickly to demand fluctuations. Factories need to know in real time if they have sufficient raw material, how much finished goods inventory they have, and the status of their products throughout the supply chain. These data points can facilitate just-in-time (JIT) shipments, minimizing excess inventory while increasing customer satisfaction.

A brighter economic climate and new technology are creating a manufacturing renaissance

The U.S. is in the midst of a manufacturing rebound, a turnaround that began with increased optimism after the 2016 elections and continued through 2017. For the first time in decades, more manufacturing jobs were created than left the country, according to data compiled by the Reshoring Initiative.

While automation is responsible for job losses, it’s this same innovation that can drive more demand, which means more manufacturing and more jobs. As Bob Doyle of the Association for Advancing Automation says, automation may eliminate the “three Ds”: dull, dirty, or dangerous jobs. The jobs being created—and that will emerge in the future—will require higher-level skills that can make use of new technologies.
New technology can help manufacturers achieve unprecedented efficiency and productivity to respond to this new reality. In a recent IndustryWeek article, “Four Digital Trends Manufacturers Should Watch for in 2018,” Stephen Gold cites the Internet of Things (IoT) as one of four digital trends facilitating a manufacturing renaissance. The specific areas where Gold predicts IoT will have a major impact include predictive maintenance, self-optimizing production, and automated inventory management.

To take advantage of these transformative technologies, manufacturers need a technology infrastructure that can support more efficient JIT practices. The principle behind JIT manufacturing and JIT inventory practices—producing what customers want, when they want it, with minimal waste—became popular back in the mid-1970s. Yet most manufacturers employing JIT practices are still running on generic infrastructure that hinders significant process improvement.

How Oracle Engineered Systems improve JIT processes

Oracle Engineered Systems hold the key to breaking free of the constraints of generic infrastructure. Purpose-built to maximize performance at every layer of the stack, Engineered Systems deliver on the requirements of JIT processes and allow manufacturers to run leaner and more profitably, and to take advantage of emerging technologies. How?

Engineered Systems run faster, better, and more reliably because they are co-engineered at the source code level solely to run Oracle Database workloads. Better Oracle Database performance results in faster application response times, faster report generation, and faster analytics. And that contributes to higher employee productivity, real-time inventory insight, and higher customer satisfaction.
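The JIT principle described above is easy to state in code. The classic textbook reorder-point calculation ties replenishment directly to real-time demand and inventory data; this generic Python sketch is not an Oracle feature, and the figures are hypothetical.

```python
def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Classic JIT-style reorder point: replenish when on-hand inventory
    falls to the expected demand during supplier lead time plus a buffer."""
    return daily_demand * lead_time_days + safety_stock

def needs_replenishment(on_hand: float, daily_demand: float,
                        lead_time_days: float, safety_stock: float) -> bool:
    # True when current inventory has fallen to (or below) the reorder point.
    return on_hand <= reorder_point(daily_demand, lead_time_days, safety_stock)

# Hypothetical example: 100 units/day demand, 3-day lead time, 50-unit buffer.
trigger = needs_replenishment(on_hand=300, daily_demand=100,
                              lead_time_days=3, safety_stock=50)
```

With real-time inventory feeds, a check like this can run continuously instead of waiting on overnight batch reports—which is exactly the gap faster infrastructure closes.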
Lion Corporation was able to provide its executives with accurate business-decision data, such as toothpaste sales, by accelerating query performance by up to four times and processing data updates two times faster with Oracle Exadata.

Engineered Systems can also produce supply chain management reports—batch processes—faster, for real-time inventory insight. Without having to wait hours, or even a day, to run reports, you can avoid materials shortages that lead to production delays, missed shipments, and unhappy customers, or, at the other extreme, excess inventory that leads to waste and added costs.

Let’s look at one manufacturer that turned to Engineered Systems to improve its JIT practices.

Worthington has the mettle to compete

Worthington Industries is the premier North American steel processor and a global leader in diversified metals manufacturing, with nearly $3.4 billion in annual revenue. With 80 manufacturing facilities in 11 countries and 10,000 employees, Worthington was looking for improved infrastructure to streamline its JIT processes and support an aggressive global growth strategy. To accomplish this, the Columbus, Ohio-based company had four goals.

#1: Enterprise-wide standardization

First, Worthington had to standardize its manufacturing and work order processes so that all its business lines and plants were running the same way. Oracle Engineered Systems made enterprise-wide standardization and process streamlining possible, improving system scalability and enabling business expansion.

#2: 24/7 operations

Next, it needed to deploy an integrated enterprise resource planning (ERP) and supply chain management (SCM) system on a managed cloud services platform. This would allow the company to take advantage of global support and ensure maximum system availability to run its manufacturing plants around the clock with minimal unplanned downtime. Worthington implemented Oracle Advanced Supply Chain Planning, running on Oracle Exadata.
This improved forecasting accuracy by 50% and enabled the company to create forecasts three times faster. With better forecasting, the enterprise could better manage materials inventories to avoid shortages and excess inventory across the 11 countries in which it operates.

#3: Faster database performance

Third, an ERP system update was needed to improve performance. Using Oracle Exalogic to run Oracle E-Business Suite and Oracle Business Intelligence Standard Edition, data warehouse loads now run four times faster thanks to improved system performance and availability.

#4: Financial management consolidation

Finally, Worthington wanted to consolidate its financial management systems and processes across multiple manufacturing business lines. By implementing Oracle E-Business Suite release 12—which includes Oracle E-Business Suite Financials—and Managed Cloud Services, availability was raised to 99.8%, resulting in less unplanned downtime, and system scalability was improved.

Manufacturers are finally reaping the full benefits of JIT processes

With the pressure to run even leaner and more profitably, manufacturers are looking for ways to improve the efficiency of JIT manufacturing. Generic infrastructure may be causing performance issues that limit availability, speed, and access to analytics. Engineered Systems such as Oracle Exadata help manufacturers take JIT to the next level, with faster and more accurate forecasting, better communications at every step of the supply chain, and higher availability.

With a robust infrastructure, manufacturers are set to take advantage of the latest technologies that can drive out inefficiencies and respond to the new reality of manufacturing, with its demand for leaner, faster, and more reliable processes. The time is now for Engineered Systems-driven just-in-time manufacturing.
Discover the unique ways that companies are leveraging Oracle Exadata to achieve remarkable benefits for their businesses. Hear their stories in the Exadata Knowledge Zone.


What’s the Point of Backing Up If You Can’t Recover?

While backup and recovery may not be the most glamorous job in an organization, have a failure when restoring critical data and you’re suddenly the center of attention—in the worst way. We’ve invited an expert on the topic, Donna Cooksey, Solution Specialist, to give us the scoop on the state of backup and recovery, and tips on how to avoid unwanted attention and perhaps finally get some credit for your backup and recovery efforts.

Donna, would you please tell us briefly what your role is at Oracle?

I’ve been with Oracle since 2003. As a product manager in Oracle’s high availability development organization for over 12 years, I focused on backup and recovery products, including Oracle’s Zero Data Loss Recovery Appliance (ZDLRA), Recovery Manager (RMAN), and Oracle Secure Backup (OSB). Now, as a solutions specialist in the Cloud Business Group, I work primarily on the Recovery Appliance with account teams and customers to optimize data protection for the Oracle Database. One of the biggest things for me is that I always look at technology from a customer perspective, because it’s about what problem you’re solving versus bits and bytes and buffer overflows.

What is the situation that enterprises face today in terms of data backup and recovery?

We’re gathering and using data 24x7, so we cannot afford to have production slowdowns resulting from backup and recovery. And protecting data has gotten more complex as compliance regulations continue to grow and security breaches become more common. Data protection today has seismic consequences for businesses, as outages are often made public, risking the loss of reputation, revenues, customers, and/or fines. When you consider that business- and mission-critical data resides in transactional databases, protecting this data is extremely important.

What’s wrong with the way businesses back up their databases?

Most traditional backup products treat database backups the same as flat file backups.
Flat file backups simply need to be restored, and you are good to go. If you shut down your database to perform a “cold” backup, it can simply be restored and will be in a consistent state. However, I haven’t spoken to a customer in years that shuts down the database to make a backup. Most database backups are performed while online, meaning you’re actively processing incoming transactions during the backup. If you have a failure, you need to not only restore your files but also recover your database up to the point at which you had the failure. You don’t want to lose any transactions that occurred between the most recent backup and a failure. You want zero data loss. Unfortunately, most backup products don’t address this database situation. Also, many customers assume that if a backup was performed, the data can be successfully restored. But there are many moving parts and vendors in the lifecycle of a database backup, so a flipped bit or failed disk can easily corrupt backups.

How should businesses back up their databases?

The optimal solution is designed with recovery in mind and has the recovery process tightly integrated with the database, so that database transactions, and not just files, are being backed up. Oracle Zero Data Loss Recovery Appliance, for example, works with Exadata Database Machine to protect ongoing transactions. The standby database stays current with the primary database. You also get a sub-second recovery point objective (RPO). Transactions are sent immediately, even before they’re written to disk. The Recovery Appliance offloads most backup processing from the database servers, freeing up all those CPUs for your applications. Everything runs faster as a result. Backup and restore are about equally fast; in fact, sometimes restore is even faster than backup. Most important, the Recovery Appliance provides real-time recovery status. No more having to guess or assume the validity of your database backups.
The Recovery Appliance has extensive monitoring, alerting, and reporting as well. For example, if data loss exposure exceeds a user-defined threshold, an alert is triggered. No longer do you need to babysit backups.

Exadata and ZDLRA are based on the Maximum Availability Architecture (MAA) and the principle of a “no single point of failure” design. MAA is a best-practice blueprint for achieving optimal high availability at the lowest cost and complexity. It enables continuous availability and zero-data-loss protection. We have a white paper that goes through MAA reference architectures and provides the standardization requirements to ensure high availability and data protection for enterprises of all sizes and lines of business. These reference architectures are based on a common platform that can be deployed on-premises or in the cloud.

Can you share a success story from one of our customers?

SK hynix, a leading semiconductor manufacturer based in Korea, presented at Oracle OpenWorld about its Exadata and Recovery Appliance successes. Its enterprise has 40 Exadata machines across four business areas. Business-critical, highly transactional applications on its database servers—which required high availability—had been taking a performance hit because the legacy backup system consumed so many CPU cycles. SK hynix switched to Oracle’s Recovery Appliance to offload most of the backup processing. By sending only changed blocks, it was able to reduce any impact on network performance. Most importantly, the Maximum Availability Architecture enabled the organization to meet its SLAs for availability. Managing a bunch of RMAN scripts with the legacy system, along with periodically validating backups, was cumbersome. In contrast, the Recovery Appliance gives SK hynix simplified, intelligent, and automated storage management.

What’s the takeaway for businesses from all this?
You need all the components of your IT stack working together to eliminate complexity, optimize productivity, and produce the required results. There’s no point in backing up your data if you can’t recover it, and database recovery adds another dimension to data protection strategies. You need a recovery solution that’s designed to work with your database. Oracle Engineered Systems, such as Exadata and Recovery Appliance, are designed precisely to accomplish this—from production to backup and recovery.   
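The data-loss-exposure alerting Cooksey mentions can be illustrated with a short sketch. This is a hypothetical Python model of the idea (exposure measured as time since the last protected transaction), not the Recovery Appliance’s actual implementation; the threshold and timestamps are made up.

```python
from datetime import datetime, timedelta

def data_loss_exposure(last_protected: datetime, now: datetime) -> timedelta:
    """How much transaction history would be lost if the database
    failed right now: time since the last protected transaction."""
    return now - last_protected

def should_alert(last_protected: datetime, now: datetime,
                 threshold: timedelta) -> bool:
    # Trigger an alert when exposure exceeds the user-defined threshold.
    return data_loss_exposure(last_protected, now) > threshold

# Hypothetical scenario: the last protected transaction is an hour old,
# and the business tolerates at most 15 minutes of exposure.
now = datetime(2018, 1, 1, 12, 0, 0)
last = datetime(2018, 1, 1, 11, 0, 0)
alert = should_alert(last, now, timedelta(minutes=15))
```

A near-zero RPO corresponds to keeping this exposure window at sub-second scale, which is why shipping transactions continuously matters more than backup frequency alone.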



Manufacturers Benefit While Transitioning to the Cloud

Manufacturers are always looking for ways to improve the performance, efficiency, and availability of their operations and business processes, so they can accelerate innovation, ramp up product development, and increase product quality without increasing IT overhead. To achieve these goals, manufacturers are moving to cloud technology. During the transition, companies are reaping the benefits of cloud-ready or cloud-based solutions alongside on-premises systems.

But some manufacturers aren’t quite ready to make the move to the public cloud, and need on-premises infrastructure and applications that are built to be compatible with cloud-based solutions, so they can easily migrate to the cloud when the time is right. Other manufacturers are required by regulations to always keep certain data on-premises, so a combined cloud-based and on-premises solution that is engineered to work together seamlessly is their best approach.

The Do-It-Yourself (DIY) Approach to Digital Transformation

Traditionally, industrial companies source solutions from multiple vendors and craft a DIY cloud infrastructure. However, this approach tends to be expensive, time-consuming, and difficult to upgrade, especially when third-party applications are upgraded and break the integration with the DIY solutions. DIY solutions are also vulnerable to staff turnover, which can put mission-critical systems at risk. If the engineers who built the DIY solutions leave the organization, there is a loss of institutional knowledge. Recruiting new engineers to work on a DIY system is hard because they won’t have any experience with that system.

The Alternative to DIY: Choose the Right Partner

A faster and easier option is working with one partner that offers integrated cloud technology that works with your existing on-premises systems. For example, Oracle Engineered Systems provide a cloud-ready, scalable infrastructure that makes it quick and easy to migrate from your existing systems to the cloud.
Oracle’s on-premises infrastructure is identical to the one used with Oracle Cloud solutions, so you get to enjoy benefits such as increased performance, efficiency, and uptime—along with advanced analytics and visibility—while you’re preparing to migrate. Also, because Engineered Systems are co-engineered with Oracle software, they speed up performance beyond what a generic infrastructure can.

At the heart of an Engineered System for industrial companies is Oracle Database Appliance, a package of fully integrated hardware and software optimized for peak performance with Oracle Database and other applications. Automation simplifies installation—the appliance can be up and running in minutes, allowing manufacturers to run databases on a single, centrally managed appliance from one vendor. A single-vendor model streamlines IT support and reduces the amount of time spent managing vendors and getting their solutions to work together, giving your IT team more time to create added value for your company. It can also potentially cut software-licensing costs and reduce the number of databases, thus decreasing support requirements.

Let’s take a look at how two industrial companies benefited by replacing legacy solutions with Oracle Engineered Systems.

Food Company Gets Faster Results

Alamar Foods is a master franchise operator for Domino’s Pizza, with more than 300 locations in the Middle East, North Africa, and Pakistan. It also owns Premier Foods—a meat-processing factory—and several Dunkin’ Donuts franchise locations in Egypt. Its aging system didn’t have the performance and availability the company needed. The internal IT staff installed Oracle Database Appliance and then had an Oracle partner migrate the data. Performance of business-critical applications, such as Oracle E-Business Suite, increased by up to 30%, and their round-the-clock availability to more than 350 internal users bolstered productivity.
The time to finish the monthly close for the 200 stores in Saudi Arabia was reduced from 13 days to 9. The company also gained the ability to initiate new projects, such as data warehousing, human resources, and business intelligence initiatives, that could now be supported by its Oracle Engineered Systems. Infrastructure costs were reduced by using Oracle Database Appliance’s fully integrated software, servers, storage, and networking, so there was no need to buy separate hardware and software components and deal with incompatibility issues.

Company Nixes Downtime

Al Yusur Industrial Contracting Company (AYTB) offers services in the fields of construction and fabrication, operation and maintenance, industrial cleaning, shutdowns and turnarounds, and housing and catering. It is also a major provider of industrial, technical, and logistical support services for businesses in the oil and gas, chemical, petrochemical, power, desalination, and other sectors throughout Saudi Arabia and Qatar.

With its old system, the company was experiencing excessive downtime. A regional power outage could cause up to 12 hours of system downtime and would require more resources for follow-up investigations to determine the extent of the damage. Performance of its database and applications was subpar. AYTB implemented Oracle Database Appliance and Oracle E-Business Suite with the assistance of two Oracle partners. It also used Oracle Premier Support to expedite the project and provide ongoing support. Oracle Active Data Guard provided infrastructure stability and data security. As a result, database and application performance improved by more than 90%. Maximum availability was provided to all users, with zero planned or unplanned downtime events since implementation.

Manufacturers Benefit While Transitioning to the Cloud

Moving to the cloud is not an all-or-nothing choice.
If you’re not ready to move or can’t move everything due to regulations, combining cloud-ready technology with existing systems offers a good way to transition on your own timeline. Oracle Engineered Systems along with Oracle Database Appliance make it easy and fast to switch to the cloud when you’re ready. Until then, you gain the benefits of cloud technology, including increased speed, efficiency, and uptime for your operations and business processes. Interested in learning more? Visit our website for information on Oracle Database Appliance and Oracle Engineered Systems.



4 Steps to Enhance Financial Data Security in Your Organization

Did you know... Financial Services Organizations Are Now the #1 Target for Cyberattackers?

The WannaCry ransomware attack that broke out May 12 hit hundreds of thousands of Windows XP computers and tens of thousands of organizations spanning more than 150 countries. It provided a wake-up call about the vulnerability of organizations and the potential worldwide scope of cyberattacks. Beyond the regulatory and reputation nightmare, the global cost of cybercrime is staggering: it’s reported that it will reach $2 trillion within two years. In fact, because of the compliance and regulatory requirements of both the financial services and healthcare industries, the cost per breach is expected to be higher than for almost any other industry group.

Even more frightening is the fact that financial services is a top target for cyberattackers. The recent Verizon security report shows that almost one-quarter of data breaches affect financial organizations, with 88% of these occurring through web application attacks, denial of service attacks, and payment card skimmers.

You Need to Build Resilience Into Your Infrastructure

Recently, Accenture worked jointly with Oracle to provide a roadmap for strengthening business resilience and ensuring business continuity in the face of these ever-greater threats. A key point is that you can mitigate data security risk better by building in security that prevents data breaches in the first place, rather than reacting to an event. Even if you can repel an attack, system performance will degrade while under attack—slowing operations and reducing staff productivity. Network security alone simply won’t do the job. You need to build in security throughout your infrastructure, right to the core.

Are you wondering if your financial services business adequately protects sensitive customer data? Here are some questions you need to ask:

Do your IT policies adhere to industry standards with regard to database security?
What measures are in place to protect against unauthorized access or misuse by privileged users?

What measures are in place to protect against data corruption and unrecoverable or intentional damage to data?

To ensure Oracle Database security, Accenture takes a full-lifecycle approach based on 4 pillars. Let’s look at each briefly.

Data Security Pillar #1: Discovery

In the discovery phase of the process, you begin by getting an assessment of where your systems are today. This includes an audit of your database architecture and past events; analysis and confirmation of vulnerabilities in critical areas like user access, application security, and patch validations; and then a summary of the findings with recommendations.

Data Security Pillar #2: Engineering a Solution

The next step is to engineer a solution based on your specific business requirements. This should include a security and compliance model; an intrusion detection system; integration of third-party applications; and Oracle Advanced Security solutions that include encryption, masking and redaction, and compliant identity management. Because Oracle Engineered Systems are completely integrated and optimized through every layer of the stack, data security is already built in.

Data Security Pillar #3: Implementation

Once the solution is developed, it’s time to implement it. But implementing a new solution is not only tedious, it can introduce its own security risks as well. Sourcing individual components of a new database solution and working with the networking and storage teams to install, configure, and patch can be overwhelmingly complex and time-consuming. Not to mention that taking systems offline for a long period of time could leave you unnecessarily exposed. Oracle Exadata, Oracle's flagship Engineered System, is an all-in-one database platform.
Because servers, networking, and storage are all pre-configured, pre-tuned, and ready to deploy, you can deploy in a matter of days versus weeks or even months. Because of the massive consolidation ratio, applying a pre-tested quarterly patch to a few Exadata machines is faster and much easier than having to dedicate resources to patch several disparate machines and ensure compatibility after each update. You're also reducing your attack surface. A smooth implementation is a great indicator of how your security measures will continue to go in the future.

Data Security Pillar #4: Education

Continued training, workshops, and educational materials can help ensure data security doesn’t stop once systems and processes are implemented. Building resilience into your organization extends much further than just hardware. Teaching employees, new and old, how to protect their passwords, avoid phishing scams, and develop good workplace habits, such as locking your computer when you step away, are all important measures in ensuring data security across the entire organization.

How a Major Bank Realized Better Data Security and Performance With Engineered Systems

Oracle Engineered Systems are co-engineered with the Oracle Database team to deliver unique security enhancements and stronger end-to-end security for the entire stack. For Chinae Savings Bank of South Korea, security like that is paramount. With a network of 14 bank branches and internet and mobile banking services, Chinae needed to strengthen security for customers’ personal information, such as bank account details and home addresses, and prevent malicious attacks and data breaches to ensure compliance with stringent Korean Personal Information Protection Act requirements.
By combining Oracle Advanced Security and Oracle Exadata Database Machine, Chinae experienced the following results:

Minimizing exposure of sensitive customer information during online transactions and keeping unauthorized users from accessing sensitive information improved data security.

Data encryption and redaction capabilities ensured the bank’s compliance with South Korea’s regulatory requirements.

Data redaction performed directly in the database increased security without affecting system response time or CPU utilization.

The "smart" Exadata features allowed credit-related transactions to be processed 3x faster than before, at 660 transactions per second.

Exadata's "out-of-the-box" pre-tested, pre-configured platform allowed the new retail banking platform to be deployed in just 5 months. It also accelerated data transfer between Chinae Savings’ core banking and information system and the external system for the Korea Federation of Savings Banks from 20 hours to just four—a 5x improvement.

Chinae not only improved data security, but also improved the performance of its risk analysis and credit management by enabling bank employees to rapidly access customer credit data, such as loan amount and credit rating, and to ensure timely updates to the account and customer information management systems.

You don’t have to sacrifice performance for data security. By engineering security into your infrastructure from the start, you can get the best of both worlds and avoid becoming a statistic on the growing data breach-shaming list. Learn more about the 4 pillars of data security in the report published by Accenture and Oracle, "Digital Trust: Securing Data at Its Core," and how Oracle Engineered Systems can help you enhance your financial data security.
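Redaction of the kind Chinae used happens inside Oracle Database itself (via Oracle Data Redaction), but the concept is easy to show at the application layer. This is a hedged sketch with a hypothetical helper name and masking rule, not Oracle's implementation.

```python
import re

def redact_account(value: str) -> str:
    """Mask all but the last four digits of an account number,
    mimicking partial data redaction. Assumes at least four digits."""
    digits = re.sub(r"\D", "", value)       # strip separators
    return "*" * (len(digits) - 4) + digits[-4:]

masked = redact_account("1234-5678-9012-3456")
```

The key property, whether done in the database or the application, is that the full value never reaches users who have no need to see it, while enough survives (the last four digits) for identification.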



2018 Prediction: Key to Business Growth Lies with Empowered DBAs

Data is the backbone of every organization, and IT is the first part of the organization that touches it. So when you streamline IT, you enable the business to perform better on all sides. With DBAs increasingly being tapped for more strategic initiatives while still tied to the operational tasks that consume the bulk of their time, automation of rote IT tasks may be the answer to enabling innovation within the organization. In fact, TechTarget predicts that due to automation, the IT department of 2020 will be smaller and less focused on technology. For IT professionals, this will likely lead to a more rewarding career.

Oracle Engineered Systems deliver purpose-built performance and interoperability from the same people you trust with your data. By standardizing on the Oracle stack, you get end-to-end security for the database, hardware, software, and cloud, and simplified management with a single-vendor support model. Technical debt is going to be a very real problem for the data center of the future if infrastructure complexity isn’t addressed early on. Luckily, systems and databases are getting smarter, which opens opportunities to create more efficiency and effectiveness for the organization as a whole.

Oracle recently announced the world’s first self-driving database, which will redefine many job roles, especially in IT. Oracle Autonomous Database Cloud leverages ground-breaking machine learning and AI to eliminate the risks associated with human labor, human error, and manual tuning. The system can detect possible intrusions and engage its own defensive mechanisms, as well as perform routine upgrades and patches automatically, without downtime. This relieves IT teams of routine, but risky and potentially disruptive, tasks such as patching, upgrades, backups, and tuning.
As a result, companies can direct greater resources toward extracting value from the data itself while still protecting the business from both security threats and unplanned downtime. This really opens the doors for what IT teams can enable for the business.

High-Demand Data-Centric Roles

According to CompTIA, more than half of actively recruiting U.S. IT firms are currently hiring to fill a need for new data skills. Among CompTIA’s emerging job titles to watch for in 2018 are Chief Data Officer (CDO), data architect, and data visualization specialist. Because these roles interact with data heavily, collaboration with IT teams and DBAs is imperative for success. Here are just a few of the jobs likely to come to the fore in the coming years.

Chief Data Officer: New regulations around data governance—particularly GDPR—are making boards and executive teams devote increasing attention to data management. This provides CDOs with an opportunity to lead the transformation into tomorrow’s data-centric organizations, working in lockstep with the IT teams and DBAs that manage that data directly. The value of a company’s data will only grow as companies begin to mine it for insights, and securing it will become as much a financial imperative as a technical one. With data increasingly viewed as a company’s most valuable capital asset, that puts the CDO on par with the CFO.

DevOps Engineer: The onrush of data can either speed up development or create a massive bottleneck between development (those who need to ship new services and features as fast as possible) and operations (those who need to maintain system stability and performance). A DevOps engineer enables effective communication and cooperation between these two groups, so ideas can flow freely and hit the market sooner without taking the entire system down. Oracle’s cloud consumption models can help accelerate the adoption of a DevOps model.
For example, using Oracle's public cloud or Cloud at Customer, creating a private cloud, or leveraging both in a hybrid solution to build a test/dev environment can enable more agile development cycles. The main thing to remember here is that because Oracle owns both sides of the cloud, your test/dev environment will be exactly the same as your production environment, allowing your team to focus on driving innovation rather than tending to potential downtime caused by compatibility issues. No other company can tout that.

Data Designers/Data Visualization Engineers: Telling the story of data has never been in higher demand. IT teams are under pressure from business users to develop data visualization capabilities, often delivered through user-friendly dashboards, to help meet business intelligence objectives. And that's where this role comes in. Visualization engineers make complex data accessible to both customers and business stakeholders, encouraging better decision-making and helping to build the foundations of a data-driven business. This role works closely with DBAs—or can even be the DBA—to extract large datasets efficiently and turn data into beautiful, interactive founts of information and insight. AS ONE, a scientific instrument trading company, leveraged the Oracle Database Exadata Express Cloud Service and Oracle Data Visualization Service to visualize the 1.4 million data points at the core of its system quickly and efficiently.

Data Scientist: While this role is not exactly new for 2018, demand for data scientists will only continue to grow on par with the growth of data. With growing quantities of unstructured data to mine for critical business insights, data scientists are becoming key players in the new data-centric economy. They will be at the forefront of extracting meaning from the burgeoning data capital of their organizations and delivering it directly to stakeholders.
Soon, data scientists will begin leveraging big data for predictive analytics—forecasting sales or anticipating customer buying patterns. Data scientists work closely with enterprise architects and DBAs to build the big data infrastructure that can pull in and analyze information from diverse sources at speed and scale. Traditionally, this required a costly on-premises buildout, but given the volume of data that enterprises handle today, businesses can no longer wait that long. Oracle Big Data Appliance offers an out-of-the-box, high-end analytics solution purpose-built for handling data moving in and out of Oracle Database, so you can start extracting value from your data much sooner. CaixaBank, a leading Spanish bank, consolidated data marts into data pools with Exadata, Big Data Appliance, and Oracle Big Data Connectors—quickly and easily integrating massive volumes of data from all points of sale and from customers' online and mobile profiles, enabling the bank to understand customers, their preferences, and their mood, and to quickly and flexibly offer them tailored solutions.

Blockchain Developer: Blockchain—the technology that enables cryptocurrencies such as Bitcoin, Ethereum, and my personal favorite, Dogecoin—is far more than an instrument for financial transactions. It offers the ability to move and store almost any kind of valuable data, without a centralized database, while maintaining its integrity and security. It's revolutionary, and demand for blockchain expertise has grown swiftly across a variety of sectors and industry verticals, with businesses hiring for a range of roles that require blockchain proficiency—from leading experts to software developers and engineers with a solid understanding of basic blockchain principles. Here are four things you can do with blockchain in your organization, today.
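The integrity property described here, in which records stay tamper-evident without a central authority, rests on each block committing to a cryptographic hash of its predecessor. Below is a minimal, purely illustrative Python sketch of that hash-chaining idea; it is not how Bitcoin, Ethereum, or Oracle Blockchain Cloud Service are actually implemented, and the shipment records are invented for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: dict) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Re-derive every link; altering any earlier block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
add_block(chain, {"shipment": "SKU-123", "qty": 40})   # hypothetical records
add_block(chain, {"shipment": "SKU-456", "qty": 12})
assert verify(chain)

chain[0]["data"]["qty"] = 400   # tamper with an earlier record...
assert not verify(chain)        # ...and verification fails
```

Because every participant can recompute the hashes independently, no single database owner is needed to vouch for the history, which is the property that makes the technology attractive beyond currency.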
While it may be a few years before blockchain is present in a large percentage of our customers' technology stacks, Oracle is building paths for easier ramp-ups of blockchain initiatives, when customers are ready, with Oracle Blockchain Cloud Service.

Shifting Skill Sets Beyond IT

Just as the IT team will need to take on higher-level analytical capabilities, so too will those in many other facets of the business:

Marketing will require the ability to capture customer interactions and analyze data for more targeted and effective campaigns.
Product development will need data-savvy product managers to shepherd new products to market, working across teams to drive development.
HR teams must be conversant in human capital management (HCM) platforms that use data to refine and automate recruiting, onboarding, development, and retention.
Supply chain management will need big data analysts to create sophisticated models to increase efficiency, reduce costs, and shape demand.
Sales analysts will use algorithms to determine which prospects and existing customers are most likely to purchase products and services.
Legal experts will be expected to mine data for evidence and to assure compliance with policies and procedures.

Who, or What, Will Support Business Growth in 2018 and Beyond?

Not too long ago, the future of IT was outsourcing. Now that automation is more accessible and economically feasible, it is fast becoming the next iteration of everything we know. Oracle 18c, better known as the Autonomous Database or self-driving database, eliminates the cost and outage risk associated with human labor, making it an incredibly reliable platform on which to run your business. And the Oracle Autonomous Data Warehouse Cloud, powered by the high-performance Exadata platform, delivers automated caching, adaptive indexing, and advanced compression—all without human intervention.
With more efficient IT, companies can empower DBA talent to drive the highest impact on the business, helping the business make the most of its data and creating an insight-driven economy that will enable growth in ways we have yet to imagine.


Selling the Future: Where Can Your Supply Chain Take You?

The retail industry turned 180 degrees when shirts, coffee pots, convenience-store snacks, and other retail goods took on secondary lives as data points in digital software and hardware. Simply put, apps and smartphones have trained shoppers to expect immediate gratification all the time. At the same time, e-business retailers have moved in with rapid-fire fulfillment and minimal capital investment. These digital-native retail disruptors are making traditional retail supply-chain management practices focused on inventory replenishment obsolete.

Today, supply chain management (SCM) success is more about managing direct demand than about managing inventory to keep stores stocked. Consumers want their demands met as soon as they recognize a need, and within this demand-driven supply chain is "the mile after the last mile"—a direct customer order being fulfilled within hours after a digital order is placed, whether that's delivery by drone, customer pick-up at a nearby location, or delivery through other means.

Perhaps nothing represents the depth of retail-industry disruption more than the Futurecraft 4D, an athletic shoe with a 3D-printed sole that Adidas is mass-producing in the United States with the help of Silicon Valley company and Oracle customer Carbon. The company plans to make 100,000 pairs in 2018, reversing a half-decade trend of shoemakers fleeing the United States for cheaper labor markets. (By 2014, 98% of shoes sold in the United States were made elsewhere.) A closed cycle of digital signals will drive the order, design, manufacture, and delivery of each pair. Essentially, Adidas will be able to quickly and inexpensively produce small batches of customized shoes with superior-quality materials for select groups of customers and sell them at a premium price—all while matching supply precisely to demand and getting shoes to customers in days, not weeks. And soon, drone and autonomous-car deliveries could shrink this to hours.
The bottom line for any retail business or brand—with physical stores or without—is that near real-time digital inputs from customers and sensor technology are driving demand along a digital supply chain that extends beyond showrooms and shelves directly to the customer—the mile after the last mile.

Streamlined and Flexible Supply Chains Will Dominate

Competitive gains that retailers won in the past through things like automated warehouses and supplier rationalization programs are simply part of doing business now. The real opportunity for strategic benefit through SCM will come from newer technologies such as blockchain and additive manufacturing (3D printing at scale) that shorten the time and distance between supplier and retailer, and between retailer and customer. This streamlining is important for responding to customers, who are now telling retailers what they want, how they want it, when they want it, and how they want to receive it. Every relationship needs to be nurtured individually—and this does not align with the mass-merchandise model that has dominated the retail industry for so long. What's needed is more flexibility in the supply chain, and this requires transparency, automation, and speed.

What Today's Reliable Retail Supply Chains Need

How can organizations build supply chains designed for transparency, automation, and speed? Such supply chains are digitally and data driven, incorporating automation to minimize manual processes and breaking down data silos for end-to-end visibility. Technologies such as IoT and augmented reality hold the keys to creating a data-driven supply chain—provided they are built on the backbone of a cloud-ready engineered system, which becomes the infrastructure to store, organize, and process all the crucial data they need.
But what retail organizations often lack is that cloud-ready engineered-systems backbone—that is, integrated hardware and software that handles very heavy data workloads and makes the information usable for planning, forecasting, procurement, production, marketing, customer service, and other functions. It's all about having an integrated infrastructure for the greater good at every step of the supply chain. For example, a limited database of supplier information is an idle asset. On its own, it won't help a retailer find a trusted supplier that can meet variable anticipated demand for a product. But with a backbone of scalable, cloud-ready data storage, extreme processing, and computing power that pushes accurate, up-to-date market and supplier data into easy-to-use applications, retailers can gain speed through automation and can answer sourcing questions quickly and with confidence. They can also order goods based on predictive analytics, provide special instructions, and trace merchandise movement from the same application—all while improving the ability to scale up or down in real time to accommodate spikes in business, such as during the holiday season. The engineered-systems backbone also securely preserves one version of the truth for all operational data, so mistakes and miscommunications that slow down processes and add costs are eliminated or made visible for fast correction. That's transparency.

Now, think about what could happen with cloud-based technologies such as blockchain, which promises to reduce friction in supply chains and trading networks. Retailers would be able to create self-governed networks of suppliers that trade goods using transparent, distributed-ledger capabilities. This could enable automated smart contracts, instant payments, and Internet of Things or sensor-activated shipments.
Without human interaction, errors and missing information are reduced across supplier transactions, and transactions happen faster because retailers and suppliers are directly connected. If a shipment needs to be rerouted for an unexpected reason, the retailer can learn about it sooner through an automated alert and take action to prevent downstream delays. Says Mario Vollbracht, Oracle Global Director of Consumer Markets: "In rural Belgium, where I grew up, we had milk and other groceries delivered to the door. In some ways, the milkman is back. But that was a one-off process. Now, retailers have to figure out how to deliver this personalized experience at scale. Retailers need 'one view of the truth' that can be shared with every link in the supply chain." Setting something like this up requires a solid foundation of engineered systems—the backbone—that can feed data into and receive data from the trading network and seamlessly integrate it into operations.

Ready for Today and Tomorrow?

One store-based retailer that is embracing the digital flip is 7-Eleven, which connects with customers through a digital guest program powered by Oracle Engineered Systems with Exadata and Exalogic machines, and uses enterprise performance management software across 8,500 locations. 7-Eleven has invested heavily in industry-leading Oracle Engineered Systems technology to realize huge benefits in process integration and consolidation on a unified business-management platform. The company uses a cloud-ready model with PaaS, SaaS, and Fusion Middleware. "We chose Oracle [not just for the guest program] but because we wanted to leverage those tools and platforms for a number of upcoming strategic projects for our accounting systems and also our merchandising systems," explained Steve Holland, Chief Technology and Digital Officer. 7-Eleven now plans to "sense" demand through predictive data analysis and scale merchandise as needed, such as for busy holiday times.
Holland said the company's technology is now aligned with its value proposition, which springs from speed and availability. An integrated engineered system like 7-Eleven's can provide a retailer with 80% of what it needs for successful SCM and enterprise management; the retailer can then customize the remaining 20% for its unique needs.

Incremental Change Is Not Enough

Given the amount of data and variation large retail enterprises deal with, it would be very difficult and far more costly to replicate the capabilities of an engineered system using entrenched processes and outdated technology—a reality that all large retailers need to understand and respond to for survival. To move fast and capture the benefits as soon as possible, retailers need to reconsider everything they are doing based on previous models and rethink supply chain management from the perspective of the "me-now" customer in the new retail world: wanting fulfillment immediately after ordering (and that window is shrinking quickly from days to hours) and in whatever manner they choose. What retailers must do in response is automate actions and integrate and consolidate data with integrated, engineered systems. The retail industry's flip to digital fulfillment has already happened, and the industry will never be the same. To survive, businesses will need to transform their supply chains into demand-driven supply chains. Slow movers can count on closed doors and empty carts, while fast movers will be ready to respond to whatever consumers demand.

Stay tuned for the next blog series, and check out the previous posts in this retail series: Selling the Future: The Last Days of Retail or the Best Days of Retail?, Selling the Future: Designing Experiences Your Customers Crave, and Selling the Future: You're Not Selling Goods. You're Selling an Experience.



Selling the Future: You’re Not Selling Goods. You’re Selling an Experience

As we saw in part 2 of our "Future of Selling" series, today's consumers are often digital natives who don't shop the way their parents did. Expecting an app to do literally anything and everything via a smartphone or tablet, modern shoppers no longer view a trip to the store as a necessity before making a purchase. Some may even see it as a burden—time and energy wasted going to a store only to discover that they could buy the item cheaper online and have it conveniently shipped wherever they wanted. Therein lies the dilemma for retailers: If a trip to the store is no longer taken for granted, how do retailers get shoppers into the store?

Brick-and-mortar retailers are caught on the wrong side of the digital shift in retail, with many stuck in a dangerous cycle of falling foot traffic, declining comparable-store sales, and increasing store closures. More than 8,600 retail stores could close this year in the U.S.—more than in the previous two years combined, brokerage firm Credit Suisse reports. Tom Goodwin wisely observed that retailers must decide to make shopping either practical or an experience, not both: "Retail is becoming a world of extremes. Brands either need to remove complexity and make the process as simple as possible, or add it in to create a 'delightful' experience." To appeal to new shoppers, the answer is to deliver more than just good, fast service. While material goods may not be as important to millennials and other digital natives, an experience is extremely important.

It's Not Either/Or but, Rather, Both/And

Contrary to popular belief, millennials actually value the brick-and-mortar store experience more than any other demographic. According to a recent GeoMarketing article, millennials aren't giving up on stores—they just want an enhanced shopping experience. ROTH Capital Partners' 2017–2018 Millennial Survey also identified that: 43% research online before buying at a physical store.
71% say the right in-store experience would increase visits and purchases.

Rarely without a smartphone, millennials want the option to shop both online and in-store (and sometimes online while in the store). They value being immersed in the brand experience, whereas Gen Xers and prior generations tend to view shopping more as simple transactions. Millennials are building their identity through the modern shopping experience.

Personalizing the In-Store Experience

If retailers can no longer simply stock the shelves or racks with a variety of products and expect to see sales, how can they create the kind of experience that today's shoppers want? Personalization is key. And "shopping cart analysis" can provide the components to gain key consumer insights and turn them into a personalized customer experience. Based on a fully unified and integrated infrastructure, Oracle offers a complete, cloud-ready solution that gives retailers one centralized platform to analyze buyer behavior, turn those insights into personalized in-store and online experiences, and ensure that the supply chain delivers purchases as fast as the buyer wants.

Creating a Digital Experience for In-Store Guests

7-Eleven's entire business model is in-store, so it is a great example of a retailer that has taken bold steps to respond to the demands of a new consumer generation—connecting with its customers through a holistic digital guest experience the minute they walk through the door. Leveraging Oracle Engineered Systems, Oracle Exadata, Oracle Exalogic, and Oracle Enterprise Manager, the convenience store giant launched a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada.
Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, websites, and mobile apps to promote customer loyalty, distribute targeted promotions, customize digital coupons, and accept digital payments—all to deliver the most rewarding customer experience possible. The 7-Eleven app helps the company get to know every single user digitally. 7-Eleven can see what each customer is buying, how often he or she buys it, how offers (like a "buy six, get the seventh drink free" offer) affect buying behavior, and which offers individual store guests prefer. And customers responded well. One year after the launch, app scans had more than doubled, and customers' baskets had increased almost 25% on average. The company also discovered that when customers redeemed their free drinks, they spent 30% more on average than before the rewards program.

What about the time required to deploy successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE IT Program Manager, reported at Oracle OpenWorld, "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases." Watch what Steve Holland, Chief Technology and Digital Officer for 7-Eleven, has to say about building the powerful infrastructure required to keep an application that serves nearly three billion customers annually up and running 24/7.

What Will the Future Retail Customer Experience Look Like?

No matter how convenient online shopping is, it can't yet replace the high-touch in-store experience. Layout, stocking, and even the temperature inside the store are important. Beyond these factors, augmented reality and virtual reality can help shoppers literally see what's possible. RFID research can also be used to personalize the customer experience.
RFID tags on individual items help retailers get product stocking right, understand what drives sales, make the fitting room a place for engagement, and handle shipping. Retail industry expert Michael Forhez, Global Managing Director, Consumer Markets Industry Solutions Group at Oracle, gave us a glimpse into the future. According to Michael, the future experience will engage every customer who comes into the store. Confectioner Lolli and Pops has already taken a step toward that future, using "smart store analytics" to anticipate staffing levels and decide where to place staff within the store. "Further in the future (but not too far away)," says Michael, "consumers may stop at a store to pick up groceries, but go into the store when it comes time to plan a party. Once inside, the future consumer will expect customized service—perhaps meeting with the butcher to select a special cut of meat, then talking to the wine steward about what wines to pair with the meal, and connecting with a decorating professional to get the party ambiance exactly right."

Technology is at the heart of creating a personalized experience. Artificial intelligence and cognitive computing can be used in the online experience to drive traffic to the store. Predictive analytics can help forecast buying behavior and create custom product recommendations. The timing of offers can also coincide with previous consumer patterns. Offers can even pop up on smartphones while customers are in the store—a kind of 21st-century "Bluelight Special." For more from Michael, watch his appearance on the special pre-Oracle OpenWorld edition of "Exadata Your Way," where he explored some of the major changes happening in both the retail and financial services industries.

An Offer Customers Can't Refuse

With 1,807 stores and 38 distribution centers in the U.S., Target realized that it could offer a blended experience for customers, offering competitive same- and next-day delivery services and in-store pickup.
Customers see an opportunity to save on wait time and shipping costs, and Target is able to keep stores relevant by turning them into fulfillment centers that pull customers back into stores. Just three months after deploying Exadata, Target was able to build and push "pick up at store" and "ship from store" options to more than a thousand stores just in time for the holiday shopping season. The new infrastructure enables Target to serve modern customers better, ensuring that customers receive their Target.com orders faster and more reliably than ever before. "Now that we are delivering with greater speed and great flexibility, it is changing expectations," said Tom Kadlec, Senior VP of Infrastructure and Operations at Target.

The Future of Retail Is Now

Are your stores and your IT infrastructure up to the challenge of handling millennial consumer demands? You don't have to wait until the future to respond to the new world of retail. The fact is, you can't afford to wait. Luckily, there's no need to start from scratch—you can build on what you already have to start offering a more personalized customer experience that will transform your business. Oracle Engineered Systems are pre-built, pre-configured, pre-tested database platforms co-engineered with the Oracle Database and application teams at the source-code level to provide a highly unified experience. With on-premises options that have exact equivalents in the cloud, you can build based on your own architecture specifications. The cloud provides the capability to centralize all your data from different legacy systems in different data centers, on premises or at other cloud sites.

Macy's chose Oracle Engineered Systems and Exadata Cloud Service for this reason. Rather than having to build its IT infrastructure from individual components, the multi-chain retailer was able to quickly deploy a completely integrated solution that it can run on-premises or in the cloud with the exact same experience.
Macy's is just one more example of a retailer that's not waiting for the future to arrive, but running out to meet it. Don't be left in their dust. Oracle Engineered Systems come cloud-ready, with three consumption models available: on-premises, cloud, and Cloud at Customer, a revolutionary system that delivers the public cloud behind your firewall, fully managed by Oracle on a subscription basis. You can maintain control and comply with data sovereignty laws by keeping your databases on-premises while leveraging cloud services for burst computing, such as during a busy holiday shopping season. Once the holiday shopping tapers off, you can reduce your capacity—all without capital expenditures, because you purchase capacity on an as-needed basis. It's an integrated solution that you deploy on your terms. Learn more about Oracle Engineered Systems.



Selling the Future: Designing Experiences Your Customers Crave

The rulebook for competing and winning in the modern retail climate is entirely different than it was a decade ago. Back then, retailers could capture consumer loyalty by offering a vast product assortment, competitive prices, adequate service, and positive feedback from friends and family. But the reality is, consumers can now access hundreds, if not thousands, of options offering the very same thing online—and they only have to click a few buttons to get it. Retailers must not only provide the best products at the best prices. They must also provide convenient, immersive, and relevant experiences across all channels. And achieving that requires one thing: a deep understanding of every customer and their unique preferences, as well as the collective and ever-evolving expectations and tastes of target consumers.

Although customer wants and needs are always changing, there is one constant: consumers' growing tech-savviness and reliance on digital tools have given them more power than ever before, ultimately turning the retailer-consumer relationship on its head. With that shift, retailers need an IT infrastructure that can support the vast amount of data that must be collected and processed at speed to anticipate and exceed customer expectations.

Digital interactions influence new retail rules and imperatives

There is a common narrative that millennials and centennials (also known as Gen Zers) are the ones who sparked the disruption occurring within the retail industry. To an extent, this is accurate. After all, they are "digital natives," having grown up using technology regularly—including to shop. Research from PwC indicates that 40% of U.S. millennials buy products online on a monthly basis, and PwC's U.S.
Retail and Consumer Leader Steve Barr states that "as this generation of shoppers enters their prime shopping years, they will continue to drive much higher mobile usage." An Accenture survey found that more than 40% of Gen Zers purchase more than half of their apparel and consumer electronics online, and about a quarter (24%) of them prefer to purchase all items online. But make no mistake: these digital natives are not the only ones gravitating to e-commerce. The National Retail Federation expects a significant uptick in online sales this year—8% to 12%, to be exact. Brick-and-mortar, however, is poised to see a rise of merely 2.8%. Consumers of all ages are now gravitating to the convenience of online shopping, thanks to retail innovators such as Amazon and Walmart. What sets these brands apart is that they have invested significantly in creating seamless and enjoyable experiences via personalized offers and coupons, curated assortments, and even automatic reordering for everyday essentials. Now more than ever, consumers expect retailers to know and understand them on a deeper level, and to tailor experiences to what they want and need at a moment's notice. Nearly half (48%) of shoppers are even willing to share data—from their personal information to their location—to gain these improved experiences across all channels, according to Deloitte. To compete and thrive with the likes of Amazon, retailers must collect and leverage customer data to tailor all facets of the customer journey—from awareness to conversion.

Use data insights to your digital advantage

The secret to standing out in this competitive retail climate is the ability to harness the power of data insights. Data intelligence allows retailers to monitor consumer purchases, as well as their behavior across digital touchpoints such as email campaigns, banner ads, social posts, and even different areas of an e-commerce site.
Retailers can then marry this data with brick-and-mortar behaviors, including coupons and offers redeemed and items purchased, to strengthen inventory assortment, merchandise displays, store layouts, and even in-store marketing and promotions to provide the best in-store experience possible. These more relevant omni-channel experiences pay dividends for retailers, as they can lead to increased shopper engagement and sales. As Harvard Business Review research has noted, omni-channel shoppers spend about 4% more every time they're in a brick-and-mortar store, and 10% more when shopping online. But most of all, these experiences help retailers create an extremely detailed view of the customer—what makes each person unique and, in the end, what drives that person to purchase—empowering retailers to make smarter investments and create better experiences over the long term.

Data can also be used for analytics that assess past purchase behaviors for different demographics. What products or brands are most popular? Which sizes and colors are most in demand? What marketing campaigns are most effective? But it's not just about using data for today; it's about using data to better prepare for tomorrow, too. By embracing predictive analytics, you can forecast (or predict) what your consumers are likely to do in the future, helping you make smarter decisions in all facets of the business—from merchandising to marketing. As the number of commerce channels and digital touchpoints available to consumers continues to expand, so too will the breadth and depth of data at your fingertips. This data can empower you to understand your customers on a deeper level and know the right way to tailor their offers and experiences (without smacking of Big Brother) across all channels.
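As a purely illustrative sketch of what forecasting consumer behavior can mean at its simplest, the snippet below fits a least-squares trend line to hypothetical weekly sales and projects it forward. Real retail forecasting uses far richer models and data; the figures and function names here are invented for illustration only.

```python
def fit_trend(sales: list) -> tuple:
    """Ordinary least-squares fit of y = a + b*t over weekly sales."""
    n = len(sales)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(sales) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, sales)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def forecast(sales: list, weeks_ahead: int) -> float:
    """Project the fitted trend weeks_ahead past the observed data."""
    a, b = fit_trend(sales)
    return a + b * (len(sales) - 1 + weeks_ahead)

# Hypothetical weekly unit sales for one SKU
history = [120, 132, 128, 141, 150, 158]
print(round(forecast(history, 2)))  # a simple forward demand estimate
```

Even this toy model shows the shape of the exercise: historical behavior becomes a parameterized model, and the model becomes a decision input for merchandising or marketing.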
Bring it all together with cloud-ready IT solutions

With so much data to collect and process from a growing number of online and offline touchpoints, how can retailers collect, store, and use data to their advantage when it often rests in a series of disparate systems? Centralizing data is an essential first step, and while it can seem like a daunting challenge, it doesn’t need to be. The cloud enables retailers to ditch data silos and to process and analyze data quickly. Retailers that build a cloud-ready IT infrastructure can act faster, engage better, and be bolder, especially as disruptive trends like the Internet of Things, artificial intelligence, machine learning, and robotic process automation play a more critical role in the new era of customer experience. But manually piecing together solutions is inefficient. It slows time-to-market and supports only incremental change. In today’s retail environment, where only fast, innovative, and nimble organizations survive, retailers need a complete, integrated infrastructure encompassing database, network, and application services, so they can truly optimize data intelligence. And they need a partner that can help them maximize the value of their data and ensure their journey to the cloud is as seamless as possible. In the end, the right partnerships will help them forge a path to the future. For example, retailers worldwide have partnered with Oracle to use Oracle Engineered Systems to improve operations and customer experiences. Macy’s selected both Exadata Cloud Service and the Oracle Integrated Cloud solution based on positive experiences with Oracle Database and Engineered Systems. And Hong Kong–based A.S. Watson Group increased sales productivity by deploying Oracle Exadata to accelerate online transaction processing and data analysis.
Break down barriers between digital and physical Although consumers are relying more on digital channels to browse and buy, that doesn’t mean retailers should completely abandon their brick-and-mortar roots. In fact, new innovations in in-store design and technology enable retailers to leverage the data that fuels their digital interactions in stores. Stay tuned for the next post in this series, where we’ll share the ways retailers can harness technology to provide more personal and memorable in-store customer experiences.  


Cloud Infrastructure Services

GDPR Compliance and the Cloud – Help or Hindrance?

Today's guest post comes from Paul Flannery, Oracle's Senior Director, Business Development, Systems in the Europe, Middle East, and Africa region. Organizations are currently faced with the question of how to approach the General Data Protection Regulation (GDPR), the new legislation coming into force in May 2018 which sets out to harmonize data protection across the European Union. Rather than being seen as a compliance burden by Europe-based organizations and global entities that do business in the EU, GDPR should be seen as one of the best opportunities to deploy long-term technology investment to unlock true digital transformation. While the regulation itself is limited to the processing of personal data, the EU’s interpretation of what that actually constitutes is broad. Essentially, any data that relates to an identifiable living person, including something as disconnected as an IP address that can identify a specific user’s device, is regarded as within scope. The extended scope of the legislation doesn’t end there. For example, organizations are obliged to take into account the “state of the art” in cybersecurity, yet specific technologies, controls, or processes beyond that phrase remain unmentioned, leaving a high degree of risk assessment and subsequent judgement to be applied by the organization itself. The timescale for addressing compliance is tight, too, and any organization of sizable scale will find it difficult even to understand what data it holds in the first place and assess its sensitivity. The cost of non-compliance is what has brought GDPR to the attention of boardrooms not just in the EU, but globally. The potential magnitude of fines is significant (4% of an organization’s global revenue or €20 million, whichever is greater), as is the potential reputational damage that may result from non-compliance with the new mandatory breach notification requirements.
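The fine formula above reduces to simple arithmetic: the maximum penalty is the greater of 4% of global annual revenue or €20 million. A quick sketch (the revenue figures are hypothetical):

```python
# GDPR maximum administrative fine: the greater of 4% of global annual
# revenue or EUR 20 million.

def max_gdpr_fine(global_revenue_eur):
    return max(0.04 * global_revenue_eur, 20_000_000)

# Hypothetical examples: a mid-size firm vs. a large multinational.
print(max_gdpr_fine(100_000_000))    # 4% = EUR 4M, so the EUR 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # 4% = EUR 80M, which exceeds the floor
```

The floor is what makes the regulation bite for smaller organizations: below €500 million in revenue, the €20 million figure, not the percentage, sets the maximum exposure.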
The inevitability of cloud computing

The cloud, whether it’s public or private, Software-, Infrastructure-, or Platform-as-a-Service, can mean different things to different people, and the overall understanding across the majority of industries is somewhat immature, specifically with regard to compliance and security. Yet the journey to the cloud is happening regardless, and without proper security in place, that inevitable shift will arrive in the form of shadow IT, bringing with it unnecessary risk exposure. Generally speaking, there are substantial benefits in moving to the cloud, such as enhanced security capabilities that go beyond what would be affordable for most organizations in an on-premises environment. However, any move to the cloud needs to be carefully planned and properly architected, because with the new legislation approaching, the consequences of getting it wrong are increasing significantly. GDPR compliance is a long-term commitment, and investment in implementing a cost-effective supporting infrastructure will prove valuable in the years ahead. It might even represent one of the biggest opportunities to accelerate digital transformation in recent years. It places focus on good data management, with benefits to organizations ranging from increased security and operational efficiency to improved customer service and corporate reputation. For example, one of the key legislative requirements is to be able to provide any individual with every piece of data an organization holds on them, including all data records and any activity logs that may be stored. On the one hand, this places significant technology requirements that would only be met through the simplification and standardization of complex IT environments. On the other, the potential of converged data of that quality from a business or marketing perspective is substantial, and brings with it a wealth of possibilities.
Earlier this year, IDC gathered CIOs and CISOs from enterprises across EMEA to gain insight into how they are approaching GDPR in light of current cloud adoption and security requirements. The resulting report, ‘Does Cloud Help or Hinder GDPR Compliance?’, summarizes discussions from events in France, Italy, Morocco, Spain, South Africa, Sweden, and Switzerland. It not only flags the many potential benefits of compliance, but also sets out IDC’s simple but effective technology framework to help organizations focus on the particular requirements of GDPR and select the right technology for the job. The full report is available to download here.

About the Guest Blogger

Paul Flannery is the Senior Director, Business Development, Systems for the EMEA region at Oracle. With more than 30 years of experience in the IT industry as a software developer, IT manager, and pre-sales consultant, Paul brings a 360-degree view of the IT market, having held several sales leadership and general management roles, with over 25 years of experience in global account leadership, partners and alliances, and business development, working with large global corporate customers across a broad range of industries. Paul is well known for his strong strategic thinking, coupled with an execution focus that helps customers and partners deliver quantifiable business value to their key stakeholders. Find Paul on LinkedIn at https://www.linkedin.com/in/paul-flannery-1849262/


Cloud Infrastructure Services

Selling the Future: The Last Days of Retail or the Best Days of Retail?

We’ve been hearing the term “retail apocalypse” ad nauseam. Is it really the last days of traditional retail? Or do the best days of retail still lie ahead? In the end, each retailer can determine its own fate: inevitable decline or enviable success. If the holiday shopping season is any indicator, there are some worrisome trends for traditional retailers that are not taking their futures into their own hands. Consumers continue to shift from brick-and-mortar (down about 1.6% over Thanksgiving Day and Black Friday) to online and mobile apps. In fact, Thanksgiving and Black Friday online sales rose 17.9% this year over last year, and purchases via smartphones were up an astounding 50% this Thanksgiving Day versus last year. While brick-and-mortar stores are holding their own this holiday season, they’re doing it with about 3,800 fewer stores. And the future doesn’t favor the traditional retail model.

What happened to traditional retail?

The industry has always adjusted to cycles in consumer behavior to maintain and grow brand loyalty: new generational nuances, shifts in the distribution of household income, changing populations and urban footprints, multi-channel support for commerce and mobile devices, and an addiction to discounting and convenience. Fearless industry disruptors have used technology to turn their brands and entire industries upside down, taking incumbents to the brink of extinction in some cases: disruptors like Uber in ride-sharing, Airbnb in lodging, and Zara in retail. And it all happened in little more than two decades. In retail, recent changes in consumer behavior, especially millennial shopping habits, along with the rise of online and mobile app shopping, have given consumers more control. In many cases, they don’t need to visit stores. They can experience a brand and its products digitally, in new ways.
Other companies, like clothing retailers, are using emerging digital technologies to get the most from their employee base and design processes and to stay ahead of consumer trends. One great example is lululemon athletica’s creative design and planning approach, which allows its global merchants to share design images with regional buyers and planners to bring new trends to market well ahead of each season. The workflow is supported by lululemon athletica’s creative employees, who prefer role-based, graphical interfaces and rich visual design experiences because it keeps them in tune with their customers’ buying preferences. Times are changing. It seems apparent that many retailers were a little too complacent and thought that shoppers would continue to come into stores, and it left them flat-footed. A recent Bloomberg article notes that while retailers announced ~3,000 store openings in the first three quarters of 2017, they also reported that ~6,700 stores would close, including about 550 department stores. The article goes on to say that more retail chains have filed for bankruptcy and been rated distressed than during the financial crisis in 2008. (From Matt Townsend, Jenny Surane, Emma Orr, and Christopher Cannon, “America’s ‘Retail Apocalypse’ Is Really Just Beginning,” Bloomberg.com, November 8, 2017.) But it’s not too late for retailers to adopt technology that offers them the chance to re-emerge in the marketplace—and win. Retail power continues to shift away from retailers at a much faster pace and into the hands of the consumer. Retailers are responding in a big way, focusing on the art of convenience. Buyers shop online and get deliveries where they want them, when they want them. The supply chain doesn’t stop at the last mile any more.
Now retailers must consider “the mile after the last mile.” Leading retailers are deploying tools that take all the consumer data that’s available and extract knowledge about purchasing patterns, buying propensity, and more: tools like data analytics, predictive analytics, artificial intelligence (AI), and machine learning (ML). Armed with this intelligence, they are staying ahead of customers’ insatiable appetite to interact with their brands digitally, and delivering new services that anticipate their needs in real time. Companies like Target Stores in the US are riding this innovation curve, making purchases placed online available for curbside or in-store pick-up and delivering real-time buyer gratification with mobile couponing to drive loyalty. Even this won’t be enough. On top of disruption to the retail model, the pace of change still threatens to leave traditional retailers in the dust; those retailers who make life more convenient for people get this. 7-Eleven is another example of a company that started down the digital path with Oracle early on, using Oracle’s Engineered Systems flexible on-premises and cloud-based platforms as a cornerstone of its digital strategy. It appears that the industry remakes itself almost overnight, before some retailers can respond. Where does that leave traditional retailers today? In the traditional model, retailers relied on efficient processes and systems for success. They tinkered on the margins rather than implementing wholesale change. The old way won’t work anymore. We see this in the bankruptcy filings and disappearance of retailers that were household names only a few years ago. Just this year, some big names that filed for bankruptcy included The Limited, Hhgregg, Payless ShoeSource, Gymboree, and Toys R Us. Those that have survived still struggle with change. But the savvy retailers are taking a cue from the disruptors and adapting to the new reality. What lies ahead for retailers?
The world belongs to the brave and the bold

The future is about the customer experience (CX)—and that means harnessing the intelligence held in all the data collected in day-to-day operations. The power of information gathering, advanced analytics, and data visualization, coupled with digital marketing and real-time delivery capabilities, has spawned an entirely new breed of fearless competitors who’ve changed the rules of the game. This is not a world for the timid. Retailers must be willing to take risks and look at completely new business models. The smart retailers know this, and they know they can’t do it themselves. In the race to market, they can’t afford to spend months, or even years, developing strategies and building the IT infrastructure to support those strategies piecemeal. To survive, they must partner with companies with the expertise and experience to build a radically new strategy—partners who can help them design the future. In terms of IT infrastructure—which is absolutely critical to this radical operational overhaul—retailers must partner with vendors who have already built integrated IT solutions that can be deployed in weeks and provide performance that simply can’t be achieved with DIY solutions. This infrastructure must be cloud-ready to consolidate data and scale as needed. The retail graveyard is full of retailers who weren’t willing to take risks and move boldly. Are these your last days, or just the beginning of your best days? As a retailer, you hold the power to determine your future.

Let’s get to the details

This is just the beginning of the discussion. Watch for future posts on this topic. In part two, we’ll look more closely at the consumer-in-control—how retailers can adopt technologies like artificial intelligence (AI) to create the modern customer experience (CX), especially to meet the demanding expectations of the powerful cohort of “digital native” shoppers.
In part three, we’ll take a look at how retailers can harness technology to provide a personalized in-store customer experience. It’s no longer if or when retailers need to act boldly; it’s now how—from leveraging personal devices and cloud-based solutions, to employing AI for forecasting and responding to in-store traffic, to comprehensive shopping cart analysis that can provide all the components for gaining key consumer insights and turning them into a personalized CX. In the final installment, we'll turn to technology and the retail supply chain. Automation and blockchain are two key components helping retailers deliver on changing consumer expectations, whether it's at the store or the "mile after the last mile". Stay tuned.


Cloud Infrastructure Services

How 2 Manufacturing Companies Are Preparing for the Cloud

When I hear “manufacturing,” my mind immediately shifts to sepia-toned, steam-powered factories. But the adoption of innovative technology is putting that outdated image to rest. Digital technologies that have emerged in the last five or so years, like the Internet of Things (IoT), artificial intelligence, machine learning, automation, robotics, and data analytics, have fundamentally changed the manufacturing sector. As Daniel Newman notes in a recent Forbes article about the top five digital transformation trends in manufacturing, “Not since Henry Ford introduced mass production has there been a revolution to this scale.” As manufacturers look to make their operations leaner and more competitive, integrated cloud-ready IT systems like Oracle Engineered Systems have become crucial for effective digitization strategies—helping to manage operating costs, improve efficiency, and enable almost instantaneous responses to changing customer and market demands.

Competing in the Digital Age Requires Significantly Better IT Systems

That's a fact. In the digital age, we have high expectations for seamless experiences—making the business environment ever more competitive. This is especially true for manufacturers, who must manage ever-growing complexities to meet the needs of their customers. Manufacturers are constantly looking for ways to beat the competition by moving faster, cutting costs further, and winning customer loyalty. Modernization doesn't just happen by deploying a new web-based SaaS application. And it doesn't just happen by upgrading your critical systems to the latest version (although that is a place to start). IT systems are the backbone of every company's technological prowess; it's how companies compete now.
These two real-world examples demonstrate how companies can leverage new cloud-ready technologies to adapt their existing IT systems for the digital age: improving application and reporting speeds, enabling real-time analytics, and reducing unplanned downtime of their most important business applications while preparing for the cloud shift. Both do this by leveraging Oracle Engineered Systems to make their current enterprise resource planning (ERP) systems perform faster at maximum availability. The additional cost savings are just a cherry on top.

Speeding Up Global Growth

Worthington Industries, a rapidly growing diversified metals manufacturer, was experiencing growing pains as a result of global expansion. Worthington Industries had grown sales to US$2.8 billion, with 10,000 employees at more than 80 facilities in 11 countries. The company was looking to consolidate its financial management systems across multiple lines of business and wanted to reduce costs by standardizing operating procedures across the business. Worthington also wanted to improve its manufacturing processes while providing near-real-time reporting to give managers the insights necessary to better manage the business. Higher system capacity, improved availability, and quicker disaster recovery were required to safeguard mission-critical systems. Worthington wanted to be able to do all of this without significantly increasing its IT team. A co-engineered IT system, with storage and applications designed to work together and purpose-built for Oracle Database, was Worthington’s solution to provide a foundation for global growth. The company upgraded its integrated ERP and supply chain management system on a managed-cloud services platform, implementing Oracle E-Business Suite Release 12 with Managed Cloud Services. Worthington’s new system improves scalability as the business grows and expands through acquisitions.
Most impressively, Worthington Industries was able to meet its goals and improve system availability to 99.8%.

Manufacturing Downtime = Penalties and Lost Business

Spain-based CELSA Group, a highly diversified manufacturer of forged, laminated, and processed steel, is the largest steel producer in Spain and one of the largest in Europe. With more than 50 companies operating on five continents, CELSA Group knew it needed to improve its IT infrastructure to support international growth. CELSA Group runs SAP ERP systems and was concerned about excessive downtime, which had delayed shipments of its steel products and exposed the company to late penalties and lost business. CELSA Group needed to be able to provide managers with better and timelier financial reporting and resource planning across its businesses, while optimizing its backup processes to reduce downtime. CELSA implemented a new IT infrastructure based on Oracle SuperCluster and Oracle Exadata Database Machine. With diligent project planning, migration was accomplished seamlessly in less than a day. Additionally, with the new engineered IT system, CELSA improved on-time delivery by eliminating downtime, saved more than $650,000 annually in labor costs, optimized financial reporting across 2,000 users in 50 entities, and tripled backup speeds.

Ready for the Cloud, Ready for the Future

By upgrading and standardizing their IT infrastructure on integrated, co-engineered technologies that are purpose-built for Oracle Database, these two companies have been able to realize tremendous improvements in efficiency. Because Oracle Engineered Systems have exact equivalents in the cloud (see Oracle Exadata Cloud at Customer and Oracle Exadata Cloud Service), both companies have gained flexibility unique to Oracle, allowing them to scale easily, cut costs, and gain a single view across the business for greater market agility.
Learn more about how Oracle Engineered Systems and cloud-ready solutions can address today's problems and prepare you for the shifting market demands of tomorrow. Read the CIO magazine and Oracle collaboration on cloud-ready infrastructure here:


Data Protection

No Downtime for the Enterprise

Today's guest blog comes from Andre Carpenter, Principal Sales Consultant at Oracle. It is astounding to think that IT availability concerns still exist in this day and age, even after the many technological advancements we have witnessed in the industry over the last ten years. Every year, numerous IT outages cause havoc and sometimes irreversible damage to both brand and revenue. British Airways claimed in this report that one such incident cost its business US$102 million and left at least 75,000 customers grounded over three days, resulting in a massive dent to the company brand. Sure, we have entered a digital age where companies are striving for innovation rather than spending precious IT budgets on “keeping the lights on” and addressing availability gaps. In fact, if you look at Gartner's list of top CIO priorities, you will see that digital transformation is king, and that for many organizations it is about creating differentiation and competitive advantage, and finding new routes to market to create new revenue streams. We have also seen the birth of the “Chief Digital Officer” role in many companies, tasked with driving growth and innovation in this digital age by transforming their companies’ traditional analogue-based systems into digital ones and finding new markets in new media such as mobile applications, social media, talent recruitment, virtual retail, and web-based information management and marketing. Even more staggering, that market is expected to boom over the next five years, with CIOs expecting their companies’ digital revenues to grow from 16% to 37%, according to Gartner’s recent CIO Agenda Survey of 2,944 CIOs in 84 countries.
CIO magazine defines digital transformation as “the acceleration of business activities, processes, competencies, and models to fully leverage the changes and opportunities of digital technologies and their impact in a strategic and prioritized way." Even though you might see different variations of what digital transformation means and what its purpose is, one common consensus among IT leaders is that it is essential to the survival of most organizations today. It is impacting every organization, but what is imperative and sometimes forgotten (and even omitted from Gartner’s CIO agenda insights) is that in order to stay relevant and competitive in this digital age, there can be no downtime for the digital enterprise; to remain competitive is to remain ‘always on’.

One Size Fits No One

A one-size-fits-all approach to data protection can be disastrous, so it is important to shift to a more intelligent and adaptive backup and recovery framework—one that looks beyond generic data recovery and incorporates specific database workload awareness, protection validation, and operational visibility. And with increasing volumes of transactional data and always-on applications, these data protection inefficiencies are adding complexity, cost, and risk. It’s critical to protect not just every few hours of your business, but every single transaction, down to the second. Let me break this down with a fictitious example: say your business runs a digital marketing platform responsible for bringing in 60% of your sales leads. Every day, thousands of leads come in via various digital channels (banner clicks, white paper downloads, social media), and that platform is underpinned by an Oracle Database.
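To put rough numbers on a scenario like this, here is a back-of-envelope sketch of the exposure involved; the lead volume, outage length, and restore point below are illustrative assumptions for a fictitious platform, not measurements from any real system:

```python
# Back-of-envelope loss exposure for a hypothetical marketing platform:
# leads arrive at a steady rate, and an outage loses everything since the
# last recoverable point (the recovery point objective, or RPO).

def leads_lost(leads_per_day, outage_hours, rpo_minutes):
    leads_per_hour = leads_per_day / 24
    # Exposure = the outage itself plus the unrecoverable tail before it.
    exposed_hours = outage_hours + rpo_minutes / 60
    return leads_per_hour * exposed_hours

# 3,000 leads/day, a 4-hour outage, restore point 15 minutes before failure.
print(round(leads_lost(3000, 4, 15)))  # hundreds of leads gone
```

The point of the arithmetic is that shrinking the RPO toward zero only trims the tail; the headline loss is driven by how long the database stays down and how much change since the last recoverable point cannot be replayed.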
A database outage of just four hours can result in hundreds and hundreds of leads being lost and never followed up, leading to potential lost revenue and brand impact. If the database was backed up by a purpose-built backup appliance, you may be able to restore to a point perhaps 15 minutes prior to the outage; either way, there is data loss involved, with no opportunity to recapture that data.

Leverage Technology That Limits, Not Increases, Loss Exposure

So how does one mitigate database loss exposure? Simple: implement a purpose-built hardware solution developed by the same team that writes the application. By doing so, the appliance becomes aware of what it is backing up; it understands the nature of the workload, so it ends up storing less data overall, shortening backup windows and improving the recovery point objective for that database. Here is one example of where we did just that for a customer that was continuously running out of storage on its generic backup appliance and had multiple failed backups across its 340 databases. We showed that by switching to an Oracle Zero Data Loss Recovery Appliance, its storage footprint would shrink by approximately 5x, its backup window would shrink by 59x, and its overall risk of an outage or data loss would be drastically reduced, saving thousands of dollars in storage whilst providing peace of mind that the business would not end up on the front page of a news site.

It's Not About Storage, It's About Recovery

Want to learn more? Get in touch with us today and let an expert come visit you on-site to assess how effectively you are backing up your Oracle Database today. We’ll also show you what tomorrow could look like with a database-engineered hardware appliance like ZDLRA.

About the Guest Blogger

Andre Carpenter is a seasoned IT professional with over 12 years’ experience spanning presales, delivery, and strategic alliances across the APAC region for many large vendors.
Prior to joining Oracle, Andre held a number of roles at HPE, including Principal Consulting Architect and Account Chief Technologist, helping customers drive their IT strategy and looking at how new and emerging storage technologies could impact their competitiveness and operations. He also evangelised HPE’s converged infrastructure and storage portfolio through product marketing, blogging, and speaking at industry conferences. Andre holds a Bachelor of Information degree as well as a Master of Management (Executive Management) from Massey University, New Zealand. You can follow Andre on Twitter at @andrecarpenter and on LinkedIn at www.linkedin.com/in/andrecarpenter


Cloud Infrastructure Services

How 3 Companies Reestablished Their Competitive Advantage

It's difficult to mention digital transformation without the word "agility" entering the picture. Gaining a competitive advantage in the business world increasingly revolves around an IT architecture that supports real-time interactions and data-driven decisions. This, in turn, unleashes new levels of innovation and even the ability to disrupt an industry—or even the world. Today’s digital enterprise is cloud-centric. According to IDC, enterprise adoption of cloud computing has reached 70%, and 56% of organizations are looking for opportunities to implement the cloud. As a 2015 McKinsey & Company report, “From Box to Cloud,” points out: "Cloud computing is moving closer to the center of executives' strategy discussions." Yet, despite these advances, it's painfully apparent that some organizations struggle to develop a plan that moves them out of the data center business so IT can focus on driving more value to the business.

What Cloud-Native Startups Can Teach Enterprises

Plenty of industries—the newspaper business being an excellent example—were staring down their own demise long before cloud and mobile apps became major disruptors. Yet the New York Times recently reported its third straight quarter of better-than-the-economy growth. Revenue at the paper is up 6.8%, due largely to migrating away from a traditional industry revenue model (ad revenue) and embracing the “freemium”-type model that so many startups have found success with. The newspaper leveraged social media to bring quality journalism to a new generation of readers and subsequently blocked users from accessing more than five articles a month. Many consumers held out, but by taking this fresh approach the New York Times now boasts over 2.5 million digital subscribers, a number it has managed to keep steady for more than a year. One thing centenarian businesses are good at is adapting to changing market conditions. They wouldn’t be around so long if this weren’t the case.
So how can long-standing organizations start to adopt a more agile business model? How can they leverage new technologies to gain a competitive edge? What does a best-practice approach look like? There are some important lessons we can learn from industry leaders that have embraced disruption and leveraged cloud technologies effectively.

3 Companies That Adopted Agile Cloud-Ready Infrastructure to Meet New Demand

Consider All Nippon Airways. The company, which operates a fleet of 240 aircraft and accommodates upwards of 49 million annual passengers flying to 72 destinations, had to find a way to differentiate itself from a spate of upstart, low-cost carriers. Executives recognized that the carrier couldn't compete on price alone—at least not without undermining or completely abandoning its current business model. Achieving closer interactions with customers was at the center of its digital strategy. ANA needed a high-performing and scalable email system that could deliver real-time notifications to flyers. This capability, executives recognized, would reduce pressure on ground and phone agents to provide updates about flight cancellations, changes, and other important matters. Although the cost gains were clear, the initiative also ratcheted up value for customers. They could view notifications on their smartphones and stay up to date without calling in to the airline. ANA turned to Oracle Exadata Database Machine, along with Oracle GoldenGate and Oracle Enterprise Manager, to create a real-time data framework. The cloud-based environment allows passengers to view events as they happen. No less important: passengers can customize notifications, including what they are informed about and how frequently ANA communicates with them. All told, the migration to the new IT platform required only eight months; a conventional migration would have spanned about 13 months.
Other companies have realized similar gains by adopting an agile, flexible IT framework to support their businesses. Macy's, another company with a long history, turned to Oracle Database Exadata Cloud Service when it wanted to introduce an intelligent merchandising application in 2016. The solution delivered a broader and deeper understanding of customers’ evolving preferences, and provided predictable and consistent application performance.

Nippon Paint Holdings, a 136-year-old business, embarked on a global expansion strategy through Oracle Engineered Systems. Oracle SuperCluster M7 delivered a high-performing and scalable platform to improve the company’s competitive position and support global growth: by consolidating five database servers into a single storage system, the firm has seen a 5x gain in data load speed, from 2GB per second to 10GB per second.

Ready to Breathe Some Fresh Air Into Your Business?

Becoming cloud-ready is an important step in preparing to tackle the challenges of digital business. The cloud delivers potentially business-altering gains in scalability, flexibility, and cost-efficiency. It provides the framework to streamline processes and workflows while introducing opportunities for entirely new products and services.

In the coming years, enterprises that adopt cloud technology and apply it in broad and deep ways throughout the organization will be far better positioned to emerge as digital leaders. Digital business requires a fast, flexible, and dynamic framework. Clouds are now at the center of business and IT transformation.


Cloud Infrastructure Services

Backup is Worthless If You Can’t Recover

Today's guest blog is by Kerstin Woods, Director, Converged Infrastructure and Cloud Storage at Oracle.

They say a picture is worth a thousand words. But if that’s true, how much more value does a video deliver? What if backup and recovery of your data were more like a video than isolated pictures?

OK, so you do your usual, ‘conventional’ backup, capturing your data at a particular moment in time. Maybe you do this once or twice a day, perhaps more often. But even if you back up once an hour, the result will only ever be a ‘still’: a static image from an isolated moment in time. How much data have you missed in the hour(s) since your last exposure? It’s like taking a handful of isolated pictures, hours apart, of your vacation, or your child’s school play. Imagine all you missed in between those shots: the unexpected, priceless memories, perhaps even the most crucial moments of all. What could you have done instead? Switched to video and not missed a second, of course.

And if you value your business like you value your personal memories, it should be no different when it comes to backing up your data. You need a video-like record of everything that happens, one that ensures all your data is continuously protected and quickly recoverable, second by second, not with gaps of hours or days. Unfortunately, your current conventional backup system probably can’t offer such protection, and is most likely putting you at risk of losing precious business transactions. In fact, the average data loss risk exposure for mission-critical applications is a frightening 4.8 hours, almost five hours of potential lost business and revenue! It’s time to rethink backup and instead set the bar on recovery. Because make no mistake, when you need your data most, the ability to recover it is everything. After all, who wants to recover ‘some’ of their business, only to have to piece the rest back together with guesswork?
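The gap between periodic snapshots and continuous capture can be made concrete with a back-of-the-envelope calculation. If failures strike at a uniformly random moment between backups, the expected data-loss window is half the backup interval. A minimal Python sketch (the strategy names and intervals are illustrative assumptions, not Oracle figures):

```python
def average_loss_hours(backup_interval_hours: float) -> float:
    """Expected hours of data lost when a failure strikes at a uniformly
    random moment between two periodic backups: on average, half the
    interval has elapsed since the last backup."""
    return backup_interval_hours / 2.0

# Hypothetical backup schedules, chosen purely for illustration
strategies = {
    "daily backup": 24.0,
    "twice-daily backup": 12.0,
    "hourly backup": 1.0,
    "continuous capture": 0.0,
}
for name, interval in strategies.items():
    print(f"{name}: ~{average_loss_hours(interval):.1f} hours of exposure")
```

Even hourly backups leave roughly half an hour of transactions at risk on average, which is why the argument here is for continuous, video-like capture rather than merely more frequent snapshots.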
You need every critical transaction to be recovered; zero hours of risk. To make that happen, to ensure recovery with no data loss exposure, you need continuous ‘live video-like capture’ backup of your Oracle databases. And only Oracle can do this. Oracle’s Zero Data Loss Recovery Appliance (ZDLRA) is completely unlike any other database recovery solution (find out more in this IT Leaders’ Guide to Eliminating Oracle Database Data Loss). Effectively a video recorder for your Oracle databases, it is “…extreme TIVO for databases…”, according to Mark Peters, Practice Director & Senior Analyst at ESG, “…recording every change and playing back to any point in time, on demand.” Engineered by the same team that develops Oracle Database, everything is designed and optimized to work together across the whole stack. Like ‘video on demand’ for database data recovery, it is unparalleled and tremendously efficient. The ability to recover to any point in time from a DVR-like replay of your backups is certainly worth a whole lot more than a thousand words from isolated pictures.

Want to learn more? Read what IDC has to say in their recent report, Oracle's Zero Data Loss Recovery Appliance: A Transaction DVR for the Enterprise, and visit the Data Protection Resource Center.

Kerstin Woods is a 15-year IT industry veteran with experience spanning global alliances, product/solution development, partner marketing, and product marketing in both Fortune 500 and early-stage startup companies. Kerstin also brings experience in consulting and launched her professional career designing rockets in the aerospace industry. She studied mechanical engineering, psychology, and business at Stanford University. In her role as Director of Converged Infrastructure and Cloud Storage at Oracle, Kerstin oversees various product marketing, sales enablement, and go-to-market activities across the storage portfolio.


Cloud Infrastructure Services

Grow Your Customer Base With the Right Infrastructure

With the global economy on the upswing, organizations have the opportunity to lay the foundation for business growth. But slow, outdated IT systems can act as bottlenecks and prevent organizations from growing as fast as, or faster than, their competitors. The disparate, hodgepodge architecture built up over the years often requires extensive resources to upgrade and maintain, and suffers from a form of technical debt. Technical debt is a term often used in the startup and app-dev world to describe shortcuts in coding that get the job done in the moment but are painful for others to work around. In IT, this translates to systems that were tacked on, refreshed on a different schedule, or simply supplied by a vendor, resulting in a DIY infrastructure that few know how to manage. When infrastructure is not easy to manage, it is not easy to update. What technology do you use today that didn’t exist five years ago? A year ago? Six months ago? Companies need to be fast and flexible enough to keep up. The inability to access data for analysis and insights in real time can be detrimental to the business when you need to make decisions and serve customers quickly.

Actionable Insights Fuel Business Growth

When the business can easily collect, access, analyze, and act on real-time customer data, the results can be amazing. These insights can fuel business growth by helping you expand your customer base to new but highly targeted consumers, as well as deepen relationships with existing customers.
Data can fuel customer base growth in a number of ways:

Product development: Collect and process customer feedback to refine existing offerings, mine data for new product ideas, and power product lifecycle management data for new product development.

Operations: Scale operations quickly and efficiently to support current and future customers, and strengthen customer relationship management capabilities to provide seamless service and support.

Marketing: Develop more targeted marketing programs that use rich customer data to personalize offers and expand your customer base.

To lay a foundation for customer growth, organizations are building cloud-ready infrastructures with a mix of private and public cloud offerings that enable fast data access and processing, yielding insights for better customer acquisition and support.

Deploy Infrastructure That Enables Growth

Insights help businesses make better decisions, and data is key to those insights. In an IDC study about the business value of Exadata, a customer explained: “With Exadata supporting our database operations, we make better decisions, which is resulting in more revenue and lower costs. I’d say that we get 10% more revenue per year. Also, we’re saving money in processes like procurement and in areas like advertising — millions of dollars per year. It’s a lot of money.” By deploying solutions that have been co-engineered with the Oracle Database team, your infrastructure runs more efficiently, making it better equipped to handle and scale for the amount of data your business creates and subsequently needs to analyze. It’s easier to spot the growth opportunities hiding in plain sight. Sometimes, the growth opportunity lies in addressing your database performance and latency issues.

Stop Missing Revenue Opportunities

With 1,200 properties in almost 100 countries, Starwood Hotels is one of the leading hotel and leisure companies in the world.
But Starwood managers were receiving reservation information from the centralized data warehouse five hours later than they needed it. That meant they couldn’t adjust rates fast enough to accommodate demand given their current supply of rooms, resulting in lost bookings and, ultimately, lost revenue opportunities.

Starwood sped up its data warehousing by implementing Oracle Exadata Database Machine running on Oracle Linux. It also chose Oracle Advanced Customer Support Services to deliver the Oracle Solution Support Center.

Now Starwood Hotels is able to provide real-time data feeds, meaning Starwood managers can access transaction updates from the data warehouse within 5 to 10 minutes, rather than waiting up to 24 hours. Not only is the company better able to capture more booking opportunities with its accelerated reservations processes, but faster data processing in Starwood’s data warehouse also supports its enterprise-wide loyalty program, sales data, customer service information, and marketing campaigns. “Oracle Exadata processes the information with extreme efficiency, and we have been impressed with the performance improvements we’ve realized,” said Marcello Iannuzzi, project manager, Starwood Hotels & Resorts Worldwide, Inc. “We have used Oracle products for a long time, and choosing Oracle Exadata enables us to preserve existing IT and human capital investments, particularly in our data warehouse infrastructure.”

Go to Market Faster

Some new revenue opportunities are within reach simply by deploying a new system to support them. Telepin, a leading provider of the mobile transaction infrastructure software that powers some of the biggest brands in mobile money and payments, recognized that cutting deployment time meant realizing revenue faster.
“Oracle Database Appliance helped us go to market with a simpler-to-deploy platform,” says Vincent Kadar, President, Telepin Software. “The faster we could deploy this solution in a box, the faster we could start recognizing revenue. The pay-as-you-grow licensing model allowed us to align costs to revenue."

With the economy in recovery mode, your customers are working with increased budgets and looking to spend them. Make sure your IT infrastructure is ready to capture those opportunities by being able to meet demand. Organizations that deploy optimized and integrated solutions with cloud-ready options now are better equipped to meet and anticipate customers’ needs and to scale easily to handle future growth.


An Important Announcement From This Year’s Oracle OpenWorld

Did you get a chance to visit Oracle OpenWorld 2017? If so, you probably learned about some of the major announcements Larry Ellison made about automation, AI, and machine learning, specifically around the new Autonomous Database. You may have also seen him pit Oracle against Amazon on stage with six real-world workloads, to prove that Oracle’s autonomous database in the Oracle cloud is faster, and therefore cheaper, than running Oracle Database in the Amazon Web Services cloud or using AWS’s own Redshift database. While Larry's keynotes may have stolen the show, engineers and product teams were hard at work covering Oracle solutions with live product demos, hands-on labs, and product showcases highlighting new products and new features for Oracle Infrastructure.

To me, the best part of Oracle OpenWorld is learning about new products, which is why I’m excited to tell you about enhancements to the purpose-built Oracle Database Appliance.

Purpose-built to run Oracle Database

I think we can all agree that databases represent a critical component of the IT department’s efforts and functions because of what they hold: data, your organization’s most critical asset. A significant portion of IT spend goes to maintaining and caring for those critical databases. Staying with traditional IT models may impact your success because you’re probably dealing with multiple vendors to run your database and applications, and that’s simply not a good long-term strategy. If you want to get the most out of your Oracle Database investment, a purpose-built appliance is a great solution.

What's new with the Oracle Database Appliance X7-2 portfolio?

If you’re an Oracle engineered systems fan, you may know that we introduced our sixth generation of Oracle Database Appliance at Oracle OpenWorld. So, what’s new?
To start, we now have three models in the portfolio that include the latest Intel Xeon processors, an 80 percent increase in core count, and more storage capacity than the previous generation. In addition, there is support for KVM virtualization, expanded VLAN support, and SE RAC support on the high-availability model.

Simple to implement, manage, and support

When I meet with customers, they tell me that they’re being asked to do much more with fewer resources at their disposal. Quite frankly, this is where Oracle Database Appliance delivers real help. A single database administrator (DBA) can have the system up and running in 30 minutes. Automated patch bundles help streamline maintenance for all the elements of the software stack, including firmware, OS, storage management, and database software. These fully tested patch bundles can be deployed quickly and safely, which eliminates error-prone tasks typically confronting administrators. Listen to Lead DBA Kerry Jacobs on why Mercer chose Oracle Database Appliance and how easy it was to deploy.

It's optimized for your database solution

No one knows Oracle Database better than Oracle; it’s what we do. With Oracle Database Appliance, hardware and software engineers worked together to design a system completely optimized to run Oracle Database. All models include networking compatible with any data center, storage options that include NVM Express (NVMe) flash storage, and HDD storage for increased capacity. Pre-installed Oracle Linux and Oracle Appliance Manager, along with support for virtualization, add flexibility to an already complete and fully integrated database solution.

Best part? It's affordable for every organization

I often ask customers if they would consider an engineered system to run their database, and many times they reply that engineered systems are too expensive. That isn’t the case with Oracle Database Appliance. Our X7-2S model starts at the incredibly low entry price of $18,500.
Combine this with the flexibility to run various Oracle Database software editions and capacity-on-demand licensing for even more value. Capacity-on-demand licensing allows you to deploy Oracle Database Appliance and license as few as two processor cores to run your database servers. You can then incrementally scale up to the maximum number of processor cores in each system, delivering CAPEX efficiency and room to grow for a more optimized TCO. Also consider the time saved researching compatible components, creating and processing multiple orders across multiple vendors, waiting for all the various elements to arrive, and then assembling and validating a build-your-own system. Listen to Ehab Badr from Catalyst Business Solutions and find out how they helped lower costs for their mid-market customers with Oracle Database Appliance. This is real value, OPEX and CAPEX savings, and it’s what we deliver to you every day.

Integrated with Oracle Cloud

Whether you are using cloud today or planning to in the future, Oracle Database Appliance provides a bridge between on-premises deployments and Oracle Cloud. This bridge makes it easy to implement a combined on-premises/cloud strategy to support backup, dev/test, or even disaster recovery environments in the cloud.

As we head toward the new year, I’m excited to offer the simple-to-deploy-and-use, fully optimized, and affordable Oracle Database Appliance, purpose-built for your Oracle Database. For more information, please visit: www.oracle.com/oda
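The pay-as-you-grow idea behind capacity-on-demand licensing can be sketched in a few lines of Python. This is a simplified illustration only: the per-core price and growth path below are invented for the example, and real Oracle licensing terms vary by edition and contract.

```python
def annual_license_cost(active_cores: int, per_core_price: float) -> float:
    """Capacity-on-demand: you pay only for the cores currently enabled,
    scaling up incrementally as the workload grows."""
    return active_cores * per_core_price

# Hypothetical growth path: start at the 2-core minimum, enable more
# cores as demand grows. The per-core price is purely illustrative.
PER_CORE = 5_000.0
for year, cores in enumerate([2, 4, 8, 16], start=1):
    cost = annual_license_cost(cores, PER_CORE)
    print(f"Year {year}: {cores} cores -> ${cost:,.0f}")
```

The point of the model is that the license bill tracks the enabled core count rather than the full capacity of the box, so early-year costs stay aligned with early-year revenue.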



5 Things at OpenWorld That Made Me Rethink Oracle

If I had to describe my first Oracle OpenWorld in two words, they would be "disruptive innovation." I’m not going to lie: when I first joined Oracle, I thought of it as a 40-year-old database company with an interesting past, but I didn’t have a clear view of its future. That view came sharply into focus after attending Oracle OpenWorld 2017, my first OpenWorld event. I saw first-hand that Oracle is building cutting-edge, transformative technology into every layer of its product portfolio stack: IaaS (Infrastructure-as-a-Service), PaaS (Platform-as-a-Service), SaaS (Software-as-a-Service), and now DaaS (Data-as-a-Service). What’s more, Oracle has designed its technology to help customers personalize their paths to the cloud through one seamlessly integrated platform that lets users operate critical business applications, all from a single vendor. The result is an IT environment that efficiently and cost-effectively drives business transformation, one that is both secure and scalable. Most importantly, Oracle is taking the lead in helping customers unleash their innovation. One way Oracle is doing this is through automation, which was a pervasive topic at OpenWorld. Attendees wanted to know how automation is going to help them, and they got lots of input! On a related note, Mark Hurd’s keynote at Oracle OpenWorld helped validate how Oracle is unleashing innovation in key ways. “Eighty percent is generous — 85% of IT budgets is spent on just basically keeping the existing systems running, so [there is] very little innovation [budget left for customers],” explained CEO Mark Hurd in his keynote address. “Our objective has been to have the most complete suite of SaaS applications, the most complete suite of PaaS services, and the next generation of Infrastructure-as-a-Service that all work together to complement each other.
That’s what we’ve built out, and that’s what we now have.” In other words, Oracle is helping its customers say goodbye to complexity and hello to simplicity.

All of the keynotes energized me, but I also attended various sessions and had the privilege of speaking with many customers and partners around the show floor. They shared specifically why and how Oracle is revolutionizing their businesses. Here are my top five highlights from the event:

1. Keynote by Cloud Business EVP Dave Donatelli

During his keynote, “Oracle's Integrated Cloud Platform, Intelligent Cloud Applications, and Emerging Technologies for Business,” Dave shared how Oracle offers a number of options to help organizations evolve their cloud strategies on their own terms. Many of the customers I chatted with shared their journeys, referencing Dave’s six journeys-to-the-cloud model:

Optimize your data center: Oracle Engineered Systems are powered by Exadata X7, which is available as the traditional on-premises machine, as an Oracle public cloud machine placed behind your firewall, or as an Oracle cloud service deployed in our cloud.

Cloud at Customer: In this path, the same hardware and software used in the Oracle Public Cloud is located in your data center and offered in a cloud-like subscription model with no maintenance requirements.

Oracle Cloud Infrastructure (IaaS): This is a complete layer of compute, storage, and networking. An important part of what happens here is data management and processing with tools such as Oracle’s Big Data Appliance.

Create a new cloud with Oracle PaaS and IaaS: Oracle makes it possible for developers to create applications with new technologies such as blockchain and machine learning, and to deploy those applications wherever they like.
Transform operations with SaaS: Oracle has written all of our SaaS products to a common data model, which makes it easier to add business processes to your cloud environment on your schedule and without being locked into inflexible packages. It’s also easy to start with Oracle’s “vanilla” SaaS solution and customize it to add automation like bot-powered voice commands or augmented reality for training.

Build a “born-in-the-cloud” business: This isn’t just for start-ups. Oracle customers are launching new divisions and units as cloud-first businesses to find the speed, agility, and growth trajectory they need to be successful.

Dave also covered the critical importance of the DaaS layer in enterprise technology for targeting and personalizing messaging and offerings to customers, and then measuring results to adjust for even more effective marketing. This is something many companies are doing to grow their revenue and market share, he said.

2. Customer Validation of Oracle Innovation

I was struck by the stories of our users. They made it clear that Oracle innovations are making them more successful. For example, Oracle Exadata Database Machine X7’s ability to automatically transform table data into in-memory columnar data in flash cache enables enterprise research and other big data processing needs at scale. Some of my tweets from the event summarize their stories:

“Oracle’s range of #PaaS and #IaaS services are enabling researchers to do enterprise research at scale” -@CERN #OOW17 #CloudReady

RECVUE is powered by @OracleBigData to help process over 50 million transactions/day for revenue & billing management. #OOW17 #CloudReady

I had a chance to hear from Securex, Cloud Architect Winner of the Year at Oracle’s Global Leaders program, as they shared the following: “Today IT must deliver the capabilities for the business to drive agile data analysis and BI in a self-service manner.
So we turned to Oracle and the cloud.” I also talked to a lot of customers who were concerned about scaling security. The big announcement of Oracle’s Autonomous Database and Highly Automated Cyber Security, and how they work together to secure data faster and better than any alternative, caused a lot of excitement. “Oracle has helped make big data a driver of business success,” said Luis Esteban from CaixaBank, winner of the Oracle Innovation and Cloud Ready Infrastructure award. “We now drive better quality, customer knowledge, and sales, and mitigate fraud threats.”

3. Big Data, Machine Learning, and Cloud Strategy, Oh My!

Big data continues to be top of mind for so many businesses. I attended some key sessions intended to highlight the opportunities of big data and provide practical solutions to common challenges.

General Session on Big Data Strategy

Of course, many people were looking for ways to glean more value from the volumes of data their organizations collect. Today, successful big data projects are enabling more than 50% of organizations to see increases in revenue or reductions in cost. The big takeaway from this session was that Oracle’s Big Data cloud offerings can scale on demand, shifting how an enterprise plans for capacity and analysis.

Big Data and Machine Learning and the Cloud

The lasting impression I have from this session is that it drew back the curtain on the technology that enables key business use cases for big data: innovation, customer insight, operational efficiency, fraud, risk, and compliance. It examined real-world examples of each use case and the technical architecture that supports them. For example, Oracle Big Data Manager is a new feature of the Oracle Big Data offering that uses machine learning to help users identify their most profitable sales opportunities, customers, markets, and more.

4. Feet on the Street: The Show Floor

What would a live event be without selfies?
I was privileged to tour the show floor and capture some pictures with folks interested in sharing why they came to Oracle OpenWorld:

Vital from Charter Communications shares why he’s at #OOW17: “To see how I can leverage #ML, #AI capabilities in @Oracle’s cloud offerings so I can drive more efficiency and worker productivity for the team that I manage.”

Why do you love @Oracle? “We have 12.5 million employees so PeopleSoft has been incredibly helpful!” #OOW17

“I’m most impressed with how @Oracle is integrating analytics tools into a single Oracle analytics cloud.” - Fors partner #oow17 #CloudReady

5. I Had Fun!

Not only did I see just how powerful Oracle technology is for helping organizations modernize, innovate, and compete in a digital world, but I had fun! I will always remember these words from Larry Ellison during one of the keynote sessions. They represent for me the disruptive innovation I saw in action, and how Oracle is making that disruptive innovation work for customers: "We unify the data, we analyze the data, and we automatically detect and protect your data—all in one unified system."

Find out more about these big data innovations in the blog post "Announcing: Big Data Appliance X7-2 - More Power, More Capacity." If you’d like to catch up on the Oracle OpenWorld 2017 keynotes you may have missed, visit https://www.oracle.com/openworld/on-demand.html.


Cloud Infrastructure Services

How Cloud Infrastructure is Reshaping the European Enterprise

Today's guest blog comes from James Stanbridge.

Europe’s CIOs are under enormous pressure to rethink their IT strategies. Around them, information-driven business models and cloud-native applications have allowed small competitors to become major threats. Meanwhile, decision-makers in the boardroom are demanding that IT be used to support more agile ways of working so they can compete in this challenging environment. Market dynamics will never be the same, and established businesses must adapt or risk falling behind. Success will start with faster IT delivery. Where IT departments were once seen as cost centres, it now falls on CIOs to modernise their service management processes and turn their teams into revenue generators. Until recently, IT’s focus was primarily on automating and streamlining back-office processes. The priority has now shifted to making technology systems and software customer-facing and far more dynamic, which requires the ability to meet constantly changing requirements. Legacy IT infrastructures cannot keep up, however. Companies need a delivery platform that allows them to integrate systems and develop new services more quickly. Public cloud providers have responded by delivering cloud infrastructure on demand. On-demand cloud platforms support IT’s shift to new delivery models and help companies better serve the working habits of digital and mobile users. However, not all public clouds are created equal. Oracle announced the expansion of its own EU cloud region in early 2017, launching new datacentres and infrastructure services in Germany to support our European cloud customers. With the datacentre region now fully operational, what benefits can businesses expect from Oracle Cloud?

A single versatile infrastructure for your entire organization

Oracle’s cloud infrastructure supports both traditional and cloud-native applications.
This means businesses can run both environments simultaneously as they transition to a fully digital way of working and migrate their mission-critical applications to the cloud. We also have the only cloud that offers both dedicated and virtual resources on demand via the same API and on the same infrastructure.

An enterprise-friendly pricing and payment model

It’s been well established that the cloud delivers significant savings over on-premises IT, but Oracle also offers the only cloud that is priced specifically for production workloads. From compute to storage, enterprises will find increasing savings as they deploy and scale. Our cloud is also designed to support many existing open source tools and integration methods, addressing both lock-in and operating costs. Finally, our pricing and payment models are tailored to each business’s needs, which means companies can scale their cloud transition and become more data-driven at the pace that’s right for them.

Your cloud, our resources

For IT teams, one of the major advantages of Oracle’s cloud on demand is that they get a highly available and secure infrastructure while maintaining full control over their computing resources, as well as the ability to manage those resources in an agile fashion. No other cloud was created to deliver this level of control for enterprise-scale organisations. There are many ways in which the cloud is revolutionising the way businesses buy and consume technology, and these have come just in time. European businesses have never had to answer to so many stakeholders or deliver on more complex demands, and cloud infrastructure is ideally suited to helping them stay on top of these expectations. That is why we are seeing cloud adoption explode across Europe.
For a deeper dive into Oracle Cloud Infrastructure and how it is helping European IT leaders transform their businesses, check out this in-depth overview from our own Leo Leung and Torsten Boettjer, Directors of Product Management for Oracle Cloud Infrastructure.

James Stanbridge is Vice President, Product Management, Oracle Cloud Infrastructure, EMEA and Asia for Oracle. He is responsible for the organization's IaaS strategy and operations across the two regions and ensures that Oracle’s customers are able to realize the benefits of cloud technologies. Prior to joining Oracle, Stanbridge led infrastructure and massive-scale services such as AltaVista Search, Hotmail, and MSN Messenger, and later Azure and Office 365, going on to lead enterprise engineering support teams. He is a passionate cloud evangelist and a specialist in enterprise environments and cloud migration strategies. James studied Technology Management at Wharton Business School and also serves as an executive coach to leaders at corporations including Microsoft, Hewlett Packard, and PayPal, as well as start-up entrepreneurs and NGOs.


Cloud Infrastructure Services

Automation Provides the Key That Makes IT Thrive

Is there anyone in IT who doesn't have competing priorities? The reality is that IT will always be called on to provide faster performance, more functionality, and greater adoption of new technology, often without a commensurate rise in resources.

Vastly Diminishing IT Spend and Resources

Gartner forecasts global IT spending each year, and the enterprise purse strings are unlikely to loosen much any time soon. In 2016, global IT spending declined from 2015 levels. This year, Gartner significantly cut its forecast to a modest 1.4% rise, down from a previously projected 2.7%. The firm cited two major factors: a strengthening U.S. dollar and a continued slowdown in the server market. What about "doing more"? In a recent survey from Spiceworks, nearly two-thirds of the respondents said they are not planning any staffing increases in 2017. That leaves the IT team to manage more data (on premises and in the cloud), fight cyberattacks, provide more business insights for better decision-making, and keep systems up and running, all without the help of more staff. Managing day-to-day operations more efficiently frees up time for IT to focus on activities that improve business performance. And the key to operating more efficiently, and thriving, in this new-normal environment is automation. Automating the database, for example, significantly reduces human error, which allows the business to run more smoothly, efficiently, and cost-effectively.

To Thrive, Start by Consolidating IT Infrastructure

Michael Paul, Sr. System Administrator, Pacific Gas & Electric and CTO, Yen Interactive, identifies the elephant in the room when it comes to infrastructure inefficiency.
In a LinkedIn post, "Doing More With Less In Information Technology," he says: "Only when a government or corporate entity collects all the information about what the individual silos do and the impact they have, they often discover an utter lack of synchronization across their IT infrastructure that causes them to be far more reactive than proactive and that drives up costs." Silos, especially those that are deeply ingrained in the organization, must be bridged. To address the dilemma, you need engineered systems that integrate every layer of the IT stack across the enterprise, from the data center to the cloud and back again. With this integration, organizations can implement automation and other efficiencies that eliminate manual processes. This simplifies, streamlines, cuts costs, and reduces errors. And those are all benefits that help IT thrive in support of the business vision.

Automate Rote Data Center Tasks

The theme of this year's Oracle OpenWorld was certainly automate, automate, automate. Larry Ellison debuted Oracle Database 18c, the world's first self-driving database, and drove home the fact that automation removes human error and reduces staffing costs. From the standpoint of those on the front lines, this actually translates to job security. Leaner teams can now move on from day-to-day maintenance tasks to innovation projects that deliver value to the business. Consolidation onto the complete stack of Oracle solutions guarantees tighter automation throughout: systems are updated and patched together, so the business stays secure.
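The payoff of automating rote checks is consistency: the same tests run the same way on every system, and humans review only the exceptions. Here is a minimal, hypothetical sketch of that pattern; the hostnames, metrics, and thresholds are invented for illustration and stand in for whatever a real monitoring stack would collect:

```python
# Illustrative sketch only: turning a rote operations checklist into code so
# it runs identically everywhere, instead of by hand and by memory.

def check_host(metrics: dict) -> list:
    """Return a list of issues found for one host's metrics."""
    issues = []
    if metrics["disk_used_pct"] >= 90:
        issues.append("disk nearly full")
    if not metrics["backup_ok"]:
        issues.append("last backup failed")
    if metrics["patch_lag_days"] > 30:
        issues.append("patches more than 30 days behind")
    return issues

def run_checks(fleet: dict) -> dict:
    """Run the same checks across every host; report only the exceptions."""
    return {host: probs for host, m in fleet.items()
            if (probs := check_host(m))}

# Hypothetical fleet snapshot:
fleet = {
    "db-prod-01": {"disk_used_pct": 72, "backup_ok": True,  "patch_lag_days": 12},
    "db-prod-02": {"disk_used_pct": 95, "backup_ok": False, "patch_lag_days": 45},
}
report = run_checks(fleet)  # healthy hosts never reach a human's inbox
```

The design point is the exception-only report: automation does the repetitive scanning, and staff time goes to the two or three hosts that actually need attention.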
Here's how two major brands and a national utility agency got more out of their IT systems through consolidation and automation:

1) Engineered Systems Drive Efficiency and Growth for Jiangxi Isuzu Motors

When Isuzu Motors and Jiangling Motors entered into a joint venture in 2013 and became Jiangxi Isuzu Motors Co., Ltd., the business had to rapidly boost manufacturing efficiency in order to meet its aggressive goal of introducing pickup trucks and SUVs to new markets. By deploying Oracle E-Business Suite Release 12.2 on Oracle Exadata Database Machine and Oracle Elastic Cloud, the company was able to process a massive volume of daily workloads, enabling it to triple year-over-year sales. At the same time, it was able to reduce its systems maintenance staff by 50% by eliminating multiple hardware and software vendors and consolidating with Oracle Engineered Systems. Jiangxi Isuzu Motors achieved further impressive results, including increasing business agility across the entire supply chain, standardizing financial reporting to meet compliance requirements, improving manufacturing efficiency, gaining real-time visibility into processes, and ensuring business continuity by eliminating single points of failure. The integrated platform across the enterprise and the ability to streamline and automate processes gave Jiangxi Isuzu, and its IT team, the power to support extremely rapid growth while lowering costs.

2) Sprint's Sr. Technical Architect Gets a Better Night's Sleep with Oracle Exadata

Wouldn't it be nice if your IT infrastructure helped you sleep at night? That's what it did for Sprint's Richard Ewald. The telecom giant's senior technical architect for data warehousing was searching for a database solution that would speed performance to meet the demands imposed by processing 15 billion transactions a day and supporting 300 concurrent users for its 24/7 operations.
Ewald chose Oracle Exadata for its speed and stability, and to reduce the data center footprint and storage space required. The performance gains, data center space reductions, and reliability improvements were nothing short of amazing. With the help of automation capabilities made possible by engineered systems, he was able to reduce the time to produce a report from seven days to about seven hours, cut the data center footprint by about two-thirds, and eliminate more than 150 TB of indexes on the database. The result? Ewald's team gained reliability that eliminated middle-of-the-night emergency calls. As an added bonus, Platinum Services helped Ewald's team sleep easy knowing that it would receive notification of problems even before IT was aware of anything wrong.

3) SuperCluster and Private Cloud Provide Agility to Public Pension Agency

In a different scenario, the Saudi Arabia Public Pension Agency (PPA), an entity within the Kingdom of Saudi Arabia's Ministry of Finance, was looking for a solution that would empower IT by consolidating systems and by providing better performance and a disaster recovery architecture across two separate sites. PPA's UNIX environment was consolidated into a single integrated system with Oracle SuperCluster, and a private cloud was created. PPA also installed Oracle Exadata X4-2, Oracle GoldenGate, Oracle Enterprise Manager 12c, and Oracle Enterprise Manager Ops Center 12c. PPA opted for a subscription-based model with automatic software integration. This fully integrated infrastructure allowed PPA to realize some impressive results. The new systems gave the IT team faster response to unforeseen events, significantly improved information security, and fast deployment and easy updates.
In addition to reducing the strain on IT and automating rote tasks like database deployment and compliance audits, the solution allowed the organization to make better use of its resources and enhanced its risk mitigation.

Do More With Less Hardware

When the IT stack is built from the ground up to provide a single, integrated solution, it offers the potential for greater efficiency, higher performance, improved reliability, and a reduced data center footprint. It also provides the scalability and flexibility to grow with the enterprise and move seamlessly between on-premises and cloud environments. And all that helps the IT team not just do more with less, but automate processes that help the business achieve aggressive goals. With a co-engineered stack that is cloud-ready and automation-friendly, IT doesn't have to just survive; it can thrive.


So Much Data, So Little Hassle: Building an Infrastructure to Tame Your Data

The promise of big data is essentially unlimited. Organizations across the globe are just scratching the surface of vast data mines to reveal new insights and opportunities. At the same time, distributed data storage and processing power in the cloud mean lower cost and more linear scalability to meet needs. But with great promise comes great complexity. Enterprises planning a data management infrastructure to access and analyze big data face challenges that include disparate data access, security and data governance issues, and an IT skills gap. What's needed is a single view into your data so that data scientists can spend more time analyzing it and less time merging it all together. The solution lies in integrating a unified query system into your streamlined infrastructure, one that automatically handles processing and joining data behind the scenes to present a clear picture of the data, without hunting across silos or working with multiple APIs and query languages. For example, Oracle Big Data SQL lets data scientists use familiar SQL queries to mine data across Hadoop, NoSQL, and Oracle Database quickly and seamlessly, essentially making big data as manageable as small data. Here's how three Oracle customers are using innovative infrastructure solutions to tame and draw value from their data:

CERN: Visualizing Scientific Discovery

CERN's Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator, with 50,000 sensors and other metering devices generating more than 30 petabytes of data annually. This information tsunami is taxing the 250 petabytes of disk storage space and 200,000 computing cores in CERN's data centers, a problem exacerbated by an essentially flat IT budget. At the same time, research scientists must extract and interpret data from the Hadoop platform, typically without the specialized technical skills such queries require.
CERN is using the visualization tools in Oracle Big Data Discovery to transform raw data into insight, without the need to learn complex tools or rely only on highly specialized resources. They use this data to ensure that CERN's accelerators are operating at their full potential and, if not, to identify what's required to return them to capacity.

Institut Català de la Salut: Dashboards to Drive Better Healthcare

With almost 40,000 employees, Institut Català de la Salut is the largest public healthcare provider in the Catalonia region of Spain. In addition to providing care to more than 6 million citizens at hospitals and walk-in clinics across the region, Institut Català conducts research and trains specialists and students. As part of its digital transformation, the organization implemented a high-performance database solution to house and manage vast amounts of strategic, tactical, and operating data on the healthcare services delivered at its network of facilities. Institut Català incorporated Oracle Exadata into its infrastructure to gain the processing power users needed to access real-time data for business intelligence dashboards and reports. Since then, Institut Català has been able to generate more complex data models and reporting than its previous architecture could support. The result? Deeper insights across its entire healthcare network, enabling more informed business decisions systemwide based on patient data, staff performance, and real-time inventory information.

Procter & Gamble: End-to-End Visibility into Product Performance

The consumer packaged goods giant Procter & Gamble may be 178 years old, but it has no intention of letting an outdated infrastructure hinder its data processing and analysis capabilities. P&G's business teams needed access to a wide variety of big data sources about its 66 brands in order to answer high-level questions ("Why is this happening?") in real time.
The company quickly realized that growing volumes of data from structured and unstructured sources would not fit neatly into canonical data models, nor was it willing to spend the vast sums needed to store it all. P&G concluded that it could benefit from a hybrid public-private cloud topology to exploit the flexibility, scale, and cost savings of the public cloud while managing the governance of certain data types in a private cloud. P&G chose Oracle Big Data Appliance with Hadoop for its scalability, cost-effectiveness, and ability to handle both conventional and unconventional data sources, including market signals, item sales, market share, surveys, social, demographics, and weather, not to mention new sources that aren't yet on its radar. In fact, the new solution exposed 150 terabytes of never-before-seen data that has given the company fresh insight into the marketplace.

See the Value of an Unobstructed View

Data can realize its full value only when it drives insight, and it can do that only when it converges into a single, clear view. If your data remains locked away on disparate platforms with no easy way to access it, you need an integrated infrastructure that can set it free. Learn more about how Oracle Engineered Systems can help you get a single view into all your data.

Join Us at Oracle OpenWorld 2017, October 1-5, in San Francisco

Don't miss the excitement of Oracle OpenWorld 2017! Explore the many informative and practical sessions we have scheduled, and take advantage of some of these opportunities to learn more about Oracle's big data offerings and its engineered systems:

General Session: Oracle Big Data Strategy [GEN5453]: Big data is going mainstream. Today, successful big data projects are enabling more than 50 percent of organizations to see increases in revenue or reductions in cost. In this session, explore big data opportunities, discuss what it takes to be successful, and learn about Oracle's big data strategy and product family.
Enterprise Research at Scale: CERN's Experience with Oracle's Big Data Platform [CON1298]: Oracle has been deeply involved with the research community for more than 25 years and continues to lead the industry, while maintaining its focus on solving the real problems of customers that rely on Oracle technology, such as CERN. Recent advancements in the deployment of high-performance computing infrastructure and advanced analytics solutions are focused on accelerating enterprise research at scale. Advancements in key technologies including big data, machine learning/AI, and IoT, coupled with a far more cost-effective and elastic cloud delivery model, have radically changed what is possible in data-driven research. Attend this session to learn from CERN's experience with Oracle's cloud and big data solutions.

Oracle Data Visualization: Fast, Fluid, Visual Insights with Any Data [HOL7782]: More and more organizations recognize the need to empower users with the ability to ask any analytics question of any data in a truly agile, self-service manner. In this session, learn how to use Oracle Data Visualization to quickly discover analytics insights through visualizations built against a variety of data sources. See how easy it is to compose visual stories to communicate findings, without the need for complex IT tools.

Also check out these sessions:
Extending Garanti Bank's Data Management Platform with Oracle Big Data SQL [CON1962]
Big Data and Machine Learning and the Cloud, Oh My! [CON5462]
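The single-view idea this post describes, one query surface over data that lives in different stores, can be approximated in a few lines. This is only a conceptual sketch: sqlite3 stands in for the query engine and the tables and sample rows are invented, whereas Oracle Big Data SQL itself federates live Hadoop, NoSQL, and Oracle Database sources without copying them.

```python
# Conceptual sketch of a unified query: one SQL statement joining data that
# originated in different stores (a "relational" table and "NoSQL" documents).
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id, customer_id, amount)")   # relational side
con.execute("CREATE TABLE profiles (customer_id, region)")           # document side

con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "c1", 120.0), (2, "c2", 80.0), (3, "c1", 40.0)])

# Documents that, in a real deployment, would live in a NoSQL store:
nosql_docs = '[{"customer_id": "c1", "region": "EU"}, {"customer_id": "c2", "region": "US"}]'
con.executemany("INSERT INTO profiles VALUES (?, ?)",
                [(d["customer_id"], d["region"]) for d in json.loads(nosql_docs)])

# One familiar SQL query instead of two APIs plus a hand-written merge:
rows = con.execute("""
    SELECT p.region, SUM(o.amount)
    FROM orders o JOIN profiles p ON o.customer_id = p.customer_id
    GROUP BY p.region ORDER BY p.region
""").fetchall()
# rows -> [("EU", 160.0), ("US", 80.0)]
```

The point of the sketch is the last statement: once a unified engine presents all sources as tables, the analyst writes a single join rather than stitching results together by hand.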



What’s New with Exadata? Oracle OpenWorld Exadata Power Hour Sessions Will Unveil More Than You Think

Oracle Database technology is evolving to be autonomous. Our pricing models are shifting too. You've heard a lot of new stuff from Oracle over the past few weeks as we focus more on operational automation delivered as a system to our customers. It's time to leverage all the economies of scale of a cloud model. We also think it's time to deliver automation and a platform tuned to allow an unprecedented combination of performance, availability, and operational efficiency.

Oracle Exadata Architecture Is Being Transformed

That's because it takes a heck of a lot of innovation to continue to deliver the world's most advanced cloud and in-memory functionality for modern database environments, whether on-premises, hybrid, or in the cloud, for business-critical workloads today and in the future. This is your exclusive look into what has been done to enhance Exadata and its integrated database architecture, and how to take maximum advantage of the newest capabilities, including database extensions for Exadata, I/O resource management, Exadata's Smart Scan and Smart Flash Cache features, and snapshot and virtualization capabilities. We always want to hear what you think, so follow us @OracleExadata and @ExadataPM.

Exadata Power Hour Sessions:
Register: CON6661 - Oracle Exadata, Disruptive New Memory & Cloud Technologies, Monday 2:15pm PT
Register: CON6663 - Oracle Exadata Technical Deep Dive: Architecture and Internals, Monday 3:15pm PT

Helpful Links: Oracle Exadata Cloud Service Sessions | All Oracle Exadata Sessions


How 2 IT Service Providers Built a Growth Strategy With Cloud

When businesses need to scale their IT infrastructure to adapt to fast-changing business conditions, they increasingly look to third-party service providers that can simplify the process for them with cloud-based, as-a-service offerings. Research confirms this trend: Gartner predicts that by 2020, more compute power will be sold by cloud-based service providers as infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) than sold and deployed into on-premises data centers. So it follows logically that IT service providers looking for infrastructure that lets them pass scalability, flexibility, and agility on to their customers easily and cost-effectively are turning to the cloud. And the strategy is fueling dynamic business growth. Long before this push, two IT service providers leveraged the power of the private cloud to expand into new markets and increase profitability.

Tieto's Unique DBaaS Offering Helps Solve the Growth Puzzle

In the Nordic countries, the cloud market is hot, growing 20% a year. As the largest IT service provider in Northern Europe, Finland-based Tieto Corporation faced significant barriers to growing its share of this competitive market while also increasing profitability. To gain an edge, one of the company's top priorities was to introduce a superior, easily deployable database-as-a-service offering for midsize companies and large enterprises. But achieving this goal required Tieto to also lower its operating costs, improve database performance and availability for its customers, and beat the competition to market with innovative offerings. The final piece of the puzzle was to consolidate physical servers onto a single powerful, standardized, easy-to-administer architecture that could run databases from a range of vendors and provide cost-effective and secure multi-tenancy to customers. Tieto solved the puzzle with Oracle SuperCluster running Oracle Database 12c.
With this solution, the company was able to offer private and hybrid cloud solutions to its customers for a wide range of applications, everything from consolidating hundreds of databases onto a private cloud to giving midsize manufacturing companies the ability to scale. In fact, Tieto slashed client database provisioning time from weeks to minutes. Licensing fees also dropped, thanks to the multi-tenancy architecture. By consolidating its own infrastructure, Tieto realized a 25% cost reduction and was able to implement its new architecture in only four months. One of Tieto's most dramatic achievements was an 80% faster time to market for new products and services, helping it beat the competition out of the gate with new service offerings.

The Cloud Adds a More Affordable Dimension to Enterprise Infrastructure Solutions

Dimension Data knows that hardware infrastructure and licensing fees are the major obstacles for its South African clients who want to scale their capabilities as needed. A global service provider based in Johannesburg, South Africa, Dimension Data chose Oracle SuperCluster to power its cloud services and offer clients the ability to add or remove compute and storage resources on demand, and to do so more affordably than trying to do it on their own. Clients also gain access to expertise that can help them determine what they need and then configure and manage it for them. Multi-tenant capability was perhaps the most exciting addition to Dimension Data's services. The ability to isolate each client's database on the same box offers four benefits for clients: full redundancy, high availability, better performance, and lower licensing fees. As the first in the world to implement this specific multi-tenancy structure, Dimension Data did more than open up its business to a whole new tier of clients.
It also helped Dimension Data's clients grow their businesses with the confidence that they have the same level of security in a private cloud environment as they would have in their own data centers. Take a look at this brief video to learn more about how Dimension Data has revolutionized the cloud-based services market.

How Can the Cloud Grow Your IT Services Business?

Enterprises today no longer view the cloud as a novelty. They now understand that it's an essential element of a sound IT strategy that enables them to move faster and respond with more agility to changing business needs. They're also getting the performance and scale to manage increasing demands on their infrastructure and the flood of data so many businesses are experiencing, and they're able to do it all securely and with a more economical model. For many enterprises, as-a-service is the answer. If you're a service provider, you need to optimize your own infrastructure to deliver the benefits of the cloud to your customers. Engineered systems help you do that with integrated and simplified infrastructure that provides significantly faster deployment, greater speed, higher security, easier management, and reduced costs. Helping your clients grow helps your business grow.

Explore More at Oracle OpenWorld 2017, October 1-5, in San Francisco

We hope you're planning to join us at Oracle OpenWorld 2017. If you haven't registered yet, you can check out all the exciting sessions we have scheduled. Here are a couple of sessions we recommend you don't miss:

Oracle SuperCluster Deep Dive [CON4702]: Oracle SuperCluster is well established in data centers around the world. This session explores new opportunities that will arise from pending platform and software changes to the Oracle SuperCluster family. Particular attention is paid to the benefits and practicalities of deploying Oracle SuperCluster in the cloud. Multiple deep dives and hands-on labs (HOLs) are scheduled throughout the event.
Exadata/Oracle SuperCluster/Zero Data Loss Recovery Appliance Diagnostics, Use Cases [CON6409]: We previously covered how well SuperCluster and Exadata go together, but now we go deep into practical use cases for this combination of Oracle Engineered Systems. Whether you are a Platinum Support customer or not, there is a wide array of diagnostics available to help with discovery and expedite problem resolution. Tools such as ExaWatcher, sosreport, Exachk, explorer, and sundiag may not be as well known as reviewing incidents and alert.logs, but the comprehensive collection of details provided by these tools may still be needed to investigate and resolve critical issues. This session details the common diagnostic methodologies for each product and provides a few use cases to demonstrate the effectiveness of the tools.

Oracle SuperCluster Best Practices, and What Happens When You Do Not [CON3799]: In this entertaining and lively joint session with the Oracle SuperCluster lead architect, learn about Oracle's best practices for implementing and running Oracle SuperCluster for maximum availability, serviceability, and operational simplicity. The session also includes examples of what happens when those best practices aren't followed. This is an encore of last year's well-attended session, updated with information on Oracle SuperCluster 2.3 software release best practices.

Converged Infrastructure Customer Forum [CON7488]: This is an exclusive evening event featuring enterprise customers across industries sharing their digital transformations and business successes achieved with Oracle's cloud infrastructure portfolio. It will be followed by a cocktail reception so you can meet and mingle with other industry visionaries and Oracle executives. RSVP required.

Check out all the Oracle SuperCluster sessions below. With the recent launch of the newest SPARC M8 chipset, there will be much to learn about the new Oracle SuperCluster M8 systems. See you there!
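The multi-tenancy benefit this post describes, each client isolated on the same box, comes down to a routing layer that guarantees one tenant's queries can never touch another tenant's data. Here is a purely illustrative sketch of that isolation idea; the class, tenant names, and in-memory sqlite "databases" are all invented, and Oracle Multitenant realizes the real thing with pluggable databases inside one container database, not application code:

```python
# Hedged sketch of tenant isolation on shared infrastructure: one router (the
# shared "box") hands each tenant its own database handle, so a tenant's
# queries are physically scoped to that tenant's data.
import sqlite3

class TenantRouter:
    def __init__(self):
        self._dbs = {}  # tenant name -> dedicated database connection

    def database_for(self, tenant):
        """Provision on first use; reuse afterwards (shared host, isolated data)."""
        if tenant not in self._dbs:
            con = sqlite3.connect(":memory:")
            con.execute("CREATE TABLE records (payload TEXT)")
            self._dbs[tenant] = con
        return self._dbs[tenant]

    def insert(self, tenant, payload):
        self.database_for(tenant).execute(
            "INSERT INTO records VALUES (?)", (payload,))

    def rows(self, tenant):
        return [r[0] for r in
                self.database_for(tenant).execute("SELECT payload FROM records")]

router = TenantRouter()
router.insert("acme", "invoice-1")
router.insert("globex", "invoice-9")
# Each tenant sees only its own rows, even though one router serves both.
```

The design choice worth noting is that isolation is structural, not a WHERE clause: there is no query a tenant could write that reaches another tenant's connection.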



Don't Miss! Oracle OpenWorld Exadata Session Catalog - Top Sessions

We've pulled together your personal, must-attend Exadata sessions for #OOW17, all in one place. Mark your calendars and register today to get your seat.

Exadata Power Hour Sessions; register today, as these are filling up fast:

CON6661: Oracle Exadata: Disruptive New Memory and Cloud Technologies
What's new with Exadata? The Oracle Exadata architecture is being transformed to provide the world's most advanced cloud and in-memory functionality. Don't miss this session for an overview of current and future Exadata capabilities, including disruptive in-memory, public cloud, and Cloud at Customer technologies.
Speaker: Juan Loaiza
When: Monday 2:15 - 3:00, Moscone West, Room 3014
Register

CON6663: Oracle Exadata Technical Deep Dive: Architecture and Internals
This is your exclusive look into what has been done to enhance Exadata and its integrated database architecture, and how to take maximum advantage of the newest capabilities, including database extensions for Exadata, I/O resource management, Exadata's Smart Scan and Smart Flash Cache features, and snapshot and virtualization capabilities.
Speakers: Kodi Umamageswaran, Gurmeet Goindi
When: Monday 3:15 - 4:00, Moscone West, Room 3014
Register

Other key Exadata sessions focus on security best practices, TCO modeling for Exadata cloud deployments, customer case studies, and technical deep dives:

CON6665: Deploying Oracle Databases in the Cloud with Exadata: Strategies, Best Practices
Learn about Oracle's Exadata Cloud strategy and how customers use Exadata Cloud Service to run Oracle Databases in the Oracle Public Cloud with the same functionality as Exadata on-premises. Explore more about Exadata Cloud at Customer. Walk away with total cost of ownership (TCO) models to help you select the deployment that is just right for you.
Speakers: Ashish Ray, Amit Kanda, Paul Fulton (IT Applications Manager, Detroit Water & Sewer)
When: Monday 5:45 - 6:30, Moscone West, Room 3006
Register

CON6666: Oracle Database Exadata Cloud Service: Technical Deep Dive
Discover how Oracle has combined cloud-based REST services, fast provisioning, elastic compute bursting, and software-defined networking with Exadata's technical innovations such as Smart Scans, Hybrid Columnar Compression (HCC), and IO Resource Manager (IORM). We will showcase the live creation of an Exadata Cloud Service and demonstrate how you can use integrated critical database features such as RAC, In-Memory Database, and Oracle Multitenant along with the service.
Speakers: Brian Spendolini, Binoy Sukumaran, Harry Gill (Senior Oracle Solutions Architect, Expedia)
When: Tuesday 11:30 - 12:15, Moscone West, Room 3006
Register

CON6668: Oracle Database Exadata Cloud at Customer: Technical Deep Dive
Come to this session to get a deep dive from Oracle Development on the architecture and technology behind this solution. You will also learn about deployment, operational, administration, and lifecycle management best practices associated with this unique cloud service.
Speakers: Manish Shah, Barb Lundhild
When: Tuesday 3:45 - 4:30, Moscone West, Room 3006
Register

CON6680: Exadata: Achieving Memory-Level Performance: Secrets Beyond Shared Flash Storage
New technologies such as PCIe NVMe flash have opened up many new possibilities for modern databases. A carefully architected shared NVMe flash system such as Exadata can deliver close to memory performance and scale up to hundreds of terabytes in capacity. This session offers an inside view of how Oracle Exadata has embraced flash in its architecture.
Speakers: Kodi Umamageswaran, Gurmeet Goindi
When: Wednesday 12:00 - 12:45, Moscone West, Room 3008
Register

CON6664: Oracle Exadata: Maximum Availability Best Practices and New Recommendations
In this session, Oracle Development takes a deep dive into the high availability capabilities and best practices for the latest Exadata release. Attend to gain deep knowledge of Oracle Maximum Availability Architecture (MAA) guidelines that will help you achieve the highest level of availability with Exadata systems, on-premises and in the cloud. We will look at OLTP, data warehousing, consolidation, and high-performance in-memory processing workloads.
Speakers: Mike Nowak, Swamy Kiran (Infrastructure Architect / DBA Team Technical Lead, The World Bank Group)
When: Wednesday 3:30 - 4:15, Moscone West, Room 3008
Register

CON6671: Oracle Exadata Security Best Practices
Oracle Exadata is hardened with advanced security configuration and features at every level. Join this session to learn about the Exadata storage, OS, and database-layer security options and best practices. We cover security considerations during deployment, database creation, and steady-state operations. You will also learn security and monitoring best practices for Exadata Cloud deployment models, including strategies for achieving compliance regulations and maintaining audit standards.
Speakers: Dan Norris, Jeff Wright
When: Wednesday 5:30 - 6:15, Moscone West, Room 3008
Register

Want more Exadata sessions? Check out the link below to all of the Oracle OpenWorld Exadata sessions and find the ones that are right for you. I hope to see you there!


Data Protection

Why Co-Engineering Matters: Fastest Speed, Highest Security with New Oracle M8 Systems

Today's guest blog comes from Renato Ribeiro, Director for SPARC Systems Engineering at Oracle.

Oracle has just announced a new microprocessor, along with the servers and engineered systems powered by it. The SPARC M8 processor fits in the palm of your hand, but it contains the result of years of co-engineering hardware and software together to run enterprise applications with unprecedented speed and security. The SPARC M8 chip contains 32 of today's most powerful cores for running Oracle Database and Java applications. Benchmarking data shows that these cores deliver twice the performance of Intel's x86 cores. This is the result of exhaustive work on designing smart execution units and threading architecture, and on balancing metrics such as core count, memory, and I/O bandwidth. It also required millions of hours of testing chip design and operating system software on real workloads for database and Java. Having faster cores means increasing application capability while keeping the core count and software investment under control; in other words, a boost in efficiency.

Enter Software in Silicon

What is even more remarkable about Oracle's SPARC M8 chip is the revolutionary implementation of accelerators and logic designed specifically for extreme acceleration of in-memory data processing, and for the protection of that data while in memory, on disk, or moving over the network. This was a breakthrough design in which small portions of the chip's silicon were dedicated to specific operations that are key to enterprise software. When this technology, called Software in Silicon, is utilized, the performance advantage of the SPARC M8 processor cores increases to 7x for in-memory analytics compared to Intel's latest cores. A similar 7x boost is seen on cryptographic hashes using strong, wide keys.
Software in Silicon technology was introduced with the SPARC M7 and S7 processors over the last two years, and the SPARC M8 chip already carries its second generation. It is a sign of maturity that, since its introduction, the Software in Silicon features in these processors have been used automatically by Oracle Database 12c to increase in-memory query performance and transaction security.

Speeding Up In-Memory Analytics

The results of this revolutionary approach are truly remarkable: the processor cores offload analytic operations to Data Analytics Accelerator (DAX) units and go on performing other database tasks. The DAX units take data directly from memory, uncompress it on the fly if necessary, and then perform searches, filters, and joins at extreme speed. All of this is transparent to the user and is handled by the Oracle Database In-Memory option for Oracle Database 12c. Open APIs have also made this technology available to Java 8 Streams processing. What matters most is the end result: analytics can be performed on transactional data in real time, yielding critical business insight in a fraction of the time required by conventional approaches.

There is one anecdote we hear often from customers testing features born of Oracle's co-engineering: the tests run so fast that users say "something went wrong, the answer cannot be out already," which leads to repeated runs and additional tests until they realize that, yes, the speed is real.

Reaching New Heights in Security

IT security is a tough discipline in part because of complexity: there are too many devices, networks, and applications in which to activate security features. Things get worse when security operations carry overhead, causing security to be deployed only on a case-by-case basis. The best way to make IT security stronger is to make it simpler: implement it by default in systems, and open it up only by exception.
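The Java 8 Streams path mentioned above needs nothing exotic in application code. The sketch below shows an ordinary filter-and-aggregate scan of the kind DAX units accelerate; the class name and data are illustrative, and on non-SPARC hardware it simply runs on the cores (the DAX offload itself is transparent and requires Oracle's platform libraries):

```java
import java.util.stream.IntStream;

// Sketch of a DAX-style filter-and-aggregate scan using plain Java 8 Streams.
// The class name and data are illustrative; on SPARC systems, Oracle's
// libraries can offload comparable in-memory scans to the DAX units.
public class DaxScanSketch {

    // Count the values above a threshold -- a typical analytic filter.
    static long countAbove(int[] values, int threshold) {
        return IntStream.of(values).filter(v -> v > threshold).count();
    }

    // Sum the values above a threshold -- a typical filtered aggregate.
    static long sumAbove(int[] values, int threshold) {
        return IntStream.of(values).filter(v -> v > threshold).asLongStream().sum();
    }

    public static void main(String[] args) {
        int[] amounts = {120, 45, 300, 87, 501, 12, 250};  // illustrative data
        System.out.println(countAbove(amounts, 100) + " rows, total "
                + sumAbove(amounts, 100));  // prints "4 rows, total 1171"
    }
}
```

The point of the sketch is that the programming model stays the same whether or not an accelerator is present.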
To get there, you need to automate deployment and remove overhead. Software in Silicon technology enables encryption of data in storage and on the network without a perceptible performance penalty. The 32 crypto accelerators in the SPARC M8 processor operate on the industry's widest set of ciphers, and they leverage the tremendous memory bandwidth and speed of the chip to deliver the largest performance gains over competitors. With SPARC technology, end-to-end encrypted transactions can be enabled across the entire data center, simplifying the security architecture. In another sign of co-engineering, the security functions in Oracle Database and Java applications use the crypto accelerators automatically.

A second important Software in Silicon feature is Silicon Secured Memory, which is unique in its ability to protect data in memory from access errors or hacker attacks. It is already used by Oracle Database 12c and can easily be activated for most applications, providing protection against one of the most common software errors exploited by malware: buffer overflows. The chip and server hardware monitor access to data in memory and allow an operation only when it comes from the process that owns that specific data location. It is an ingenious way to stop data from being corrupted or stolen at the source, while identifying the offending software program so it can be patched or removed.

Adopt at Your Own Pace

Many of our customers balance the deployment of new applications with the modernization of legacy ones. Taking advantage of technology such as Software in Silicon generally means that applications must be deployed on modern software such as Oracle Database 12c and the Oracle Solaris 11.3 operating system. The SPARC platform lets users adopt this technology at their own pace: guaranteed binary compatibility allows legacy applications to first run unchanged on new servers with SPARC processors, and then be modernized later.
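The automatic use of crypto accelerators described earlier is visible from the application side as ordinary JCE code. The sketch below does an AES-GCM encrypt/decrypt round trip through the standard Java API; nothing in it is SPARC-specific, which is the point — on Solaris/SPARC the platform's security providers can route these AES operations to the on-chip accelerators without code changes (class name and message are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// AES-GCM round trip through the standard JCE API. Nothing here is
// SPARC-specific: on Solaris/SPARC the security providers can route
// these AES operations to the on-chip crypto accelerators automatically.
public class JceAesSketch {

    // Encrypt then decrypt a message; return true if it survives intact.
    static boolean roundTripOk(String message) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                        // AES-256 key
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];            // 96-bit GCM nonce
        new SecureRandom().nextBytes(iv);
        GCMParameterSpec spec = new GCMParameterSpec(128, iv);

        byte[] plaintext = message.getBytes(StandardCharsets.UTF_8);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, spec);
        byte[] ciphertext = enc.doFinal(plaintext);

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, spec);
        return Arrays.equals(plaintext, dec.doFinal(ciphertext));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTripOk("sensitive record"));  // prints "true"
    }
}
```

Because acceleration happens below the provider layer, the same binary benefits from the hardware when it is present and still runs correctly when it is not.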
In addition, Oracle has publicly committed to supporting Oracle Solaris 11 until at least 2034, and Oracle's Lifetime Support policy for hardware ensures that customers do not have to replace their existing systems before they and their applications are ready for an upgrade.

Watch the Announcement

If you haven't done so yet, watch the launch webcast on the new SPARC M8 processor and systems. And if you're going to be at this year's Oracle OpenWorld, don't miss the SPARC M8 sessions we have scheduled for you.

Renato Ribeiro is a Director for SPARC Systems Engineering at Oracle. In over a decade working at Oracle and Sun Microsystems, he has gained expertise in applying computing technologies to databases and applications, virtualization, and benchmarking, especially in mission-critical, large-scale deployments.
