Learn how businesses like yours can begin to optimize for today and plan for tomorrow with Cloud-Ready IT Infrastructure

Recent Posts

Engineered Systems

Say Goodbye to DIY Database Systems: Oracle Exadata X8M Is Here

The reviews are in: The new Oracle Exadata Database Machine X8M, launched in June, represents a breakthrough in hardware and software enhancements, adding unique machine learning (ML) capabilities to the already impressive Exadata features list. “Oracle Exadata X8M performance, operational simplicity, manageability, time-to-market, total costs, and price/performance is difficult to nigh impossible to match with any DIY database systems or any other database system…period,” says Marc Staimer, Tech Target/DSC Consulting. “This is why the Oracle Exadata X8M spells doom for DIY database systems.”

Exadata X8M is the latest version of the Exadata Database Machine, which Oracle first launched 10 years ago, and a glance at the spec sheet makes it clear we haven’t rested on our laurels. Exadata X8M’s state-of-the-art hardware enhancements include the latest Intel Xeon processors and PCIe NVMe flash technology to drive performance improvements: a 60% increase in I/O throughput for all-flash storage and a 25% increase in IOPS per storage server compared to Exadata X7. Each Exadata X8M storage server now features 60% more cores to offload Oracle Database processing and 40% higher-capacity disk drives to support massive data growth and database consolidation strategies. These improvements come with no price increase, further improving the cost-effectiveness of the Exadata platform.

Additionally, a new, much lower-cost extended storage server is available for storing infrequently accessed, older, or regulatory data. All customer data now receives the benefits of the Exadata scale-out architecture and Oracle Database storage, including application transparency, consistency of operational models, hybrid columnar compression (HCC), and the same security model with encryption across all tiers.
“Exadata X8 represents a substantial improvement not only over the prior versions of Exadata, but over just about any other possible deployment of Oracle Database in the data center,” concludes Carl Olofson, Research Vice President, Data Management Software, at IDC.

Delivering extreme performance and availability, Oracle Exadata is the foundation for Oracle Autonomous Database, the world’s first self-driving database. It leverages machine learning to provide a self-driving, self-securing, and self-repairing database service, delivering a far more reliable and secure system that makes organizations and developers more productive. Exadata X8M builds on that technological foundation with ML capabilities such as Automatic Indexing, which continuously learns and tunes the database as usage patterns change. The entire process is automatic and improves database performance while eliminating manual tuning. Exadata X8M also includes new automated performance monitoring, which combines artificial intelligence, years of real-world performance triaging experience, and best practices to detect performance issues automatically and determine the root cause without human intervention. Mark Peters of ESG calls Exadata X8M’s advanced capabilities “a veritable smorgasbord of delights for IT operations and DBAs to use, which delivers unsurpassed—and genuine—value to Oracle Database users.”

Exadata also offers unparalleled cost-effectiveness. Andy Patrizio of Network World cites numerous examples of real-world cost savings, including a financial services company that replaced 4,000 Dell servers running Red Hat Linux and VMware with 100 Exadata systems running 6,000 production Oracle databases. Not only did the consolidation reduce the data center’s power footprint, but patching effort dropped by 99%.
These and other case studies lead Constellation Research’s Holger Mueller to conclude that “Oracle…has demonstrated significant bodies of evidence with large-scale customers that have reduced costs through consolidation, increased security and improved performance by adopting Exadata.”

What’s more, Exadata brings new meaning to the term cloud-ready by providing choice and deployment flexibility, enabling customers to use Exadata anywhere: in Oracle Cloud, as the core of Oracle’s unique Exadata Cloud at Customer service, and on-premises. This capability “gives CxOs the highest flexibility to fluidly deploy workloads across the cloud and on-premises,” Mueller explains.

Celebrating 10 Years of Innovation

Exadata X8M lies at the forefront of more than 10 years of continuous innovation and deep engineering. Today it runs mission-critical workloads such as OLTP, analytics, and IoT across multiple verticals, serving four of the five biggest banks, telecoms, and retailers. Imagine how it can help your organization reap maximum value from your data. To learn more about Oracle Exadata Database Machine X8M and its advanced AI-powered autonomous technology, visit https://www.oracle.com/engineered-systems/exadata/database-machine/


Engineered Systems

What Is Smart Scan?

What is Exadata Smart Scan? To answer that, let's first look at how a traditional data center operates. The database system consists of three components, or layers: compute, storage, and network.

Compute layer: Houses the servers that process data requests from users and return results to them; it makes up the back-end system of user-facing applications. Say you’re trying to purchase an airline ticket on an online booking app. The application sends a request to the database server, which processes the relevant data and returns the results to your application.

Storage layer: This is where the data actually resides. Here, servers store, secure, and manage data in tables as sets of data blocks.

Network layer: The network layer lies between compute and storage and passes data blocks between them.

[Video: What Is Smart Scan?]

The Data Center

The problem with this structure is that data centers must use specialized storage and compute appliances that can’t scale for critical workloads. What’s more, it can create bottlenecks at the network layer, because all the blocks from the data table are transferred across the storage network. This process consumes bandwidth, hurts response time, and places an unnecessary burden on the compute layer.

To get around the scalability issue and reduce data transfer time, companies have combined compute, storage, and networking into a tightly integrated hyperconverged infrastructure. This solution was a compromise that provided basic configurations for generic workloads. It offered adequate storage and compute capabilities but was optimal for neither. Plus, it did nothing to reduce the bandwidth required to exchange broad swaths of data across the network.

Standard Hyperconverged Infrastructure

Oracle Exadata takes hyperconverged infrastructure and makes it smarter by integrating database-aware software and hardware innovations within and across compute, storage, and networking.
These innovations create huge performance improvements, leading to faster results.

How Does It Do This With Smart Scan?

Exadata turns the traditional model on its head. Instead of going to the compute layer, all queries are pushed directly to storage. There, data filtering based on the query occurs in parallel across all storage servers before the results are sent to compute. This dramatically improves execution time and eliminates bottlenecks, because the network transmits only a small set of relevant data blocks.

Oracle Exadata

Say, for example, you wanted to know which customers spent more than $500 on a plane ticket in April. You send a query to the Exadata system, which forwards it to the storage layer. There, Exadata employs Smart Scan to filter out flight bookings of $500 or less and those made outside April. Once Exadata extracts the relevant customers from storage, only that data is sent up to the database servers. The database then consolidates the result and returns it to the client. Compare this to a database running on a conventional or hyperconverged system, which would send a huge portion of the table from storage to compute, easily leading to bottlenecks.

The result? Exadata Smart Scan drastically reduces CPU usage in the database servers, accelerates query execution, and eliminates network bottlenecks. To learn more about Smart Scan, as well as Oracle Exadata’s other data-optimizing technology, contact an Oracle Exadata Sales Specialist.
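The predicate-pushdown idea behind Smart Scan can be sketched in a few lines of Python. This is a toy model, not Oracle code: the "storage servers" are just lists of rows, and the point is simply that each one applies the query's filter locally and in parallel, so only matching rows ever cross the "network" to the database layer.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of Smart Scan: predicate filtering happens on each
# "storage server" in parallel, so only matching rows reach compute.
# All names and data here are illustrative, not Oracle APIs.

def storage_scan(partition, predicate):
    """Each storage server filters its own data blocks locally."""
    return [row for row in partition if predicate(row)]

def smart_scan(partitions, predicate):
    """Push the predicate down to every storage server in parallel,
    then consolidate only the surviving rows on the database server."""
    with ThreadPoolExecutor() as pool:
        filtered = pool.map(storage_scan, partitions,
                            [predicate] * len(partitions))
    return [row for part in filtered for row in part]

# Bookings spread across three storage servers: (customer, price, month)
partitions = [
    [("ann", 620, "apr"), ("bob", 180, "apr")],
    [("cat", 510, "apr"), ("dan", 900, "mar")],
    [("eve", 760, "apr")],
]

# "Customers who spent more than $500 on a ticket in April"
predicate = lambda r: r[1] > 500 and r[2] == "apr"
rows = smart_scan(partitions, predicate)
print(sorted(r[0] for r in rows))  # only 3 of 5 rows crossed the network
```

In the conventional model, all five rows would travel to the compute layer before filtering; here, two are discarded at the storage tier and never consume network bandwidth at all.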


Engineered Systems

5G: Is the Hype Justified?

There’s a lot of hype around 5G now. We took a deep dive on the topic with expert Jack Shaw, president of Breakthrough Business Technologies and co-founder of Blockchain Executive, and learned some fascinating facts about the future of cellular communications.

I think most people think 5G is just a faster, better 4G, but it's really something quite different, isn’t it?

In reality, 5G has many advantages in terms of higher speed, greater capacity, and lower latency. One of the reasons is that it uses different, higher frequencies on the wireless communications spectrum. Those higher frequencies can hold a lot more information, so you can transmit a lot more data. However, higher frequency also means that you can't transmit as far as you can with lower-frequency 4G. Under certain circumstances, it can be more difficult for 5G to penetrate walls or dense, concentrated urban environments.

With 4G wireless communications, you've got a cell tower typically every few miles. The towers are tall because they broadcast out over a radius of two or three miles. With 5G, you have many more wireless nodes transmitting the signals, and they use a mesh environment to enable the signals to hop from one to another. Mesh networks are common in large Wi-Fi implementations. Rather than a single point that broadcasts out over a given distance, you have a number of wireless access points that communicate with each other and pass the signals to the next point, so that the information can be shared and redistributed. These nodes relay the signal to one another so that you have good signal penetration everywhere. Generally, you’re going to have nodes every couple of hundred yards, and sometimes even more closely together.
But because of that, they don't necessarily have to be on high towers. They can be on existing light poles, or building corners, or just up at the tops of buildings. So rather than seeing a lot of new towers going up for 5G, what you're going to see is 5G wireless transmitters all over the place. And that gives us a lot more speed because of the higher capacity of these higher frequency bands.

Just how fast is 5G, and will it eventually replace 4G networks?

Right now, 5G, with 10-gigabit wireless communications speeds, could be up to 100 times faster than 4G. If it's 100 times faster, for example, an 80-gigabyte medical image that would take about six hours to download today would download in about three minutes with 5G. I’m predicting that by 2025, we will see 5G rolled out pretty broadly across most of the United States and, for that matter, most of the world. But it won’t replace 4G, especially in rural areas where it’s impractical to have this huge node network.

The second characteristic you mentioned was capacity. What kind of capacity does 5G have?

To grasp the capacity of 5G, let’s take an example. If you take the length of a football field from the back of one end zone to the back of the other, and an area just as wide, which typically is what you would see on the inside of a football stadium, that's about a hectare in area. Right now, with 4G, you can have about 20 devices transmitting and receiving per hectare simultaneously and allow all of them to get full 4G access. So, if you go to a football game with 50,000 of your friends and even 5% of them at any point in time are trying to transmit selfies, all of a sudden you've got way more demand than 4G can handle. The speed is there, but 4G can only handle a certain amount of data at a time. 5G, on the other hand, could handle around 10,000 devices: 500 times the capacity. So, you can see why one of the first implementations of 5G technology will be in the NFL and major sports.
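Shaw's speed and capacity figures check out with a little back-of-envelope arithmetic. The sketch below assumes an effective 4G throughput of about 30 Mbps, which is what the "six hours for 80 GB" figure implies; the numbers are illustrative round figures, not measured throughput.

```python
# Back-of-envelope check of the interview's speed and capacity figures.
# Assumption: ~30 Mbps effective 4G throughput (implied by "six hours for 80 GB").

def download_hours(size_gb, speed_gbps):
    """Time to move size_gb at speed_gbps, ignoring protocol overhead."""
    gigabits = size_gb * 8              # gigabytes -> gigabits
    return gigabits / speed_gbps / 3600  # seconds -> hours

speed_4g = 0.03            # ~30 Mbps, expressed in Gbps
speed_5g = speed_4g * 100  # "up to 100 times faster"

print(round(download_hours(80, speed_4g), 1))       # 5.9 -> ~six hours on 4G
print(round(download_hours(80, speed_5g) * 60, 1))  # 3.6 -> ~three minutes on 5G

# Capacity: ~20 devices per hectare on 4G vs ~10,000 on 5G
print(10_000 // 20)  # 500 -> "500 times the capacity"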
These organizations want people sending selfies and pictures and tweets from their sporting events.

The final characteristic you mentioned is low latency. Let’s talk more about that now.

Latency is, of course, the response time from the moment a signal leaves a device until it gets to the destination and back. With 4G, the best response time is about 100 milliseconds, or 1/10th of a second, round trip. For most of what we do, that's fine. But for certain kinds of remote-control operations, you need much lower latency. And 5G, as it starts to roll out, will be able to deliver latencies on the order of 10 milliseconds, or 1/100th of a second, and eventually, theoretically, it could be as low as a single millisecond.

Today, surgeons can stand in the next room, hardwired to a robotic surgical device that's doing a prostate surgery, for example, and they can see on the screen exactly where the device is. They can actually control that scalpel much more finely and accurately than they could if they were in the same room holding the scalpel in their hand, because they might move their hand a quarter of an inch and the scalpel might only move a fraction of a millimeter, giving them extremely fine control. Plus, they have a camera in there so they can see precisely what’s happening. Because they're hardwired, they're getting response times at the speed of light; they're seeing the instrument move as they move it. But if you're working from somewhere across the country, you don't want to be doing surgery remotely, because the image you're looking at is showing you where the scalpel was two- or three-tenths of a second ago, and it's not where it looks like it is to you. This makes remote robotic surgery impractical right now. With 5G, it will become practical, because the latency will be so low that even if you're 1,000 miles away, you'll be able to see the scalpel in the exact location that it's actually in at that moment.
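To get a physical feel for what those latency figures mean, it helps to compute how far something moving at speed travels during one round trip. The sketch below uses highway speed (100 mph) purely as an illustration of why 100 ms is too slow for tight real-time control while single-digit milliseconds is not.

```python
# How far a fast-moving object travels before a latency-delayed
# signal completes its round trip (distance = speed x time; illustrative).
MPH_TO_MPS = 0.44704
speed = 100 * MPH_TO_MPS         # 100 mph in metres per second

for latency_ms in (100, 10, 1):  # best 4G, early 5G, eventual 5G
    drift_m = speed * latency_ms / 1000
    print(f"{latency_ms} ms -> {drift_m:.2f} m of travel")
```

At 100 ms the object has moved about 4.5 metres before you see the result of your command; at 1 ms it has moved only a few centimetres, which is why the remote-control scenarios below become feasible.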
This also comes into play, for instance, with autonomous vehicles, because they have sensors from various detection devices to figure out precise locations: the edge of the road, the vehicle in front or behind, and so on. AI is helping to do this analysis and react. All these decisions about whether it’s safe to change lanes, accelerate, or pass a slower-moving vehicle must be made in real time. Five or 10 years down the road, we’ll be able to make much better use of the available highway space by having these vehicles essentially draft each other. Low latency in 5G will allow autonomous vehicles to drive inches apart at 60, 80, even 100 miles an hour safely. And if there's the slightest variation, because, say, one car hits a bump on the road that slows it down a fraction, the other car will sense that immediately. It will adjust its speed, and the car behind it will adjust its speed just enough to reflect that adjustment and keep them from crashing into each other. This means automobiles will be able to use the available space more effectively, which will reduce congestion on the highways, allow higher travel speeds, and provide better energy efficiency.

Are there any other major advantages for industry that we haven’t discussed?

The way the big telecommunications companies are implementing 5G is with a highly automated implementation process. In other words, they have virtual software systems that automatically configure the 5G deployments. As the telcos roll it out over the next couple of years, these virtual systems will automatically configure systems to meet utilization demands. It will be simpler for the consumer market, where demand is much more predictable. In the business environment, you're going to have lumpier growth.
But, overall, the advantage of having these virtual deployment systems, with a simplified form of AI managing the deployment process, is that unless demand is really extreme, customers can get up and running. It means businesses will be able to implement and scale up very, very quickly, in a matter of seconds if the carrier has the right automated systems in place.

For more on 5G, check out the results of an Oracle survey, “Oracle Survey Finds Enterprises Ready for Benefits of 5G.” If you would like the telco industry’s perspective, we recently spoke to expert Peter Jarich of GSMA Intelligence in the post “The Key to 5G Is the Enterprise.” To learn more, visit Oracle Engineered Systems.

Jack Shaw is a strategist, author, and thought leader who prepares leaders for digital transformation, managing the strategic business impacts of current and emerging technologies such as AI, blockchain, 5G, the Internet of Things, and 3D printing.


Engineered Systems

Oracle Private Cloud at Customer: 4 Reasons It’s an Ideal Solution

Businesses looking to modernize operations and improve customer experience know that the path to digital transformation lies in the cloud. No wonder, then, that the adoption of cloud native technologies in production increased by 267% between 2017 and 2018. But most businesses have realized that they need both private and public clouds, so they’re exploring multi-cloud strategies that let them respond to fast-changing market needs while still ensuring data sovereignty and compliance with data regulations. The ideal solution to this challenge is a secure and agile infrastructure that supports application portability to other clouds without vendor lock-in.

Oracle Private Cloud at Customer (PCC) lets you deploy an instance of the Oracle cloud in your data center, behind your firewall, via a convenient subscription model that covers hardware, software, and all management and support services, thereby eliminating capital expenses. For those critical applications requiring the security and control of an on-premises environment, PCC essentially brings the cloud to you, including an identical feature set, a similar financial model, and the same operational model found in Oracle’s public cloud. The difference? Your IT team retains complete control of company applications and workloads. And, because it's precisely compatible with our public cloud, any hybrid cloud use case you might think of becomes almost trivial to implement and manage.

There are dozens of reasons why this innovative architecture is the right solution for your infrastructure needs; here are four big ones:

Fast Time to Value

We designed Oracle Private Cloud at Customer to deliver enterprise-grade IaaS for rapid and easy deployment of mission-critical applications and workloads while keeping your data on-premises.
PCC accelerates application deployment using VM templates and high-performance, low-latency Oracle Software Defined Networking (SDN) technologies to facilitate automated provisioning of the server and storage networks. Based on tests done by the Oracle VM team, you can deploy apps 7x to 10x faster than with traditional VM solutions. Like public cloud solutions, PCC offers convenient OpEx subscription pricing with infrastructure managed by Oracle, freeing your IT resources for other strategic business initiatives.

Secure Infrastructure

Oracle PCC offers isolated management and VM access networks to help ensure that your mission-critical workloads run securely in your data center, under the full control of your IT team. Your cloud admins can create and manage users, private networks, and IaaS instances, as well as monitor and back up all guest VMs. Plus, because Oracle Cloud Ops handles patching and troubleshooting using the telemetry data received via Oracle Application Server Guard (ASG), your team doesn’t have to worry about them.

IaaS Out of the Box

Oracle PCC comes with everything you need to deploy and manage applications in your private cloud. It comes pre-cabled, with integrated compute scalable from 2 to 20 compute nodes, ZS7 storage with 200 TB of usable storage expandable to 2,200 TB, and 100 GbE connectivity between rack components and the data center. The included Enterprise Manager 13c IaaS self-service portal offers a single pane of glass for management, monitoring, and IaaS. With support for Oracle Linux Cloud Native Environment (OL CNE), your team can automate deployment, scaling, and management of containerized applications running on Docker and managed by Kubernetes. Applications deployed and developed using OL CNE are portable without changes to any Kubernetes-compliant platform, both on-premises and in the public cloud, thus facilitating your multi-cloud strategy.
PCC’s Ansible VM lifecycle module helps you automate the creation, deletion, starting, and stopping of VMs, cutting deployment times and preventing human errors. Other out-of-the-box features include:

- Site Guard disaster recovery
- Oracle Hypervisor
- Premier support for Oracle Linux and Oracle Solaris

Flexible Architecture

Oracle Private Cloud at Customer’s flexible architecture allows you to consolidate your organization’s Linux, Solaris, Oracle, and Windows workloads onto a converged platform. Scaling capacity is as easy as adding compute nodes as your business needs grow: simply slide in additional compute nodes, one at a time, and the controller software takes care of the rest. PCC gives your team access to widely used open-source automation and container management tools, such as Ansible and Terraform, to accelerate application deployments by up to 80%. Oracle Linux Cloud Native Environment allows you to automate deployment, scaling, and management of containerized applications to maximize application portability.

Simplify and accelerate your private cloud deployment with Oracle Private Cloud at Customer, which combines scalability, zero-downtime upgradability, and security with a flexible subscription model to deliver IaaS out of the box, managed from a single pane of glass. Achieve faster time to value while retaining control over data privacy, compliance, and governance, and maintaining low latency for on-premises applications and data. To learn more about Oracle Private Cloud at Customer, visit oracle.com/engineered-systems/private-cloud-appliance/cloud-at-customer.


Engineered Systems

Oracle Private Cloud Appliance X8-2 Provides Cloud-Ready Capability On-Premises

If your organization is like most, many of your critical enterprise workloads are still running on on-premises infrastructure. But the future of the digitized organization lies in a multi-cloud environment with the agility to respond to market shifts rapidly while still providing data sovereignty and compliance. Application portability within a secure and agile infrastructure is the right way to go.

The Oracle Private Cloud Appliance is an integrated, wire-once, software-defined infrastructure system that can radically simplify the way your organization installs, deploys, and manages virtual environments. This highly scalable hardware-software solution helps your organization achieve maximum efficiency with existing investments. The appliance also streamlines your eventual migration to cloud computing because it’s built on the same architecture as the Oracle public cloud. The recently released sixth-generation Oracle Private Cloud Appliance X8-2 is 45 percent faster than the previous generation, offers 17X more storage, and is substantially more cost-effective, resulting in faster time to market and lower total cost of ownership (TCO).

Optimized for Savings

Oracle built Private Cloud Appliance from the ground up for cost savings and forward compatibility. In fact, its estimated TCO is 30 percent to 50 percent lower than competing solutions. Virtualization is included at no additional cost, so you won’t have to pay for VMware licenses, and Trusted Partitioning saves money on Oracle software licensing: you pay for the cores you use rather than full system capacity. The integrated wire-once design and low-latency software-defined networking help future-proof your investment by scaling compute and storage on demand without re-cabling. The hardware is scalable from 2 to 25 nodes, one node at a time, with support for up to 15 additional capacity and flash storage trays and up to 8 fully isolated tenant groups.
Oracle Private Cloud Appliance also reduces downtime costs with zero-downtime upgrades and intelligent management-node failover. You can upgrade compute nodes at your own convenience, on a per-tenant-group basis, independent of the management nodes.

Intelligent Infrastructure

Oracle Private Cloud Appliance is designed for rapid and automated private cloud deployment. Whether running Linux, Windows, or Oracle Solaris applications, or running containerized cloud native applications, Oracle Private Cloud Appliance supports consolidation for a wide range of mixed Oracle and non-Oracle workloads. With installation automation and the Oracle VM virtual appliances and migration tool, you can go from power-on to production in hours rather than months. Oracle Private Cloud Appliance also supports Oracle Linux Cloud Native Environment (OL CNE) to automate deployment, scaling, and management of container workloads. And it ensures that applications are seamlessly portable to any Kubernetes-compliant platform.

Unified Management

Gaining visibility into and management of a multi-cloud environment, especially a multi-vendor hybridized cloud architecture, can be a drain on IT time and resources. But with Oracle Enterprise Manager you have unified management across public and private clouds. You can monitor usage via a single pane of glass, centrally provision storage, and execute disaster recovery at the push of a button. Enterprise Manager lets you manage multitenancy securely and flexibly: just enter a few basic configuration parameters to create VMs manually, or leverage Oracle VM Templates and Assemblies to get a full application up and running in as little as two hours. A self-service portal and automatic service requests help streamline troubleshooting, saving your team time and resources for more strategic work.
Cloud-Ready

Because Oracle Private Cloud Appliance is built on the same servers, virtualization, OS, and storage as Oracle Cloud, hybrid environments and cloud migrations are far easier to manage. Oracle Linux Cloud Native Environment (OL CNE) supports containers and orchestration with management and development tools while adhering to Cloud Native Computing Foundation (CNCF) standards to avoid vendor lock-in. You gain application portability across a multi-cloud environment out of the box.

Your business needs IT infrastructure solutions that are not only agile, easy to use, and cost-effective, but that also prepare your organization for eventual migration to the public cloud. You might consider a build-your-own, multi-vendor solution if you’re looking to squeeze more life out of your current software or to keep data on-premises to comply with regulations, but it won’t be optimized for your database or cloud-ready. To modernize and be up and running fast, Oracle Private Cloud Appliance offers a fully integrated hardware/software solution that provides multitenancy, zero-downtime upgradability, capacity on demand, and single-pane-of-glass management right out of the box. To learn more about Oracle Private Cloud Appliance X8-2, visit www.oracle.com/pca


Engineered Systems

Teleran Helps Turn Data and Analytics Into Business Advantage

Data, and the analytics applications that draw out its value, have become a non-negotiable component of every successful business. But understanding, tracking, and managing the real use of that data remains a big challenge for most organizations. It takes more than an understanding of the underlying data in the database, how it gets accessed, and how it’s performing. Explains Teleran CEO Nathan Roseman, “At the application layer, organizations need to understand what questions are actually being asked by users, if the queries are effective, and if there are errors occurring at the application layer.”

The Time Has Come for a New Model

Teleran has built solutions for Oracle Cloud, Oracle Exadata, and Oracle Database Appliance (ODA) that help businesses derive tremendous value from their data, while simplifying implementation and management. “Historically,” explains Roseman, “data marts, data warehouses, or large databases required a costly infrastructure and a lot of specialized DBA skills and, typically, a long implementation cycle.” As analytics and data warehousing usage increased, more and more analytical users would query these databases with more sophisticated questions. The demands were constantly changing and growing, and it took a tremendous amount of background work from IT to support all this changing activity and behavior, on top of addressing security and compliance regulations. “IT was always behind the eight ball,” Roseman concludes. Using Oracle’s platforms, that has all changed.

Better Serving the Business, Simplified, Automated

Teleran’s solutions track, analyze, and make visible what users are doing, providing real-time recommendations on how to improve users’ productivity and how to adjust the underlying resources to better serve the business as those needs change. They even prevent users from making some common errors, using automated controls.
This becomes particularly important as organizations move to the cloud, where users running sophisticated analytical applications can get into trouble, waste a lot of resources, and drive up consumption costs. “That's one of the concerns that's slowing evolution to the cloud: How do you control that cost? But those same concerns exist on-premises,” notes Teleran VP of Marketing Chris Doolittle.

Implementation and ongoing support become especially easy with ODA, which bundles the application, the operating system, the database, and automated system patching and maintenance in a pre-tuned environment. The in-a-box solution is particularly suited to small and midsized businesses.

How the U.S.’s Largest Health Insurer Created a Healthier Business

A large U.S. healthcare insurer needed to streamline its IT processes, especially data warehousing, to better support key business functions including finance, accounting, and customer service. The ODA-Teleran data and analytics solution met all the mandates around cost control, better service and value to the business, and protection of sensitive personal medical information. Once implemented, the insurer significantly lowered IT support and maintenance costs by leveraging ODA automation and Teleran’s real-time user management. Most strikingly, performance improved significantly: queries that had taken hours now run in minutes with ODA’s built-in performance tuning and Teleran’s real-time query management and optimization. And Teleran’s sensitive-data audit and real-time redaction and protection controls, designed specifically for analytics, ensure HIPAA compliance.

How a Brokerage Firm Increased Money Under Management

Sometimes improving data and analytics usage can also directly add to the bottom line. Teleran works with the business users and the analytics staff to correlate business KPIs with analytics and data usage KPIs.
In the case of a brokerage with 15,000 brokers and 2 million retail accounts, Doolittle says the client wanted to look at how the top 10% of its brokers actually leveraged their data and analytical applications to drive more money under management (MUM), a prime business KPI. With Teleran’s solution running on Oracle Exadata, the firm was able to evaluate the usage patterns of all the brokers and correlate them to the data accessed and analytics used. With the statistics gathered, the company identified the analytics, as well as the data, that the top brokers used to drive more business. The company was then able to train all its brokers on that best practice. Within 12 months, the client increased MUM by 5%.

No Matter Where Your Data Resides, Teleran Has a Solution

Many organizations are trying to figure out where they are on a journey to the cloud, or maybe even coming back from the cloud because they’ve found it cost-prohibitive or need on-premises data solutions for regulatory reasons. Wherever the analytics are happening, Teleran’s solutions can be there: Teleran ODA, Teleran Exadata, or Teleran Cloud. Because Oracle’s platforms are all designed on the same architecture, migrating to or from the cloud can be seamless and fast.

Data Protection Remains a Top Priority

Not only do companies need to get more value from their data, they need to protect it. Teleran’s solutions include real-time AI controls. Elaborates Doolittle, “We can manage the query flow for purposes of protection so we can block inappropriate access before it reaches the database.” He adds, “We also have real-time redaction that’s specifically tuned to address some of the gaps that occur with these powerful analytical tools, which can allow people to infer sensitive data without necessarily even seeing it because of the way they query the database.” That kind of behavior can be prevented with built-in AI.
“Our solutions are particularly tuned to these kinds of Wild West, highly heterogeneous application environments. We’ve got lots of data, lots of users, and lots of different kinds of applications going against it, so our controls can handle that complexity in a very simple way in terms of setting it up and managing it,” Doolittle concludes. If there is a violation, it can be addressed immediately and automatically with additional controls, while compliance violations are identified and reported at the same time. This is all done outside of the database, independent of the applications. To learn more about Teleran’s solutions for Oracle, visit the website.

Nathan Roseman is CEO and co-founder of Teleran. He is an entrepreneur and a recognized expert in networking technologies, security, and software applications. Prior to co-founding Teleran, Nathan was a founder and principal in Mosaic Investments, Inc., an investment management company that consulted to leading investment firms and technology companies. Nathan was also the founder and CEO of LAN Services, a leader in network systems and vertical market applications, and created the award-winning line of LANWare network management and security software products.

Chris Doolittle is Vice President of Marketing and co-founder of Teleran. Prior to forming Teleran, Chris was general manager of Information Builders’ advanced analytics software division. Chris also held management and business development positions at PepsiCo and General Electric. He has many years’ experience in software marketing and sales, product management, and strategy in analytics, data warehousing, and data security.


Engineered Systems

Exadata Economics: The Real Cost of Ownership

With all the benefits of Oracle Exadata, why do some people still run Oracle databases on other platforms? That’s a question I’ve been pondering, and while there is no single answer, a common perception is that Exadata is too expensive. In this post, we discuss the economics of operating databases and show why Exadata has the lowest total cost of ownership (TCO) for most enterprises.

The disconnect between those who believe Exadata is more expensive and those who believe it is less expensive comes from a difference in how you measure your costs. Exadata systems, when viewed as simple servers, are more expensive to acquire than build-your-own Linux servers from any vendor, including Oracle. But most experts agree you should not compare acquisition costs when evaluating technology—rather, you should look at the total cost of ownership. Several factors contribute to the lower Exadata TCO:

•    Rich/flexible elasticity
•    Much higher density database consolidation
•    Simpler integrated management, tuning, and support
•    Business value of new solutions

Rich/Flexible Elasticity

Oracle Exadata allows you to license a subset of the cores in your system and grow it as demand increases. In addition, Exadata elastic configurations allow you to independently add compute and storage servers as required. These expansions require no outages or forklift migrations. Together, these features dramatically simplify capacity planning: You purchase what you need and grow your system only when and if required. Also, because Exadata offloads work from database servers to storage servers, fewer database server cores and licenses can support the same database workloads. Move to a cloud environment and you get even finer-grained elasticity with pay-by-the-hour licensing.

Much Higher Density Database Consolidation

Improved consolidation density is the source of some of the greatest savings in an Exadata system.
Because of the system’s extreme performance and database-aware resource management capabilities, Exadata provides the highest consolidation density in the industry. With fewer resources to manage comes a reduction in the management burden—there are fewer physical servers, virtual servers, operating systems, and databases to manage. License costs are also lower. Finally, fewer resources directly reduce data center operational costs such as power, cooling, and floor space.

Simpler Integrated Management, Tuning, and Support

Customers running Exadata also spend less on management and administration. Unlike a generic server, Exadata is designed specifically for running Oracle Databases. Best practices are built in. The system is tuned specifically for the database and provides many advanced features that just work—no administration required. For example, Exadata figures out what to cache, what work to prioritize, and how to most quickly recover from both soft and hard failures, without time-consuming administration. Lastly, Exadata costs less to support. The single-vendor solution eliminates the need for vendor management, and the included Platinum Support offloads common maintenance.

Business Value of New Solutions

A final input to your total cost of ownership is the business value of the solution. Oracle Exadata, with its improved performance and manageability, allows customers to do things that were previously impossible. Customers can build new real-time business processes, embed deeper analysis into their processes, and get better answers to more sophisticated queries more quickly. For example, top financial services companies use Oracle Exadata to near-instantly analyze payments for risk across petabytes of data. Exadata also provides the highest levels of availability—think of your cost of downtime and what this means to your business.

Want more proof? In 2016, IDC quantified the savings of running Exadata.
They found Exadata provided a 429% five-year ROI with hundreds of thousands of dollars in savings. Although the study is a few years old, the methodology still applies. With the recent enhancements in the X8M, it’s likely the results today are even more compelling. It’s no wonder some of our largest customers have standardized all Oracle database deployments on Exadata.

This is the eighth in a series of blog posts celebrating the 10th anniversary of the introduction of Exadata, exploring the unique features of Exadata and why they are important. Next, we will look at the key role Exadata is playing for customers transitioning their workloads to the cloud.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the information technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Cloud Backup in 3 Easy Steps

IDC predicts that by 2025, the world will produce 175 ZB of data, up from about 33 ZB at the end of last year. That’s a staggering growth rate of 61 percent. IDC Senior VP David Reinsel explains that one zettabyte equals a trillion gigabytes. Multiply that by 175, and you realize that’s an almost incomprehensible amount of data. But the only data that really matters is your business’s data. Your business doesn’t run without it. It also represents a big chunk of your IT budget.

Can Backup Really Be So Easy?

You know that you need to back up your data to protect it against physical failures like component or network failure or file deletion and corruption. Logical failures like operational or human errors present another risk. Finally, your business can experience site failures. Wouldn’t it be nice if backup were as simple as 1, 2, 3?

Enter Oracle Database Appliance (ODA) Easy Cloud Backup, which integrates ODA and Oracle Cloud Infrastructure (OCI) Cloud Backup. It allows you to back up to Oracle Cloud through a single interface and develop an effective recovery strategy. Plus, you don’t have to maintain any hardware on-premises to store the backup; it all happens in the cloud. Oracle keeps your backup data safe for recovery when you need it. Using Oracle Recovery Manager (RMAN), you can make online backups of an Oracle Database without bringing the database down. Rather than one large backup once a day, the cloud backup backs up the database archive every 15 minutes, so you have the confidence that your data is always up to date.

It really is as easy as 1, 2, 3 to set up your backup policy:

1.    Log into Oracle Database Appliance and store your cloud credentials.
2.    Create a backup policy.
3.    Attach it to a database.

Recovery is just as easy. Just log into the ODA interface, then choose to recover from the latest backup available or select one from an earlier point in time.
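The three-step flow above can be sketched in code. This is a minimal illustration only; the `OdaBackupManager` class and its method names are hypothetical stand-ins for the ODA interface, not Oracle’s actual API:

```python
# Hypothetical sketch of the ODA Easy Cloud Backup flow described above.
# OdaBackupManager and its methods are illustrative names only.

class OdaBackupManager:
    def __init__(self):
        self.credentials = None
        self.policies = {}
        self.attachments = {}   # database name -> policy settings

    # Step 1: store your cloud credentials
    def store_credentials(self, tenancy, user, key_fingerprint):
        self.credentials = {"tenancy": tenancy, "user": user,
                            "fingerprint": key_fingerprint}

    # Step 2: create a backup policy (archive shipped every 15 minutes)
    def create_policy(self, name, destination="oci_object_storage",
                      archive_interval_minutes=15, retention_days=30):
        if self.credentials is None:
            raise RuntimeError("store cloud credentials first")
        self.policies[name] = {
            "destination": destination,
            "archive_interval_minutes": archive_interval_minutes,
            "retention_days": retention_days,
        }

    # Step 3: attach the policy to a database
    def attach(self, database, policy_name):
        self.attachments[database] = self.policies[policy_name]

mgr = OdaBackupManager()
mgr.store_credentials("mytenancy", "backup_admin", "aa:bb:cc")
mgr.create_policy("daily-to-oci")
mgr.attach("sales_db", "daily-to-oci")
print(mgr.attachments["sales_db"]["archive_interval_minutes"])  # 15
```

Because the archive is shipped to the cloud every 15 minutes, the worst-case recovery point with such a policy is roughly 15 minutes of activity.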
Not only that, but there are no upfront hardware costs to worry about. You pay only for the storage you use, without having to worry about running out of tapes or local storage. Because it’s the cloud, you can scale capacity up or down to meet your business’s needs.

It’s Perfect for a Wide Range of Use Cases

Enterprises can take advantage of ODA Easy Cloud Backup for a wide variety of uses:

•    Active archive storage for data that needs to be accessed only infrequently
•    Database cloning when you need a copy of a database from a particular point in time
•    Lift-and-shift workload migrations to move workloads from on-premises to the cloud
•    Backup and recovery to ensure that data can be recovered from any point in time after a failure
•    Test and development before rolling applications into production
•    Management of standby databases with Data Guard Disaster Recovery so production Oracle Databases survive disasters and data corruptions

ODA Simplifies Database Deployment and Backup and Recovery

Oracle Database Appliance is a fully integrated software and hardware solution purpose-built and optimized for Oracle Database and applications, speeding deployment, reducing risk, and providing capacity on demand. It eliminates all the complexity of do-it-yourself database setup and deployment as well as backup and recovery. Instead of assembling dozens of components from multiple vendors and continually testing, tuning, and reconfiguring, deployment takes just 30 to 90 minutes with one component, one button for installation, and one vendor for all your support. Plus, your solution can be maintained by a single DBA rather than a team of network, storage, and system admins—drastically reducing the time and resources needed for deployment, maintenance, and support. Optimized for Oracle Database and applications, ODA offers tight integration of all Oracle hardware, firmware, OS, database, and applications in a fully redundant platform designed for Real Application Clusters.
Proven best practices for configurations and database deployments are built in, as are high availability and disaster recovery.

ODA Easy Cloud Backup Provides Seamless Backup to Cloud

In a nutshell, here are the advantages of Oracle Database Appliance Easy Cloud Backup:

•    You’ll save money, with no hardware costs associated with on-premises data backup and no team of IT specialists to manage backup and recovery.
•    Reliability will increase over on-premises solutions.
•    It’s easy to set up and manage.
•    You can create archive backups for compliance.
•    Security is built in for your data, in transit and at rest in the cloud.

Watch this quick 2-minute video to see how easy Oracle Database Appliance Easy Cloud Backup is.


Engineered Systems

Oracle Brings the Cloud to You With Gen 2 Exadata Cloud at Customer

The future of data is in the cloud. But for organizations that must keep at least some of their data behind a firewall for business, regulatory, or network latency reasons, migrating this critical data to the public cloud is not an option. Introduced two years ago, Oracle Exadata Cloud at Customer is designed to help these organizations easily move business-critical database workloads to a cloud architecture and remove the obstacles to cloud adoption while keeping the data securely behind the in-house data center walls. With the arrival of Gen 2 Exadata Cloud at Customer, Oracle builds on its vision with a consolidated management interface for databases across public cloud and Cloud at Customer, as well as the latest Exadata X8 hardware.

New: Consolidated Interface

More than ever, Exadata Cloud at Customer delivers the full Exadata public cloud experience. It delivers database as a service in your data center and behind your firewall with public cloud hardware, software, and APIs, giving you an identical operational and financial model that interoperates seamlessly with the public cloud. Gen 2 Exadata Cloud at Customer now incorporates an Oracle Cloud Infrastructure (OCI) control plane, giving you a consolidated view of, and control over, systems and databases whether in the public cloud or Cloud at Customer. As with Exadata Cloud Service, Cloud at Customer means that Oracle experts deploy and manage the infrastructure, capacity scales elastically, and you benefit from a subscription model with hourly pay per use. In addition, your administrators get Oracle’s fine-grained security controls as well as customizable isolation and operational policies.

Built on Exadata X8

Gen 2 Cloud at Customer is built on Exadata X8, designed from the ground up to be the ideal database hardware, offering scale-out, database-optimized compute, networking, and storage for maximum performance at the lowest cost.
Exadata X8 smart system software leverages specialized algorithms to significantly improve all aspects of database processing, including online transaction processing, analytics, and consolidation.

Want to talk hardware? Exadata Cloud at Customer is equipped with scale-out two-socket database servers using the latest Intel Cascade Lake 26-core CPUs, giving your organization 50 cores available per server. Cloud at Customer comes with 720 GB of memory available per database service, double the default memory of on-premises Exadata deployments. You’ll also benefit from Oracle’s superfast unified InfiniBand internal fabric. Exadata’s storage servers run the latest 24-core Intel Cascade Lake CPUs, offering 50 percent more processing power than on-premises Exadata to offload database processing. You also get 25.6 TB of flash and 12 14-TB disk drives per storage server.

“Gen 2 Oracle Exadata Cloud at Customer is ideal for IT organizations that want a public cloud experience in their own data centers—not just managed hardware, but a full-blown public cloud system with the same hardware, software, control plane, and services as the public cloud,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “Our research shows that organizations realized average benefits worth $1.93 million per organization per year, a 356-percent ROI, and a breakeven point of six months when using Exadata Cloud at Customer. In IDC’s opinion, customers seeking a proven, production-hardened on-premises cloud services solution should evaluate Oracle today.”

Mark Peters, Principal Analyst and Practice Director at Enterprise Strategy Group (ESG), concurs, writing: “Exadata is the perfect ‘Engineered System’ to run Oracle Database. It delivers the performance, scalability, availability, and security that users invariably demand for their critical business and consolidated database environments.
Offering that same purpose-built application platform to users in their own data centers allows them (when appropriate, by choice and/or constraint) to enjoy the best of both the on-premises/private cloud and public cloud worlds. It’s a case of ‘and,’ not ‘or.’”

And Marc Staimer, President and CDS of Dragon Slayer Consulting, concludes, “Oracle has raised the standard of what a managed DBaaS on-prem service should be by delivering their second-generation Exadata Cloud at Customer. When it comes to performance, flexibility, security, simplicity, and OpEx, there is simply no comparison at this time.”

Your organization may have several obstacles to full public cloud deployment: regulatory or corporate policies requiring data to remain local to a territory or corporation, applications that require the performance offered by a local LAN, or databases that are tightly coupled with on-premises applications and infrastructure. Thanks to Exadata Cloud at Customer, these obstacles simply disappear. What’s more, with its full compatibility with on-premises databases, Exadata Cloud at Customer makes migration easy and low-risk, with or without downtime. If you can’t come to the cloud, Oracle Cloud at Customer brings the cloud to you. To learn more about Oracle Exadata Cloud at Customer, visit oracle.com/engineered-systems/exadata/cloud-at-customer.
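For readers who want to relate IDC’s headline numbers to each other, the standard ROI definition is easy to state in code. This is a back-of-the-envelope sketch only; IDC’s actual model (cost breakdown, discounting, exact time horizon) is not reproduced here:

```python
# Standard ROI definition, used to illustrate what a figure like the
# quoted "356-percent ROI" implies. The 4.56x ratio below is derived
# from the definition, not taken from IDC's study directly.
def roi_percent(total_benefits, total_costs):
    """ROI = (benefits - costs) / costs, expressed as a percentage."""
    return (total_benefits - total_costs) / total_costs * 100

# A 356% ROI means cumulative benefits of about 4.56x cumulative costs:
print(round(roi_percent(4.56, 1.0)))  # 356
```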


Engineered Systems

News from Oracle OpenWorld 2019: Announcing Oracle Database Appliance X8-2

Your organization’s databases are fundamental to success, holding your most critical assets and supporting mission-critical applications. They also represent a significant part of your IT spend, so optimizing deployment is key. Oracle Database Appliance is a fully integrated solution purpose-built and optimized for Oracle Database and applications, speeding deployment, reducing risk, and providing capacity on demand to reduce licensing costs. Announced at Oracle OpenWorld 2019, Oracle Database Appliance X8-2 is our most powerful and flexible system yet. Built on the latest 2.3 GHz, 16-core Intel® Xeon® Gold 5218 processors, our 7th-generation solution offers up to 369 TB SSD or up to 92 TB SSD / 504 TB HDD (raw), plus more physical network ports, to support Oracle Database 19c as well as prior versions.

Simplicity by Design

Building your own hardware solution can take months or longer and requires assembling dozens of components from multiple vendors, resulting in ongoing cycles of testing, tuning, and reconfiguring. Oracle Database Appliance eliminates that complexity with deployment that takes just 30 to 90 minutes. Rather than piecing together server, storage, networking, and database with consultants, you have one component for installation. Instead of pursuing finger-pointing vendors to resolve issues, you make only one call for all support. Also, your solution can be maintained by a single DBA without the need for a team of network, storage, and system admins—drastically reducing the time and resources needed for deployment, maintenance, and support.

Optimized for Oracle Database

Oracle Database Appliance provides everything necessary for a database solution in a single appliance, right out of the box. As a single-vendor solution, Oracle Database Appliance offers tight integration of all Oracle hardware, firmware, OS, database, and applications.
Taking it even further, the X8-2-HA offers a fully redundant platform designed for Real Application Clusters. Proven best practices for configurations and database deployments are built in, as are high availability and disaster recovery. Integration and optimization also enhance security:

•    Complete in-house hardware design: ODA’s motherboard, BIOS, and service processor firmware are designed from the ground up by Oracle engineers with manufacturing oversight.
•    Security out of the box: Provides the required RPMs to run the stack and software vulnerability scans.
•    Secure Oracle operating systems: Our ultra-secure operating systems default to the highest levels of security.
•    Timely system updates: Quarterly patching for the entire stack.

Affordability Built In

When you build your own solution, you have to anticipate and provide for future capacity, making it easy to waste valuable IT spend on capacity you don’t yet need. By contrast, Oracle Database Appliance is a completely flexible solution that allows you to scale up processor cores when you’re ready. Licensing software as you grow can provide significant savings. And since Oracle Database Appliance is cloud-ready, you can continue to save well into the future. Not only can your staff run the same stack in the Oracle Cloud as they do on-premises, but they can also leverage the same skills and standards in both locations. Cloud integration means you can back up critical data in the cloud and archive non-critical data quickly and affordably. Additional Oracle Cloud services include active archive storage, database cloning, lift-and-shift workload migrations, and disaster recovery. With core-to-edge integration, Oracle engineered systems don’t merely solve network latency; they speed database and application performance and analytics to provide real-time, actionable information that you can leverage to propel growth. Mitsubishi Aluminum Co.
Ltd., for example, found that consolidating six databases into Oracle Database Appliance meant processing sales and production data 40% faster, generating reports 30x faster, and ensuring business continuity. What will your organization do with the time, energy, and resources that Oracle Database Appliance will liberate? To learn more about Oracle Database Appliance X8-2, visit https://www.oracle.com/engineered-systems/ODA. Read more on the latest Oracle next-generation innovations across its data management portfolio.


Engineered Systems

Debunking Misleading PMEM and Cloud Adjacent Vendor Assertions

There has been a lot of hype since Intel introduced Optane persistent memory (PMEM). Cutting through the confusing vendor assertions requires a brief overview of what PMEM is and what it is not.

Optane PMEM comes in two flavors. The first is a non-volatile memory dual inline memory module (NVDIMM). The second is an NVMe SSD form factor more commonly called storage class memory (SCM). Optane PMEM’s value comes from lower latency and higher performance than NAND flash, much greater write wear-life (a.k.a. endurance), and the same non-volatility as flash: Data remains unchanged when power is lost. Optane PMEM is, however, noticeably slower than standard volatile DRAM.

Optane PMEM NVDIMMs have two modes, Memory Mode and Application Direct Mode. Application Direct Mode, or AppDirect, requires the application and/or the file system to be modified to place data directly in and out of Optane PMEM. Not many applications or file systems do this yet; the new Oracle Exadata X8M is one of the few. In Memory Mode, the PMEM sits behind DRAM, with the DRAM acting as a first-in-first-out cache in front of the Optane PMEM, and applications have no control over data placement. Memory Mode is the more common implementation because it requires no application changes, and it is how PMEM is typically implemented in servers.

Many storage vendors have jumped on the Optane PMEM bandwagon, implementing SCM SSDs as caching storage drives in their storage systems. SCM allows them to claim lower latencies and more IOPS. However, that performance is significantly less than PMEM NVDIMMs deliver and is likely to be inconsistent under load. More importantly, it is not going to be as fast, or as consistently fast, as Exadata X8M. Here’s why. The Exadata Database Server running Oracle Database 19c accesses the Optane PMEM directly in the Exadata Storage Servers.
It leverages RDMA over Converged Ethernet (RoCE) at 100 Gbps on the internal Exadata interconnect, bypassing the network stack, storage controller, IO software, interrupts, and context switches. From this architecture, Exadata X8M derives a consistent latency of 19µs or less and as many as 16 million 8K SQL IOPS per rack. Many database functions and all storage functions are handled by the Exadata Storage Servers, freeing up the Exadata Database Servers for more performance. All Exadata Database Servers can access ALL Exadata Storage Servers’ PMEM. Each Storage Server can have up to 1.536 TB of Optane PMEM NVDIMMs, for up to 21.5 to 27 TB per Exadata rack. All PMEM is auto-mirrored for resiliency.

Contrast the Exadata PMEM architecture with the Optane PMEM storage class memory (SCM) approach common to standalone storage systems, or with a standalone database server utilizing Optane PMEM NVDIMMs in Memory Mode rather than Application Direct Mode. These implementations suffer longer latencies for IO operations or are restricted by limited PMEM scalability.

The standalone storage system has a much different path and lower performance characteristics. The database server IO connects to the storage system over an external switched network. It will likely utilize NVMe-oF on Fibre Channel or Ethernet to get the lowest possible latency from that network, but may not. NVMe-oF utilizes RDMA, which enables two computers on the same network to exchange memory contents without involving the processors. RDMA is designed to minimize network latencies; however, that depends on several factors, one of which is the network chosen. RDMA on InfiniBand, Fibre Channel (FC), and Ethernet utilizing RoCE is built into the NIC silicon, providing the lowest latencies. The NVMe/TCP version on Ethernet is software-driven (slower than silicon-based) and, since it runs on a layer 3 network, has the potential for network congestion, delivering inconsistent performance.
All of this causes higher latency, which is why NVMe-oF utilizing TCP is not recommended for applications requiring consistently high performance. Most storage systems are focused on NVMe-oF on 32 Gbps FC and, in some cases, RoCE on 40 Gbps Ethernet; only two are utilizing InfiniBand. Because the layer 2 network fabric is shared, it can develop hot spots that slow performance. That’s just one variable performance issue; there are others as well. When the database IO hits the storage system, it has multiple layers to go through. The RDMA only bypasses the primary CPU to go directly to DRAM. From there, it has additional latency stages to traverse, including the PCIe controller, the PCIe bus, and the SCM SSD cache. The SCM storage architecture cannot avoid storage network or fabric issues, system interrupts, DRAM congestion, context switches, and storage system processing slowdowns for storage-intensive functions such as snapshots, thin provisioning, data deduplication, compression, replication, and RAID rebuilds. As a result, latencies are very inconsistent, generally ~100µs, or at minimum more than 5X slower than Exadata X8M.

Some storage vendors utilize SCMs as a storage tier faster than NAND flash SSDs but slower than DRAM. They utilize AI/ML to determine which data blocks go where based on past history. However, this methodology is flawed. Data access patterns change from day to day; the blocks of data that were hot today likely will not be hot tomorrow. In addition, blocks do not correlate with the data hotness of the Oracle Database, which operates at the table, index, or partition level. Whereas the Oracle Database knows which tables, indexes, partitions, and other database structures are hot, standalone storage systems do not. Exadata systems know which data is hot because they are co-engineered with the Oracle Database. The server system has a different path and different limitations.
Server systems can take advantage of Optane PMEM NVDIMMs, but they have to run them in Memory Mode as a cache in front of flash SSDs and/or HDDs. The database writes and reads through the server to DRAM, where data is cached and moved over time to PMEM as the DRAM cache fills and ages. As the PMEM cache fills and ages, it then flushes the data to flash SSDs and potentially HDDs. However, the server architecture has significant scalability and availability limitations. Optane PMEM capacity is limited by the number of DIMM slots, which must be shared with DRAM DIMMs, and there are only so many DIMM slots in a server. Availability is another issue because the Optane PMEM in other servers in a cluster is not a shareable pool, which limits data redundancy and high availability options. Data protection functions that are CPU- and memory-intensive, such as snapshots, replication, thin provisioning, deduplication, compression, and RAID rebuilds, all reduce database performance while running. This is because they share the same CPUs and memory, providing inconsistent latency and response times or requiring those functions to be performed in low-usage timeframes. Server vendors do not currently publish their database performance numbers or latencies because they depend on too many factors, including whether other applications are running on the server; it’s difficult for them to predict application performance consistency. Therefore, whenever a storage or server vendor says that their Optane PMEM implementation is equal to or faster than Exadata X8M for running the Oracle Database, it’s factually incorrect. Period. In many situations, it is not even close. Exadata smokes every server, storage system, and HCI appliance on sale in the industry today; it outperforms all of them.

Cloud adjacent is another popular yet misunderstood technology.
Cloud adjacency came about because many public clouds cannot or do not provide enough storage performance for mission-critical applications, or because performance, regulatory, legal, or data sovereignty requirements mean the customer's data must remain under its control at all times and cannot be stored in a public cloud. Storage vendors have been selling cloud-adjacent setups for a few years. The storage can be owned by the customer or delivered as a managed service from the storage vendor. The storage system is placed in an Equinix (or similar) data center located geographically near, or in the same facility or building as, the targeted public cloud, and that data center is connected to the public clouds with a very high-speed 10Gbps connection. Storage systems from NetApp, Dell EMC, Infinidat, HPE, and others deliver much better performance than the block or file storage available within the AWS, Microsoft Azure, and Google public clouds. The issue is distance and the associated latency. There is no getting around speed-of-light latency, so it is paramount that the cloud-adjacent data center be as close as possible to the target public cloud. This becomes especially evident with databases: a single database transaction kicks off dozens of I/Os to the storage system. The AWS website specifically states that each RDS database transaction generates approximately 30 storage I/Os. That is a lot of round trips between the public cloud and the cloud-adjacent storage system, and because latency is additive, application response times can become unacceptable. This is the problem Oracle solves with Exadata in a cloud-adjacent architecture. Customers can purchase or utilize Exadata and have the system installed in the Equinix data center closest to the targeted public cloud provider (AWS, Azure, or Oracle).
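The back-of-the-envelope math here is simple to sketch. The ~30 storage I/Os per database transaction comes from the text; the 0.5 ms round-trip time between the public cloud and the adjacent facility is an assumed figure purely for illustration.

```python
# Cloud-adjacent latency arithmetic. IOS_PER_TXN comes from the AWS RDS
# guidance cited in the text; ROUND_TRIP_MS is an assumed, illustrative
# round-trip time between the public cloud and the adjacent data center.

ROUND_TRIP_MS = 0.5   # assumed cloud <-> adjacent-facility round trip
IOS_PER_TXN = 30      # storage I/Os per database transaction (per the text)

# Storage-only in the adjacent facility: every storage I/O crosses the link.
storage_adjacent_ms = IOS_PER_TXN * ROUND_TRIP_MS

# Whole database system in the adjacent facility: the application makes one
# round trip per transaction, and the storage I/Os stay local.
database_adjacent_ms = 1 * ROUND_TRIP_MS

print(f"storage adjacent:  {storage_adjacent_ms:.1f} ms of link latency per transaction")
print(f"database adjacent: {database_adjacent_ms:.1f} ms of link latency per transaction")
```

Whatever the actual link latency turns out to be, multiplying it by one round trip instead of thirty is the entire argument for placing the database, not just the storage, adjacent to the cloud.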
With the Exadata cloud-adjacent architecture, each database transaction requires a single round trip instead of dozens, or as many as 30. Putting the entire Exadata system in the Equinix data center provides much better application response time than placing only the storage there, because a database talking to cloud-adjacent storage incurs far more cumulative latency than an application server talking to a cloud-adjacent Exadata. Once again, if a storage vendor pushing cloud adjacency claims to be as fast as or faster than an Exadata cloud-adjacent architecture, they are simply blowing smoke. For more information on Oracle Exadata, go to the Oracle Exadata paper, sponsored by Oracle. About Dragon Slayer Consulting: Marc Staimer, as President and CDS of the 21-year-old Dragon Slayer Consulting in Beaverton, OR, is well known for his in-depth and keen understanding of user problems, especially with storage, networking, applications, cloud services, data protection, and virtualization. Marc has published thousands of technology articles and tips from the user perspective for internationally renowned online trades including many of TechTarget's Searchxxx.com websites and Network Computing and GigaOM. Marc has additionally delivered hundreds of white papers, webinars, and seminars to many well-known industry giants such as Brocade, Cisco, DELL, EMC, Emulex (Avago), HDS, HPE, LSI (Avago), Mellanox, NEC, NetApp, Oracle, QLogic, SanDisk, and Western Digital. He has additionally provided similar services to smaller, less well-known vendors and startups including Asigra, Cloudtenna, Clustrix, Condusiv, DH2i, Diablo, FalconStor, Gridstore, ioFABRIC, Nexenta, Neuxpower, NetEx, NoviFlow, Pavilion Data, Permabit, Qumulo, SBDS, StorONE, Tegile, and many more. His speaking engagements are always well attended, often standing room only, because of the pragmatic, immediately useful information provided.
Marc can be reached at marcstaimer@me.com, (503)-312-2167, in Beaverton OR, 97007.


Latest Tech Trends, Their Problems, And How to Solve Them

Few IT professionals are unaware of the rapid emergence of 5G, the Internet of Things (IoT), edge-fog-cloud (or core) computing, microservices, and artificial intelligence/machine learning (AI/ML). These new technologies hold enormous promise for transforming IT and the customer experience through the problems they solve. It is important to realize that, like all technologies, they introduce new processes and, with them, new problems. Most people are aware of the promise; few are aware of the new problems and how to solve them. 5G is a great example. It delivers 10 to 100 times more throughput than 4G LTE and up to 90% lower latency. Users can expect throughput between 1 and 10Gbps with latencies of approximately 1 ms. This enables large files, such as 4K or 8K videos, to be downloaded or uploaded in seconds rather than minutes. 5G will deliver mobile broadband and could make traditional broadband obsolete, just as mobile telephony has eliminated the vast majority of landlines. 5G mobile networking also makes industrial IoT more scalable, simpler, and far more economically feasible. Whereas 4G is limited to approximately 400 devices per km², 5G supports approximately 1,000,000 devices per km², roughly a 2,500-fold increase. That performance, latency, and scalability are why 5G is being called transformational. But 5G also introduces significant issues, a key one being the database application infrastructure. Analysts frequently cite the non-trivial multi-billion-dollar investment required to roll out 5G. That investment is primarily focused on antennas and the fiber optic cables feeding them, because 5G is based on a completely different technology than 4G: it uses millimeter waves instead of microwaves. Millimeter waves are limited to roughly 300 meters between antennas, whereas 4G microwave antennas can be as far as 16 km apart.
That major difference demands many more antennas, and optical cables to those antennas, for 5G to work effectively. It also means it will take considerable time before rural areas are covered by 5G, and even then it will be a degraded 5G. The 5G infrastructure investment not being addressed is the database application infrastructure. The database is a foundational technology for analytics; IT pros simply assume it will be there for their applications and microservices. Everything today is interconnected. The database application infrastructure is generally architected for the volume and performance coming from the network, and that volume and performance are going up by an order of magnitude. What happens when the database application infrastructure is not upgraded to match? User-visible performance improves marginally or not at all; it can even degrade as volumes overwhelm database applications unprepared for them. Both consumers and business users become frustrated. 5G devices cost approximately 30% more than 4G devices, mostly because they need both a 5G and a 4G modem (the technologies are not compatible), and the 5G network costs approximately 25% more than 4G. It is understandable that anyone would be frustrated when spending considerably more and seeing limited, zero, or even negative improvement. The database application infrastructure becomes the bottleneck. When consumers and business users become frustrated, they go somewhere else: another website, another supplier, another partner. Business will be lost. Fortunately, there is still time: the 5G rollout is just starting, with momentum building in 2020 and complete implementations not expected until 2022 at the earliest. However, IT organizations need to start planning their application infrastructure upgrades to match the 5G rollout, or they may end up suffering the consequences. IoT is another technology that promises to be transformative.
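The 5G figures above can be sanity-checked with a quick calculation. The density and peak-throughput numbers come from the text; the 20 GB file size and the 100 Mbps 4G throughput are assumptions chosen only to illustrate the "minutes versus seconds" claim.

```python
# Quick sanity check on the 5G figures cited above; treat these as the
# article's round numbers plus a couple of stated assumptions, not as
# measurements.

devices_4g_per_km2 = 400
devices_5g_per_km2 = 1_000_000
density_gain = devices_5g_per_km2 / devices_4g_per_km2   # 2,500x

# Time to move a 20 GB 4K/8K video file (assumed size) at each generation's
# ballpark throughput.
file_bits = 20 * 8 * 10**9          # 20 GB expressed in bits (decimal GB)
t_4g_s = file_bits / (100 * 10**6)  # 100 Mbps 4G LTE (assumed rate)
t_5g_s = file_bits / (10 * 10**9)   # 10 Gbps 5G peak (per the text)

print(f"device density gain: {density_gain:,.0f}x")
print(f"20 GB file: ~{t_4g_s / 60:.0f} min on 4G vs ~{t_5g_s:.0f} s on 5G")
```

Under these assumptions the same file takes roughly 27 minutes on 4G and 16 seconds at the 5G peak rate, which is exactly the order-of-magnitude jump the database application infrastructure must be prepared to absorb.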
IoT pushes intelligence to the edge of the network, enabling automation that was previously unthinkable: smarter homes, smarter cars, smarter grids, smarter healthcare, smarter fitness, smarter water management, and more. IoT has the potential to radically increase efficiency and reduce waste. Most implementations to date have been in consumer homes and offices, relying on the WiFi in the buildings in which they reside. Industrial implementations have not been as successful, yet. Per Gartner, 65 to 85% of industrial IoT projects to date have been stuck in pilot mode, 28% of them for more than two years. There are three key reasons. The first is 4G's limit of approximately 400 devices per km², which will be fixed as 5G rolls out. The second is the same issue facing 5G: a database application infrastructure not suited to the volume and performance industrial IoT requires. The third is latency from the IoT edge devices to the analytics, whether in the on-premises data center (core) or the cloud. Speed-of-light latency is a major limiting factor for real-time analytics and real-time actionable information. This has led to the very rapid rise of edge-fog-cloud (or core) computing. Moving analytic processing out to the edge or fog significantly reduces the distance latency between where data is collected and where it is analyzed. This is crucial for applications such as autonomous vehicles, where decisions must be made in milliseconds, not seconds. The application may have to decide whether a shadow in the road is actually a shadow, a reflection, a person, or a dangerous hazard to be avoided, and it must make that decision immediately; it cannot wait. By pushing the application closer to the data collection, it can make that decision in the timely manner that is required.
Smart grids, smart cities, smart water management, and smart traffic management are all examples requiring fog (near-the-edge) or edge computing analytics. This solves the problem of distance latency; however, it does not resolve analytical latency. Edge and fog computing typically lack the resources to provide ultra-fast database analytics, which has led to the deployment of microservices. Microservices have become very popular over the past 24 months. They tightly couple a database application with a database that has been streamlined to do only the few things the microservice requires. That database may be a stripped-down relational, time series, key value, JSON, XML, or object database, among others. The database application and its database are inextricably linked, and the combined microservice is pushed down to the edge or fog compute device and its storage. Microservices have no access to any other microservice's data or database. If one needs access to another microservice's data element, the integration is difficult and labor-intensive: each microservice must be reworked to grant that access, or the data must be copied and moved via an extract, transform, and load (ETL) process, or the data must be duplicated on an ongoing basis. Each of these options is laborious, albeit manageable, for a handful of microservices. But what about hundreds or thousands of microservices, which is where things are headed? That sprawl becomes unmanageable and, ultimately, unsustainable, even with AI/ML. AI/ML is clearly a hot tech trend today, showing up in many applications because standard CPUs are now powerful enough to run AI/machine-learning algorithms. AI/ML typically appears in one of two variations. The first has a defined, specific purpose: the vendor uses it to automate a manual task requiring some expertise. An example of this is found in enterprise storage.
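The sprawl argument can be made quantitative with a small sketch. If some fraction of all ordered microservice pairs need a dedicated data feed (ETL pipeline or ongoing copy), the number of pipelines grows quadratically with the number of services. The 10% share used below is an assumption purely for illustration, as is the helper function itself.

```python
# Illustration of why per-microservice data copying does not scale: if, say,
# 10% of all ordered (producer -> consumer) microservice pairs need a
# dedicated ETL feed, the pipeline count grows quadratically with the number
# of services. The 10% share is an assumed figure for illustration only.

def etl_pipelines(n_services: int, share: float = 0.10) -> int:
    """Ordered pairs of services, times the share that need a data feed."""
    return round(n_services * (n_services - 1) * share)

for n in (5, 50, 500, 4000):
    print(f"{n:>5} microservices -> ~{etl_pipelines(n):,} ETL pipelines to maintain")
```

A handful of services needs a couple of feeds; thousands of services need over a million, which is the point at which manual rework, copying, and duplication stop being an engineering task and become an impossibility.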
There, the AI/ML is tasked with placing data based on performance, latency, and data protection policies and parameters set by the administrator, and matching them to the hardware configuration. If performance falls outside the desired parameters, the AI/ML corrects the situation without human intervention; it learns from experience and automatically makes changes to achieve the required performance and latency. The second variation of AI/ML is a toolkit that enables IT pros to create their own algorithms. The first is an application of AI/ML; it obviously cannot be used outside the tasks it was designed for. The second is a set of tools that requires considerable knowledge, skill, and expertise to use; it is not an application but merely enables applications to be developed that take advantage of the AI/ML engine, and it carries a very steep learning curve. Oracle is the first vendor to solve every one of these tech-trend problems. The Oracle Exadata X8M and Oracle Database Appliance (ODA) X8 are uniquely suited to solve the 5G and IoT database application infrastructure problem, the edge-fog-core microservices problem, and the AI/ML usability problem. It starts with co-engineering: the compute, memory, storage, interconnect, networking, operating system, hypervisor, middleware, and Oracle Database 19c are all co-engineered together. Few vendors have complete engineering teams for every layer of the software and hardware stacks to do the same, and those that do have shown no inclination to take on the intensive co-engineering required. Oracle Exadata alone has 60 exclusive database features found in no other database system, including others running the same Oracle Database. Take, for example, Automatic Indexing, which indexes orders of magnitude faster than the most skilled database administrator (DBA) and delivers noticeably superior performance. Another example is data ingest.
Extensive parallelism is built into every Exadata, providing unmatched data ingest rates. And keep in mind that the Oracle Autonomous Database uses this exact same Exadata Database Machine. The results of that co-engineering are unprecedented reductions in database application latency and response time, along with major performance increases. This prepares the database application infrastructure for the volume and performance of 5G and IoT. The ODA X8 is ideal for edge or fog computing, coming in at approximately 36% lower total cost of ownership (TCO) over three years than commodity white-box servers running databases. It is designed to be a plug-and-play, turnkey Oracle Database appliance; it runs the database application as well. Nothing is simpler, and no white-box server can match its performance. The Oracle Exadata X8M is even better for core or fog computing, where its performance, scalability, availability, and capability are simply unmatched by any other database system. It too is architected to be exceedingly simple to implement, operate, and manage. The two working in conjunction across the edge, fog, and core make the database application latency problems go away. They even solve the microservices problems. Both the Oracle Exadata X8M and ODA X8 provide pluggable databases (PDBs). Each PDB is its own unique database working off the same stored data in the container database (CDB), and each PDB can be the same or a different type of Oracle Database: OLTP, data warehousing, time series, object, JSON, key value, graphical, spatial, XML, even document databases. The PDBs work on virtual copies of the data. There is no data duplication, no ETL, no data movement, and no data islands. There are no runaway database licenses and no database hardware sprawl, and data does not go stale before it can be analyzed.
Any data that needs to be accessed by one or more PDBs can easily be configured for that access. Edge-fog-core computing is solved. If the core needs to be in a public cloud, Oracle solves that problem as well: the Oracle Autonomous Database provides the same capabilities as Exadata, and more. That leaves the AI/ML usability problem, and Oracle solves that one too. Both Oracle Engineered Systems and the Oracle Autonomous Database have AI/ML engineered inside from the outset, not bolted on as a toolkit. Oracle AI/ML comes with pre-built, documented, production-hardened algorithms in the Oracle Autonomous Database cloud service. DBAs do not have to be data scientists to develop AI/ML applications. They can simply use the extensive Oracle library of AI/ML algorithms spanning classification, clustering, time series, anomaly detection, SQL analytics, regression, attribute importance, association rules, feature extraction, text mining support, R packages, statistical functions, predictive queries, and exportable ML models. It is as simple as selecting the algorithms to be used and using them. That's it. There are no algorithms to create, test, document, QA, or patch. Taking advantage of AI/ML is as simple as implementing Oracle Exadata X8M, ODA X8, or the Oracle Autonomous Database. Oracle solves the AI/ML usability problem. The latest tech trends of 5G, industrial IoT, edge-fog-core (or cloud) computing, microservices, and AI/ML have the potential to be truly transformative for IT organizations of all stripes, but they bring their own set of problems. Fortunately, for organizations of all sizes, Oracle solves those problems.


Engineered Systems

Taking Analytics to the Edge: Moving Processing to the Data Rather than Data to the Processing

By Marc Staimer Ground-breaking changes are happening on the edge of computing. We're long past the days when all analytics can be centralized in data centers or even in the cloud. It's an increasingly decentralized world where analytics has to take place in real time right where individual sensors are, or in the fog when there's a need to collect information from multiple devices for fast insights. We recently sat down to talk with renowned technology consultant Marc Staimer about computing in the edge, the fog, and the core. In part two of our conversation, we're going to take a more in-depth look at matching the analytics requirements to the location of the analysis, especially when those analytics need to take place on the edge or in the fog. Aggregating Data for Deeper Insights As we noted in part one, edge computing came about as a response to applications moving into the public cloud, where the centralized processing can lead to unacceptable latency. If you are dealing with a single device that needs only minimal analytics, edge computing can handle it. Sensors on consumer appliances are an example where edge computing makes sense. When actionable data from multiple devices is needed, the processing can take place closer to the edge with much lower latencies than going back to the core or cloud. This is fog computing. Fog computing devices are distributed near the edge and aggregate, analyze, sub-filter, and make decisions for multiple edge devices that have policy engines or AI/machine learning. Edge and fog computing can solve the intractable distance latency, but they cannot resolve the performance of the analytics engines themselves. When that actionable information is needed fast, Oracle Exadata and the Oracle Database Appliance (ODA) are ideally suited to handle this fog computing, according to Staimer. With built-in AI/machine learning, they can be placed close to the edge. Staimer cites an example of a highly automated robotic automobile factory.
It operates 24/7/365 and can't bring its servers down for any reason, so it has clustered servers that allow it to update, patch, or upgrade each server on a rolling basis without disrupting the workflow. And typically there are different car models or variations in that same factory, as well as other highly automated robotic automobile factories making other cars or lines. "What if all those factories and robots were able to learn from each other utilizing AI/machine learning? What if data from each factory floor was being collected, aggregated, and analyzed in real time? What problems could be avoided? What efficiencies could be gained? What quality could be improved? In reality, quite a bit," says Staimer. This smart factory is a perfect play for edge, fog, and core because the factories already have edge computing going on. Now they can aggregate data from all their factories to optimize production across every one of them. ODA and Oracle Exadata Address the Continuum of Analytics Requirements Why are Exadata and ODA well suited to help deal with the analytics problem? Explains Staimer, both are co-engineered with the database: "That co-engineering delivers the lowest possible OLTP latencies, and the fastest performance for every transaction versus any other database system." The second, and potentially more important, aspect is that these systems are multiple databases in one, tied to a single database engine and a single copy of the data. If the data needs to be analyzed by different types of databases, such as OLTP, data warehousing, mining, time series, key value (JSON), XML, graphical, object, and so on, it can be. Each database is a pluggable database (PDB) that rides on Oracle's unique multi-tenant container database (CDB). The data is organized virtually by each unique database without impacting or affecting the others.
That's huge because it enables database consolidation and eliminates excessive database licensing, duplicate database hardware infrastructure, complicated data protection and DR, duplicate data, and storage islands; best of all, the data doesn't have to be moved between databases via ETLs. The Rise of Edge Computing Around Microservices Requires a Better Solution This aggregation of multiple databases tied to a single engine and a single copy of the data solves a difficult problem for edge computing around microservices. Microservices are small applications, each with its own database, that are pushed out to the edge or fog. The problem is that each microservice is self-contained, with no access to the data or database on another device. If a microservice needs a data element from another microservice, that requires reworking the microservice and likely an extract, transform, and load (ETL) from one database to the other. All of this is a labor-intensive manual process and takes time, meaning the data becomes stale; it definitely does not happen in anything close to real time. Alternatively, the data must be duplicated to all of the microservices that might require it. Whenever data has to be copied and moved, there is an increased risk of data corruption and loss, and no one enjoys moving data. Besides taking time, it leads to rapidly escalating, out-of-control storage, networking, and even compute infrastructure costs. It might be manageable for a handful of microservices, but not when there are hundreds to thousands of them, as is typical of microservice sprawl with IoT. Oracle's pluggable databases (PDBs) eliminate that problem. As many as 4,000 microservices can be consolidated into PDBs with a single copy of the data. "That's a very big thing," emphasizes Staimer. You're not only saving time, you're also eliminating manual processes that let errors creep in.
The combination of Oracle's engineered systems that can live on-premises, in the cloud, in the fog, or on the edge, along with PDBs that streamline the processing, makes for solutions ideally suited to this new world of computing anywhere and everywhere. To learn more about Oracle Database Appliance and Oracle Exadata Machine, visit us online.


Engineered Systems

Computing in the Cloud, in the Fog, and on the Edge

The proliferation of the Internet of Things (IoT) has made it possible to collect and analyze data, and respond, in real time. Think about an autonomous vehicle that encounters an obstacle ahead. There must be a split-second response that determines whether that obstacle is a shadow or a person in the road. In a case like this, there’s no room for error, and no room for latency. Latency, a.k.a. delay, is the enemy of response time. To make this real-time response happen, the data analysis needs to take place near the sensor on the vehicle: at the edge of the network. We had an opportunity to sit down with renowned technology consultant Marc Staimer to talk about edge computing and how it’s being used with advanced technologies to overcome the latency issues around public cloud. Edge Computing Emerged as a Response to Latency in Public Cloud At its most basic definition, edge computing is simply having compute power and analytics close to the source of the data that’s being processed. According to Staimer, edge computing came about as a response to applications moving into the public cloud where the centralized processing can lead to unacceptable latency. Explains Staimer, “Every kilometer of distance between the device collecting data and the device processing and analyzing that data adds latency. Let’s put this in perspective. If the distance latency between a smart meter and the device processing that smart meter data in a public cloud is approximately 1,000 milliseconds, it creates a roundtrip delay of two seconds before accounting for the latency of the processing and analytics. Whereas, if the edge computing is much closer physically, it can reduce that distance latency to a few dozen milliseconds. 
In a metropolitan area that distance latency is likely to be no more than 50 milliseconds (depending on circuit miles), or 20 times less than cloud or core in the example just discussed.” Analytics Makes Edge Computing Critical Many IoT sensors have limited analytics capabilities. Instead, they send the data collected somewhere else to be processed. Monitors on a refrigerator or robot vacuum are examples of this type of analytics. Latency isn’t a big issue because split-second processing isn’t necessary. “The analytics is the issue. What is being done locally? Is the local processing being done for one device or multiple devices?  What decisions are required based on policy engines or AI/machine learning to be completed locally?” says Staimer. “In the case of the autonomous vehicle, it's computing for hundreds of sensors on that vehicle. It's analyzing, running analytics against that and making decisions based on machine-learning against its database, and it has to be in real time.” Low-level edge computing typically has minimal analytics, primarily dealing with a single device. For example, a wind turbine is not collecting data from other wind turbines. It's got its own data, and sends it somewhere to be aggregated centrally, at a database of some kind in the cloud or on-premise. The Emergence of the Fog Staimer goes on to explain that when the processing and analysis of multiple data devices takes place closer to the edge, it can provide actionable information in real time with much lower latencies. This is called fog computing. Fog computing devices are distributed near the edge and aggregate, analyze, sub-filter, and even make decisions for multiple edge devices if they have policy engines or, more importantly today, AI/machine learning. In the case of vehicles, sensor data related to driving and safety are processed in real time in the vehicle, which is the edge device. 
At the same time, traffic or performance-related data can be collected by each car, summarized in another edge process, and then further analyzed in a metropolitan-area fog covering a certain number of vehicles in near real time to improve traffic flows and fleet efficiency. And finally, non-time-sensitive data from both the edge and the fog can be sent in highly summarized form to the cloud for further analysis. With edge, fog, and cloud, you have a solution that takes into consideration the urgency of the analysis. Many analytics performance problems can be correlated to distance and network latencies. Others can be tied to the performance of the analytics engines. Edge and fog computing can solve the intractable distance latency, but they cannot resolve the performance of the analytics engines. The Fog Is Clearly the Growth Area "The fog is where there's a significant amount of growth," notes Staimer. "Traditionally, that would have been what was known as the edge: remote offices, branch offices, etc. And that's where analytics can take place." A good way to think of this fog computing is as pre-processing. Some of the analytics are done locally, but additional, more in-depth analytics can run in the core, where real-time responses are not as important. Where fast decision-making is a requirement, it's going to be in the edge or the fog. And these fog-based appliances don't need to be in a server room. They can live in a closet or in the base of a wind turbine, as an example. In fact, a single fog device could handle multiple wind turbines. "Oracle's got some very smart solutions for the edge, fog, cloud, or core computing space," says Staimer. "They have the Oracle Autonomous Cloud; Exadata or Oracle Database Appliance (ODA) on-prem, a more traditional CapEx implementation; and the Exadata Cloud at Customer managed service. Oracle additionally has an outstanding fog or edge play with its ODA.
Oracle's solutions are unique in that they are engineered to provide extremely low latency, fast analytics in real time. The built-in AI/machine learning automates and simplifies real-time decision making." Not all of the analytics are decentralized, adds Staimer. Just the portion that's required for real-time interactions. When fast actionable information is needed, there are solutions, such as the Oracle Exadata and the Oracle Database Appliance. These engineered systems are not just for the data center anymore. With built-in AI/machine learning, they can be placed close to the edge. There they can do analytics as required, make real-time decisions, and then pass off the results and pre-filtered data to the cloud or core for more intensive analytical processing. Oracle Engineered Systems are used in the Oracle Cloud, Cloud at Customer, in the fog, or on the edge in a closet. This enables the processing to be as close to the edge or as centralized as required. What it comes down to is time. Oracle Engineered Systems (Exadata and Oracle Database Appliance or ODA) are architected to save time. Time is saved by faster processing, database consolidation, and multi-database analytics on a single copy of the data, where the processing is moved to the data instead of the data moved to the processing. And as everyone knows, time is money. "Those time-saving processes are unique to the ODA and Oracle Exadata," concludes Staimer. The potential is so big here, we'll focus specifically on how to address the spectrum of analytics requirements in part two of our conversation with Marc Staimer. To learn more about Oracle Database Appliance and Oracle Exadata Machine, visit us online.
Marc has published thousands of technology articles and tips from the user perspective for internationally renowned online trade publications, including many of TechTarget’s Searchxxx.com websites, Network Computing, and GigaOM.  Marc has additionally delivered hundreds of white papers, webinars, and seminars to many well-known industry giants such as: Brocade, Cisco, DELL, EMC, Emulex (Avago), HDS, HPE, LSI (Avago), Mellanox, NEC, NetApp, Oracle, QLogic, SanDisk, and Western Digital.  He has also provided similar services to smaller, less well-known vendors/startups including: Asigra, Cloudtenna, Clustrix, Condusiv, DH2i, Diablo, FalconStor, Gridstore, ioFABRIC, Nexenta, Neuxpower, NetEx, NoviFlow, Pavilion Data, Permabit, Qumulo, SBDS, StorONE, Tegile, and many more.  His speaking engagements are always well attended, often standing room only, because of the pragmatic, immediately useful information provided. Marc can be reached at marcstaimer@me.com, (503)-312-2167, in Beaverton OR, 97007.


Engineered Systems

5 Ways Smart Technology Will Transform Retail Businesses

We spoke with Bryan Amaral, founder and CEO of Clientricity, LLC, a boutique retail consultancy based in Atlanta and New York, to capture his insights on emerging technology trends in retail for both brick-and-mortar stores and ecommerce.

Lately, headlines about the retail sector conjure apocalyptic images populated by empty storefronts and deserted shopping malls. But reports of retail’s death are greatly exaggerated, to paraphrase Mark Twain. What’s happening isn’t devastation—it’s transformation, driven by technology and more efficient business models. Ecommerce and in-store sales and interactions are becoming more seamless and more convenient for consumers. “What’s really happening in stores is facilitating a new range of buying behaviors: buy online, pick up in-store; reserve online, pick up in-store; buy online, return to store. All these types of workflows require a store, a physical place where people can go and try or buy a product,” says Amaral. He has been at the forefront of retail technology for decades, working with retailers like Saks Fifth Avenue, Neiman Marcus, Harrods, Brooks Brothers, and many others.

Technologies, especially smart technologies like artificial intelligence (AI), the internet of things (IoT), predictive analytics, and mobile, are figuratively removing the store walls. As technology infrastructure becomes more robust, retailers that can create agile operations and engaging in-store experiences, each enhanced by technology, can triumph over the wave of store closings. Here are five ways technology can make the in-store experience better for everyone.

Make on-the-money recommendations

Customers have preferences and patterns. When AI is trained to understand that data and make good predictions, retailers can achieve a new level of service, says Amaral. Let’s say you’re a clothing retailer.
An AI-powered app that maps new lines to certain lifestyles or career demands may merge those categories with information captured about a customer’s preferences or buying behavior. So, when Ashley arrives and you know she’s an avid hiker and works in an office, you can make tailored recommendations about new arrivals that suit both her personal and professional lives. Add her measurements or a scan of her body, and fit recommendations can become more accurate. In a department store, the technology can pre-filter a very broad range of options—think “Tinder for furniture,” Amaral jokes. Simply swipe right to share your preferences and up pop recommendations on similar styles, colors, and sizes that meet your taste.

Spur the right type of engagement

Most of us have been greeted at a retail store only to have the employee go missing when we need her. IoT sensors combined with digital devices can “bubble up” real opportunities for engagement between retail workers and customers, says Amaral. IoT-powered systems will monitor where customers are in-store, alerting employees when customers seem to need help, adds Amaral. These systems can also track traffic patterns, which may lead to product assortment and placement improvements and help with loss prevention. Sensing and alerting IoT technologies finally remove the blind spot in the shopping process and allow retailers to know when and how to better engage their customers. Amaral sees a day in the not-too-distant future when store owners and managers will be able to get “all of the feeds of all the data elements that they have, have an AI platform that can sift through all of that, and then help them understand where they should be putting their focus and making transformational changes inside their business,” he says.
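The preference matching described above can be pictured as a simple tag filter. Here is a minimal sketch, with all product names, tags, and the matching rule invented for illustration; a production recommender would use trained models rather than literal tag overlap:

```python
# Minimal sketch of tag-based recommendation pre-filtering.
# A customer profile carries lifestyle tags; products are kept
# only if they share at least one tag with the profile.

def recommend(products, profile_tags):
    """Return names of products whose tags overlap the profile."""
    return [p["name"] for p in products
            if set(p["tags"]) & set(profile_tags)]

catalog = [
    {"name": "trail jacket", "tags": ["hiking", "outdoor"]},
    {"name": "linen blazer", "tags": ["office", "formal"]},
    {"name": "evening gown", "tags": ["formal", "event"]},
]

# Ashley: avid hiker who works in an office.
picks = recommend(catalog, ["hiking", "office"])
print(picks)  # ['trail jacket', 'linen blazer']
```

In practice the "tags" would come from learned embeddings or purchase history, but the pre-filtering step (narrowing a broad catalog before ranking) has this shape.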
Blockchain will be a game-changer

While its power is just being explored and new applications tested, blockchain technology holds promise for everything from supply chain sourcing to inventory control to payments. This distributed ledger allows companies to monitor their products at every step along the supply chain, ensuring greater authenticity, ethical sourcing, and product safety. When inventory levels get low, the technology can trigger re-orders, helping retailers ensure they never lose a sale because products are out of stock.

Mobile is the great facilitator

As 5G is rolled out over the coming months and years, providing new enhancements to connectivity, other technologies like AI and mobile will become more powerful. Already, customers are using their devices to check prices, gather information, and shop. Location-based promotions and incentives are delivered directly to devices in an effort to get customers into the store. Retailers can further capitalize on these new paradigms. For example, they can develop chatbots that provide information and answer questions digitally while incentivizing in-store purchases. If a product is out of stock, inventory control and automated ordering can tell customers exactly when their purchase will arrive in-store or be sent to their homes. Devices will be the portal through which in-store and ecommerce transactions are merged rather than pitted against each other.

Deeper insights

Overall, the data, insights, and interactions these technologies will deliver and facilitate can change retail and help stores deliver engaging customer experiences. By making better recommendations and truly understanding what customers want, retailers can win loyalty and develop new, cross-platform methods of helping their businesses grow.
Oracle Can Provide the Power Behind Retailers’ Resilience

To take full advantage of these advanced technological capabilities, retailers will need an IT infrastructure with the power to collect, sort through, and process enormous quantities of data securely. Converged infrastructure engineered to optimize performance and deliver real-time analytics with predictive capabilities will be key. Oracle Exadata on-premises and Oracle Cloud solutions, engineered identically to support any hybrid environment, can help retailers simplify, manage, and scale their architecture to ensure that reports of their brand’s potential passing are greatly exaggerated.


Engineered Systems

How the Autonomous Database Will Change the DBA Role—For the Better

When Oracle announced the release of its Autonomous Database in 2018, DBAs had to wonder how it would change their jobs. Using artificial intelligence (AI) and machine learning (ML) technologies, the Autonomous Database handles patching, upgrades, and tuning; manages its own security needs; and can perform repairs on itself, eliminating human error in the process. What’s left for a DBA to do? Well, the fun stuff: As automation absorbs mundane tasks such as tuning, backups, optimization, configuration, and provisioning, DBAs will spend less time maintaining the physical database and more time extracting value from the data itself. Specifically, the role will expand into data architecture and modeling even as it becomes more strategic and collaborative with other areas of the business.

Focus on the Data, Not the Database

As companies collect increasing volumes of data and their business models become more data-driven, DBAs must leverage their (very human) expertise to provide true value. If you are a DBA, congratulations! Your evolving role will now include helping developers and business users get the most out of the information you manage. With your knowledge of data structure and organization, you can devise more agile development techniques to help developers build better applications, for example, or provide insight into how the system will perform under various conditions. At the same time, you should expand your understanding of areas such as business intelligence (BI), cloud computing, and data security in order to meet the requirements of your new role. Among the specific skills DBAs should develop are:

Analytics: Now is the time to explore the analytical capabilities built into Oracle Autonomous Data Warehouse. These include an extensive library of machine learning (ML) algorithms that can help predict customer behavior, identify cross-selling opportunities, and detect anomalies.
Data modeling: While database maintenance can be automated, the valuable work of data modeling requires people. A well-thought-out data model can help an application work more smoothly and help end users get the answers they really need.

Development: Since you’ll be interacting more and more with the in-house development team, you should get up to speed using developer tools such as GitHub, Docker, and REST services.

Align with the Business

As the data experts in the organization, DBAs can create value by making more data available to more people. Seize this opportunity to become more involved in helping the business extract value from its data capital. It will be critical for DBAs to take a more proactive role in problem solving, which means understanding the importance of specific types of data to key business stakeholders. Stretch your networking and collaborative muscles by reaching out to multiple business functions with offers of help. You should learn how to effectively communicate the value you can contribute but, even more important, you should listen with an open mind to better understand the needs of users. Then, make yourself invaluable by coming to them with new ideas about what insights they can draw from their data. Be on the lookout for solutions, and be willing to investigate and innovate to bring those ideas to fruition.

Play a Bigger Role in Application Development and Data Science

Data scientists and business analysts need access to clean, real-time data to do their work, making a DBA’s knowledge of data sources and formats especially valuable. You can help them find ways to discern trends and patterns, bring in external data, or connect to outside analytics tools to augment their analyses. In-house developers also need access to data and the database services you can offer. Engage developers and help them understand what the database is capable of so that they can expand the functionality of their applications. Fear not, database administrators.
Your role isn’t going away. In fact, it’s evolving to become more valuable than ever. Far from threatening your job, the age of the autonomous database will open greater opportunities to take a seat at the strategy table. But it won’t happen without your initiative. By forging broader partnerships across the organization, being in tune with the business, and making yourself invaluable to the data science, BI, and analytics teams, you can take on a greater and more rewarding role than ever before. Learn more about how the DBA role will transform with the arrival of the Autonomous Database at https://www.oracle.com/database/autonomous-database/for-database-admins.html.
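As a concrete picture of the anomaly detection mentioned under Analytics, here is a generic z-score sketch in Python. This is an illustrative statistical rule, not Oracle's ML library, and the data and threshold are invented:

```python
# Toy anomaly detector: flag values more than `threshold` standard
# deviations from the mean (a classic z-score rule).
from statistics import mean, stdev

def anomalies(values, threshold=2.0):
    """Return the values that sit far outside the distribution."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Daily order counts with one obvious outlier.
orders = [100, 98, 102, 101, 99, 100, 103, 500]
print(anomalies(orders))  # [500]
```

The ML algorithms shipped with Autonomous Data Warehouse are far more sophisticated than this, but the underlying idea (learning what "normal" looks like and flagging departures from it) is the same.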


Engineered Systems

Oracle Exadata Scalability: Plan for Success, not Failure

On October 1, 2013, HealthCare.gov launched with much fanfare.  An estimated 250,000 users attempted to connect to the system, and it came crashing down.  Although a government report identified many different factors that contributed to the failure, ultimately the system was unable to keep up with demand.

Most businesses and enterprises will never have to scale their applications to meet demand like that seen by HealthCare.gov, but failure to properly plan capacity can have similarly catastrophic consequences.  Unfortunately, very few businesses have the luxury of spending the resources needed to satisfy the most optimistic forecasts.  Over-building for demand that does not yet exist is expensive and consumes resources that may never be utilized.  Balancing the cost of under-forecasting against the cost of over-forecasting has always been, and may always be, a difficult task.  The best defense against being held hostage to capacity planning missteps is to choose a platform that scales well, eliminating the need to accurately forecast capacity because you can always easily add more after the system is initially deployed.

Scalability is one of the key strengths of Oracle Exadata.  Oracle Exadata elastic configurations let customers add capacity while the system is deployed and the databases are online.  More importantly, overall system performance scales linearly as capacity is added.  In addition, storage scales independently from compute, ensuring customers can scale their critical resources to eliminate bottlenecks without being forced to scale less scarce resources.

Many vendors claim scalability, but if you look closely, your mileage may vary.  Scaling CPU for many workloads is subject to licensing rules.  Oracle engineered systems like Exadata support not just scaling across servers, but scaling within a server, in a manner that is fully compliant with licensing rules.
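The gap between linear scale-out and a bottlenecked design can be sketched numerically. In this toy model (all bandwidth figures invented, not Exadata specifications), each added storage server contributes its full scan bandwidth in a scale-out design, while a shared SAN link caps aggregate throughput no matter how much storage sits behind it:

```python
# Toy throughput model: linear scale-out vs. a shared-link bottleneck.
# Numbers are illustrative only, not product specifications.

def scale_out_throughput(servers, gbps_per_server=20):
    """Each added server contributes its full bandwidth (linear scaling)."""
    return servers * gbps_per_server

def san_throughput(servers, gbps_per_server=20, link_cap_gbps=50):
    """Aggregate throughput is capped by the shared network link."""
    return min(servers * gbps_per_server, link_cap_gbps)

for n in (1, 2, 4, 8):
    print(n, scale_out_throughput(n), san_throughput(n))
```

Past the link's capacity (here, beyond two or three servers), the bottlenecked design flatlines while the scale-out design keeps growing, which is the behavior the capacity planner actually needs.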
Exadata combined with Oracle RAC effectively and linearly scales database workloads across multiple servers with no application changes, avoiding a pitfall of scalability solutions that rely on partitioning and sharding.  Perhaps the hardest component to scale is storage and IO.  On the surface, scaling storage and its associated IO resources seems as simple as adding storage to an array.  However, in practice this does nothing to scale the performance of a database system.  Even modern SANs and networks dedicated to a database can quickly bottleneck the IO of an array as storage is added.  The only viable workaround is to bring the processing to the storage devices, where the effective bandwidth of the devices can be leveraged.  Only Exadata is capable of doing this, as it requires cooperation between the database servers and the storage subsystems.  That’s why a full rack of Exadata can provide 10x the IO resources of a typical all-flash storage array, and why the most demanding and scalable workloads are run on Exadata.

This is the seventh in a series of blog posts celebrating the 10th anniversary of the introduction of Exadata, exploring the unique features of Exadata and why they are important.  Next, we will look more closely at the true cost of ownership of a database system, and why Exadata can save you a lot of money.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services.
He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.  


Engineered Systems

Why Oracle Exadata X8 Makes DIY Oracle Database Systems Obsolete

The latest Oracle Exadata X8 is the 9th generation.  Every generation has been laser focused on raising the expectations bar for increased performance and automation; reducing database administrator (DBA) expertise, skills, knowledge, and experience requirements; radically reducing wasted time for both the DBA and user; and continually lowering the total cost of ownership (TCO).  Each generation has wildly succeeded in raising that bar, increasing the distance between Exadata and every other database system or white box.  Exadata X8 has raised the bar so high that it’s out of sight for the nearest competitor.  A closer look shows why.

For decades, databases have been essential mission-critical applications running on servers.  DBAs trained for years to perfect database performance tuning, database optimization, troubleshooting, healing, prioritization, and more to make critical applications running on the database run faster and more reliably.  Each generation of server, storage, and networking would increase database performance.  The past decade has seen exponential database growth, while storage performance has seen a massive increase with the proliferation of low latency flash SSDs, NVMe SSDs, and storage class memory (SCM) drives.

Concurrently, there have been hiccups in Moore’s law.  It has slowed to a crawl.  Apparently, there are limits to how small the lithography can go.  CPU performance increases have been marginal, focusing more on adding cores instead of performance.  This is true for all CPUs: x86, RISC, even ARM.  This has resulted in the compute server becoming the database performance bottleneck.  The compute server vendors have attempted to solve this problem with hypervisors, clustering, and sharding.  Each has had some, but limited, success in scaling performance and capacity while introducing other problems, such as increased complexity, vendor management, patching, and complicated, lengthy troubleshooting and problem resolution.
All of which causes user frustration.

Another problem has been the massive loss of DBA knowledge, skill, and experience as the baby boomer generation retires in large numbers.  College graduates lack the underlying expertise and have long DBA learning curves.  Databases must be simpler to use, operate, tune, manage, patch, troubleshoot, etc., to be utilized efficiently and effectively.  Meanwhile, the number of distinct database types (relational, key value, graph, time series, object, columnar), open source databases, cloud database services, and specialized databases has exploded into IT organizations’ ecosystems.  This plethora of database products and services demands expanded DBA programming skills in SQL, JSON, XML, R, and Q; as well as DBA knowledge and expertise in as many processes as there are database types; plus extensive, costly training.

Multiple databases introduce other challenges and problems as well.  Analyzing raw data in different types of databases too often requires the data to be moved between the databases.  That means a labor-intensive extract, transform, and load (ETL) process that takes a lot of time.  It’s difficult and takes so much time that the information derived by the analytics is likely out-of-date by the time the data is moved and analyzed.  And it’s not a one-time thing.  There’s a reason Gartner has said 85% or more of big data projects fail, and one of the most onerous causes is the requirement to move data between databases.

The most common workaround to the multi-database problem is to create islands of data for each database type.  This causes a nontrivial amount of data duplication that cannot be solved at the storage level.  That explosive data expansion complicates data center infrastructure, storage, servers, data protection, disaster recovery, business continuity, and cost.  Keeping all the data synchronized and up to date is also problematic at best and costly in both human and IT assets.
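The ETL burden described above is easy to see in miniature. In this hedged sketch (table layout and field names invented), every sync copies and reshapes the entire dataset, and the copy starts going stale the moment it lands:

```python
# Minimal extract-transform-load sketch: moving rows from a
# "relational" source into a "document" target means copying and
# reshaping every record, and repeating it whenever source data changes.

source_rows = [
    (1, "sensor-a", 21.5),
    (2, "sensor-b", 19.8),
]

def etl(rows):
    docs = []
    for row_id, device, reading in rows:   # extract
        docs.append({                      # transform
            "id": row_id,
            "device": device,
            "reading": reading,
        })
    return docs                            # load (into a second copy)

target_docs = etl(source_rows)
print(len(target_docs))  # the entire dataset now exists twice
```

Multiply this by real table counts and data volumes and the article's point follows: the copy itself becomes the dominant cost, which is why a single copy of data accessible to multiple database types is attractive.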
All of these problems combined are causing a brisk, unsustainable rise in database costs.  IT organizations are demanding a faster, less complicated, more automated, lower cost answer.

Many IT pros are surprised that Oracle, the worldwide leader in databases for decades, solves all of these problems and more with the Exadata X8.  They shouldn’t be.  Oracle has been doing this with Exadata releases for over a decade.  Exadata X8 raises standards to a level previously not thought possible.  It solves the compute server bottleneck problem by making the Oracle Database hardware aware and the server, storage, and network Oracle Database aware.  It then offloads many of the database processes, including SQL, XML, JSON, encryption, decryption, RMAN backup filtering, fast file creation, and many in-database analytics and machine learning (ML) functions, to the storage processors.  Those processors are much closer to the data, shortening speed-of-light latencies and speeding up results.  That offload additionally frees up compute resources to do more queries, higher-level AI/ML, and more analytics.  Exadata X8 also makes clever use of flash caching to put more database processes in-memory, making scans much faster, automatically reducing I/O, and automating I/O prioritization.  Internal use of RDMA makes internodal RAC performance as fast as if it were running in a single node while eliminating the ills of a distributed or clustered database.

The implementation of built-in algorithms and AI/ML has automated database operations to a level not seen before, with operations such as Automatic Indexing based on policies, machine learning, and reinforcement learning.  Indexing that takes a skilled, experienced DBA hours or days is accomplished in seconds to minutes.  And the automated indexes are more efficient and significantly faster than the ones created by the most experienced DBAs.  There are more than 60 unique features in Exadata X8 that are simply not available in any other platform.
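The offload idea, filtering where the data lives rather than shipping every block to the compute tier, can be illustrated with a toy model. Row sizes, row counts, and the predicate below are invented for the example:

```python
# Toy model of query offload ("smart scan"-style filtering):
# apply the predicate at the storage tier and ship only matching
# rows, versus shipping every block to the database server.

ROW_BYTES = 100
rows = [{"amount": i} for i in range(10_000)]

def bytes_shipped_full_scan(rows):
    """Naive design: every row crosses the wire; filter at compute."""
    return len(rows) * ROW_BYTES

def bytes_shipped_offloaded(rows, predicate):
    """Offloaded design: filter at storage; ship only the matches."""
    return sum(ROW_BYTES for r in rows if predicate(r))

pred = lambda r: r["amount"] >= 9_900   # a selective predicate
print(bytes_shipped_full_scan(rows))        # 1,000,000 bytes
print(bytes_shipped_offloaded(rows, pred))  # 10,000 bytes
```

For a selective query, the offloaded design moves two orders of magnitude less data in this toy example, which is the intuition behind pushing SQL filtering down to the storage servers.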
All of these features are designed to increase performance, automation, simplicity, reliability, availability, and database effectiveness.

Exadata X8 also solves the scalability, multiple database, and multiple database type conundrums.  Based on the latest and greatest all-inclusive Oracle Database, it supports up to 4,000 pluggable databases (PDBs) in a multitenant container database environment.  It also supports databases in the hundreds of TBs to PBs of data.  It effectively supports 10-15x that amount of data because of Hybrid Columnar Compression (HCC).  Each Exadata scales from a 1/8th rack to 18 full racks.  It can scale further with external switches supporting RDMA.

Exadata X8 solves the multiple database type problem by incorporating those database types into the Oracle Database.  It supports relational, document, key value, graph, time series, object, and more.  This enables each database type to access the same data without an ETL or data copy.

But what about the cost issue?  Once again, Oracle has attacked this problem head-on.  Oracle’s co-engineering of the Exadata X8 platform with the Oracle Database measurably reduces the amount of hardware and IT infrastructure required to run at optimal performance.  Exadata X8 is backwards compatible with multiple generations of Exadata and will therefore be compatible with multiple generations of future Exadata systems, making growth and tech refresh a non-event.

Oracle also offers Exadata X8 in three ways:

Exadata hardware purchased with the Oracle Database licensed on a subscription basis or as a perpetual license plus maintenance.

Exadata Cloud at Customer, licensed as a fully managed cloud service on-demand with full elasticity.

Oracle Autonomous Database cloud, which runs on Exadata in the Oracle public cloud.

Cost comparisons with do-it-yourself (DIY) white box and named vendor implementations show a much lower TCO for Exadata X8.  The median Exadata advantage was much more than 50%.
When Exadata’s performance advantage is considered, the difference is multiple orders of magnitude in favor of Exadata.

Exadata X8 solves today’s extensive database problems, is faster, far more automated, more complete, and much more cost effective than any other database platform.  This is why Oracle Exadata X8 makes DIY Oracle Database hardware systems obsolete and why Oracle remains number 1 in databases.  Oracle’s Exadata X8 convergence is at the PhD level, whereas everyone else, including Dell-EMC, Nutanix, and HPE, is in elementary school.

About Dragon Slayer Consulting: Marc Staimer, President and CDS of the 21-year-old Dragon Slayer Consulting in Beaverton, OR, is well known for his in-depth and keen understanding of user problems, especially with storage, networking, applications, cloud services, data protection, and virtualization. Marc has published thousands of technology articles and tips from the user perspective for internationally renowned online trade publications, including many of TechTarget’s websites, Network Computing, and GigaOM.  Marc has additionally delivered hundreds of white papers, webinars, and seminars to many well-known industry giants such as: Brocade, Cisco, DELL, EMC, Emulex (Avago), HDS, HPE, LSI (Avago), Mellanox, NEC, NetApp, Oracle, QLogic, SanDisk, and Western Digital.  He has also provided similar services to smaller, less well-known vendors/startups including: Asigra, Cloudtenna, Clustrix, Condusiv, DH2i, Diablo, FalconStor, Gridstore, ioFABRIC, Nexenta, Neuxpower, NetEx, NoviFlow, Pavilion Data, Permabit, Qumulo, SBDS, StorONE, Tegile, and many more.  His speaking engagements are always well attended, often standing room only, because of the pragmatic, immediately useful information provided. Marc can be reached at marcstaimer@me.com, (503)-312-2167, in Beaverton OR, 97007.

For more information on Oracle Exadata, go to: Oracle Exadata
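The effective-capacity claim made earlier in the article (10-15x usable data thanks to Hybrid Columnar Compression) is straightforward arithmetic. A back-of-the-envelope sketch, assuming a hypothetical 100 TB of raw capacity (the 10-15x range is from the article; the 100 TB figure is invented):

```python
# Back-of-the-envelope effective capacity under compression.

raw_tb = 100  # assumed raw usable capacity, for illustration only

def effective_capacity(raw_tb, compression_ratio):
    """Effective capacity = raw capacity x compression ratio."""
    return raw_tb * compression_ratio

low = effective_capacity(raw_tb, 10)
high = effective_capacity(raw_tb, 15)
print(f"{low} TB to {high} TB effective")  # 1000 TB to 1500 TB effective
```

Actual ratios depend heavily on the data; columnar compression does best on repetitive, analytic-style tables and worse on already-compressed or highly random data.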


Engineered Systems

Move to a New Stage of Competitiveness and Growth—and Transform Your Organization On-Premises or In the Cloud

By Harald Kehl, VP, Cognizant

Businesses are discovering infrastructure as a service (IaaS) as a catalyst for digital transformation, accelerated application development, and increased customer engagement through mobile apps and social interactions. Gartner forecasts that IaaS spending will grow 27.6% in 2019 to reach nearly $40 billion, up from $31 billion last year. Major business drivers are fueling this rapid IaaS adoption:

- Reduced cost and reliance on on-premises corporate data centers
- Immediate availability of infrastructure and dynamic changes to capacity
- Reduced time to deploy new functionality and go to market
- More robust and highly available solutions
- Minimized risk and business disruption while transitioning workloads to the cloud
- The ability to leverage the security expertise of the provider
- Access for lines of business (LOB) to procure/provision resources via IaaS consoles and APIs based on business needs, while IT simply monitors activity

The availability of on-demand processing power, storage, and bandwidth has paved the way for companies to take a cloud-native – or IaaS-first – approach for all workloads. These companies use the cloud to create business models not previously possible and drive change in how traditional industries operate.

Where IaaS Cloud Is Taking Off

As an Oracle Cloud Premier Partner in North America, EMEA, APAC, and Latin America, Cognizant is enabling clients to remain competitive and profitable in today’s digital business world through adoption of new operating models, processes, and information systems. Our long-term, 19+ year global alliance leverages our combined strengths and resources to help unleash the full potential of various next-gen technologies in this digital economy. A big part of the growth today is driven by adoption of IaaS.
The areas where we see significant investment in IaaS include small- and medium-sized businesses, companies with heightened security concerns that want to migrate to IaaS cloud, and businesses whose current technology stack is Oracle in on-premises environments. As far as specific industries that are investing in Oracle IaaS Cloud, the insurance and utilities sectors stand out.

When IaaS Is and Isn’t an Option

When a hardware refresh of existing infrastructure is due

If an organization is looking at modernizing or upgrading its infrastructure, rather than investing heavily in new hardware that will become obsolete and require significant resources to maintain, it makes sense to make the move to IaaS. But enterprises don’t need to move everything at once. They can make a strategic move and keep some infrastructure on-premises. Oracle Cloud, with all the infrastructure co-engineered to optimize performance and security, provides a more secure environment than its competitors and provides seamless performance between on-premises and cloud.

When you are setting up a new application environment (Dev, Test, Production, and DR)

If you’re ready to set up a new application environment, that’s an excellent application to start on IaaS cloud. Oracle IaaS Cloud delivers a complete set of services based on open source technologies, such as Docker and Kubernetes, for orchestration, scheduling, management, operations, and analytics of your applications. Meanwhile, other database-intensive processing can remain on-premises to optimize the performance of all workloads.

When you are looking to reduce your opex bills from other cloud vendors

Customers see significant discounts on licensing and service costs when they move to Oracle Cloud. In fact, costs are around 30% less than Amazon Web Services (AWS) and Microsoft Azure. Oracle Cloud offers bare metal infrastructure and virtualization for all workloads. It can also reduce operating costs by as much as 50%.
In contrast, Oracle Cloud-Ready Infrastructure, including Engineered Systems and Cloud at Customer, provides viable cost-saving options for organizations with workloads that must remain on-premises by consolidating databases and applications onto a single-vendor stack of purpose-built, highly performant, highly available systems. The reduction in data center space and the staff to maintain that infrastructure often results in $157,712 in annual benefits per 100 users and 25% less time spent “keeping the lights on.”

When your business imposes higher workloads periodically and you need elastic infrastructure

If your organization experiences peak workload times when you need extra capacity to avoid performance degradation, Oracle IaaS Cloud allows you to add capacity as needed and then ratchet back down when it’s no longer needed. As a subscription-based service, you only pay for the capacity you use, eliminating the need to have additional, costly on-premises hardware standing idle. Thankfully, with Oracle Engineered Systems, stranded capacity is not an issue for customers with workloads that must remain on-premises. Combining Engineered Systems with bursting to the cloud allows you to leverage the extreme performance and rock-solid security of on-premises infrastructure while accommodating any spikes in usage.

What Makes Oracle Cloud Different?

Oracle offers three unique ways to consume the cloud: on-premises Engineered Systems with cloud equivalents for an effective hybrid cloud solution, Cloud at Customer with the ability to run a self-contained instance of Oracle Public Cloud in your data center behind your firewall, and Oracle Cloud Infrastructure (OCI, or IaaS). All three cloud options are enterprise-grade and built from the ground up for the Oracle Database. Oracle Cloud can run all your workloads equally well, whether they are traditional multi-tiered enterprise applications, high-performance workloads, or modern serverless and container-based architectures.
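The pay-for-what-you-use argument can be made concrete with a toy cost model. All capacity figures and prices below are invented for illustration; the point is only the shape of the comparison between elastic subscription pricing and provisioning fixed hardware for the peak:

```python
# Toy cost model: pay-per-use elastic capacity vs. provisioning
# fixed infrastructure sized for the peak. All numbers invented.

demand = [10, 10, 12, 40, 10, 10]   # capacity units needed per period
UNIT_COST = 2.0                     # cost per capacity unit per period

def elastic_cost(demand, unit_cost=UNIT_COST):
    """Subscription model: pay only for what each period actually uses."""
    return sum(d * unit_cost for d in demand)

def peak_provisioned_cost(demand, unit_cost=UNIT_COST):
    """Fixed model: pay for peak capacity in every period, used or not."""
    return max(demand) * unit_cost * len(demand)

print(elastic_cost(demand))           # 184.0
print(peak_provisioned_cost(demand))  # 480.0
```

The spikier the workload, the larger the gap, which is why bursting to elastic capacity is most attractive for periodic peak loads like the ones described above.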
Security is built in and integrated, eliminating gaps at any layer. Plus, you have secure access, control of your cloud resources, and visibility at scale.

Reduce your opex costs

Because it’s a subscription model, IaaS can reduce costs by moving away from costly capital investments and on-premises management. You only pay for the capacity you use, and you can add or reduce capacity as needed to meet demand for ultimate elasticity.

Eliminate manual application management with autonomous functionality

With Oracle IaaS Cloud, applications can be managed completely autonomously, removing the need for human intervention. That eliminates manual time spent monitoring systems and diagnosing problems and removes the risks of human error.

Reduce licensing costs and get BYOL mobility

In addition to reducing licensing costs by 30% on average versus competitor pricing, Oracle also offers “bring your own license” (BYOL). This gives you complete license mobility as you move from on-premises to Oracle Cloud, or if you choose to have your Oracle Cloud in your own data center with the Cloud at Customer option. You can bring your on-premises license entitlement and get license support using your existing support contract.

Scale as needed to support business expansion

Because it’s “as a service,” you can add capacity as your business expands, giving you easy and unlimited scalability. Since on-premises and Oracle Cloud infrastructure is co-engineered and cloud-ready, your on-premises infrastructure can be moved to the cloud seamlessly whenever you’re ready. And you can scale without investments in new hardware and software.

HAVI Goes Hybrid with Impressive Results

A global supply chain manager for leading food service brands, HAVI computes 5.8 billion supply forecasts every day, down to the individual ingredient level, for 24,000 restaurants. When the Downers Grove, Illinois-based enterprise’s on-premises infrastructure was maxing out, Oracle provided the ingredients to solve the dilemma.
Consolidating 34 databases onto a single quarter-rack configuration of Oracle Exadata Database Machine X6-2 and deploying a disaster recovery solution in the cloud, HAVI found the best of both worlds: on-premises performance without sacrificing its cloud-first strategy. Watch the video to see how HAVI did it.

Moving to IaaS makes sense for organizations that want to modernize their IT infrastructure, implement advanced digital solutions, and reduce their overall infrastructure costs. With Oracle IaaS Cloud, customers get the elasticity, scalability, governance, and security that support the modern, competitive enterprise. Only Oracle gives you such a broad range of deployment options: public cloud, hosted private cloud, on-premises Cloud at Customer, and any hybrid architecture. With Oracle’s extensive cloud options, enterprises can make the right moves on their own schedule.

Harald Kehl has over 20 years of experience running Systems Integration, Consulting, and Solutions Implementation businesses within large IT companies, such as IBM, Siebel, and Oracle. He brings a wide range of experience in creating value-added solutions for customers as well as in steering and executing large transformational programs. He currently manages Cognizant’s Oracle business in Europe.

By Harald Kehl, VP, Cognizant

Engineered Systems

Oracle Exadata: Labor Is Not That Cheap

How much is your time, or your employees’ time, worth? That’s a question you may want to think about when evaluating your database server infrastructure. Until we are all running Autonomous Databases, it’s going to require some effort to manage your database environments. As anyone who follows Oracle knows, Autonomous Database is now a reality—if you are ready to move to the cloud. But since you are reading an infrastructure blog, I’m going to guess you’re not quite there yet. The good news is that Oracle Exadata can help eliminate much of that management burden, yet still give you the control you desire.

The manageability benefits of an engineered system like Oracle Exadata are pretty straightforward. By adopting an engineered solution, you avoid having to put together all the pieces and parts. You know that everything works together as designed because Oracle has put it together and tested everything. Think of all the time you spend assembling components, testing components, validating interoperability, searching for patches, applying patches, testing again for interoperability, and then troubleshooting when something goes wrong. With an engineered system, you download tested bundles of software, pre-validated to work on your system, and apply them with tools designed specifically for your environment. Perhaps more importantly, you are on a well-traveled road, so chances are high that bugs and issues that might affect you have already been discovered and fixes implemented.

And we’ve gone further with Oracle Exadata for even easier deployment and management. The Oracle Exadata Deployment Assistant (OEDA) and the Exadata patch manager tool take a lot of the guesswork and room for error out of deployment and patching.
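Tools in this space typically follow a validate, back up, apply, roll back pattern. Here is a miniature sketch of that pattern; the function names, checks, and server fields are hypothetical illustrations, not Oracle’s actual tooling or APIs.

```python
# A generic sketch of the validate / back-up / apply / roll-back patching
# pattern. All names and checks here are invented for illustration.

def patch_server(server, patch, prechecks, apply_fn, backup_fn, restore_fn):
    """Apply a patch only if pre-checks pass; restore prior state on failure."""
    failed = [name for name, check in prechecks if not check(server)]
    if failed:
        return f"skipped: pre-checks failed ({', '.join(failed)})"

    snapshot = backup_fn(server)            # save software/config state first
    try:
        apply_fn(server, patch)
        return "patched"
    except Exception as exc:                # something went awry: roll back
        restore_fn(server, snapshot)
        return f"rolled back: {exc}"

# Toy usage: one healthy server and one that fails mid-patch.
prechecks = [
    ("enough_disk", lambda s: s["free_gb"] >= 10),
    ("min_version", lambda s: s["version"] >= "19.2"),
]

def apply_patch(server, patch):
    if server.get("flaky"):
        raise RuntimeError("checksum mismatch")
    server["version"] = patch

def backup(server):
    return dict(server)

def restore(server, snapshot):
    server.clear()
    server.update(snapshot)

healthy = {"free_gb": 50, "version": "19.2"}
flaky = {"free_gb": 50, "version": "19.2", "flaky": True}

print(patch_server(healthy, "19.3", prechecks, apply_patch, backup, restore))
print(patch_server(flaky, "19.3", prechecks, apply_patch, backup, restore))
```

The same shape scales to fleets: run the pre-checks across every server first, then patch in batches, automatically rolling back any node that fails.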
These tools validate configurations, run pre-checks to verify dependencies, and can even automatically back up and restore previous software and configuration state should something go awry. They also abstract and simplify the steps associated with these operations, reducing the likelihood of error and eliminating the need to analyze and operate the lower-level tools. Recently Exadata has adopted fleet patching tools that dramatically simplify patching of both databases and storage cells at scale, further increasing management productivity. Exadata also provides an Enterprise Manager plug-in to make the EM agents Exadata-aware so they properly model and manage Exadata systems within Enterprise Manager. A customer using EM packs, such as Lifecycle Management, across their estate can use these tools with Exadata, providing management consistency for all their database environments.

If you are already an Exadata customer, you may be wondering what all the fuss over patching is about. Oracle Exadata includes Oracle Platinum Support. With Platinum Support, customers need not even patch the systems. Oracle’s Exadata experts can connect to the system and apply required patches for you—all you need to do is negotiate a patching window with the patching team.

Exadata’s extreme performance and scalability can have an even greater impact on reducing management load. Many customers have consolidated hundreds of databases into their Exadata environment, reducing the number of servers, operating systems, and databases that need to be managed. The Panasonic Group recently consolidated a large number of databases into Exadata and increased its management efficiency from under 2 databases per DBA to over 24 databases per DBA.

Lastly, back to Autonomous Databases. Oracle Autonomous Database Service runs on top of Oracle Exadata in the Oracle Cloud. As features are developed to support this service, some will just work with Exadata.
For example, Oracle Database 19c introduced automated statistics gathering and index creation. This lets the database eliminate the time-consuming job of tuning indexes to optimize performance. It can only do this, of course, if it fully understands the performance characteristics of the underlying system. In other words, such features, originally developed for the cloud, have been extended to on-premises deployments, provided the database is running on Oracle Exadata.

Thousands of customers have standardized on Oracle Exadata. Three-quarters of the Fortune 100 companies run Exadata. The community effect of all these customers running the same platform further increases efficiencies and reduces overall management effort. All these customers have concluded that throwing bodies at the management problem is not the solution. Oracle Exadata provides a better way.

This is part 6 in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata. Our next post will focus on Scalability and examine how Exadata is the best platform for tackling the largest workloads.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database.
For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.      


Engineered Systems

The Key to 5G is the Enterprise

“Many people think about 5G as just new base station technologies and a new radio technology. That’s a narrow view that shows a very flawed understanding of the technology. I see 5G as a foundational layer for the digital transformation of industries and consumer lifestyles,” explains Peter Jarich, head of mobile operator research firm GSMA Intelligence.

With 5G rolling out globally over the coming months and years, this next wave in mobile technology offers greater speed, lower latency, and the ability to connect many more devices at once compared to the current 4G standard. But the promise of 5G for carriers and their enterprise customers goes far beyond incremental performance improvements. We spoke to Jarich about some of the trends that will accompany the rollout of 5G. Here’s what he told us.

The Enterprise Is the New Frontier

Consumers have embraced mobile technology to such a degree that every improvement is eagerly anticipated and followed by a wave of device upgrades. Meanwhile, public and private enterprise has fed the consumer revolution without fully participating on its own behalf. Jarich sees that changing with the rollout of 5G. “We know consumers understand the benefit of mobile broadband,” he says. “If operators are looking for new revenue sources, a deeper push into the enterprise market would get them there.” He believes that operators have an opportunity to serve verticals, such as smart cities, manufacturing, fleet management, logistics, and utilities, by leveraging 5G to meet specific performance or bandwidth requirements. “Take the automotive industry,” he says by way of example. “Autonomous vehicles will have to communicate with a 5G network. But what type of network architecture will support low latency for critical applications?
You may need an edge computing node on the side of the road or in the car itself.”

A New Provider Metric: Network Services

With the new opportunities that 5G affords, telecommunications providers should evaluate their success beyond the traditional metrics of network performance (user base, coverage, etc.) and architecture (virtualization, network slicing, etc.) to include network services. “The network services metric would encompass what providers are actually doing with this new technology,” Jarich explains. “Are they just rolling out faster speeds, as they did with 4G LTE? Or are they leveraging 5G to tap new markets like the enterprise? Are they leveraging it to enable new industries? Are they leveraging it to develop value-added services, even for consumers, as opposed to just connectivity? If 5G is a rare opportunity to move into the enterprise, are operators executing on that goal?”

5G Will Impact Other Technologies

Jarich also argues that the move toward 5G will have a profound impact on the parallel development of technologies such as virtualization, blockchain, and artificial intelligence (AI). “As new markets and opportunities arise with 5G, they will necessarily pull in those other technologies,” he notes. Examples include:

Internet of Things (IoT): Given 5G’s massive capacity for concurrent connections, the big winner is likely to be IoT, both consumer and industrial.

Blockchain: The explosion of IoT will raise security issues that may best be addressed with blockchain.

Digital Twins: As companies leverage 5G to enable new augmented reality (AR) and virtual reality (VR) use cases, manufacturing and other verticals can use digital twinning to speed product development and service.

Virtualization: Jarich also suggests that 5G will accelerate the current trend toward virtualization. “The push toward virtualization is nothing new.
But the need to upgrade networks to 5G provides an opportunity to start thinking about rolling out virtualization at scale.”

Artificial Intelligence: The increased capacity and reduced latency of 5G networks also have implications for AI, which will be required by technologies like autonomous vehicles.

The Rollout: Different Regions, Different Approaches

The rollout of 5G technology has just begun, and Jarich predicts it will be a gradual one, even as 4G tech continues to evolve alongside. By 2025, 5G will represent about 15 percent of all connections globally, according to GSMA’s Mobile Economy 2019 Report. History and culture will shape the deployment of 5G around the world, he adds. Asia, for example, will likely prioritize enhanced consumer mobile broadband, driven by governmental priorities, particularly in China. In the U.S., Verizon’s and AT&T’s experiments with fixed 5G broadband service appear to foreshadow the technology’s future there, while the initial focus in Europe is panning out to be industrial IoT.

The advent of 5G technology offers tremendous opportunities for telecom providers to offer new value-added services to the enterprise market. To succeed in this new environment, however, operators need solid infrastructure upon which to build a data-driven 5G network strategy. One option is to optimize on-premises infrastructure with Oracle cloud-ready engineered systems, which provide a clear migration path to the cloud. A second option is to build a hybrid cloud infrastructure in which workloads can be lifted and shifted easily between identical on-premises and cloud architectures. For others, the best option will be to bring the public cloud into their data centers and behind their firewalls with Oracle’s Cloud at Customer. Learn more about Oracle offerings for the telecommunications industry.

Peter Jarich leads the GSMA Intelligence analyst team, driving its content strategy and agenda.
Working across the GSMA membership and broader mobile ecosystem, he is responsible for developing insights into the intelligence products required by the market as well as the best ways in which to deliver them.    


Engineered Systems

Kingold Group Takes a Bold Step into the Cloud

What do you think your company’s CFO would prefer: more coffee breaks for the finance team, or more analysis? At China’s Kingold Group, a multi-industry conglomerate with more than 10,000 employees and business operations across Australia, China, and Europe, one critical financial report used to take about 12 minutes to run. That was enough time for a quick coffee break. This and other time-consuming processes meant that reports would be run less frequently than was optimal, or that new data dimensions would be ignored for fear of taking too much time. That changed when the company moved the database from an on-premises server to Oracle Exadata Cloud at Customer, explains CIO Steven Chang. “Now it takes about 40 seconds,” Chang says.

The last time he spoke with us, Chang described how Cloud at Customer allowed Kingold Group to transform its legacy infrastructure to a cloud architecture while ensuring that its data was secure within its own data center. Oracle Exadata Cloud at Customer is a cloud offering that is provided in the customer’s data center and offers the simplicity of the cloud coupled with the control of an on-premises deployment. This solution gives users the following advantages, all in a proven mission-critical database and platform:

Faster time to market with web-based database provisioning
Pay-as-you-go, subscription-based pricing
Easy migration of existing databases (with no application changes)
Cloud-based management tools to minimize IT administration tasks
Extreme performance for OLTP, analytics, hybrid, and consolidation workloads

Exadata Cloud at Customer not only gave Kingold’s financial team more time to uncover meaningful insights from its data. It also freed up Chang’s IT team from day-to-day systems management, so they could turn their expertise to delivering innovation to the company and to its discerning customers.
For example, a four-person Kingold IT team in the conglomerate’s real estate division wrote a program to glean property listings in top prospect markets, creating a database of highly desirable acquisitions for the M&A group to consider. Chang is particularly impressed at how easy it has been to lift-and-shift critical systems from legacy infrastructure to Oracle Exadata Cloud at Customer. “It took us only 7 days to lift-and-shift from (Microsoft) Azure, and we were able to reduce the cost by 44%,” he reports. Increased performance, improved data sovereignty, and no-hassle security patches and maintenance, as well as reduced cost, are just the tip of the iceberg when it comes to the benefits Kingold realized with the move to Exadata Cloud at Customer. Flexible scaling has also helped support the company’s growth goals. What’s more, Exadata Cloud at Customer offers a straightforward and promising path to a future in the cloud. “For the first time in my life,” says Chang, “I think I can kind of see into the future and be able to realize my dream.” To hear more from Kingold Group’s CIO, Steven Chang, watch this video.


Engineered Systems

Thanks to Oracle Exadata, Pharmaceutical Distributor AmerisourceBergen Triples the Number of Patients It Helps Every Day

For AmerisourceBergen, a global pharmaceutical sourcing and distribution-service company with $150 billion in revenue, making sure that patients receive the medications they need when they need them is central to its mission. Not long ago, the company was processing and shipping more than 1 million products a day but was facing the prospect of dramatically higher demand. The problem? AmerisourceBergen’s hardware was having difficulty keeping up as its mission-critical SAP ECC application processed 1.7 million line items daily against its 70TB database. “We were having extreme problems with stability and performance,” says technology manager Mike White.

The company needed to find a reliable and highly available IT environment that could be scaled as needed with near-zero downtime. It was also looking for easy extensibility and the flexibility to support its ambitious future business requirements. After researching its options, AmerisourceBergen chose to port its Oracle databases from third-party machines to Oracle Exadata Database Machine. Oracle Exadata is a preconfigured, pretested system optimized for all database applications with capacity-on-demand software licensing for pay-as-you-grow scalability. Its software algorithms implement database intelligence in storage, compute, and networking to deliver higher performance and capacity in a cost-effective manner.

The end-to-end solution provided by Exadata meant that AmerisourceBergen’s stability and I/O performance issues disappeared immediately. “It covered all aspects of what we needed—the hardware, the software, the storage layer—all serviced by Oracle,” White says of Exadata. “Everything became so much better.” By deploying Oracle Exadata, AmerisourceBergen was able to scale up operations significantly and has tripled its original order volume to 3 million products per day. “We've done all that with Exadata, without having to expand additional storage nodes, additional compute power,” White marvels.
“What I love about Exadata is that we keep ramping up and adding more things to it, and it just gobbles it up.” With the new platform’s increased speed and reliability, White’s team has been able to turn from database support to focus on projects aimed at supporting business growth. White is also impressed with the support he’s received from Oracle Advanced Customer Service (ACS) and Oracle Advanced Monitoring and Resolution Services (AM&R). Together, these services optimize IT management to ensure high performance, with proactive monitoring and accelerated issue identification and resolution spanning the entire Oracle technology solution. The 70TB SAP ECC database environment has been running successfully for three and a half years on a dual-rack X3-8 Oracle Exadata Database Machine. The remaining SAP applications on approximately 20 databases run on a separate X3-2 Exadata machine. “The AmerisourceBergen credo is to get people the care they need into their hands,” White says. “And Oracle Exadata has helped us do that.” To hear AmerisourceBergen technology manager Mike White explain how Oracle Exadata improved database stability and performance, watch this video.


Engineered Systems

Seeing Double: Digital Twins Connect the Physical and Virtual Worlds

The concept of digital twins is a trending topic today. But digital twins aren’t really anything new. What is new are the more sophisticated applications that are becoming possible – across more industries – with the adoption of new technology. Two of the technology advances making these new applications possible are the Internet of Things (IoT) and cloud computing. And today’s applications are just the beginning of what will become possible in the future.

In essence, a digital twin is a virtual model of a physical product or a process. Using a digital twin allows businesses to analyze their physical assets to troubleshoot in real time, predict future problems, minimize downtime, and even perform simulations to create new business opportunities. Some of the earliest examples of digital twins were in computer-aided design (CAD) software, which let engineers create digital representations of structures before actually building them. NASA employed digital twin technology for pairing its Apollo missions. Today’s technology opens a much wider set of products and processes to digital twins thanks to rich data feeds that allow digital twins to be used throughout a product’s lifecycle.

We recently spoke with Monica Schnitger, president and principal analyst with the Schnitger Corporation, a market analysis firm that specializes in engineering software, about how digital twins are being used today and the exciting new possibilities that lie ahead.

Using Digital Twins to Change the Business Model

Digital twins are being used across a variety of industries, with more use cases popping up regularly. One application is in aerospace. Aircraft engine manufacturers like General Electric and Rolls Royce can now lease and maintain the airlines’ engines and charge the airlines for “power by the hour”; that is, the number of hours they fly.
Schnitger explains, “What that means is that the makers of those engines have to have really good, solid information about the efficiency of the engine. They need to be able to predict when and how maintenance will be carried out, so that the makers can maximize the revenue that they get from that engine. They have to ask, ‘What do I need to know? Then, how do I get that data and then, how do I analyze that data? Finally, what decisions can I draw from it and what does that mean in terms of where I put people on the ground to do the maintenance jobs?’”

Another example is air conditioners. Rather than sell a unit to a customer, an air conditioning manufacturer can instead sell “cooling degrees” but maintain ownership of the physical air conditioners. Using sensors, the company is able to go beyond monitoring into predictive analytics. By analyzing weather patterns, the air conditioner maker can predict how customers are likely to use their air conditioners and plan ahead for its power needs to prevent power surges – and, therefore, downtime. Digital twins can also reduce risk when employed in settings that pose physical danger to workers, such as wind farms and oil rigs. Virtual and augmented reality (VR and AR) can also be deployed for digital twin technology, Schnitger says. For example, a car mechanic might use an AR headset to identify the full maintenance history of a car, which appears as an overlay when looking at a vehicle.

Massive Effect on IT Infrastructure

To make digital twin applications possible – and effective – all the data related to a product has to be integrated and managed over its lifecycle. For example, if a product is producing real-time sensor data for the purposes of predictive maintenance, that data needs to be gathered and analyzed. Cloud and edge computing enable enterprises to turn data into insights. “There is a cost implication as well as a bandwidth implication,” Schnitger says.
“If you can figure out what sensor data is important to send somewhere else and what’s not important, then you’re not paying to transmit lots of useless data.” The cloud is a cost-effective place for data storage. And many of the applications that run the technologies can live there as well. But for some purposes, edge computing might be the better solution because it exists away from centralized cloud computing and close to the sources of data, such as manufacturing equipment or sensors.

The Challenges Are Real

As promising as digital twins are, enterprises shouldn’t discount the number of potential roadblocks along the way to implementation. It’s important to be clear-eyed about these challenges:

Expense. A digital twin program isn’t free. It relies on sophisticated software, data storage, and sensors. Without a clear business case and identification of the issue needing to be solved, enterprises might write off digital twins as too costly to explore. You can bring down the cost of a digital twin strategy by employing a cloud-based solution, such as Oracle Engineered Systems with cloud equivalents like Oracle Exadata Cloud at Customer and Oracle Exadata Cloud Service. They offer flexibility, scalability, agility, and cost savings to make a digital twin strategy a reality.

Data overload. At their heart, digital twins rely on large amounts of data to gain insights. Unfortunately, not all the data is relevant. “The vast majority of the data is not going to help us with anything,” Schnitger acknowledges. “It’s that one tiny piece in the middle of an overwhelming stream that’s going to tell us something critical.” Enterprises can take advantage of fully built-out big data infrastructure like Oracle Big Data Appliance to wade through all the data. That allows businesses to leverage insights immediately without having to spend the time to develop a custom big data solution in-house.

Security.
Even though much of the data from digital twins can be relayed through the cloud or over the public Internet, there are security concerns. Oracle Cloud at Customer allows enterprises to take advantage of the public cloud in their own data center behind their own firewall. It provides the flexibility and security needed for a digital twin strategy.

Ready to Adopt Digital Twins? Start Small.

If you’re planning to test a digital twin project, Schnitger recommends picking one small thing you want to understand and focusing on what you need to do for it. Finding, gathering, and sanitizing the data is a huge undertaking, so stay tightly focused. Another challenge is the business case. Schnitger emphasizes, “This isn’t necessarily easy or cheap to create, and so if you’re not clear on what it is that you’re trying to solve, you're not going to succeed at this. And one of the things that works really, really well in doing this whole digital twin exercise is creating some sort of a pilot or sandbox where someone who cares about that data is responsible for answering the question of how I can make this better, or cheaper, or whatever the particular question is. Prove the success and the business case and then get bigger.” A good place to start is maintenance because that’s the low-hanging fruit. “It’s a good way of both proving that you can do this, because that’s a big hurdle, and then you can say there’s a benefit to do it and, therefore, it should be scaled,” she adds. Oracle Engineered Systems and cloud-ready solutions help enterprises address today’s infrastructure complexity and maintenance issues while preparing for tomorrow’s shifting market demands.

What Does the Future Hold?
Finally, we had Schnitger give us a glimpse of the transformational change that digital twins can bring: “We start having all of these opportunities for people to change the structure of the way that their industries currently work, and that’s really exciting because it means, ultimately, we will wind up with a much more efficient ecosystem in whatever industry we’re in.”   Monica Schnitger is Founder, President, and Principal Analyst of the Schnitger Group. She has developed industry forecasts, market models, and market statistics for the CAD/CAM, CAE, PLM, GIS, infrastructure and architectural/engineering/construction and plant design software market since 1999. She holds a B.S. in Naval Architecture and Marine Engineering from MIT and an honors MBA from the F.W. Olin School of Management at Babson College.      


Cloud Infrastructure Services

Don't Sleep on These Top 5 Enterprise IT Trends for 2019

2019 may be more about laying groundwork than historic breakthroughs, but don’t let that lull you into sleeping through these top five enterprise IT trends. It should be an exciting and busy year as new technologies find their way into enterprise applications, and some existing ones are refined and redeployed to better protect and connect our rapidly digitizing world.

1. Running the public cloud behind your firewall.

With an explosion of massive, high-profile business data breaches in the past few years, no one can blame organizations for wanting a more private cloud. Private clouds provide the cost efficiency and agility of the public cloud in an on-premises deployment that increases security and decreases latency. For companies that are heavily regulated and fearful of moving data to a public cloud, this makes sense. However, some businesses struggle to justify the organizational changes and expense needed to implement private clouds. That’s why 2019 will be the year organizations go private but in a very public way. More companies will take advantage of private cloud services that offer public-cloud scalability and OPEX-friendly subscription models, like Oracle Cloud at Customer. This will enable businesses to keep their data safely behind their firewalls and enjoy the benefits of the public cloud—but without the DIY hassle of management, monitoring, and troubleshooting.

2. Autonomous enterprise software will gain more trust and traction.

As the world gears up for self-driving vehicles, it’s a reminder of the true potential of machine-learning and artificial-intelligence technologies. In 2019, expect these automated technologies to bring real benefit to enterprise software. How? By making systems easier and smarter, autonomous enterprise software will spur increases in productivity, while faster data analysis will drive improved business decisions and predictive insights for organizations.
Many business applications already come with built-in machine-learning models that improve over time, laying the foundation for a truly autonomous experience. We will soon see disparate enterprise applications talk to and learn from each other, which will help information flow much faster. As more companies see firsthand the transformation of their own processes through intelligent enterprise software, trust in autonomous technologies, like Oracle's adaptive intelligence apps, will continue to grow.

3. Chatter around enterprise-grade AI-powered assistants will grow louder. TechCrunch reports that about 43 million people in the US have at least one smart speaker that can tell them tomorrow’s weather or give them the latest sports scores. That’s a big buy-in by consumers of AI-powered voice-control technologies. Still, adoption of digital assistants by enterprises has lagged.

That’s set to change in 2019, when voice-enabled technology will progress beyond enterprise-grade chatbots and spur a shift in how businesses operate and customers are served. A new generation of conversational interactions and interfaces powered by AI and the cloud can now “know” a user, learn his or her preferences, actions, and even behaviors, and then predict or act on the user’s behalf. At the same time, digital assistants, which apply AI to natural language processing and understanding, can automate engagements with conversational interfaces that respond instantly and better understand customer intent while increasing business efficiencies.

4. Edge computing will explode. According to Forrester, 27% of global telecom decision-makers said that their firms were either implementing or expanding edge computing in 2019. With edge computing, data gets processed as close to the collection source as possible, rather than in a centralized cloud location. What’s driving this trend?
The growth in Internet of Things (IoT) sensor data analysis and aggregation is one factor, along with real-time customer interactions driven through mobile applications and edge video and audio equipment, for example, when a consumer obtains location-specific information through a smartwatch. Edge computing accelerates the gathering and sharing of data, and faster access to data means that companies can make continuously informed decisions.

The desire to get closer to the customer will also drive an uptick in microservices, in which a cloud application is structured as smaller, connected services, as well as containers, which house an application’s code so the application runs smoothly in different computing environments. These are two edge computing technologies that significantly enhance the speed and agility of an application when implemented effectively.

5. A surge in DBAs finding a sweet spot with autonomous databases. Data warehouses can be the bane of the C-suite’s existence. Because data warehouses have to be continually built, maintained, and expanded by a team of data engineers and administrators, they are a costly, albeit crucial, part of any enterprise. However, getting DBAs to trust robots to do the work has been a big challenge. Look for that to change in 2019 as AI-fueled autonomous database platforms, like Oracle Autonomous Database, gain steam. These self-driving, self-securing, and self-repairing data systems mean less tedious work for DBAs while delivering high-performance data warehousing right out of the box, freeing DBAs to lead more innovative projects. Best-in-class autonomous technology, especially when powered by Oracle engineered systems, will allow companies to scale on demand, raising or lowering compute resources at any time with no downtime. These systems will also help manage costs by switching off compute resources when the data warehouse isn’t being used.
This is the year that smart businesses invest more in smart data warehousing.

Help Your Organization Rise and Shine

For enterprises still hesitant to tap these emerging technology trends, it’s time to wake up to the rich opportunities they offer. Oracle helps customers explore these emerging technologies, thoughtfully and with consideration for your current IT infrastructure, through powerful yet flexible cloud-ready solutions such as Oracle Engineered Systems, including Oracle Exadata and Cloud at Customer.


Engineered Systems

How Blockchain and Chatbots Are Changing Financial Services

It’s rare for the financial services (FS) industry to lead in implementing emerging technologies, but that’s indeed the case with blockchain and chatbots. Both technologies are transforming internal and customer-facing processes and adding new capabilities for FS businesses. Part of the reason this is happening now is a growing provider ecosystem and knowledge base.

Blockchain Links Customers and Savings

In 2018, the industry spent $1.7 billion on blockchain, according to research by Greenwich Associates. The typical top-tier bank now has a full-time team of 18 working on the technology, and one in 10 banks has a blockchain development budget that exceeds $10 million, according to the research firm. One of the biggest benefits of deploying a distributed ledger technology in the FS space is reduced operating costs, something Arab Jordan Investment Bank (AJIB) experienced after deploying Oracle Blockchain Platform. AJIB was able to immediately reduce the cost of money transfers between subsidiaries by removing third-party intermediaries and the accompanying fees and charges incurred at each stage of a transfer. Users can build a blockchain network and let Oracle manage the network infrastructure (offsite or Cloud at Customer) while internal developers build contracts and applications on top of the network. The artificial intelligence (AI) that powers chatbots is built in to every layer of Oracle’s engineered systems.

Chatbots Bring Cost Savings and Richer Customer Experiences

Chatbots, too, are becoming more common in FS. Like blockchain, conversational intelligence (aka chatbots) is becoming a part of the modern FS world. We’re seeing chatbots in areas where enough work has been done to develop rich data sets that go beyond the human ability to process.
These intelligent agents can be customer-facing or serve as internal helpers for tasks like uncovering business insights or making recommendations for a client. The key to user satisfaction is seamlessness. Nobody wants to be mired in a frustrating closed-loop conversation with a chatbot, but the good news is that the technology has evolved and now works quite elegantly. Oracle offers a ready-to-go Intelligent Bot platform that customers can use to build a custom conversational interface that fits their customers’ needs. One of the major reasons FS companies are experimenting with intelligent agents like chatbots is that certain customer segments prefer them: they offer a richer, more personalized, and more consistent experience. From the business point of view, intelligent agents offer a way to standardize processes (such as onboarding and document retrieval), they have an inherent audit trail, and they can be used for learning and teaching. Bank of America rolled out its customer chatbot, Erica, last month, after demoing it two years prior. The bank said it wanted to be sure users would have a smooth experience before introducing Erica, who can help customers in limited locations check their balances, remember to pay their bills, and find information about the bank and its services.

The Necessity of a Cloud-Ready Infrastructure for FS

The predominant strategies for these technologies revolve around cloud enablement, which for FS involves migrating the majority of existing applications and infrastructure into a cloud environment. To turn blockchain and AI into productive applications, developers need to abstract away the complexity of enterprise technology, which is what the cloud does. Additionally, the cloud provides the scalable infrastructure for the much higher storage and compute requirements of these transformative technologies, and it can replace disconnected legacy systems with a stable foundation for the new workloads.
For example, Oracle Engineered Systems are cloud-ready and purpose-built for the database where FS companies keep their most precious asset: their customers’ data. Leveraging the Oracle Database, Oracle blockchain technology, and intelligent bots from the Oracle Cloud together with Oracle Engineered Systems on-premises infrastructure as a holistic solution provides the type of security and manageability required to thrive in this digital age. People often ask, “What’s next for blockchain and intelligent agents?” The truth is, we don’t yet know, because we don’t know where the possibilities end. While we didn’t all get flying cars a la “The Jetsons,” we did get the equivalent of Rosie the Maid in the form of Roombas, Nest thermostats, Amazon Alexa, and more. These two transformative technologies are not the end of the road; far from it. With ongoing experimentation on low-hanging-fruit use cases, who knows what tomorrow holds.


Data Protection

What’s New with Oracle Database 19c and Exadata 19.1

Oracle Database 19c is now available on Oracle Cloud and for on-premises Oracle Engineered Systems. Database 19c is the final release of the Oracle Database 12c family of products and has been designated the extended support release (in the old version naming scheme, 19c is equivalent to 12.2.0.3). Though we encourage our Exadata customers to upgrade to the latest version of the Oracle Database to benefit from all our continuous innovation, we understand that customers may adopt new versions more slowly. Release 19c comes with four years of Premier Support and an additional three years of Extended Support. As always, curious readers can refer to MOS Note 742060.1 for details on the support schedule. Dom Giles’s blog discusses Database 19c in detail. This post focuses on the unique benefits of Database 19c on Exadata, the best platform for running the Oracle Database. Before that, it’s useful to quickly go over some highlights of Exadata System Software 19.1, which is required to upgrade to Oracle Database 19c.

Exadata System Software 19.1 Highlights

Exadata System Software 19.1, generally available last year, was one of the most groundbreaking Exadata software releases to date (see post and webcast), and is required to run Database 19c on Exadata. Upgrading to Exadata 19.1 also upgrades the operating system to Oracle Linux 7 Update 5 in a rolling fashion, without requiring any downtime. The most popular innovation of Exadata 19.1 was Automated Performance Monitoring, which combines machine-learning techniques with deep lessons learned from thousands of mission-critical real-world deployments to automatically detect infrastructure anomalies and alert software or administrators so they can take corrective action. To learn more, please tune in to this webcast.
Better Performance with Unique Optimizer Enhancements

For many years, optimizing the performance of the Oracle Database required bespoke tuning by performance experts working with the Oracle optimizer. Database 19c introduces Automatic Indexing, an expert system that emulates a performance expert. This expert system continually analyzes executing SQL and determines which existing indexes are useful and which are not. It also creates new indexes as it deems them useful, based on the executing SQL and the underlying tables. To learn more about this unique capability, please see Dom’s blog. Automatic Indexing continually learns and keeps tuning the database as the underlying data model or usage patterns change.

Some of the most critical database systems in the world run on Exadata. Tuning these critical systems requires capturing the most current statistics, but capturing statistics is a resource-intensive task that impinges on the operation of these systems. Database 19c solves this dilemma by introducing Real-Time Statistics: statistics can now be collected in real time as DML operations insert, update, or delete data.

More In-Memory Database Innovations

The future of analytics is in-memory, and Exadata is the ideal platform for in-memory processing. Exadata’s unique capability to execute vector instructions against in-memory columnar-formatted data makes it possible to use In-Memory technology for all your data sets, not just the most critical ones. Every Database and Exadata software release continues to extend the capabilities of In-Memory technology on Exadata. Database 19c unlocks another unique In-Memory capability: Memoptimized Rowstore - Fast Ingest. Some modern applications, such as Internet of Things (IoT) applications, need to process high-frequency data streams. These data streams are generated by a potentially large number of data sources (e.g., devices, sensors).
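Before turning to Fast Ingest, it is worth noting that the Automatic Indexing capability described above is controlled through the DBMS_AUTO_INDEX PL/SQL package. The following is a minimal sketch; the schema name is illustrative, and exact defaults may vary by release:

```sql
-- Minimal sketch: enabling Automatic Indexing in Oracle Database 19c.
-- The schema name HR is illustrative; adjust for your environment.

-- Let the expert system create and use new indexes automatically:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Restrict automatic indexing to a specific schema:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA', 'HR', TRUE);

-- Review what the expert system has created, used, or discarded:
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;
```

Setting the mode to 'REPORT ONLY' instead of 'IMPLEMENT' lets the expert system recommend indexes without creating them, a useful first step when building trust in the feature.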
The Memoptimized Rowstore - Fast Ingest feature enables fast data inserts into an Oracle database. These “deferred inserts” are buffered and written to disk asynchronously by background processes, enabling the Oracle database to easily keep up with the high-frequency, single-row inserts characteristic of modern data-streaming applications.

Summary

Enabled by unique-to-Exadata functionality, Oracle Database 19c delivers substantial performance and ease-of-management improvements for all workloads. Machine learning complements lessons from real-world deployments to monitor performance and provide safe and efficient optimizations. In-Memory enhancements enable the best analytics functionality, and support for deferred inserts enables fast online operations. We are always interested in your feedback; you are welcome to engage with me via Twitter @ExadataPM or in the comments here.

About the Author

Gurmeet Goindi is the Master Product Manager for Oracle Exadata at Oracle. Follow him via Twitter @ExadataPM.


Engineered Systems

How to Build a High Availability Strategy for Oracle SuperCluster

As well as providing a platform that delivers outstanding performance and efficient consolidation for both databases and applications, Oracle SuperCluster M8 offers a solid foundation on which highly available services can be deployed. How can such services best be architected to ensure continuous service delivery with a minimum of disruption from both planned and unplanned maintenance events? The first step in architecting highly available services on Oracle SuperCluster M8 is to understand the building blocks of the platform and the ways in which they support redundancy and high availability (HA).

Hardware Redundancy

Oracle SuperCluster M8 is built around best-of-breed components. The mean time between failures of these components is typically extremely long. Nevertheless, even well-designed and well-manufactured hardware can fail. With that in mind, Oracle SuperCluster M8 is architected to avoid single points of failure, thereby reducing the likelihood of an outage due to hardware failure. The redundancy characteristics of some key components of Oracle SuperCluster M8 are described below.

Compute servers. The compute servers used in Oracle SuperCluster M8 are robust SPARC M8-8 servers that boast many features designed to maximize reliability and availability.
Each SPARC M8-8 server in Oracle SuperCluster M8 also includes Physical Domains (PDoms) that are electrically isolated and function as independent servers. Either one or two SPARC M8-8 servers can be configured in an Oracle SuperCluster M8 rack, and each SPARC M8-8 server includes two PDoms. With multiple PDoms always present, it is possible to avoid single points of failure in compute resources.

Exadata storage. Three or more Exadata X7 Storage Servers are configured in every Oracle SuperCluster M8 rack. A minimum of three Exadata Storage Servers allows a choice of normal redundancy (double mirroring) or high redundancy (triple mirroring). It is possible to achieve high redundancy with as few as three Exadata Storage Servers thanks to the included Exadata Quorum Disk Manager software.
Up to eleven Exadata Storage Servers can be accommodated in a rack that hosts a single SPARC M8-8 server, and up to six in a rack that hosts two SPARC M8-8 servers.

Shared storage. A ZFS Storage Appliance (ZFSSA) that delivers 160TB of raw storage capacity is included in every Oracle SuperCluster M8 to provide shared storage, satisfying infrastructure storage needs and also providing limited capacity and throughput for user files such as application binaries and log files. The appliance’s controllers are delivered in a cluster configuration, with the pair of controllers set up active-active. Two equally sized disk pools (zpools) are set up, one associated with each controller.
Should a controller fail for any reason, the surviving controller takes over both of the disk pools and all services until the failed controller becomes available again. The result is that a controller failure need not lead to a shared storage outage.
Disks in the shared storage tray of the ZFS Storage Appliance are mirrored to provide redundancy in the event of disk failure, with hot spares that are automatically swapped into the configuration in the event of disk failure.
It’s worth noting that hardware failures typically result in a service request being raised automatically if Oracle Auto Service Request (ASR) is configured.
On Oracle SuperCluster M8, iSCSI devices are assigned for all types of system disks and for zone root file systems. All iSCSI devices for any specific PDom are stored in the same ZFS Storage Appliance zpool (as already noted, a single zpool is associated with each of the two ZFSSA controllers). The intent is that any ZFSSA controller failure will affect only half of the PDoms (and any effect is of very brief duration thanks to automated failover); all iSCSI devices associated with other PDoms will be unaffected.

InfiniBand switches. All Oracle SuperCluster M8 configurations include two InfiniBand leaf switches for redundancy. Each dual-port InfiniBand HCA is connected to both leaf switches, allowing packet traffic to continue even if a switch outage occurs.
The entry Oracle SuperCluster M8 configuration consists of one CMIOU in each PDom of a single M8-8 server, plus three Exadata Storage Servers. All larger Oracle SuperCluster M8 configurations also include a third InfiniBand switch, a spine switch. The spine switch, which is connected to each leaf switch, provides an alternative path for InfiniBand packets as well as additional redundancy.

Ethernet networking. Although Oracle SuperCluster M8 does not include 10GbE switches (the customer supplies these), the 10GbE NICs in the SPARC M8-8 servers and on the ZFS Storage Appliance are typically connected to two different 10GbE switches to ensure redundancy in the event of switch or cable failure. The operating system automatically detects any loss of connection, for example due to cable or switch failure, and routes traffic accordingly. Each quad-port 10GbE NIC used in Oracle SuperCluster M8 is configured as two dual-port NICs, allowing redundant connections to be established for each NIC.

Other components. A number of other components, including the SPARC M8-8 Service Processor, power supply units, and fans, are also designed and configured to provide redundancy in the event of component failure.

Software Redundancy

Oracle SuperCluster M8 is not totally reliant on the hardware redundancy outlined in the previous section, extensive as it is. The design of Oracle SuperCluster M8 also allows a number of other mechanisms to be leveraged, giving users the opportunity to layer software redundancy on top of hardware redundancy.

Oracle Database Real Application Clusters (RAC) has long provided a robust and scalable mechanism for delivering highly available database instances based on shared storage. On Oracle SuperCluster M8, RAC database nodes can be placed on different PDoms to build highly resilient clusters, with data files located on Exadata Storage Servers.
The end result is a database service that need not be impacted by either a PDom or a storage server outage.

Oracle Solaris Cluster, an optional software add-on for Oracle SuperCluster M8, provides a comprehensive HA and disaster recovery (DR) solution for applications and virtualized workloads. On Oracle SuperCluster M8, Oracle Solaris Cluster delivers zone clusters, virtual clusters based on Oracle Solaris Zones, to support clustering across PDoms with fine-grained fault monitoring and automatic failover. Zone clusters are ideal environments for consolidating multiple applications or multitiered workloads onto a single physical cluster configuration, providing service protection through fine-grained monitoring of applications, policy-based restart, and failover within a virtual cluster. In addition, Solaris 10 branded zone clusters can be used to provide high availability for legacy Solaris 10 workloads.

The Oracle Solaris Cluster Disaster Recovery framework, formerly known as Solaris Cluster Geographic Edition, supports clustering across geographically separate locations, facilitating the establishment of a DR solution. It is based on redundant clusters, with a redundant and secure infrastructure between them. When combined with data replication software, this option orchestrates the automatic migration of multitiered applications to a secondary cluster in the event of a localized disaster.

Built-in clustering support is inherently provided with some applications (Oracle WebLogic Server clusters are an example). Such support delivers redundancy without the need for specialized cluster solutions.

Note that both RAC and Oracle Solaris Cluster use the redundant InfiniBand links in each domain when setting up cluster interconnects. For example, Oracle Solaris Cluster on Oracle SuperCluster M8 leverages redundant IB partitions, each in a separate IB switch, to configure redundant and independent cluster interconnects.
Architecting a Highly Available Solution for Oracle SuperCluster M8

Although considerable redundancy is provided in the hardware components of Oracle SuperCluster M8 (for example, all 10GbE NICs and InfiniBand HCAs include two ports, which are connected to different switches), Oracle does not recommend putting the primary focus on low-level components when considering HA.

For example, Exadata Storage Servers use InfiniBand to send and receive network packets associated with database access. The InfiniBand HCAs used in storage servers have two ports, thereby providing resilience in the event of switch or cable issues. But each storage server has a single HCA, which means that an HCA failure will take the storage server offline. While this might seem like a problem at first glance, there are a number of reasons why this design not only makes sense but has proven enormously successful:

- Given the long mean time between failures of InfiniBand HCAs, replacement due to failure is extremely rare.
- Building redundancy into every possible failure point would not only add cost, it would increase both hardware and software complexity.
- Exadata Storage Servers are never installed as single entities. The key unit of redundancy is the storage server itself, not its components.

Another key factor is that outages are not solely caused by hardware failures. Planned maintenance, such as applying a Quarterly Full Stack Download Patch (QFSDP), may necessitate an outage of affected components. Other unplanned events, such as shutdowns caused by external issues (power or cooling problems, for example), software or firmware errors, and even operator error, can sometimes lead to outages.

It is important to architect solutions that focus at a high level on solving real-world problems, rather than on low-level problems that may never occur. This design principle can usefully be applied to every configuration in your data center.
The most effective way to ensure continuous availability of mission-critical services is to set up a configuration that is resilient to component outage, wherever it occurs and whatever the cause. For such a strategy to be effective, it needs to include a disaster recovery element based on offsite replication and failover. An offsite mirror of the production environment is a necessary precaution against both natural and man-made disasters, and a key component of any highly available deployment. The simplest and safest strategy at the disaster recovery site is to deploy the same components that are in use at the primary site. Best practices for disaster recovery with Oracle SuperCluster are addressed in the Oracle Optimized Solution for Secure Disaster Recovery whitepaper, subtitled Highest Application Availability with Oracle SuperCluster.

At the local level, clustering capabilities can be used to deliver automatic failover whenever required. The extensive hardware redundancy of Oracle SuperCluster M8 is not wasted; it greatly reduces the likelihood of a hardware failure that results in downtime.

Quarterly patches, and in particular the SuperCluster QFSDP, can be applied in the fastest and most efficient manner using a disruptive approach: shutting down the system and applying updates in parallel. One benefit of a highly available configuration, though, is that a QFSDP can be applied to the various components of a SuperCluster system in a rolling fashion without loss of service. Rolling updates take longer overall to complete, since components are not updated in parallel, but they are much less disruptive. Speak to an Oracle Services representative to understand whether rolling updates can be applied to your SuperCluster system.

Backup and Recovery
A crucial element of any highly available environment is the ability to perform backups and restores as required. The Oracle Optimized Solution for Backup and Recovery of Oracle SuperCluster whitepaper specifically addresses this requirement, documenting best practices for backup and recovery on Oracle SuperCluster.
Backup and restore must cover infrastructure and configuration metadata as well as customer data, and for SuperCluster, Oracle provides the osc-config-backup tool for this purpose. The tool stores its backups on the included ZFS Storage Appliance. Note, though, that the ZFS Storage Appliance itself and the Exadata Storage Servers must be backed up independently.
The SuperCluster platform includes multiple components, of which the following are backed up by osc-config-backup:

- M8 Logical Domains (LDoms) configuration (the older SuperCluster M7, T5-8, M6-32, and T4-4 platforms are also supported)
- GbE management switch
- InfiniBand switches
- iSCSI mirrors of rpool and u01-pool on Dedicated Domains
- ZFS snapshots of rpool and u01-pool on Dedicated Domains
- Explorer data from each Dedicated Domain
- SuperCluster configuration information (OES-CU) data
- ZFS Storage Appliance configuration information

For SuperCluster environments that include Root Domains and IO Domains, Root Domains can be treated like Dedicated Domains and backed up accordingly. IO Domains use iSCSI LUNs located on the included ZFS Storage Appliance for their boot disks, and these LUNs can be backed up simply by creating a ZFS snapshot. Redundancy is provided by the disk mirroring used with the ZFS Storage Appliance.

Applications and Optimized Solutions Using Oracle SuperCluster

For further information about application deployments in a highly available environment on Oracle SuperCluster, refer to the following links:

- SuperCluster for SAP
- Oracle Optimized Solution for Oracle E-Business Suite on Oracle SuperCluster
- Teamcenter on Oracle Engineered System

The hardware components of Oracle SuperCluster M8 provide a key set of ingredients for delivering highly available services. In combination with clustering software such as Oracle Solaris Cluster and Oracle Database RAC, services can continue without interruption during both planned and unplanned outages. An offsite configuration that replicates the main site can ensure that even a disaster need not lead to an extended loss of service.


Data Protection

Oracle Exadata: Can You Trust Yourself to Secure Your System?

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle.

Can you trust yourself with the security of your company’s critical data? At first, this must sound like a ridiculous question; as the old adage says, “If you can’t trust yourself, who can you trust?” But I’m not talking about trusting your own integrity, fearing you will steal your data or sabotage your system. I’m asking whether you have enough confidence in your abilities to trust them with the security of your system. After all, do you trust yourself to fly the plane on your next trip, or deliver your next child, or even do your own taxes? Some things are best left to the experts, and securing your database server is clearly in that camp.

It seems as if we hear about a new data breach every few weeks. It could be credit bureaus, hotel chains, social media sites; no one seems immune. But there is no single vulnerability affecting all these victims, which makes it especially hard to avoid the next breach. There is no checklist you can walk through that will guarantee you are secure. Rather, it takes hard work and lots of testing to ensure your system is secure. An IBM study conducted by the Ponemon Institute found the average cost of a data breach in 2018 was $3.86 million. Given the stakes, it makes sense to leave security to the security professionals. Security researchers have years of experience in locking down systems. They understand common vulnerabilities and have developed best practices and methodologies that dramatically reduce the risk of break-ins. Are you a security professional? My guess is “probably not,” and that is another reason to use engineered systems like Oracle Exadata.

So, how does Exadata protect against unauthorized access to data? It uses a defense-in-depth approach to security, starting with giving services and users only the minimal privileges required to operate the system.
Customers following Exadata’s default settings are protected using the following techniques:

- Minimal software install: Exadata does not install unnecessary packages, eliminating the potential vulnerabilities associated with them.
- Oracle Database secure settings: Locks down the Oracle database with settings developed through years of testing by Oracle development.
- Minimum password complexity enforced: Greatly reduces the risk that a user on the system chooses an easy-to-guess password.
- Accounts locked after too many failed login attempts: Prevents someone from programmatically trying passwords to break into the system.
- Default OS accounts locked: Prevents login from accounts that need not support it, reducing the password and key management burden.
- Limited ability to use the su command: Prevents users from elevating their privileges on the system or changing their identity.
- Password-protected secure boot: Prevents unauthorized changes to the boot loader and booting the system with unauthorized software images.
- Unnecessary protocols, services, and kernel modules disabled: Eliminates threats from vulnerabilities in services not required for operation of the system.
- Software firewall configured on storage cells: Prevents anyone from opening additional ports to access storage cells or enabling services that are not required and may present vulnerabilities.
- Restrictive file permissions on key security-related files: Prevents accidental or intentional changes to security files that could compromise security.
- SSH listening only on management/private networks: Prevents users on the public network from logging into a database server.
- SSH protocol version 2 only, with insecure authentication mechanisms disabled: Prevents use of SSH version 1, which contains fundamental weaknesses that make sessions vulnerable to man-in-the-middle attacks.
- Cryptographic ciphers properly configured: Prevents improperly configured ciphers from compromising security, and uses hardware cryptographic engines to improve performance.
- Fine-grained auditing and accounting: All user activity on the system is monitored and recorded.

Now you might be thinking: these are all database and system configuration settings, and you could do it yourself. That is true, and if you are a security professional, you likely can. But what if you are not—do you know how to secure and harden the system properly? Either way, Exadata also includes many features that improve security further—features engineered into the platform and not available on self-built systems.

By default, all clusters running in a consolidated environment can access any ASM disk. Exadata tightens that security with ASM-scoped security, which limits access to the underlying disk partitions (grid disks) to authorized clusters only. Because a single cluster may host multiple databases, DB-scoped security provides even finer-grained control, limiting access to specific grid disks to authorized databases only.

Exadata also checks for unauthorized access by scanning the machine for changes to files in specific directories. If changes are detected, Exadata raises software alerts, notifying the administrator of a potential intrusion. Management operations and public data access are segregated onto different networks, allowing tighter security on the public network interfaces without compromising manageability. VLAN support protects users from unauthorized access to network data by isolating network traffic. Similarly, compute servers access the storage cells over an isolated network—one that is InfiniBand-partitioned to ensure network traffic from one cluster is not accessible to another, eliminating the chance an attacker can steal data as it transits between compute and storage.

During the development process, security is built in.
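A few of the hardening items above, such as requiring SSH protocol 2, locking out root logins, and restricting which interfaces sshd listens on, can be spot-checked on any Linux server. The sketch below is a generic, illustrative audit of an sshd_config file, not Oracle's actual hardening tooling; the keys checked and the hardened values expected are assumptions based on common practice.

```python
# Illustrative audit of a few SSH hardening settings similar to those
# Exadata enforces by default. Generic sketch only, not Oracle tooling.
def audit_sshd(config_text):
    """Return a list of findings where settings deviate from hardened values."""
    settings = {}
    for line in config_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blanks
        if not line:
            continue
        key, _, value = line.partition(" ")
        settings[key.lower()] = value.strip()

    findings = []
    if settings.get("protocol", "2") != "2":    # SSH v1 is vulnerable to MITM
        findings.append("SSH protocol v1 enabled")
    if settings.get("permitrootlogin", "yes") != "no":
        findings.append("direct root login permitted")
    # Hardened hosts listen only on management/private interfaces
    if settings.get("listenaddress", "0.0.0.0") in ("0.0.0.0", "::"):
        findings.append("sshd listening on all interfaces")
    return findings
```

Running it against a hardened configuration returns no findings; a permissive one returns a finding per deviation.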
The Exadata development team routinely runs a variety of industry-standard security scanners to ensure the software deployed on the system is free from known vulnerabilities. If vulnerabilities are detected, monthly software updates quickly provide fixes to keep your system protected. All these features are critical to security, but studies have repeatedly shown the most significant contributor to security vulnerabilities is failing to keep up with software updates. Given the complexity and risk of patching today’s critical database systems, that is not surprising. Many opt not to touch what is not broken, but what’s broken may not always be visible.

Exadata takes the risk and pain out of software updates. Risk is reduced because all database and Exadata software updates are extensively tested in the Exadata environment before shipping. Exadata customers also benefit from a community effect: with a large community of customers running Exadata, issues are quickly discovered and fixed. If you build your own database environment, it’s possible only you will experience an issue, and you alone will suffer the associated disruption. The pain of software updates is reduced with Exadata Platinum support. This level of support, exclusive to Exadata, regularly patches your systems on your behalf, eliminating your having to deal with patching altogether. With the risk and pain of software updates reduced, Exadata systems are patched more frequently, kept up to date with security fixes, and overall more secure.

Finally, don’t forget all the database security features. Oracle Database has a rich set of features to protect your data, and all are compatible with Exadata. Oracle Database protects your data with encryption for data at rest and in transit over the network. It can enforce access restrictions for ad hoc data queries by filtering results based on database user or a data restriction label.
Databases themselves can be isolated within a rack using virtual machine clusters, within a single VM using OS user-level isolation, or within a container database using the Multitenant database option. You can even protect valuable data from your administrators using Oracle Database Vault, a security feature that prevents DBAs from accessing arbitrary data on the systems they are managing. Lastly, to ensure compliance, Oracle Audit Vault and Database Firewall monitors Oracle and non-Oracle database traffic to detect and block threats, and consolidates audit data from databases, operating systems, directories, and other sources.

With this attention to security and the rich set of security features built in or available as options, Oracle Exadata is the world’s most secure database machine. Proven by FIPS compliance and many deployments satisfying PCI DSS requirements, it’s no wonder that hundreds of banks, telecoms, and governments worldwide have evaluated Exadata and found it delivers the extremely high level of security they require. If you truly value security, don’t trust yourself to do it right. Follow the path of these leading enterprises and protect your data with Oracle Exadata.

This is part 5 in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata. Our next post will focus on Manageability and examine the benefits Engineered Systems bring to managing your database environments.

About the Author

Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services.
He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle. Can you trust yourself with the security of your company’s critical data?  At first, this must sound like a...

Cloud Infrastructure Services

How to Achieve Public-Cloud Innovation in Your Own Data Center

Today's guest post comes from John Cooper, Vice President of Oracle Solutions Practice at Cognizant. Organizations are transitioning from traditional IT delivery models to cloud-based models to better serve customers, improve productivity and efficiency, and rapidly scale their businesses. Despite the benefits of public-cloud platforms, concerns over data sovereignty, compliance, and privacy have deterred many organizations from accelerating the migration of their workloads to the cloud.

Boomeranging IT Deployment Models

In fact, 80% of the 400 IT decision-makers who participated in IDC’s 2018 Cloud and AI Adoption Survey said their organization has migrated applications or data that were primarily part of a public-cloud environment to an on-premises or private-cloud solution in the last year. The reasons? No. 1 is security, followed by performance and cost control. In what are often referred to as “reverse migrations,” organizations coming back from public cloud are clearly getting smarter about which workloads belong there and which do not. This is certainly true in highly regulated industries, such as finance, government, and defense, where major concerns around data security and data placement have traditionally meant that data must stay within the organization’s firewall. Other concerns include limited control over assets and operations, Internet-facing APIs, and privacy. Privacy in particular has taken on greater significance since passage of the European Union’s General Data Protection Regulation (GDPR). This legislation is intended to safeguard EU citizens by standardizing data privacy laws and mechanisms across industries, regardless of the nature or type of operations. Requirements such as client consent for use of personal data, the right to personal data erasure without outside authorization, and standard protocols in the event of a data breach carry heavy fines if not strictly adhered to.
Even with these challenges, IT leaders are still seeking the types of benefits that public cloud can bring. Foremost among them are reduced technology total cost of ownership (TCO), integration with DevOps and Agile methodologies, reduced complexity in provisioning and duplication, and dynamic scaling both horizontally and vertically. These leaders want to feel confident that moving to the public cloud won’t wipe out their data or leave it vulnerable to hackers, that customization efforts on mission-critical systems such as ERP won’t be adversely affected, and that service level agreements (SLAs) for high performance and predictability will be met. Fortunately, Oracle can help organizations take advantage of public-cloud-like innovation in their own data centers with Oracle Exadata Cloud at Customer. Exadata Cloud at Customer provides the same database hardware and software platform that Oracle uses in its own public-cloud data centers and puts it into the customer’s data center. Oracle’s integrated technology stack offers wide-ranging benefits that enable organizations to run database cloud services similar to public cloud, while more easily complying with data regulations and governance. Oracle patches, maintains, and updates the Exadata Cloud at Customer infrastructure remotely and brings best-in-market hardware to your data center for periodic refreshes, while you maintain a tight leash on your data. It’s essentially your next step closer to public cloud, but without the risks. Cognizant works closely with Oracle as a premier partner through our Oracle Cloud Lift offering, which enables enterprises to rapidly obtain value from their Oracle Cloud platform and infrastructure investments. We are an Oracle Cloud Premier Partner in North America, EMEA, APAC, and Latin America. Our offerings include migrating Oracle and non-Oracle enterprise workloads to Oracle Exadata Cloud at Customer and other Oracle Cloud models.
Our tested and reliable solution supports clients with application inventory, assessment, code analysis, migration planning and execution, and post-migration support. We start with an in-depth inventory of the client’s current enterprise landscape, collecting data that feeds into our cloud assessment tools. The data is key for calculating the appropriate fit among public, hybrid, and private clouds. This process also helps predict the most appropriate model for migrating the client’s environment to cloud: Infrastructure as a Service (IaaS) or Platform as a Service (PaaS).

The Benefits of On-Premises Cloud

While public-cloud vendors have built software-defined, automated IT services that are very attractive to developers, it’s important to understand that those technologies are not unique to public cloud. With automated, modernized, and software-defined infrastructure, on-premises cloud solutions are safer, more predictable, and more cost-effective. You stay in control of your data and where it’s located, and subscribe only to the infrastructure, platform, and software services you need. Here are the types of benefits you can expect:

- Flexibility: Unlike some cloud providers, Oracle gives you access to the underlying infrastructure in either a dedicated or a multi-tenanted environment. This provides great flexibility and enables you to implement appropriate foundations for your specific needs.
- Performance: Choose an appropriate level of cloud performance for your applications, from standard compute to faster, high-end performance.
- Ease of migration: There is no need to significantly re-architect your applications or platform when you migrate to on-premises cloud, because Oracle’s cloud is specifically engineered to work with Oracle databases, middleware, and applications. If you choose a different cloud provider, major re-architecting will more than likely be involved.
- Security: Implement your own enterprise-grade security in the same manner as on-premises, thanks to the ability to access the underlying infrastructure.
- Scalability: Scale vertically and horizontally to support large, persistent data workloads.
- Ease of management: Manage your cloud estate with the same enterprise tooling that most organizations already use for their existing ecosystem.

A New Paradigm for Infrastructure Strategy

Having public-cloud-like power in your data center offers a new infrastructure strategy paradigm: Current systems are not disturbed, and organizations can build cloud-native applications or modernize existing ones more rapidly while infrastructure and data reside safely behind the firewall. For organizations interested in embracing digital transformation, on-premises cloud can be a positive initial step. Your organization will have access to a host of cloud services for data management. Why make costly missteps in public cloud, risk your data, or both? Constellation Research’s 2018 report Next-Gen Computing: The Enterprise Computing Model for the 2020s confirms that Oracle Cloud at Customer offers “the most complete on-premises offering” among leading cloud vendors, with “the ability to move workloads back and forth between cloud and on-premises” as one of its key differentiators. Cognizant and Oracle work together to help clients implement the new operating models, processes, and information systems needed to remain competitive and profitable in today’s digital revolution. If the time to start securely scaling your business is now, then it’s time to take a harder look at Oracle Exadata Cloud at Customer and Cognizant’s Oracle Cloud Lift offering.

About the Author

John Cooper leads the Oracle Solutions Practice in North America. In a career spanning 29+ years, John has worked extensively in G&A (HR, Finance, IT, Procurement) and Consulting.
His areas of specialization include the design and implementation of G&A organizations, processes, and technology solution enablers. John has led HR functions and served as the operational head of a multi-line consulting company. He has expertise in the alignment of business and information technology strategies. As the practice lead, John is responsible for delivery of consulting, implementation, upgrade, and application management services across the entire Oracle spectrum of products and services.


Engineered Systems

10 New Year’s Resolutions Your IT Organization Should Adopt

IT organizations of all sizes should make a resolution… We are well into 2019 now, but it’s not too late to resolve to make improvements in your IT organization—improvements that can fix your processes, increase your productivity and, most importantly, make your job easier. Here are 10 resolutions you should consider putting into practice this year.

Resolution #1: Refine Your Internal Systems

Are you spending more than 80% of your IT budget just to keep the lights on? Have you considered a hybrid approach to your internal systems? Making strategic decisions about which applications to keep on premises, which to move to the cloud, and which to keep in the Oracle Cloud behind your firewall—with Oracle Cloud at Customer—can help you reallocate your spending to more business-critical projects.

Resolution #2: Know Your Competition

Industries are changing at an unprecedented rate to overcome challenges presented by new entrants, new business models, ever-increasing customer demands, and changing workforce demographics. It is imperative to understand your industry and the trends that you will face in 2019 so that you can anticipate and get ahead of them. We can look at a few examples. In the auto industry, you have to worry about autonomous cars, regulations that may ban gasoline-powered cars, and competition from electric vehicles. In the financial industry, fintechs continue to transform offerings and customer expectations. Will you explore collaboration that can help your organization move into new markets or expand its offerings rapidly? Or will you try to go it alone? If you decide to go it alone, you will need to anticipate how you’ll compete with those nimbler fintech-powered competitors. Retail is another industry that has undergone incredible changes driven by the internet and mobile. The way we shop and purchase everything from fashion to electronics to groceries has been disrupted. How can your IT organization help respond to this radical change?

Resolution #3: Understand How You Can Take Advantage of the Power of AI and Blockchain

New technologies are presenting opportunities to help you succeed in the face of new challenges, fundamentally changing how we interact with technology and data. Emerging technologies can help you overcome many challenges in 2019. Some to consider include apps with built-in artificial or adaptive intelligence (AI), chatbots to automate and improve the user experience, IoT analytics to gain deep insights, blockchain applications that create high-trust transactions, and a self-driving autonomous database that can virtually eliminate many mundane IT processes.

Resolution #4: Reduce Operational Costs

If your main goal for this year is to reduce operational costs, consider leveraging AI, optimizing licensing costs, and deploying newer and better hardware. While there are many ways to do this, Oracle Engineered Systems with a capacity-on-demand feature are one option to consider.

Resolution #5: Deliver More Value to Clients

Staying ahead of the competition is key, and that starts with delivering increasing value to clients by innovating and continually improving solutions. Incorporating components of AI and machine learning can help your organization break free from the status quo and achieve significant gains in business innovation.

Resolution #6: Make Faster Business Decisions

Leveraging advanced analytics and AI can help you make better and faster data-driven decisions. Data is your biggest commodity, so it makes sense to put it to work for your business so you can serve customers more effectively. If you are evaluating analytics solutions, Oracle combines machine learning with years of database optimization to deliver a self-driving database that includes the highest level of security.

Resolution #7: Improve Data Security

Security breaches are no joke. Constantly having to upgrade and patch your environment can be a tedious task and is easily overlooked—with potentially serious business consequences. AI embedded into security can help predict possible breaches. Look for solutions that can deliver this kind of security.

Resolution #8: Consolidate Applications onto the Cloud

Are you considering signing on with best-of-breed cloud providers to move your organization forward? While this seems like a sound strategy, you are going to face some big challenges across cloud providers, including different data, management, service, and security models. Essentially, you are recreating the same challenges that exist today in your data center. And, once again, the burden falls on you to manage all of this complexity and make it all work together. Oracle offers a different option. We have designed the most complete and integrated cloud available in the industry across data-as-a-service, software-as-a-service, platform-as-a-service, and infrastructure-as-a-service layers. Simply put, it’s the only cloud you’ll need to run your entire organization.

Resolution #9: Adopt Modern Solutions

Instead of spending most of your IT budget on legacy systems, consider the cost effectiveness of updating to newer, more modern solutions. Doing so will help to reduce your IT footprint and save on power and cooling in your data center. Moreover, you will see improved performance to run your business and keep up with ever-increasing customer demands.

Resolution #10: Consider Oracle’s Flexible Deployment Options

Whether you are tied to on-premises, plan to move to cloud, or have a hybrid infrastructure, Oracle offers a unique approach you may want to explore in 2019. We provide three distinct deployment models. If you’re not ready to move to the cloud, you can take advantage of our engineered systems on-premises in the traditional data center. We’re also the only cloud provider to offer a fully managed cloud in your data center: with Cloud at Customer, we deploy an instance of our cloud in your data center, behind your firewall—giving you the security and compliance you need with the benefits of cloud. And, of course, you can subscribe to our complete services in the Oracle Cloud. We believe that this approach enables us to serve all types and sizes of customers and meet their business needs and maturity levels, no matter where they are today and where they plan to be at the end of 2019 and beyond.

Here’s to a Better, Happier IT Organization in 2019!

It may not be possible to execute on all 10 resolutions this year, but using these as a starting point, you can choose those that will help your IT organization move the needle the farthest and fastest toward your goals. So, from all of us at Oracle, we wish your IT team a happy and productive new year.


Engineered Systems

What Is a Storage Index?

What is a Storage Index? To answer that, let's first take a look at a database. The purpose of a database is to store information in columns, called fields, and rows, called records.

For example, let's say you're looking for all blue shoe purchases. Here's how that works. First, at the compute layer, you run a query asking for all the transactions between January and March where someone bought blue shoes. That query sends a request from the compute layer down to what is called the storage layer, where all the blocks of data from your database are stored. In a conventional system, the storage cannot filter through its blocks to return just the blue shoe purchases, so it reads and sends up all the blocks of data for all shoes. This causes I/O bottlenecks in your storage and network, because far too many blocks of data are being pushed up. And you sit there waiting for results.

A Better Way: Oracle Exadata Storage Indexes

There is a better way to get faster results and eliminate performance bottlenecks: Oracle Exadata’s storage indexes. Oracle Exadata’s smart storage can figure out which blocks of storage definitely cannot contain blue shoes. How does this work? Each region of storage holds summary information, called key statistics, that describes the data in that region. In this example, the key statistics describe the shoe colors stored in each region. If a region's key statistics show it may contain blue shoes, we read and process that region to find them. If another region's key statistics show it cannot contain blue shoes, we never read or process it. The storage index thus lets us skip every region where the value we're searching for cannot possibly be.
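The region-skipping idea just described can be sketched in a few lines: each storage region keeps min/max key statistics per column, and a scan consults them before reading any rows. This is a toy simulation of the concept, not Exadata's implementation; the Region and scan names are illustrative.

```python
# Toy simulation of storage-index pruning: each region summarizes its data
# with per-column min/max statistics, so a scan can skip regions whose
# statistics prove the predicate cannot match. Illustrative only.
class Region:
    def __init__(self, rows):
        self.rows = rows                      # list of dicts, e.g. {"color": "blue"}
        self.stats = {}                       # per-column (min, max) summary
        for col in rows[0]:
            vals = [r[col] for r in rows]
            self.stats[col] = (min(vals), max(vals))

    def may_contain(self, col, value):
        lo, hi = self.stats[col]
        return lo <= value <= hi              # outside [min, max] => cannot match

def scan(regions, col, value):
    """Return matching rows and how many regions the index let us skip."""
    matches, skipped = [], 0
    for region in regions:
        if not region.may_contain(col, value):
            skipped += 1                      # no I/O or CPU spent on this region
            continue
        matches.extend(r for r in region.rows if r[col] == value)
    return matches, skipped
```

A query for "blue" never reads a region whose statistics show only "red" through "yellow" values, which is exactly the I/O elimination described above.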
Exadata’s smart storage indexes dramatically increase performance and accelerate queries, because data blocks in regions that cannot contain matching values are automatically ignored, eliminating all the I/O and CPU needed to search those regions. It's like having fine-grained partitioning, but on multiple columns at once: you get key statistics about shoes across many columns, such as the type, size, and brand of a shoe, all at once and automatically. And the best part? While most indexes incur an overhead expense, particularly on updates, that is borne by the database server, Exadata storage indexes have near-zero cost: the very small CPU and memory cost of maintaining them is offloaded to the storage servers. To learn more, speak to an Oracle Exadata Sales Specialist today.


Data Protection

New ESG Research Highlights GDPR Data Management and Protection Challenges and How Oracle Engineered Systems May Help Customers Address Them

The European Union General Data Protection Regulation (GDPR) represents a broad new approach to customer privacy. GDPR currently applies to companies that hold or process personally identifiable information (PII) of individuals located in the European Union, but it represents a global trend that is already being implemented in other countries. These and similar new laws will have lasting effects on the way global corporations do business. Regulatory compliance has affected organizations around the world for decades, and with our digital economy, IT is now at the center of the effort. Compliance isn’t easy when access, retention, and deletion of data throughout an enterprise are involved. Indeed, ESG has determined that 65% of organizations that have been subject to regulatory agency audits have failed part of one at least once in the past five years due to issues with data access or retention. Past audits, increasing stakeholder pressure, and new data protection regulations are leading to new concerns for IT managers and their teams.

GDPR touches on many different aspects of how an enterprise manages PII, including:

Personal consent and data management: Since GDPR took effect, businesses must in certain instances obtain their clients’ expressed permission via “opt-in” before logging any data. When requesting consent, firms must outline the purpose for which the data will be collected, and they may need to seek additional consent to share information with third parties. This change in regulation means many businesses must reexamine their CRM and database management systems to ensure they are maintaining the required records in the proper ways. For instance, are a minimum number of data copies being retained for a minimum amount of time, and are all forms of personally identifiable information, including pictures and videos, being anonymized through encryption or other means?
Data access and the right to be forgotten: GDPR gives consumers significant control over their private data, including the right to access, review, and correct it on demand. Under certain circumstances, consumers can similarly request the removal of their personal information, a process known as the right to be forgotten.

Data breaches and notifications: GDPR ups the ante significantly in the case of a data breach. Data controllers must report to the relevant data protection authority, within 72 hours, breaches that are likely to result in a risk to people’s rights and freedoms, and must provide details regarding the nature and size of the breach. Additionally, if a breach is likely to result in a high risk to the rights and freedoms of individuals, the GDPR says data controllers must inform those concerned directly and without undue delay. For serious violations, companies may be fined amounts up to the greater of 10 million Euros or 2% of their global turnover.

Processors and vendor management: Enterprises are increasingly relying on outsourced development and support functions, so the private consumer data they maintain is often accessed by external vendors. Whenever a data controller uses a data processor to process personal data on their behalf, a written contract needs to be in place between the parties. Such contracts ensure both parties understand their obligations, responsibilities, and liabilities. Similarly, non-EU organizations working in collaboration with companies serving EU citizens need to ensure adequate contractual terms and safeguards while sharing data across borders.

How Oracle Engineered Systems May Help Customers Meet GDPR Requirements

There is no “silver bullet” for meeting GDPR requirements.
An organization’s internal processes will have as much or more impact on its ability to become GDPR compliant than the hardware and software it uses to process and protect its data. However, software and hardware can play a beneficial role in supporting an organization’s compliance efforts. The Enterprise Strategy Group (ESG), a leading IT analyst, research, validation, and strategy firm, has authored a report that looks at how the combination of Oracle Database, Oracle software, and Oracle Engineered Systems, specifically Exadata and Recovery Appliance, may help customers meet GDPR and similar data protection compliance requirements. ESG examined how the combined capabilities of these software and hardware products may help customers develop and maintain internal processes that simplify their efforts to meet GDPR compliance requirements. Engineered Systems work together to deliver greater efficiency and flexibility to production and data protection environments alike, giving customers powerful tools that can be used to strengthen their compliance efforts. While a significant portion of GDPR compliance involves improving business processes and ensuring broad participation across an organization, the data-centric nature of GDPR makes it imperative to look at mission-critical databases, because many of them contain PII. The ten ways ESG identified in which Oracle Engineered Systems may help customers create and maintain their compliance processes are:

- Data Discovery
- Data Minimization
- Data Deletion
- Data Masking/Anonymization
- Encryption/Security
- Access Control
- Monitoring/Reporting
- Continuous Protection
- Integrity Checking
- Recoverability

To learn more about how Engineered Systems can help you meet GDPR and other data regulation requirements, read the full ESG report.
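As a footnote to the anonymization point raised earlier, masking PII fields so records can still be processed without exposing identities can be sketched as keyed pseudonymization. This sketch is purely illustrative; it is not a feature of the ESG report or of any Oracle product, the field names and key handling are assumptions, and under GDPR keyed hashing counts as pseudonymization rather than full anonymization, since a key holder could re-link records.

```python
import hmac
import hashlib

# Illustrative PII pseudonymization via keyed hashing (HMAC-SHA256).
# Assumption: the secret key is stored outside the dataset (e.g. in a vault)
# and rotated per policy. Field names below are hypothetical.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"
PII_FIELDS = {"name", "email", "phone"}

def pseudonymize(record):
    """Replace PII field values with stable tokens; leave other fields intact."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]   # stable token, usable for joins
        else:
            out[field] = value
    return out
```

Because the same input always yields the same token, pseudonymized datasets can still be joined and analyzed, while raw identities stay out of downstream systems.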


Data Protection

Wikibon Reports PBBA Operating Costs are 68% Higher than Oracle’s Recovery Appliance

Leading tech influencer Dave Vellante, Chief Research Officer at Wikibon, recently published an enlightening new research report comparing Oracle’s Recovery Appliance with traditional Purpose-Built Backup Appliances (PBBAs). The analysis, titled “Oracle’s Recovery Appliance Reduces Complexity Through Automation,” found that Oracle’s Recovery Appliance helped customers reduce complexity and improve both Total Cost of Ownership (TCO) and enterprise value.

Traditionally, the best practice for mission-critical Oracle Database backup and recovery was to use storage-led PBBAs, such as Dell EMC Data Domain, integrated with Oracle Recovery Manager. However, this approach remains a batch process that involves many dozens of complicated steps for backups and even more steps for recovery, which can prolong the backup and recovery processes as well as cause errors leading to backup and recovery failures.

Oracle’s Recovery Appliance customers report that TCO and downtime costs—lost revenue due to database or application downtime—are significantly reduced due to the simplification and automation of the backup and recovery processes. The Wikibon analysis estimates that over four years, an enterprise with $5 billion in revenue can potentially reduce its TCO by $3.4M and see a positive business impact of $370M. Wikibon’s findings indicate that operational costs are 68% higher for PBBAs such as Data Domain relative to Oracle’s Zero Data Loss Recovery Appliance (ZDLRA) for a typical Global 2000 enterprise running Oracle Databases.

Bottom Line

Wikibon has exposed what Oracle clients have known all along: choosing Oracle’s Recovery Appliance results in higher efficiency through automation, an overall reduced TCO, and a positive impact on both an enterprise’s top and bottom line.

Read the full report

Discover more about Oracle’s Recovery Appliance


Engineered Systems

Fast Food, Fast Data: Havi is Feeding QSR’s Massive Data Appetite with Cloud-Ready Technology

Quick-service restaurants (QSRs) have always focused on speed, value, and convenience for their competitive advantage, but recent trends have made that mission exponentially more complex for companies in this $539 billion global industry. Consumers increasingly demand greater choice, more customization, and a more personalized marketing experience. They want the ability to order, plan delivery, and pay on their mobile devices. In fact, 25% of consumers report that the availability of tech figures into their decision of whether to visit a specific QSR location. As a global company providing marketing analytics, supply chain management, packaging services, and logistics for leading brands in food service, HAVI Global Solutions may be a behind-the-scenes player in the QSR arena, but it is on the front lines of technology-driven innovation. For a very large global QSR and one of its customers, HAVI computes 5.8 billion supply forecasts every day, down to the individual ingredient level, for 24,000 restaurants across the globe. With data points and locations continuing to grow, HAVI’s on-premises infrastructure was reaching capacity. “Traditional build-your-own IT hardware infrastructure stacks were not helping us with all our problems,” says Arti Deshpande, Director, Global Data Services at HAVI. “We were always bound by the traditional stack: storage, network, compute—and when our workload is mainly IO bound, that traditional stack was not helping us with all our problems.”

Ensuring the Right Product at the Right Time

“In the QSR business, if you don’t have the right food in the restaurant at the right time, it’s very difficult to meet customer expectations,” says Marc Flood, CIO at HAVI. When Flood joined HAVI as the company’s first global corporate CIO in 2013, he found a complex IT infrastructure environment across multiple data centers and colocation providers.
“I wanted to establish a common backbone with a partner that would work with our cloud-first strategy,” he recalls. Ultimately, Flood chose to consolidate data operations for their ERP solutions—NetSuite and JD Edwards—onto Oracle Exadata Database Machine running in the primary data centers and DR in five Equinix data centers around the globe. HAVI chose Equinix not only for its global footprint that closely matched its own, but also because of its dedicated interconnection with the Oracle Cloud via Oracle FastConnect. “One of the crucial capabilities we sought was the ability to leverage Oracle’s cloud solutions to complement our on-premises solution,” he says. “The cross-connect quality is incredible; the latency on the cross-connect is very low.” HAVI consolidated 34 databases onto two racks of Exadata Database Machine X6-2, resulting in 25% to 35% performance gains versus the previous HP infrastructure. Exadata met HAVI’s requirement for elastic scalability without performance degradation to stay ahead of its QSR client’s projected worldwide growth.

Streamlining Disaster Recovery Without Sacrificing Speed

When it came to re-examining the company’s disaster recovery (DR) strategy, HAVI determined that it would need its DR system to achieve 75% to 80% of native performance. “It is essential that we be able to continue to forecast regardless of whether we would have an event in our primary data center while also keeping costs under control,” Flood says. “That means meeting our DR requirements in the right way, establishing appropriate RPOs (recovery point objectives) and RTOs (recovery time objectives) while being able to maintain capability and the cost model in alignment with our clients’ expectations.” To meet these criteria, HAVI worked with Oracle to create a DR solution using the Oracle Cloud to offload the huge overhead required for the DR system from the primary database servers.
The solution not only resulted in a cost savings of approximately 35%, but also exceeded performance requirements. “Almost 95% of our workload ran at 100% performance of Exadata, of which 60% actually ran 200% faster,” Deshpande says. Flood and Deshpande were impressed with the speed with which the custom solution could be developed and implemented. “It was a very fast process—a great example of partnering and then moving quickly from proof of concept (POC) into live production,” Flood says. Together, Oracle and HAVI ran eight POCs over three months and fully deployed the system over the course of another three months.

Preparing for the Future with Cloud-Ready Infrastructure

QSR is hardly the only industry experiencing change, thanks to the proliferation of data. Finance, ecommerce, and healthcare are just some of the other industries evolving as companies learn how to mine the data deluge for competitive advantage. For HAVI, migrating to a cloud-ready environment means removing the barriers to growth for itself and its customers. “We were able to grow the service that we provide without experiencing any reduction in performance to our customer, and we’re able to assure them of continuous service at the level they expect,” Flood concludes. Learn more about how Oracle Exadata and cloud-ready engineered systems can enable your company to scale and innovate your competitive advantages. Subscribe below if you enjoyed this blog and want to learn more about the latest IT infrastructure news.


5 Exciting Moments at Oracle OpenWorld

OpenWorld is an exciting conference with excellent networking and information on how emerging technologies are affecting the IT industry. With over 2000 sessions and events, OpenWorld has a lot to offer. We know that not everyone can make it to the conference in person. If you didn’t get a chance to attend, here are the top 5 exciting things that happened at OpenWorld.

1. Exadata Customers Uncover Their Keys to Success

One of the most insightful moments at Oracle OpenWorld 2018 was listening to Exadata customers uncover amazing performance improvements and better business results that have helped them develop a competitive edge in the market. Wells Fargo and Halliburton both shared their significant cost savings as well as operational benefits from consolidating their hardware and software onto Oracle Exadata in this session. David Sivick, technology initiatives manager at Wells Fargo, shared how they leveraged 70 racks of Exadata to replace several thousand Dell servers. Sivick said that the company has “realized a multi-million dollar a year saving…There’s a 78% improvement in wait times, 30% improvement on batch, 36% reduction in space from compression and an overall application speed improvement of 33%.” (Source: Diginomica). Shane Miller, senior director of IT at Halliburton, also explained that he experienced significant cost savings and business results. For instance, Shane mentioned that with Exadata, “we saw a 25% reduction in the time it takes to close at the end of the month… We saw load times from 6 hours to like 15 minutes.” (Source: Diginomica).

2. Constellation Research and Key Cloud at Customer Customers Share Stories About Innovation

In the 2 years since the Cloud at Customer portfolio was announced, customers have seen significant innovation with their Cloud deployments. As an example, Sentry Data Systems’ Tim Lantz shared how Exadata Cloud at Customer allows them to have their cake and eat it too.
Kingold Group’s Steven Chang shared how important data sovereignty is with their digital transformation with Exadata Cloud at Customer. And other customers in other sessions, including Dialog Semiconductor, Galeries Lafayette, Quest Diagnostics, and more shared their stories at OpenWorld. To learn more, read Jessica Twentyman’s article in Diginomica.

3. Oracle Database Appliance Customers Shared How They Maximize Availability

During the Oracle OpenWorld customer panel, we heard how Oracle Database customers are driving better outcomes with Oracle Database Appliance versus traditional methods of building and deploying IT infrastructure. We covered the business value, and customer perspectives on how Oracle Database Appliance has delivered real value for their Oracle software investments while simplifying the life of IT without additional costs. Our special guests operate in education, mining, finance, and real estate development. One of the main topics was using a multi-vendor approach vs. an engineered system. As DBAs managing day-to-day operations, many faced performance and diagnostic issues, and a multi-vendor solution was not helping. With ODA they can manage the entire box, which provides easy patching with one single patch that does it all. David Bloyd, Nova Southeastern University, stated: “In the past, we would take our old production SUN SPARC server that was out of warranty to be our dev/test/stage environment when purchasing a new production server to save money. Now we can test our ODA patches on the same software and hardware as our production environment by having the same ODAs for both environments.” Furthermore, our panelists expressed the need to have 24x7 availability with no downtime. Konstantin Kerekovski, Raymond James, stated: “The ODA HA model is key because being in financial services you cannot go down; high availability is key. We have two setups; in Dev we are using RAC One Node.
And also for DR, we can consolidate many databases on one ODA. In production, we have two instances of RAC running on ODA compute nodes, so no downtime.” As we approach the latest generation of Oracle Database Appliance, we are seeing further increases in performance, security, and reliability. Rui Saraiva, KGHM International, stated: “With the latest implementation of the ODA X7 we were able to significantly increase application performance and thus improve business efficiencies by saving time to the business users when they run reports or execute their business processes.” Are you considering Oracle Database Appliance to run your Oracle Database and Applications? Check out this blog by Jérôme Dubar, dbi services, on “5 mistakes you should avoid with Oracle Database Appliance.”

4. Oracle Products Demo Floor

The OpenWorld demo grounds featured 100+ Oracle Product Managers explaining the technical details of each product. This is an excellent opportunity to learn how to get the most out of your Oracle investments from the person who designed the product! In case you missed it, here is a video showing the exciting things that were happening at the Exadata demo booth:

5. Oracle CloudFest Concert

Oracle hosted a private party exclusively for customers! This intimate concert featured Beck, Portugal. The Man, and Bleachers. Guests enjoyed a night out at the ballpark with free food, drinks, entertainment, and networking. Overall, the Exadata experience at Oracle OpenWorld was amazing. To learn more, check out the new Exadata System Software 19.1 release, which serves as the foundation for the Autonomous Database.


Engineered Systems

Top 5 Must See Exadata & Recovery Appliance Sessions at Oracle OpenWorld

Are you feeling butterflies in your stomach yet? Oracle OpenWorld 2018 is around the corner and we want to make sure you’re able to maximize your time at the event from October 22-25. So, we’ve decided to give you a personal guide to the top five sessions that you should attend while exploring Oracle Exadata and Oracle Zero Data Loss Recovery Appliance (ZDLRA). What’s more, you can attend all of the key Exadata sessions by checking out this Exadata Focus-On-Document which highlights the top: customer case study sessions, product overview sessions, business use case sessions, product roadmap sessions, product training sessions, and the key tips and tricks sessions. As you can imagine, Oracle has some exciting innovations in store for Exadata across the Exadata on-prem, Exadata Cloud at Customer, and Exadata Cloud Service consumption models. You also need to check out the interesting and latest developments happening on Oracle ZDLRA. So, we’ve recommended the five key sessions below on Exadata and ZDLRA to make it easier for you to navigate through the event.

Top 5 Exadata and ZDLRA sessions that you can’t miss while at OpenWorld:

Monday Sessions: Exadata Strategy & Roadmap

1. Oracle Exadata: Strategy and Roadmap for New Technologies, Cloud, and On-Premises
Speaker: Juan Loaiza, Senior VP at Oracle
When: Monday 10/22, 9:00-9:45 am
Where: Moscone West - Room 3008

Many companies struggle to accelerate their online transaction processing and analytics efforts, so they face faltering business performance. Sound familiar? This session is a perfect gateway to understanding how Exadata can help erase this problem and power faster processing of database workloads while minimizing costs. In this session, Oracle’s Senior VP, Juan Loaiza, will explain how Oracle’s Exadata architecture is being transformed to provide exciting cloud and in-memory capabilities that power both online transaction processing (OLTP) and analytics.
Juan will uncover how Exadata uses remote direct memory access, dynamic random-access memory, nonvolatile memory, and vector processing to overcome common IT challenges. Most importantly, Juan will give an overview of current and future Exadata capabilities, including disruptive in-memory, public cloud, and Oracle Cloud at Customer technologies. Customers like Starwood Hotels & Resorts Worldwide, Inc. have used the key Exadata capabilities to improve their business. For instance, they have been able to quickly retrieve information about things like customer loyalty, central reservations, and rate-plan reports for efficient hotel management. With Exadata, they can run critical daily operating reports such as booking pace, forecasting, arrivals reporting, and yield management to serve their guests better. Check out this session to see how Exadata helps customers like Starwood Hotels gain these results.

Customer Panel on Exadata & Tips to Migrate to the Cloud

2. Exadata Emerges as a Key Element of Six Journeys to the Cloud: Customer Stories
Speakers: David Sivick, Technology Initiatives Manager, Wells Fargo; Claude Robinson III, Sr. Director Product Marketing, Oracle; Shane Miller, Halliburton
When: Monday, 10/22, 9:00-9:45 am
Where: Moscone South - Room 215

Every company today is trying to build a cloud strategy and make a seamless migration to the cloud without impacting their current, on-premises IT systems. This is a pretty challenging feat and hard to accomplish in a multi-vendor environment. The good news is that Oracle has helped more than 25,000 companies transition to the cloud. For these large multinational customers, their journey to the cloud began years ago with Oracle Exadata as a cornerstone. They’ve modernized by ditching commodity hardware for massive database consolidation, saving millions in Oracle Database licensing, and improving the safety and soundness of their data.
Wells Fargo’s Technology Initiatives Manager, David Sivick, and Halliburton’s Shane Miller have experienced such transformations. And within the last few years, customers like David and Shane have started to consume Exadata in more flexible ways in their digital transformation drive. Wells Fargo’s David Sivick and Halliburton’s Shane Miller will sit down with Oracle’s Sr. Director Product Marketing, Claude Robinson, to share their Exadata cloud journey stories around:
How they optimized their database infrastructure
How they successfully drove their application and database migration
How they achieved application development and data analytics goals

This interesting session will feature Wells Fargo’s and Halliburton’s stories and tips that you can use as you build a cloud strategy, as well as understand how Exadata can help you achieve this path to the cloud.

Tuesday Sessions: Customer Panel on Exadata, Big Data, & Disaster Recovery

3. Big Data and Disaster Recovery Infrastructure with Equinix and Oracle Exadata
Speakers: Claude Robinson III, Sr. Director Product Marketing, Oracle; Arti Deshpande, Director, Global Data Services, Havi Global Solutions; Robert Blackburn, Global Managing Director, Oracle Strategic Alliance, Equinix
When: Tuesday, 10/23, 3:45-4:30 pm
Where: Moscone South - Room 214

We think that some of the most powerful sessions are those that come from customers and partners who openly share their experiences, so you can relate to their challenges and see how they have achieved IT and business success. So, we picked this session, which uncovers how the Director of Global Data Services at Havi Global Solutions, Arti Deshpande, leveraged an Oracle offering to achieve Havi’s IT success. Arti will give you the inside scoop on how they were able to streamline disaster recovery in the Oracle Cloud without sacrificing speed and also consolidated dozens of databases onto Exadata to improve performance.
Beyond learning from Havi’s customer experience, you will also hear about the solution architecture created through Oracle’s and Equinix’s partnership. Equinix will share how it partnered with Oracle’s Engineered Systems and Oracle Cloud teams to create a distributed on-premises and cloud infrastructure. The company will reveal how they created an on-premises and cloud infrastructure that consists of a private, high-performance direct interconnection between the Oracle Exadata Database Machine solution and Oracle Cloud by using Oracle Cloud Infrastructure FastConnect on Equinix Cloud Exchange Fabric. Finally, Equinix will share how this combined solution bypasses the public internet, allowing for direct and secure exchange of data traffic between Oracle Exadata and Oracle Cloud services on Platform Equinix, the Equinix global interconnection platform.

Customer Panel on Exadata Cloud at Customer

4. Unleash the Power of Your Data with Oracle Exadata Cloud at Customer
Speakers: Vishal Mehta, Sr. Manager, Architecture, Quest Diagnostics; Maywun Wong, Director of Product Marketing, Cloud Business Group, Oracle; Jochen Hinderberger, Director IT Applications, Dialog Semiconductor; Cyril Charpentier, Database Manager, Galeries Lafayette
When: Tuesday, 10/23, 5:45-6:30 pm
Where: Moscone South - Room 214

If you’re looking for some more insight about Exadata, specifically Exadata Cloud at Customer, this is a great session to check out because it features first-hand experiences from customers using the Cloud at Customer consumption service and how it has impacted their businesses. In this interactive customer panel, IT and business leaders from Quest Diagnostics, Dialog Semiconductor, and Galeries Lafayette will discuss their business success with bringing the cloud into their own data center for their Oracle Database workloads, as well as answer your questions. Vishal Mehta, the Sr.
Manager, Architecture at Quest Diagnostics, will share how they consolidated dozens of database servers onto Exadata and freed up many of their admins to drive more strategic tasks. By using Exadata Cloud at Customer, they were able to standardize their database services and configurations to yield benefits across many dimensions. Jochen Hinderberger, the Director of IT Applications at Dialog Semiconductor, will share the company’s decision to select Exadata Cloud at Customer because it had the capacity and performance needed to support their highly demanding tasks, which included collecting and analyzing complex data to assure product quality for semiconductors and integrated circuits. Cyril Charpentier, the Database Manager at Galeries Lafayette, will share their story around selecting Exadata Cloud at Customer to gain the cloud-like capabilities of agility and flexibility while improving their database performance. The customer will also discuss how Exadata Cloud at Customer has helped them offload tedious management and monitoring tasks while focusing on the real needs of the business. By attending this session, you’ll get an idea of how Oracle’s Database enterprise customers use Oracle Exadata Cloud at Customer as part of their digital transformation strategy. This is a perfect session to learn how these customers harnessed their data and the benefits of a public cloud within their own data center behind their firewall to improve business performance.

Wednesday Session: ZDLRA Architectural Overview and Tips

5. Zero Data Loss Recovery Appliance: Insider’s Guide to Architecture and Practices
Speakers: Jony Safi, MAA Senior Manager, Oracle; Tim Chien, Director of Product Management, Oracle; Stefan Reiners, DBA, METRO-nom GmbH
When: Wednesday, 10/24, 4:45-5:30 pm
Where: Moscone West - Room 3007

What keeps you up at night when it comes to IT challenges? Security and downtime, no doubt.
It is incredibly difficult to improve database performance while making sure the infrastructure is immune to security attacks, database downtime, and performance problems. The good news is that we think long and hard about these challenges at Oracle and have a solution to address these issues. In this session, you will learn how to mitigate the problems of data loss and improve data recovery for your database workloads with Zero Data Loss Recovery Appliance so you avoid problems around downtime and security. You will learn how Zero Data Loss Recovery Appliance (ZDLRA) is an industry-innovating, cloud-scale database protection system that hundreds of customers have deployed globally. ZDLRA’s benefits are unparalleled when compared to other backup solutions in the market today, and you will get a chance to learn how this is the case. Jony, Tim, and Stefan will share how this offering eliminates data loss and backup windows, provides database recoverability validation, and ensures real-time monitoring of enterprise-wide data protection. Attend this session to get an insider’s look at the system architecture and hear the latest practices around management, monitoring, high availability, and disaster recovery. This is a perfect session for you to learn tips and tricks for backing up to and restoring from the Recovery Appliance. After this session, you’ll be able to walk away and implement these practices at your organization to fulfill database-critical service level agreements.

Other Sessions You’ll Really Want to Check Out:

That’s it! Those are the top five sessions that you don’t want to miss while attending Oracle OpenWorld this year. However, keep in mind that if you want a deeper exploration of Oracle Exadata and Oracle Zero Data Loss Recovery Appliance, you should check out these additional sessions. Here are three more sessions you should look into and use for brownie points.
Maximum Availability Architecture

1. Oracle Exadata: Maximum Availability Best Practices and Recommendations
Speakers: Michael Nowak, MAA Solutions Architect, Oracle; Manish Upadhyay, DBA, FIS Global
When: Tuesday, 10/23, 5:45-6:30 pm
Where: Moscone West - Room 3008

Exadata Technical Deep Dive & Architecture

2. Oracle Exadata: Architecture and Internals Technical Deep Dive
Speakers: Gurmeet Goindi, Technical Product Strategist, Oracle; Kodi Umamageswaran, Vice President, Exadata Development, Oracle
When: Monday, 10/22, 4:45-5:30 pm
Where: Moscone West - Room 3008

Exadata Cloud Service

3. Oracle Database Exadata Cloud Service: From Provisioning to Migration
Speakers: Nitin Vengurlekar, CTO-Architect-Service Delivery-Cloud Evangelist, Viscosity North America; Brian Spendolini, Product Manager, Oracle; Charles Lin, System Database Administrator, Beeline
When: Thursday, 10/25, 10:00-10:45 am
Where: Moscone West - Room 3008



Oracle Exadata: Deep Engineering Delivers Extreme Performance

In my previous post, "Yes, Database Performance Matters", I talked about those I met at Collaborate, and how almost everyone believed Oracle Exadata performance is impressive.  However, every now and then I run into someone who agrees Exadata performance is impressive, but also believes they can achieve this with a build-your-own solution.  I think on that one, I have to disagree... There are a great many performance-enhancing features, not just bolted on, but deeply engineered into Exadata.  Some provide larger impact than others, but collectively they are the secret sauce that makes Exadata deliver extreme performance.  Let’s start with its scale-out architecture.  As you add additional compute servers and storage servers, you grow the overall CPU, IO, storage, and network capacity of the machine.  As you grow a machine from the smallest 1/8th rack to the largest multi-rack configuration, performance scales linearly.  Key to scaling compute nodes is Oracle Real Application Clusters (RAC).  This allows a single database workload to scale across multiple servers. While RAC is not unique to Exadata, a great deal of performance enhancement work has been done on RAC’s communication protocols specifically for Exadata, making Exadata the most efficient platform for scaling RAC across server nodes. Servers are connected using a high-bandwidth, low-latency 40 Gb per second InfiniBand network.  Exadata runs specialized database networking protocols using Remote Direct Memory Access (RDMA) to take full advantage of this infrastructure, providing much lower latency and higher bandwidth than possible if you tried this in a build-your-own environment.  Exadata also understands the importance of the traffic on the network, and can prioritize important packets.  This, of course, has a direct impact on the overall performance of the databases running on the machine. It’s common knowledge that IO is often the bottleneck in a database system.  Exadata has impressive IO capabilities.
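The traffic-prioritization idea described above can be sketched as a simple priority queue. To be clear, this is a toy model for intuition only, not Exadata's actual protocol; the message classes and class names are invented for the illustration:

```python
import heapq

# Toy model of database-aware network prioritization: latency-critical
# messages (e.g., redo log writes) are delivered before bulk traffic
# (e.g., backups, large scans), regardless of arrival order.
PRIORITY = {"log_write": 0, "cache_fusion": 1, "scan": 2, "backup": 3}

class PriorityLink:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def send(self, msg_type, payload):
        heapq.heappush(self._queue, (PRIORITY[msg_type], self._seq, msg_type, payload))
        self._seq += 1

    def deliver(self):
        _, _, msg_type, payload = heapq.heappop(self._queue)
        return msg_type, payload

link = PriorityLink()
link.send("backup", "chunk-1")
link.send("scan", "block-7")
link.send("log_write", "commit-42")   # arrives last, delivered first
print([link.deliver()[0] for _ in range(3)])
# ['log_write', 'scan', 'backup']
```

The point of the sketch is simply that a commit-critical message never queues behind bulk transfers, which is why prioritization directly lowers transaction latency.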
I’m not going to overwhelm you with numbers, but if you are curious, check out the Exadata data sheet for a full set of specifications.  More interesting is how Exadata provides extreme IO.  The most obvious technique is to use plenty of flash memory.  Exadata storage cells can be fully loaded with NVMe flash, providing extreme IOPS and throughput for any database read or write operation.  This flash is placed directly on the PCI bus, not behind bottlenecking storage controllers.  Perhaps surprisingly, most customers do not opt for all-flash storage.  Rather, they choose a lesser (read that as less expensive) flash configuration backed by high-capacity HDDs.  The flash provides an intelligent cache, buffering most latency-sensitive IO operations.  The net result is the storage economics of HDDs, with the effective performance of NVMe flash. You might be wondering how flash can be a differentiator for Exadata.  After all, many vendors sell all-flash arrays, or front-end caches in front of HDDs.  The key is understanding the database workload.  Only Exadata understands the difference between a latency-sensitive write of a commit record to a redo log, and an asynchronous database file update.  Exadata knows to cache database blocks that are very likely to be read or updated repeatedly, but not to cache IO from a database backup or large table scan that will never be re-read.  Exadata provides special handling for log writes using a unique algorithm that reduces the latency of these critical writes and avoids the latency spikes common in other flash solutions.  Exadata can even store cached data in an optimized columnar format, to speed processing on analytical operations that need only access a subset of columns.  These features require the storage server to work in concert with the database server, something no generic storage array can do.   Flash is fast, but there is only so much you can solve with flash.
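The database-aware caching policy described above, cache hot OLTP blocks but keep scans and backups from flushing them, can be sketched in a few lines. Again, this is a simplified illustration, not Oracle's actual caching algorithm; the class name and IO-type labels are invented for the example:

```python
from collections import OrderedDict

class FlashCacheSketch:
    """Toy model of a database-aware flash cache: hot OLTP blocks are
    cached, while large scans and backup reads bypass the cache so they
    cannot evict latency-sensitive data."""

    # IO types that should never pollute the cache
    BYPASS = {"table_scan", "backup"}

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, in LRU order
        self.hits = self.misses = self.bypassed = 0

    def read(self, block_id, io_type, read_from_disk):
        if io_type in self.BYPASS:
            self.bypassed += 1
            return read_from_disk(block_id)   # go straight to disk
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)  # refresh LRU position
            return self.cache[block_id]
        self.misses += 1
        data = read_from_disk(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

disk = lambda b: f"data-{b}"
fc = FlashCacheSketch(capacity_blocks=2)
fc.read(1, "oltp_read", disk)      # miss, block is cached
fc.read(1, "oltp_read", disk)      # hit
for b in range(100, 110):
    fc.read(b, "table_scan", disk) # bypasses the cache entirely
print(fc.hits, fc.misses, fc.bypassed)  # 1 1 10
```

Note that after the ten scan reads, block 1 is still cached: a generic LRU cache without the bypass rule would have evicted it, which is the whole argument for making the cache workload-aware.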
You still need to get the data from the storage to the database instance, and storage interconnect technologies have not kept up with the rapid rise in the database server’s ability to consume data.  To eliminate the interconnect as a potential bottleneck, Exadata takes advantage of its unique Smart Scan technology to offload data-intensive SQL operations from the database servers directly to the storage servers.  This parallel data filtering and processing dramatically reduces the amount of data that needs to be returned to the database servers, correspondingly increasing the overall effective IO and processing capabilities of the system.  Exadata’s intelligent storage further improves processing by tracking summary information for data stored in regions of each storage cell.  Using this information, the storage cell can determine whether relevant data may even exist in a region of storage, avoiding unnecessarily reading and filtering that data.  These fast in-memory lookups eliminate large numbers of slow HDD IO operations, dramatically speeding database operations.  While you can run the Oracle database on many different platforms, not all features are available on all platforms.  When run on Exadata, Oracle database supports Hybrid Columnar Compression (HCC), which stores data in an optimized combination of row and columnar methods, yielding the compression benefits of columnar storage, while avoiding the performance issues typically associated with columnar storage.  While compression reduces disk IO, it traditionally hurts performance as substantial CPU is consumed with decompression.  Exadata offloads that work to the storage cells, and once you account for the savings in IO, most analytic workloads run faster with HCC than without. Perhaps there is no better testimonial to Exadata’s performance than real-world examples.  Four of the top five banks, telcos, and retailers run on Exadata. For example, Target consolidated databases from over 350 systems onto Exadata.
They now enjoy a 300% performance improvement and 5x faster batch and SQL processing.  This has enabled them to extend their ship from store option for Target.com to over 1000 stores, allowing customers to get their orders sooner than before.  I’ve really just breezed over 10 years of performance advancements.  Those interested can find more detail in the Exadata data sheet.  Hopefully, you see it would be impossible to get the same performance from a self-built Exadata or similar system.  In the case of database performance, only deep engineering can deliver extreme performance. This is the third blog in a series of blog posts celebrating the 10th anniversary of the introduction of Oracle Exadata.  Our next post, "Oracle Exadata Availability," will focus on high availability. About the Author   Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience working in the Information Technology industry. With experience in both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Implementing a Private Cloud with Oracle SuperCluster

Oracle SuperCluster is an integrated server, storage, networking, and software platform that is typically used either for full-stack application deployments or for consolidation of applications or databases. Because it incorporates Oracle's unique and innovative Exadata Storage, Oracle SuperCluster delivers unrivaled database performance. The platform also hosts the huge range of Oracle and third-party applications supported on Oracle's proven, robust, and secure Oracle Solaris operating environment. Virtualization is a particular strength of Oracle SuperCluster, with Oracle VM Server for SPARC serving up high-performance virtual machines, known as I/O domains, with zero or near-zero virtualization overhead. An additional layer of highly optimized nested virtualization is offered in the form of Oracle Solaris Zones. All of these virtualization capabilities come at no additional license cost. For more information about virtualization on Oracle SuperCluster, refer to the recent blog Is "Zero-Overhead Virtualization" Just Hype? The platform also utilizes a built-in, high-throughput, low-latency InfiniBand fabric for extreme network efficiency within the rack. As a result, Oracle SuperCluster customers enjoy outstanding end-to-end database and application performance, along with the simplicity and supportability featured on all of Oracle's engineered systems. Can these benefits be realized in a cloud environment, though? Oracle SuperCluster is not available in Oracle's Cloud Infrastructure, but private cloud deployments have been implemented by a number of Oracle SuperCluster customers, and Oracle Managed Cloud Services also hosts many Oracle SuperCluster racks in its data centers worldwide. In this blog we will consider the building blocks provided by Oracle to simplify deployments of this type on Oracle SuperCluster.
An Introduction to Infrastructure-as-a-Service (IaaS)

In the past, provisioning new compute environments consumed considerable time and effort. All of that has changed with Infrastructure-as-a-Service capabilities in the cloud. Some of the key attractions of cloud environments for provisioning include:

Improved time to value. The period of time that usually elapses before value is realized from a deployment is considerably reduced. Highly capable virtual machines are typically deployed and ready to use almost immediately.

Greater simplicity. Specialized IT skills are no longer required to deploy a virtual machine that encompasses a complete working set of compute, storage, and network resources.

Better scalability. Provisioning ten virtual machines requires little more effort than provisioning a single virtual machine.

IaaS environments typically include the following characteristics:

User interfaces are simple and intuitive. Actions are typically either achieved with a few clicks from a browser user interface (BUI), or automated using a REST interface.
Virtual machines can be created without sysadmin intervention and without the need to understand the underlying hardware, software, or network architecture.
Newly created virtual machines boot with a fully configured operating system, active networks, and pre-provisioned storage.
Virtual machine components are drawn from pools of resources, typically including CPU, memory, network interfaces, storage resources, IP addresses, and virtual local area networks (VLANs).
Virtual machines can be resized or migrated from one physical server to another as the need arises, without manual sysadmin intervention.
Resource usage can be accounted to specific end users and optionally tracked for billing purposes; where costs need to be charged to an end user, the actual resources allocated can be used as the basis for charging.
Resource usage may also be optionally restricted per user. The end user is responsible for managing and patching operating systems and applications, but not for managing the underlying cloud infrastructure.

Oracle SuperCluster IaaS

The virtual machine lifecycle on Oracle SuperCluster is orchestrated by the SuperCluster Virtual Assistant (SVA), a browser-based tool that supports the creation, modification, and deletion of domain-based virtual machines, known as I/O domains. Functionality has progressively been added to this tool over the years, and it has now become a single solution for dynamically deploying and managing virtual machines on SuperCluster, including both I/O domains and database-oriented Oracle Solaris Zones. SVA is a robust tool that is widely used by SuperCluster customers across a range of different environments. The current SuperCluster Virtual Assistant v2.6 release offers a set of capabilities consistent with those outlined in the IaaS introduction above. As an alternative to SVA's intuitive browser user interface, SVA's IaaS functionality on Oracle SuperCluster can be managed from other orchestration software using the provided REST interfaces. The SVA REST APIs are self-documenting, and therefore easier to consume, thanks to the included Swagger UI.

SuperCluster Virtual Assistant in Action

The following screenshot shows an initial window from the tool listing I/O domains in a range of different states. Both physical domains and I/O domains (virtual machines) are managed, along with their component resources. New I/O domains can be created, and existing I/O domains modified or deleted, with additional cores and memory able to be added dynamically to live I/O domains. Database Zones based on Oracle Solaris can also be managed from the tool, and a future SVA release will allow Oracle Solaris Zones of all types to be managed.
I/O domains can be frozen at any time to release their resources, and thawed (reactivated) whenever required. As well as providing a cold migration capability, the freeze/thaw capability allows resources used by non-critical I/O domains to be temporarily freed during peak periods for use by other mission-critical applications. Resources are assigned automatically from component pools that manage CPU, memory, network interfaces, IP addresses, and storage resources. VLANs and other network properties can be pre-defined, allowing access to DNS, NTP, and other services. An integrated resource allocation engine ensures that cores, memory, and network interfaces are optimally assigned for performance and effectiveness. Compute resources are allocated to I/O domains at a granularity of one core and 16GB of memory, or using pre-defined recipes. Network recipes can also be set up to simplify the allocation of network resources, including simultaneous redundant connectivity to different physical networks thanks to quad-port 10GbE adapters. Recipes are illustrated in the screenshot below.

A number of SVA policies can be set according to customer requirements. One set of policies relates to users. User roles are supported, allowing both privileged and non-privileged users to be created. A single SVA user can consume all resources. Alternatively, multiple SVA users can be created, with resource usage tracked by user. Resources can be unconstrained, allowing a user to consume any available resource, or limits can be set to ensure that no user consumes more than a pre-defined allowance. The screenshot below illustrates an early step in the process of creating an I/O domain.

A comprehensive Health Monitor examines the state of SVA services to ensure that the tool and its resources remain in a consistent and healthy state. SVA functionality continues to be extended, with a number of new features currently under development.
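To make the allocation model concrete, here is a minimal sketch, in Python, of pool-based allocation with a one core / 16GB granularity and optional per-user limits. The class and field names are purely illustrative assumptions; they are not part of the actual SVA implementation or its REST API.

```python
# Toy model of pool-based VM resource allocation, loosely following the
# behavior described for the SuperCluster Virtual Assistant: resources come
# from shared pools, are granted in units of 1 core / 16 GB, and per-user
# quotas may cap consumption. All names here are illustrative.

CORE_GRANULARITY = 1
MEMORY_GRANULARITY_GB = 16

class ResourcePool:
    def __init__(self, cores, memory_gb):
        self.cores = cores
        self.memory_gb = memory_gb

class AllocationError(Exception):
    pass

class Allocator:
    def __init__(self, pool, user_limits=None):
        self.pool = pool
        self.user_limits = user_limits or {}   # user -> (max_cores, max_gb)
        self.usage = {}                        # user -> (cores, gb) consumed

    def allocate(self, user, cores, memory_gb):
        # Requests must respect the allocation granularity.
        if cores % CORE_GRANULARITY or memory_gb % MEMORY_GRANULARITY_GB:
            raise AllocationError("requests must be multiples of 1 core / 16 GB")
        used_c, used_m = self.usage.get(user, (0, 0))
        # Enforce any pre-defined per-user allowance.
        if user in self.user_limits:
            max_c, max_m = self.user_limits[user]
            if used_c + cores > max_c or used_m + memory_gb > max_m:
                raise AllocationError(f"per-user limit exceeded for {user}")
        # Draw from the shared pool.
        if cores > self.pool.cores or memory_gb > self.pool.memory_gb:
            raise AllocationError("pool exhausted")
        self.pool.cores -= cores
        self.pool.memory_gb -= memory_gb
        self.usage[user] = (used_c + cores, used_m + memory_gb)
        return {"user": user, "cores": cores, "memory_gb": memory_gb}
```

In this model, freezing a domain would simply return its cores and memory to the pool so another consumer can draw on them, and thawing would allocate them again.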
Oracle SuperCluster M8 and Oracle SuperCluster M7 customers are typically able to leverage new features simply by installing the latest quarterly patch bundle, which also upgrades the SVA version.

Enjoying the Benefits

Oracle SuperCluster customers can realize cloud benefits in their own data centers, taking advantage of improved time to value, greater simplicity, and better scalability, thanks to the Infrastructure-as-a-Service capabilities provided by the SuperCluster Virtual Assistant. Database-as-a-Service (DBaaS) capabilities can also be instantiated on Oracle SuperCluster using Oracle Enterprise Manager. The end result is that Oracle SuperCluster combines the proven benefits of Oracle engineered systems with IaaS and DBaaS capabilities, allowing customers to reduce complexity and increase return on investment.

About the Author

Allan Packer is a Senior Principal Software Engineer working in the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management; developed performance and regression testing tools; published several TPC industry-standard benchmarks as technical lead; and developed a systems/database training curriculum. He has published articles in industry magazines and presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.



Prescription for Long-Term Health: ODA Is Just What the Doctor Ordered

Healthcare providers face so many complex challenges, from a shortage of clinicians to serve an aging population that requires more care, to changing regulations, to evolving patient treatment and payment models. At the same time, these providers struggle to manage the ever-increasing amount of data being generated by electronic health records (EHRs). How can they focus on providing the best possible patient care while keeping costs tightly under control?

Data Drives the Modern Healthcare Organization

One of the most important steps is to manage the data that's the heartbeat of the organization. Data makes it possible to provide quality patient care, streamline operations, manage supply inventories, and build sound long-term organizational strategies, among other things. Perhaps today's most critical healthcare challenge—outside of the frontline clinician-patient encounter—is efficiently, securely, and affordably managing data. Clinicians need to be able to access patient data in real time, around the clock. In acute-care situations, they can't afford for systems to go down, or to lose data. Administrators need to ensure the security of patient information to protect privacy, meet regulations, and avoid fines and bad PR. Materials management requires systems to monitor critical supplies and keep them stocked at optimal levels to ensure availability, prevent waste, and reduce costs. Executives need real-time analytics to make day-to-day decisions, plan for the long term, and ensure patients continue to receive the best possible care while the industry experiences seemingly constant change and uncertainty. How do you implement innovative and life-saving procedures and technology, hire the best talent, and expand services without going bankrupt? It all comes down to balancing patient care with controlling costs.
Technology That Performs the Perfect Balancing Act

Healthcare organizations need to manage enormous quantities of data, but they don't always have the budget for top-of-the-line database solutions. Nor do they always have the resources required to manage these systems day in and day out. For many midsize healthcare providers, Oracle Database Appliance offers a realistic, affordable option that optimizes performance for Oracle Database. The completely integrated package of software, compute, networking, storage, and backup makes setup simple and fast. At the same time, it delivers the performance and the fully redundant high availability so critical to healthcare environments. And it's cloud-ready, so organizations can migrate to the cloud seamlessly. With all the uncertainty healthcare organizations operate under today, they need IT solutions that can adapt as their needs change. Oracle Database Appliance was designed with the flexibility to meet organizations' changing database requirements: compute capacity can be scaled up on demand to match workload growth.

Protecting Patient Data Must Take Top Priority

Because patient data is so critical to healthcare organizations, they must have reliable, secure backup. Oracle Database Appliance has an option that makes backup just as simple as system deployment and management. Healthcare organizations can back up to a local Oracle Database Appliance, or to the Oracle Cloud if they don't want to manually manage backups or maintain backup systems. In healthcare, protecting patient data has to be a top priority. The backup feature of Oracle Database Appliance offers end-to-end encryption and is specifically designed to include the archive capabilities needed to ensure compliance with the healthcare industry's stringent regulations. One Brazilian healthcare organization ended a two-year search for a solution when it found Oracle Database Appliance.
Santa Rosa Hospital Takes Good Care of Its Patients—and Its Data

Santa Rosa Hospital in Cuiaba, Brazil, needed a database system that could scale to match its rapid growth in patient procedures—and the accompanying growth in the hospital's data. Non-negotiable capabilities included improved performance, uninterrupted access to the database 24/7, a safe and efficient backup process, and expandable storage capacity. According to IT Manager Andre Carrion, Santa Rosa searched for two years but couldn't find a solution that fit its budget, until it found Oracle Database Appliance with cloud backup. The results were impressive:

Full access to the database even when a server crashed, and increased patient data security; systems now run on the virtual server in the cloud while the physical server is re-established.
Backup time reduced from 24 hours to 2 hours.
Time to retrieve patient information reduced from as much as 3 minutes to 2 seconds.
Average ER consultation time reduced from 15 minutes to 6 minutes.
10 servers replaced with 1.

As a bonus, everything was installed and ready to go in just a week. Oracle Database Appliance with easy cloud backup was just what the doctor ordered to support Santa Rosa's growing business without compromising the security of sensitive patient information or breaking the budget.


Engineered Systems

Improving ROI to Outweigh Potential Upgrade Disruption

Today's guest post is by Allan Packer, Senior Principal Software Engineer in the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle, with a focus on Oracle SuperCluster.

Hardware upgrades have always been supported on Oracle SuperCluster, but how flexible are they? And will any benefits be outweighed by the disruption to service when a production system is upgraded? Change is an ever-present reality for any enterprise. And with change comes an opportunity cost, unless IT infrastructure is flexible enough to satisfy the evolving demand for resources. From the very first release of Oracle SuperCluster, a key attraction of the platform has been the ability to upgrade the hardware as business needs change. Modifying hardware can be very disruptive: hardware configuration changes create a ripple effect that penetrates deep into the software layers of a system. For this reason, an important milestone in the upgrade landscape for both Oracle SuperCluster M8 and Oracle SuperCluster M7 has been the development of special-purpose tools to automate the upgrade steps. These tools reduce the downtime associated with an upgrade, and also minimize the opportunity for misconfiguration during what can be a complex operation.

CPU upgrades

Compute resources on both Oracle SuperCluster M8 and Oracle SuperCluster M7 are delivered in the form of CPU, Memory, and I/O Unit (CMIOU) boards. Each SPARC M8 and SPARC M7 chassis supports up to eight of these boards, organized into two electrically isolated Physical Domains (PDoms) hosting four boards each.

Each CMIOU board includes:
One processor with 32 cores—a SPARC M8 processor for Oracle SuperCluster M8, or a SPARC M7 processor for Oracle SuperCluster M7. Each core delivers 8 CPU hardware threads, so each processor presents 256 CPUs to the operating system.
Sixteen memory slots, fully populated with DIMMs.
Oracle SuperCluster M8 uses 64GB DIMMs, for a total of 1TB of memory. Oracle SuperCluster M7 uses 32GB DIMMs, for a total of 512GB of memory.
Three PCIe slots. One slot hosts an InfiniBand HCA, and another hosts a 10GbE NIC. In the case of Oracle SuperCluster M8, the 10GbE NIC is a quad-port device; Oracle SuperCluster M7 provides a dual-port NIC. The third PCIe slot is empty on all except the first CMIOU in each PDom, where it hosts a quad-port GbE NIC. Optional Fibre Channel HBAs can be placed in empty slots.

Adding CMIOU boards

CMIOU boards can be added to a PDom whenever more CPU and/or memory resource is required. Up to four CMIOU boards can be placed in each PDom. The diagram below illustrates a possible sequence of upgrades in a SPARC M8-8 chassis, from a quarter-populated configuration with two CMIOUs (one per PDom), to a half-populated configuration with four CMIOUs, to a fully populated configuration with eight CMIOUs. PDoms can be populated with as many CMIOUs as required—there is no requirement to use the same number of CMIOU boards in both PDoms of the same chassis. The illustration below shows two SPARC M8-8 chassis with different numbers of CMIOUs in each PDom.

Adding a second chassis

Many Oracle SuperCluster installations are initially configured with a single compute chassis. Every SPARC M8-8 and SPARC M7-8 chassis shipped with Oracle SuperCluster includes two electrically isolated PDoms, so highly available configurations begin with a single chassis. When the need for additional compute resources exceeds the capacity of a single chassis, a customer can add a second chassis with one or more CMIOUs, thereby allowing total compute resources to be increased by up to two times. Since each CMIOU board in the second chassis comes equipped with its own InfiniBand HCA, additional resources immediately become available on the InfiniBand fabric after the upgrade. Note that both SPARC M8-8 and SPARC M7-8 chassis consume ten rack units.
Provided no more than six Exadata Storage Servers have been added to an Oracle SuperCluster rack, sufficient space will be available to add a second chassis.

Memory upgrades

Where memory resources have become constrained, the simplest way to increase memory capacity is to add one or more additional CMIOU boards. Such upgrades come with the extra benefit of additional CPU resources as well as greater I/O connectivity. Note that exchanging existing memory DIMMs for higher-density DIMMs is not supported. Adding additional CMIOUs achieves a similar effect in a more cost-effective manner: the cost of a CMIOU populated with lower-density DIMMs, a SPARC processor, an InfiniBand HCA, and a 10GbE NIC compares favourably with just the cost of higher-density DIMMs.

Exadata storage upgrades

Exadata Storage Servers can be added to existing Oracle SuperCluster configurations. Even early Oracle SuperCluster platforms can benefit from the addition of current-model Exadata Storage Servers. Customers adding Exadata Storage quickly discover that both the performance and the available capacity of current Exadata Storage Servers far outstrip those of older models. Best practice information is available for such deployments, and should be followed to ensure effective integration of different storage server models into an existing Exadata Storage environment. Note that Oracle SuperCluster racks can host eleven Exadata Storage Servers with one SPARC M8-8 or SPARC M7-8 compute chassis, or six Exadata Storage Servers with two compute chassis. The graphic below illustrates an Oracle SuperCluster M8 rack before and after an upgrade that adds a second M8-8 chassis and three additional Exadata Storage Servers.

External storage upgrades

General-purpose storage capacity can be boosted by adding a suitably configured ZFS Storage Appliance that includes InfiniBand HCAs.
This storage can then be made available via the InfiniBand fabric and used for application storage, backups, and other purposes.

Implications for domain configurations

Additional compute resources can be assigned in a number of different ways:

Creating new root domains

Root domains provide the resources needed by I/O domains, which can be created on demand using the SuperCluster Virtual Assistant. I/O domains provide a flexible and secure form of virtualization at the domain level. Although they share I/O devices using the efficient SR-IOV standard, each I/O domain has its own dedicated CPU and memory resources. Oracle Solaris Zones are also supported in I/O domains, providing nested virtualization.
A one-to-one relationship exists between CMIOU boards and root domains, which means that a root domain can be created for each new CMIOU that is added. Each root domain supports up to sixteen additional I/O domains.
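Taken together, those two relationships give a simple upper bound on virtual machine capacity. A quick back-of-the-envelope check (illustrative arithmetic only, not an official sizing guide; memory and other constraints may cap the practical count lower):

```python
# Upper bound on I/O domains, from the relationships stated above:
# one root domain per CMIOU board, up to 16 I/O domains per root domain.
IO_DOMAINS_PER_ROOT_DOMAIN = 16

def max_io_domains(cmiou_boards):
    """Ceiling on I/O domains if every CMIOU board backs a root domain."""
    return cmiou_boards * IO_DOMAINS_PER_ROOT_DOMAIN

print(max_io_domains(4))   # fully populated PDom (4 boards): 64
print(max_io_domains(8))   # fully populated chassis (8 boards): 128
```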
Note that creating new I/O domains is not the only way of consuming the extra resources. CPU cores and memory provided by an additional CMIOU board can also be used to increase the resources of existing I/O domains.

Creating new dedicated domains
Dedicated domains provide CPU, memory, and I/O resources—specifically an InfiniBand HCA and a 10GbE NIC—that are not shared with other domains (and are therefore dedicated). Virtualization within dedicated domains is provided by Oracle Solaris Zones.
New CMIOU boards can be used to create new dedicated domains. Dedicated domains can be created from one or more CMIOU boards. If two CMIOU boards are added, for example, they could be used together to create a single dedicated domain, or they could be used individually to create two dedicated domains.
When multiple dedicated domains have been created in a PDom, CPU and memory resources do not need to be split evenly between the dedicated domains. These resources can be assigned to dedicated domains at a granularity of one core and 16GB of memory.
The largest possible dedicated domain on both Oracle SuperCluster M8 and Oracle SuperCluster M7 contains four CMIOU boards.

Expanding existing dedicated domains
A new CMIOU board can be used to boost the resources of an existing dedicated domain, up to the maximum capacity of four CMIOU boards per dedicated domain. The available upgrade options will depend on the specifics of the existing domain configuration as well as the number of CMIOU boards being added. Customers should consult their Oracle account team to explore possible options. I talk more about Oracle domains in my previous blog, Is "Zero-Overhead Virtualization" Just Hype?

What is the required downtime for hardware upgrades?

Two deployment approaches are available for hardware upgrades:

Rolling upgrades
Rolling upgrades allow service outages associated with a hardware upgrade to be minimized or eliminated, because only one PDom is affected at a time. Provided the Oracle SuperCluster has been configured to be highly available, services need not be affected during a rolling upgrade. High availability can be achieved using clustering software, such as Oracle Real Application Clusters (RAC) for database instances and Oracle Solaris Cluster for applications.
The downside of rolling upgrades is that the overall period of disruption is greater: because PDoms are upgraded one at a time, the upgrade process takes longer.
 Non-rolling upgrades
The benefit of non-rolling upgrades is that the overall period of disruption is shorter, since PDoms are upgraded in parallel. The downside is that all services become unavailable during the upgrade, since a full system outage is required. Before the hardware upgrade process can begin, a suitable Quarterly Full Stack Download Patch (QFSDP) must be applied to the existing system, and backups taken with the osc-config-backup tool. For information about the expected time required to complete rolling or non-rolling upgrades for a particular configuration, the customer's Oracle account team should be consulted.

Hardware upgrades allow the available resources of Oracle SuperCluster to be extended as required to satisfy changing business requirements. Upgrades of varying complexity can be handled smoothly while minimizing downtime, thanks to tool-based automation of the upgrade process. The end result is that customers are able to realize the benefits of hardware upgrades without extended periods of disruption to production systems.

About the Author

Allan Packer is a Senior Principal Software Engineer working in the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle. He has worked on issues related to server systems performance, sizing, availability, and resource management; developed performance and regression testing tools; published several TPC industry-standard benchmarks as technical lead; and developed a systems/database training curriculum. He has published articles in industry magazines and presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001. Allan is currently the technical lead and architect for Oracle SuperCluster.



June Database IT Trends in Review

This summer has been an exciting one for converged infrastructure, with lots of announcements this past month! In case you missed it...

Oracle debuted "Oracle Soar" on June 5, an automated enterprise cloud application upgrade offering that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30%. Larry Ellison discussed the details of Oracle Soar, which includes a discovery assessment, a process analyzer, automated data and configuration migration utilities, and rapid integration tools. The automated process is powered by the True Cloud Method, Oracle's proprietary approach to supporting customers throughout the journey to the cloud.

According to Wikibon, do-it-yourself x86 servers cost 57% more than Oracle Database Appliance over 3 years. The Wikibon research paper also shows that the above-the-line business benefits of improved time-to-value from a hyperconverged full-stack appliance are over 5x greater than the IT operational cost benefits. Wikibon also argues that the traditional enterprise strategy of building and maintaining low-cost x86 white-box piece-part infrastructure is unsustainable in a modern hybrid cloud world.

The experts talk converged infrastructure and AI

We invited Neil Ward-Dutton, one of Europe's most experienced and high-profile IT industry analysts, to discuss how robotic process automation (RPA) and artificial intelligence (AI) have the potential to transform not just routine administrative business processes but also those that have traditionally depended on skilled workers. Read the interview here.

Top fintech influencer and founder of Unconventional Ventures, Theodora Lau, joined us to discuss how AI is transforming banking. To process all the data that modern enterprises create, such as financial data, at speed and scale, enterprises need better infrastructure to support it. Learn more about the interview here.
Internationally recognized analyst and founder of CXOTalk Michael Krigsman joined us on the blog to discuss the positive influence of digital disruption. The way we approach business today, he says, is being turned on its head by new demands from internal and external customers. We're at a crossroads where innovative technologies and new business models are overtaking traditional approaches, creating significant pressure and challenges for tech infrastructure and the people who manage it. Read the interview here.

The future of banking

Srinivasan Ayyamoni, transformation consulting lead at Cognizant focusing on the banking industry, discusses the relentless cycle of innovation, rising consumer expectations, and business disruptions that have created major challenges as well as lucrative opportunities for the banking industry today. Read more here. Chetan Shroff, Oracle Commercial Leader at Cognizant, discusses why banks must look carefully at their IT infrastructure before they can benefit from new, exciting tech innovations.

Don't Miss Future Happenings: subscribe to the Cloud-Ready Infrastructure blog today!


Engineered Systems

Oracle Exadata: Ten Years of Innovation

Today's guest post comes from Bob Thome, Vice President of Product Management at Oracle.

I recently read some interesting blog posts on the driving forces behind many of today's IT innovations. One of the common themes was the realization that sometimes purpose-built engineering is better at solving the toughest problems. Given that 2018 marks the 10-year anniversary of the introduction of Oracle's first engineered system, Oracle Exadata, I started thinking about many of the drivers that led to the development of this system in the first place. Perhaps not surprisingly, I realized Oracle introduced Exadata for the same reason driving other innovations: you can't reliably push the limits of technology using generalized "off-the-shelf" components.

Back in the mid-2000s, the conventional wisdom was that the best way to run mission-critical databases was to use a best-of-breed approach, stitching together the best servers, operating systems, infrastructure software, and databases to build a hand-crafted solution to meet the most demanding application requirements. Every mission-critical deployment was a challenge in those days, as we struggled to overcome hardware, firmware, and software incompatibilities among the various components in the stack. Beyond stability, we found it difficult to meet the needs of a new class of extreme workloads that exceeded the performance envelopes of the various components. We were not realizing the true potential of the components, because we were limited by the traditional boundaries of dedicated compute servers, dumb storage, and general-purpose networking.

We revisited the problem we were trying to solve:

Performance: how to optimize the performance of each component in the stack and eliminate bottlenecks when processing our specific workload.
Availability: how to provide end-to-end availability, from the application through the networking and storage layers.
Security: how to protect end-user data from a variety of threats both internal and external to the system.
Manageability: how to reduce the management burden of operating these systems.
Scalability: how to grow the system as customers' data processing demands ballooned.
Economics: how to leverage the economics of commodity components while exceeding the experience offered by specialized mission-critical components.

Reviewing these objectives in light of the limits of best-of-breed technology led to a simple solution: extend the engineering beyond the individual components and across the stack. In other words, engineer a purpose-built solution to provide extreme database services. In 2008, the result of this effort, Oracle Exadata, was launched.

The mid-2000s saw explosive growth in compute power, as Intel continually launched new CPUs with greater and greater numbers of cores. But databases are I/O-hungry beasts, and I/O was stuck in the slow lane. Organizations were deploying more and more applications on larger and larger SANs, connecting the servers to the storage with shared-bandwidth pipes that were fast becoming a bottleneck for any I/O-intensive application. The economics and complexity of SANs made it difficult to provide databases the bandwidth they required, and the result was lots of compute power starved for data. The burning question of the day was, "How can we more effectively get data from the storage array to the compute server?"

The answer, in hindsight, was quite simple, although quite difficult to engineer: if you can't bring the data to the compute, bring the compute to the data. The difficulty was that you couldn't do this with a commercial storage array—you needed a purpose-built storage server that could work cooperatively with the database to process vast amounts of data, offloading processing to the storage servers and minimizing the demands on the storage network. From that insight, Exadata was born.
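The payoff of bringing compute to the data can be illustrated with a toy simulation, in plain Python (this is a sketch of the offloading concept, not Oracle's implementation): compare how many rows cross the interconnect when a predicate is applied at the database server versus inside the storage layer.

```python
# Toy illustration of storage offload: push the predicate down to the
# storage layer so only matching rows cross the interconnect, instead of
# shipping every row to the database server and filtering there.

def storage_rows(n):
    """Pretend storage cell contents: (id, amount) rows."""
    return [(i, i % 100) for i in range(n)]

def scan_without_offload(n, predicate):
    # Ship everything to the database server, then filter there.
    shipped = storage_rows(n)
    result = [r for r in shipped if predicate(r)]
    return result, len(shipped)       # rows over the wire = all rows

def scan_with_offload(n, predicate):
    # Filter inside the storage layer; ship only matching rows.
    shipped = [r for r in storage_rows(n) if predicate(r)]
    return shipped, len(shipped)      # rows over the wire = matches only

pred = lambda row: row[1] == 7        # e.g. WHERE amount = 7
n = 100_000
res1, wire1 = scan_without_offload(n, pred)
res2, wire2 = scan_with_offload(n, pred)
assert res1 == res2                   # same answer either way
print(wire1, wire2)                   # 100000 vs 1000 rows shipped
```

The query result is identical in both cases; what changes is the traffic on the interconnect, which is exactly the bottleneck described above.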
Over the years, we've built upon this engineered platform, refining the architecture of the system to improve performance, availability, security, manageability, and scalability, all while using the latest technology and components and minimizing overall system cost. Innovations Exadata has brought to market:
Performance: Pushing work from the compute nodes to the storage nodes spreads the workload across the entire system while eliminating I/O bottlenecks; intelligent use of flash in the storage system provides flash-based performance with hard-disk economics and capacities. The Exadata X7-2 server can scan 350GB/sec, 9x faster than a system using an all-flash storage array.
Availability: Proven HA configurations based on Real Application Clusters running on redundant hardware components ensure maximum availability; intelligent software identifies faults throughout the system and reacts to minimize or mask application impact. Customers routinely run Exadata in 24/7 mission-critical environments with 99.999% availability requirements.
Security: Full-stack patching and locked-down best-practice security profiles minimize attack vulnerabilities. Build PCI DSS-compliant systems or easily meet DoD security guidelines via Oracle-provided STIG hardening tools.
Manageability: Integrated systems management and tools specifically designed for Exadata simplify the management of the database system. New fleet automation can update multiple systems in parallel, enabling customers to update hundreds of racks in a weekend.
Scalability: Modular building blocks connected by a high-speed, low-latency InfiniBand fabric enable a small entry-level configuration to scale to support the largest workloads. Exadata is the New York Stock Exchange's primary transactional database platform, supporting roughly one billion transactions per day.
Economics: Building from industry-standard components to leverage technology innovations provides industry-leading price/performance. Exadata's unique architecture delivers better-than-all-flash performance at the capacity and cost of low-cost HDDs.
Customers have aggressively adopted Exadata to host their most demanding and mission-critical database workloads. Chances are you indirectly touch an Exadata every day—by visiting an ATM, buying groceries, reserving an airline ticket, paying a bill, or just browsing the internet. Four of the top five banks, telcos, and retailers run Exadata. Fidelity Investments moved to Exadata and improved reporting performance by 42x. Deutsche Bank shaved 20% off its database costs while doubling performance. Starbucks leveraged Exadata's sophisticated Hybrid Columnar Compression technology to analyze point-of-sale data while cutting storage requirements by over 70%. Lastly, after adopting Exadata, Korea Electric Power processes load information from its power substations 100x faster, allowing it to analyze load information in real time to ensure the lights stay on. The funny thing about technology is that you must keep innovating. Given today's shift to the cloud, all the great stuff we've done for Exadata could soon be irrelevant—or will it? The characteristics and technology of Exadata have been successful for a reason: that's what it takes to run enterprise-class applications! The cloud doesn't change that. Just as in the on-premises world, where people don't run their mission-critical business databases on virtual machines because they can't, customers migrating to the cloud will not suddenly be able to run those same mission-critical databases in VMs hosted in the cloud. They need a platform that meets their performance, availability, security, manageability, and scalability requirements, at a reasonable cost.
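The Starbucks example rests on a general property of columnar layouts: values within one column repeat far more than values within one row. This small Python sketch demonstrates the idea with run-length encoding, one building block of columnar compression; it is a generic illustration, not Oracle's actual HCC format.

```python
# Why columnar layout compresses so well: a column's values are highly
# repetitive, so run-length encoding collapses them dramatically.
# Interleaved row storage breaks up the runs, so the same trick barely
# helps. (Generic sketch; not Oracle's actual HCC algorithm.)

def run_length_encode(tokens):
    """Collapse consecutive duplicates into (value, count) pairs."""
    runs = []
    for t in tokens:
        if runs and runs[-1][0] == t:
            runs[-1] = (t, runs[-1][1] + 1)
        else:
            runs.append((t, 1))
    return runs

# Point-of-sale-style table, clustered by store: 50 stores x 200 line items.
rows = [(f"store-{s:03d}", f"sku-{(s * 7 + i) % 200:04d}", 1 + (s + i) % 5)
        for s in range(50) for i in range(200)]

# Column-major: the store column is 10,000 values but only 50 runs.
store_column = [r[0] for r in rows]
col_runs = run_length_encode(store_column)

# Row-major: the same values interleaved with sku and qty -> runs of 1.
row_stream = [field for r in rows for field in r]
row_runs = run_length_encode(row_stream)

print(len(col_runs), "runs in the column layout")   # 50
print(len(row_runs), "runs in the row layout")      # 30000
```

Real columnar formats add dictionary encoding and other transforms on top of this, but the core effect is the same: grouping a column's values together exposes redundancy that row-major storage hides.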
Our customers have told us they want to migrate to the cloud, but they don't want to forgo the benefits they realize running Exadata on-premises. For these customers, we now offer Exadata in the cloud. Customers get a dedicated Exadata system, with all the characteristics they've come to appreciate, but hosted in the cloud, with all the benefits of a cloud deployment: pay-as-you-go pricing, simplified management, self-service, on-demand elasticity, and a predictable operational expense budget with no customer-owned datacenter required. However, not everyone is ready to move to the cloud. While the economics and elasticity are extremely attractive to many customers, we've repeatedly found customers unwilling to put their valuable data outside their firewalls. It may be because of regulatory issues, privacy issues, data center availability, or just plain conservative tendencies toward IT—they are not able or willing to move to the cloud. For these customers, we offer Exadata Cloud at Customer, an offering that puts the Exadata Cloud Service in your data center, delivering cloud economics with on-premises control. So, it's been a wild 10 years, and we are continuing to look for ways to innovate with Exadata. Whether you need an on-premises database or a cloud solution, or are looking to bridge the two worlds with Cloud at Customer, Exadata remains the premier choice for running databases. Look for continued innovation as we adopt fundamental new technologies, such as lower-cost flash storage and non-volatile memory, that promise to revolutionize the database landscape. Exadata will continue as our flagship database platform, leveraging these new technologies and making their benefits available to you, regardless of where you want to run your databases. I hope this post gives you a sense of the history behind Exadata, and of some of the dramatic shifts that will be affecting your databases in the future.
This is the first in a series of blog posts that will examine these technologies. Next, we will look more closely at performance: why performance is critical in a database server, and how we've engineered Exadata to provide the best performance for all types of database workloads. Stay tuned for more:
Oracle Exadata: Ten Years of Innovation
Yes, Database Performance Matters
Deep Engineering Delivers Extreme Performance
Availability: Why Failover Is Not Good Enough
Security: Can You Trust Yourself?
Manageability: Labor Is Not That Cheap
Scalability: Plan for Success, Not Failure
Oracle Exadata Economics: The Real Total Cost of Ownership
Oracle Exadata Cloud Service: Bring Your Business to the Cloud
Oracle Exadata Cloud at Customer: Bring the Cloud to Your Business
About the Author
Bob Thome is a Vice President at Oracle responsible for product management for Database Engineered Systems and Cloud Services, including Exadata, Exadata Cloud Service, Exadata Cloud at Customer, RAC on OCI-C, VM DB (RAC and SI) on OCI, and Oracle Database Appliance. He has over 30 years of experience in the Information Technology industry. With experience at both hardware and software companies, he has managed databases, clusters, systems, and support services. He has been at Oracle for 20 years, where he has been responsible for high availability, information integration, clustering, and storage management technologies for the database. For the past several years, he has directed product management for Oracle Database Engineered Systems and related database cloud technologies, including Oracle Exadata, Oracle Exadata Cloud Service, Oracle Exadata Cloud at Customer, Oracle Database Appliance, and Oracle Database Cloud Service.


Engineered Systems

Is "Zero-Overhead Virtualization" Just Hype?

At its first release—Oracle SuperCluster T4-4—Oracle claimed zero-overhead virtualization for the domain technology used on Oracle SuperCluster. Was this claim just marketing hype, or was it real? And is the claim still made for current SuperCluster platform releases? To answer these questions we need to examine the virtual machine implementation used on SuperCluster: Oracle VM Server for SPARC, also known as Logical Domains (LDoms for short). Oracle VM Server for SPARC is a Type 1 hypervisor implemented in firmware on all modern SPARC systems. The virtual machines it creates are referred to as domains. The diagram below illustrates a typical industry approach to virtualization. In this approach, available hardware resources are shared across virtual machines, with the allocation of resources managed by a hypervisor implemented as a software abstraction layer. This delivers flexibility, but at the cost of weaker isolation and increased virtualization overhead. Optimal performance is delivered only by "bare metal" configurations that eliminate the hypervisor (and therefore do not support virtualization). By contrast, Oracle VM Server for SPARC has a number of unique characteristics:
SPARC systems always use the SPARC firmware-based hypervisor, whether or not domains have been configured—there is no "bare metal" configuration on SPARC that eliminates the hypervisor. For this reason, the concept of bare metal that applies to most other platforms has no meaning on SPARC systems. An important implication is that no additional virtualization layer is required on SPARC systems when configuring domains, so no additional performance overhead is introduced, either.
The SPARC hypervisor partitions CPU and memory resources rather than virtualizing them. That approach is possible because CPU and memory resources are never shared by SPARC domains. Each hardware CPU strand is uniquely assigned to one and only one domain.
In other words, each virtual CPU in a domain is backed by a dedicated hardware strand. Further, each memory block is uniquely assigned to one and only one domain. This approach has a number of important implications:
Since each domain has its own dedicated CPU resources, no virtualization layer is needed to schedule CPU resources in a domain-based virtual machine. The hardware does the scheduling directly. The result is that the scheduling overheads inherent in most virtualization implementations simply don't apply to SPARC systems.
Memory resources in each domain are also dedicated to that domain. That means domain memory access is not subject to an additional layer of virtualization, either. Memory access operates the same way on all SPARC systems, whether or not they use domains.
Over-provisioning does not apply to either CPU or memory with SPARC domains.
We have seen that access to CPU and memory resources on the SPARC systems used in Oracle SuperCluster does not impose overhead, both because these resources are dedicated to each domain and because the same highly efficient SPARC hypervisor is always in use, whether or not domains are configured. We've examined CPU and memory. What about I/O? I/O virtualization is a major source of performance overhead in most virtualization implementations. I/O virtualization with Oracle VM Server for SPARC takes one of three forms:
1. Partitioning at PCIe slot granularity.
In this case one or more PCIe slots, along with any PCIe devices hosted in them, are assigned uniquely to a single domain. The result is that I/O devices are dedicated to that domain. As for CPU and memory, the virtualization in this case is limited to resource partitioning and therefore does not incur the usual overheads inherent in traditional virtualization.
This type of virtualization has been available on every Oracle SuperCluster platform release, and indeed virtualization of this type was the only option available on the original SPARC SuperCluster T4-4 platform. In this implementation, InfiniBand HCAs (which carry all storage and network traffic within SuperCluster), and 10GbE NICs (which carry network traffic between the SuperCluster rack and the datacenter), are dedicated to the domains to which they are assigned. As is true for CPU and memory access, I/O access for this implementation follows the same code path whether or not domains are in use.
Domains of this type are referred to as Dedicated Domains on SuperCluster, since all CPU and memory resources, and all InfiniBand and 10GbE devices, are uniquely dedicated to a single domain. Such domains have zero performance overhead. SuperCluster Dedicated Domains are illustrated in the diagram below.
2. Virtualization based on SR-IOV.
For Oracle SuperCluster T5-8 and subsequent SuperCluster platform releases, shared I/O has also been available for InfiniBand and 10GbE devices. The resulting I/O Domains leverage SR-IOV technology and feature I/O virtualization with very low, but not zero, performance overhead. The benefit of the SR-IOV technology used in I/O Domains is that InfiniBand and 10GbE devices can be shared between multiple domains, since domains of this type do not require dedicated I/O devices. SuperCluster I/O Domains are illustrated in the diagram below.
3. Virtualization based on proxies in combination with virtual device drivers.
This type of virtualization has been used on all SuperCluster implementations for functions that are not performance-critical, such as console access and virtual disks used as domain root and swap devices.
All Oracle SuperCluster platforms since Oracle SuperCluster T5-8—including the current Oracle SuperCluster M8—support hybrid configurations that deliver InfiniBand and 10GbE I/O virtualization via Dedicated Domains (domains that use PCIe slot partitioning) and/or via I/O Domains (domains that leverage SR-IOV virtualization). An additional layer of virtualization is also supported, with one or more low-overhead Oracle Solaris Zones able to be deployed in domains of any type. An example of a configuration featuring nested virtualization is illustrated in the diagram below. The Oracle SuperCluster tooling leverages SuperCluster's built-in redundancy, along with both the resource partitioning and the resource virtualization described above, to allow customers to deploy flexible and highly available configurations. High availability will be the subject of a future SuperCluster blog. In summary, SPARC domains are able to offer efficient and secure isolation with zero or very low performance overhead. The current Oracle SuperCluster M8 platform delivers domain-based virtual machines with zero performance overhead for CPU and memory operations. Oracle SuperCluster M8 virtual machines also deliver I/O virtualization for InfiniBand and 10GbE with either zero performance overhead via Dedicated Domains, or very low performance overhead via I/O Domains. Learn more here.
About the Author
Allan Packer is a Senior Principal Software Engineer working in the Solaris Systems Engineering organization in the Operating Systems and Virtualization Engineering group at Oracle.
He has worked on issues related to server systems performance, sizing, availability, and resource management, developed performance and regression testing tools, published several TPC industry-standard benchmarks as technical lead, and developed a systems/database training curriculum. He has published articles in industry magazines, presented at international industry conferences, and his book "Configuring and Tuning Databases on the Solaris Platform" was published by Sun Press in December 2001.  Allan is currently the technical lead and architect for Oracle SuperCluster.


Cloud Infrastructure Services

Mapping a Path to Profitability for the Banking Industry

Just over 20 years ago, a supercomputer named Deep Blue made history by beating the world's best chess player, Garry Kasparov, in a six-game match. It did this with hardware capable of a little over 11 Gflops of processing speed. In contrast, the iPhone X you might be holding right now is capable of about 346 Gflops. That's enough raw computing power to take on Kasparov plus 30 more grandmasters... at the same time. Such comparisons remind us that even by modern technology-industry standards, mobile technology continues to advance at a breakneck pace. The result of this trend—a relentless cycle of innovation, rising consumer expectations, and business disruption—has created major challenges as well as lucrative opportunities for the banking industry. Today, more banks are discovering that a successful mobile strategy offers a clear path to a profitable future. They are also discovering, however, that the wrong IT infrastructure decisions—especially those involving legacy infrastructure—risk turning this journey into a costly dead end.
Understanding the Mobile Banking Opportunity
There are many reasons why banks increasingly view long-term success through a mobile banking lens. Consider a few examples of the opportunities that an institution can unlock with a successful mobile strategy:
Room to grow: According to the Citi 2018 Mobile Banking Survey, 81 percent of U.S. consumers now use mobile banking at least nine days a month, and 46 percent increased their mobile usage in the past year. Mobile banking apps are now the third most widely used type of app—trailing only social media and weather apps.
A global opportunity: According to the World Economic Forum, 500 million adults worldwide became bank accountholders for the first time—but two billion more remain without banking services.
As with access to healthcare and education, easy access to affordable mobile connectivity—with 1.6 billion new mobile subscribers coming online by 2020—will put banking and payment services in front of many people for the first time.
A mobile-banking revenue boost: According to a 2016 Fiserv study, mobile banking customers tend to hold more bank products than branch-only customers—a trend that suggests bigger cross-selling opportunities. As a result, mobile banking customers bring in an average of 72 percent more revenue than branch-only customers.
Millennials are "mobile-first" banking customers: 62 percent of Millennials increased their mobile banking usage last year, and 68 percent of those who use mobile banking see their smartphones replacing their physical wallets.
Second-Rate Mobile Banking Technology Is Risky Business—and Getting Riskier
As mobile technology advances, however, so do the risks associated with a second-rate mobile banking presence. This is especially true for banks that previously settled on a "good enough" mobile strategy—an approach that, in many cases, was designed to work within or around the limitations of a bank's legacy systems. Two risks stand out for banks that continue to accept a "good enough" approach. First, as competitors invest in cutting-edge mobile technology, they expose the glaring usability, reliability, and capability gaps associated with legacy IT infrastructure. Second, it's clear that technology innovation drives rising consumer expectations. When a bank's mobile offerings fall short, the consequences can be profound, far-reaching, and extremely difficult to rectify:
Unhappy consumers are ready and willing to abandon their banks: In 2016, about one in nine North American consumers switched banks.
Millennials are even faster to switch: During the same period, about one in five adults age 34 or younger switched banks.
Another 32 percent of those surveyed said they would switch in the future if another institution offered easier-to-use digital banking services.
Bad banking apps are a big deal: Seeking a better mobile app experience is now the third most common reason for switching banks—ahead of security concerns and customer-service failures.
Digital lag leaves mobile apps lacking: A recent survey of UK bank customers found that just one in four were able to do everything they wanted using their bank's mobile app, and 34 percent found their bank's app easy to use.
There are many reasons why a bank might continue to rely on a lower-caliber mobile presence built on aging legacy infrastructure. It's very difficult, however, to imagine why any of those reasons would justify this level of potentially grievous damage to a bank's customer relationships, brand image, and industry reputation.
It's Not Too Late to Invest in Mobile Banking Success
I know that I have painted a foreboding picture—especially for banks that want to embrace a modern technology infrastructure but haven't yet been able to follow through. That's why it's important to make another point: it's not too late to get ahead of these challenges and make the investments that enable a truly first-rate mobile banking strategy. First, bear in mind that traditional banks still hold some very important cards: consumers still consider them more deserving of trust than most businesses; their physical branches (though declining in number) are important for certain types of advisory and high-value services; and they have the compliance and legal expertise required to navigate the treacherous regulatory waters of the banking mainstream. Second, it's crucial to recognize that moving away from legacy infrastructure—the sooner the better—may be the single most important move a bank can make to trigger a quick and decisive pivot toward mobile banking success.
4 Keys to Winning with Bank IT Infrastructure
Let's focus now on specifics: four action items that a bank IT leader can use to drive a fast and effective infrastructure modernization program.
1. Embrace the cloud to support global growth. Mobile technology performance is key to creating a good user experience; nobody likes to wait, especially when they want to access their money. Cloud-ready infrastructure is a much better foundation for building robust and reliable mobile offerings—for example, eliminating the latency problems that arise when on-premises systems try to serve a global customer base.
2. Get and stay ahead with help from integrated, co-engineered systems. Hardware and software designed to work together, and offered in simple pre-configured and pre-optimized packages, deliver better performance and faster deployment than DIY non-optimized alternatives. This can be a bank's most powerful technology weapon for fighting back against the complexity, management, and reliability issues that accompany rapid growth and pressure to scale.
3. Liberate your IT staff to do the things that matter. Co-engineered systems and cloud infrastructure both contribute to many of the same goals: attacking complexity, enabling growth, and designing scalable, resilient systems. This means less time spent on tedious maintenance tasks—and more time focused on the business goals that drive success.
4. Build infrastructure that's ready to handle today's data and analytics challenges. An entire category of fintech upstarts is focused on reaching new markets through the use of unconventional credit analytics and scoring systems. These firms incorporate everything from educational achievements to call center records and website analytics into models that identify preferences and assess risk for customers who don't yet have—and might never get—conventional credit scores. In many cases, the only way to serve these customers will be through mobile banking apps and systems.
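To make the fourth point concrete, here is a deliberately simplified Python sketch of an alternative-data scoring model. Every feature name, weight, and threshold is hypothetical, invented for illustration; a production model would be trained on real data and would also have to satisfy fair-lending and explainability requirements.

```python
# Toy alternative-data credit scoring model. Every feature, weight,
# and cut-off here is hypothetical and chosen for illustration only.

ALT_FEATURE_WEIGHTS = {
    "months_of_on_time_phone_bills": 4.0,
    "education_years": 2.0,
    "call_center_complaints": -6.0,
    "avg_monthly_app_sessions": 0.5,
}
BASE_SCORE = 300
APPROVE_AT = 520          # hypothetical approval threshold

def alt_credit_score(applicant):
    """Weighted linear score over non-traditional signals."""
    score = BASE_SCORE
    contributions = {}
    for feature, weight in ALT_FEATURE_WEIGHTS.items():
        contrib = weight * applicant.get(feature, 0)
        contributions[feature] = contrib   # kept so the decision is explainable
        score += contrib
    return score, contributions

applicant = {
    "months_of_on_time_phone_bills": 48,
    "education_years": 16,
    "call_center_complaints": 1,
    "avg_monthly_app_sessions": 20,
}

score, why = alt_credit_score(applicant)
print(f"score={score:.0f}, approved={score >= APPROVE_AT}")  # score=528, approved=True
for feature, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contrib:+.1f}")
```

Note the per-feature contribution breakdown: keeping it alongside the score is what lets a lender explain why an applicant was or wasn't approved, which matters as much as the score itself in a regulated setting.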
Many banks could pursue similar opportunities, given the massive quantities of customer data at their disposal. But first, they'll have to put systems into place that are capable of pulling this data from dozens of siloed sources, combining it with the masses of data flowing into the organization, and applying the right management, analytical, and storage solutions to unlock the insights within.
Oracle Sets Up Banks for Mobile-Tech Success
Oracle's engineered systems are especially adept at giving banks everything they need for a truly modern, mobile-ready IT infrastructure. First, that's because they are built as fully integrated systems. Engineered systems give banks one dedicated, high-availability environment, such as Oracle Exadata, to run Oracle Database, and another, such as Oracle Exalytics or Oracle Exalogic, to run advanced analytics and other critical business applications. Second, along with a single, integrated technology stack, Oracle gives banks a single, integrated technology partner to support a modern mobile banking strategy. This is a powerful advantage, combined with Oracle's ability to deliver openness where it matters: compliance with open architectures, open industry standards, and open APIs, and a commitment to interoperability and integration. These are the qualities that truly give an IT team the freedom and flexibility to support innovative mobile banking functions that are money in the bank.
About the Author
Srinivasan Ayyamoni is a Certified Accountant with 20 years of experience in business transformation, technology integration, and establishing finance shared services for large global enterprises. As a transformation consulting lead with Cognizant's Oracle Solution Practice, he manages large digital transformation engagements focused on helping clients establish a high-performance finance function and partnering with them to achieve superior enterprise value.


The Future of Banking: How AI is Transforming the Industry

Today's blog post is a Q&A session with top fintech influencer and founder of Unconventional Ventures, Theodora Lau. Named one of 44 "2017 Innovators to Watch" by Bank Innovation, ranked No. 2 among Top FinTech Influencers 2018 by Onalytica, and named to the LinkedIn Top Voices 2017 list for Economy and Finance, she's a powerful voice in the industry. If you probe into the rapid adoption of artificial intelligence (AI) initiatives in the enterprise, it quickly becomes clear what's behind it: big data. In a 2018 NewVantage Partners survey of Fortune 1000 executives, 76.5 percent cited the greater proliferation and availability of data as what is making AI possible. As Randy Bean puts it in an MIT Sloan Management Review article, "For the first time, large corporations report that they have direct access to meaningful volumes and sources of data that can feed AI algorithms to detect patterns and understand behaviors….these companies combine big data, AI algorithms, and computing power to produce a range of business benefits from real-time consumer credit approval to new product offers." To process all that data, such as financial data, at speed and scale, enterprises need infrastructure to support it. Infrastructure specifically designed for financial and big-data applications, with hardware and software co-engineered to work optimally together, can offer better performance and faster analytics. It's already helping deliver a better customer experience—and that's especially true in the financial services industry. We asked fintech influencer Theodora Lau to talk about the major innovations taking place in the traditionally conservative world of financial services. One key driver of this innovation is the infiltration of AI technology into the financial services industry. A second driver is a new era of partnerships between fintech startups and traditional financial institutions.
Traditional financial institutions and fintechs have discovered that, by partnering, they can take advantage of each other's strengths to develop innovative, revenue-generating offerings. PwC's Global FinTech Report 2017 found that 82 percent of mainstream financial institutions expect to increase their fintech partnerships in the next three to five years.
Theo, how are fintech startups disrupting the industry, and how are the traditional financial services companies responding to that?
If you'd asked that question a few years ago, most people would have said banks are in trouble and need to defend against fintechs. But, starting sometime around 2017, the industry began to turn around and become more willing to collaborate. It makes sense, because fintech startups are typically more focused on specific use cases: they home in on those and do them really well. They have really good ideas and they tend to be very customer-experience-driven, though they lack scale compared to incumbent banks. And, as much as we talk about how bank infrastructure is aging, banks still have a large customer base and can scale. Traditional financial services companies have existing customers and brand recognition, whereas fintech startups are typically starting from scratch. At the end of the day, it's money that we're talking about, and money is very personal and emotional. How much will a consumer actually go out and trust a company that has no history? While a startup may have the most beautiful customer experience, will I trust it enough to hand over my money? I see the two of them [traditional banks and fintechs] working together as the best outcome, from a consumer perspective as well as for their own survival.
Is it true that new technology is making more collaboration possible as well?
Yes, exactly—through APIs and open banking.
I don't believe that any single bank can offer everything that the consumer wants, and I don't think it's in their best interest to try to be everything for everybody. For instance, ING, a large bank based in Amsterdam, has multiple operating units in different countries. Its German operation formed a partnership with a startup called Scalable Capital, an online wealth manager and robo-advisor, to offer a fully digital solution for its customers in Germany. This is a brilliant example of a partnership where the bank extends its product offerings by leveraging the solutions and capabilities that someone else has.
What AI technology is changing the industry?
Open banking
Open banking is the big game changer. One example is Starling Bank in the UK, which does a really good job of being an online marketplace. Using APIs, it acts as a hub through which consumers can get access to things that traditional banks don't offer, including spending insights, location-based intelligence, and links to retailers' loyalty programs.
Technology companies with banking services
Another example is Tencent and Alibaba in China and the big ecosystem they've built. Between the two companies, they handle over 90 percent of all mobile payments in China. They're not banks; they're technology companies that put the consumer at the center of everything they do. They view payments and financial services not as an end in themselves, but as a tool to further enhance their offerings.
Voice banking
We can't forget about voice banking. We see more banks trying to get into that space—though we are not quite there yet. Voice is very intuitive. It's just easier to talk than it is to remember how to navigate a menu, which is a challenge in online and mobile banking. Imagine if you could actually say, "Hey, pay my bills," instead of having to remember where you need to go in the menu tree.
Let's go deeper into how AI has changed the customer experience.
How has it affected personalization and the omnichannel experience?
When we're talking about AI in customer experience, it's important to remember that banks are not really competing with other banks anymore. When consumers do their "banking," they're comparing the experience to what they get from all the other online businesses. How does banking compare to getting something from an ecommerce site? Is it quick and easy? Is it available when I want it and where I want it? The threat to banks isn't so much fintech companies as the big tech companies like Apple, Amazon, Alibaba, and Tencent. They are the ones banks should be worried about. Look how many customers they have. Look at the products and services they offer, even payments. It's because of the vast amount of customer information they collect, as well as data analytics and AI, that big tech companies can generate insights into user behavior and spending habits, allowing them to anticipate your needs and offer contextual, personalized recommendations. That's how payments are supposed to work as well. Consumers shouldn't have to think, "I need to pay something." They have a specific task they want to do, and banking services are just a means to an end. From a consumer perspective, hopefully, AI can make banking ambient and transparent in our increasingly connected world.
We've been talking a lot about retail banking, but I presume AI is also making similar changes to other areas of financial services.
Marketing is a good example. A big thing is figuring out how to entice people to open an email, because everything is digital now. HSBC ran a trial using AI to figure out whether its members would prefer rewards for travel or merchandise versus rewards in the form of gift cards or cash. It sent emails to 75,000 credit card members using recommendations generated by AI, while a control group received emails with rewards from a random category.
As it turned out, the emails using AI-generated recommendations had a 40% higher open rate. That’s a fascinating business use case, because you don't want to waste your marketing dollars if people are not going to open your emails.

Do traditional financial services companies have the infrastructure in place to fully leverage AI, or even to partner with fintechs? How is AI changing processes within their firms’ infrastructure?

Financial institutions have a lot of data, but when it comes to leveraging AI, which is heavily data-dependent, the challenge is being able to access that data. A lot of times, these systems are very siloed. So while a bank may have a ton of data about a customer, how well can it actually pull all of that data together to generate insights that are useful and can be acted on? The other challenge: if you can bring the data together in a meaningful way, is it explainable? If you are using AI to make decisions, such as in lending, are you going to be able to explain what the AI is recommending and how someone qualifies for a loan, for example? That's something you need to be able to do.

What’s holding the banks back in terms of modernizing their technology?

It’s a couple of things. You need to look at the make-up of the people, because it has to start from the top.

Embrace technology

At the upper layer, finance people have been doing the same thing for many years. Until you have leaders, including senior executives and board members, who are passionate about and actually understand technology, it's hard to transform. It goes beyond just having a mobile app: true digital transformation and modernization involve changes in culture, mindsets, and processes.

Data security

Of course, it’s also a heavily regulated industry. If you're going to be upgrading something, and you already have customers, money, and transactions there, you need to be very careful about what you're doing.
Privacy and security of data are of paramount importance.

The pain of upgrading infrastructure

It’s also a very expensive and lengthy process to upgrade core systems, so money is definitely one reason financial institutions aren’t modernizing their infrastructure. Some of my friends would say that some banks are actually not scared enough yet. Look at their earnings: they're still making good money. So if they’re not feeling the pain yet, how urgent is it for them to do something drastic? Yet many mid-size financial companies don’t have large budgets, but they still need to modernize their technology solutions to manage the explosion of data. There are banks that are certainly more at the forefront and are betting big on technology. For example, JPMorgan Chase’s technology budget is over $10 billion in 2018, with most of it going toward enhancement of mobile and web-based services.

Where do you see AI taking financial services in the future?

What I would like to see in the future in the US is what we see right now in China with their platforms. In India, China, and Africa, mobile adoption is much higher than in the US, and the mode of doing things is very different. We shouldn't be looking at banking as an entity per se; consumers are looking for banking services. That’s what we will be evolving toward, and we'll need AI to be the brain and the engine that offers a deeper, richer, more personalized experience. Authentication is another interesting area. No one wants to remember passwords or carry those little tokens; that's not customer-friendly at all. So biometrics and voice authentication will be very fascinating, especially for voice banking, which is still in the exploratory stage. Checking balances is not really exciting, but, in the future, AI will let the bank know I got a work bonus and will automatically ask whether I’d like to put aside 10 percent of it into savings.
Things like that will enable financial wellness and more value overall for customers. That’s where I think AI can help in the future—and that’s how we can make banking better. And behind this future will be the enormous quantities of data that make this customer knowledge possible, and the ability to collect and analyze the data in real time, built on the right infrastructure. Learn more about how machine learning and AI can add substantial value to the financial services ecosystem.    
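For readers who want to see the arithmetic behind the HSBC trial mentioned above, the headline number boils down to a simple relative-uplift calculation. The email counts below are invented for illustration; only the 40% relative uplift mirrors the article.

```python
# Hypothetical open-rate uplift calculation in the spirit of the HSBC
# email trial. All counts are made up for illustration.

def open_rate(opens: int, sent: int) -> float:
    """Fraction of sent emails that were opened."""
    return opens / sent

def relative_uplift(test_rate: float, control_rate: float) -> float:
    """Relative improvement of the test group over the control group."""
    return (test_rate - control_rate) / control_rate

control = open_rate(opens=7_500, sent=37_500)   # hypothetical 20% open rate
ai_test = open_rate(opens=10_500, sent=37_500)  # hypothetical 28% open rate

print(f"Control open rate: {control:.0%}")
print(f"AI-recommendation open rate: {ai_test:.0%}")
print(f"Relative uplift: {relative_uplift(ai_test, control):.0%}")
```

With these invented numbers, a 20% versus 28% open rate yields exactly the 40% relative uplift the trial reported.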

Today's blog post is a Q&A session with top fintech influencer and founder of Unconventional Ventures, Theodora Lau. Named one of 44 "2017 Innovators to Watch" by Bank Innovation, ranked No. 2 Top...

Cloud Infrastructure Services

Why Should CMOs Care About GDPR?

What GDPR means for CMOs: “Is all the hype justified?”

As a direct link to customers and their data, marketers will be uniquely affected by GDPR, so we asked Oracle’s Marie Escaro, Marketing Operations Specialist for OMC EMEA SaaS, and Kim Barlow, Director, Strategic and Analytic Services for OMC EMEA Consulting, to discuss how GDPR affects marketing teams.

Is all the hype around GDPR justified? How seriously should marketers be taking it?

Kim: European regulators have a clear mandate to tighten controls on the way businesses collect, use, and share data, and the prospect of large fines for non-compliance is enough to make companies err on the side of caution. Marketers should take this very seriously, as a large part of their role is to ensure the organization has a prescriptive approach to acquiring, managing, and using data.

Marie: Businesses increasingly rely on data to get closer to their customers. With data now viewed as the soft currency of modern business, companies have every reason to put the necessary controls in place to protect themselves and their customers.

What does this mean for CMOs and marketing teams?

Marie: Marketing teams need a clear view of what data they have, when they collected it, and how it is being used across the business. With this visibility, they can define processes to control that data. I once worked with a company that stored information in seven different databases without a single common identifier. It took two years to unify all of this onto a single database, which should serve as motivation for any business in a similar position to start consolidating its data today. It’s equally important to set up processes that prioritize data quality. Encryption is good practice from a security standpoint, but marketers also need to ensure their teams are working with relevant and accurate data.

What’s been holding marketers back?

Kim: There is still a misconception around who is responsible for data protection within the organization.
It’s easy to assume this is the domain of the IT and legal departments, but every department uses data in some form and is therefore responsible for making sure it does so responsibly. Marketing needs to have a clear voice in this conversation. Many businesses are also stuck with a siloed approach to their channel marketing and marketing data, which makes the necessary collaboration difficult. These channel siloes within marketing teams have developed through years of growth, expansion, and acquisitions, and breaking them down must be a priority so everyone in the business can work off a centralized data platform.

Is this going to hamper businesses or prove more trouble than it is worth?

Kim: Protecting data is definitely worth the effort for any responsible business. But GDPR is not just about data protection. It’s a framework for new ways of working that will absolutely help businesses modernise their approach to handling data, and benefit them in the long term. If we accept that data is an asset with market value, then it’s only natural that customers gain more control over who can access their personal information and how it is used and shared. Giving customers confidence that their data is safe and being looked after responsibly, while ensuring that data is better structured and of higher quality, will be good for the businesses deriving value from that data.

What should CMOs do to tackle GDPR successfully?

Marie: As with any major project, success will come down to a structured approach and buy-in from employees. CMOs need to stay close to this issue, but in the interests of their own time they should at least appoint a strong individual or team as part of an organization-wide approach to compliance. Marketing needs to be part of that collaborative effort and should be working in a joined-up way with finance, IT, operations, sales, and any other part of the business to ensure all data is accounted for and properly protected.
Find out more and discover how Oracle can help with GDPR.

About the Authors

Marie Escaro is a Marketing Operations Specialist at Oracle. She has more than 15 years of experience coordinating partnerships between sales and marketing, using high-performance tools to improve marketing adoption, data quality management in CRM, and automated marketing processes. She specializes in marketing automation, CRM, direct marketing, international localization, and communication, is an Eloqua Master, and enjoys the feeling of having a positive impact and changing the world while working with the best marketers in the industry.

Kim Barlow is currently the Director of Strategic and Analytical Services EMEA at Oracle. She has had an extensive career in tech and is currently working with a number of clients to help drive their lifecycle and digital strategies using Oracle technology. She loves her life, her family, her friends, and her work colleagues.


Cloud Infrastructure Services

What is IT's Role in Regards to GDPR?

Usually when any sort of new compliance regulation regarding personal data comes out, it is automatically assumed to be solely “IT’s problem,” because technology is such a huge component of the data collection and data processing system. But compliance is in fact an organization-wide commitment; no individual or single department can make the organization compliant. If you've somehow missed the May 25th deadline, don't panic too much, you're not alone. But you do need to move quickly, because there are clear areas where IT can add significant value in helping the organization achieve GDPR compliance faster and more methodically.

1. Be a data champion

Organizations know how valuable their data is, but many departments, business units, and even board members may not realize how much data they have access to, where it resides, how it is created, how it could be used, and how it is protected. This is one of the main reasons why organizations are lagging: unclear oversight of where all personally identifiable data (PID) resides. The IT department can play a clear role in helping organizations understand why data, and by extension GDPR, is so important, and determine the best way to use and protect it. Educating the greater organization on what exactly GDPR is and the ramifications of non-compliance will help instill a sense of urgency across the organization and ensure that everyone is moving quickly to comply. In addition, GDPR is an excellent opportunity for IT to explore integrated infrastructure technology and different approaches to data management that can help unify where and how PID is used and processed. Oracle Exadata is a complete engineered system that is ideal for consolidating and accelerating the Oracle Databases that handle much of an organization's PID.

2. Ensure data security

GDPR considers protection of PID a fundamental human right, so organizations need to ensure they understand what PID they have access to and put appropriate protective measures in place. IT has a role to play in working with the organization to assess security risks and ensure that appropriate protective measures, such as encryption, access controls, and attack prevention and detection, are in place.

In my previous post on the new regulations that the telecommunications industry is facing, I mentioned that PCI-DSS compliance is being used as a basic guideline for IT to help achieve GDPR compliance. GDPR is unfortunately quite broad and not well defined, while PCI-DSS makes much clearer demands on data security, so many companies are intelligently using it as a starting point. Engineered systems, including Exadata, have undergone rigorous review to determine their compliance with PCI DSS v3.2, so customers can take care of at least the technological requirements of that regulation.

At a glance, Exadata features extensive database security measures to help customers protect and control the flow of PID:

- Perimeter security and defence in depth
- Open security by default
- DB-scoped security and ASM-scoped security (CellKey.ora: key, asm, realm)
- InfiniBand networking, secure by default, with the option to assign particular gateways to segregate the networks
- Auditd monitoring enabled (/etc/audit/audit.rules)
- Cellwall: an iptables-based firewall
- Password-protected boot loader

All of which align well with common industry compliance strategies for GDPR that focus on: 1) authentication, 2) authorization, 3) credential management, and 4) privilege management.

3. Help the organization be responsive

GDPR requires organizations not only to protect personal data but also to respond to requests from individuals who, among other things, want to amend or delete data held on them.
That means that personal data must be collected, collated, and structured in a way that enables effective and reliable control of all this information. This means breaking down internal silos and ensuring the organization has a clear view of its processing activities with regard to personal data.

4. Identify the best tools for the job

GDPR compliance is as much about process, culture, and planning as it is about technology. However, there are products available that can help organizations with key elements of GDPR compliance, such as data management, security, and the automated enforcement of security measures. Advances in automation and artificial intelligence mean many tools offer a level of proactivity and scalability that doesn't lessen the responsibility of people within the organization, but can reduce the workload and put in place an approach that can evolve with changing compliance requirements.

5. See the potential

An improved approach to security and compliance management, fit for the digital economy, can give organizations the confidence to unlock the full potential of their data. If data is more secure, better ordered, and easier to make sense of, it stands to reason an organization can do more with it. It may be tempting to see GDPR as an unwelcome chore. However, companies should also bear in mind that this is an opportunity to seek differentiation and greater value, and to build new data-driven business models, confident in the knowledge that they are using data in a compliant way. Giving consumers the confidence to share their data is also good for businesses. The IT department will know better than most how the full value of data can be unlocked, and can help businesses stop seeing GDPR as a cost of doing business and start seeing it as an opportunity to do business better.

Learn more about GDPR and how Oracle can help
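To make one of the protective measures above concrete, here is a minimal Python sketch of pseudonymization: replacing a direct identifier with a keyed hash so records can still be joined without exposing the underlying value. The key handling and field names are illustrative assumptions only, not Exadata functionality or formal GDPR guidance.

```python
# Sketch of pseudonymizing a personal identifier with a keyed hash
# (HMAC-SHA256). Standard library only; key management is simplified
# for illustration and is not production guidance.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-me-in-a-vault"  # hypothetical key

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Map a direct identifier (e.g. an email address) to a stable token.
    The same input always yields the same token, so joins still work,
    but the original value cannot be recovered without the key."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 129.99}
safe_record = {**record, "email": pseudonymize(record["email"])}
```

Pseudonymization of this kind reduces exposure when data moves between systems, though under GDPR the pseudonymized data is still personal data as long as the key exists.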



May Database IT Trends in Review

April and May flew by! Check out the latest database infrastructure happenings you may have missed in the last two months...

In case you missed it...

General Data Protection Regulation (GDPR) took effect last week on May 25th, and many companies were "unprepared" despite having two years to plan for it. If you're set, great! Otherwise, check out these posts to get you up to speed ASAP:

- What is GDPR? Everything You Need to Know.
- It's Not Too Late: 5 Easy Steps to GDPR Compliance
- Your Future Is Calling: Surprise! There’s (Always) More Regulation on the Way

The experts take over

We've recently invited tech luminaries to talk about the intersection of new, emerging technologies and the challenges that organizations are facing now in the digital age.

- Welcome to the ‘Self-Driving’ Autonomous Database with Maria Colgan, master product manager, Oracle Database
- Going Boldly into the Brave New World of Digital Transformation with internationally recognized analyst, and founder of CXOTalk, Michael Krigsman
- The Transformative Power of Blockchain: How Will It Affect Your Enterprise? with blockchain expert and founder of Datafloq, Mark van Rijmenam

How is the telecommunications industry changing?

- Your Future Is Calling: How to Turn Data into Value-Added Services
- Telcos, Your Future Is Calling: It Wants to Show You What’s Possible
- Telcos, Your Future Is Calling! Is Your Back Office Holding You Back?
- Your Future Is Calling: Get Connected—With Everything

Don’t miss future happenings: subscribe here today!


Cloud Infrastructure Services

It's Not Too Late: 5 Easy Steps to GDPR Compliance

GDPR went into effect last week, May 25th, with, unsurprisingly, many organizations scrambling to make the deadline. If you've been keeping up with this blog, you know that we've been highlighting this topic for months. But don't worry, it’s not too late to take control of your data and prepare your organization. Here, we outline five surprisingly simple steps that can help get your organization on the path to compliance.

Step 1: Don’t panic!

Seriously! You may have missed the deadline, but you're not the only one. A recent report estimated that 60% of businesses were likely to miss the GDPR compliance deadline, and the articles coming out since the 25th indicate this to be quite true. It might be tempting to hastily implement as many data protection measures as possible, as quickly as possible. While this sense of urgency is warranted, as always a measured and strategic approach is best. Companies first need to understand GDPR, how it applies to them, and exactly what their obligations are. This will give them a clear view of the data management and protection measures they need to address their compliance needs.

Step 2: Centralize your data

GDPR asks that only the absolute minimum of necessary user information be collected and processed, and that users have control over what you do with and how you hold that data. Thus, having greater visibility into how and where the organization collects data is imperative. To better monitor data, organizations first need to make relevant information easily accessible to all the right people internally. Years of growth and diversification may have left them with disjointed systems and ways of working, making it difficult for individual teams to understand how their data fits in with data from across the organization. This makes customer information almost impossible to track in a cohesive way, which is why it’s crucial to centralize data and ensure it is constantly updated.
This is one of the reasons why a unified Oracle stack is so attractive. The performance, speed, and cost savings of Oracle Engineered Systems and the cloud are great, but it is the consolidation, standardization, and security from chip to cloud that make complying with regulations like PCI-DSS and GDPR so much easier.

Step 3: Build in data transparency

Once you have a solid grip on your data and data-related processes, the next step is to facilitate the exchange of information between teams. Teams like customer service and sales draw on more customer data from more touchpoints than ever before to help personalize products or services, but this also means the information they collect is spread thinly across the organization. To gain a more accurate view of their data, organizations need to integrate their systems and processes so every team has access to the data it needs.

Step 4: Choose consistency and simplicity over breadth

With businesses collecting such large volumes of data at such a rapid rate, complexity quickly becomes the enemy of governance. Rather than opting for a breadth of technologies to manage this information, your business may want to consider using a single system that sits across the organization and makes data management simple. Cloud-based applications are well suited to this end, as they allow businesses to centralize both data and data-driven processes, making it easier to track where and how information is being used at all times. As I mentioned before, consolidating your Oracle Database infrastructure onto Oracle Engineered Systems like Oracle Exadata delivers the standardization and security needed to help comply with new regulations like GDPR and beyond. With exact equivalents in the cloud, Exadata allows customers to get their systems into compliance today while still keeping an eye on the demands of tomorrow.
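The centralization idea behind Steps 2 and 3 can be sketched in a few lines of Python. The system names and fields below are hypothetical; the point is simply that a common identifier lets siloed records fold into one profile per customer, which is what makes access, amendment, and deletion requests tractable.

```python
# Illustrative sketch: merging customer records from siloed systems
# into one unified view, keyed by a single common identifier.
from collections import defaultdict

# Hypothetical per-system extracts, each keyed by the same customer_id.
crm = [{"customer_id": "C001", "name": "Jane Doe", "email": "jane@example.com"}]
support = [{"customer_id": "C001", "open_tickets": 2}]
billing = [{"customer_id": "C001", "plan": "premium"},
           {"customer_id": "C002", "plan": "basic"}]

def unify(*sources):
    """Fold records from every source into one profile per customer."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            profiles[record["customer_id"]].update(record)
    return dict(profiles)

customers = unify(crm, support, billing)
# customers["C001"] now carries name, email, tickets, and plan in one place.
```

Real consolidation involves schema mapping and data-quality work that this toy example skips, but the shape of the problem, and why a common identifier matters so much, is the same.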
Step 5: Put data protection front-of-mind for employees

New technologies can only go so far in making an organization GDPR compliant. As ever, change comes down to employees, culture, and processes. Data protection must be baked into the organization’s DNA, from decisions made in the boardroom down to the way service teams interact with customers. Much of the focus around GDPR has been on the cost organizations will incur if their data ends up in the wrong hands, but it’s worth remembering that above all else the law requires them to show they have the people, processes, and technologies in place to protect their information. By following these simple steps, organizations can put themselves in a better position to take control of their data.

Learn more about how Oracle solutions like Oracle Engineered Systems can help support your response to GDPR.


How to Build a Digital Business Model

Many companies understand the opportunities presented by digital technologies, but lack a common language or framework for transforming their organizations. Through extensive interviews and surveys, researchers at the MIT Sloan Center for Information Systems Research (CISR) have developed a framework to guide thinking about digital business models. The framework focuses on business design considerations and aims to discover how much revenue is under threat from digital disruption, and whether the company is focused on transactions or on building a network of relationships to meet customers’ life-event needs. CISR analyzed 144 business transformation initiatives to determine the underlying factors that drive next-generation business models, and found two common key dimensions:

Customer knowledge. Many companies are launching products and initiatives to learn more about their end customers.

Business design. Many firms are striving to shift from value chains to networks or ecosystems.

CISR took these two dimensions and created a two-by-two matrix that highlights the business models that will be important in the next five to seven years, and beyond. Not every organization transforms in the same way, because there is no one-size-fits-all approach to building digital business models. As companies evaluate their digital business model, they must answer several key questions. For organizations developing new digital business models, the research suggests that answering these four key questions is a good starting point:

How much revenue is under threat from digital disruption? It is important to think beyond traditional competitors. What parts of your value chain or business might be attractive to another company?

Is the business at a fork in the road? Key decisions include whether to focus on transactions and become an efficiency play, or to meet customers’ life events and build a network of relationships. Investments must be driven by what the company is great at.
What are the buying options for the future? Moving a company’s business model is the equivalent of buying options. One path is to buy an option that helps the company evolve a little bit at a time.

What is your digital business model? CISR research scientist Stephanie Woerner recommends focusing on the business model you want to become. It is important to know where you want to go as a company.

Curious about the framework and Woerner's research? Join this Harvard Business Review webinar on Wednesday, May 30th to hear Woerner speak live with Oracle and share her research findings and insights about digital business models. http://ora.cl/oe8SH
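The two-by-two matrix maps naturally onto a small lookup table. The quadrant names below (Supplier, Omnichannel, Modular Producer, Ecosystem Driver) are the labels used in published MIT CISR research on digital business models; the code itself is purely illustrative.

```python
# Toy classifier over CISR's two dimensions. Quadrant names follow the
# published MIT CISR framework; everything else is invented here for
# illustration only.

def digital_business_model(customer_knowledge: str, business_design: str) -> str:
    """customer_knowledge: 'partial' or 'complete'
       business_design:   'value_chain' or 'ecosystem'"""
    matrix = {
        ("partial",  "value_chain"): "Supplier",
        ("complete", "value_chain"): "Omnichannel",
        ("partial",  "ecosystem"):   "Modular Producer",
        ("complete", "ecosystem"):   "Ecosystem Driver",
    }
    return matrix[(customer_knowledge, business_design)]

print(digital_business_model("complete", "ecosystem"))  # Ecosystem Driver
```

A firm with deep end-customer knowledge operating an ecosystem rather than a value chain lands in the quadrant CISR sees as hardest to build but most valuable.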


Data Protection

GDPR: Too late? Too complicated? Too flexible? Don’t panic.

‘GDPR is coming tomorrow!’ The Wall Street Journal just reported today that as many as 60% to 85% of companies say they don’t expect to be in full compliance by Friday’s deadline. Suggestions as to why this is the case include businesses weighing the cost of compliance against the cost of non-compliance and deciding to accept the risk, while others will simply fail to get their affairs in order in time.

So, as we approach the deadline, what's next? A great many organizations will be compliant and should find their preparations stand them in good stead. But what about those organizations that miss the deadline tomorrow, either by delay or by design? Should they start panicking now? Should they throw resources and money at the problem in the hope of scrambling over the finish line at the eleventh hour? Is it now riskier to rush a response than it is to miss the deadline but have a deliverable approach in place that demonstrates a commitment to compliance?

If businesses are rushing to compliance, what should they be prioritizing? Part of the problem in answering that question is the fact that the regulation itself doesn’t provide a convenient tick-box guide to compliance. Lori Wizdo, principal analyst at Forrester, has written: “The GDPR is a comprehensive piece of legislation. But even at 261 pages long, with 99 articles, [it] doesn’t provide a lot of specificity.” Wizdo was writing for B2B marketers, but the conclusion is the same for all parties: “In practice this renders the GDPR more flexible than traditional ‘command and control’ frameworks.” This conclusion is right, of course, but if you’re asking, in a panic, what constitutes best-practice compliance, “it’s flexible” isn’t necessarily the answer you’re looking for. All the more reason to stop panicking, pause, and consider an appropriate response. If an organization has only now decided it needs to address GDPR, then the one thing it cannot change is when it started.
Rather than wishing they could turn the clocks back, organizations should focus on clearly understanding what they want to achieve and how best to go about it. For example, within GDPR there is a clear focus on security and data protection, but organizations should not develop tunnel vision for those objectives alone. In our recent series on the future of IT infrastructure and the telecommunications industry, we suggested that following PCI-DSS guidelines can get businesses closer to GDPR compliance, so that is a great first step. “A panicked response to GDPR, which focuses almost exclusively on data protection and security, distorts an organization’s data and analytics program and strategy. Don’t lose sight of the fact that implementing GDPR consent requirements is an opportunity for an organization to acquire flexible rights to use and share data while maximizing business value," says Lydia Clougherty Jones, Research Director at Gartner. Flexibility again, but this time as a benefit to organizations trying to come to terms with GDPR. And this is an issue, and an inherent contradiction, at the heart of GDPR: the same regulation can be seen as an unwelcome overhead that some organizations try to avoid, put off, or weigh up and dismiss, or it can be seen as an opportunity to modernize and create a data-driven business that also carries less risk. While organizations may not be able to change when they started the process, every one remains in control of how effectively it responds. One of the first steps is to educate yourself before you rush into any hasty decisions.


Engineered Systems

Cognizant Guest Blog: Supercharged JIT and How Technology Boosts Benefits

Just-in-time manufacturing (JIT) strategies date back to the 1980s, and manufacturers today continue to embrace JIT as they navigate a fast-changing business and technology landscape. This kind of staying power raises an obvious question: How has JIT adapted and evolved to be as useful today as it was 30 years ago? We recently discussed this question with two experts on modern manufacturing technology: Vinoth Balakrishnan, Associate Director at Cognizant Technology Solutions, and Subhendu Datta Bhowmik, Senior Solution Architect at Cognizant. Their insights reveal the critical role that cloud infrastructure plays in creating a new generation of high-performing, “supercharged” JIT manufacturing organizations.    JIT has a pedigree that dates back to the 1980s. Why do modern manufacturing organizations continue to embrace JIT strategies? Balakrishnan: The key to understanding JIT is to realize that it is not just a functionality or feature—it is an organization-wide discipline. In addition, there are two distinct pillars of a JIT strategy: One that is focused on organizational and process issues, and another that is more technology-focused. It is the organizational/process pillar of JIT that keeps it relevant even as technology evolves and changes. This is especially true for continuous improvement (CI), which is a core element of any modern JIT strategy. This is a concept that rises above shifting technology and business trends—giving manufacturers a proven and scalable model for building agile, efficient, and highly competitive operations. Of course, technology plays an important role in JIT, which excels at combining established practices with modern technology innovation. This versatility allows JIT to adapt readily to new manufacturing challenges and competitive pressures, and to meet the demands of global, multi-plant operations with very complex supply chains. 
This combination also leads to what we think of as “supercharged” JIT strategies that unlock new just-in-time benefits and capabilities. Technology innovation is transforming JIT into a truly frictionless materials-replenishment loop—one that shifts from manual to automated processes, and that enables supply chains linking hundreds or even thousands of companies via strings of real-time, fully automated transactions. Another way to think of this transformation is to imagine a supply chain that replaces material with information. When you can share reliable, real-time information up and down any supply chain, you enable huge efficiency gains, and drastic cuts in waste and misallocated resources. These benefits are relevant to all types of manufacturers, by the way, but they are especially important in industries where we see the most complex supply chains and the greatest scalability challenges—for example, the aerospace and automotive industries. Can you discuss a few areas where you have already seen technology innovation combine with JIT strategies to deliver game-changing benefits?  Bhowmik: Two examples come immediately to mind. First, the Industrial Internet of Things (IIoT) has enabled major speed, efficiency, and accuracy gains in key JIT manufacturing practices. The IIoT leverages its core capabilities—machine-to-machine communication and real-time data flows—to elevate JIT performance. Manufacturers gain real-time visibility into manufacturing processes and performance; and they are able to adjust and improve manufacturing processes on the fly. Value stream mapping—an exercise that identifies waste in a manufacturing process stream—illustrates the value of combining the IIoT with JIT activities. Value stream mapping was previously a manual exercise using individual observations and pencil-and-paper notes. 
The IIoT enables real-time, fully automated value stream mapping—a much faster and more accurate approach—and allows manufacturers to fix problems on the spot.

Second, cloud services are fueling a transformation in JIT capabilities and performance. One of the best examples involves supply chain management—an area where manufacturers face major challenges with application and data integration, scalability, and complexity, among many others. Cloud services allow manufacturers to solve many of these issues by defining a common information-exchange framework—one in which each supplier represents a node in a virtual supply chain. This framework allows manufacturers to adapt and adjust in real time to shifts in demand, supply chain disruptions, time-to-market requirements, and other potential risks to JIT performance.

Looking ahead, which emerging technologies are most likely to have a similar impact on JIT capabilities and performance?

Balakrishnan: Assuming a reasonable time frame—let’s say five years—I would look first at intelligent process automation (IPA). IPA has implications for JIT manufacturing because it combines existing approaches to process automation with cutting-edge machine learning techniques. The resulting IPA applications can learn and adapt to new situations—a key to combining process automation with continuous improvement.

Distributed ledger technology—also known as blockchain—is another important area of innovation. Blockchain has the potential to enable “frictionless” transactions that minimize cost, errors, and business risk, and some firms are already using blockchain to create private trading networks within their enterprise supply chains.

Continuous improvement remains a pillar of a modern JIT strategy. Does CI present any special challenges or opportunities related to technology innovation?

Bhowmik: I think it’s important to answer a question like this one by restating—first and foremost—that JIT is a technology-independent concept.
Certainly, this is true of Kanban, 5S, and other CI methodologies that play a role in JIT strategy. These methodologies have proven staying power and rely on timeless concepts—qualities that make them even more valuable as strategic tools. At the same time, it’s important to understand that “technology independent” doesn’t mean “technology free.” Instead, it means that manufacturers are free to choose the technology that best complements a chosen CI methodology and meets their business needs.

Fortunately, it is easy to find examples that illustrate this point. Perhaps the most useful involves the shift from physical Kanban cards to “eKanban” signaling systems. These rely on IIoT machine-to-machine communications and data flows to track the movement of materials, to distribute and route Kanban signals, and to integrate Kanban systems with ERP and other enterprise applications. eKanban systems based on IIoT capabilities are fully automated, and they scale to accommodate global manufacturing organizations of any size. They also virtually eliminate the risk of manual entry errors and lost cards. Technology doesn’t change the principles that make Kanban useful, but it does radically improve your ability to apply those principles.

For a second example, consider the role that machine learning and artificial intelligence can play in upgrading the IT security measures protecting your JIT manufacturing infrastructure. If a cyberattack stops the flow of eKanban signals, it can also stop your manufacturing processes. The benefits of eKanban are real and incredibly valuable—and it’s worth protecting them with appropriate security technology choices.

These examples are a great lead-in to our final question: How can manufacturers set themselves up for success with their own “supercharged” JIT strategies?
Balakrishnan: My first piece of advice would be to partner with an integrator or another source of expert advice and technology services. I realize this sounds like self-serving advice coming from a technology integrator. Nevertheless, it’s a valid recommendation, given the sheer number of technology options available to manufacturers.

Whatever the options chosen, most JIT-related technology initiatives are built on the same foundation: cloud-ready infrastructure. It’s very important to understand what it means to be “cloud ready,” especially in a manufacturing context. First, a cloud-ready infrastructure must support easy and efficient integration of the infrastructure (IaaS), platform (PaaS), and application (SaaS) layers of a manufacturing technology stack. It must also facilitate integration with other systems—within and outside the enterprise—and support interoperability standards such as service-oriented architectures. Second, cloud-ready infrastructure must offer a level of availability suitable for business-critical applications. Third, it must support Big Data applications—ingesting, storing, managing, and processing massive quantities of manufacturing and IIoT data. Next, it must be highly scalable—enabling fast and economical hardware upgrades, and scaling capacity without scaling cost and risk. Finally, cost is always a concern. The most common way to control costs is to use commodity hardware optimized specifically for a cloud-ready manufacturing technology stack.

Bhowmik: We’ve had a great deal of experience assessing and implementing cloud infrastructure solutions, and we find that Oracle Exadata does the best job of satisfying these requirements. This is largely due to Oracle’s use of engineered systems: pre-integrated, fully optimized hardware-software pairings that incorporate the company’s expertise building cloud-ready systems for the manufacturing industry.
Oracle Exadata meets our scalability, security, availability, and cost requirements, and it performs exceptionally well in Big Data and IIoT environments. As a result, Oracle Exadata remains our first choice for building cloud-ready infrastructure solutions for our manufacturing clients.

About the Authors

Vinoth Balakrishnan is a CPIM (supply chain), ASQ Six Sigma Black Belt, and Total Productive Maintenance (Japan) certified Oracle Manufacturing, Supply and Demand Planning Architect with 16+ years of experience in the manufacturing, supply chain, and ERP domains across the U.S., Europe, and Asia. He leads the Oracle VCP/OTM practice at Cognizant.

Subhendu Datta Bhowmik is a CSCP (supply chain), IoT (Internet of Things), and Machine Learning (Stanford) certified Oracle Solution Architect with 20 years of Oracle experience in large program management, supply chain management, product development lifecycle, and digital transformation. At Cognizant, he works on all Oracle Digital Transformation initiatives.


Engineered Systems

Oracle Database Appliance: Simplicity and Performance Go Hand-in-Hand

Financial transactions are an essential part of life. For retail bank customers, paying monthly bills online helps avoid late fees. For business owners, rapidly processing customer payments keeps the cash flowing. For investors, buying or selling a perfectly priced security helps keep portfolio objectives on target. Given the importance of such matters, seamless service and access to real-time data are critical.

Indeed, when a lapse in data access occurs, the impact on a financial service company’s bottom line can be significant. A Ponemon Institute study estimated that the average cost of an unplanned data center outage in the financial services industry neared $1 million, encompassing:

- Damaged or lost data
- Reduced productivity
- Detection and remediation costs
- Legal and regulatory headaches
- Tarnished reputation and brand

Downtime-related risks are significant for small and large financial service providers alike. Fortunately, building the infrastructure to help ensure highly available data access can be more budget-friendly than you think.

Customer connections multiplying

Fintech firms are leading the way in developing individual relationships with their customers, according to EY’s 2017 FinTech report. EY found that a third of digitally active consumers in 20 markets around the world use fintech services, and it projects that usage will exceed 50% in the coming years. Traditional financial services companies are now moving aggressively to catch up to and get ahead of these nimble industry disrupters.

Interestingly, even as digital channels explode, EY’s 2017 Global Consumer Banking Survey found that between 60% and 100% of retail banking customers worldwide still visit local branches. While delivery platforms vary from cutting edge to old school, the foundation of all financial services remains data: real-time insights into information such as balances, transactions, and rates, accessible at any time of day.
Yet collecting, managing, and analyzing that data must be balanced against controlling costs and sustaining profit margins.

Keeping it simple

High-end, sophisticated database technology is great, but sometimes it isn’t a fit from a cost or business perspective. For example, a large financial service company may operate a broad network of remote or branch offices with small business-type needs, while a smaller firm may contend with a tight budget and limited resources. Increasingly, however, financial service providers have found that the Oracle Database Appliance (ODA) offers a streamlined, cost-effective approach to data management.

This purpose-built system is optimized for Oracle Database and Oracle applications, and it can be configured and deployed in 30 minutes or less. Engineered to grow with a firm’s database needs, it leverages standardized, proven configurations that don’t require specialists or a team of installers. Plus, the Oracle Database Appliance eases budget concerns because clients license only the CPU cores they need (up to a robust 72).

Certainty in an uncertain world

Underlying the simplicity and cost effectiveness of the ODA is Oracle’s tradition of reliability and durability. Full redundancy and high availability allow data to be accessed 24/7 while protecting databases from both planned and unplanned downtime. Designed to eliminate any single point of failure, the system also reduces its attack surface with a single-system patch feature. For high-availability solutions, the Oracle Database Appliance may be paired with Oracle Real Application Clusters, Oracle Active Data Guard, and Oracle GoldenGate.

Built-in flexibility

The Oracle Database Appliance works seamlessly with the Oracle Exadata Database Machine to provide unlimited scalability as businesses grow. While the Exadata system is better suited to large enterprises, it is simply too powerful for some situations.
For example, a small and growing financial services company may not need the full Exadata solution at this stage of the business—or have the internal resources to support it. Similarly, for a multinational bank that employs Exadata at a macro level, a new office or branch may have modest database needs as it builds a local footprint. The Oracle Database Appliance is ideal in both situations. Additionally, in the latter case, the branch-level installation will fully integrate with the Exadata system housed at any regional or international base. The two systems were designed to be complementary, with smooth data movement between connected databases and the cloud as well. Ultimately, Exadata has its place, but with the Oracle Database Appliance, you aren’t forced to take on its complexity and cost if it doesn’t fit.

Customer success story: Yuanta Securities keeps it real-time

Taiwan-based Yuanta Securities Company is an investment banking firm that provides assorted brokerage and other investment services across a 176-branch network. To realize the benefits of its merger with Polaris Securities, a popular transaction platform operator, Yuanta Securities needed to ensure seamless, real-time data synchronization between the two firms’ distinct transaction systems without disrupting the customer experience. In addition, it sought to consolidate six databases into a single platform, simplify system management, and rely upon a single support vendor.

To tackle these challenges, Yuanta Securities deployed three Oracle Database Appliance units—one for its production site, a second for its disaster recovery site, and a third for development and testing. While a single Oracle Database Appliance unit required just three hours for installation and configuration, the entire implementation, which included Oracle GoldenGate and Oracle Active Data Guard, was live within 45 days.
The disruption to customer transactions was minimal as the company achieved near-real-time, back-end data synchronization with GoldenGate. Furthermore, Yuanta Securities slashed its hardware costs by 70% and saved on licensing costs thanks to the Oracle Database Appliance’s flexible, capacity-on-demand licensing model.

Customer success story: Coopenae grows full-speed ahead

Costa Rica-based Coopenae is a credit union that serves 100,000 members through 27 locations nationwide. Founded in 1966, the cooperative offers a full array of financial services aimed at meeting the financial needs of its members and their families and communities. Coinciding with Coopenae’s 50th anniversary, management modernized the company’s systems environment to address existing challenges as well as prepare for future opportunities. Key requirements of the upgrade included:

- Accelerated batch processing times that didn’t affect other business-critical applications such as funds management
- A highly efficient and scalable engineered system
- A high-performing, server-virtualization environment featuring a simplified, cost-effective, single-vendor support approach

The Oracle Database Appliance fit the bill on all fronts, along with redundant databases, servers, storage, and networking. In turn, Coopenae reported that its database performance improved threefold, financial statements and other reports were generated five times faster, and monthly closing processing time dropped from six hours to two.

A smart way to fulfill your database needs

As Yuanta Securities and Coopenae discovered, always-on, high-performing database technology doesn’t have to break the bank. Nor does it require debilitating deployment times or complicated support requirements. Instead, the Oracle Database Appliance offers a simple path to improved data performance and the adaptability to align with growing business needs.
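The capacity-on-demand licensing model mentioned above can be illustrated with a minimal sketch. The 72-core ceiling comes from this article; the 2-core increment and the rounding rule are assumptions for the illustration, not Oracle’s published licensing terms.

```python
def enabled_cores(required: int, max_cores: int = 72, increment: int = 2) -> int:
    """Round a requested core count up to the next licensable increment.

    Illustrative sketch of capacity-on-demand licensing: rather than
    licensing every core in the appliance, a firm enables (and pays for)
    only as many cores as its workload needs, growing in small steps.
    """
    if not 1 <= required <= max_cores:
        raise ValueError(f"requested cores must be between 1 and {max_cores}")
    return -(-required // increment) * increment  # ceiling division
```

Under these assumptions, a branch office needing 5 cores would license 6 rather than all 72—the kind of right-sizing that produced the licensing savings Yuanta Securities reported.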
