We Provide the Cloud. You Change the World.

Recent Posts

Oracle for Research

Expanding research infrastructure at Rice University

OpenNebula now interfaces with Oracle Cloud

It always surprises me when researchers say, “wait…Oracle has a Cloud?” In fact, we do – a very robust and secure cloud with High-Performance Computing (HPC), fast networking, Autonomous Database, and more. Through Oracle for Research, researchers get both free access to Oracle Cloud and free technical collaboration to optimize their use of cloud and accelerate their research.

Through Oracle for Research, I’ve had the great pleasure of working with Klara Jelinkova, CIO, and her wonderful team at Rice University. Rice is leading the expansion of research in Houston, sparking innovation and growing national visibility and computational capacity. When Klara applied for a 2-year National Science Foundation (NSF) infrastructure grant to expand Rice’s HPC capacity for research, I was thrilled she invited Oracle for Research to collaborate with her. Under the grant, Rice would build out its on-premises data center, develop infrastructure to burst into a commercial cloud when needed, and share 20% of this goodness with the Open Science Grid to advance research across the U.S. At Oracle, we were eager to advance her vision and the research mission at Rice, and supporting the NSF grant application to expand computational capacity was a natural fit. If awarded, we anticipated a close collaboration, with teams from Rice and Oracle working side by side.

This was 6 months B.C. (Before COVID). By the time Rice learned it had been awarded the grant, the world was in lockdown. Like everything else, our collaboration went virtual. We kicked off the project via Zoom in October 2020, with the overarching objective of exploring the efficacy of cloud-bursting research workloads from the on-premises HPC cluster into Oracle Cloud. To explore this, the team had to enable OpenNebula to interface with Oracle Cloud Infrastructure (OCI).
Bursting from an on-prem environment into a commercial cloud for extra capacity sounds like it should be straightforward – just take the research workload that needs more capacity and move it between environments. In reality, this is a complex technical challenge. Research users, some of whom have little technical knowledge, need to be able to interface with the system to submit their workloads. Research workloads must be organized and prioritized within the on-prem environment. The on-prem environment must determine when and what type of extra capacity is needed, and be able to manage the workloads to optimize use of the cloud capacity. The research workloads that move to the cloud must be transmitted and processed securely, efficiently, and without disruption of the on-prem environment. Research results must be returned to the researcher the same way – securely, efficiently, and without disruption. And cloud capacity must automatically be spun up and spun down so research dollars are optimized.

Understood this way, a project that sounded like an easy, one-mile flat loop suddenly seems a lot more like climbing Mt. Everest. In both cases, though, the wisdom of Chinese philosopher Lao-Tzu holds true: a journey of a thousand miles begins with a single step. At Oracle, our technical teams engage in complex collaborative projects with customers every day. They are experts in simplifying the process: building sophisticated project plans, assigning project roles and tasks, organizing cadence calls, writing and testing code, and communicating in technical shorthand via Slack. Oracle for Research brings these technical collaborations to research teams at no cost.
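The spin-up/spin-down requirement in particular can be pictured as a small scheduling policy. The sketch below is purely illustrative – the thresholds, names, and the policy itself are our invention for explanation, not the logic Rice and Oracle actually built:

```python
def burst_decision(queued_jobs, on_prem_free_cores, cloud_nodes_active,
                   max_cloud_nodes=10):
    """Toy cloud-bursting policy.

    Burst into the cloud when the local queue outgrows free on-prem
    capacity; release cloud nodes when the queue drains, so research
    credits are not wasted on idle capacity. All thresholds here are
    hypothetical.
    """
    demand = queued_jobs - on_prem_free_cores
    if demand > 0 and cloud_nodes_active < max_cloud_nodes:
        return "scale_up"      # queue pressure exceeds local capacity
    if demand <= 0 and cloud_nodes_active > 0:
        return "scale_down"    # local capacity suffices; spin cloud down
    return "hold"              # steady state
```

A real system layers security, data movement, and per-job accounting on top of a decision loop like this, which is precisely why the project was harder than it first sounded.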
Working with Erik Engquist, Director of the Center for Research Computing, and his team at Rice, Oracle for Research and the OCI team plotted a map of milestones to move us forward: identifying a system architecture and required capacity, integrating tools like Kubernetes and SlateCI for containerizing and orchestrating research workloads, and connecting and aligning the on-prem architecture with OCI. The work includes developing and delivering an auto-scalable, GPU-enabled IaaS Kubernetes solution and the seamless integration of SlateCI with the Oracle Kubernetes Engine. Together, this enables Rice to administer and scale multiple on-premises and OCI Kubernetes clusters from a single interface.

When Oracle for Research engages in research collaborations, we actively look for ways to develop community and help make connections between researchers. We also aim to build reusable images, tools, and documentation that can help researchers across disciplines accelerate their results. Sometimes, we find our way to interesting projects because researchers bring them to us. That was the case here: we were especially excited about the opportunity to develop the OpenNebula API driver and contribute it to the open source community. OpenNebula is an open source platform for building and managing customized, enterprise cloud environments, and is used across the Rice University campus to make it easy for students, faculty, and researchers to use Rice’s IT infrastructure. It is a critical component of our collaboration with Rice to help expand their research computing capacity.

“We are delighted that this project with Oracle has enabled Oracle and Rice to contribute this resource back to the OpenNebula community,” said Klara Jelinkova, CIO, Rice University.
“The close collaboration between Oracle technical experts and my team has provided learning opportunities and enabled us to explore new and innovative technical solutions.”

The creation and donation of the OpenNebula API driver is an early and exciting mile marker in the journey that Oracle and Rice are on together, with many more to come. We are grateful for the ongoing and committed collaboration of our colleagues at Rice. For more information about Oracle for Research, and how you can gain access to Oracle Cloud and technical collaborations to advance your work, please visit Oracle for Research.


Oracle for Research

Four Facts You Might Not Know About Oracle for Research

Recently, Alison Derbenwick Miller, Vice President of Oracle for Research, was a guest on “Kickin’ it with Karan” with host Karan Batta, where they talked about Oracle’s program for researchers and how Oracle Cloud is enabling breakthroughs in discovery. Although the program is relatively new, the caliber of the research collaborations it is engaged in is impressive. As host Batta remarked, “These are world-changing things. Many don’t realize some of the projects Oracle is involved in across the globe.” Below are four key takeaways from the interview, providing some facts you might not know about Oracle for Research. You can watch the full video here.

World-changing projects are happening

From COVID to climate change to cancer, researchers are embarking on world-changing projects with Oracle for Research. Take, for instance, Dr. Dan Ruderman of the Lawrence J. Ellison Institute for Transformative Medicine of USC and his breast cancer research that could democratize the way we diagnose and treat the disease. Or Dr. Adrian Mulholland of the University of Bristol, whose research into 3D modeling of molecular dynamics is enabling new ways to develop vaccines for COVID. Around the world, across all disciplines, researchers are working with Oracle to make faster progress toward real, meaningful change for humanity.

Cutting-edge research pushes cloud to new heights

It’s not just researchers who benefit; Oracle’s products are getting stretched and pushed by demanding research requirements. And that’s making Oracle Cloud more robust and powerful. “Researchers are really interesting to Oracle not just for their research, but also because they use Oracle Cloud in ways commercial customers might not,” says Derbenwick Miller. For example, researchers use Oracle’s HPC Cloud for simulations that require massive mathematical calculations and computational power. Their feedback allows Oracle to develop better products that benefit both researchers and our customers.
It’s more than cloud credits, it’s a partnership

While Oracle for Research offers generous cloud credits for research, the program goes beyond free tech. With a focus on collaboration and community, the program provides hands-on consulting, technical mentoring, custom images and tools built for research, and more. “By working collaboratively, we help researchers start using Oracle Cloud quickly, optimize their workloads, and reach results faster.” Working with researchers at Royal Holloway, University of London on their carbon capture and sequestration project, the Oracle for Research team optimized a configuration that worked with the researchers’ software package on Oracle Cloud and cut their research time in half. “Oracle has helped us break the barrier of how much computational power we have in the lab,” says Dr. Saswata Hier-Majumder, principal investigator on the Royal Holloway project. As Derbenwick Miller says, the dedicated team of technical experts and architects within Oracle for Research allows researchers to focus on what they know best – research, discovery, results, and ultimately, solutions.

Bridge to commercialization and community

Most researchers in the program prefer to stay in academia. But for those who might want to commercialize, or connect with other innovators, Oracle can help. “One of the advantages of being a sister program to Oracle for Startups is we can help connect researchers with innovators who might want to commercialize their discoveries,” says Derbenwick Miller. Providing this community connection unlocks potential in many ways. For example, Kinetica, a startup in Oracle for Startups that has done work with the San Francisco Estuary Institute (SFEI), now engages with Oracle for Research to unlock new opportunities for researchers, including a project with the government of Denmark and a local Danish research university.
Oracle Cloud offers researchers autonomous technologies, high performance compute, GPUs, machine learning and artificial intelligence tools, and more, all protected by enterprise-level privacy and security.  And while the foundation of the Oracle for Research program is Oracle’s Cloud technologies, it’s the technical collaboration, nurturing community, and commitment to researcher success that set the program apart.  Contact us to explore how Oracle for Research can help you accelerate your research-driven discoveries.


Cloud Tech for Researchers

Setting Up Oracle Cloud: Consider Compute Images and Instance Shapes

Welcome to part three of our blog series – Oracle Cloud Fundamentals for Researchers. In this post, we will explore additional features to help optimize your use of Oracle Cloud when setting up your compute instance. A compute instance can be a virtual machine (VM) or a physical bare metal machine (BM). Key elements to consider when setting up your compute instance include:

- Compute images
- Instance shapes
- Network tiers and security lists
- Usage control, automation and credits

Here, we take a closer look at the first two key elements, compute images and instance shapes. We will take a deeper dive into network tiers and security lists, and usage control, automation and credits in upcoming articles in this learning series.

Compute Images

A compute image is a template of a virtual hard drive that determines the instance’s operating system (OS) and other software. Oracle Cloud offers researchers the option of using a platform image or building one from scratch, as follows:

Platform Images – These are pre-built Linux or Windows operating system images ready to be deployed in Oracle Cloud. Platform images are tested by Oracle against various hardware shapes and are optimized to perform on Oracle Cloud. Each image offers multiple release build options, which can be viewed and selected via the advanced options for compute instance creation. The platform image for ARM-based devices is available through the Oracle Linux 8 image.

TIP: When starting your project, choose pre-built platform images. If your software configurations are not closely tied to a Linux distribution like CentOS or Ubuntu, we recommend using Oracle Linux. Oracle Linux has maintenance, security and compatibility advantages (e.g., automated patching) over other releases on Oracle Cloud. You can convert your current Linux distributions to Oracle Linux quickly with this link.

TIP: Windows images cannot be exported out of the Oracle Cloud tenancy due to Microsoft licensing considerations.
Therefore, if you are using Windows and need to export data (e.g., to your laptop), you will need to make a backup of your installed software and data and export them.

Oracle Images – These are pre-built images created by Oracle with pre-installed software tools. They are tested for software version compatibility against the OS version and are installed with the latest patches. Oracle images are also tested and benchmarked against relevant shapes with representative sample data. They are designed to jump-start research projects and deliver a common image framework for researchers within and across universities. Some of the popular high-performance computing (HPC) and data science images are listed below:

- AI (All-in-One) GPU Image for Data Science
- Genome analysis toolkit
- Julia AI/HPC GPU Image
- NVIDIA images and NVIDIA GPU image

TIP: The following steps will help you determine whether to use an Oracle-provided image or build your own image from scratch:

- Compare the toolset and versions provided by the Oracle image with the toolsets and versions you require for your project.
- Oracle images always provide the latest compatible software versions with applied patches. If your project depends on older versions of the software, you may decide to use your own image instead of the Oracle-provided upgraded version.
- If the software tools and operating system versions you require are specific to your needs and are older than the versions provided by the Oracle image, we recommend starting out with a base software image that works for you.
- Consult with Oracle for Research or the Oracle for Research GitHub to identify a solution that works best for your situation.

Oracle Cloud Marketplace Images – These are images for OCI developed by various Oracle partners (third parties).
These images can be provisioned directly into your Oracle Cloud tenancy from the Marketplace without any download. Some Marketplace images of particular interest to researchers, which are free and will not consume cloud credits, include:

- Oracle HPC Cluster and Oracle HPC File System
- NVIDIA GPU Cloud Machine Image
- Oracle Linux 7 Cluster Networking Image
- Molecular Dynamics Images (NAMD and GROMACS runbooks)
- Oracle Marketplace Slurm Image (HPC + Slurm combo)
- Oracle Cloud Slurm Image
- BeeGFS On-demand

TIP: Consider testing with the above images if you are looking to build a cluster networking infrastructure using Lustre or BeeGFS.

GitHub Images – Oracle for Research has developed custom and customized images specifically to enable researchers using OCI. These, including associated code and documentation, are available in the Oracle for Research GitHub. Additional images with associated code and documentation are also located in the OCI-HPC GitHub. OCI images are also available as containers and can be found in the open containers GitHub repository. For additional customization and a more collaborative, community approach, you can clone or fork the GitHub repositories.

Custom Images – These are images you create, potentially from on-campus resources, from existing Oracle Cloud instances, or from other cloud environments. They contain your specific software versions, configurations and data. Once you upload a custom image, you can share it across tenancies within Oracle Cloud. You can also export it for external usage. Custom images provide a point-in-time snapshot of an instance and can be used to store multiple versions of your software at different points in time.

TIP: Use custom images to build the same instance in another availability domain. If you need to move an image out of Oracle Cloud, export it to object storage and download it on-premises. You can also move attached block volumes between OCI tenancies and regions.
Boot Volumes – These are a persistent way to keep software installs and configurations in a volume for later use in another instance. Boot volumes cannot be shared concurrently by multiple instances; however, they can be reused within the same availability domain in your tenancy, provided use is not concurrent. Boot volumes can be cloned to replicate and build another instance. Additionally, the volume storage can be extended quickly.

Image OCIDs – These are the unique identity tags allocated to an image in Oracle Cloud. An image may have multiple OCIDs across regions (e.g., Ashburn, Frankfurt). OCIDs are great for sharing or publishing images across researcher tenancies, or by Oracle to researchers. You may also share the OCIDs of custom images you build with other researchers. To learn more, check out Oracle Cloud Provided Images or Oracle Custom Images.

TIP: Use the Image OCID feature to uniquely identify and share resources in an environment where many researchers are simultaneously working with a large number of cloud images.

Instance Shapes

Instance shapes are hardware specifications (e.g., CPU, memory or storage) that can be used to spin up a compute instance from a specific image. Instance shapes differ from images in that shapes define the hardware allocated to the instance, while images define the software in the instance. Instance shapes are broadly categorized as virtual machine (VM) or physical bare metal (BM) and are available from multiple vendors. Instance shapes give you the flexibility to scale your application across low-cost to high-performance hardware available in the Cloud. Oracle Cloud provides both flexible shapes (e.g., AMD Rome) and fixed shapes (e.g., Intel Skylake). More detailed information on available shapes, their specifications and usage can be found here.

Key Takeaways

In this blog, we’ve explored two key considerations when setting up an Oracle Cloud instance: compute images and instance shapes.
Remember, a compute image is a template of a virtual hard drive for the Cloud instance – it determines the operating system and software available for use, the same way the hard drive on a laptop contains the operating system and software available to the laptop user. An instance shape is a template that determines the number of CPUs, amount of memory and other hardware resources allocated to an instance. Both are important considerations to ensure that an Oracle Cloud instance will be fit for purpose for a defined research project.

In the next Oracle Cloud Fundamentals for Researchers blog, we will take a closer look at the remaining two key elements: network tiers and security lists, and usage control, automation and credits. As always, if you need help or have questions, we are here to help you every step of the way. Contact us and connect with us on the Oracle for Research GitHub.
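A brief footnote on the Image OCIDs discussed above: every OCID follows a documented dotted layout, `ocid1.<resource-type>.<realm>.<region>.<unique-id>`, where the region field is empty for region-independent resources such as users and tenancies. A small parser sketch based on that published layout – the example values in the comments and tests are invented, not real identifiers:

```python
def parse_ocid(ocid):
    """Split an OCID into its documented parts.

    OCIDs follow the form
      ocid1.<resource-type>.<realm>.[region].<unique-id>
    e.g. an image OCID in Ashburn starts "ocid1.image.oc1.iad.".
    """
    parts = ocid.split(".")
    if len(parts) < 5 or parts[0] != "ocid1":
        raise ValueError(f"not a valid OCID: {ocid!r}")
    return {
        "version": parts[0],
        "resource_type": parts[1],       # e.g. "image", "instance"
        "realm": parts[2],               # e.g. "oc1" for the commercial realm
        "region": parts[3],              # "" for region-independent resources
        "unique_id": ".".join(parts[4:]),
    }
```

Seeing the region key embedded in the identifier makes it clear why the same image carries different OCIDs in Ashburn and Frankfurt.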


Research Computing

High Performance Computing Helps Researchers Predict Whether a Drug Will Harm Your Heart

(Don't miss our December 8, 2020 webinar with Internet2. Register here.)

Cardiotoxicity. It’s a terrible-sounding word for an equally terrible circumstance: it describes when a drug meant to cure one ailment also harms a patient’s heart. Cardiotoxicity is a serious and expensive problem for the pharmaceutical industry; nearly 10% of drugs in the past four decades have been pulled from the worldwide clinical market due to cardiovascular concerns, despite efforts to gauge cardiotoxicity risk in the development and testing stages. It’s also a serious problem for doctors and patients, who have to balance the need for a cure with the risk of heart problems.

Cardiotoxicity occurs with several classes of drugs, especially when they are administered at higher doses. It is particularly prominent in drugs used to treat cancer. The same drugs that kill cancer cells may also attack the tissues of the heart. Sometimes the effects are immediate. Sometimes they occur years later, after a patient has long been cancer free. Doctors who treat cancer work with cardiologists – doctors who treat heart ailments – to mitigate the cardiac harm that a patient undergoing chemotherapy may suffer. Their challenge is to administer enough of the drug to eliminate the cancer while keeping the dosage low enough to avoid damaging the heart. It would undoubtedly be better if they had more options, including new drugs that can fight cancer without harming the heart, even at high dosages.

Researchers in the Vorobyov Lab and Clancy Lab at the University of California, Davis, are working with Oracle for Research to develop an in silico (computer) model to better understand and predict the likelihood of pharmaceutical compounds causing heart rhythm disturbances, known as arrhythmias. Led by Dr.
Igor Vorobyov, the team's first task is testing the model against compounds that are known to bind to a cardiac ion channel protein encoded by a specific human gene (hERG), blocking critical molecular interactions that manage the heart’s rhythms. Once the model is tested and trained, it will try to predict the pro-arrhythmic proclivities of compounds whose effects are not known.

The research team also aims to resolve an additional challenge that is not well addressed by current pharmaceutical testing protocols. Cardiotoxicity often occurs in heart tissue that is unhealthy, but in many patients, unhealthy tissue is masked by healthy heart tissue. To truly assess cardiotoxicity risk, drugs must be tested in the context of comorbidities: what will this drug do to a patient who already has complicating factors that increase his or her risk of heart problems, like a cancer patient who also suffers from diabetes?

The focus of the research project is to use advanced molecular dynamics simulations to develop an AI-driven, in silico, multi-scale functional model that predicts – in the early stages of drug development – the likelihood that a drug will harm the heart, at what doses that harm will occur, and what additional risks might exist in the context of common comorbidities. The work is computationally intensive and requires high performance CPU and GPU processors that exceed the researchers’ local computing resources. By moving the research work to Oracle Cloud Infrastructure, Dr. Vorobyov and his team gained access to enterprise-scale computing, including high performance bare metal CPU and GPU shapes that can be used in combination with Oracle’s ultra-fast networks. This enables the team to process more data and run more simulations more quickly. Introducing high performance, scalable, enterprise cloud computing accelerates discovery and is transforming medical research and treatment. If successful, the work that Dr.
Vorobyov and his team are doing will save pharmaceutical companies billions of dollars, which could potentially lower the consumer cost of drugs. More importantly, it will save lives by more accurately predicting safer dosages and supporting faster development of new and different drugs that more effectively fight disease while leaving untargeted tissues untouched.

Learn more about this critical research. Sign up to attend a webinar on December 8, 2020 at 12:00 PM ET, hosted by Oracle and Internet2, to learn more about how UC Davis and Oracle for Research are combating drug-induced cardiotoxicity.


Cloud Tech for Researchers

Oracle Cloud Fundamentals for Researchers: Getting Started with Your Cloud Tenancy

Written by Rajib Ghosh, Senior Solutions Architect, Oracle for Research

As a researcher, you know that cloud computing accelerates research results. Yet, even if you’ve used cloud computing before, your Oracle Cloud tenancy might be a new and unfamiliar place – and figuring out how to navigate and optimize your use of it might be challenging. We get it. That’s why we’re launching this new blog series – Oracle Cloud Fundamentals for Researchers – to make it easier for you to get the Oracle Cloud technical information you need to succeed. In the coming weeks, we’ll provide information, insights, and guidance to help accelerate your use of Oracle Cloud Infrastructure (OCI). Each blog will help you with different aspects of setting up and using your Oracle for Research OCI tenancy. Each blog starts with an outline of the included content for easy reference. Happy learning!

In this blog, we introduce setting up your Oracle Cloud tenancy, including:

- Logging into Oracle Cloud – the First Time
- Logging into Oracle Cloud – After the First Time
- Things to Know Before Creating an Instance
- Generating and Downloading Keys
- Using Custom Keys
- Creating an Instance
- Connecting to a Running Instance through Secure Shell (SSH)

Figure 1 below shows a simple workflow of the key steps for getting started.

Figure 1. Simplified workflow for getting started on Oracle Cloud

Logging into Oracle Cloud – the First Time

The primary technical contact for your Oracle for Research tenancy (also called the “Cloud Administrator” in your Oracle for Research award notice) will need to complete the initial login and set up your tenancy. When your OCI account is provisioned by Oracle for Research, your Cloud Administrator will receive an email from Oracle that includes a web link to OCI. The first time your Cloud Administrator logs in to OCI, they will want to use this link.
The user name for your OCI tenancy is the email address that you provided to Oracle for Research in the “Primary Technical Contact’s university email address” field when you applied to Oracle for Research. Your Cloud Administrator will get a second email from Oracle that includes a temporary password. Once the Cloud Administrator has logged in to OCI, they should change the password; they will need this new password to log back into the tenancy. If you must write the password down, be sure to keep the note in a secure place. The Cloud Administrator should not share their password with others. Instead, you will learn below how to create users within the tenancy for other members of the research team. These users will have their own user names and passwords.

A typical first-time OCI login page is shown in Figure 2 below. The tenancy name is displayed beneath the “Oracle Cloud” heading. This tenancy name is assigned by Oracle Cloud and cannot be changed.

Figure 2. First-time Oracle Cloud login screen.

We recommend that you note the tenancy name and keep it in a safe place. All users will need this tenancy name for subsequent logins, and you will need it if you contact OCI support.

Logging into Oracle Cloud – After the First Time

Once you have activated your tenancy by logging in for the first time, you will follow a different path to log in to your Oracle Cloud tenancy for subsequent use. To navigate to the Oracle Cloud sign-in page, go to oracle.com and click on the account icon (a stick-figure head) on the top right side of the page. Choose “Sign in to Cloud.” (Alternatively, navigate directly to oracle.com/cloud/sign-in.html.) You will be presented with a page as in Figure 3 below. The “Cloud Account Name” is your tenancy name – not your user name!

Figure 3. Cloud Account entry screen.
After you enter your Cloud Account Name, you will be presented with a login screen that provides the preferred method of logging into OCI: authenticating through Oracle Identity Cloud Service (IDCS), as shown in the single sign-on (SSO) section in Figure 4. This is a more secure login method where your Oracle Cloud identity is stored in one place with access to multiple OCI services. The Cloud Administrator’s account will be enabled to log in using either option in Figure 4. (Technically speaking, the Cloud Administrator is automatically provisioned an IDCS account for their SSO login.) We recommend logging in with the provided IDCS account. For a more detailed understanding, check out Understanding Sign-In Options.

Figure 4. Subsequent IDCS-enabled login screen.

TIP: Once you are logged in, you may use the “Quick Actions” buttons on the Oracle Cloud console home screen to explore common introductory actions: creating a VM instance, creating a transactional autonomous database or an autonomous data warehouse, setting up a network, storing data in object storage, and creating a stack with Resource Manager. Additional options can be explored from the drop-down menu at the top left corner of the screen in Figure 5.

Figure 5. Oracle Cloud console home screen.

Things to Know Before Creating an Instance

An “instance” is a compute machine running on Oracle Cloud. It has the following key attributes:

The hardware shape – Specifies the CPU/GPU, memory and the associated local storage. The shape can be a virtual machine (“VM”) or a physical machine (bare metal or “BM”). Compared to VM shapes, BM shapes have no hypervisor overhead and thus typically are provisioned for larger or dedicated workloads. BM shapes generally cost more than VM deployments. To learn more about virtualization, you can explore An Introduction to Virtualization.
In Oracle Cloud, compute power for a shape is measured in “OCPUs” (Oracle CPUs) – the physical CPU cores allocated to the shape. If you’ve used other clouds or VMs, you may be familiar with vCPUs. Be aware: OCPUs are not directly comparable to vCPUs. An OCPU is a dedicated core with at least two threads. Each CPU core can be hyper-threaded, and multiple virtual machines can take advantage of this technology. CPU power for a VM instance is measured in “vCPUs” (virtual CPUs) and reflects the number of threads in the VM. For example, a VM that includes one core with 2 threads has one OCPU and 2 vCPUs, while a VM that includes 3 cores, each with 2 threads, has 3 OCPUs and 6 vCPUs. Still confused? This YouTube video might help.

As a researcher, this information is important for two reasons. First, it’s important to ensure your configuration has the compute power you need. Second, Oracle pricing is based on OCPUs, so understanding what an OCPU is will help you better manage your spend.

TIP: We recommend that researchers start small and scale from smaller to larger shapes. This helps us and you: it helps you optimize use of your Oracle for Research Cloud credits (or your dollars, in a paid tenancy), and it helps us optimize allocation of our hardware. What does this mean in practice?

- Choose VM.Standard shapes when you are starting out and exploring your OCI tenancy.
- Choose flexible AMD shapes or legacy VM.E2.x shapes in accordance with your computational needs.
- Choose BM shapes for very large computational loads for better price performance.

As you progress through your research project, you may also explore advanced compute options such as creating an instance on a dedicated host. This ensures that all your VMs will be created on a single dedicated machine, which is advantageous for securing your data and computations.

The operating system image – Determines the operating system for an instance.
Oracle Cloud instances may be deployed with Linux (Oracle Linux, CentOS, Ubuntu) or Windows operating systems. Oracle provides flexibility in choosing among OS images: you can choose from the OCI platform, Oracle pre-built images, custom images, the OCI marketplace, or the OCI GitHub library. The pre-built images come with a variety of software tools pre-installed on the OS and can help you jumpstart your project.

TIP: Within the Cloud console, you can also spin up compute instances based on custom images or on boot volumes from another compute instance. This feature can be used to:

a. Preserve your software installation and data across instance terminations
b. Move your installed software image to another availability domain (defined below)
c. Keep multiple versions of images
d. Download, upload, or share images across on-campus locations or other clouds
e. Terminate instances to save on cloud credits while keeping your installed software intact

Availability domain ("AD") – OCI is hosted in regions and ADs. A region is a localized geographic area, and an AD is one or more physical data centers located within an OCI region. A region may have one or more ADs. Compute instances are AD-specific, and traffic between ADs and regions is encrypted. To help ensure availability, ADs do not share infrastructure, but ADs within the same region are connected by low-latency, high-bandwidth networks. You can choose the region and the AD for your instance, though we recommend deploying your instance in your geographic region when possible. Please be aware that while all regions offer core OCI services, not all Cloud services are available in all regions, so check availability when creating your instance to avoid having to make service requests later.

TIP: For researchers, we recommend that you create all your VM resources within a single AD. Doing so lets you take advantage of lower network latency.
Network configuration (VCN, subnets, and compartments) – Within OCI, your cloud resources connect using a virtual cloud network (VCN). Oracle's Networking service uses virtual versions of traditional networking components. Your network resources can also be configured to isolate various instances and to set up complex cloud architectures. Please consult the OCI network documentation for more details.

TIP: In most cases, researchers should choose the default network parameters. However, if you have questions or specific requirements and need assistance, please email us at OracleForResearchTech_ww@oracle.com.

Generating and Downloading Keys (Linux Only)

For Linux instances, Oracle recommends using public and private keys to log in with a secure shell (SSH), as this is more secure than maintaining passwords. Oracle Cloud automatically generates both the public and the private keys, but you will need to save (download) the private key before you create the compute instance, as shown in Figure 6 below.

Figure 6. Downloading a private key.

Oracle values your privacy and security and does not keep records of private keys. Accordingly, it is very important that you store your downloaded private SSH keys in a safe place. Losing these keys means losing access to your VM instances. Oracle cannot help you reset or recover your private keys. If you lose a key, you will need to terminate the instance and recreate it from your last backup with a different key.

Using Custom Keys (Linux Only)

Some users prefer to use custom keys. If desired, you may use open-source tools such as PuTTYgen or ssh-keygen to generate your own private and public key pairs. To use custom keys you have generated, instead of clicking "Generate SSH Keys" as shown in Figure 6 above, click "Choose SSH key files" to upload your public key to Oracle Cloud. Please note that using and managing custom keys is outside of OCI's scope.
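As a sketch of the custom-key route, the ssh-keygen invocation can be assembled as below. The key type, key length, and file path are illustrative choices, not OCI requirements:

```python
# Sketch: build an ssh-keygen command that creates a custom SSH key pair
# suitable for an OCI Linux instance. Paths and key size are examples.
from typing import List

def build_keygen_command(key_path: str, bits: int = 4096) -> List[str]:
    """Return an ssh-keygen invocation that creates an RSA key pair.

    ssh-keygen writes the private key to key_path and the public key
    (the file you upload via "Choose SSH key files") to key_path + ".pub".
    """
    return [
        "ssh-keygen",
        "-t", "rsa",      # key type; RSA is widely supported for SSH logins
        "-b", str(bits),  # key length in bits
        "-f", key_path,   # private key file; the public key gets a .pub suffix
        "-N", "",         # empty passphrase; set one for extra security
    ]

# To actually generate the pair, run the command with subprocess:
#   import subprocess
#   subprocess.run(build_keygen_command("/home/me/.ssh/oci_key"), check=True)
print(build_keygen_command("/home/me/.ssh/oci_key"))
```

Whether generated this way or with PuTTYgen, only the `.pub` file is ever uploaded; the private key stays on your machine.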
However, if you have questions about custom keys, you may email OracleForResearchTech_ww@oracle.com for additional information.

Creating an Instance

Now that you are familiar with OCI instances and terminology, you're ready to create your instance. Recall that you will create your instance from the OCI Cloud console, shown in Figure 5 above. To create a VM instance, simply click the "Create a VM instance" box. Or, if you prefer to navigate from the menu, choose Compute > Instances to open the Create Instance screen. Specify the image, shape, availability domain, and network parameters, and download your generated private keys if applicable, using the guidance provided above as needed. Once you have entered this information, click the "Create" button to create the instance. Once the instance is created, you will be able to view the instance details as shown in Figure 7 below.

Figure 7. Viewing the instance details.

Connecting to a Running Instance through Secure Shell (SSH)

The last step of setting up your instance is configuring it so you can connect while it is running. To do this, start by logging in to your instance. You can log in securely using PuTTY, PowerShell (Windows), or a Linux bash shell. Figure 8 below shows an example of SSH login through PowerShell.

Figure 8. SSH login with PowerShell.

You can also forward a local port to connect to any desired port on your OCI VM. An example is given in Figure 9 below:

Figure 9. SSH login forwarding a local port to the OCI VM.

If you prefer a GUI, you can use PuTTY (shown in Figure 10 below) or TightVNC:

Figure 10. SSH login using PuTTY.

Your OCI instance is now configured and ready to use. The next steps on your path with OCI include loading software, creating users, and creating your support account. You may also need to copy data into your instance. We will explain how to do these things in upcoming blogs in the Oracle Cloud Fundamentals for Researchers series.
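The SSH commands behind these login and port-forwarding steps can be assembled as below. The key path, the IP address, and the port numbers are illustrative; the user name `opc` is the default on Oracle Linux images (other OS images use a different default, e.g. `ubuntu`):

```python
# Sketch: assemble the SSH commands used to reach a running OCI instance.
# Paths, the example IP (from the documentation range), and ports are
# placeholders; "opc" is the default login user on Oracle Linux images.

def ssh_command(key_path: str, host: str, user: str = "opc") -> str:
    """A plain SSH login to the instance, using the downloaded private key."""
    return f"ssh -i {key_path} {user}@{host}"

def ssh_forward_command(key_path: str, host: str,
                        local_port: int, remote_port: int,
                        user: str = "opc") -> str:
    """An SSH login that also forwards a local port to a port on the VM,
    e.g. to reach a notebook or VNC server running on the instance."""
    return f"ssh -i {key_path} -L {local_port}:localhost:{remote_port} {user}@{host}"

print(ssh_command("~/.ssh/oci_key", "203.0.113.10"))
print(ssh_forward_command("~/.ssh/oci_key", "203.0.113.10", 8888, 8888))
```

With the forwarded port in place, pointing a local browser or VNC client at `localhost:8888` reaches the corresponding service on the instance without exposing it to the public internet.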

Written by Rajib Ghosh, Senior Solutions Architect, Oracle for Research

Transforming Research

Realizing a Vision: Accelerated Cancer Research

In our current world, a good deal of energy in the health sciences has shifted toward understanding SARS-CoV-2, as well as treating and finding a vaccine against the disease it causes, COVID-19. Oracle for Research has been proud to collaborate with researchers working on these efforts. Yet, not all of our energy has been focused on COVID-19, as the same challenges that existed before the pandemic persist. Over the past year, I have had the honor of working with researchers at the Lawrence J. Ellison Institute for Transformative Medicine at USC, who are part of an exciting experiment that envisions both the opportunity and the space to take a multidisciplinary approach to exploring and treating cancer. Founded by Dr. David Agus and supported by Oracle Chairman and Chief Technology Officer Larry Ellison, the Institute envisions interdisciplinary teams armed with modern tools, modern technology, and modern solutions redefining cancer care, transforming lives, and enhancing health. A major milestone on the road to realizing this vision was recently achieved: Nature published a groundbreaking study by Ellison Institute researchers that used Artificial Intelligence (AI) and machine learning to develop a proof of principle that has the potential to transform breast cancer diagnosis and treatment. The research team included a medical doctor, a biomedical engineer who is also a computer scientist, a pathologist, a theoretical physicist who specializes in signaling dynamics in cancer cells, and a professor of medicine and practicing physician. The project was funded in part by a grant from the Breast Cancer Research Foundation; the data came from tissue samples housed in different databases; and the analytical work and machine learning algorithms were developed using Oracle Cloud. Oracle technical advisors lent their expertise to optimize the use of Oracle Cloud technologies.
In brief, the research team found evidence that with deep learning, tissue morphology can be used to successfully identify histologic H&E features that predict the clinical subtypes of breast cancer, potentially providing a viable and cost-effective alternative to more expensive molecular screening. If their approach – using “tissue fingerprints” identified by deep learning to classify breast cancers – continues to prove successful, they will essentially democratize the diagnosis and treatment of breast cancer, making individualized medicine more accessible and more affordable for more women and men around the world. This, in and of itself, is rich reward for Dr. Agus’s early vision, and the Ellison Institute is just at the beginning. We are only now getting a glimpse of what becomes possible when innovative research and cloud technology intersect and are used for good. The Ellison Institute’s approach to research – bringing together multidisciplinary research teams, clinicians, patients, and technology – means successful outcomes potentially can happen much faster, and be found in unexpected ways. In pre-COVID days, I was fortunate to visit the Ellison Institute in person. At the time, they were preparing to move from existing buildings at USC to their new, state-of-the-art facility, so I was only able to glimpse the reality that will come to pass when the move is complete. I was excited to begin to understand their methodical approach of combining traditional “wet lab” research using tissue samples and microscopes with new, computational experiments. Contrary to what I expected, the Ellison Institute isn’t just running computational simulations; instead, they are both leveraging data from “wet lab” experiments to produce qualitatively new kinds of results through computational approaches, and conducting novel experiments in computational labs that enable exploration beyond the limits of current “wet lab” technologies.
This isn’t just about getting to results faster – though that is certainly happening too – it’s about changing the kinds of questions scientists can ask and the ways they test hypotheses. They are reimagining scientific experimentation. This is truly transformative work. The Ellison Institute brings together biologists, physicists, data scientists, researchers, doctors, and patients. Oracle contributes the computational tools that enable these multidisciplinary teams to do their work faster. I am excited about the possibilities that lie ahead. To learn more about the Ellison Institute’s revolutionary findings on breast cancer research, Oracle for Research will host the paper’s co-author Dr. Dan Ruderman in a live webinar on August 20th at 1pm PT. Registration is open to all. Attendees will have an opportunity to engage in thoughtful conversation with Dr. Ruderman and researchers worldwide.


Advances in Research

As Tropical Viruses Creep Northward, Visualizing a Potential New Vaccine

This article was written by Aaron Ricadela and was originally published in Forbes. As tropical diseases spread from their historical home territories into new regions including Europe and the United States, UK researchers equipped with high-performance cloud computing have designed a novel way to vaccinate against one of the most resilient to treatment. Scientists at the University of Bristol and the French National Center for Scientific Research are proposing using a lab-produced protein molecule that can act as a delivery system for future vaccines against the mosquito-borne illness chikungunya. The findings, which will be published in a paper this week in the journal Science Advances, show how the protein can be readily manufactured and stored for weeks at warm temperatures, making it easier to ship in regions where refrigeration is an obstacle. It’s also easy to produce in high volumes, an advantage as the disease spreads northward. “The sample is so stable you can transport it at room temperature—that’s the big deal,” says Imre Berger, a biochemistry professor at the University of Bristol and one of the paper’s authors. Designing the so-called vaccine scaffold, or delivery system, involved constructing detailed 3D images from cryogenically frozen samples scanned by an electron microscope, using high-performance cloud computing from Oracle. “You need to see what you’re actually engineering,” he says. “This is tailor-made for the cloud because every image can be processed in parallel.” The work could contribute to efforts to thwart tropical viruses that have spread beyond their usual zones. Chikungunya, whose name derives from the East African Makonde language and describes walking bent with pain, is related to dengue fever and causes high temperatures, joint pain, and exhaustion. It’s spread among humans by the bite of tiger mosquitos under the right conditions. 
There are no available treatments or vaccines, though the French biotech company Valneva in May reported promising results of a chikungunya vaccine trial of 120 healthy volunteers in the US. The illness belongs to a group of diseases, including Zika virus, that were previously found largely in sub-Saharan Africa, Asia, and India and have spread to the Northern Hemisphere as globalization and warmer climates push infected mosquitos far north of their normal ranges. A mysterious illness outbreak a dozen years ago in northern Italy among villagers who hadn’t traveled abroad turned out to be chikungunya. The disease spread to Florida in 2014. Zika too has broken out in the US with a rash of cases in 2016 and an infection in Texas two years ago. Deep Freeze To develop a vaccine candidate called ADDomer, scientists synthesized a protein that resembles a buckyball, or 12-sided molecule, that can carry an antigen, or substance capable of stimulating an immune response to a virus. Experiments showed ADDomer mimicked the virus’ behavior in mice and triggered immune responses, according to the Science Advances paper. The scientists’ innovation is the scaffold, which they showed can accommodate hundreds of different epitopes, the target to which an antibody binds in an immune reaction. Visualizing the proteins with help from Oracle Cloud Infrastructure was key to the molecule’s design and done at a fraction of what it would have cost to use an on-premises supercomputer cluster, according to Frederic Garzoni, a co-author of the paper and co-founder of Imophoron, a startup founded to commercialize the scientists’ approach. The ADDomer scientists studied the molecular structure of the synthetic protein using computer-generated images stitched together from exposures made by the University of Bristol’s cryo-electron microscope. The apparatus rapidly freezes samples with liquid nitrogen to nearly 200 degrees below zero Celsius to yield two-dimensional pictures. 
Special software can then assemble these into 3D images at nearly atomic resolution. “How the protein works is very strongly coupled to its 3D shape,” says Christopher Woods, a research software engineering fellow at the University of Bristol, who was involved in the computational work. By spending just 217 British pounds ($270) on Oracle CPU and GPU power delivered as a cloud service, the team processed a large number of cryo-electron microscope images to generate a single 3D structure. “If you only need to generate an image occasionally it’s obviously much cheaper to do it in the cloud,” he says. Public cloud computing is increasingly augmenting or replacing traditional supercomputers in molecular biology, physics, and other scientific fields. Discover what you can accomplish with Oracle for Research.


Advances in Research

The Woolcock Institute of Medical Research Explores Insomnia's Causes Using Oracle Autonomous Database

This article was written by Lisa Morgan and was originally published in Forbes. Doctors tell patients that a healthy lifestyle requires a nutritious diet, exercise, and adequate sleep. And they can give you lots of tips on the right food and activity level. But they understand much less about the causes of insomnia, how it affects individuals, and how to help those suffering from poor sleep. The Woolcock Institute of Medical Research in Sydney is using data science to discover how treatment can be tailored to a patient's insomnia characteristics. Specifically, Woolcock researchers are studying the brain signals of sleeping patients to understand the physiology of insomnia in greater depth. Using Oracle Autonomous Data Warehouse, Woolcock researchers can build a data model in as little as an hour versus the weeks or more it used to take using shared high-performance computer resources. By using machine learning to automate many of the steps in the data science process, Woolcock researchers can dive into problem-solving sooner. "With Oracle, we don't have to focus so much on the technical part, we can focus on what's needed to sleep,” says Dr. Tancy Kao, a data scientist at the Woolcock Institute. The institute is a network of more than 200 researchers and clinicians working to improve sleep and respiratory health globally through research, clinical care, and education. The institute is working with Oracle for Research, a global program that provides scientists, researchers, and university innovators with cost-effective cloud technologies, participation in the Oracle research user community, and access to Oracle’s technical support network.   Types of Insomnia At some point in their lives, most adults will complain of trouble sleeping. Acute insomnia tends to be circumstantial, such as the lack of sleep one gets when they're nervous about giving a speech or upset about losing a job. 
Chronic insomnia, which tends to affect adults age 40 to 60, is the inability to sleep well for three or more nights per week for at least a month, according to Kao. One form of chronic insomnia is the "wired and tired" brain that suffers from a less-than-normal sleep duration. These individuals have trouble falling asleep or staying asleep, so their total sleep time is about three to four hours per night versus healthier individuals who sleep seven to eight hours. "Paradoxical insomniacs" sleep for the same duration as healthy patients, but their EEG slow wave activity—deep sleep—is comparatively weak. Sleep duration and sleep quality are both important, although sleep quality matters more. When a person doesn't sleep well for prolonged periods, they are at a higher risk of anxiety, depression, high blood pressure, and heart disease. One-third of Australians will experience insomnia at some point in their lives, the Woolcock estimates. "There are all these people with insomnia that aren't treated effectively, so the data science is going to let us understand the condition better and look for new treatments that are going to be targeted based on our understanding of the condition," says Christopher Gordon, associate professor at the University of Sydney, and a contributor to the Woolcock's research. Right now, the two most typical treatments are pills to temporarily relieve symptoms, and cognitive behavior therapy (CBT) which identifies and treats the root causes of insomnia. Both treatments have limitations because people tend to take sleep aids for too long and not everyone is willing to go to or stay in therapy. "It really is a 24-hour condition, not just something associated with sleep," Gordon says. Data Science Helps Provide Answers The Woolcock Institute collects a lot of data about individual patients, including a detailed questionnaire about patients' perceived sleep patterns, medical history, work environment, and home environment. 
They collect two weeks of data from a wearable activity monitor, observations from a sleep lab, and diary entries that track behavioral factors, such as caffeine and alcohol intake. Data science is used to connect habits to changes in sleep, identifying which activities, and how much of them, help or hurt sleep. “For example, is it an indoor or outdoor activity? Are they spending more time chatting with friends and family, or did they consume more drinks?" says the Woolcock Institute's Kao. "This helps us understand who can get the most benefits from the activities and who cannot." A major source of data comes from patients spending a night in the institute’s sleep lab, hooked up to a high-density electroencephalogram (EEG) device. The device records brain activity from 256 electrodes every 2 milliseconds at a sample rate of 500 hertz. The result is millions and sometimes billions of data points per patient. The Woolcock uses Oracle Autonomous Data Warehouse, running on Oracle Cloud Infrastructure, for data collection, preparation, and analysis. The team can then separate the different types of data, helping researchers understand the relationships among variables. Before using Oracle Autonomous Data Warehouse, Kao had to manually clean the data and study individual variables and their relationships. Only after that could she prepare the data for analysis, build models, and do the actual analysis. If there was data missing from a variable, she would have to determine what to do next. Kao likes how the Oracle Autonomous Data Warehouse offers suggestions for doing analysis and using machine learning, and she can decide whether a suggestion is helpful. "You can follow the suggestions to clean your data, explore the data plot of all the variables, and understand what the chain looks like,” Kao says. “It also gives you simple classifications, and you can decide whether the classification is reasonable or not and whether we should use it in machine learning."
Previously, the Woolcock was storing data on servers and using a high-performance computer for analysis, and the process for using it was technical and time-consuming. With that previous system, "you have to use Linux commands to assign a task like modeling or machine training. If you want to visualize the result, you have to go back to the computer," Kao says. "When you submit one modeling task, it may take two or three days to come back with the results. It depends on how heavy, how demanding your machine learning is to do that." Building an entire model used to take one or two months, assuming it proved accurate. If the model wasn't accurate, it meant starting over. Now, Kao can build a model in as little as an hour without coding or having to understand mathematical modeling. Kao does know how to do all of that—she can work in R and Python, use Linux commands, and she understands mathematical modeling well—but Oracle Autonomous Data Warehouse saves her time by automating a lot of the manual work she and Gordon had to do previously. "We have tremendous amounts of data on [each] individual patient, and we need to be able to process that data quickly," Gordon says. "I can look at it through various mechanisms, different ways of visualizing it, come up with different ideas and then we can just literally click buttons to do the machine learning, and we can explore the models that we think are going on, and it gives us the answers right away." Data Visualizations Will Get More Sophisticated So far, the Woolcock has built a 2D visualization that shows the location in the brain of each of the 256 channels and the importance of individual variables as they relate to a specific patient’s insomnia. The goal is to build 3D visualizations, so the researchers can understand the path of individual signals as they travel from one part of the brain to another. 
Doing so could let them not only understand what’s happening in one part of the brain, but also how it might affect other areas, and whether it’s related to a given symptom. "Oracle can show you how the brain wave changed or moved across the brain overnight,” Kao says. “It helps us not only identify the parts of that brain that play a role in insomnia but how these areas talk to each other." Discover what you can accomplish with Oracle for Research. 


Advances in Research

Ellison Institute Uses AI to Accelerate Cancer Diagnosis and Treatment

This article was written by Margaret Lindquist and was originally published in Forbes. Nearly 150 years after the introduction of modern tissue staining to detect cancer, doctors are still diagnosing the disease much the way they did then. They still look at each biopsy sample under a microscope and home in on suspicious areas, although new technologies are now allowing them to identify the molecular markers, like DNA mutations, that indicate whether a patient would respond best to one therapy or another. But that diagnostic odyssey is beginning to change, thanks in part to groundbreaking research conducted at the Lawrence J. Ellison Institute for Transformative Medicine of USC. There, the introduction of artificial intelligence into cancer research and treatment has the potential to revolutionize both fields, just as AI is doing in transportation (self-driving cars and self-flying planes), education (chatbot advisers), finance (predictive investment models), and a range of other sectors. “I asked a pathologist, ‘If there’s one thing a computer could do for you, what would it be?’ He said, ‘I want it to tell me where to look,’” says Dr. Dan Ruderman, director of analytics and machine learning at the institute and assistant professor of research medicine at the Keck School of Medicine at the University of Southern California. “Digital pathology is how the medical field is going to catch up with Silicon Valley. Computers are showing us that they can see things we can’t, so we’re taking these techniques that have been perfected for other domains and transferring all that knowledge over to pathology.” The institute is scanning slides that show slices of biopsies and analyzing that visual data with AI algorithms trained to recognize areas of concern. After enough training, the algorithm will be able to not only recognize cancer, but even recommend a course of treatment.
The use of deep learning and neural networks will make sophisticated diagnoses available even in developing countries, where there may be only one doctor in a region who has experience with diagnosing cancer. “There’s a basic diagnostic slide that is taken for every patient—but not every place has someone who can read the slide and determine the subtype of cancer and be able to recommend a specific course of drugs,” Dr. Ruderman says. “If the AI can handle that, it will not only save these hospitals money, but it should also result in better patient care.” Unique Combination The Ellison Institute is unique among research facilities, Dr. Ruderman notes, because it combines an established research group with an established clinical group, both focused on discovering new ways to diagnose and treat cancer. The Ellison Institute is building a new, wireless 5G-enabled building in West Los Angeles that will be home to a community outreach team, a health policy think tank, educational sessions—even an art gallery. Most importantly, the new facility is designed to bring together researchers and patients, making it possible to follow a patient’s progress from diagnosis through outcome. “The notion of bringing patients and researchers together is really novel,” Dr. Ruderman says. “Patients are able to tour the research labs and talk to researchers about what they're doing. Researchers will be better able to understand what patients are going through.” Facilitated through the Oracle for Research program, the Ellison Institute’s extended research team also includes Oracle computing experts and technology resources, enabling Dr. Ruderman and other scientists to conduct computational experiments alongside their laboratory research. “Oracle for Research was developed to enable researchers to use the power of cloud computing to solve some of the world’s hardest problems,” says Oracle for Research vice president Alison Derbenwick Miller. 
“We provide access to Oracle Cloud and to technical advisors who collaborate with researchers. At the Ellison Institute, this further expands the patient-doctor-researcher team and broadens the approach to improving patient diagnosis and treatment.” Intense Workloads Are Just the Beginning The 3D imaging at the center of the pathology research is computationally intense and requires massive amounts of storage. For example, the data for just 1,000 patients, with 100 images per patient at 10 gigabytes per image, requires 1 petabyte—or a million gigabytes—of storage. Patient data comes not only from the patients at the clinic, but also from public sources such as The Cancer Genome Atlas and the Australian Breast Cancer Tissue Bank. To handle its intensive, scalable computing needs, the institute is using Oracle Autonomous Database, Oracle Cloud Infrastructure bare metal compute instances, and Oracle Cloud storage, as well as Oracle Cloud Infrastructure FastConnect for network connectivity. “We've been building neural networks using Oracle Cloud, and now that we are working in three dimensions, we have much, much more data,” Dr. Ruderman says. “It's going to be taxing the system a lot more and requiring a lot more computation.” “Using Oracle’s autonomous capabilities eliminates much of the manual IT labor that’s required for standard databases,” says Muhammad Suhail, Oracle Cloud Infrastructure principal product manager. “Dr. Ruderman’s team was using four CPUs, and they wanted to go to eight,” Suhail says. “All we did was tell them where to find the button. They clicked it, and now the data warehouse is running with eight CPUs without needing any downtime.” Dr. Ruderman says the institute stopped worrying about scaling issues once it started using Oracle Autonomous Database. “Now I can think more about science and less about technology,” he says. And there is more to come. 
The most revolutionary aspect of the institute’s work is still on the horizon: using AI not just for diagnosis, but also for experimentation—whereby, in effect, scientists are running experiments within the computer. For example, Ruderman says that the plan is to use artificial intelligence to interpret complex 3D patterns and determine whether those images provide more information than can be derived from 2D views. “We want to answer a very basic question—whether we can learn a lot more about a patient by looking at this added dimension in pathology,” says Dr. Ruderman. Dr. Ruderman is particularly excited about applying the institute’s research to better the lives of clinic patients. “We want to understand everything about the patient pipeline. Rather than focusing on just the piece that we think is important, we focus on all the pieces,” he says. “It gives us perspective on the patient's journey and how we can improve it. I hope that everybody on my team would be able to answer this question: ‘How does your work impact a patient's care?’” This unique vision of the future of patient care and research is a crucial benefit that Derbenwick Miller emphasizes. “We want to bring together patients, clinicians, researchers, and cloud computing into a single setting. The Ellison Institute’s approach seeks to rapidly advance and improve the diagnosis and treatment of cancer, and this should ultimately result in a better quality of life and prognosis for patients,” says Derbenwick Miller. Discover what you can accomplish with Oracle for Research. 

This article was written by Margaret Lindquist and was originally published in Forbes. Nearly 150 years after the introduction of modern tissue staining to detect cancer, doctors are still diagnosing...

Advances in Research

Research, Oracle Cloud and AI converge to help reduce the risk of diabetic amputations

Losing a limb is life changing…and expensive

The loss of a limb can be devastating to a person’s life. As well as the impact on mobility, independence, and participation in day-to-day activities, it can also have a significant impact on a person’s relationships, community, and social life. To add to this, an amputation can radically change how a person views themselves and their future. Amputees often have to cope with ongoing health issues (e.g., pain), learn new skills, and adjust their expectations about their capabilities.

An amputation is not only a major life-changing event for the individual; the burden and cost of ulceration on the UK NHS is over £5 billion a year. One in four people with diabetes will develop a foot ulcer in their lifetime. Of those, about a quarter will require a lower limb amputation as a life-saving procedure. Astoundingly, though, experts believe that with more proactive care up to half of all amputations could be avoided. A significant challenge for at-risk individuals is accessing effective care early and having the information and tools to self-care thereafter. To avoid eventual amputation, leg and foot ulcers and associated problems need to be treated quickly and correctly to reduce the risk of non-healing wounds, secondary health problems, and deteriorating health [1].

Researchers at Manchester Met have made a breakthrough

Imagine what would change if AI could offer early detection of ulcers and proactively refer patients for care. For example, what if AI could help a patient, their carer, or a relatively low-skilled clinician to identify a foot or leg ulcer early and monitor its progression? Not only could the patient avoid an amputation; such a solution would also deliver significant time and cost savings for health services. A clinical tool that is simple to use, widely accessible, and scientifically robust could relieve clinical burden and provide a paradigm shift for diabetic footcare.
A team of researchers at Manchester Metropolitan University, co-led by Prof. Neil Reeves and Dr. Moi Hoon Yap, has been working on a solution to achieve just that. Enabled by Oracle’s high performance computing, they have developed AI algorithms that use computer vision technology to identify a foot ulcer at various stages of its development. The application, called FootSnap AI, can automatically identify diabetic foot ulcers and associated pathologies using deep learning. It was developed using thousands of diabetic foot images and has been subjected to extensive scientific peer review, with results published in a number of medical and computer vision journals [2,3,4]. In lab trials, FootSnap AI has shown high sensitivity (0.934) and specificity (0.911) in identifying diabetic foot ulcers from foot images.

Testing in a real world setting

The NHS Manchester Amputation Reduction Strategy (The MARS Project), which Oracle has also been supporting, will shortly commence a programme to test the efficacy of the technology in a real-world setting. The original performance of the standalone mobile app was constrained by its hardware capability and could only support a lightweight AI/deep learning model. Built on AI technology developed by Manchester Met and now using Oracle Cloud Infrastructure, FootSnap AI is scalable and can respond to new demands rapidly. The GPU-equipped cloud infrastructure speeds up inference time and provides better accuracy in ulcer detection.

“Understanding the treatment of ulceration and whether these wounds are getting better or worse is essentially pattern recognition. Further, the real breakthrough will come if we - health professionals and patients - can identify these wounds much earlier and therefore initiate much more timely treatment. This is where artificial intelligence is potentially a game changer,” says Naseer Ahmad, Consultant Vascular Surgeon at Manchester University NHS Foundation Trust.
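For readers unfamiliar with the two metrics quoted above, sensitivity and specificity are derived from a confusion matrix of predictions against ground truth. A minimal sketch, using hypothetical counts chosen only to reproduce the published figures:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: ulcer images correctly flagged / all ulcer images."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: healthy images correctly cleared / all healthy images."""
    return tn / (tn + fp)

# Hypothetical counts: 934 of 1,000 ulcer images flagged,
# 911 of 1,000 ulcer-free images correctly cleared.
print(sensitivity(tp=934, fn=66))   # 0.934
print(specificity(tn=911, fp=89))   # 0.911
```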
“Oracle Cloud has provided the framework for our AI architecture to be deployed to the cloud as a service to mobile clients. Oracle Cloud delivers an online enterprise-scale solution where our data can be stored, processed, and monitored seamlessly using state-of-the-art web technologies,” says Bill Cassidy, Research Associate at Manchester Metropolitan University, in Manchester, UK.

References:
[1] An NHS England study estimates that having effective care early reduces leg ulcer healing times from approximately two years to just a few months and is 10 times less expensive. But many patients suffer unnecessarily for several years due to a lack of knowledge and not accessing the right care. NHS England (2017). NHS RightCare scenario: The variation between sub-optimal and optimal pathways.
[2] Goyal, M., Reeves, N., Rajbhandari, S., & Yap, M. H. (2019). Robust Methods for Real-Time Diabetic Foot Ulcer Detection and Localization on Mobile Devices. IEEE Journal of Biomedical and Health Informatics, 23(4), 1730-1741. doi:10.1109/JBHI.2018.2868656
[3] Yap, M. H., Chatwin, K. E., Ng, C. C., Abbott, C. A., Bowling, F. L., Rajbhandari, S., . . . Reeves, N. D. (2018). A New Mobile Application for Standardizing Diabetic Foot Images. Journal of Diabetes Science and Technology, 12(1), 169-173. doi:10.1177/1932296817713761
[4] Goyal, M., Reeves, N. D., Davison, A. K., Rajbhandari, S., Spragg, J., & Yap, M. H. (2018). DFUNet: Convolutional Neural Networks for Diabetic Foot Ulcer Classification. IEEE Transactions on Emerging Topics in Computational Intelligence. doi:10.1109/TETCI.2018.2866254


Research Computing

What is HPC in the Cloud? Exploring the Need for Speed

Welcome to Oracle Cloud Infrastructure Innovators, a series of occasional articles featuring advice, insights, and fresh ideas from IT industry experts and Oracle cloud thought leaders.

High Performance Computing (HPC) refers to the practice of aggregating computing power in a way that delivers much higher horsepower than traditional computers and servers. HPC is used to solve complex, performance-intensive problems, and organizations are increasingly moving HPC workloads to the cloud. HPC in the cloud is changing the economics of product development and research because it requires fewer prototypes, accelerates testing, and decreases time to market.

I recently sat down with Karan Batta, who manages HPC for Oracle Cloud Infrastructure, to discuss how HPC in the cloud is changing the way that organizations, new and old, develop products and conduct cutting-edge scientific research. We talked about a variety of topics, including the key differences between legacy on-premises HPC workloads and newer HPC workloads that were born in the cloud. Listen to our conversation here and read a condensed version below.

Let's start with a basic definition. What is HPC and why is everyone talking about it?

Karan Batta: HPC stands for High Performance Computing—and people tend to bucket a lot of stuff into the HPC category. For example, artificial intelligence (AI) and machine learning (ML) are a bucket of HPC. And if you're doing anything beyond building a website—anything that is dynamic—it's generally going to be high performance. From a traditional perspective, HPC is very research-oriented, or scientifically oriented. It's also focused on product development. For example, think about engineers at a big automotive company making a new car. The likelihood is that the engineers will bucket all of that development—all of the crash testing analysis, all of the modeling of that car—into what's now called HPC.
The reason the term HPC exists is because it's very specialized. You may need special networking gear, special compute gear, and high-performance storage, whereas less dynamic business and IT applications may not require that stuff.

Why should people care about HPC in the cloud?

Batta: People and businesses should care because it really is all about product development. It's about the value that manufacturers and other businesses provide to their customers. Many businesses now care about it because they've moved some of their IT into the cloud. And now they're actually moving stuff into the cloud that is more mission-critical for them—things like product development. For example, building a truck, building a car, building the next generation of DNA sequencing for cancer research, and things like that.

Legacy HPC workloads include things like risk analysis modeling and Monte Carlo simulation, and now there are newer kinds of HPC workloads like AI and deep learning. When it comes to doing actual computing, are they all the same, or are these older and newer workloads significantly different?

Batta: At the end of the day, they all use computers and servers and network and storage. The concepts from legacy workloads have been transitioned into some of these modern cloud-native type workloads like AI and ML. Now, what this really means is that some of these performance-sensitive workloads like AI and deep learning were born in the cloud when cloud was already taking off. It just so happened that they could use legacy HPC primitives and performance to help accelerate those workloads. And then people started saying, "Okay, then why can't I move my legacy HPC workloads into the cloud, too?" So, at the end of the day, these workloads all use the same stuff. But I think that how they were born and how they made their way to the cloud is different.

What percentage of new HPC workloads coming into the cloud are legacy, and what percentage are newer workloads like AI and deep learning?
Which type is easier to move to the cloud?

Batta: Most of the newer workloads like AI, ML, containers, and serverless were born in the cloud, so there are already ecosystems available to support them in the cloud. Rather than look at it percentage-wise, I would suggest thinking about it in terms of opportunity. Most HPC workloads that are in the cloud are in the research and product development phase. Cutting-edge startups are already doing that. But the big opportunity is going to be in legacy HPC workloads moving into the cloud. I'm talking about really big workloads—think about Pfizer, GE, and all these big monolithic companies that are running production HPC workloads on their on-premises clusters. These things have been running for 30 or 40 years and they haven't changed.

Is it possible to run the newer HPC workloads in my old HPC environment if I already have it set up? Can companies that have invested heavily in on-premises HPC just stay on the same trajectory?

Batta: A lot of the latest, more cutting-edge HPC workloads were born in the cloud. You can absolutely run those on old HPC hardware. But they're generally cloud-first, meaning that they have already been accelerated on graphics processing units (GPUs). Nvidia, for example, is doing a great job of making sure any new workloads that pop up are already hardware accelerated. In terms of general-purpose legacy workloads, a lot of that stuff is not GPU accelerated. If you think about crash testing, for example, that's still not completely prevalent on GPUs. Even though you could run it on GPUs if you wanted, there's still a long-term timeline for those applications to move over. So, yes, you can run new stuff on the old HPC hardware. But the likelihood is that those newer workloads have already been accelerated by other means, and so it becomes a bit of a wash.
In other words, these newer workloads are built cloud-native, so trying to run them on premises on legacy hardware is a bit like trying to put a square peg in a round hole. Is that correct?

Batta: Exactly. And you know, somebody may do that, because they've already invested in a big data center on premises and it makes sense. But I think over time this is going to be the case less and less.

Come talk with Karan and others about HPC on Oracle Cloud Infrastructure at SC18 in Dallas next week in booth #2806.


Advances in Research

Critical Research Gets a Boost From Free Oracle Cloud Computing

This article was written by Sasha Banks-Louie and was originally published in Forbes.

One research team is combining artificial intelligence and computer vision technology to help treat diabetics. Another is using 3D imaging to analyze rocks and predict their capacity to absorb carbon dioxide, and thereby reduce global warming. A third team created a platform used in designing new vaccines. These life-changing efforts are part of a program called Oracle for Research. Its goal is to help researchers take on some of the world’s most pressing problems and yield measurable results within the next five years. As part of the program, Oracle is providing researchers with cloud computing resources, technical support, and data expertise.

Problems like those described above are data-intensive and require massive amounts of information to be processed quickly. Researchers affiliated with academic institutions or nonprofit research organizations worldwide can submit their projects online for consideration in the Oracle program. “Granting access to high-performance computing power alone is not enough,” says Alison Derbenwick Miller, who runs Oracle for Research. “Most researchers are neither computing experts nor data scientists, so we give them access to a dedicated team of technical experts and architects to allow researchers to focus on what they know best—their research and their results.”

For Moi Hoon Yap, that research involves using artificial intelligence to help clinicians treat patients with diabetes. A professor of computer vision and artificial intelligence at Manchester Metropolitan University, Yap, along with Professor Neil Reeves and their team of researchers, is working with the UK’s National Health Service and Oracle for Research to develop FootSnap AI, a mobile app that lets diabetics and their doctors quickly diagnose foot ulcers.
Diabetics frequently suffer nerve damage to their extremities that can cause a loss of foot sensation, so they might not notice a problem with their skin—“even when it’s breaking down or forming an ulcer,” Yap says. If such ulcers go untreated, they can infect the foot and require it to be amputated. FootSnap AI can respond to new demands rapidly, “with the cloud infrastructure speeding up the inference time and providing better accuracy in ulcer detection,” Yap says.

To train its machine-learning algorithm, FootSnap AI ingests thousands of images of diabetic foot ulcers, supplied and annotated by podiatrists at Lancashire Teaching Hospitals NHS Foundation Trust. When a patient uploads an image of his or her foot to the app, the FootSnap algorithm looks for characteristics similar to those in the other images. The model runs on a virtual machine and an Nvidia P100 GPU on Oracle Cloud Infrastructure. Since upgrading to Oracle, “we’re not spending time maintaining servers anymore,” says Bill Cassidy, a research associate and lead application developer on the project. “It affords us a lot more time to do the real work of researching and writing papers about how to solve this health crisis.”

Removing Carbon from the Atmosphere

Another researcher in the program is Saswata Hier-Majumder, a professor of geophysics at Royal Holloway, University of London, who is working on a project to capture carbon dioxide (CO2) from the atmosphere and permanently store it in rocks underground. With his team of PhD students, he has developed a simulation that analyzes digital images of rocks and predicts their capacity to absorb CO2 and organically remineralize it. The team takes images captured with 3D microtomography and runs them through its simulation engine to determine the pore volume of each fragment. A rock with 15% porosity might be able to hold and mineralize twice the amount of liquid CO2 as one with 5%.
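The pore-volume idea can be illustrated with a toy calculation. In a segmented 3D scan, porosity is simply the fraction of voxels classified as pore space, and multiplying by the sample volume gives the pore volume available to hold CO2. This sketch uses synthetic voxel data, not the team's actual simulation:

```python
import random

random.seed(42)

# Synthetic stand-in for a segmented 3D microtomography volume:
# True = pore voxel, False = solid rock. Target ~15% pore space.
n_voxels = 64 * 64 * 64
pore_voxels = sum(random.random() < 0.15 for _ in range(n_voxels))

porosity = pore_voxels / n_voxels          # fraction of pore space
sample_volume_cm3 = 10.0                   # hypothetical fragment size
pore_volume_cm3 = porosity * sample_volume_cm3

print(f"porosity ~ {porosity:.1%}, pore volume ~ {pore_volume_cm3:.2f} cm^3")
```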
Royal Holloway’s simulation also runs on Oracle Cloud Infrastructure, which lets researchers pick the amount of memory and the number of threads needed to process the massive numbers of scanned images in a way that the team’s previous, on-premises computing options couldn’t. Says Hier-Majumder: “Oracle has helped us break the barrier of how much computational power we have in the lab.”

A third effort involves researchers from the University of Bristol and vaccine-technology startup Imophoron. They tapped Oracle’s program to help build what they describe as a vaccine design platform. The platform provides an “atomic blueprint of the common nanoparticle scaffold we now use for all vaccine designs,” says Imre Berger, professor of synthetic biology at the University of Bristol and cofounder of Imophoron. Building that scaffold involved huge volumes of 3D images taken by an electron microscope and then processed using the high-performance computing capabilities of Oracle Cloud Infrastructure. Last year, the lab used the design platform for work on a vaccine against the mosquito-borne illness called chikungunya.

Discover what you can accomplish with Oracle for Research.


Research Computing

ANSYS and Oracle: ANSYS Fluent on Bare Metal IaaS

If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, you’ve likely used a product in whose creation ANSYS software played a critical role. ANSYS is a global leader in engineering simulation, and Oracle is pleased to announce its partnership with ANSYS. Oracle Cloud Infrastructure bare metal compute instances enable you to run ANSYS in the cloud with the same performance that you would see in your on-premises data center.

Why Bare Metal Is Better for HPC

Oracle Cloud Infrastructure continues to invest in high performance computing (HPC), and nothing beats the performance of bare metal. The virtualized, multi-tenant platforms common to most public clouds are subject to performance overhead. Traditional cloud offerings require a hypervisor to enable the management capabilities needed to run multiple virtual machines on a single physical server, and hardware manufacturers have demonstrated that this additional overhead significantly affects performance [i]. Bare metal servers, without a hypervisor, deliver uncompromising and consistent performance for HPC.

Instances with the latest generation NVMe SSDs, providing millions of IOPS and very low latency, combined with Oracle Cloud Infrastructure's managed POSIX file system, ensure that Oracle Cloud Infrastructure supports the most demanding HPC workloads. Our bare metal compute instances are powered by the latest Intel Xeon processors and secured by the most advanced network and data center architecture, yet they are available in minutes when you need them—in the same data centers, on the same networks, and accessible through the same portals and APIs as other IaaS resources. With Oracle Cloud Infrastructure’s GPU instances, you also have a high performance graphical interface to pre- and post-process ANSYS simulations.
ANSYS Performance on Bare Metal OCI Instances

The performance of ANSYS Fluent software on Oracle Cloud Infrastructure bare metal instances meets, and in some cases exceeds, the raw performance of other on-premises HPC clusters, demonstrating that HPC can run well in the cloud. Additionally, consistent results demonstrate the predictable performance and reliability of bare metal instances. The following chart shows the raw performance data of the ANSYS Fluent f1_racecar_140m benchmark on Oracle Cloud Infrastructure's Skylake and Haswell compute instances. The model is a 140-million-cell CFD model. Visit the ANSYS benchmark database to see how Oracle Cloud Infrastructure compares favorably to on-premises clusters.

Figure 1: ANSYS Fluent Rating on Oracle Cloud Infrastructure Instances

Installation and configuration of ANSYS Fluent on Oracle Cloud Infrastructure is simple, and the experience is identical to the on-premises installation process. Bare metal enables easy migration of HPC applications; no additional work is required for compiling, installing specialized virtual machine drivers, or logging utilities.

Although the performance is equal to an on-premises HPC cluster, the pricing is not. You can easily spend $120,000 or more on a 128-core HPC cluster [ii], and that's just for the hardware; that number doesn’t include power, cooling, and administration. That same cluster costs just $8 per hour on Oracle Cloud Infrastructure. That’s an operating expense you pay only when you use it, not a large capital expense you have to try to “right-size” and keep constantly in use to achieve the best ROI. Running on Oracle Cloud Infrastructure means that you can budget ANSYS Fluent jobs precisely, in advance, and the elastic capacity of the cloud means that you never have to wait in a queue.
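Those two price points imply a simple break-even calculation (hardware cost only, ignoring the power, cooling, and administration the article mentions):

```python
# Break-even point between the article's two price points:
# ~$120,000 of on-premises hardware vs. $8/hour for an equivalent cloud cluster.
onprem_hardware_usd = 120_000
cloud_usd_per_hour = 8

breakeven_hours = onprem_hardware_usd / cloud_usd_per_hour
breakeven_years = breakeven_hours / (24 * 365)  # if the cluster ran 24/7

print(f"{breakeven_hours:,.0f} hours, or ~{breakeven_years:.1f} years of continuous use")
```

In other words, the cloud cluster only costs more than the hardware alone if it is kept busy around the clock for well over a year.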
Scaling Is Consistent with On-Premises Environments

When virtualized in your data center, CPU-intensive tasks that require little system interaction normally experience very little impact or CPU overhead [iii]. However, virtualized environments in the cloud include monitoring, which adds significant overhead per node. Virtualization overhead is not synchronized across an entire cluster, which creates problems for MPI jobs such as ANSYS Fluent, which effectively have to wait for the slowest node in a cluster to return data before advancing to the next simulation iteration. You’re only as fast as your slowest node, noisiest neighbor, or overburdened network.

With Oracle Cloud Infrastructure’s bare metal environment, no hypervisor or monitoring software runs on your compute instance. With limited overhead, ANSYS Fluent scales across multiple nodes just as well as it would in your data center. Our flat, non-oversubscribed network virtualizes network IO on the core network, instead of depending on a hypervisor and consuming resources on your compute instance. The two 25 Gb network interfaces on each node guarantee low latency and high throughput between nodes. As shown in the following chart, many ANSYS Fluent models scale well across the network.

Figure 2: ANSYS Fluent Scaling on an Oracle Cloud Infrastructure Instance

The following chart illustrates greater than 100% efficiency with respect to a single core, from 400,000 cells per core down to below 50,000 cells per core.

Figure 3: Efficiency Remains at 100% Even as Cells Per Core Drop

Serious HPC Simulations in the Cloud

Oracle Cloud Infrastructure has partnered with ANSYS to provide leading HPC engineering software on high performance bare metal instances so that you can take advantage of cloud economics and scale for your HPC workloads. Our performance and scaling with ANSYS match on-premises clusters. It’s easy to create your own HPC cluster, and the cost is predictable and consistent.
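The efficiency metric behind the scaling charts can be sketched as follows. The timings here are hypothetical, chosen only to illustrate how better-than-ideal scaling (over 100% efficiency) shows up in the numbers:

```python
def parallel_efficiency(t_single: float, t_parallel: float, n_nodes: int) -> float:
    """Speedup over one node, divided by node count; 1.0 means ideal scaling."""
    return (t_single / t_parallel) / n_nodes

def cells_per_core(total_cells: int, n_nodes: int, cores_per_node: int) -> float:
    """Per-core share of a fixed-size CFD model as the cluster grows."""
    return total_cells / (n_nodes * cores_per_node)

# Hypothetical run of a 140-million-cell benchmark on 8 nodes x 36 cores:
print(f"{cells_per_core(140_000_000, 8, 36):,.0f} cells per core")
eff = parallel_efficiency(t_single=800.0, t_parallel=98.0, n_nodes=8)
print(f"efficiency = {eff:.1%}")  # slightly above 100%
```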
No more waiting for the queue to clear for your high-priority ANSYS Fluent job, and no more over-provisioning hardware. Sign up for 24 free hours of a 208-core cluster or learn more about Oracle Cloud Infrastructure's HPC offerings. And for more examples of how Oracle Cloud outperforms the competition, follow the #LetsProveIt hashtag on Twitter.

[i] http://en.community.dell.com/techcenter/high-performance-computing/b/general_hpc/archive/2014/11/04/containers-docker-virtual-machines-and-hpc
[ii] Example price card: https://www.hawaii.edu/its/ci/price-card/
[iii] https://personal.denison.edu/~bressoud/barceloleggbressoudmcurcsm2.pdf


Research Computing

Exabyte.io for Scientific Computing on Oracle Cloud Infrastructure HPC

We recently invited Exabyte.io, a cloud-based, nanoscale modeling platform that accelerates research and development of new materials, to test the high-performance computing (HPC) hardware in Oracle Cloud Infrastructure. Their results were similar to the performance that our customers have been seeing and what other independent software vendors (ISVs) have been reporting: Oracle Cloud Infrastructure provides the best HPC performance for engineering and simulation workloads.

Exabyte.io enables its customers to design chemicals, catalysts, polymers, microprocessors, solar cells, and batteries with its Materials Discovery Cloud, which allows scientists in enterprise R&D units to reliably exploit nanoscale modeling tools, collaborate, and organize research in a single platform. As Exabyte.io seeks to provide its customers with the highest-performing and lowest-cost modeling and simulation solutions, it has done extensive research and benchmarking with cloud-based HPC solutions, and we were eager to have them test the Oracle Cloud Infrastructure HPC hardware.

Exabyte.io ran several benchmarks, including general dense matrix algebra with LINPACK, density functional theory with the Vienna Ab-initio Simulation Package (VASP), and molecular dynamics with GROMACS. The results were impressive and prove the value, performance, and scale of HPC on Oracle Cloud Infrastructure. The advantage of Oracle Cloud Infrastructure's bare metal was obvious with LINPACK: throughput is almost double that of the closest cloud competitor and consistent with on-premises performance. Latency is even more interesting: the BM.HPC2.36 shape with RDMA provides the lowest latency at any packet size and is orders of magnitude faster than cloud competitors. In fact, for every performance metric that Exabyte.io tested on VASP and GROMACS, Oracle's BM.HPC2.36 shape with RDMA (shown as OL in the following graph) outperformed the other cloud competitors.
Below is a great example of both the performance and scaling of Oracle Cloud Infrastructure on VASP. When parallelizing over electronic bands for large-unit-cell materials and normalizing for core count, the single-node performance of the BM.HPC2.36 exceeds its competitors and then scales consistently as the cluster size increases. The BM.HPC2.36 runs large VASP jobs faster and can scale larger than any other cloud competitor.

Exabyte.io has provided the full test results on their website. Their blog concluded that "Running modeling and simulations on the cloud with similar performance as on-premises is no longer a dream. If you had doubts about this before, now might be the right time to give it another try."

By offering bare metal HPC performance in the cloud, Oracle Cloud Infrastructure enables customers running the largest workloads on the most challenging engineering and science problems to get their results faster. The results that Exabyte.io has seen are exceptional, but they are not unique among our customers. Spin up your own HPC cluster in 15 minutes on Oracle Cloud Infrastructure.
