
EMEA A&C CCOE Partner Technology Cloud Engineering

Computing for your Dynamic Workloads

Alper Ozisik
Oracle EMEA A&C Cloud Adoption & Implementation Consultant

Your applications, mostly custom ones, typically run on compute services on OCI. There are other ways to run applications on OCI as well: Kubernetes is, in a way, an extension of compute, and serverless OCI Functions is another standalone way of running a piece of code.

Requests coming to your applications create your workload. You can have a persistent workload as well as dynamic workloads. In a persistent workload scenario, you can fine-tune your utilization. For a dynamic workload, you may need to select the technologies that best suit it.

Compute Instances

Compute instances are the most versatile of the options. You can use either a fixed shape or a flexible shape, with options for GPU, HPC, DenseIO, and virtualization type (VM or Bare Metal), so you can select the host best suited to your application.
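For example, a flexible shape lets you pick the exact OCPU and memory mix when launching an instance. Below is a minimal sketch using the OCI Python SDK; the compartment, subnet, and image OCIDs, the availability domain, and the display name are placeholders you would replace with values from your own tenancy.

```python
import oci

# Load credentials from the default OCI config file (~/.oci/config)
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Placeholder OCIDs and names -- replace with values from your tenancy
launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",
    availability_domain="Uocm:EU-FRANKFURT-1-AD-1",
    display_name="app-server-1",
    shape="VM.Standard.E4.Flex",  # a flexible shape
    shape_config=oci.core.models.LaunchInstanceShapeConfigDetails(
        ocpus=2,            # pick the exact OCPU count ...
        memory_in_gbs=32,   # ... and memory for your application
    ),
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1.eu-frankfurt-1.example",
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1.eu-frankfurt-1.example",
    ),
)

instance = compute.launch_instance(launch_details).data
print(instance.id, instance.lifecycle_state)
```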

For dynamic workloads, you have two options out of the box:

Compute Instance Pool

You create an instance configuration and, from that configuration, define a pool. In most scenarios this also involves a Load Balancer service. Thresholds defined on the instance pool govern the rules for scaling in and out. The idea behind this scaling is to release all unused resources: when scaling out, new compute instances are provisioned; when scaling in, instances and their storage are terminated. Since each scale-out creates new resources from scratch rather than reusing existing ones, it takes some time to complete.
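As an illustration, those thresholds are attached to an existing pool as an autoscaling configuration. The sketch below uses the OCI Python SDK; the compartment and pool OCIDs, the 2–10 instance capacity, and the 80%/20% CPU thresholds are example values, not recommendations.

```python
import oci
from oci.autoscaling import AutoScalingClient
from oci.autoscaling.models import (
    CreateAutoScalingConfigurationDetails, CreateThresholdPolicyDetails,
    CreateConditionDetails, Action, Metric, Threshold, Capacity,
    InstancePoolResource,
)

config = oci.config.from_file()
autoscaling = AutoScalingClient(config)

# Scale out by 1 instance above 80% CPU, scale in by 1 below 20% CPU
details = CreateAutoScalingConfigurationDetails(
    compartment_id="ocid1.compartment.oc1..example",            # placeholder
    display_name="app-pool-autoscaling",
    resource=InstancePoolResource(id="ocid1.instancepool.oc1..example"),
    cool_down_in_seconds=300,
    policies=[
        CreateThresholdPolicyDetails(
            display_name="cpu-threshold-policy",
            capacity=Capacity(initial=2, min=2, max=10),
            rules=[
                CreateConditionDetails(
                    action=Action(type="CHANGE_COUNT_BY", value=1),
                    metric=Metric(metric_type="CPU_UTILIZATION",
                                  threshold=Threshold(operator="GT", value=80)),
                ),
                CreateConditionDetails(
                    action=Action(type="CHANGE_COUNT_BY", value=-1),
                    metric=Metric(metric_type="CPU_UTILIZATION",
                                  threshold=Threshold(operator="LT", value=20)),
                ),
            ],
        )
    ],
)

print(autoscaling.create_auto_scaling_configuration(details).data.id)
```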

Burstable Instances

Burstable instances are a newer feature of OCI compute. These are VM instances running on an overprovisioned host, with CPU that scales up and down automatically on demand as long as the host has capacity. This happens without shutting down the VM, and you are charged only for the committed baseline amount.

If your workload needs to run above the baseline for extended periods, this might not be suitable for you. If the host has no spare capacity to scale up your VM, you might not be able to meet the SLAs you have with your customers.
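For reference, a burstable instance is an ordinary flexible-shape VM launched with a baseline utilization below 100%. A minimal sketch of the relevant shape configuration with the OCI Python SDK; the OCPU count, memory, and 12.5% baseline are example values.

```python
import oci

# Burstable capacity is expressed through the shape configuration at launch:
# the instance is billed for the baseline and can burst up to the full OCPU
# count while the host has spare capacity.
shape_config = oci.core.models.LaunchInstanceShapeConfigDetails(
    ocpus=2,                                   # burst ceiling: 2 OCPUs
    memory_in_gbs=16,
    baseline_ocpu_utilization="BASELINE_1_8",  # committed baseline: 12.5% of the OCPUs
)

# Pass this shape_config together with a supported flexible shape
# (e.g. shape="VM.Standard.E4.Flex") in the LaunchInstanceDetails shown earlier.
```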

Kubernetes

The Oracle Kubernetes Engine (OKE) service itself is free. Within the Kubernetes cluster, you have worker nodes running to host your workloads. These are compute instances with certain shapes, but you do not have the versatile options you have with regular compute instances, such as GPU.

There are several scaling options for Kubernetes:

  1. Kubernetes Horizontal Pod Autoscaler, to adjust the number of pods in a deployment
  2. Kubernetes Vertical Pod Autoscaler, to adjust the resource requests and limits of the containers running in a deployment's pods
  3. Kubernetes Cluster Autoscaler, to autoscale node pools by adjusting the number of nodes in the cluster

The first two options operate within the same node pool, without changing the size of the pool; that is, the pool you have already provisioned for your maximum workload, even if you do not utilize it fully.
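For instance, the Horizontal Pod Autoscaler from the first option can be created with the Kubernetes Python client as sketched below; the deployment name web, the 2–10 replica range, and the 75% CPU target are placeholder values.

```python
from kubernetes import client, config

# Uses your local kubeconfig (e.g. the one generated for your OKE cluster)
config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,                        # never drop below 2 pods
        max_replicas=10,                       # stay within the node pool's capacity
        target_cpu_utilization_percentage=75,  # add pods above 75% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```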

The third option adds new compute instances (nodes) to the pool. Similar to an instance pool, it spins up each new instance from scratch, which takes around 15 minutes. Scaling in can cause some disruption to the application.

Kubernetes for non-dynamic workloads

Kubernetes is not just for scaling. It has other features that make it worth using over plain compute instances:

  1. Better utilization – please watch the Partner Webcast - OCI Intro to Refactoring your Applications to Cloud Native
  2. Maintaining the desired state
  3. Rolling updates and rollbacks

Serverless – OCI Functions

The serverless option is great for dynamic workloads. Compared to the compute and Kubernetes options, you do not pay for idle resources, and you are not responsible for maintaining the system your code runs on.

Functions are suitable for workloads that require limited resources, such as a memory limit of 1 GB and no disk persistence. If a function is invoked infrequently, its response time can be slightly delayed (a cold start), which is in the nature of serverless architectures. A function must respond within a certain period (at most 2 minutes); otherwise, it is terminated.

If your workload fits within those limits, or you can break your application down to meet them, this is the best approach for dynamic workloads.
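Invoking a deployed function is a single API call. A minimal sketch with the OCI Python SDK follows; the function OCID and the JSON payload are placeholders.

```python
import json
import oci

config = oci.config.from_file()

# Look up the function's invoke endpoint, then call it
fn_id = "ocid1.fnfunc.oc1.eu-frankfurt-1.example"   # placeholder function OCID
mgmt = oci.functions.FunctionsManagementClient(config)
fn = mgmt.get_function(fn_id).data

invoke_client = oci.functions.FunctionsInvokeClient(
    config, service_endpoint=fn.invoke_endpoint)

resp = invoke_client.invoke_function(
    fn_id, invoke_function_body=json.dumps({"name": "OCI"}))

# resp.data wraps the HTTP response body returned by the function
print(resp.data.text)
```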

Most serverless applications are combined with compute instance and Kubernetes based services. Those services greet the customer when they visit your application, while the parts of the workload, mostly SOA based, that are compatible with OCI Functions run on the serverless platform.

Compute instances with Programmatic scaling

The solutions mentioned above (compute instance pools and Kubernetes) are for generic use cases. They are not fully aware of what the application does or how it operates. In order to fully release resources so that they are not charged, they destroy them completely, which prevents those resources from being re-allocated at short notice.

Serverless functions are handy because they do this in an automated fashion, with a built-in queuing system. However, there may be cases where, because of its limitations, serverless is not applicable.

In that case, you need to develop a custom solution that manages your compute instances with knowledge of their workloads.

A stopped compute instance with a standard shape does not incur compute charges (see Resource Billing for Stopped Instances). A compute instance typically starts (boots) within approximately 40 seconds.
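Such a custom solution can be as simple as starting instances ahead of an expected peak and stopping them afterwards. A minimal sketch with the OCI Python SDK; the instance OCID is a placeholder and the waiting steps are optional.

```python
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

instance_id = "ocid1.instance.oc1.eu-frankfurt-1.example"  # placeholder OCID

# Start a stopped standard-shape instance before the expected peak ...
compute.instance_action(instance_id, "START")
oci.wait_until(compute, compute.get_instance(instance_id),
               "lifecycle_state", "RUNNING")   # typically ~40 seconds

# ... and stop it again once the workload subsides, so it no longer
# incurs compute charges (its boot and block volumes are retained).
compute.instance_action(instance_id, "STOP")
oci.wait_until(compute, compute.get_instance(instance_id),
               "lifecycle_state", "STOPPED")
```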

What is next?

The diagram above shows a common decision flow chart, which is applicable in most cases. We, the EMEA A&C Partner Technology Cloud Engineers, can assist you in making the correct choice, working with you to optimize your offering on Oracle Cloud. Simply email us with your inquiry to engage.
