Amar Gowda

Sr. Principal Product Manager

Part of the AI/ML Incubations team, leading efforts across multiple initiatives. Passionate and active contributor to generative AI offerings, containers, container security, confidential computing, and efficient use of infrastructure. Amar also follows and contributes to open source projects in the Cloud Native Computing Foundation (CNCF).

Recent Blogs

Optimized Performance and Results on MI300x with LoRA Fine-Tuning for LLMs

ML fine-tuning performance results with AMD MI300X GPUs.

Serving Llama 3.1 405B model with AMD Instinct MI300X Accelerators

In this blog, we share the latest results of serving the largest Llama models on AMD MI300X GPUs on Oracle Cloud Infrastructure (OCI) by benchmarking various common scenarios.

Announcing General Availability of OCI Compute with AMD MI300X GPUs

BM.GPU.MI300X.8 is generally available now. Get in touch with your Oracle sales representative or Kyle White, VP of AI infrastructure sales at kyle.b.white@oracle.com. Learn more about this BM instance with our documentation.

Early LLM serving experience and performance results with AMD Instinct MI300X GPUs

As OCI Compute works toward launching AMD Instinct MI300X GPU bare metal offerings in the coming months, this blog post recounts our technical journey running real-world large language model (LLM) inference workloads with Llama 2 70B on AMD MI300X hardware. It covers the development and deployment of the LLM serving and inference workload, along with performance benchmark results.
