Sid Padgaonkar

Sr. Director - Product Management (Gen AI) - Strategic Customers

Sid Padgaonkar is Senior Director with OCI's Strategic Customers Group. Sid is focused on generative AI product incubation, outbound product management, and go-to-market (GTM) strategy.

Recent Blogs

Monitoring Oracle Cloud Infrastructure with Datadog

The OCI + Datadog integration enables you to monitor your entire OCI stack within a single platform, alongside other third-party technologies in your environment. After you install the integration and configure OCI for metrics collection, Datadog begins collecting metrics from your OCI services within minutes and populates out-of-the-box (OOTB) dashboards for metrics such as total bytes traveling in and out of your network, average database execution time, GPU performance, and more.

Serving Llama 3.1 405B model with AMD Instinct MI300X Accelerators

In this blog, we share the latest results of serving the largest Llama models on AMD Instinct MI300X GPUs on Oracle Cloud Infrastructure (OCI), benchmarking various common scenarios.

Announcing General Availability of OCI Compute with AMD MI300X GPUs

BM.GPU.MI300X.8 is now generally available. Get in touch with your Oracle sales representative or Kyle White, VP of AI Infrastructure Sales, at kyle.b.white@oracle.com. Learn more about this bare metal (BM) instance in our documentation.

Early LLM serving experience and performance results with AMD ...

As OCI Compute works toward launching AMD Instinct MI300X GPU bare metal offerings in the coming months, this blog post recounts our technical journey running real-world large language model (LLM) inference workloads with Llama 2 70B on MI300X hardware, covering LLM serving and inference workload development, deployment, and performance benchmark results.

Finetuning in large language models

Large language model (LLM) finetuning is a way to enhance the performance of pretrained LLMs for specific tasks or domains, with the aim of achieving improved inference quality with limited resources. Finetuning is crucial for domain-specific applications where pretrained models lack the necessary context or specialized knowledge. This blog post delves into different finetuning options, discussing the appropriate use case for each method.
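To make the resource trade-off concrete, here is a minimal NumPy sketch of low-rank adaptation (LoRA), one common parameter-efficient finetuning method. The layer width, rank, and function names below are illustrative assumptions, not taken from the blog post.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank = 64, 4                          # hypothetical layer width and LoRA rank
W = rng.standard_normal((d_model, d_model))    # frozen pretrained weight matrix

# Trainable low-rank factors. B starts at zero, so before any training
# the adapted layer computes exactly the same output as the pretrained one.
A = rng.standard_normal((d_model, rank)) * 0.01
B = np.zeros((rank, d_model))

def adapted_forward(x):
    # y = x @ (W + A @ B): the base weight W stays frozen during
    # finetuning; only the small factors A and B receive gradient updates.
    return x @ W + (x @ A) @ B

x = rng.standard_normal((2, d_model))
assert np.allclose(adapted_forward(x), x @ W)  # identical before training

full_params = W.size           # parameters updated by full finetuning
lora_params = A.size + B.size  # parameters updated by LoRA
print(f"LoRA trains {lora_params} of {full_params} parameters")
```

With these toy sizes, LoRA updates 512 parameters instead of 4,096, which illustrates why such methods can adapt a pretrained model with limited resources.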

Receive the latest blog updates