Tzvi Keisar

Director of Product Management

Product Manager in OCI Data Science

Recent Blogs

Simplifying AI Integration: Bringing Hugging Face models to AI Quick Actions in OCI Data Science

In the latest release of AI Quick Actions, we support bring-your-own-model from Hugging Face. Hugging Face is a popular AI model repository hosting many state-of-the-art LLMs, including Meta's Llama models and Mistral.

Enterprise chatbot with Oracle Digital Assistant, OCI Data Science, LangChain & Oracle Database 23ai

This blog post describes how to build a state-of-the-art chatbot by combining the latest technologies: OCI Data Science capabilities like AI Quick Actions and Model Deployment, Mistral-7B-Instruct-v0.2, Oracle Database 23ai, LangChain, and Oracle Digital Assistant. We guide you through each step, from setting up the foundational AI models to integrating them into a seamless conversational experience.

Revolutionizing Healthcare with AI: Building an Advanced Chatbot Using Mixtral, Oracle 23AI, RAG, LangChain, and Streamlit

This blog focuses on creating an advanced AI-powered healthcare chatbot by integrating Mixtral, Oracle 23AI, Retrieval-Augmented Generation (RAG), LangChain, and Streamlit. The chatbot uses the PubMed library to augment its data for RAG, giving it access to a vast repository of medical research and ensuring accurate, up-to-date information for users. By combining these cutting-edge technologies, the chatbot aims to provide reliable, efficient, and interactive healthcare support.
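The core retrieval step behind RAG can be sketched in a few lines. This is a minimal illustration, not the blog's actual pipeline: the document texts and three-dimensional vectors here are hypothetical stand-ins, whereas the post uses PubMed abstracts embedded with a real embedding model and an Oracle 23AI vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank pre-embedded documents by similarity to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:top_k]

# Toy corpus with hand-made embeddings (a real system would embed PubMed text).
corpus = [
    {"text": "Aspirin reduces fever.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Statins lower cholesterol.", "vec": [0.1, 0.9, 0.2]},
    {"text": "Ibuprofen treats inflammation.", "vec": [0.8, 0.2, 0.1]},
]
query = [1.0, 0.0, 0.0]  # embedding of a question about fever/pain relief
hits = retrieve(query, corpus)
# The retrieved passages are then prepended to the LLM prompt as grounding context.
```

The "augmentation" in RAG is exactly this: the top-ranked passages are pasted into the prompt so the model answers from retrieved evidence rather than from its weights alone.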

Generative AI made easy in OCI Data Science

This blog post describes how to enable AI Quick Actions, a no-code solution that simplifies working with large language models (LLMs) and makes them accessible to a wider audience.

Quantize and deploy Llama 2 70B on cost-effective NVIDIA A10 Tensor Core GPUs in OCI Data Science

NVIDIA A10 GPUs have been around for a couple of years. They are much cheaper than the newer A100 and H100, yet still very capable of running AI workloads, which makes them cost-effective. By quantizing the weights to 4 bits, even the powerful Llama 2 70B model can be deployed on two A10 GPUs. In this blog post, we show how to quantize the foundation model and then deploy it.
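The arithmetic behind that claim is simple to check. A back-of-the-envelope sketch, assuming 70 billion parameters and that weight memory dominates (KV cache and activations add overhead on top, and the helper function below is illustrative, not from the post):

```python
def model_weight_gb(params: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB at a given precision."""
    return params * bits_per_weight / 8 / 1e9

fp16 = model_weight_gb(70e9, 16)  # ~140 GB: far beyond two A10s
int4 = model_weight_gb(70e9, 4)   # ~35 GB: fits in 2 x 24 GB of A10 memory

print(f"fp16 weights: {fp16:.0f} GB, 4-bit weights: {int4:.0f} GB")
```

At fp16 the weights alone need roughly 140 GB, while 4-bit quantization brings them down to about 35 GB, which is why two 24 GB A10s become a viable serving target.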

