Generative AI adoption is trending, and Fortune 500 leaders are eager to create adoption plans for their enterprise. This post is the third of a five-part series that equips you to harness generative AI’s potential effectively. In this blog post, we dive into the basics of prompt engineering for large language models (LLMs) for your application.

Large language models (LLMs) can generate human-like text and understand natural language, and effective prompt engineering is crucial for realizing their potential. This blog post provides a guide to best practices in prompt engineering, such as establishing clear objectives, providing context and specificity, considering prompt length, using user instructions, and iterating based on outputs. Understanding these basics ensures optimal interaction with LLMs, enabling users to generate coherent and relevant text outputs.

Understanding the basics

Before diving into prompt engineering techniques, grasping the fundamentals is essential. LLMs, such as the Cohere Command model, are designed to generate text based on inputs. These inputs provided by the user are referred to as prompts. Prompts act as instructions, guiding the model to produce coherent and contextually relevant output. The key lies in formulating prompts that effectively convey the aims of the user interacting with the LLM.

Establish clear and concise objectives

The first step in engineering effective prompts is to clearly define the objective you want to achieve from the prompt. Whether it’s generating content or answering questions, a well-defined objective with a concise scope sets the foundation for crafting effective prompts.

For example, if you want the LLM to write a creative short story, the clearer your objectives with respect to subject, length, and style, the more precise the nature of the output.

Context and specificity

Context is key when working with LLMs. The more context you provide in your prompts, the better the model can understand your intent and generate relevant output. Include specific details, such as background information or constraints, to guide the LLM in the wanted direction. For example, if you want the model to write product descriptions, include details about the product, its features, and the target audience. This specificity helps the LLM tailor its responses to the given context.

Prompt length and format

The length and format of your prompts can significantly impact the output. While some tasks might require concise prompts, others can benefit from more elaborate instructions. For tasks that demand detailed responses, you can provide a longer and more structured prompt. Conversely, for quick and straightforward tasks, a concise prompt might suffice. The key is to iterate and refine based on the model’s responses.

Use system and user instructions effectively

In the context of LLMs, user instructions and system instructions play a crucial role. User instructions guide the model based on the user’s intent, while system instructions provide high-level guidelines on how the model should behave.

For example, you can use a user instruction like “Write a funny story,” followed by a system instruction like “in the style of Mark Twain.” This combination guides the LLM to generate a story with a specific theme and style.
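As a minimal sketch, the two kinds of instruction can be combined into a single prompt string before sending it to the model; the instruction text here is illustrative:

```python
# System-style instruction: high-level guidance on how the model should behave.
system_instruction = "You are a storyteller who writes in the style of Mark Twain."

# User instruction: the specific intent for this request.
user_instruction = "Write a funny story about a riverboat race."

# Prepend the system instruction so it frames the user's request.
prompt = f"{system_instruction}\n\n{user_instruction}"
print(prompt)
```

The resulting string can be passed to the model like any other prompt.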

Iterate and refine based on outputs

Prompt engineering is an iterative process. After receiving outputs from the LLM, analyze them to understand how well the model is aligning with your objectives. By refining your prompts based on the model’s outputs, you can fine-tune the performance and tailor the output to your needs. It’s a dynamic and adaptive process that involves learning from the model’s responses.

Let’s use these guiding principles to walk through some examples.

Example use case

Define a function that takes a prompt and a temperature value and calls the OCI Generative AI service endpoint to access the Cohere Command model. OCI Generative AI is available through an API and integrates with various large language models.

In a Jupyter notebook or Python file, add the following function. The function returns the text response generated by the LLM model.

import oci

# Create a default config using the DEFAULT profile in the default location
config = oci.config.from_file()

# Initialize the service client with the default config file
generative_ai_inference_client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config=config,
    service_endpoint="<endpoint>",  # Replace with your region's service endpoint
)

def generate_text(prompt, temp=0):
    generate_text_detail = oci.generative_ai_inference.models.GenerateTextDetails()
    generate_text_detail.compartment_id = "ocid1.test.oc1..<unique_ID>EXAMPLE-compartmentId-Value"
    generate_text_detail.serving_mode = oci.generative_ai_inference.models.OnDemandServingMode(model_id="cohere.command")
    cohere_generate_text_request = oci.generative_ai_inference.models.CohereLlmInferenceRequest()
    cohere_generate_text_request.temperature = temp
    cohere_generate_text_request.prompt = prompt
    cohere_generate_text_request.max_tokens = 200
    generate_text_detail.inference_request = cohere_generate_text_request
    response = generative_ai_inference_client.generate_text(generate_text_detail)
    return response.data.inference_response.generated_texts[0].text

Provide clear instructions

At its core, prompting an LLM is about sending a clear instruction to a text generation model and getting a response back. The input can be as simple as a single short line of instruction to the model.
user_input = "a pair of sneaker product named SK-20"
prompt = f"""Write a creative product description for {user_input}"""
response = generate_text(prompt, temp=0)
print(response)

The model returns the following response:

Looking for a stylish and comfortable pair of sneakers? Look no further than the SK-20! With its sleek design and high-quality materials, this sneaker is the perfect addition to any fashion-forward individual's wardrobe. Crafted with premium leather and breathable mesh panels, the SK-20 is designed to provide ultimate comfort and durability. The rubber outsole provides excellent traction and stability, making them ideal for everyday wear and any kind of terrain.

The sleek and modern design of the SK-20 features a low-top profile, making them versatile and easy to style. Whether you're running errands, hitting the gym, or attending a casual event, these sneakers will complement any outfit.

Plus, with its affordable price point, the SK-20 is an excellent value for anyone looking for a stylish and versatile pair of sneakers. Upgrade your footwear game with the SK-20 today!

Remember, when you wear the SK-20, you're not just wearing any ordinary

Give directions by adding specificity and context

One of the issues with the earlier prompt is that the generated output is very general.

The prompt wasn’t briefed on the specifics of the product, such as its features or target audience. In the same way that a human needs more information to write a product description, the model must be given a good set of details to guide its response.

Now, in the same prompt, let’s include specific information like features, target audience, and key benefits.

user_input_product = "a pair of sneaker product named SK-20"
user_input_keywords = "stylish, durable, flexible"
user_input_customer = "athletes who are daily runners or joggers"
user_input_describe = "benefits of this product"
prompt = f"""Write a creative product description for {user_input_product}, with the keywords {user_input_keywords} for {user_input_customer} and describe {user_input_describe}"""
response = generate_text(prompt, temp=0.5)
print(response)

Experience unparalleled comfort and style with the SK-20 sneakers, the perfect addition to any athlete's wardrobe. Designed with durability, flexibility, and style in mind, these sneakers are perfect for daily runners and joggers who want to take their performance to the next level.

The sleek design of the SK-20 sneakers is complemented by their durability, making them ideal for intense workouts and long runs. The flexible material ensures a comfortable fit while providing the necessary support for your feet, reducing the risk of injury during strenuous activities.

Whether you're hitting the tracks or making a fashion statement, the SK-20 sneakers check all the boxes. Their stylish design will make you stand out from the crowd, while the practical features will keep your feet happy throughout the day.

Don't settle for anything less than the best. Upgrade your footwear with the SK-20 sneakers today.

You can add specificity to a prompt in many ways, including the following examples:

  • Assign a persona or job role for the task, such as a brand manager. For example, “You are a brand manager. Write a product description for…”
  • Specify the tone of the text, such as fun, serious, casual, or inspirational. For example, “Tone: Inspiring.”
  • Request a specific length or number of paragraphs. This addition helps guide the model to be coherent and concise. For example, “Write in two paragraphs.”
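These additions can be collected into a small helper that assembles the final prompt; this function and its parameters are an illustrative sketch, not part of the OCI SDK:

```python
def build_prompt(task, role=None, tone=None, length=None):
    """Assemble a prompt from a task plus optional role, tone, and length hints."""
    parts = []
    if role:
        parts.append(f"You are a {role}.")  # persona or job role
    parts.append(task)
    if tone:
        parts.append(f"Tone: {tone}.")      # desired tone of the text
    if length:
        parts.append(f"Write in {length}.") # length or paragraph constraint
    return " ".join(parts)

prompt = build_prompt(
    "Write a product description for the SK-20 sneakers.",
    role="brand manager",
    tone="inspiring",
    length="two paragraphs",
)
print(prompt)
```

The assembled string can then be passed to the model as before.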

Add context

Although LLMs can generate fluent text for a given input, they aren’t inherently aware of your domain-specific context. In real-world applications, context is the key, and being able to add context to the prompt personalizes the generative AI for a domain or a company. In the following example, the prompt asks about the product without providing any context. (All responses shown are illustrative.)

user_input = "What are the key features of sneaker SK-20"
prompt = user_input
response = generate_text(prompt, temp=0)

The SK-20 sneaker is a popular shoe model known for its stylish design and comfortable wear. Here are some of its key features:

1. Lightweight Construction: The SK-20 is designed to be lightweight, making it ideal for everyday wear and sports activities. Its breathable mesh upper and lightweight materials help keep your feet comfortable during long wear times.

2. Durable Outsole: The sneaker features a durable outsole made of rubber or synthetic material, providing excellent traction and grip on various surfaces. This makes it suitable for both indoor and outdoor use.

3. Flexible Midsole: The shoe's midsole is designed to provide cushioning and support for your feet. This helps to reduce fatigue during long walks or strenuous activities, providing comfort with every step.

Now, let’s provide relevant context for the product. In this example, we assume that the information is already available to add to the prompt.

context = "SK-20 sneakers showcase a sleek and modern aesthetic that effortlessly complements any outfit. The water-resistant feature of these sneakers is achieved through a special coating that repels water, preventing it from seeping into the shoe. This means you can confidently step through puddles and navigate rainy streets without worrying about soggy feet. Comfort is not compromised with our water-resistant sneakers. The breathable and moisture-wicking interior lining keeps your feet fresh and dry, while the cushioned insole provides excellent support and shock absorption. Whether you're walking, running, or engaging in other activities, these sneakers offer the perfect blend of comfort and functionality."
user_input = "What are the key features of sneaker SK-20"
prompt = f"""{context} Given the information above, answer this question: {user_input}"""
response = generate_text(prompt, temp=0)


The key features of the sneaker SK-20 are:

1. A sleek and modern design
2. Water-resistant coating
3. Complementary look for any outfit
4. Breathable and moisture-wicking interior lining
5. Cushioned insole for support and shock absorption.
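The context-plus-question pattern above can be wrapped in a small, reusable template; this helper is a sketch, not an SDK function:

```python
def grounded_prompt(context, question):
    """Combine retrieved context with a user question into a single grounded prompt."""
    return f"{context}\n\nGiven the information above, answer this question: {question}"

# The context string would normally come from your product catalog or knowledge base.
prompt = grounded_prompt(
    "SK-20 sneakers feature a water-resistant coating and a cushioned insole.",
    "What are the key features of sneaker SK-20?",
)
```

The assembled prompt is then passed to generate_text as before, keeping the grounding logic in one place as your application grows.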

Specify the format

Because LLMs generate free-form text by default, you often need to request a specific structure. These models can return responses in formats like JSON or YAML when asked. For example, given the following list of products and product descriptions, we request the model to generate the list in a JSON format:

prompt = """Turn the following information into a JSON string with the following keys: Product Name, Product Id, Product Description.
Product ID: #0890 NAME SK-20 DESC "Water proof shoes"
Product ID: #0891 NAME LX-33 DESC "Tall Leather Boots"
Product ID: #0892 NAME OX-29 DESC "Classic Lace-up shoes"
Product ID: #0811 NAME SD-42 DESC "Open toed sandals"
"""
response = generate_text(prompt, temp=0)

```json
[
  {
    "product_name": "SK-20",
    "product_id": "#0890",
    "product_description": "Water proof shoes"
  },
  {
    "product_name": "LX-33",
    "product_id": "#0891",
    "product_description": "Tall Leather Boots"
  },
  {
    "product_name": "OX-29",
    "product_id": "#0892",
    "product_description": "Classic Lace-up shoes"
  },
  {
    "product_name": "SD-42",
    "product_id": "#0811",
    "product_description": "Open toed sandals"
  }
]
```
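Note that the model wrapped its answer in a Markdown code fence, so the text needs light cleanup before an application can parse it. A minimal sketch, assuming the fence format shown above:

```python
import json

def parse_json_response(text):
    """Strip an optional Markdown code fence, then parse the remaining JSON."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")                # remove leading and trailing backticks
        cleaned = cleaned[cleaned.find("\n") + 1:]  # drop the "json" language-tag line
    return json.loads(cleaned)

# Illustrative sample mimicking the model's fenced output.
sample = '```json\n[{"product_name": "SK-20", "product_id": "#0890"}]\n```'
products = parse_json_response(sample)
print(products[0]["product_name"])  # → SK-20
```

Parsing the output this way lets you validate the structure and fail early if the model drifts from the requested format.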

Conclusion

The ability to effectively engineer prompts for LLMs is a valuable skill that opens doors to a myriad of applications across industries. As you embark on this journey, consider the following goals:

  • Start with clear objectives.
  • Be precise in your instructions.
  • Use instructions to tailor the output to your needs.
  • Remember it’s an iterative process.

Now that you’ve gained insights into prompt engineering, it’s time to put your knowledge into action. The following resources can help you get started on the journey. If you’re new to Oracle Cloud Infrastructure, try an Oracle Cloud Free Trial for 30 days with US$300 in credits.

Read part 1 of this five-part blog series: “Navigating the frontier: Key considerations for developing a generative AI integration strategy for the enterprise”

Read part 2 of this five-part blog series: “Comprehensive tactics for optimizing large language models for your application”

Read part 4 of this five-part blog series: “Finetuning in large language models”

For more information, see the following resources: