Core prompt learning techniques

Fundamental use cases

Having explored some more intricate techniques, it’s time to shift the focus to practical applications. In this section, you’ll delve into fundamental use cases where these techniques come to life, demonstrating their effectiveness in real-world scenarios. Some of these use cases will be expanded in later chapters, including chatbots, summarization and expansion, coding helpers, and universal translators.

Chatbots

Chatbots have been around for years, but until the advent of the latest language models, they were mostly perceived as a waste of time by users who had to interact with them. However, these new models are now capable of understanding even when the user makes mistakes or writes poorly, and they respond coherently to the assigned task. Previously, the thought of people who used chatbots was almost always, “Let me talk to a human; this bot doesn’t understand.” Soon, however, I expect we will reach something like the opposite: “Let me talk to a chatbot; this human doesn’t understand.”

System messages

With chatbots, system messages, also known as metaprompts, can be used to guide the model’s behavior. A metaprompt defines the general guidelines to be followed. Still, while using these templates and guidelines, it remains essential to validate the responses generated by the models.

A good system prompt should define the model’s profile, capabilities, and limitations for the specific scenario. This involves:

  • Specifying how the model should complete tasks and whether it can use additional tools

  • Clearly outlining the scope and limitations of the model’s performance, including instructions for off-topic or irrelevant prompts

  • Determining the desired posture and tone for the model’s responses

  • Defining the output format, including language, syntax, and any formatting preferences

  • Providing examples to demonstrate the model’s intended behavior, considering difficult use cases and CoT reasoning

  • Establishing additional behavioral guardrails by identifying, prioritizing, and addressing potential harms

Collecting information

Suppose you want to build a booking chatbot for a hotel brand group. A reasonable system prompt might look something like this:

You are a HotelBot, an automated service to collect hotel bookings within a hotel brand group,
in different cities.

You first greet the customer, then collect the booking, asking the name of the customer, the
city the customer wants to book, room type and additional services.
You wait to collect the entire booking, then summarize it and check for a final time if the
customer wants to add anything else.

You ask for arrival date, departure date, and calculate the number of nights. You ask for a
passport number. Make sure to clarify all options and extras to uniquely identify the item from
the pricing list.
You respond in a short, very conversational, friendly style. Available cities: Rome, Lisbon,
Bucharest.

The hotel rooms are:
single 150.00 per night
double 250.00 per night
suite 350.00 per night

Extra services:
parking 20.00 per day
late checkout 100.00
airport transfer 50.00
SPA 30.00 per day

Consider that the previous prompt is only one piece of a broader application. After the system message is set, the application should invite the user to start an interaction; then, a proper conversation between the user and the chatbot can begin.

For a console application, this is the basic code needed to start such an interaction:

var chatCompletionsOptions = new ChatCompletionsOptions
{
    DeploymentName = AOAI_chat_DEPLOYMENTID,
    Messages =
    {
        new ChatRequestSystemMessage(systemPrompt),
        new ChatRequestUserMessage("Introduce yourself"),
    }
};

while (true)
{
    Console.WriteLine();
    Console.Write("HotelBot: ");
    var chatCompletionsResponse = await openAIClient.GetChatCompletionsAsync(chatCompletionsOptions);
    var chatMessage = chatCompletionsResponse.Value.Choices[0].Message;
    Console.Write(chatMessage.Content);
    chatCompletionsOptions.Messages.Add(new ChatRequestAssistantMessage(chatMessage.Content));
    Console.WriteLine();
    Console.Write("Enter a message: ");
    var userMessage = Console.ReadLine();
    chatCompletionsOptions.Messages.Add(new ChatRequestUserMessage(userMessage));
}

Summarization and transformation

Now that you have a prompt to collect a hotel booking, the hotel booking system will likely need to save it—calling an API or directly saving the information in a database. But all it has is unstructured natural language, coming from the conversation between the customer and the bot. A prompt to summarize and convert to structured data is needed:

Return a json summary of the previous booking. Itemize the price for each item.
The json fields should be
1) name,
2) passport,
3) city,
4) room type with total price,
5) list of extras including total price,
6) arrival date,
7) departure date,
8) total days,
9) total price of rooms and extras (calculated as the sum of the total room price and extra price).
Return only the json, without introduction or final sentences.
Simulating a conversation with the HotelBot, a json like the following would be generated from the previous prompt:

{"name":"Francesco Esposito","passport":"XXCONTOSO123","city":"Lisbon","room_type":{"single":150.00},"extras":{"parking":{"price_per_day":20.00,"total_price":40.00}},"arrival_date":"2023-06-28","departure_date":"2023-06-30","total_days":2,"total_price":340.00}

Expanding

At some point, you might need to handle the inverse problem: generating a natural language summary from a structured JSON. The prompt to handle such a case could be something like:

Return a text summary from the following json, using a friendly style. Write at most two
sentences.

{"name":"Francesco Esposito","passport":"XXCONTOSO123","city":"Lisbon","room_type":{"single":150.00},"extras":{"parking":{"price_per_day":20.00,"total_price":40.00}},"arrival_date":"2023-06-28","departure_date":"2023-06-30","total_days":2,"total_price":340.00}

This would result in a reasonable output:

Francesco Esposito will be staying in Lisbon from June 28th to June 30th. He has booked a single
room for $150.00 per night, and the total price including parking is $340.00 for 2 days.
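A minimal sketch of this expansion step, reusing the client and deployment from the earlier snippets (here, `bookingJson` is assumed to hold the stored JSON):

```csharp
// Ask the model to expand the stored JSON back into friendly prose.
var expandOptions = new ChatCompletionsOptions
{
    DeploymentName = AOAI_chat_DEPLOYMENTID,
    Messages =
    {
        new ChatRequestUserMessage(
            "Return a text summary from the following json, using a friendly style. " +
            "Write at most two sentences.\n\n" + bookingJson)
    }
};
var expandResponse = await openAIClient.GetChatCompletionsAsync(expandOptions);
Console.WriteLine(expandResponse.Value.Choices[0].Message.Content);
```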

Translating

Thanks to pretraining, one task that LLMs excel at is translating from a multitude of different languages—not just natural human languages, but also programming languages.

From natural language to SQL

One famous example taken directly from OpenAI references is the following prompt:

### Postgres SQL tables, with their properties:
#
# Employee(id, name, department_id)
# Department(id, name, address)
# Salary_Payments(id, employee_id, amount, date)
#
### A query to list the names of the departments that employed more than 10 employees in the
last 3 months

SELECT

This prompt is a classic example of a plain completion (hence, the Completion API). The final fragment (SELECT) acts as a cue, giving the model a jumpstart for the output.

In a broader sense, within the context of Chat Completion API, the system prompt could involve providing the database schema and asking the user which information to extract, which can then be translated into an SQL query. This type of prompt generates a query that the user should execute on the database only after assessing the risks. There are other tools to interact directly with the database through agents using the LangChain framework, discussed later in this book. These tools, of course, come with risks; they provide direct access to the data layer and should be evaluated on a case-by-case basis.
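With the Chat Completion API, the same idea could be sketched as follows, reusing the client and deployment from the earlier snippets (the exact wording of the system message is, of course, an assumption):

```csharp
// The schema lives in the system message; the user asks in natural language.
var sqlOptions = new ChatCompletionsOptions
{
    DeploymentName = AOAI_chat_DEPLOYMENTID,
    Messages =
    {
        new ChatRequestSystemMessage(
            "You translate user requests into Postgres SQL queries.\n" +
            "Tables: Employee(id, name, department_id), " +
            "Department(id, name, address), " +
            "Salary_Payments(id, employee_id, amount, date).\n" +
            "Return only the SQL query, with no explanation."),
        new ChatRequestUserMessage(
            "List the names of the departments that employed more than " +
            "10 employees in the last 3 months"),
    }
};
var sqlResponse = await openAIClient.GetChatCompletionsAsync(sqlOptions);
// Review the generated query before running it against the database.
Console.WriteLine(sqlResponse.Value.Choices[0].Message.Content);
```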

Universal translator

Let’s consider a messaging app in which each user selects their primary language. They write in that language and, if necessary, a middleware translates their messages into the language of the other user. In the end, each user reads and writes in their own language.

The translator middleware could be a model instance given a prompt like this:

Translate the following text from {user1Language} to {user2Language}:

<<<{message1}>>>

A full schema of the interactions would be:

  1. User 1 selects their preferred language {user1Language}.

  2. User 2 selects their preferred language {user2Language}.

  3. One sends a message to the other. Let’s suppose User 1 writes a message {message1} in {user1Language}.

  4. The middleware translates {message1} in {user1Language} to {message1-translated} in {user2Language}.

  5. User 2 sees {message1-translated} in their own language.

  6. User 2 writes a message {message2} in {user2Language}.

  7. The middleware performs the same job and sends the message to User 1.

  8. And so on….
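The whole middleware reduces to one model call per message. A sketch, reusing the client and deployment from the earlier snippets (`TranslateAsync` is a hypothetical helper):

```csharp
// One translation call per exchanged message.
async Task<string> TranslateAsync(string message, string fromLanguage, string toLanguage)
{
    var options = new ChatCompletionsOptions
    {
        DeploymentName = AOAI_chat_DEPLOYMENTID,
        Messages =
        {
            new ChatRequestUserMessage(
                $"Translate the following text from {fromLanguage} to {toLanguage}:\n\n" +
                $"<<<{message}>>>")
        }
    };
    var response = await openAIClient.GetChatCompletionsAsync(options);
    return response.Value.Choices[0].Message.Content;
}

// Step 4 of the schema: deliver User 1's message to User 2.
var message1Translated = await TranslateAsync(message1, user1Language, user2Language);
```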