Core prompt learning techniques

Summary

This chapter explored the basic aspects of prompt engineering in the context of LLMs. It covered common practices and alternative methods for altering model output, including adjusting hyperparameters. It also discussed accessing the OpenAI APIs and setting up a working environment in C# and Python.
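
As a quick refresher, the snippet below is a minimal sketch of such a call in Python. It assumes the openai package (v1.x), an OPENAI_API_KEY environment variable, and an illustrative model name; the hyperparameter values are arbitrary examples, not recommendations.

    # Minimal sketch of a chat completion call with sampling hyperparameters.
    # Assumptions: openai Python package v1.x, OPENAI_API_KEY set in the environment,
    # and an illustrative model name.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Suggest a name for a travel chatbot."},
        ],
        temperature=0.2,  # lower values make the output more deterministic
        top_p=0.9,        # nucleus sampling: restrict choices to the top 90% probability mass
        max_tokens=50,    # cap the length of the completion
    )

    print(response.choices[0].message.content)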

Next, the chapter delved into basic prompting techniques, including zero-shot and few-shot scenarios, iterative refinement, chain-of-thought, giving the model time to think, and possible extensions. It also examined basic use cases, such as chatbots that collect booking information, summarization, and transformation, along with the concept of a universal translator.
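
To recap the few-shot idea, the following sketch places a couple of in-context translation examples in the message list before the actual request, so the model continues the established pattern. It assumes the same openai v1.x setup as above; the example pairs and model name are illustrative only.

    # Few-shot prompting sketch: in-context examples precede the real query.
    # Assumptions: openai Python package v1.x, OPENAI_API_KEY set, illustrative model name.
    from openai import OpenAI

    client = OpenAI()

    few_shot_messages = [
        {"role": "system", "content": "Detect the language of the user's text and translate it to English."},
        # In-context examples showing the expected input/output pattern (the "shots").
        {"role": "user", "content": "Ciao, come stai?"},
        {"role": "assistant", "content": "Italian -> English: Hello, how are you?"},
        {"role": "user", "content": "¿Dónde está la estación?"},
        {"role": "assistant", "content": "Spanish -> English: Where is the station?"},
        # The actual request, which the model answers by following the pattern above.
        {"role": "user", "content": "Wo ist das Hotel?"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",      # model name is an assumption
        messages=few_shot_messages,
        temperature=0,            # deterministic output suits translation
    )

    print(response.choices[0].message.content)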

Finally, the chapter discussed the limitations of LLMs, including generating incorrect citations, producing biased responses, returning false information, and performing poorly at math.

Subsequent chapters focus on more advanced prompting techniques to take advantage of additional LLM capabilities and, later, third-party tools.