Core prompt learning techniques

LLM limitations

So far, this chapter has focused on the positive aspects of LLMs. But LLMs have limitations in several areas:

  • LLMs struggle to cite sources accurately because they have no internet access and only a limited memory of what they were trained on. Consequently, they may generate sources that appear reliable but are incorrect or nonexistent (this is called hallucination). Strategies such as search-augmented LLMs can help address this issue; a minimal sketch follows this list.

  • LLMs can produce biased responses, occasionally exhibiting sexist, racist, or homophobic language even when safeguards are in place. Take particular care when using LLMs in consumer-facing applications and in research to avoid propagating biased results.

  • When asked about topics they were not trained on, LLMs often generate false information, confidently providing incorrect answers or hallucinated responses rather than admitting uncertainty.

  • Without additional prompting strategies, LLMs generally perform poorly at math, struggling with both simple arithmetic and more complex problems. Chain-of-thought prompting, shown in the second sketch below, is a common mitigation.
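
Some of these limitations can be mitigated with prompting strategies. The following sketch illustrates the search-augmented approach mentioned in the first bullet: retrieve relevant passages, then constrain the model to answer from them with numbered citations. This is a minimal sketch, not a specific library's API; search_web and complete are hypothetical placeholders for your search provider and LLM provider, respectively.

# Minimal sketch of search-augmented prompting to ground citations.
# search_web and complete are hypothetical placeholders, not a real API.

def search_web(query: str, top_k: int = 3) -> list[dict]:
    """Hypothetical search client returning [{'title', 'url', 'snippet'}, ...]."""
    raise NotImplementedError("Wire up a real search provider here.")

def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's completion API."""
    raise NotImplementedError("Wire up a real LLM provider here.")

def answer_with_sources(question: str) -> str:
    # 1. Retrieve passages relevant to the question.
    results = search_web(question)
    context = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    # 2. Constrain the model to the retrieved passages and numbered
    #    citations, so sources can be verified instead of invented.
    prompt = (
        "Answer the question using ONLY the numbered sources below, "
        "citing them as [n]. If they are insufficient, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return complete(prompt)

Because the model is told to cite only the numbered sources it was given, fabricated references become easier to detect and verify.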

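The math limitation responds to a similarly lightweight intervention. The sketch below shows chain-of-thought prompting, in which the model is asked to show intermediate steps before committing to a final answer. The complete helper and the example question are again illustrative assumptions, not part of any particular API.

# Minimal sketch of chain-of-thought prompting for arithmetic.
# complete is the same hypothetical LLM call as in the previous sketch.

def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's completion API."""
    raise NotImplementedError("Wire up a real LLM provider here.")

def solve_step_by_step(question: str) -> str:
    # Asking for intermediate steps before the final answer typically
    # improves accuracy on multi-step arithmetic, compared with a bare
    # prompt that invites a one-shot guess.
    prompt = (
        f"{question}\n"
        "Let's think step by step. Show each intermediate calculation, "
        "then give the final answer on its own line as 'Answer: <number>'."
    )
    return complete(prompt)

# Example (illustrative question):
# solve_step_by_step("A bakery sold 347 muffins on Monday and 289 on "
#                    "Tuesday. How many muffins did it sell in total?")
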
It is important to be aware of these limitations. You should also be wary of prompt hacking, in which users craft inputs designed to manipulate an LLM into producing output its developers did not intend. These security concerns are addressed later in this book.