Prompting Like Pros: Ideas and Practical Tips for Interacting with LLMs

Generative AI models, particularly large language models (LLMs), are powerful tools, but, as with any other tool, understanding their strengths and limitations is key to using them effectively.

These models are trained to predict text based on patterns, not deep reasoning or factual accuracy. As the saying goes, “garbage in, garbage out.” While their outputs often sound impressive and convincing, this can be dangerous, especially when people begin accepting AI-generated content without verifying it. In my opinion, this is one of the biggest risks when using LLMs.

What Makes LLMs Great at Certain Tasks

LLMs are particularly adept at generating text based on context, predicting the best possible sequence of words that fit within the prompt. This makes them useful for a variety of tasks that do not heavily depend on factual accuracy. Here are a few examples:

  • Summarization: Since LLMs are trained to understand and synthesize context, summarizing content is one of their strengths. They can quickly generate concise summaries of articles, reports, or long passages while maintaining the core ideas.
  • Translation: LLMs also perform well when it comes to translating text, particularly between widely spoken languages. They utilize patterns and context to generate grammatically correct translations, even capturing nuances to some extent.
  • Tone Adaptation & Sentiment Analysis: Whether you need to adjust the tone of an email, make a text more formal, or assess the sentiment behind a piece of writing, LLMs excel at these tasks. They can modify the style and mood of a text based on your instructions.

These tasks are successful because they rely on the model’s ability to predict patterns and generate language that fits a given context. They don’t demand a deep understanding of the facts involved, making them ideal for tasks that prioritize fluency over factual accuracy.


When to Use LLMs Carefully

While LLMs are excellent at generating text that fits a pattern, certain tasks require extra caution:

  • Information Retrieval: When looking for real-time information or highly specific knowledge not present in the training data, LLMs can easily make mistakes. They’re not equipped to search the web for up-to-date facts unless you prompt them with specific information.
  • Mathematical Computations: Though LLMs may appear confident when solving equations, they are not inherently good at performing precise mathematical computations. You should always verify any math-related output.
  • Bias and Ethics: Like any model trained on vast amounts of data, LLMs can reflect biases inherent in the data they were trained on. This can result in outputs that are biased, offensive, or ethically questionable.
  • Handling Sensitive Data: You should avoid inputting sensitive data (such as PII, health, or confidential information) into LLMs, as some may store, log, or use inputs for model improvement. While certain providers offer privacy safeguards, once submitted, the prompt content may be stored somewhere you no longer have any control over.

These areas, which are far from exhaustive, are where human expertise is essential; while LLMs can assist in brainstorming or generating ideas, they shouldn’t be relied upon for accurate or reliable solutions.

Prompting: An Interactive Process

Effective prompting is an interactive process that involves clear communication with the model. The more specific and context-rich your prompt, the more likely you are to get a useful response. It’s not enough to just throw words into a model and hope for the best. To get the best results, here are some practical prompting techniques:

1. Be Specific: Instead of asking vague questions, be clear about what you want, e.g. “Summarize this report in three bullet points for a non-technical audience” rather than “Summarize this.”

2. Provide Context: If you want the model to solve a specific problem, provide relevant details. 

3. Use Variables: If you’re solving similar problems across different prompts, you can use variables just like in programming to reuse and adjust your prompts for each new situation. This can save time and improve consistency in responses. For example, if you want a simple summary of a text, you can save this prompt somewhere:

Summarize this for me like I’m 5 years old: [PASTE TEXT HERE].

or this one if you regularly struggle with what to cook for a week:

I am a busy, health-conscious person and do not want to spend a fortune on my weekly meals. My fridge currently holds the following ingredients: {1}, {2}, {3}, {4}, {5}. Please generate 10 fast and easy recipes for my week, including lunch and dinner meals. Not all ingredients need to be included in all meals: 
{1}: broccoli 
{2}: potatoes
{3}: tofu
{4}: pasta
{5}: cheese

Adapt the values of the variables so that you can easily reuse prompts and build a personal collection of reusable prompts that are relevant to you.

4. Refine Your Prompt: Have a conversation and iterate on the prompt; think about the problem you want to solve and keep asking until you are happy with the generated result.
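If you prefer to manage such templates in code rather than in a notes file, the variable idea from step 3 can be sketched in plain Python using built-in string formatting. The template text mirrors the recipe prompt above; the function and variable names here are just illustrative:

```python
# A minimal sketch of a reusable prompt template. Swapping the
# variable values produces a fresh prompt for each new situation.

RECIPE_TEMPLATE = (
    "I am a busy, health-conscious person and do not want to spend a "
    "fortune on my weekly meals. My fridge currently holds the following "
    "ingredients: {ingredients}. Please generate {n_recipes} fast and "
    "easy recipes for my week, including lunch and dinner meals. "
    "Not all ingredients need to be included in all meals."
)

def build_prompt(ingredients, n_recipes=10):
    """Fill the template with the current fridge contents."""
    return RECIPE_TEMPLATE.format(
        ingredients=", ".join(ingredients),
        n_recipes=n_recipes,
    )

prompt = build_prompt(["broccoli", "potatoes", "tofu", "pasta", "cheese"])
print(prompt)
```

The same pattern scales to any of your saved prompts: keep the fixed wording in the template and expose only the parts that change as parameters.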


Learn and Refine Your Prompting Skills

Prompting is a skill that can be refined over time. There are many ways to tweak your prompts to get better results for different tasks. For example, if you want more creative outputs, you might encourage the model to think outside the box with phrases like “think of an unconventional approach.” If you need more factual information, you can instruct the model to be extra cautious about accuracy. If you’re looking for resources to improve your prompting skills, I highly recommend checking out LearnPrompting.org for inspiration and tips. Alternatively, you can use good old Google (though I recommend search engines with fewer ads 🙂) to discover additional tutorials and articles on the subject.

Checklist

  1. Generative AI models are trained to predict text based on patterns, not deep reasoning, and may produce convincing but inaccurate content.  
  2. LLMs excel at tasks like summarization, translation, and tone adaptation, where factual accuracy is less crucial.  
  3. Certain tasks, like information retrieval, mathematical computations, and handling sensitive data, require extra caution when using LLMs.  
  4. Effective prompting involves being specific, providing context, and using variables to create reusable prompts for different scenarios.  
  5. Keep refining your prompts by iterating and adjusting them based on the model’s responses to achieve better results.