Prompt Engineering

Quick Reference

Ashish Jaiman
3 min read · Mar 25, 2024

Effective prompt design is crucial for extracting the best performance from Large Language Models (LLMs) like GPT. Well-crafted prompts can significantly improve the model’s output by guiding it towards the desired type of response.

Here are some top prompt design patterns that can enhance interaction with LLMs:

Direct Instruction

Directly tell the model what you want it to do, using clear and concise language. This approach works well for straightforward tasks like generating text, summarizing information, or answering questions.

Example: “Write a summary of the following article.”
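
In code, a direct instruction is typically just a single user message. The sketch below is a minimal illustration that assumes the OpenAI Python SDK with an API key in the environment; the model name and placeholder article text are assumptions, and any chat-style LLM API would follow the same shape.

```python
# Minimal sketch of a direct-instruction prompt (assumes the OpenAI Python SDK,
# an OPENAI_API_KEY in the environment, and a placeholder model name).
from openai import OpenAI

client = OpenAI()
article = "..."  # the article text you want summarized

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": f"Write a summary of the following article.\n\n{article}"}
    ],
)
print(response.choices[0].message.content)
```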

Example-Based (Few-Shot Learning)

Provide examples within the prompt to illustrate the task you want the model to perform. This is especially useful when you want the model to follow a specific format or style.

Example: “Here are three examples of product descriptions. Based on these examples, write a product description for the following item.”
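
A few-shot prompt simply embeds the examples in the prompt text. The sketch below makes the same OpenAI SDK assumption as above; the product descriptions are invented purely to show the format.

```python
# Minimal sketch of a few-shot prompt: worked examples are embedded in the
# prompt so the model can infer the expected format and tone.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = """Here are three examples of product descriptions.

1. Stainless steel water bottle - Double-walled, keeps drinks cold for 24 hours.
2. Wireless earbuds - Compact charging case with 8 hours of battery life.
3. Canvas backpack - Water-resistant fabric with a padded laptop sleeve.

Based on these examples, write a product description for the following item:
Ergonomic office chair
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```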

Chain of Thought Reasoning

Prompt the model to show its reasoning or thought process step by step. This can be particularly helpful for complex problem-solving or when you want the model to explain its conclusions.

Example: “Explain step by step how you would solve the following math problem.”
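
The only change from a direct instruction is the wording that asks for visible reasoning. A minimal sketch, again assuming the OpenAI Python SDK and a placeholder model and problem:

```python
# Minimal sketch of a chain-of-thought style prompt: the instruction asks the
# model to show its reasoning before stating the final answer.
from openai import OpenAI

client = OpenAI()
problem = "A train travels 180 km in 1.5 hours. What is its average speed in km/h?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Explain step by step how you would solve the following math problem, "
                   "then give the final answer on its own line.\n\n" + problem,
    }],
)
print(response.choices[0].message.content)
```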

Iterative Refinement

Start with a broad or general prompt and refine the model’s responses through a series of follow-up prompts. This pattern allows for dynamic interaction and can lead to more precise or tailored outputs.

Example: “Write a brief overview of the French Revolution. Now, focus on the role of the bourgeoisie.”
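
Programmatically, refinement means appending the model's previous answer and a narrower follow-up request to the message list before calling the API again. A rough sketch under the same SDK assumptions:

```python
# Minimal sketch of iterative refinement: the broad answer is fed back into the
# conversation and then narrowed with a follow-up prompt.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a brief overview of the French Revolution."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
overview = first.choices[0].message.content

messages += [
    {"role": "assistant", "content": overview},
    {"role": "user", "content": "Now, focus on the role of the bourgeoisie."},
]
refined = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(refined.choices[0].message.content)
```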

Prompt Engineering for Creativity

Encourage the model to generate creative or novel outputs by asking open-ended questions or posing challenges that require imagination.

Example: “Imagine a futuristic city where transportation is entirely sustainable. Describe what this city looks like and how people move around.”
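
Beyond the open-ended wording, many chat APIs also expose a sampling temperature that can be raised for more varied output. The sketch below pairs the prompt with a higher temperature; the specific value is an illustrative assumption, not a recommendation from the article.

```python
# Minimal sketch: an open-ended, imaginative prompt combined with a higher
# sampling temperature to encourage more diverse completions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=1.0,  # higher values generally produce more varied output
    messages=[{
        "role": "user",
        "content": "Imagine a futuristic city where transportation is entirely sustainable. "
                   "Describe what this city looks like and how people move around.",
    }],
)
print(response.choices[0].message.content)
```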

Conversational Context

Frame the prompt as part of a conversation, where each response takes into account the context of previous exchanges. This pattern is useful for building chatbots or other interactive applications.

Example: “I’m planning a trip to Japan. What are some must-visit places? … How about traditional Japanese food I should try?”
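
Chat APIs are stateless, so keeping context means resending the accumulated history on every turn. A minimal sketch of that loop, with the two questions from the example hard-coded for illustration:

```python
# Minimal sketch of a conversational exchange: the full message history is
# resent each turn so later answers can build on earlier ones.
from openai import OpenAI

client = OpenAI()
history = []

for user_turn in [
    "I'm planning a trip to Japan. What are some must-visit places?",
    "How about traditional Japanese food I should try?",
]:
    history.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"User: {user_turn}\nAssistant: {answer}\n")
```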

Role Play

Ask the model to assume a specific role or perspective when generating its response. This can help tailor the output to reflect a particular viewpoint or expertise.

Example: “As a professional nutritionist, what advice would you give someone looking to improve their diet?”
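
With chat-style APIs, the role is usually set in a system message while the question itself goes in the user message. Another minimal sketch under the same assumptions:

```python
# Minimal sketch of role play: a system message fixes the persona, and the
# user message carries the actual question.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a professional nutritionist."},
        {"role": "user", "content": "What advice would you give someone looking to improve their diet?"},
    ],
)
print(response.choices[0].message.content)
```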

Audience Focus

Instead of providing examples, describe the task and the audience the response is intended for. The model relies on its pre-trained knowledge to adjust tone, vocabulary, and depth to suit that audience.

Example: “Explain the theory of relativity in simple terms suitable for a 10-year-old.”

Incorporating External Knowledge

While LLMs have vast amounts of pre-trained knowledge, it can sometimes be helpful to specify that the response should incorporate or mimic external sources or formats.

Example: “Write an article about renewable energy trends in the style of a Wikipedia entry.”

Each of these prompt design patterns can be adapted and combined to suit a wide range of tasks and objectives. Experimenting with different approaches and refining prompts based on the model’s responses are key to maximizing the effectiveness of LLM interactions.
