Using AI Studio templates
LinkedIn Post
This template is useful when creating an app using these techniques:
- Zero-shot prompting
- Few-shot prompting
- How to reference an input in your prompt
Video walkthrough
Best practices
Few-shot prompting
When creating apps, it's best to provide examples so the LLM can learn from your past work. Remember to include only your highest-quality examples: you want the LLM to learn from your best work!
Zero-shot prompting
If you don’t have any examples, or you only have examples you’re not happy with, then you can try zero-shot prompting, in which you provide clear instructions to the LLM about what you need. Since you won’t be providing additional examples, it’s important to make your instructions thorough and specific. You can include instructions around output length, voice and tone, whether or not you want bullets included, etc.
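Since a zero-shot prompt is just thorough instructions plus the input, it can help to see one assembled. Here's a minimal sketch in Python; the `build_zero_shot_prompt` helper and the specific requirements are illustrative, not part of AI Studio:

```python
def build_zero_shot_prompt(task: str, input_text: str) -> str:
    """Combine explicit, specific instructions with the input; no examples."""
    instructions = (
        f"{task}\n"
        "Requirements:\n"
        "- Length: 3 to 5 sentences\n"
        "- Voice and tone: friendly and professional\n"
        "- Do not use bullet points\n"
    )
    return f"{instructions}\nInput:\n{input_text}\n\nOutput:"

prompt = build_zero_shot_prompt(
    "Write a short product announcement based on the input.",
    "We are launching a new scheduling feature next week.",
)
```

Note how every expectation (length, tone, formatting) is spelled out, since there are no examples for the model to infer them from.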
Good use cases for few-shot prompting:
- Generating titles or headlines (for articles, blogs, etc.). Share your best titles and headlines with the LLM so that it can generate new ones that follow the structure of your best-performing examples
- Generating marketing emails. Provide examples to the LLM so that it can learn from your best-performing marketing emails
- Generating bios. Oftentimes customers generate bios for event speakers, employees, etc. Typically, these bios follow a consistent format and structure. Providing the LLM with past examples will help it learn and generate future bios that follow the same format and structure
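Structurally, a few-shot prompt is your instruction followed by a handful of high-quality input/output pairs, then the new input. A minimal sketch in Python (the `build_few_shot_prompt` helper is illustrative, not part of AI Studio):

```python
def build_few_shot_prompt(instruction: str, examples: list, new_input: str) -> str:
    """Prepend high-quality input/output pairs so the model can infer the pattern."""
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    # The new input ends with an open "Output:" for the model to complete.
    parts.append(f"Input: {new_input}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Generate a headline in the style of the examples.",
    [("Article about remote work trends", "Remote Work Is Here to Stay")],
    "Article about AI in marketing",
)
```

The examples do the teaching here, so each pair should be one of your best-performing pieces.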
Headlines & Subheadlines
This template is useful when creating an app using these techniques:
- Conversational prompting style (dialog between AI and Human)
- Prompt-chaining (use an output from one prompt as an input for another prompt)
- How to extract data/information from an input (subheadline instructions from the creative brief)
Video walkthrough
Best practices
Dialogue/conversational format prompts
Using a dialogue-format prompt is an effective technique! It might seem a little silly at first, but it works. Writing the prompt as a conversation between you and the LLM is a great way to have the model restate what you're asking it to do, and because it reads like a natural conversation, you can write it without overthinking.
- Need a little help anyway? If you have a Writer app account, use Ask Writer to generate a conversational prompt between an LLM and a human trying to do X. This will give you a good format to tweak.
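A dialogue prompt is just a transcript of alternating Human and AI turns. Here's a minimal sketch in Python; the `build_dialogue_prompt` helper and the sample turns are illustrative:

```python
def build_dialogue_prompt(turns: list) -> str:
    """Render (speaker, text) pairs as a Human/AI conversation transcript."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns)

prompt = build_dialogue_prompt([
    ("Human", "I need help writing a LinkedIn post about our product launch."),
    ("AI", "Sure! What tone would you like, and who is the audience?"),
    ("Human", "Upbeat but professional, aimed at marketing leaders."),
    ("AI", "Got it: an upbeat, professional LinkedIn post for marketing leaders."),
])
```

Notice that the final AI turn restates the task, which is exactly the reiteration this format encourages.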
Prompt chaining
Prompt chaining is a fancy way of saying “break down a task into multiple parts.” Sometimes if your prompt is too long, or it’s trying to do too much, the LLM gets confused and doesn’t know how to approach the problem. When that happens, you’ll get better results by simply breaking your instructions up into multiple prompts. Sometimes, you’ll use one prompt to generate a particular output, and then write another prompt that uses that output as a starting point. Using a prompt to create a foundation for your next prompt is prompt chaining.
It’s a bit like baking a cake. If you’ve ever made a cake from scratch, you know that you usually need to mix together all of the wet ingredients, then mix together the dry ingredients, then add the mixed dry ingredients into the bowl of wet ingredients. If you simply dump all of the ingredients together into the mixing bowl, you’ll end up with lumps of flour and baking powder in your cake - ew!
If we were prompt-chaining a cake, it might look something like this:
- Prompt 1: Combine sugar, butter, and eggs.
- Prompt 2: In a separate bowl, combine flour, baking powder, and salt.
- Prompt 3: Add {Prompt 1} to {Prompt 2}. Bake in the oven for 45 minutes.
- Prompt 4: Combine butter and powdered sugar to create a frosting. Cover {Prompt 3} with the frosting.
Breaking up your instructions into parts will yield a delicious result.
Good use cases for prompt chaining:
- Taking a large, dense document and generating a summary, then using that summary to generate content (like a blog post or an email).
- Writing a blog post. You might break up your prompts into different sections for the introduction, the body, and the conclusion - but you want the LLM to know what was said earlier in the article so you can summarize your arguments properly. Your {Introduction} prompt can tease the content of your {Body} prompt output, and your {Conclusion} prompt can summarize the {Body} prompt output.
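The summary-then-content use case above can be sketched in a few lines of Python. The `run_llm` function below is a stand-in for a real model call (it just echoes its prompt so the sketch is self-contained); the point is how the first prompt's output feeds the second prompt:

```python
def run_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just echoes the prompt's start."""
    return f"<output of: {prompt[:40]}...>"

document = "Quarterly results were strong across every region, led by renewals."

# Prompt 1: generate a summary of the long document.
summary = run_llm(f"Summarize the following document:\n{document}")

# Prompt 2: use prompt 1's output as the starting point for new content.
blog_post = run_llm(f"Write a blog post based on this summary:\n{summary}")
```

Each prompt stays short and focused, and the chain carries the needed context forward.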
Weekly email status update
This template is useful when creating an app using these techniques:
- Chain-of-thought prompting
- Prompt-chaining (use an output from one prompt as an input for another prompt)
- Using guiding principles for effective instructions
Video walkthrough
Best practices
Prompt chaining
See the Prompt chaining section above for a guide to prompt chaining.
Chain-of-thought prompting
Have you ever been asked a question, and you were so anxious to respond that you blurted out an answer before you even had time to think it through? LLMs can do this too, especially if you’re asking a complex question. Chain-of-thought prompting is like telling an LLM to slow down, take a deep breath, and approach the problem step by step.
Good use cases for chain-of-thought prompting:
- Tasks where the order of operations matters, e.g., don't start writing a summary until you've finished reading all of the weekly updates.
- When your LLM keeps giving wrong answers.
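In practice, chain-of-thought prompting often amounts to adding explicit "work through this step by step" instructions to the prompt. A minimal sketch in Python, using the weekly-updates example above (the `build_cot_prompt` helper is illustrative):

```python
def build_cot_prompt(question: str) -> str:
    """Tell the model to work through the problem step by step before answering."""
    return (
        f"{question}\n\n"
        "First, read all of the weekly updates. Then think through the "
        "problem step by step, and only write the summary once you have "
        "finished reading everything."
    )

prompt = build_cot_prompt("Summarize this week's status updates into one email.")
```

The added instructions slow the model down and enforce the order of operations.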
Earnings call report
This template is useful when creating an app using these techniques:
- When to use a larger context window model (use Palmyra-X-32K for an input that’s longer than 8,000 tokens)
- When to use multiple prompts to build sections of your output
- Zero-shot prompting with instructions
Video walkthrough
Best practices
Zero-shot prompting
See the Zero-shot prompting section above for guidance around zero-shot prompts.
Breaking a prompt up into multiple prompts
Breaking a prompt up into multiple prompts helps the LLM handle complex instructions.
Sometimes, you might break a prompt into multiple parts to do prompt chaining. See the Prompt chaining section above for guidance around prompt chaining.
Even if you aren’t using prompt chaining, breaking a prompt into multiple parts can be helpful when you’re trying to troubleshoot any errors in your outputs. Instead of editing and updating one giant prompt, you can better isolate where an error is occurring, and update an individual prompt.
For example, imagine you’re creating an app to generate a blog post, and something keeps going wrong with your conclusion paragraph. If you’ve created separate prompts for your headline, introduction, body, and conclusion, you’ll know exactly which instructions to troubleshoot. You don’t have to scroll through and double-check any of the other prompts!
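One simple way to organize this is to keep each section's prompt separate, so fixing one doesn't touch the others. A minimal sketch in Python; the prompt wording and the `render_prompt` helper are illustrative:

```python
# One prompt per section of the blog post. If something keeps going wrong
# with the conclusion, only section_prompts["conclusion"] needs editing.
section_prompts = {
    "headline": "Write a headline for a blog post about {topic}.",
    "introduction": "Write an introduction for a blog post about {topic}.",
    "body": "Write the body of a blog post about {topic}.",
    "conclusion": "Write a conclusion for a blog post about {topic}.",
}

def render_prompt(section: str, topic: str) -> str:
    """Fill the topic into the prompt for a single section."""
    return section_prompts[section].format(topic=topic)
```

Each entry can be tested and revised in isolation, which is the whole troubleshooting benefit.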
Choosing the right model
Anytime your inputs are long documents (over roughly 800-900 words), you'll want to use the 32k model (Palmyra-X-32K) so that Writer can ingest the entire document.
Good use cases for model switching:
Trying to reduce the cost of running your app? Write one prompt that uses the 32k model to summarize the long document. Then, write a second prompt which uses the more affordable Palmyra V2 or V3 models to transform that summary into other content.
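The routing decision itself can be as simple as a word count check. A minimal sketch in Python; the model identifier strings and the 800-word threshold are illustrative, not official Writer API values:

```python
def pick_model(input_text: str, threshold_words: int = 800) -> str:
    """Route long inputs to the large-context model; keep short ones cheap."""
    if len(input_text.split()) > threshold_words:
        return "palmyra-x-32k"  # large context window, higher cost
    return "palmyra-v2"         # smaller context window, more affordable
```

A summarize-with-the-large-model, transform-with-the-cheap-model chain then only pays 32k pricing once per document.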