Post #5: Gemini and How to Write Prompts
Gemini vs Bard: 5 key differences for developers and power users + 10 practical tips for writing effective prompts
February 8, 2024: Bard was renamed to Gemini – Google also launched a mobile app and the Gemini Advanced service based on the Ultra 1.0 model. In this article, I compare both language models and show why Gemini is becoming real support for developers and advanced users.
- More powerful model: Gemini is built on Google's new Gemini model family (Ultra, Pro, Nano), a significant step up from LaMDA and PaLM 2, the models that powered Bard. This means Gemini can generate more precise and coherent responses and better handle tasks requiring broad knowledge and context understanding.
- Faster performance: Gemini responds significantly faster than Bard, so you get answers to your questions and tasks in less time. This is especially important for developers and power users who need fast, efficient tools for generating text and code.
- More capabilities: Gemini offers a broader range of capabilities than Bard, including:
- Generating various text formats, such as scripts, musical compositions, emails, lists, etc.
- Language translation
- Text summarization
- Answering questions comprehensively and thoroughly
- Writing code in various programming languages
- Better code understanding: Gemini understands code better than Bard, making it a more useful tool for developers. It can generate code from short natural-language descriptions, detect errors, and suggest fixes.
- Easier integration: Gemini was designed for easy integration with other tools and platforms. You can use it in your scripts, programs, and websites to automate tasks and generate text and code.
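As an illustration of what such integration can look like, here is a minimal sketch that builds a request body for the public Generative Language REST API (the `generateContent` method). The endpoint path, model name, and payload shape reflect the v1beta API as I understand it and may change, so treat them as assumptions to verify against the official docs:

```python
# Sketch: preparing a request for the Generative Language API (v1beta).
# Endpoint and payload shape are assumptions based on the public REST docs.

API_URL = ("https://generativelanguage.googleapis.com/"
           "v1beta/models/gemini-pro:generateContent")

def build_request(prompt: str) -> dict:
    """Wrap a plain-text prompt in the JSON body the API expects."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

# The resulting dict can be POSTed with any HTTP client, e.g.:
#   requests.post(f"{API_URL}?key={API_KEY}", json=build_request("Hello"))
payload = build_request("Summarize this article in three sentences.")
```

Keeping the payload construction in one small function makes it easy to reuse the same wrapper from scripts, web backends, or batch jobs.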
In summary, Gemini is a powerful and versatile language model that opens new possibilities for developers and demanding users.
Note: Gemini is still in development, but it already offers many features that can be useful and open new possibilities. We encourage you to try it out and share your opinions!
10 tips for writing good prompts
- Put the prompt in context
- Define the role: Specify who Gemini should be when answering, and who the response is for: an engineer, a layperson, an artist, a programmer, etc. For example: “You are an experienced software engineer with 10 years of experience in Python and JavaScript. Your task is to help solve coding-related problems.”
- Establish the knowledge level: Specify how much the recipient already knows. This helps Gemini adjust the language and level of detail in the response: a well-known step-by-step approach for beginners, or more advanced language for experts.
- Define the goal: Explain what the prompt is for. Do you want to get information, create something new, or just satisfy curiosity? This lets the model better understand what you expect and prepare a response that meets your expectations.
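The three context elements above (role, knowledge level, goal) can be combined into a reusable template. A minimal sketch; the helper name and exact wording are my own, not part of any Gemini API:

```python
def contextual_prompt(role: str, audience_level: str, goal: str, task: str) -> str:
    """Compose a prompt that sets the role, the audience's knowledge level,
    and the goal before stating the actual task."""
    return (
        f"You are {role}. "
        f"Your answer is aimed at a {audience_level} audience. "
        f"Goal: {goal}.\n\n"
        f"Task: {task}"
    )

prompt = contextual_prompt(
    role="an experienced software engineer with 10 years of Python and JavaScript",
    audience_level="beginner",
    goal="explain the fix step by step",
    task="Review this function and suggest improvements.",
)
```

A template like this keeps the context consistent across many prompts, so only the task itself changes from call to call.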
- Define the task precisely
- Formulate it precisely: The more precisely you define the task, the better the results. Instead of “Write a poem,” try “Write a love poem in a romantic style, 12 verses long, with an abab rhyme scheme.” Use specific verbs and avoid ambiguous formulations.
- Use appropriate verbs: Choose verbs that clearly specify what you want Gemini to do. Instead of “Write about cats,” try “Create a description of a cat,” “Write a story about a cat,” or “Generate a list of interesting cat facts.”
- Specify conditions
- Response language: Choose the language in which the response should be generated.
- Character count: Specify the maximum response length.
- Format: Establish the response format, e.g., text, code, poem, email.
- Style: Specify the response style, e.g., formal, informational, creative, humorous.
- Create agents
- Questioning agent: Create an agent that asks questions to prompt Gemini to respond.
- Responding agent: Create an agent that answers questions asked by the user or by the questioning agent.
- Prompting as dialogue: Use a dialogue between two agents to get a more engaging and instructive response. For example: create two agents, where the first is a cosmetics salesperson and advisor who wants to sell, and the second is a customer with doubts and concerns. Then ask Gemini to generate a dialogue between them in which the salesperson convinces the customer to buy the product, answering their questions and addressing their concerns. You'll see a lot of creativity.
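The salesperson/customer scenario above can be packaged as a single structured prompt. A sketch; the helper and its wording are illustrative, not a prescribed format:

```python
def dialogue_prompt(agent_a: str, agent_b: str, scenario: str, turns: int = 6) -> str:
    """Ask the model to role-play a dialogue between two described agents."""
    return (
        f"Create a dialogue of about {turns} turns between two agents.\n"
        f"Agent A: {agent_a}\n"
        f"Agent B: {agent_b}\n"
        f"Scenario: {scenario}\n"
        "Agent A should answer Agent B's questions and address their concerns."
    )

prompt = dialogue_prompt(
    agent_a="a cosmetics salesperson and advisor who wants to close a sale",
    agent_b="a customer with doubts and concerns about the product",
    scenario="the salesperson convinces the customer to buy a face cream",
)
```

Parameterizing the two roles and the scenario lets you reuse the same dialogue scaffold for support conversations, interviews, or teaching dialogues.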
- Use comparison techniques
- Compare two or more prompts to see how they affect the generated response.
- Apply different styles, formats, and languages to get different results. Most models were trained primarily on English data, so it's worth experimenting with different languages.
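A simple way to compare prompt variants is to run the same task through each and collect the responses side by side. A sketch with a stub standing in for a real model call (replace `ask_model` with an actual Gemini or GPT API call):

```python
def ask_model(prompt: str) -> str:
    """Stub standing in for a real model call; returns a placeholder."""
    return f"[response to: {prompt[:30]}...]"

def compare_prompts(variants: dict[str, str]) -> dict[str, str]:
    """Run each named prompt variant and collect responses for comparison."""
    return {name: ask_model(p) for name, p in variants.items()}

results = compare_prompts({
    "plain":   "Write about cats.",
    "precise": "Write a 100-word description of domestic cats' hunting behavior.",
})
```

Naming the variants ("plain", "precise", "Polish", "humorous", ...) makes it easy to log which phrasing produced the best response.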
- Apply Big Data strategies
- Break the prompt into smaller parts and process them in parallel. You can also divide more complex tasks into stages and generate responses step by step.
- Use machine learning techniques to optimize prompts.
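Splitting a large task into subtasks and processing them in parallel can be sketched with the standard library's `concurrent.futures`; `ask_model` below is again a stub for a real API call (model API calls are I/O-bound, so threads are a reasonable fit):

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"summary of: {prompt}"

def summarize_in_parts(chapters: list[str]) -> list[str]:
    """Summarize each chapter in parallel; the partial results can then be
    merged in a final 'combine these summaries' prompt."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order in its results
        return list(pool.map(ask_model, chapters))

parts = summarize_in_parts(["Chapter 1 text", "Chapter 2 text"])
```

The same divide-and-combine shape works for staged tasks: generate an outline first, then expand each outline point in a separate prompt.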
- Experiment
- Don’t be afraid to try different combinations of parameters and styles.
- The more you experiment, the more precise AI responses will be.
- Sometimes, when you don't fully know what to expect from a response or whether your question is complete, you can add at the end: “Do you need more information to answer this question?” or “Would you like me to provide more details on this topic?” My favorite closing question is: “Is there anything that would allow you to better answer this question?” This lets the model better understand your needs and adapt the response to your expectations.
- Set priorities
- Determine what's most important: Decide which aspects of the prompt are crucial for getting the desired response, and focus on them.
- Identify limitations: Consider what constraints might affect the response, such as available data or context.
- Use feedback
- Collect opinions: If possible, get feedback from other users about prompt effectiveness.
- Adapt based on feedback: Use the collected information to improve future prompts.
- Be patient and persistent
- Don't expect perfection right away: Writing effective prompts takes time and practice.
- Learn from mistakes: Every interaction is a learning opportunity, so be open to corrections and changes.
These ten tips should help you write effective prompts that generate valuable responses. If you have additional questions or need more information, let me know! After longer experimentation and many chats, models “learn” your preferences and style quite thoroughly, so it's worth spending some time perfecting prompts.
OpenAI, like Gemini, enables creating “personalized” models based on your own data. This is worth considering if you frequently use AI for specific tasks.
Create a prompt library: This is one of the best practices. Save the prompts that work best, along with their context and results. This helps you quickly find effective prompts later and makes experimenting with new ones easier. You automate many repetitive steps in prompt writing and can easily see which prompts work best in different situations.
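A prompt library can be as simple as a JSON file mapping names to prompt text, context, and a note on the results. A minimal sketch; the file name and entry schema are my own invention:

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical library file

def save_prompt(name: str, prompt: str, context: str, result_note: str) -> None:
    """Add or update a named entry in the prompt library file."""
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = {"prompt": prompt, "context": context, "result": result_note}
    LIBRARY.write_text(json.dumps(data, indent=2, ensure_ascii=False))

def load_prompt(name: str) -> str:
    """Retrieve a saved prompt by name."""
    return json.loads(LIBRARY.read_text())[name]["prompt"]

save_prompt("cat-facts", "Generate a list of interesting cat facts.",
            context="blog research", result_note="worked well with Gemini Advanced")
```

Storing the context and result note alongside each prompt is what makes the library useful later: you can see not just what you asked, but why and how well it worked.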
Gemini is a powerful language model that offers a wide range of possibilities for developers and power users. Its speed, efficiency, better code understanding, and ease of integration make it an ideal tool for automating tasks and generating text and code. Remember that the key to success is the ability to write effective prompts. Use the above tips, experiment, and reap the benefits of Gemini’s potential.
📚 Related logbook entries
- 📋 Gemini + Art of Prompts – AI workflow session - 3h experiments with Gemini Advanced, GPT-4 comparison and prompt library creation
Context: This article compares Gemini and Bard language models, highlighting Gemini’s benefits for developers and power users. It also contains 10 tips for writing effective prompts.