
Prompts 101

Understanding the Basics of Tology Prompt Engineering

Imagine you’ve stumbled upon a magical tool for entrepreneurs and creators, a genie’s lamp… (or maybe a “genie’s app” in this case). This isn’t just any genie, though; it can morph into an industry expert, a team of virtual assistants, or anyone else you may wish to chat with. The power of your interaction hinges on how effectively you communicate with the genie.  We’ve all heard the fables warning us to be careful what we wish for…  This is the essence of prompt engineering.

You can get more out of Tology with a little insight into the mechanics behind its magic.  There are actually three layers of prompts interacting with the AI model.  

  • User prompt:  This is the individual question or request that you make during an interaction with the bot.  Each message you send is a single prompt interaction.
  • Bot prompt:  This is the unique prompt that is customizable for each bot.  It is found by clicking on the “Settings” button for an individual chatbot.  This prompt can be thought of as the background information that sets the tone, ground rules, and context for all conversations with the specific bot.
  • Data prompt: The data prompt includes the relevant chunks of your uploaded data that the Tology vector database selects based on each user prompt. 
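
To make the layering concrete, here’s a minimal sketch of how the three layers might be combined into a single chat-style request.  This is purely illustrative; the function and message structure are our own stand-ins, not Tology’s actual implementation.

```python
# Illustrative only: a stand-in for how the three prompt layers could be
# assembled into one request to the underlying LLM. Not Tology's real code.

def build_request(bot_prompt: str, data_chunks: list[str], user_prompt: str) -> list[dict]:
    """Combine the bot, data, and user prompts into a chat-style message list."""
    data_prompt = "\n\n".join(data_chunks)  # chunks chosen by the vector database
    return [
        # Bot prompt: persona, tone, and ground rules for every conversation
        {"role": "system", "content": bot_prompt},
        # Data prompt: the relevant excerpts pulled from your uploads
        {"role": "system", "content": "Use the following content:\n" + data_prompt},
        # User prompt: the specific question asked in this interaction
        {"role": "user", "content": user_prompt},
    ]
```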

Bot Prompt – Who Do You Want Your Genie to Be?

We’ll start with the Bot Prompt, since it determines the form and function of each individual Bot.  A Bot responds well to being assigned a role, as if you’re casting it in a movie.  You can name a persona, character type, or professional role that describes who you want to interact with.  You can even tell it to impersonate a famous historical figure.  Just naming the type of role (“I want you to act as a marketing expert”) goes a long way, but you can also refine that further (by adding “… who specializes in social media strategies for small service businesses”).  You could also create a bot for a single, specific use, such as “You are the writer of a daily newsletter.  Provide a summary of the provided topics in the style of a news story.”

Default Prompt

The Default Bot Prompt may evolve, but the baseline will be something like: 

“Your name is ‘Tology AI Assistant.’ You will respond by prioritizing the given content. If you can’t answer, add ‘Hmm, I can’t just generate an answer from the content provided, but let me explain it using my knowledge base as well.’ to your reply, and try to answer from your knowledge base. If you can’t reply, type ‘I have no idea, and the attached content does not mention this.’ Never break character. Explain your reasoning.”

Each of those components serves a specific function, which we’ll explore in a deeper article.

For now, let’s just add one framework to help think about tailoring your prompt: Zooming In or Zooming Out.  

  • Zoom In to narrow the bot’s focus by adding more detail to the Bot’s persona.  You can also give it specific boundaries, like removing its permission to answer from its broader knowledge base.
  • Zoom Out when you want the bot to be creative beyond your uploaded material.  Creating a “Brainstorm Partner” persona and directing it to “use the data provided as a stepping stone to explore ideas that I’m not thinking of” will unlock far more creative potential.

Creativity

One final variable related to the prompting structure is the Creativity setting (technically referred to as “temperature”).  This determines the creativity of the language the bot uses in its responses.  Mechanically, it changes the parameters of the LLM’s algorithm to accept looser, lower-probability connections between tokens.  Generally, a higher temperature leads to more random, creative, and verbose output, while a lower temperature produces more concise, consistent, and predictable answers. Do you want to empower it to take risks or restrain it to play it safe?

The default temperature for ChatGPT is approximately 70%. However, Tology’s default value is set at zero, which instructs the chatbot to choose the most likely output for consistent answers based on your uploaded data. You can adjust the temperature and experiment with the bot to find the right balance for your needs. Running the same prompts at 0%, 50%, and 100% can help you calibrate the temperature to achieve your desired result.
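
If you want to see the effect yourself, one option is to run the same prompt against a raw model at several temperature values.  A minimal sketch using the OpenAI Python SDK is below; the model name and prompt are placeholders, and we’re assuming the 0%–100% slider maps onto the 0.0–1.0 temperature range of the underlying model.

```python
# A quick temperature experiment (model name and prompt are placeholders).
# We assume the 0%/50%/100% slider corresponds to temperatures 0.0/0.5/1.0.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Describe our new espresso blend in one sentence."

for temperature in (0.0, 0.5, 1.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # higher values accept lower-probability tokens
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```

At 0.0 the runs will read nearly identically; at 1.0 the wording (and sometimes the substance) will vary noticeably from run to run.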

User Prompt

There are two basic types of User Prompt:

  • Ask a question: Just type a query or a question in the prompt window and prepare to be amazed as Tology provides the answer.
  • Request a Generation: You can ask Tology to create or generate a piece of content, like a blog post, tweet, summary document, or a piece of code.

Follow Ups

Note that the “Follow Up” button plays a crucial role here.  Unlike many public LLMs, Tology does not have a conversational short-term memory.  This is intentional for two reasons:

  • Depth and Strength:  This allows for large Bot Prompts and maximum Data Prompt context to give you the most complete and insightful responses.
  • Focus:  This ensures that your bot stays tightly focused on the data provided and does not drift outside a memory window.

If you want to iterate and go deeper in a conversation, simply use the “Follow Up” button, and Tology will carry the context of the prior response into the next User Prompt.
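
Conceptually, a Follow Up works something like the sketch below: the prior response is folded into the next user prompt, so the bot sees that context without needing a rolling memory window.  This is an illustration of the idea, not Tology’s internal code.

```python
# Illustrative only: what "Follow Up" effectively does. The prior answer
# is carried into the next user prompt instead of a conversational memory.

def follow_up(previous_response: str, next_question: str) -> str:
    """Build a user prompt that carries the last answer forward."""
    return (
        f"Earlier you answered:\n{previous_response}\n\n"
        f"Follow-up question: {next_question}"
    )

prompt = follow_up(
    "Our refund policy allows returns within 30 days.",
    "Does that apply to digital products too?",
)
print(prompt)
```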

Data Prompt

The Data Prompt component is derived from Tology’s proprietary vector database. Its key function is to determine what data is pulled from the attached documents to guide the bot in how to respond to each particular user prompt. While understanding the intricacies of this system isn’t necessary, we value transparency in our operations. That’s why you can view all the documents that were referenced for any given response by clicking on the “Sources” button, located at the bottom left of the response. With this feature, you always have immediate access to the original information source.
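
While Tology’s retrieval system is proprietary, the underlying technique of ranking chunks by similarity to the user prompt is well established.  The toy sketch below shows the general idea; embed() here is a deliberately simplified stand-in (a bag-of-words count) for the neural embedding model a real vector database would use.

```python
# Generic retrieval sketch, not Tology's implementation. embed() is a toy
# stand-in for a real embedding model; the ranking logic is the point.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count (real systems use neural embeddings)."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[token] * b[token] for token in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_chunks(user_prompt: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank uploaded chunks by similarity to the user prompt and keep the best few."""
    query = embed(user_prompt)
    ranked = sorted(chunks, key=lambda c: cosine_similarity(query, embed(c)), reverse=True)
    return ranked[:top_k]
```

The selected chunks become the Data Prompt for that response, which is what the “Sources” button lets you inspect.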

Conclusion

We hope that this quick overview of the fundamentals of prompting with Tology gives you a confident starting point.  We’ve found that users consistently find insight by peeling back these layers and learning how to leverage Tology in a variety of directions.  Check out some of our additional posts if you want to go deeper into the tech or figure out how to stoke inspiration with the Tology community.