Prompt Like a Pro - Ask Arbor and Arbor AI

Get real value from Ask Arbor and AI chatbots. This article gives you practical skills for writing effective prompts, troubleshooting poor responses, and getting smarter, faster answers from Ask Arbor.

 

This article is used as part of our Arbor AI: User Certification, so the course itself contains additional information that may not be covered here.

 

Introduction: What AI Chatbots Can (and Can't) Do

Getting started with AI, particularly with Large Language Models (LLMs), is a lot like learning any new skill. While the interface is as simple as typing a question, expecting perfect results from the start is unrealistic. There's a learning curve involved, but once you master the art of prompt engineering, you'll be able to use AI with incredible speed and efficiency.

 

Understanding the Basics: LLMs and How They Work:

A Large Language Model (LLM) is a type of AI that has been trained on a massive amount of text data—billions of words from books, articles, and websites. It learns to identify patterns in this data, allowing it to understand the relationships between words and sentences. When you give it a prompt, the LLM doesn't "think" in the human sense. Instead, it uses these learned patterns to predict the most statistically likely sequence of words that should follow your input. This is why the way you frame your prompt is so crucial; you're not just asking a question, you're guiding the AI's pattern-matching process.

 

The Learning Curve of Natural Language:

The fact that you can use natural language to interact with an AI can be deceptive. While it feels like a conversation, it's not a true two-way street. You can't just speak your mind and expect the AI to fill in the blanks. There's a skill to crafting a prompt that effectively leverages the AI's pattern-matching capabilities. Once you've honed this skill, you'll find that you can get a lot more done in a lot less time.

 

AI is Not a Search Engine:

This is a critical distinction to make. A search engine like Google is designed to find existing web pages that contain keywords you've entered. An AI, on the other hand, is a pattern matcher. It doesn't search for facts; it generates a response based on the patterns it has learned. 

 

The Problem of Context and "Garbage In, Garbage Out":

The AI has no knowledge of your personal or professional context unless you provide it. For example, if you ask an AI about "project deadlines," it doesn't know you're referring to your company's Q3 marketing campaign. It will give a generic answer. This is where the principle of "garbage in, garbage out" comes into play. If your prompt is vague, the response will be vague. No prompt = no magic. It's only as good as the prompt you give it.

 

Hallucination: The AI's Desire to Answer:

One of the biggest challenges with LLMs is hallucination. This is when the AI generates a response that sounds plausible but is factually incorrect. It happens because the AI's primary goal is to answer your prompt, even if it doesn't have the information to do so accurately. It will try to find the most likely pattern to complete the prompt, even if that pattern doesn't correspond to reality. 

 

Why Does It Cost Money? The Role of Tokens:

The cost of using AI is often tied to tokens. A token is a small unit of text, usually a word or part of a word. When you use an AI, you are essentially paying for the computational power required to process your input and generate a response, which are both measured in tokens. The cost of a prompt and its response is determined by the total number of tokens used. This is why a short, simple prompt and response will cost less than a long, complex one. Understanding this can help you be more efficient in how you use AI and manage your budget.
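As a rough sketch, token-based pricing can be illustrated like this. Both the heuristic (roughly 4 characters per token for English text) and the price per 1,000 tokens are illustrative assumptions, not real Arbor or OpenAI figures:

```python
# Illustrative sketch of token-based pricing. The 4-characters-per-token
# heuristic and the price below are assumptions for demonstration only.

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, response: str,
                  price_per_1k_tokens: float = 0.002) -> float:
    """Both the prompt and the response count towards the bill."""
    total_tokens = estimate_tokens(prompt) + estimate_tokens(response)
    return total_tokens / 1000 * price_per_1k_tokens
```

A short prompt with a short response therefore costs less than a long, complex exchange, which is exactly why concise, well-targeted prompts are more efficient.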

 

How AI Interprets Prompts

Writing good prompts for an AI is a lot like giving a task to an intern. The clearer and more specific you are, the better the result. An AI, like a new intern, doesn't have the context that you do. It can't read your mind, so you have to provide it with all the information it needs to do the job right.

Here are some key points: 

  • When you type a prompt, the AI doesn't see it as a complete sentence or paragraph. It breaks your input down into small pieces called tokens, which can be words or parts of words. The AI then uses a massive amount of data to predict the next most likely token, stringing them together one by one until the response is complete.

    Because the AI is just predicting the next token, the quality of its response is directly tied to the quality of your prompt. In summary: the AI breaks your input into tokens → predicts the next words.

  • The Power of Specificity:

    Vague prompts lead to generic answers. If you ask, "Tell me about Pupil Premium pupils," the AI has no way of knowing what you're interested in. It will likely give you a very broad, high-level overview of pupil premium children in the school, which probably isn't what you're looking for.

    Specific prompts lead to useful answers. If you ask, "Tell me about the pupil premium in the school and include information on their attendance and their behaviour using information from this term," you're giving the AI a clear roadmap. It can now provide a detailed, focused response that is much more valuable.

    A useful shorthand to remember: vague → generic, specific → useful.

    And remember: the clearer the inputs, the better the outputs.

  • The more ways your prompt can be interpreted, the less likely you are to get the answer you were hoping for.

 

Anatomy of a Good Prompt

Building blocks to include in your prompt:

  • Goals: What do you want? (Summarise, write, explain, compare?)
  • Context: What background is relevant?
  • Format: List, paragraph, table, steps?
  • Constraints: audience, style
  • Timeframe: Data timeframe
  • Tone (optional): Professional, friendly, concise?
  • Data Area: Where in the MIS that you want responses from
  • Objectivity: Avoid subjective adjectives like “wins/losses”, “best/worst”, “effective/ineffective”, “large/small”, “high/low”, “broad/narrow”, “significant/insignificant”, “improved/deteriorated”, “reliable”, “simple”, “intuitive”, “efficient”, “robust”, and so on. The AI does not have context for your baselines.
  • Avoid leading questions: “Isn’t it better to…”

Most importantly, make sure you're being specific and speak/type in whole sentences with an explanation of what you want. 
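The building blocks above can be sketched as a simple prompt-assembly helper. This is an illustrative sketch only: the field names mirror this article's checklist, and nothing here is an official Ask Arbor API.

```python
# Illustrative sketch: assemble a prompt from the building blocks listed
# above. Optional fields are simply left out when not supplied.

def build_prompt(goal: str, context: str, fmt: str,
                 constraints: str = "", timeframe: str = "",
                 tone: str = "") -> str:
    parts = [
        f"Goal: {goal}",
        f"Context: {context}",
        f"Format: {fmt}",
    ]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    if timeframe:
        parts.append(f"Timeframe: {timeframe}")
    if tone:
        parts.append(f"Tone: {tone}")
    return "\n".join(parts)

prompt = build_prompt(
    goal="Summarise Year 10 attendance",
    context="Secondary school, Pupil Premium cohort",
    fmt="Bullet points",
    timeframe="This term",
    tone="Professional",
)
```

Filling in only the blocks you need still produces a complete, unambiguous prompt in whole sentences of intent.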

 

Prompt examples and rewrites 

Bad prompt: What's our PTO Policy → Improved Prompt: Summarise our PTO policy in plain language for a new hire who just joined.

Bad prompt: Help with Jira → Improved Prompt: Explain how to create and assign a Jira ticket for a bug report in our engineering workflow.

Bad prompt: How do I do a launch? → Improved Prompt: Outline the key steps in our GTM launch checklist for a new feature.

Useful templates below:

General information retrieval: Explain [X] in simple terms for [audience].

Content Creation: Write a [format] about [topic], using [style/tone], based on [source/doc].

 

Ask Arbor - Specific Tips 

Ask Arbor is an LLM that’s augmented with tools. We do not train the LLMs ourselves; they are provided by OpenAI (and, in future, Google). We give the model access to tools.

Understanding which tools our AI layer has improves your understanding of what it can or can’t do.

As a user, make sure you learn how to use it and give it multiple attempts. 

Good rules of thumb to follow are:

  • Reference as much context as possible: "Summarise the latest Q2 OKRs from the 'Q2_2025_OKRs deck'"
  • Use roles & scenarios to frame the question: "I'm a new manager. What's the process to approve expenses?"
  • Say what you're doing: "Draft an onboarding checklist for a remote designer joining next week."
  • Chain your prompts: Follow up instead of restarting. AI remembers context.
  • Pro Tip: Use Arbor’s capabilities like suggested follow-ups or clickable references to refine instead of starting over.
  • Use the dictation feature! It’s faster than typing and easier to be specific.

 

What Can't Ask Arbor Do?  

Simply put, Ask Arbor doesn't have access to all of the data on the MIS. There are more nuanced pieces of data that it doesn't have tools for. It would be difficult to list them all. 

There are certain niche date patterns that it can't handle right now, e.g. "every Friday", because it treats dates as continuous ranges.

It also generally can't take action on behalf of the user. Examples include:

- logging a behaviour incident / point award

- writing any data to the MIS 

- setting things up (with the exception of interventions)

 

Pro Mode: Layering Context, Constraints and Style 

Techniques:

  • Role prompting: "Act like a senior product marketer. Draft a competitive positioning doc for..."
  • Few-shot prompting (give examples): "Here are two examples of past product briefs. Now write one for our new AI feature..."
  • Instruction stacking: "Summarise Year 10’s attendance in bullet points, then write a 1-paragraph email update to staff."
  • Constraints: "Explain this in less than 100 words, for someone with no tech background." 
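Few-shot prompting in particular can be sketched as code: you prepend worked examples so the model imitates their shape before answering your real question. The examples below are made up for illustration and are not from any real Arbor data:

```python
# Illustrative sketch of few-shot prompting: worked Q/A examples come
# first, then the real question, so the model continues the pattern.

def few_shot_prompt(examples: list[tuple[str, str]], question: str) -> str:
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    examples=[
        ("Summarise Year 7 attendance", "Attendance is 94% this term..."),
        ("Summarise Year 8 attendance", "Attendance is 96% this term..."),
    ],
    question="Summarise Year 10 attendance",
)
```

The prompt ends with an open "A:", nudging the model to answer in the same style and structure as the examples.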

     

FAQs 

What tools does Ask Arbor currently have?

  • Why not try going to Ask Arbor and asking “Can you provide me a comprehensive list of your tools and what they can do?” Ask Arbor will let you know!

Will Ask Arbor hallucinate or make something up?

As with all LLMs, Ask Arbor can, in theory, “hallucinate”. However, we have put in a lot of safeguards to reduce the number of hallucinations. 

Two of the techniques that we use are RAG (Retrieval Augmented Generation) and having an "agentic" approach whereby the agent has access to "tools" which have a predetermined set of parameters.
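The RAG idea can be sketched in miniature: retrieve the most relevant snippets first, then build a prompt that instructs the model to answer only from them. The keyword-overlap retriever below is a toy stand-in for illustration; Arbor's actual retrieval machinery is not specified here:

```python
# Toy sketch of Retrieval Augmented Generation (RAG): score documents by
# keyword overlap with the query, keep the best matches, and ground the
# prompt in them. A stand-in for real retrieval, for illustration only.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    return ("Using only the context below, answer the question.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")
```

Because the model is told to answer only from retrieved context, it has far less room to invent plausible-sounding but unsupported answers.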

Note that the accuracy of the output is often determined by the quality of the input (in this case, your question or prompt).

Finally, you should always have a human in the loop to check the generated content. We never take autonomous actions based on LLM-generated content.
