A mental model is how we think something works and how our actions affect it. We create these models for everything we deal with, like products or places. They help us know what to expect and what value we can get.
When introducing new technology, it’s important to explain its abilities and limitations in stages. People will give feedback, influencing how the technology works, and this, in turn, changes how people use it.
To avoid misunderstandings, it’s crucial to be clear about what the technology can and can’t do, especially when it comes to human-like interactions. This helps set realistic expectations and builds trust with users.
Understanding Current Mental Models Is Key
Consider how people currently handle the task your AI product aims to help with. What they’re used to doing will likely shape how they first understand the new product.
For instance, if people usually organize their daily schedule on paper, they might assume an AI planner works the same way: reading through their plans, understanding the priorities, and arranging tasks accordingly. It might surprise them to find that the AI uses other factors, like the time of day or the duration of each task, to organize the schedule.
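To make the mismatch concrete, here is a minimal, hypothetical sketch of such a planner. The `Task` fields, the sorting rules, and the sample data are all invented for illustration; the point is only that a tool may order tasks by duration while the user expects it to honor the order they wrote things down.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration_min: int   # estimated duration in minutes
    listed_order: int   # the order the user wrote the task down

def human_like_order(tasks):
    # What many users expect: preserve the order they listed tasks in.
    return sorted(tasks, key=lambda t: t.listed_order)

def ai_planner_order(tasks):
    # What the tool might actually do: put short tasks first,
    # ignoring the user's written order entirely.
    return sorted(tasks, key=lambda t: t.duration_min)

tasks = [
    Task("Write report", duration_min=90, listed_order=1),
    Task("Reply to email", duration_min=10, listed_order=2),
    Task("Book flights", duration_min=25, listed_order=3),
]

print([t.name for t in human_like_order(tasks)])
# ['Write report', 'Reply to email', 'Book flights']
print([t.name for t in ai_planner_order(tasks)])
# ['Reply to email', 'Book flights', 'Write report']
```

The two functions produce different schedules from the same input, which is exactly the kind of gap between a user's mental model and a product's actual behavior that the onboarding needs to explain.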
Mental models are like mental shortcuts that simplify complex concepts and help individuals navigate their environment.
Here are some key points about mental models:
- Abstraction: Mental models are abstractions of reality. They don’t capture every detail of a situation but focus on the most relevant aspects.
- Simplification: They simplify complex phenomena, allowing individuals to grasp and work with complex ideas or systems.
- Interconnectedness: Mental models are often interconnected. One mental model can lead to the development of another, and they can be nested within each other.
- Subjectivity: Mental models are personal and subjective. They are shaped by an individual’s experiences, beliefs, and prior knowledge.
- Adaptability: People can adapt and refine their mental models as they learn and gain new experiences. This adaptability allows them to make better decisions and predictions over time.
- Heuristics: Mental models often involve heuristics, which are mental shortcuts or rules of thumb that simplify decision-making.
Examples of mental models include:
- The Map is not the Territory: This mental model suggests that our mental representations (maps) of reality are not reality itself (the territory). We interpret and navigate the world through our mental models, but these models are not the same as the actual world.
- Confirmation Bias: This mental model highlights our tendency to seek out and interpret information in ways that confirm our preexisting beliefs. Recognizing this bias can help us make more objective decisions.
- Inversion: Inversion is a problem-solving mental model where you consider the opposite of what you want to achieve. By thinking about what you want to avoid, you can often find better solutions to problems.
- Pareto Principle (80/20 Rule): This model suggests that roughly 80% of effects come from 20% of causes. It’s a useful concept for focusing effort and resources on the most significant factors.
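The Pareto Principle is easy to check numerically. The sketch below uses invented sales figures; `pareto_share` simply measures what fraction of the total is contributed by the top 20% of items.

```python
def pareto_share(values, cause_fraction=0.2):
    """Fraction of the total contributed by the top `cause_fraction` of items."""
    ordered = sorted(values, reverse=True)
    k = max(1, round(len(ordered) * cause_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Hypothetical sales figures for ten products.
sales = [500, 320, 40, 35, 30, 25, 20, 15, 10, 5]
share = pareto_share(sales)
print(f"Top 20% of products generate {share:.0%} of sales")
# Top 20% of products generate 82% of sales
```

Here the top two products out of ten account for 820 of 1000 total sales, roughly the 80/20 split the principle describes.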
Key Considerations for Onboarding Users to a New Technology Like AI
- Be ready for change. AI lets systems adapt and improve for users over time; personalized, probability-based experiences have become common. Building on what people already know helps them feel comfortable.
- Take it step by step. When you introduce people to a product with AI, explain what it can do, what it can’t, how it might change, and how to make it better.
- Learn together. People will share feedback on AI products, which will make the products better and change how people use them. This, in turn, affects how the AI learns. People’s ideas about how the AI works will also change over time.
- Don’t expect AI to be just like humans. People often assume AI-powered products can do things the way humans do, but they can’t. Explain that these products work through algorithms, and be clear about what they can and can’t do.
Understanding AI can be tricky when it seems to act like a person but actually works quite differently.
Take, for example, an “automatic recipe suggestion” tool. It might sound like it suggests recipes just like a person would, but automatically. However, the tool might miss some nuances, like personal tastes or dietary restrictions, leading to unexpected results.
The key is to make clear that AI has its limits and works differently from humans.
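A toy version of that recipe tool shows the gap. This is a hypothetical sketch: the recipes, pantry contents, and matching rule are all invented. The suggester matches only on available ingredients, so it has no notion of taste or dietary restrictions, despite the human-sounding name.

```python
def suggest_recipes(pantry, recipes):
    # A naive "automatic" suggester: a recipe qualifies if every one of
    # its ingredients is in the pantry. There is no model of the user's
    # preferences, so a vegetarian may still be offered meat dishes.
    return [name for name, ingredients in recipes.items()
            if ingredients <= pantry]

recipes = {
    "chicken stir-fry": {"chicken", "rice", "soy sauce"},
    "veggie omelette": {"eggs", "peppers"},
}
pantry = {"chicken", "rice", "soy sauce", "eggs", "peppers"}

print(suggest_recipes(pantry, recipes))
# ['chicken stir-fry', 'veggie omelette']
```

Both recipes are suggested, even to a user who never eats chicken. A person making the same suggestion would silently apply that context; the tool applies only its ingredient rule, which is why its actual logic, not its human-like framing, is what users need explained.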