You+AI: Part V: Trust and Explainability

In this fifth part of the series (the previous part is available here), I’ll explore when and how to explain the actions of your AI, the data it relies on for decision-making, and the level of confidence in the results it produces.

AI-powered systems do not produce 100% accurate results; they operate on the basis of probability and uncertainty. To build trust in their outcomes, it’s crucial to provide explanations at a level everyone can understand, helping users grasp how these systems function.

When users have a clear understanding of the system’s abilities and limitations, they can decide when to trust it to assist them in achieving their goals and when to use their own judgment.

Here are the Top 4 Considerations for AI-based Products:

Calibrating User Trust

Assist users in understanding when to trust AI predictions. Since AI relies on statistics, complete trust isn’t advisable. Users should use system explanations to gauge when to rely on the predictions and when to use their own judgment.

For Example: In a health app, if the AI suggests a potential diagnosis, the user should consider the statistical basis for the suggestion rather than relying on it blindly. If the AI reports high confidence and the user sees a clear explanation, the prediction can be relied upon with confidence.

As a Best Practice, provide clear explanations and insights alongside predictions to empower users in making informed decisions. Regularly update users on the limitations of the AI system.
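To make this concrete, here is a minimal sketch of pairing a prediction with its confidence and a hedging note. It assumes a scikit-learn-style classifier exposing `predict_proba`; the 0.7 threshold and the wording are illustrative assumptions, not a prescription.

```python
# Minimal sketch: surface a prediction together with its confidence so the
# user can decide how much to trust it. Assumes a fitted scikit-learn-style
# classifier; the 0.7 hedging threshold is an illustrative assumption.

def present_prediction(model, features):
    proba = model.predict_proba([features])[0]  # class probabilities
    label = proba.argmax()                      # index of the most likely class
    confidence = proba[label]

    message = f"Suggested result: class {label} (confidence {confidence:.0%})"
    if confidence < 0.7:  # hypothetical cut-off for adding a caution
        message += " Low confidence; please apply your own judgment."
    return message
```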

Trust Calibration Throughout the Product Experience

Incorporate trust-building across the user journey. Developing the appropriate level of trust is a gradual process. As AI evolves, users’ interactions with the product will also evolve.

For Example: In a virtual assistant, as users interact more and observe the AI adapting to their preferences, they naturally build trust. The system learns and evolves over time, aligning with the user’s changing needs.

As a Best Practice, implement a gradual onboarding process to introduce users to AI capabilities and updates. Encourage user feedback to enhance the system’s adaptability.

Optimizing for Understanding

Prioritize clarity in communication. Complex algorithms may not always have a straightforward explanation. Developers may not fully understand the intricacies themselves, and even when a model is explainable, conveying the explanation in user-friendly terms can be challenging.

For Example: A recommendation system on a streaming platform might use a complex algorithm. While the developers can’t explain every detail, the system can provide simple insights like “Because you enjoyed X, we suggest Y.”

As a Best Practice, offer simplified explanations, use visual aids, and employ user testing to ensure that even non-technical users can comprehend the system’s outputs.
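As a simple illustration, a “Because you enjoyed X, we suggest Y” message can be generated from a plain template once the recommender has picked a seed item. This is only a sketch; the titles and the upstream recommendation logic are assumed.

```python
# Minimal sketch of a template-based explanation for a recommendation.
# The recommendation logic itself is out of scope; we assume an upstream
# system supplies the (seed_title, recommended_title) pair.

def explain_recommendation(seed_title: str, recommended_title: str) -> str:
    return f"Because you enjoyed {seed_title}, we suggest {recommended_title}."

# Hypothetical usage:
print(explain_recommendation("Show X", "Show Y"))
```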

Managing Influence on User Decisions

Control the impact on user decisions. Since AI outputs may require user action, determining when and how the system communicates confidence levels is crucial for guiding user decisions and building trust.

For Example: In a financial app, if the AI suggests investment options, displaying a confidence score can help the user assess the reliability of the suggestion before making investment decisions.

As a Best Practice, clearly communicate confidence levels, and incorporate user feedback to refine the timing and presentation of confidence information. Regularly update users on the system’s performance and reliability metrics.
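One way to operationalize this, sketched below under assumed cut-offs, is to band the raw model score into the qualitative tiers users actually see; in practice the thresholds should be calibrated and validated through user testing.

```python
# Minimal sketch: map a raw confidence score to user-facing wording.
# The cut-offs (0.9, 0.6) are illustrative assumptions only.

def confidence_tier(score: float) -> str:
    if score >= 0.9:
        return "High confidence"
    if score >= 0.6:
        return "Moderate confidence: review before acting"
    return "Low confidence: treat as a starting point only"

for s in (0.95, 0.72, 0.40):
    print(f"{s:.2f} -> {confidence_tier(s)}")
```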

In conclusion, the successful integration of AI into any system requires a strategic focus on three fundamental aspects.

Firstly, evaluating and fostering user trust in the AI system is paramount for establishing credibility and user confidence.

Secondly, determining the opportune moments for providing explanations ensures that users comprehend the system’s functioning, contributing to a seamless user experience.

Lastly, presenting confidence levels in AI predictions adds a layer of transparency, enabling users to make informed decisions.

By conscientiously addressing these three points, one can pave the way for delivering an exceptional product or professional service that leverages the power of AI effectively.

You+AI: Part IV: Mental Models

A mental model is how we think something works and how our actions affect it. We create these models for everything we deal with, like products or places. They help us know what to expect and what value we can get.

When introducing new technology, it’s important to explain its abilities and limitations in stages. People will give feedback, influencing how the technology works, and this, in turn, changes how people use it.

To avoid misunderstandings, it’s crucial to be clear about what the technology can and can’t do, especially when it comes to human-like interactions. This helps set realistic expectations and builds trust with users.

Understanding Current Mental Models Is Key

Consider how people currently deal with the task your AI product aims to help with. What they’re used to doing will probably influence how they first understand a new product.

For instance, if people usually organize their daily schedule on paper, they might assume an AI planner works the same way. They could expect the AI to go through their plans, understand the priorities, and arrange them accordingly. It might surprise them, however, to find that the AI uses other factors, like the time of day or the duration of the tasks, to organize the schedule.

Mental models are like mental shortcuts that simplify complex concepts and help individuals navigate their environment.

Here are some key points about mental models:

  1. Abstraction: Mental models are abstractions of reality. They don’t capture every detail of a situation but focus on the most relevant aspects.
  2. Simplification: They simplify complex phenomena, allowing individuals to grasp and work with complex ideas or systems.
  3. Interconnectedness: Mental models are often interconnected. One mental model can lead to the development of another, and they can be nested within each other.
  4. Subjectivity: Mental models are personal and subjective. They are shaped by an individual’s experiences, beliefs, and prior knowledge.
  5. Adaptability: People can adapt and refine their mental models as they learn and gain new experiences. This adaptability allows them to make better decisions and predictions over time.
  6. Heuristics: Mental models often involve heuristics, which are mental shortcuts or rules of thumb that simplify decision-making.

Examples of mental models include:

  • The Map is not the Territory: This mental model suggests that our mental representations (maps) of reality are not reality itself (the territory). We interpret and navigate the world through our mental models, but these models are not the same as the actual world.
  • Confirmation Bias: This mental model highlights our tendency to seek out and interpret information in ways that confirm our preexisting beliefs. Recognizing this bias can help us make more objective decisions.
  • Inversion: Inversion is a problem-solving mental model where you consider the opposite of what you want to achieve. By thinking about what you want to avoid, you can often find better solutions to problems.
  • Pareto Principle (80/20 Rule): This model suggests that roughly 80% of effects come from 20% of causes. It’s a useful concept for focusing effort and resources on the most significant factors (a toy numeric check follows this list).
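
As a toy check of the Pareto pattern (the numbers below are invented for illustration and chosen to roughly follow the 80/20 shape):

```python
# Toy sketch: what share of the total effect comes from the top 20% of causes?
# The sample values are made up for illustration.

effects = sorted([400, 300, 50, 40, 30, 25, 20, 15, 10, 10], reverse=True)
top_fifth = effects[: max(1, len(effects) // 5)]   # top 20% of causes
share = sum(top_fifth) / sum(effects)
print(f"Top 20% of causes account for {share:.0%} of the effect")  # ~78%
```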

Here are some key considerations for onboarding users to a new technology like AI:

  • Be ready for changes. AI lets systems adapt and get better for users. Over time, things like personalized experiences based on probability have become common. Using what people already know can make them feel more comfortable.
  • Take it step by step. When you introduce people to a product with AI, explain what it can do, what it can’t, how it might change, and how to make it better.
  • Learn together. People will share feedback on AI products, which will make the products better and change how people use them. This, in turn, affects how the AI learns. People’s ideas about how the AI works will also change over time.
  • Don’t expect AI to be just like humans. People often think products with AI can do things like humans, but that’s not true. It’s important to explain how these products work using algorithms and to be clear about what they can and can’t do.

Understanding AI can be tricky when it seems to act like a person but actually works quite differently.

Take, for example, an “automatic recipe suggestion” tool. It might sound like it suggests recipes just like a person would, but automatically. However, the tool might miss some nuances, like personal tastes or dietary restrictions, leading to unexpected results.

The key is to make clear that AI has its limits and works differently from humans.

You+AI: Part III: Data Collection and Evaluation

To enable predictions, AI-powered products need to instruct their underlying machine learning model to identify patterns and correlations in data. This data, known as training data, can include collections of images, videos, text, audio, and more.

You can leverage existing data sources or gather new data specifically for training your system.

For example, you can utilize Overture Maps, which was recently open-sourced, to develop an AI-based predictive navigation system.

The quality and labeling of the training data you obtain or collect directly shape the output of your system, influencing the overall user experience.

Consider the following guiding principles for collecting and evaluating data for AI systems:

  • Acquire High-Quality Data: Begin by strategizing the acquisition of high-quality data as a foundational step. While model development is often prioritized, allocating adequate time and resources to ensure data quality is essential. Proactive planning during data gathering and preparation is crucial to prevent adverse consequences stemming from suboptimal data choices later in the AI development process.
  • Map Data Needs to User Needs: Identify the type of data necessary for training your model, taking into account factors such as predictive capability, relevance, fairness, privacy, and security. Read my previous article for more details.
  • Source your data ethically and diligently: Whether you are utilizing pre-labeled datasets (there are many sources of pre-labeled data; Google’s Dataset Search and the FACET Dataset Explorer are excellent resources) or collecting your own, it’s crucial to rigorously evaluate both the data itself and the methods employed in its collection to ensure they align with the ethical standards and requirements of your project.
  • Thoroughly prepare and document your data: Ensure your dataset is suitably primed for AI applications, and document both its contents and the decisions made during the data gathering and processing stages. Partition the data into training and test sets (see the sketch after this list). Test sets consist of data unfamiliar to your model, serving as a means to determine the effectiveness of your model. The training set must be sufficiently extensive to effectively train your model, while the test set should be sizable enough to thoroughly evaluate your model’s performance.
  • Adapt your design for labelers and labeling processes: Data labeling is the process of identifying raw data (images, text files, videos, etc.) and adding one or more meaningful, informative labels to provide context so that a machine learning model can learn from it. Labels can be applied through automated procedures or by individuals referred to as labelers. The term “labelers” is inclusive, encompassing diverse contexts, skill sets, and levels of specialization. In supervised learning, the accuracy of data labels is paramount for obtaining valuable insights from your model. Deliberate design of labeler instructions and user interface flows can enhance the quality of labels, thereby improving overall model output.
  • Fine-tune your model: Once your model is operational, scrutinize the AI output to verify its alignment with product goals and user requirements. The What-If Tool by Google is an excellent resource for fine-tuning your model. If discrepancies arise, troubleshoot by investigating potential issues with the underlying data.
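
As promised above, here is a minimal sketch of the train/test partitioning step using scikit-learn. The file name, column name, and 80/20 ratio are illustrative assumptions.

```python
# Minimal sketch: partition labeled data into training and test sets.
# "labeled_examples.csv" and the "label" column are hypothetical; the
# 80/20 ratio is a common default, not a rule.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("labeled_examples.csv")
features = data.drop(columns=["label"])
labels = data["label"]

# Hold out 20% of the data the model never sees during training, so the
# test set gives an honest estimate of real-world performance.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42, stratify=labels
)
```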

In conclusion, it is evident that data serves as the cornerstone of any AI system. The guidelines presented in the article offer valuable insights for obtaining accurate, meaningful, and reliable data for your upcoming experiments or new product development. Recognizing the pivotal role of data in shaping the performance and outcomes of AI models, the provided strategies underscore the importance of meticulous planning, ethical sourcing, and thorough documentation.

In essence, the article provides a roadmap for practitioners to not only gather data effectively but also to enhance the overall integrity and reliability of their AI systems. Implementing these guidelines can lead to more accurate predictions, improved user experiences, and ultimately, the successful deployment of AI-driven solutions.

You+AI: Part II: Building Better AI

This is the third article in the “You+AI” series. The previous article can be accessed here.

To effectively harness the potential of AI, it is essential to align its strengths with real user needs and define success through thoughtful consideration. Let’s delve into key considerations for identifying suitable user problems, augmenting human capabilities, and optimizing AI’s reward function.

Aligning AI Solutions with Real User Problems

The first crucial step in developing a successful AI product is aligning it with genuine user needs. Finding that sweet spot where user requirements intersect with AI strengths is essential. This not only ensures that the AI product addresses a tangible problem but also that it adds unique value.

Instead of simply asking, “Can we use AI to solve this problem?” start by exploring human-centered solutions with questions like, “How might we solve this problem?” Evaluate whether AI can bring a unique value proposition to the table, offering solutions beyond traditional approaches.

The emphasis here is on employing AI as a solution to real-world problems, all while keeping ethical considerations at the forefront of the development process.

Assessing Automation vs. Augmentation

Once a user problem has been identified, the next crucial decision revolves around whether to automate certain aspects or augment existing processes.

Automate tasks that are challenging, repetitive, or unpleasant, especially when there is a clear consensus on the “correct” way to perform them.

Conversely, augment tasks that people enjoy, that hold social value, or where consensus on the “correct” way to perform them is elusive.

To understand user preferences, ask questions such as, “If you had a human assistant for this task, what duties would you assign them?”

Striking the right balance between automation and augmentation ensures that the AI product complements human capabilities, providing a more seamless and user-friendly experience.

Designing & Evaluating the Reward Function

Every AI model follows a guide called a “reward function.” It’s a set of rules, written in math, that helps the AI decide what counts as a good or bad prediction. This guide influences how your system behaves and can greatly impact how users experience it. Think of it as the steering wheel for your AI’s actions.
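To make “rules written in math” concrete, here is a toy reward function. The asymmetric penalties are an assumption for illustration: a product team might decide a false positive annoys users twice as much as a false negative, and encode that judgment directly.

```python
# Toy sketch of a reward function: score each prediction as good or bad.
# The penalty weights are illustrative assumptions a team would choose
# deliberately, because they steer the model's behavior.

def reward(predicted: bool, actual: bool) -> float:
    if predicted == actual:
        return 1.0    # correct prediction
    if predicted and not actual:
        return -2.0   # false positive: assumed twice as costly to users
    return -1.0       # false negative
```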

Establish a clear framework for success and failure within your team. Define specific success metrics and meaningful thresholds.

For instance, “If our specific success metric for the AI-driven feature drops below a meaningful threshold, we will take a specific action.” This ensures a collective understanding of the desired outcomes and a swift response to deviations.
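Sketched below is one hypothetical way to encode such a rule; the metric (weekly precision), the 0.85 threshold, and the alerting action are all placeholders for whatever your team agrees on.

```python
# Minimal sketch: "if the success metric drops below the threshold, act".
# The metric, the threshold, and the alert action are illustrative assumptions.

SUCCESS_METRIC_THRESHOLD = 0.85

def check_success_metric(weekly_precision: float) -> None:
    if weekly_precision < SUCCESS_METRIC_THRESHOLD:
        # In a real product this might page the on-call team or open a ticket.
        print(f"ALERT: precision {weekly_precision:.2f} is below "
              f"{SUCCESS_METRIC_THRESHOLD}; review the AI feature.")

check_success_metric(0.79)
```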

In essence, a well-crafted reward function is the cornerstone of an AI product that not only meets user needs but does so responsibly and ethically.

By navigating these three key aspects – aligning with user problems, assessing automation versus augmentation, and designing a robust reward function – developers can pave the way for AI products that are not just technologically advanced but are also user-centric, responsible, and designed for long-term success.

You+AI: Part I: Design Patterns

As announced via my last article, this is the beginning of the “You+AI” series, which has 25 parts discussing how AI can be used in making products and decisions to get the most benefit. The 23 guidelines it covers are helpful for product managers, consultants, and others who are new to AI and want to use it but may not understand all its pros and cons.

Given that products are designed for people, it’s crucial to consider their needs throughout the creation process. These guidelines encompass the entire product development journey, from inception to the final stage when it’s ready for customers to use.

Here are some of the things we talk about in the series:

  • How to start using AI in a way that focuses on people.
  • Using AI in products.
  • Helping users learn about new AI features.
  • Explaining how AI systems work to users.
  • Making sure datasets used by AI are made responsibly.
  • Building and making sure users trust the product.
  • Balancing how much control users have with how much is automated.
  • Giving support after the product is finished.

Google has free resources that can help you learn more about these topics. I’ve reviewed these resources and compiled a catalog of design patterns, complete with explanations and examples.

You can access these on-the-go through my Google Drive.

Here is a quick snapshot of how these patterns have been organized:

[Image: snapshot of how the design-pattern catalog is organized]

To safeguard the information and provide access exclusively to committed users, I’ve implemented a “Request-Access” model. If you want access, please send an email to pradeeppatel2k25@gmail.com.

Designing Tomorrow: Using AI in Product Development

Last year was the year of AI, but this year the market will be flooded with AI-based products. Unlike other design disciplines, whether UX/design, enterprise, or highly complex real-time systems, AI products call for an altogether different set of design and development principles, methods, and frameworks.

Through this article, I am starting a 25-part series on AI-based product design and development principles, guides, and frameworks that will make your product stand out from the expected flood of AI-based products in 2024.

This set of articles will be a comprehensive guide to designing products with AI, covering a wide range of topics, from the fundamentals of human-centered AI to specific design patterns and case studies.

To start with, let’s consider some key questions that come up in the product development process:

  • Whether your product needs AI and how to measure success
  • How to get started with human-centered AI
  • When and how to use AI in a product
  • How to onboard users to new AI features
  • How to responsibly build a dataset
  • How to help users build and calibrate trust in a product
  • How to find the right balance of user control and automation
  • Designing for fairness and non-discrimination
  • Supporting users when something goes wrong
  • Building trust through transparency and explainability

By the end of this series, you will be empowered with a practical framework for using AI to create products that are both useful and enjoyable for users.

Stay tuned for the next article on the 23 patterns.