From Hype to Help: Designing Trustworthy AI Products

Design | Mon, Jun 30, 2025

By Valentina Abanina, UX/UI Designer, IA

Artificial intelligence technologies are increasingly seen as one of the main drivers of innovation and progress. Each year they become more adaptive and sophisticated, and their applications expand across nearly every aspect of our lives.

However, while web and mobile app development companies continue to expand the use of AI in their products and business processes, the level of mistrust in AI remains high and influences how users perceive and engage with these technologies.

In this article, we review the latest research on trust in AI globally and in Canada, examine when it is appropriate to integrate AI into products versus when rule-based or heuristic approaches are better suited, and highlight design guidelines from Google, IBM, and Microsoft. We also cover industry best practices for integrating AI into products, emphasizing transparency, reliability, data quality, and clear communication to build user trust.

Trust in AI: Global and Canadian Perspectives

AI acceptance varies among users. Some people trust AI easily, especially if they have had positive experiences with smart devices in the past. Others remain skeptical, questioning AI-generated outputs due to concerns about accuracy, bias, or reliability.

A University of Melbourne–KPMG global survey (Nov 2024–Jan 2025) across 47 countries with over 48,000 respondents found that 58% of people consider AI untrustworthy, with lower levels of confidence reported in advanced economies. The 2024 Edelman Trust Barometer Tech Sector report states that only half of global respondents trust AI, well below trust in the technology industry as a whole. The same study also shows that trust in AI companies has declined from 62% in 2019 to 54% in 2024. Furthermore, a June 2025 Euromonitor consumer survey revealed that only about 40% of consumers feel comfortable relying on generative AI, highlighting persistent skepticism despite its increasing use.

In Canada, trust in AI remains low. A KPMG Canada survey ranks the country 44th out of 47 in AI literacy and 42nd in trust in AI systems. Only about a quarter of Canadians have received AI training. Concerns about misinformation and the reliability of AI technologies, such as self-driving cars, are common. For these and other reasons, there is strong public support for stricter AI regulation, highlighting a clear demand for oversight and accountability.

When “AI-Powered” Turns Customers Away

Multiple studies show that users often react with caution to “AI-powered” features. People usually want products that are easy to use, and the AI label can make something seem complicated or unreliable. Research also shows that calling a product “AI-powered” can lower trust and reduce purchase intent. This depends on the context—terms like “AI-powered car” or “AI-powered illness diagnosis” feel risky to many users, while “AI-powered customer service” is generally more acceptable because people consider it a less critical task and are already familiar with chatbots and automated support systems.

Therefore, when presenting your AI-powered product, focus on how it improves the user experience rather than the technology behind it.

When explaining AI recommendations, share only the key information users need to make decisions and take action: avoid overwhelming them with technical details. More detailed explanations can be provided outside the main product flow, such as through marketing materials, onboarding guides, help centers, or short educational videos. User testing can help determine what information users need to use the product effectively and why it matters. The right level of technical detail depends on the product and audience.

Where AI Fits in Product Design

Before you start building with AI, make sure the product or feature that you have in mind requires AI or would be enhanced by it.

AI is particularly useful for:

  • Personalized recommendations (e.g., movie suggestions tailored to users)
  • Predictive analytics (e.g., forecasting weather or flight price changes)
  • Natural language processing (e.g., chatbots, voice assistants)
  • Image recognition (e.g., facial recognition, object detection)

A rule-based or heuristic approach may be better when:

  • Predictability is crucial
    In fields like finance and law, accuracy and consistency are essential.

In finance, tasks such as tax calculations, invoicing, and budgeting require exact results. Unexpected variations caused by AI could lead to major financial errors; a short sketch after this list shows the kind of deterministic, auditable logic these tasks rely on.

In law, while AI can help with reviewing documents, decisions must follow strict legal standards that demand consistency and predictability.

  • Transparency is required
    Industries such as healthcare, banking, and government need clear, auditable systems. Rule-based solutions are easier to explain and verify, which helps meet compliance requirements.
  • Users prefer manual control
    In creative and hands-on work, users often value control over automation.

In areas like graphic design, UX/UI, and video editing, human creativity is key. In carpentry, cooking, or crafting, AI can support the process, but people usually enjoy doing these tasks themselves.
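
To make the contrast concrete, here is a minimal Python sketch of the kind of deterministic, rule-based logic these domains rely on. The bracket thresholds and rates are purely illustrative, not real tax rules; the point is that fixed rules always produce the same, auditable result for the same input, which a probabilistic model cannot guarantee.

```python
# A minimal sketch of a rule-based calculation where predictability matters.
# The bracket thresholds and rates below are illustrative, not real tax law.

def calculate_tax(income: float) -> float:
    """Apply fixed, auditable rules: the same input always yields the same output."""
    brackets = [
        (50_000, 0.15),        # first 50,000 taxed at 15%
        (100_000, 0.25),       # next 50,000 taxed at 25%
        (float("inf"), 0.35),  # remainder taxed at 35%
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return round(tax, 2)

print(calculate_tax(120_000))  # always 27000.0, deterministic and easy to verify line by line
```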

Recall vs. Precision

When building an AI-powered product, you need to choose between two priorities:

  • Recall: Broad coverage—find as many relevant items as possible, even if some aren’t a perfect match (e.g., Netflix movie recommendations, e-commerce suggestions).
  • Precision: Accuracy first—show fewer but highly relevant results (e.g., health app activity tips, food delivery suggestions, LinkedIn job matches).
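
For intuition, here is a short Python sketch that computes both metrics on made-up recommendation data; the item names and numbers are invented purely to show how a broad list favours recall while a narrower list favours precision.

```python
# A minimal sketch of the recall/precision trade-off on made-up data.
# "relevant" is what the user actually wanted; "shown" is what the system surfaced.

relevant = {"movie_a", "movie_b", "movie_c", "movie_d"}

def precision_recall(shown: set[str]) -> tuple[float, float]:
    hits = len(shown & relevant)    # relevant items we actually showed
    precision = hits / len(shown)   # how much of what we showed was relevant
    recall = hits / len(relevant)   # how much of the relevant set we covered
    return precision, recall

# Broad list: higher recall, lower precision -> (0.5, 0.75)
print(precision_recall({"movie_a", "movie_b", "movie_c", "movie_x", "movie_y", "movie_z"}))
# Narrow list: higher precision, lower recall -> (1.0, 0.5)
print(precision_recall({"movie_a", "movie_b"}))
```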

Designing Human-Centered AI Experiences

The following recommendations are useful guidelines for designing with AI.

Build Trust Through Clarity and Control
Help users feel confident by setting expectations early. Use familiar UI patterns, avoid vague “AI magic,” and give users manual controls when possible. Let them explore, test, and interact with the system at their own pace.

Prioritize Transparency and Privacy
Clearly explain how user data is used and why. Ask for permissions upfront, and make it easy to adjust privacy settings. Transparency is key, especially in sensitive contexts such as those involving personally identifiable information, protected health information, or biometric data.

Design for Feedback and Adaptability
Encourage users to give feedback when something doesn’t feel right. This could be as simple as a thumbs-up/down or hiding unwanted suggestions. Show users that their input matters and, when possible, share how it’s used to improve the product.
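
As a rough illustration, a thumbs-up/down event can be captured as a small structured record so it can later inform ranking or retraining. The schema and the store_feedback() helper below are assumptions for the sketch, not any specific product's API.

```python
# A minimal sketch of capturing lightweight user feedback on an AI suggestion.
# The schema and the store_feedback() helper are hypothetical, for illustration only.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SuggestionFeedback:
    suggestion_id: str
    user_id: str
    action: str      # e.g. "thumbs_up", "thumbs_down", "hide"
    timestamp: str

def store_feedback(event: SuggestionFeedback) -> None:
    # In a real product this would write to an analytics or feedback pipeline.
    print("recorded feedback:", asdict(event))

store_feedback(SuggestionFeedback(
    suggestion_id="rec_123",
    user_id="user_42",
    action="thumbs_down",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```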

Support High-Stakes Use with Human Context
For complex or potentially high-risk scenarios, offer additional support—whether that’s showing how confident the AI is in its prediction or giving users access to human experts or communities to help interpret results.
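
One common pattern, sketched below with an assumed 0.8 threshold and hypothetical function names, is to surface the model's confidence alongside the prediction and suggest a human fallback when confidence is low.

```python
# A minimal sketch of confidence-based escalation to a human expert.
# The 0.8 threshold and the function name are assumptions for illustration.

def present_result(prediction: str, confidence: float, threshold: float = 0.8) -> str:
    if confidence >= threshold:
        return f"{prediction} (model confidence: {confidence:.0%})"
    # Below the threshold, surface the uncertainty and offer a human fallback.
    return (f"Possible result: {prediction} (confidence: {confidence:.0%}). "
            "We recommend reviewing this with a specialist.")

print(present_result("Condition A", 0.93))
print(present_result("Condition A", 0.55))
```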

Ensure Data Quality and Relevance
Strong user experiences start with strong data. Regularly review your training data for relevance and bias. Poor or outdated data can damage trust and make even the best UX design ineffective.
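
As a lightweight starting point, simple automated checks can flag stale records and class imbalance before data reaches the model. The column names and thresholds below are assumptions for illustration, not a prescribed pipeline.

```python
# A minimal sketch of training-data health checks: staleness and class imbalance.
# Field names and thresholds are illustrative assumptions.

from collections import Counter
from datetime import datetime, timezone

records = [
    {"label": "approved", "updated": "2025-05-01"},
    {"label": "approved", "updated": "2023-01-15"},
    {"label": "rejected", "updated": "2025-06-10"},
]

def audit(rows, max_age_days=365, max_share=0.8):
    now = datetime.now(timezone.utc)
    stale = [
        r for r in rows
        if (now - datetime.fromisoformat(r["updated"]).replace(tzinfo=timezone.utc)).days > max_age_days
    ]
    counts = Counter(r["label"] for r in rows)
    dominant_share = max(counts.values()) / len(rows)
    return {
        "stale_records": len(stale),
        "dominant_class_share": round(dominant_share, 2),
        "imbalanced": dominant_share > max_share,
    }

print(audit(records))
```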

AI Design Guidelines from Leading Tech Companies

To help teams design more trustworthy and user-friendly AI experiences, major tech companies have published practical guidelines based on research and real-world use.

For example, Google’s People + AI Guidebook focuses on setting clear expectations and helping users build accurate mental models of AI systems. IBM’s Design for AI highlights ethics, inclusivity, and explainability. Microsoft’s Human-AI Interaction Guidelines stress timely feedback, error recovery, and user control.

Together, these frameworks offer practical principles to design AI tools that are understandable, user-centered, and ethical.

Conclusion

While AI continues to drive innovation and transform industries, building user trust remains a priority. Understanding when AI is appropriate and designing with transparency, control, and clear communication can help bridge the trust gap. By following established guidelines from leading tech companies and prioritizing human-centered experiences, designers and developers can create AI-powered products that users feel confident and comfortable using.
