
The False Dichotomy in UX Practice
Who precisely are the users that User Experience professionals consider when designing digital experiences? The industry has established various approaches to characterise users, with personas being the predominant method for bringing abstract users to life as representative archetypes. Yet these archetypes inevitably flatten nuance, and the essence of effective UX lies in delivering a complete experience to all users, including those not captured by conventional personas.
In my extensive experience reviewing user personas across numerous organisations, I struggle to recall encountering a persona representing a disabled user. This absence is telling, but the fundamental issue extends beyond persona development.
The critical problem is that many UX professionals practise user experience design without integrating accessibility principles. The two disciplines are frequently, and erroneously, treated as distinct skill sets and separate domains of expertise, often handled by different specialists: the Accessibility Expert and the UX Expert, each supported by designers and consultants.
The Paradox of Inaccessible Experiences
Accessibility is persistently viewed as a specialised or supplementary component of UX rather than its foundation. This perspective creates a profound paradox: how can one genuinely design an experience for users when barriers prevent access to the experience itself?
This is comparable to architects designing an innovative building whose entrances are dimensioned only for people of average height and weight: 178 cm tall and at most 85 kg. Anyone outside these parameters would have to struggle to get in, whilst hoping that the internal doorways might prove more accommodating.
The unvarnished truth is that user experience cannot exist without accessibility. Access is not an added value but the essential prerequisite to any digital experience.
Reconnecting Divergent Disciplines
Despite emerging chronologically after accessibility, UX has curiously neglected to build its foundations on accessibility principles. Instead, it has evolved in parallel: adjacent, but never integrated as logic would suggest. Both disciplines have now matured enough to merge and to benefit from constructive unification and active collaboration.
I continue to encounter the outdated argument that accessibility represents an additional cost—a perspective that should have been abandoned 15 years ago. Accessibility should not constitute an incremental expense but should be naturally integrated into every UX initiative from inception.
Reframing the Accessibility Imperative
Accessibility is not an added value or enhancement—it is the fundamental prerequisite for creating user experiences for everyone, regardless of their physical characteristics, access devices, connection quality, language, situational constraints, needs, or preferences.
The time has come for the UX community to recognise that without accessibility, there simply is no user experience worthy of the name.
The Trust Triangle: How Data Quality, Privacy, and AI Shape Customer Relationships
In today's data-driven world, organisations face a growing paradox: they need high-quality customer data to improve services and power AI systems, yet consumers are increasingly reluctant to share their personal information. This tension between data quality and privacy concerns isn't just a compliance issue—it’s fundamentally changing how organisations and customers interact, with profound implications for artificial intelligence adoption.
Defining Data Quality: Beyond Technical Correctness
Data quality is much more than just accurate information—it’s a multidimensional concept that determines how valuable data is for its intended purpose. While traditionally defined as "fit for use," high-quality data specifically exhibits these essential characteristics:
- Accuracy: Data correctly represents the real-world entity or event it describes
- Completeness: All required data points are present and populated
- Consistency: Data values don’t contradict each other across the dataset
- Timeliness: Data is sufficiently up-to-date for its intended use
- Relevance: Data is appropriate for the specific business need
- Accessibility: Authorised users can retrieve data when needed
- Interpretability: Data is presented in a format that users can understand
For AI systems specifically, we must add another critical dimension: representativeness—whether the data adequately reflects the population or phenomena it's meant to model, without harmful biases or significant gaps.
The quality of data directly determines the quality of AI outputs. As the saying goes in the industry: “garbage in, garbage out.”
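To make these dimensions concrete, here is a minimal sketch, in plain Python, of how completeness, consistency, and timeliness might be checked on customer records. The field names, the consistency rule, and the one-year threshold are assumptions for illustration, not a standard.

```python
from datetime import datetime, timedelta

# Illustrative customer records; field names are assumptions for this sketch.
records = [
    {"email": "ada@example.com", "country": "UK", "phone_prefix": "+44",
     "updated_at": datetime(2024, 11, 2)},
    {"email": None, "country": "FR", "phone_prefix": "+44",
     "updated_at": datetime(2019, 5, 1)},
]

REQUIRED_FIELDS = ("email", "country", "phone_prefix", "updated_at")
MAX_AGE = timedelta(days=365)                    # timeliness threshold (assumed)
PREFIX_BY_COUNTRY = {"UK": "+44", "FR": "+33"}   # consistency rule (assumed)

def quality_report(record, now):
    """Return the quality dimensions a record violates."""
    issues = []
    # Completeness: every required field is present and populated.
    if any(record.get(f) in (None, "") for f in REQUIRED_FIELDS):
        issues.append("completeness")
    # Consistency: the phone prefix must not contradict the country.
    expected = PREFIX_BY_COUNTRY.get(record.get("country"))
    if expected and record.get("phone_prefix") != expected:
        issues.append("consistency")
    # Timeliness: the record must be sufficiently up to date.
    if record.get("updated_at") and now - record["updated_at"] > MAX_AGE:
        issues.append("timeliness")
    return issues

now = datetime(2025, 1, 1)
for record in records:
    print(record["email"], quality_report(record, now))
```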
The Privacy-Quality-AI Connection
When was the last time you provided fake information on an online form? If you’ve ever used a disposable email address or entered “123 Main Street” as your address, you’re part of a widespread phenomenon affecting data quality—and now, the effectiveness of AI systems—across industries.
Consider this example: in 2010, the online game retailer Gamestation inserted a clause in its terms and conditions stating that customers agreed to surrender their "immortal souls." Over 7,500 customers accepted without complaint, proof that most people accept legal terms, privacy policies included, without reading them before sharing personal information.
This contradiction is at the heart of the data quality challenge that now extends to AI: consumers express growing concern about privacy yet rarely take protective actions like reading policies. But they do take other protective measures—providing incorrect information or abandoning interactions altogether—which directly impacts AI performance.
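The snippet below sketches how such protective behaviour might surface in a dataset, flagging entries that match common placeholder patterns. The disposable-email domains and the address pattern are illustrative assumptions; real detection would rely on maintained lists.

```python
import re

# Illustrative placeholder patterns; a real system would use maintained lists.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}   # assumed examples
PLACEHOLDER_ADDRESS = re.compile(r"^\d+\s+main\s+st(reet)?\.?$", re.IGNORECASE)

def looks_fabricated(email: str, address: str) -> bool:
    """Heuristic flag for likely-fabricated contact details."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS or bool(PLACEHOLDER_ADDRESS.match(address.strip()))

print(looks_fabricated("user@mailinator.com", "10 Downing Street"))  # True: disposable domain
print(looks_fabricated("ada@example.org", "123 Main Street"))        # True: placeholder address
print(looks_fabricated("ada@example.org", "10 Downing Street"))      # False
```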
Why Data Quality Is Critical for AI Success
- AI systems trained on poor-quality data produce unreliable outputs and potentially harmful decisions
- Biased or incomplete datasets lead to AI systems that perpetuate or amplify those biases
- Incorrect data creates unpredictable AI behaviours and erodes user trust
- Outdated information compromises AI’s ability to make relevant recommendations
High-quality, representative data is essential for creating AI systems that perform as intended and earn user trust. Yet many organisations still treat data quality as primarily a technical issue rather than a relationship challenge.
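As a sketch of how representativeness could be made operational, the snippet below compares the distribution of one attribute in a toy training set against a reference population and flags large gaps. The reference shares and the tolerance are invented for illustration.

```python
# Sketch: flag groups whose share of the training data diverges from a
# reference population. Reference shares and tolerance are illustrative.
REFERENCE_SHARES = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # assumed population shares
TOLERANCE = 0.10  # flag gaps above 10 percentage points (assumed)

training_labels = ["18-34"] * 70 + ["35-54"] * 25 + ["55+"] * 5  # toy training set

def representativeness_gaps(labels, reference, tolerance):
    """Return groups whose observed share differs from the reference by more than tolerance."""
    total = len(labels)
    gaps = {}
    for group, expected in reference.items():
        observed = labels.count(group) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

for group, (observed, expected) in representativeness_gaps(
        training_labels, REFERENCE_SHARES, TOLERANCE).items():
    print(f"{group}: observed {observed:.0%} vs expected {expected:.0%}")
```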
The Control Factor in the AI Era
Research consistently shows that consumers’ primary privacy concern isn’t about sharing data itself—it’s about losing control of that data after sharing it. This concern is amplified when AI systems enter the picture.
- People are even more reluctant to share information when they know AI will process it
- Trust in AI systems depends heavily on perceived control over personal data
- Transparency about AI usage increases willingness to share accurate information
- Meaningful human oversight of AI increases data sharing and quality
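As one sketch of what meaningful control could look like in practice, the snippet below gates AI processing behind per-purpose consent flags that the user can change at any time. The purpose names are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConsentProfile:
    """Per-purpose consent flags the user can change at any time (purposes are illustrative)."""
    allow_ai_personalisation: bool = False  # AI processing stays off unless opted in
    allow_ai_training: bool = False
    allow_human_review: bool = True         # meaningful human oversight stays available

def personalise(user_data: dict, consent: ConsentProfile) -> str:
    # The AI path only runs for users who explicitly opted in.
    if not consent.allow_ai_personalisation:
        return "generic experience (no AI processing of personal data)"
    return f"AI-personalised experience for {user_data['name']}"

print(personalise({"name": "Ada"}, ConsentProfile()))
print(personalise({"name": "Ada"}, ConsentProfile(allow_ai_personalisation=True)))
```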
Building the Trust Bridge for AI-Powered Organisations
The missing element in this equation is trust: the willingness to assume risks when benefits outweigh concerns, based on the belief that commitments will be fulfilled. Trust, in this context, rests on three components:
- Integrity – Honest, reliable behaviour in collecting and using data
- Benevolence – Belief that harm will not result from AI use
- Ability – Competence in responsible AI development and governance
When trust is high, customers provide better data. When it’s low, they withhold or falsify data—compromising AI from the start.
Practical Implications for AI Implementation
- Provide meaningful control options over how AI uses personal data
- Design explainable AI systems aligned with user expectations
- Demonstrate integrity in AI-driven decisions
- Treat data quality as a relationship outcome, not just a technical standard
Organisations that focus on trust can obtain high-quality data and respect customer privacy simultaneously.
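To illustrate the explainability point above, here is a minimal sketch of a decision object that carries its reasons and the data it consulted, with the unfavourable path routed to human review. The rule and the threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    """An AI-driven decision bundled with human-readable reasons (structure is illustrative)."""
    outcome: str
    reasons: list
    data_used: list  # which personal fields were consulted, for transparency

def score_application(applicant: dict) -> ExplainedDecision:
    # Hypothetical single-rule model; the threshold is invented for illustration.
    if applicant["income"] >= 30_000:
        return ExplainedDecision("approved",
                                 ["income meets the 30,000 threshold"], ["income"])
    # Unfavourable outcomes are routed to a person: meaningful human oversight.
    return ExplainedDecision("referred to human review",
                             ["income below the 30,000 threshold"], ["income"])

decision = score_application({"income": 24_000})
print(decision.outcome, "|", "; ".join(decision.reasons))
```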
Moving Forward in the AI Era
As AI becomes more pervasive, the relationship between data quality, privacy, and trust becomes even more critical.
By building trust with users, organisations can fuel high-performance AI systems while safeguarding relationships and reputation.