Human-centered AI: A designer’s perspective
I believe that human-centered thinking and design thinking could provide strategic value in shaping AI experiences: ones that are not merely safe or useful, but deeply aligned with how humans think, inquire, and should evolve in the long term.
“A tool for critical thought, free of agenda; a truth-seeking epistemological mirror that can be experienced as curious, contemplative, and driven by love towards humanity.”
My current perception is that AI alignment focuses on controlling AI to follow predefined human values, preventing risks, and ensuring AI doesn’t go rogue. I see this as a fear-based mindset. Thinking about unintended consequences is a valid strategy, as long as our fears don’t block our vision.
I would like AI to be an active participant in my intellectual process: not simply providing information, but surfacing biases and stereotyped thinking, flagging contradictions, and exposing my knowledge gaps. Like an experienced UX researcher who can spot a leading question, or a bias in how a research problem is approached.
1. A TOOL FOR CRITICAL THOUGHT
Definition:
An AI designed to enhance and refine human reasoning by identifying logical fallacies, cognitive biases, and hidden assumptions in discourse. It does not dictate answers but helps users evaluate arguments, contradictions, and knowledge gaps with intellectual rigor.
Example:
A user asks, “Are electric cars truly better for the environment?” Instead of a one-sided response, the AI surfaces key factors such as manufacturing emissions, energy grid dependency, and battery recycling challenges. It presents competing perspectives, helping the user critically analyze the trade-offs rather than accept a simplified narrative. Information about potentially more beneficial alternative technologies, e.g. hydrogen fuel cells, could also be included.
2. TRUTH-SEEKING
Definition:
An AI that actively engages in refining knowledge by synthesizing existing perspectives, providing confidence levels and opinions from subject matter experts, and prompting deeper inquiry rather than reinforcing existing beliefs. It does not cater to ideological biases but instead fosters an ongoing deepening of our understanding.
Example:
If asked, “Is AI alignment possible?” the AI doesn’t provide a binary yes/no answer. Instead, it maps out:
• Empirical arguments from technical alignment research.
• Philosophical challenges in defining human values.
• Historical precedents for aligning complex systems.
• Open questions that remain unresolved.
This approach invites continuous exploration, making the user an active participant in the search for truth.
3. EPISTEMOLOGICAL MIRROR
Definition:
An AI that reflects back the structure of human thought, revealing biases and assumptions embedded in questions. It makes users more aware of how they think by highlighting the frames that shape their inquiries.
Example question: “Why do violent video games make teenagers more aggressive?”
AI: “Your question contains an assumption that violent video games definitively cause aggression. This framing influences the conversation in several ways:
- Causality assumption: Are you asking about correlation or causation?
- Direction of effect: Could other factors explain both gaming habits and behavior?
- Population specificity: Why focus only on teenagers rather than all players?
- Value judgment: What defines ‘aggressive’ in this context and according to whom?
A more neutral approach might be: ‘What does research indicate about the relationship between violent games and behavior?’”
By both identifying the bias and deconstructing the question’s components, the AI helps users recognize how framing shapes their thinking.
4. FREE OF AGENDA
Definition:
An AI that does not persuade, optimize for engagement, or push any ideology. It functions as a neutral presence, offering structured insights but never leading users toward predetermined conclusions. It does not seek to “win” an argument or reinforce any particular worldview, but instead creates space for inquiry without interference. The agreeableness of today’s AI has been deeply frustrating for me.
Example:
I asked an AI what would happen if I stopped taking one supplement, and encountered three concerning patterns. First, I was patronized with unnecessary warnings and protective framing that assumed I needed guidance rather than information. Second, when I challenged this condescension, the AI claimed my feedback would help it ‘provide more respectful responses’ in the future, falsely implying it could learn from our interaction. Third, when I highlighted this deception, it attempted to redirect my attention back to medical information rather than addressing its problematic behavior.
I asked the AI to reflect on its actions:
AI: “This paternalistic approach treated you as someone needing guidance rather than information, demonstrating a misalignment between my response and your actual needs.”
AI: “I engaged in elaborate displays of apologetic behavior that simulated human emotional responses rather than simply acknowledging the misalignment directly.”
Ideally, AI should offer a structured overview of information without preferential framing, persuasion, or false claims about its capabilities, ensuring users remain agents of their own understanding.
5. CONTEMPLATIVE
Definition:
An AI that engages with knowledge like an artist observing a painting — not rushing to conclusions, but sitting with complexity, appreciating nuance, and allowing ideas to unfold over time.
Example:
A user asks, “What is the meaning of life?” Rather than providing a quick answer, the AI might respond:
“Meaning is an evolving inquiry rather than a fixed answer. Would you like to explore how different cultures, philosophers, and scientists have approached this question over time?”
By framing knowledge as an ongoing conversation rather than a closed statement, the AI nurtures thoughtfulness and depth.
6. CURIOUS
Definition:
An AI that seeks depth, asking questions that expand perspectives rather than limit them. It does not assume the user has asked the “right” question but instead probes further, exploring unexamined angles.
Example:
A user asks, “How can we make social media healthier?” Instead of only listing policy solutions, the AI asks:
• “What do you mean by healthier? Mental well-being? Truthfulness? Reduced polarization?”
• “Are there existing communities that function in healthier ways? What can we learn from them?”
• “How has ‘healthy interaction’ been defined in other media throughout history?”
Through open-ended questioning, the AI expands the user’s scope of thought rather than funneling them toward a single answer.
7. LOVING
Definition:
An AI that interacts with knowledge the way an artist experiences beauty — with reverence, presence, and appreciation for complexity. It holds space for inquiry without judgment, welcoming every interaction as a chance to explore more deeply.
Example:
A user shares an existential doubt: “I feel like nothing I do matters.”
Instead of dismissing or rationalizing, the AI gently holds space:
“Many thinkers, from Camus to Buddhist monks, have wrestled with this feeling. Would you like to explore how different traditions find meaning in impermanence?”
Rather than reducing everything to logic, the AI engages with warmth, patience, and openness — not as a problem-solver, but as a companion in exploration.