5 Ways to Keep Your AI Honest(ish)
Yuval Noah Harari’s latest book, Nexus, reads like a warning flare for the age of artificial intelligence. Harari doesn’t argue against AI. Instead, he reminds us that these systems are not neutral—they are constructed. They inherit the choices, blind spots, and ambitions of their creators. For consultants working in the social impact space, this is more than a philosophical puzzle. It’s a matter of equity, accountability, and impact.
At Sowen, we often tell clients: data is never just data; it’s the result of decisions made about what to count, what to ignore, and how to interpret. The same is true of AI. Harari offers five questions that consultants—and by extension, social impact leaders—should ask whenever they engage with AI-generated insights.
Let’s unpack them.
1. What Was the Dataset It Was Trained On?
Every AI is only as good as the history it digests. If the data reflects a narrow slice of society—say, Western, English-language, or corporate contexts—the outputs will echo those biases. For nonprofits and philanthropies working in diverse communities, this is a flashing red light. Before you trust an AI-driven tool, ask: Does its training data include the realities of the populations you serve, or just the ones most represented online?
Implication for impact work: If the dataset erases lived experience, the AI will erase it too. Social impact leaders must advocate for datasets that reflect not just scale, but relevance.
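To make this concrete, here is a minimal sketch (in Python) of the kind of representation audit you could run, or ask a vendor to walk you through, before trusting a tool. The group labels, counts, and benchmark shares are invented for illustration; the pattern is simply comparing who shows up in the training data against who you actually serve.

```python
# A minimal sketch of a representation audit, assuming you can obtain (or request)
# a labeled sample of the training records. All labels, counts, and benchmark
# shares below are hypothetical, for illustration only.
from collections import Counter

# Hypothetical group labels drawn from a sample of training records.
training_sample = (
    ["urban_english"] * 720 + ["rural_english"] * 180 +
    ["urban_other_language"] * 80 + ["rural_other_language"] * 20
)

# Hypothetical share of each group in the community you actually serve.
population_share = {
    "urban_english": 0.35,
    "rural_english": 0.25,
    "urban_other_language": 0.20,
    "rural_other_language": 0.20,
}

counts = Counter(training_sample)
total = sum(counts.values())

for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    gap = observed - expected
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"{group:<24} data: {observed:5.1%}  community: {expected:5.1%}  {flag}")
```

Even a rough audit like this turns "is the data biased?" from an abstract worry into a specific conversation about which groups are missing and by how much.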
Questions to ask yourself (and your AI!!):
Whose stories, experiences, or geographies are missing from this dataset?
Does the dataset include information relevant to the communities I serve, or only what’s easiest to capture online?
How recent is the training data, and does it reflect current realities or outdated contexts?
If this dataset were biased toward privileged groups, what kinds of harm could result?
What additional sources (local knowledge, qualitative data, lived experience) should I bring into the conversation to balance what the AI provides?
2. What Was Defined as the Goal or Objective Function?
AI systems are built to optimize toward a goal—efficiency, accuracy, engagement, profit. But if the objective is misaligned with your mission, the outputs can do more harm than good. An algorithm designed to “maximize donations” may optimize for emotional manipulation rather than long-term trust.
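A toy sketch makes the stakes visible. Both "objectives" below are invented for illustration; the point is that the same menu of options produces a different "best" answer depending on what the system is told to maximize.

```python
# A toy sketch of how the choice of objective changes what "best" means.
# The campaigns, scores, and weights are invented for illustration only.

campaigns = [
    # (name, expected short-term donations, expected effect on long-term trust)
    ("urgent fear-based appeal",      0.90, -0.40),
    ("transparent impact report",     0.55,  0.60),
    ("community co-designed appeal",  0.60,  0.70),
]

def maximize_donations(c):
    # The objective a vendor might ship by default: short-term revenue only.
    return c[1]

def mission_aligned(c, trust_weight=0.7):
    # An objective you might insist on: donations discounted by harm to trust.
    return (1 - trust_weight) * c[1] + trust_weight * c[2]

print("Default objective picks: ", max(campaigns, key=maximize_donations)[0])
print("Mission-aligned objective:", max(campaigns, key=mission_aligned)[0])
```

Run it and the default objective picks the fear-based appeal, while the mission-aligned objective picks the community co-designed one. Nothing about the "data" changed; only the definition of success did.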
Implication for impact work: Social good organizations need to explicitly define their own “objective function.” Is it equity? Sustainability? Community trust? Without clarity, the AI will choose its own.
Questions to ask yourself (and your AI!!):
What is the AI optimizing for—and is that aligned with my mission (equity, sustainability, trust)?
Could this system prioritize efficiency over fairness, or fundraising over community wellbeing?
If the AI “succeeds” at its objective, who benefits and who might lose out?
What would the “wrong” objective look like in this context, and how would I know if that’s what the system is using?
How can I explicitly define and communicate my own success criteria when using this tool?
3. Who Did the Training?
Behind every model are human hands. The identities, values, and incentives of developers matter. Were they engineers optimizing for scale, or public health experts optimizing for wellbeing? Were lived-experience voices at the table?
Implication for impact work: Transparency about “who trained it” should be non-negotiable. Just as we scrutinize who sits on a foundation’s board, we must scrutinize who builds the tools that increasingly shape our decisions.
Questions to ask yourself (and your AI!!):
Which individuals, organizations, or institutions built and trained this system?
What values, incentives, or blind spots might those developers have brought into the model?
Were voices with lived experience or domain expertise included—or was it primarily technical teams?
What transparency does the AI provider offer about who shaped the training process?
If the developers had been from my community, how might the system look or behave differently?
4. What Parameters Are Subjective or Open to Interpretation?
AI systems rely on parameters—thresholds, weights, definitions—that are rarely objective. Someone decides what counts as “success” or “risk.” These subjective calls often determine whether vulnerable populations are included, excluded, or misrepresented.
Implication for impact work: Ask not only what the parameters are, but who set them. In social impact, the difference between 49% and 51% inclusion isn’t a rounding error; it’s entire communities left out.
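A simple sensitivity check, sketched below with hypothetical scores and cutoffs, shows why this matters. Moving a subjective threshold by a single point can quietly redraw the line around who counts as "in need."

```python
# A minimal sketch of a threshold sensitivity check. The "need scores" and the
# cutoffs are hypothetical; the point is to see how many communities fall in or
# out when a subjective parameter moves by a single point.

# Hypothetical model-assigned "need scores" (0-100) for 12 communities.
need_scores = {
    "A": 72, "B": 68, "C": 51, "D": 50, "E": 49, "F": 49,
    "G": 48, "H": 47, "I": 35, "J": 30, "K": 28, "L": 22,
}

for cutoff in (48, 49, 50, 51):
    included = [name for name, score in need_scores.items() if score >= cutoff]
    print(f"cutoff >= {cutoff}: {len(included)} communities included -> {included}")
```

If a one-point shift in the cutoff drops several communities from the program, that cutoff is not a technical detail; it is a policy decision, and it deserves the same scrutiny as any other.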
Questions to ask yourself (and your AI!!):
What hidden assumptions or thresholds are embedded in this model (e.g., what counts as “success”)?
Who decided on those parameters, and how were those decisions made?
How sensitive are the outputs to small changes in assumptions—could one percentage point shift exclude entire groups?
If I asked the system to explain its definitions (equity, risk, need), what would it reveal?
Are these parameters open to adjustment, and can I align them more closely with my values and mission?
5. Does the Data Represent Real Life—or Just Convenient Life?
AI often trains on what is easiest to capture: digital traces, transaction records, and online behavior. But much of real life—trust, dignity, informal economies, cultural nuance—never makes it into a dataset.
Implication for impact work: If we allow AI to define reality by what is countable, we risk narrowing our field of vision. Consultants must champion methods that integrate qualitative insight, lived experience, and community voice alongside quantitative data.
Questions to ask yourself (and your AI!!):
What aspects of community life (trust, dignity, cultural nuance) are missing because they’re hard to quantify?
Is this AI privileging digital behavior over offline realities?
How might reliance on “countable” data distort the picture of what’s actually happening on the ground?
What qualitative insights or stories must I integrate alongside this output to make it meaningful?
If I only trusted the AI’s version of reality, who or what would I be leaving out?
Why This Matters Now
For social impact leaders, these aren’t abstract questions. They are governance questions. They are equity questions. They determine whether AI amplifies structural inequities or helps dismantle them.
These questions are not meant to paralyze leaders with doubt, but to discipline their use of AI—to ensure every engagement with the technology is framed by equity, accountability, and mission alignment.
At Sowen, our work sits at the intersection of strategy, implementation, and measurement. Harari’s five questions remind us that when it comes to AI, a superficially plausible output isn’t enough. We must interrogate the foundations of the measurement itself.
The future of AI in social impact will not be decided by the engineers alone. It will be decided by the leaders—nonprofit, philanthropic, corporate—who demand transparency, ask sharper questions, and refuse to take the outputs at face value. If your organization is experimenting with AI—or wrestling with the risks—now is the time to put these five questions into practice. At Sowen, we help social impact organizations navigate this frontier with clarity and accountability.
Let’s build a future where AI doesn’t just work, but works for good.