Adopting AI Responsibly & Equitably: What Nonprofits Need
Insights from the AI Equity Project 2025
The AI Equity Project 2025 is a 100+ page, community-grounded study reflecting voices from 850 nonprofit professionals across the U.S. and Canada. It advances last year’s baseline by moving beyond awareness to the early realities of implementation — what’s working, where harm might creep in, and what scaffolding our sector needs next.
It’s authored by Meena Das (NamasteData) and Michelle Flores Vryn, with support from sector partners including Bloomerang, DonorPerfect, Fluxx, Giving Compass, and The Nonprofit Hive.
Key findings nonprofits should act on now
Curiosity is high; readiness is not. While 65% of nonprofits are interested in AI, only 9% feel ready to adopt it responsibly, and 32% still don’t know how AI connects to their mission—a signal that training, governance, and capacity remain the binding constraints.
Values aren’t yet backed by policy. Despite widespread concern about equity and potential harms, only a small share report having any AI policy or internal guidance, even as adoption rises. (The report notes growth from 2024 to 2025 in policy preparedness, but from a very low base.)
Adoption is outpacing governance. 76% say they’re using AI (up from 59% in 2024), yet the Readiness Conversion Ratio stayed flat year-over-year at 0.46, meaning confidence and guardrails haven’t kept pace with usage.
Equity practice is slipping. Knowledge is up, but implementation is down: the “Equity Practice Ratio” fell from 92% to 62%, a red-flag gap between knowing the language of equity and practicing it in daily decisions.
Capacity, not appetite, is the bottleneck. More than 60% cite lack of in-house expertise to assess tools; only a small minority have AI-specific training budgets; and respondents consistently rank leadership support, staff training, and the right tools as the top needs for the coming year.
What this means for charities and nonprofits
Don’t start with tools—start with purpose, policy, and people.
The data shows a sector “learning while adopting.” The safest way to learn is by putting mission-alignment, data minimization, and harm-reduction into policy before pilots scale. Treat AI like infrastructure, not a gadget: who benefits, who’s at risk, and who’s accountable?
Governance is equity in practice.
Equity concerns are loud, but action is quiet. Translating values into practical guardrails (acceptable use, prohibited data flows, review processes, opt-outs for sensitive populations) is how we prevent bias or privacy harms—especially in human services, health, and public-benefit organizations serving marginalized communities.
Resource the gap.
Leaders and boards should earmark time and dollars for AI literacy and data governance. The report shows technical assistance grants and shared learning as preferred supports—especially for small orgs that dominate the sector yet report the weakest readiness.
Mind the leadership confidence divide.
The study surfaces age, gender, role, and country differences in familiarity and risk perception. If older or less-technical leaders are making strategic decisions, pair them with trained internal champions and equity advisors so choices reflect both mission and modern data ethics.
Quick-start, no-regrets actions for your team
Publish a 1-page AI use statement that ties every pilot to mission, community benefit, and harm-reduction. (Revisit quarterly.)
Adopt a lightweight AI/data equity checklist for any workflow touching donors, clients, or sensitive data (e.g., “do we need this data?”, “do we have consent?”, “who could be excluded by this model?”).
Run a readiness mini-audit across three pillars—people (training), policy (governance), and platforms (secure, minimal, purpose-fit)—before piloting.
Pilot in low-risk, high-value areas (summarization, internal drafting, FAQs) with human-in-the-loop and explicit “no PII” rules. Track error rates and community impact—then scale what’s safe and useful.
Form (or join) a shared learning circle. The report’s top sector opportunities call for “Nonprofit AI Readiness Studios,” an open resource library, and equity-first fellowships—collective scaffolding that lowers risk and raises confidence.
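To make the equity checklist above concrete, here is a minimal sketch of how a team might encode it as a go/no-go gate for new pilots. The questions paraphrase the checklist in the actions above; the function name, scoring logic, and example workflow are our own illustration, not something prescribed by the report.

```python
# Hypothetical sketch: a lightweight AI/data-equity gate for a proposed
# workflow. A pilot proceeds only when every question has a documented "yes".

EQUITY_CHECKLIST = [
    "Do we actually need this data for the stated purpose?",
    "Do we have informed consent to use it this way?",
    "Who could be excluded or harmed by this model? (review completed)",
]

def review_workflow(name: str, answers: dict) -> bool:
    """Return True only if every checklist question is resolved on record."""
    unresolved = [q for q in EQUITY_CHECKLIST if not answers.get(q, False)]
    if unresolved:
        print(f"HOLD '{name}': resolve before piloting:")
        for q in unresolved:
            print(f"  - {q}")
        return False
    print(f"PROCEED '{name}': all equity checks documented.")
    return True

# Illustrative example: a low-risk summarization pilot with one open item.
approved = review_workflow(
    "donor-email summarization",
    {
        EQUITY_CHECKLIST[0]: True,
        EQUITY_CHECKLIST[1]: True,
        EQUITY_CHECKLIST[2]: False,  # exclusion review not yet done
    },
)
```

The point of the sketch is the practice, not the code: the gate forces the team to write answers down before a pilot scales, which is exactly the "values backed by policy" gap the report flags.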
Funders, tech partners, and policymakers
Fund readiness, not just tools. Tie grants to training, governance, and equity commitments, not only outcomes dashboards. Require reporting on equitable practices—not just adoption rates.
Co-create with nonprofits. Design for low infrastructure realities and publish open templates (policies, DPIAs, model “dos and don’ts”) with annual feedback loops from front-line users.
Build shared infrastructure. Seed Readiness Studios and an AI Equity Open Library so smaller, equity-centered orgs can lead—not lag.
How we can help
As a training partner to social-purpose organizations, The Good Growth Company is committed to helping teams use AI for impact — safely, ethically, and effectively. We support nonprofits to:
Run an AI & data-equity readiness sprint (policy, practice, and pilot roadmap).
Upskill staff and boards with practical, tool-agnostic training grounded in equity and mission.
Design safe pilots with measurable value and clear guardrails—so you learn without risking trust.
If your organization is exploring AI, let’s co-design a path that protects communities and grows capacity.
Download the full report here.
This summary barely scratches the surface. Download and read the full AI Equity Project 2025 report by Meena Das and Michelle Flores Vryn, and explore the companion resources from NamasteData, Bloomerang, DonorPerfect, Fluxx, Giving Compass, and The Nonprofit Hive. Then host a discussion with your staff and board to choose one policy improvement and one safe pilot to start this quarter.
All statistics, definitions, and sector opportunities referenced above are from the AI Equity Project 2025 report and its year-over-year analysis (2024→2025).