How Nonprofits and Foundations Can Lead with Purpose in the Age of AI
From boardrooms to news headlines, AI is being heralded as the next great disruptor—and nonprofits and foundations are not immune. For mission-driven organizations, the rise of AI presents both promise and peril: Will it help extend impact, or widen inequities? Will it reduce administrative burdens, or distract from mission priorities?
These are more than technical questions; they’re strategic, ethical, and human ones.
Four AI Lessons Leaders Need to Learn
In a recent discussion, State of AI for Nonprofits and Foundations, moderated by Christopher Neu, CEO of Exchange Design, panelists Elisha Smith Arrillaga, Ph.D. (Center for Effective Philanthropy), Jeff Krentel (Conrad N. Hilton Foundation), and Forum One’s Elisabeth Bradley explored where the nonprofit and foundation sectors stand today. Their reflections, coupled with new CEP research, point toward what nonprofits and funders must do to move from buzz to impact.
1. The Reality Check: AI Conversations Are Lagging
The Center for Effective Philanthropy’s forthcoming research reveals a striking gap: while nearly two-thirds of nonprofit and foundation staff report a solid understanding of AI tools, fewer than 20% of nonprofits have had meaningful conversations with funders about how AI could support their work. Nearly 90% of foundations offer no funding or support for AI at all.
“We’ve had very limited conversations about AI. Only one funder understands it enough to see how it could be useful. No other funders have expressed a willingness to understand it.” — Nonprofit leader surveyed by CEP
The result is a sector where experimentation is happening in silos and equity considerations remain an afterthought.
2. Expertise Lives Within Communities
As Elisha Smith Arrillaga reminded participants, nonprofits themselves are the experts in their fields:
“If you’re working in education or healthcare, you are the expert. You and your communities are best positioned to ask how AI might be helpful—not just outside experts.”
This reframes the conversation. Instead of starting with “What can AI do?”, the sector should start with “What problems are we solving?” and then determine if and how AI fits.
3. Behavior Change, Not Just Technology
At the Conrad N. Hilton Foundation, Jeff Krentel has seen firsthand that the hardest part of adopting AI isn’t installing tools—it’s building trust, shifting workflows, and redefining roles.
“The challenge isn’t just the technology—it’s adoption and behavior change.”
The promise of AI is alluring: faster analysis, easier access to knowledge, streamlined evaluation. But Krentel’s caution underscores an important truth: technology without behavior change doesn’t create impact.
4. AI as a Tool, Not an Outcome
For Elisabeth Bradley, CEO of Forum One, the biggest risk is distraction.
“AI is a tool, not an outcome. Our job is to help leaders move past the ‘bright shiny object’ phase and identify safe, actionable ways to integrate AI into existing work.”
That means resisting the pressure to “do AI” simply because others are talking about it. Instead, organizations should ground their adoption in clear strategic goals, user-centered design, and ethical considerations—principles that have always underpinned strong digital strategy.
What Mission-Driven Organizations Need Next
Moderator Christopher Neu pointed out that while AI’s risks—misinformation, bias, security—are real, so are the opportunities. To seize them responsibly, nonprofits and funders must:
- Create space for dialogue. Nonprofits should feel safe discussing AI use with funders without fear it will be seen as “cheating.” Funders can open this door by sharing their own policies and expectations.
- Invest in equity-centered experimentation. Early pilots should focus on whether AI reduces inequities and supports historically marginalized communities.
- Provide resources. From funding pilots to offering training, foundations can ensure nonprofits have the capacity to test and adopt AI responsibly.
- Keep humans in the loop. AI can support decision-making, but it cannot replace accountability. People—not algorithms—must remain responsible for outcomes.
Leading With Purpose
AI is not going away. But nonprofits and foundations have a choice in how they engage: as cautious observers, passive adopters, or purposeful leaders.
At Forum One, we believe the path forward is clear: AI must serve mission, equity, and strategy—not the other way around. It’s not about keeping up with hype. It’s about keeping focused on people.
The full report from CEP will be released on September 30. Sign up here to receive it, and read CEP’s broader State of Nonprofits report for additional context.
In the meantime, we encourage nonprofit and foundation leaders to listen to the full webinar recording and start these conversations internally: What problems are you trying to solve? Where could AI responsibly help? And how will you ensure equity remains at the center?