AI for Non Profits Network: Weekly Briefing 03/17
The weekly digest from a network of non-profits: the governance gap, treating AI like a new hire, and Chipotle's accidental coding tool - plus lots of great resources.
In The Briefing this week:
What Caught Our Eye: The Governance Gap
Thought for the Week: Treat AI Like a New Hire
Interesting News and Funding Calls
From Across the Network
1) What Caught Our Eye: The Governance Gap
Ninety-two percent of nonprofits now use AI in some capacity, yet only 7% report meaningful improvements from it. And nearly half have no governance policy whatsoever.
This week, Info-Tech Research Group published a blueprint titled Empower Not-for-Profits With AI and ML, and it arrives at an uncomfortable moment. Nonprofits have quietly become AI-dependent organizations - through fundraising platforms, CRM systems, donor communications, and chatbots - without most boards fully realizing it. We've often spoken about "Shadow AI", and indeed it came up at our last workshop, where we noted that most tools nonprofits use these days have some element of AI integrated into them.
The blueprint identifies three familiar pressure points: AI enters through multiple doors with no one coordinating the traffic; staff experiment faster than policy can keep pace; and boards lack the literacy to know what questions to ask. The consequences are familiar: bias in beneficiary services, data privacy vulnerabilities, donor communications that feel transactional, and leadership flying blind on what their organization is actually doing with these tools.
Data from Virtuous and Fundraising.AI's 2026 Nonprofit AI Adoption Report adds important texture: organizations with formal AI governance policies were twice as likely to report measurable gains in efficiency and donor engagement. Governance isn't bureaucratic overhead - it's what turns experimentation into results.
The tools and methods we discussed at our last workshop are available if you drop us an email - hello@aifornonprofitsnetwork.org.
What nonprofit leaders should do now:
Map your AI footprint. Survey every department - you likely have more touchpoints than you realize.
Form a cross-functional governance team spanning fundraising, programs, and communications. AI risk is organizational, not just technical.
Write a policy. Even a basic one, addressing data privacy and algorithmic bias - we have a template you can use.
Put AI on the board agenda - as a governance and risk conversation, not a technology update.
Our Next Workshop: Leading on AI When You Feel Behind with Paul Butcher - Join Us 1st April
If you've spent the last six months nodding along in AI conversations while quietly wondering whether everyone else understands this better than you do, this session is for you.
Our free 90-minute working session is designed specifically for nonprofit leaders who need to make smart decisions about AI without understanding every detail of how it works. Feeling "behind" isn't a sign of limited capability. It's a rational response to an irrational pace of change.
Joining us is Paul Butcher of CommonSensing AI, former CMO and Digital Lead at Save the Children. You'll walk away with a simple framework for AI decisions, a draft governance position for your board, and clarity on where to start - and what to stop worrying about.
Wednesday, April 1 | 2pm ET | 90 minutes | Free
Reserve your spot by replying to this email or writing to us at hello@aifornonprofitsnetwork.org.
2) Thought for the Week: Treat AI Like a New Hire
The difference between a generic AI output and a genuinely useful one often comes down to a single question: did you actually tell it what you needed?
This week I spoke with a two-person development team at a mid-sized education nonprofit. Grant deadline in 72 hours, lean budget, no dedicated grant writer on staff. They had been using AI to draft narrative sections, but the outputs felt flat: technically correct, structurally sound, and completely interchangeable with any other literacy program in America.
Their typical prompt: "Write a program overview for a literacy intervention serving middle school students."
They weren't doing anything wrong, exactly. They were doing what most nonprofit teams do - treating AI like a search engine you can write in full sentences. Ask a broad question, get a broad answer. The tool was working. The output just wasn't good enough.
Then they came across a prompt framework making the rounds online. Six components: Role, Task, Context, Reasoning, Stop Conditions, Output. Simple in theory; transformative in practice.
They rebuilt their prompt from scratch:
The role: a grant writer for education nonprofits with expertise in literacy programs.
The task: a 500-word program overview.
The context: 200 students annually across three underperforming schools, phonics and comprehension focus, staffed by certified reading specialists.
The reasoning: emphasize evidence-based methods and measurable outcomes, because this particular funder prioritizes academic impact above all else.
The stop conditions: cap at 500 words, exclude budget information.
The output: Word-compatible format with clear paragraph breaks.
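The six components can be assembled mechanically before being pasted into whatever AI tool your team uses. A minimal sketch in Python - the `build_prompt` helper and its field names are illustrative, not part of any particular product:

```python
def build_prompt(role, task, context, reasoning, stop_conditions, output_format):
    """Assemble a structured prompt from the six framework components."""
    sections = [
        ("Role", role),
        ("Task", task),
        ("Context", context),
        ("Reasoning", reasoning),
        ("Stop conditions", stop_conditions),
        ("Output", output_format),
    ]
    # One labeled line per component keeps the brief explicit and auditable.
    return "\n".join(f"{label}: {text}" for label, text in sections)

prompt = build_prompt(
    role="You are a grant writer for education nonprofits with literacy expertise.",
    task="Write a 500-word program overview.",
    context="200 students annually across three underperforming schools; "
            "phonics and comprehension focus; certified reading specialists.",
    reasoning="Emphasize evidence-based methods; this funder prioritizes academic impact.",
    stop_conditions="Cap at 500 words; exclude budget information.",
    output_format="Word-compatible text with clear paragraph breaks.",
)
print(prompt)
```

The point is not the code itself but the discipline it enforces: if a field is empty, you notice the missing part of the brief before the AI does.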
The difference was immediate. The second draft named specific program components. It referenced evidence-based pedagogy without drowning in jargon. It held the right register for the funder - rigorous without being clinical, mission-driven without being vague. The team still edited. But they edited for voice and accuracy, not to rebuild a structure that had never been there.
Hours saved. A better product. With the same tool they had been underusing all along.
What's important here isn't the technology - it's the assumption the team started with: that the AI should already know what they need.
If you hired a talented but brand-new staff member and said, "write me a program overview for our literacy intervention," you would expect follow-up questions. Who's the audience? What outcomes matter most to this funder? What tone are we going for? How long? What should we leave out? You would expect to brief them. You would expect that briefing to take time, and you would understand that the quality of their work depended on the quality of your guidance.
AI is no different. The model has capability. What it lacks is your context, the specific knowledge of your program, your funder relationships, your organizational voice, and what success looks like in this particular moment. That context is yours to give. The framework is simply a structure for giving it systematically.
This matters beyond grant writing. The same logic applies to donor communications, board reports, volunteer outreach, beneficiary communications, and impact narratives. Every piece of writing your team produces has a specific audience, a specific purpose, and specific things it must and must not do. Generic prompts produce generic outputs. Specific prompts - ones that explain the role, the task, the context, the reasoning, the constraints, and the format - produce something you can actually use.
The investment is a few extra minutes at the start of the process. The return is work that requires editing rather than rebuilding.
Three things worth trying this week:
Audit one prompt your team uses regularly. Does it include context about your specific program, audience, and goals - or is it something anyone could have written?
Add a reasoning layer. Tell the AI not just what to write, but why this piece matters to this particular audience. The output changes significantly.
Set stop conditions. Word limits, excluded topics, required format. Constraints aren't restrictions - they're the instructions that make the output usable.
The most common complaint I hear from nonprofit teams about AI is that the outputs feel generic. In almost every case, the prompt was generic first.
And For a Laugh...
A developer posted on X this week showing that Chipotle's customer service chatbot - presumably deployed to handle complaints about portion sizes and cold queso - responded to his message with a complete Python function for reversing a linked list. The bot walked through the full algorithm, noted that the solution runs in O(n) time, and then politely asked what he'd like for lunch. We're not saying Chipotle's AI is overqualified. We're just saying that if your burrito bowl order requires computational theory to process, you might want to reconsider the complexity of your customizations.
3) Interesting News and Funding Calls
Nonprofits Risk Eroding Trust as AI Adoption Expands Without Oversight (Info-Tech Research Group via PR Newswire)
The New Architecture of AI Governance: Building Trust Through Leadership, Ethics, and Accountability (The Nonprofit Recruiter)
New Nonprofit Radial Launches With $500 Million to Modernize Scientific Process for AI Era (STAT News)
4) From Across the Network
Data + AI Summit 2026 takes place June 15-18 in San Francisco and virtually. Organized by Databricks, the summit covers enterprise AI deployment, data architecture, and governance frameworks. Government, military, education, and nonprofit attendees are eligible for discounted pricing. Registration is open.
Have an event, case study, gathering or interesting insight you would like to share with the network? Drop us a note by replying to this email.
We're a network of non-profit leaders interested in how AI is impacting our sector and driving mission. You're receiving this email because you expressed an interest in the network. If you think this was a mistake, you can unsubscribe using the link below.




