AI for Non Profits Network: Weekly Briefing 03/24
The weekly digest from a network of non-profits: Teaching teachers in a war zone; Helping the board ask the right questions & Lots of great resources.
In The Briefing this week:
What Caught Our Eye: Teaching teachers in a war zone, with AI running on a low-bandwidth phone
Thought for the Week: The Board Has to Own This Now
Interesting News and Funding Calls
From Across the Network
This newsletter is supported by Whitelabel.ai - helping nonprofits cut through the AI noise with practical tools built for mission-driven teams. Find out more at whitelabel.ai
1) What Caught Our Eye: Teaching teachers in a war zone, with AI running on a low-bandwidth phone
When people talk about AI in nonprofits, they tend to mean donor databases and email campaigns. The International Rescue Committee has something rather more ambitious in mind.
In our October edition last year, we covered the IRC's use of AI. Now it has published an account of its AI programs in the field, and one case study cuts through the noise: a teacher-coaching tool called aprendIA, deployed in the conflict-affected northeast Nigerian states of Borno, Adamawa, and Yobe, where millions of children have grown up with deeply disrupted schooling.
The IRC designed aprendIA not for a tech-savvy urban user with reliable broadband, but for teachers in some of the world's most challenging conditions. The tool was built for low bandwidth and local languages, and developed in close collaboration with educators themselves. No Silicon Valley assumptions, and no requirement for a stable internet connection.
The scale-up has been striking. From an initial cohort of around 400 teachers, aprendIA is now reaching 4,700 - and is expected to reach over 22,000 before the end of 2026, in partnership with state Ministries of Education.
The IRC's conclusion is blunt: meaningful AI doesn't require sophisticated infrastructure. The most impactful applications are those that meet people where they are, rather than where technologists wish they were.
What this means for your organization:
Before adopting any AI tool, ask: was it designed for contexts like yours, or are you being asked to adapt to it?
"Low bandwidth" thinking - simple, robust, context-appropriate - is good design for any resource-constrained nonprofit.
Piloting with the people who will actually use a tool, and building outward from their experience, isn't a nice-to-have; it's the whole game.
Final Call for Our Next Workshop: Leading on AI When You Feel Behind, with Paul Butcher - Join Us April 1
If you've spent the last six months nodding along in AI conversations while quietly wondering whether everyone else understands this better than you do, this session is for you.
Our free 90-minute working session is designed specifically for nonprofit leaders who need to make smart decisions about AI without understanding every detail of how it works. Feeling "behind" isn't a sign of limited capability. It's a rational response to an irrational pace of change.
Joining us is Paul Butcher, from CommonSensing AI and former CMO and Digital Lead at Save the Children. You'll walk away with a simple framework for AI decisions, a draft governance position for your board, and clarity on where to start - and what to stop worrying about.
Wednesday April 1 | 2pm EST | 90 minutes | Free
Reserve your spot by replying to this email or writing to us at hello@aifornonprofitsnetwork.org.
2) Thought for the Week: Your board approved the AI tool. Did it ask the right questions?
We've talked a lot about governance in nonprofit AI and what it means for the board. Here we unpick a few things to help the board ask the right questions.
AI has arrived in the boardroom - not through the front door, but through the back. A staff member starts using ChatGPT to draft appeals. A vendor bundles a predictive analytics tool into a CRM upgrade. A program team pilots an eligibility screener. Slowly, consequentially, AI becomes embedded in how an organization operates. And in most cases, the board never had a real conversation about any of it.
Forvis Mazars published guidance earlier this year making the case that nonprofit boards now carry explicit responsibility for technological oversight - not as an IT matter, but as a core governance duty. For most boards, though, the gap between where they should be and where they are is stark. Three-quarters of nonprofits still have no AI policy. Most boards have never formally discussed AI at all. This is a governance failure happening in plain sight.
The efficiency trap
The frame that gets boards into trouble is a familiar one. AI gets approved because it saves time and money. Thatâs a legitimate concern for resource-constrained organizations, and boards should absolutely care about operational efficiency. The trap is stopping there.
When boards evaluate AI adoption purely on cost-benefit grounds, they skip the questions that matter most for mission-driven organizations. Does this tool reinforce equity or undermine it? Does it enhance the relationships at the heart of our work, or quietly replace them? Does it build staff expertise over time, or slowly atrophy the human judgment that defines good practice?
The bias problem
Hidden biases in automated systems don't announce themselves. A grant eligibility screener trained on historical data can systematically disadvantage communities that were historically underserved - not through malice, but through the quiet replication of past patterns. A donor propensity model can reinforce who gets asked and who doesn't, baking existing inequities deeper into fundraising practice.
Boards that approve AI tools without asking about bias audits, training data sources, or equity impact assessments are not just being incurious. They are governing blind. In a sector whose entire mandate rests on serving people fairly, that is not a defensible position.
What boards actually need to do
None of this requires boards to become technology experts. It requires them to ask better questions and demand better answers. That means requesting bias audits for any AI system affecting beneficiaries, donors, or staff. It means adopting a recognized responsible AI framework. It means confirming that cyber insurance policies actually cover AI-related incidents - many legacy policies don't. Board oversight is not micromanaging vendor selection. It is setting boundaries grounded in the organization's values, asking the equity questions, and ensuring management has real accountability structures rather than good intentions and a subscription.
Boards cannot govern what they don't understand. Upskilling is not optional. Add AI governance as a standing quarterly agenda item. Require management to report on what tools are in use, who is using them, and what oversight exists. Make sure someone at leadership level owns AI strategy and can answer board questions with specificity rather than reassurance.
The connection to organizational performance is not theoretical. Most nonprofits using AI are stuck at modest efficiency gains, while a small minority are achieving real transformation. Governance is what separates them.
The question for nonprofit leaders this week is simple: has your board had a real conversation about AI yet? Not a vendor demo. Not an IT policy update. A board-level conversation about values, risk, and what responsible AI adoption means for your organization and the people you serve.
3) Interesting News and Funding Calls
2026 Nonprofit AI Adoption Report: 92% Adoption, 7% Real Impact (Virtuous)
AI Governance for Nonprofit Boards (Forvis Mazars)
You adopted AI - Why hasn't your organization changed? (Forbes)
4) From Across the Network
Have an event, case study, gathering or interesting insight you would like to share with the network? Drop us a note by replying to this email.
We're a network of non-profit leaders interested in how AI is impacting our sector and driving mission. You're receiving this email because you expressed an interest in the network. If you think this was a mistake, you can unsubscribe using the link below.



