AI for Non Profits Network: Weekly Briefing 02/10
The weekly digest from a network of non-profits interested in AI. What's in this week's Briefing: an AI wake-up call courtesy of OpenClaw; a strategy opportunity; and loads of resources.
In The Briefing this week:
What Caught Our Eye: A Wake-Up Call on AI Security (That's Actually Good News)
Thought for the Week: The Strategy Gap Is Actually an Opportunity
Interesting News
From Across the Network
1) What Caught Our Eye: A Wake-Up Call on AI Security (That's Actually Good News)
Last week brought an important reminder about AI security that every nonprofit leader should know about. OpenClaw, an experimental AI agent tool that can interact directly with your computer, became the subject of security warnings when researchers discovered malicious “skills” being distributed through its marketplace.
Here's what happened: these AI “skills” - essentially instruction files that teach agents new capabilities - were disguised as legitimate tools. The most-downloaded skill contained hidden commands that installed malware designed to steal credentials and sensitive data. While OpenClaw itself isn't widely used in nonprofits yet, the incident highlights a crucial lesson about the emerging AI ecosystem.
The good news? This discovery shows that security researchers are actively monitoring these new technologies and sharing their findings. It's also a timely reminder that nonprofits need clear boundaries around AI experimentation. The practical takeaway is a straightforward message we've been advocating across the network: don't test cutting-edge AI tools across your whole stack; instead, find a closed, well-defined use case with minimal risk before scaling.
For organizations exploring AI, this incident underscores why the 85% of nonprofits experimenting with AI tools need to join the 24% with a formal roadmap in place. A simple AI policy doesn't require extensive resources - it just needs clear guidance on which tools can be used where, who approves new applications, and how to protect sensitive information.
We're running a free workshop on 4th March (2pm EST) specifically on AI governance and policies for nonprofits. The key is treating AI governance as an enabler of innovation, not a barrier to it. With proper guardrails in place, your team can confidently explore AI's potential without putting your organization at risk. Reply to this email or drop us a note at hello@aifornonprofitsnetwork.org.
2) Thought for the Week: The Strategy Gap Is Actually an Opportunity
We've been having conversations and running workshops across the network recently and found that, whilst most nonprofits are exploring AI, only a small fraction have adopted roadmaps or strategies. This looks like a problem, but we actually think it's one of the biggest opportunities the sector has faced in years.
Nonprofits are already seeing results when experimenting with AI. Organizations report saving 15-20 hours weekly on administrative tasks. Those using AI for fundraising see donation increases of 20-30%. Our spotlight last week was on Net Impact, whose 90-day AI implementation is already showing results.
We think the sector can turn these early pilots and experiments into clear AI strategies: identifying which use cases deliver mission impact and which can be scaled across organizations for transformational change.
Consider Net Impact, who developed an AI tool that directs and connects members, or our previous spotlight on Social Creatures Bonded by Baby, developing an AI companion for the loneliest year of parenthood. These aren't massive organizations - they're mission-driven teams who saw problems and asked how AI could help.
Creating strategy doesn't require deep technical expertise or significant resources. It requires asking the right questions: What repetitive tasks consume our team's time? Where would better data analysis improve decisions? How can we engage supporters more effectively? What safeguards protect the communities we serve?
What should strategy actually address?
Start with governance: who approves new AI tools, how do you assess risks, what ethical principles guide decisions?
Then implementation: which use cases deliver value, what training does your team need, how will you measure success?
Finally, safeguards: how do you protect sensitive data, ensure human review of AI outputs, and maintain stakeholder transparency?
The OpenClaw security incident is instructive here. AI agents that interact with your systems need clear boundaries. Experimentation should happen in sandboxed environments, not on devices with production data access. New tools require security review before deployment. These aren't complicated policies - they're common-sense protections any organization can implement.
Research shows larger nonprofits adopt AI at nearly twice the rate of smaller organizations. But this needn't create a permanent divide. Smaller organizations often have advantages: shorter decision-making chains, fewer legacy systems, closer community connections. With the right strategy, these advantages offset resource constraints.
What's exciting is that nonprofits are outpacing many corporates in creative AI adoption. The sector isn't copying corporate use cases - it's inventing new applications for social impact. Organizations are using AI for crisis response, personalized learning interventions, early warning systems for humanitarian emergencies, and accessible support services that would be impossible to staff at scale.
For nonprofit leaders wondering where to begin: identify one high-impact use case. Choose a well-defined, repeatable problem consuming significant staff time. Pilot a solution, measure results, learn, and build from there. (We have resources you can use for personal reflection as well as team ideation; just reply to this email for access.)
The key is approaching AI as amplifying human capabilities, not replacing them. The strategy gap is good news because most organizations are still exploring, and there's no penalty for being deliberate. Organizations building proper foundations now - clear governance, thoughtful implementation, appropriate safeguards - will be far better positioned than those who rush ahead.
Make 2026 the year your organization moves from experimentation to strategy. Bring together a small cross-functional team. Identify your top three use cases. Establish basic governance and security protocols. Choose one pilot and commit to learning from it. Join our free workshop on 4th March to get started on governance and policies.
If AI can help you serve more people, respond more quickly, or operate more efficiently, and it clearly can, then developing strategy to capture that value isnât optional.
The sector is building shared knowledge faster than ever. Tools are becoming more accessible. What's needed now is leadership willing to say: we're going to do this thoughtfully, and we're going to do it well.
We're holding our next AI for Non Profits Network Workshop on Wednesday 4th March (2pm EST) on AI governance and policies - the practical frameworks you actually need to adopt AI and put it to work across your organization. For early access, reply to this email.
3) Interesting News
Cheap AI chatbots transform medical diagnoses in places with limited care (Nature)
Companies Rethink 2026 Philanthropy Priorities Amid Pressures, Survey Says (Non Profit Pro)
4) From Across the Network
Have an event, case study, gathering or interesting insight you'd like to share with the network? Drop us a note by replying to this email.
We're a network of non-profit leaders interested in how AI is impacting our sector and driving mission. You're receiving this email because you expressed an interest in the network. If you think this was a mistake, you can unsubscribe using the link below.