AI for Non Profits Network: Weekly Briefing 12/02
The weekly digest from a network of non-profits interested in AI. What's in this week's Briefing: MIT's new Iceberg Index, our follow-on from the AI Capability Ladder, and lots of resources.
In The Briefing this week:
What's Caught Our Eye: MIT's Map of the AI-Exposed Workforce
Thought for the Week: How to Build AI Confidence Without Compromising Ethics - Even at Rung One
Interesting News
From Across the Network
1) What Caught Our Eye: MIT's Map of the AI-Exposed Workforce
MIT's new Iceberg Index offers one of the clearest pictures yet of how exposed today's workforce is to AI - and the findings are sharper than many expected. By creating a "digital twin" of 151 million U.S. workers and matching 32,000 skills against thousands of AI tools, researchers found that while only 2.2% of visible work has been automated so far, nearly 11.7% of total wage value, about $1.2 trillion, is already technically within AI's reach.
The report doesn't predict job losses; instead it maps where human and machine capability now overlap.
For nonprofits, often stretched by administrative load and limited staff time, the implications are immediate: a significant share of back-office tasks - reporting, data cleaning, communications drafting - could already be handled by today's systems.
For leaders, it's a reminder that AI readiness isn't a luxury. It's basic operational resilience. It's something we've been asking the AI for Non Profits Network about over the last few weeks, and we'll start sharing stories and tools showing how other leaders in the network are approaching AI.
2) Thought for the Week: How to Build AI Confidence Without Compromising Ethics - Even at Rung One
Last week, we surveyed leaders across our network about where they are on the AI capability ladder. The results were pretty conclusive: the overwhelming majority said they're still on "Rung 1" of the ladder - what we call "Ad Hoc Exploration."
Here's what that looks like in practice: individual staff experimenting with ChatGPT, Claude, or Canva's AI features. No coordination, no policy, no measurement. People trying things quietly, often on their own time, seeing if these tools might save them 30 to 60 minutes a week on drafting or brainstorming.
What you can't do yet at Rung 1: deploy AI at scale, measure ROI, or integrate it into mission-critical workflows. And there's a real trap here - what's often called "Shadow AI," where unmanaged tools create data privacy and quality risks that leadership doesn't even know exist.
But here's what I've come to believe: Rung 1 isn't a problem. It's a starting point. And the real question isn't whether your organization should be further along - it's whether you're creating the conditions for people to climb deliberately.
Understanding the Ladder
Before we go further, here's the full picture of AI capability maturity:
Rung 1: Ad Hoc Exploration - Individual experimentation with no coordination. You get personal productivity gains but can't scale or measure impact.
Rung 2: Structured Pilots - Formal testing of 2-3 use cases with project owners, timelines, and metrics. You achieve validated proof of concept but risk "pilot purgatory" - proving AI works without the resources to scale it.
Rung 3: Operational Integration - AI embedded in 3-5 core workflows with designated leadership, budget, and governance. Organizations like The Kids' Cancer Project in Australia nearly doubled regular donations to ~$2.2M/year at this level.
Rung 4: Strategic AI Capability - You don't just use AI, you build it. Custom models, proprietary datasets, automated pipelines tied to mission strategy. You become a sector leader.
Rung 5: AI-Native Organization - AI is in your DNA. Every major decision is informed by AI-driven insights. You contribute to open-source tools and publish research.
Most organizations don't need to reach Rung 5. But understanding the ladder helps you see where you're going and why the climb matters.
The Confidence Gap
The more I talk with nonprofit leaders, the more convinced I am that the real barrier isn't technological - it's psychological. It's not a skills gap; it's a literacy gap. People don't feel ready, not because they can't learn, but because they haven't been given permission to learn safely.
The 2025 AI for Humanity Report confirms this. Among newer AI-powered nonprofits, 65% rely almost entirely on off-the-shelf tools because teams don't yet have the confidence, or the permission, to experiment more deeply. But organizations with a few years of practice? 63% have already moved to building their own in-house solutions. Experience doesn't just build capacity; it builds courage.
And that courage matters, because AI success stories in our sector rarely start with grand strategies. They start with small, human moments. A frontline staff member trying a chatbot to draft a case report. A fundraising officer using generative AI to personalize thank-you messages. A volunteer manager experimenting with automated onboarding.
These tiny tests often reveal what a 60-page strategy document won't: what your people need, what your data can handle, and where the ethical guardrails actually sit.
What Rung 1 Actually Needs
If most of your organization is at Rung 1 right now, the next step is clear: assign an AI owner and audit what tools people are already using. That becomes your baseline.
Not to shut experimentation down - to legitimize it. To move from Shadow AI to supported exploration. To say: "We know you're trying things. Let's make sure you can do it safely, with backing, and with a path forward."
Because AI literacy isn't about technical mastery. It's about confidence, consent, and culture. It's the ability for a team to say:
"We can try this safely."
"We know what's appropriate for our community."
"We understand enough to ask the right questions."
The most important data point from the AI for Humanity Report isn't about technology at all - it's this: 84% of organizations say they need additional funding to develop staff capacity long before impact shows up for beneficiaries. That's the literacy gap in a single statistic. Investment in people must come first.
Permission to Climb
If you're leading a nonprofit right now and your team is at Rung 1, this is the moment to make space for learning. Give your team room to play, a structure to question, and the signal that experimentation is allowed - even expected.
Because the future of AI in our sector won't be defined by the sophistication of the tools themselves. It'll be defined by the confidence of the people who choose to use them, the culture that supports them while they learn, and the leadership that treats Rung 1 not as failure, but as foundation.
Want help climbing the ladder? We're running a limited workshop for select network members to assess where you are on the AI capability ladder and build a practical roadmap to Rung 2 and beyond. We'll audit your current tools, identify your highest-impact pilots, and create a business case you can take to your board - and share the impact with the rest of the network. If you're ready to move from ad hoc exploration to strategic capability, reply to this note and I'll send you details.
3) Interesting News
Meta and Deloitte host Llama for Good Hackathon in Dublin to help Irish NGOs use AI for social impact, innovation and inclusive community solutions. (Think Business Ireland)
Woman seeking Canada Disability Benefit frustrated after being sent to costly AI-integrated service (CTV News)
How AI Can Deepen Nonprofit Relationships (Stanford Social Innovation Review)
4) From Across the Network
The AI Summit New York, 10-11 December, New York. Focuses on real-world applications and case studies of AI at scale - not non-profit specific, but useful learnings in best practices for efficiency, donor personalization, and process automation.
Have an event, case study, gathering or interesting insight you'd like to share with the network? Drop us a note here.
We're a network of non-profit leaders interested in how AI is impacting our sector and driving mission. You're receiving this email because you expressed an interest in the network. If you think this was a mistake, you can unsubscribe using the link below.




