Can AI help nonprofits do more with less?

The generative AI bubble may or may not be about to burst, but the technology could still be a game changer for organizations around the world. And, according to recent data, nonprofit organizations are still trying to hop onto the AI wave.
A majority of nonprofits are interested in AI
Compared to other tech-forward spaces, the nonprofit industry has been much more hesitant to dive into AI and its pitch of humanless efficiency. Broadly, nonprofits have been slower to adopt AI as a universal helper or to integrate it deeply into their work, keeping the tech walled off from their public-facing programs.
But as the tech has evolved, and in some ways responded to the concerns of privacy experts and tech watchdogs, nonprofit leaders have grown more eager to accept AI's offer to help. It may soon become a necessity.
In addition to longstanding funding and infrastructure barriers, U.S.-based nonprofits are weathering new attacks on federal funding sources under the Trump administration. Federal leaders have resorted to intimidating organizations and questioning their motives as part of the administration's "anti-woke" agenda, which now extends to the country's AI innovations. In August, President Donald Trump signed an executive order directing agencies to rewrite their grantmaking policies for 501(c)(3) organizations, allowing them to terminate grants that don't "advance the national interest."
Meanwhile, a 2025 report by Candid, the global nonprofit fundraising platform, found that 65 percent of nonprofits expressed interest in AI, though most described themselves as having only "beginner familiarity" with the tech. A recent survey by social good software provider Bonterra found that more than half of its partner nonprofits had already adopted AI in some form, and a majority said they were interested in using it soon.
Tech nonprofit Fast Forward, with support from Google's philanthropic arm Google.org, recently surveyed more than 200 nonprofits that had already adopted AI in their work. The report showed that the smallest organizations (fewer than 10 employees) were using the tech the most, starting with their own chatbots and custom LLMs trained on public data. Most implemented AI only in internal operations, and most had been using it for less than a year.
A lack of guidance on AI safety and responsibility is still a major problem
While interest and adoption have grown, AI developers and tech funders haven't kept up with the needs of nonprofits. Organizations still navigate major gaps in training, resources, and policies that limit AI's effectiveness in their work. Candid found that only 9 percent of nonprofits feel ready to adopt AI responsibly, and a third couldn't articulate a connection between AI and accomplishing their organization's mission.
Half of the organizations were worried that adopting AI could exacerbate the very inequalities their work aims to address, especially among those serving BIPOC communities and people with disabilities. "Folks hold the desire to explore and to understand," Candid wrote in its findings, "but the support systems have not caught up."
These concerns were also shared by nonprofits that have already adopted AI. Bonterra's survey found that nearly all nonprofits were worried about how AI companies could use their data, and a third said unresolved questions about bias, privacy, and security were actively limiting how they use the tech.
"With AI adoption on the rise, it’s critical for organizations to remember to prioritize people over data points. AI should be used to support a nonprofit's mission, not the other way around. For nonprofits and funders, this means that AI adoption must take on a people-first perspective that is grounded in transparency, accountability, and integrity," Bonterra CEO Scott Brighton told Mashable. "Social good wants to use AI ethically, and that means giving them guidance on how to approach data collection, ensuring human oversight over all decisions, and protecting private information."
Surveys have shown that very few nonprofits have budgets for AI training, internal policies, or guidance governing the organization's use of AI, most often because they lack the infrastructure to sustain them. Nonprofits also expressed concern over the potential impact of automation on their work, high costs, and the lack of training resources for already overburdened staff, concerns that have persisted for years as AI has gone mainstream.
"The reality is that nonprofits can only do what funders allow them to do within their budgets," explained Fast Forward co-founder Shannon Farley. "Funders play an important role in helping to make sure nonprofits have the funding to prioritize AI equity and accountability."
The smallest nonprofits, especially, are still being cautious about AI and deferring to their communities on its implementation. As government regulation lags, Fast Forward found that 70 percent of nonprofits "powered" by AI used community feedback to build their AI tools and policies.
"At the end of the day, nonprofits don’t care about AI, they care about impact," said Fast Forward co-founder Kevin Barenblat. "Nonprofits have always looked for ways to do more with less — AI is unlocking the how."