Somewhere between the third "Can ChatGPT do this?" request and the fifth AI-generated donor email draft, many not-for-profit leaders realize something uncomfortable: AI is already in the building.
The real question is not whether your organization will use artificial intelligence, but whether you will govern it intentionally or allow it to evolve in the shadows.
Before formally embedding AI into operations, not-for-profits need to slow down, just a bit, because enthusiasm without guardrails can turn into risk faster than most teams expect.
Start with why, not how
It is tempting to adopt AI because everyone else seems to be doing it. Universities are piloting it. Hospitals are experimenting. Even small community foundations are dabbling in automation tools. But technology momentum is not a strategy.
Organizations should first clarify the problem they are trying to solve. Is the goal to reduce administrative load on lean teams? Improve grant writing efficiency? Enhance data analysis for program outcomes? Strengthen fraud detection in finance? Each objective carries different governance and risk implications.
And here is the part that often gets skipped: data readiness. AI systems are only as good as the information they are fed. If donor records are inconsistent or program data is scattered across spreadsheets, introducing AI may amplify confusion rather than resolve it.
Who owns AI? The board, management, or both?
This is where things get interesting.
AI oversight does not sit neatly in one box. Boards carry fiduciary responsibility for enterprise risk and long-term strategy. If AI affects financial reporting, donor communications, or beneficiary data, the board cannot treat it as a purely operational experiment.
At the same time, management must handle day-to-day implementation. That includes selecting tools, establishing usage policies, training staff and monitoring performance. Pretending this is only a technology department issue would be a mistake.
In practice, governance should be shared. Boards should ask informed questions. Management should provide clear reporting on AI use, benefits and incidents. Transparency matters; trust depends on it.