Most finance leaders say the right things about artificial intelligence (AI). Far fewer actually use it. Their teams notice, and that gap is quietly derailing transformation efforts across finance organizations. Nowhere is the gap more visible than in technology companies, where the product itself is often built on the same AI capabilities the finance team is being asked to adopt.
Finance and accounting professionals are, by training, expert observers of inconsistency. When a leader pushes AI adoption while visibly not adopting it themselves, the team hears: this is important enough to change how you work, but not how I work. That is not transformation; it is compliance theater, and it produces the half-hearted adoption rates that frustrate finance leaders six months into every initiative. In technology companies, the irony is particularly sharp: the CFO may be signing off on AI infrastructure spend while their own team still builds the board package in manually linked spreadsheets.
The solution is not better communication or a more compelling change management deck. It is something harder and more personal: the CFO has to go first.
The specific challenge in finance
Finance carries structural tensions that make the leadership modeling problem especially acute. The professional identity of a controller, senior accountant, or FP&A analyst is tied directly to their domain expertise. For these professionals, AI does not simply automate a task; it introduces a perceived threat to the thing that makes them valuable. In SaaS and cloud businesses, where FP&A teams are often the institutional owners of ARR models, cohort analysis, and unit economics, this tension runs especially deep: if AI can draft an ARR bridge or model churn scenarios, teams begin to question where their edge lies.
At the same time, finance has a legitimate reason for caution that other functions do not: its outputs carry legal, regulatory, and fiduciary weight. A hallucination in a marketing email is embarrassing. A hallucination in an earnings release is a material event. In a pre-IPO technology company, a fabricated ARR or NRR figure shared with prospective investors can damage relationships that take quarters to repair, with no equivalent of an 8-K to correct the record. Finance professionals are right to apply scrutiny, and any leader who dismisses that concern as resistance is misreading their team.
This combination of identity threat and legitimate risk concern creates a change environment that generic enterprise AI adoption frameworks are not built for. Technology companies face an additional layer: the engineering and product teams down the hall are often early, confident AI adopters, which makes finance's caution look like cultural lag rather than professional judgment. Effective transformation here requires both the modeling of confident personal AI use and governance structures that give teams permission to trust what AI produces. Neither works without the other.