
| IN THIS ARTICLE Reading time: 20 minutes
The eighth article of twelve, and the third and final article of Act III. Article 6 specified what AI capability boards should authorise this year in Caribbean financial services. Article 7 addressed the human-capital consequence of those decisions across any AI-adopting institution. This article narrows back to a function that exists in every Caribbean institution of any scale — the finance function — and asks the question every audit committee chair should be asking: what does AI actually do to the work of the CFO, the controller, and the team that closes our books? The boards reading this series are not just AI strategists; they are also the boards to whom a CFO is accountable. The discussion below assumes a senior finance reader. By the end of this article you will be able to:
1. Distinguish four genuinely different categories of AI in the finance function — classical machine learning, generative AI for narrative output, agentic AI for closing-cycle execution, and AI in the audit relationship — and recognise why most Caribbean discourse conflates them.
2. Apply the D-AGENTICA™ Finance Function AI Maturity Model as the named instrument, locating your finance function on the five-stage ladder across each of the four categories above.
3. Identify the four institutional risks that, in our experience, distinguish finance functions that adopt AI well from those that adopt it badly — and understand why three of those risks are not technical but governance failures the board can prevent. |
The phone call
In early March of this year, the chief financial officer of a Caribbean institution with consolidated revenue of just over US$200 million telephoned me on a Friday evening. She had spent the morning in her quarterly audit committee meeting and the afternoon in her group leadership team. Both meetings, she said, had been about AI in the finance function, and the two meetings had reached opposite conclusions. The audit committee — drawing on commentary the chair had read in a Big Four publication — had arrived convinced that AI in the finance function was a controls risk, an exposure to be constrained. The leadership team — drawing on a vendor demonstration the chief information officer had organised the previous week — had arrived convinced that AI in the finance function was a productivity opportunity, a capability to be accelerated. Neither group, she said with a note of weariness I have come to associate with this particular conversation, was wrong; and neither, importantly, was right. The honest answer was that her finance function was already using AI in three of its four categories without anyone — including her — having authorised any of it. The week ahead would be spent finding out where, by whom, on what data, and to what effect.
I have had this conversation now perhaps eight times in the past year, with CFOs of Caribbean banks, manufacturers, retailers, hotel groups, and credit unions. The details differ. The shape does not. A finance function that was, until recently, the most controlled and audited part of the institution has become the part of the institution where a member of staff with a corporate credit card and a personal email address can, in twenty minutes, configure an AI model to draft the management commentary for next week’s board pack — without telling the controller, without anything appearing on the IT register, and without the external auditor having any idea. The controlled function has become an unmonitored one, and the people who carry the controls accountability — the CFO and the audit committee — are the last to know. This article exists because that situation is not sustainable, and because the path back to sustainability is not to ban the technology; it is to specify, function by function, what good adoption looks like and what the board’s role in supervising it is.
Where we are in the series
The Caribbean AI Adoption Imperative has, at this point, established a great deal that this article will draw on without re-arguing. Article 1 established the agentic decade and offered the Three-Question Board Diagnostic. Article 2 separated AI from the vendor noise around it and offered the Agentic Vendor Assessment. Article 3 set out the productivity economics for the small and medium Caribbean enterprise, with the SME AI Sequencing Framework. Articles 4 and 5 are the guardrails — data sovereignty with its decision matrix and the regulatory landscape with its readiness self-assessment. Articles 6 and 7 opened the application act: financial services use cases for institutions that provide financial services, and the workforce transition for institutions that employ Caribbean people. Article 8 closes Act III by asking the narrowest of the application questions: what happens inside one specific function — the finance function — that exists in every institution of scale, regardless of whether it is a bank or a manufacturer or a hotel group. After this article the series turns to Act IV — The Decision — where governance, sector spotlights, the comprehensive Maturity Model, and the closing call to the Caribbean boardroom await.
What this article adds, against everything that has come before, is a specific vocabulary for thinking about AI in the finance function — four categories that are regularly conflated in the literature and the vendor discourse — together with the named instrument that will let a CFO and an audit committee locate themselves on a ladder of maturity across each of those categories. By the end of the article, a reader who walked in believing AI in the finance function was either a controls risk or a productivity opportunity should walk out understanding that it is, in fact, four different things at once, and that the four require four different governance responses.
The four categories of AI in the finance function
In our engagement experience, the phrase “AI in the finance function” is doing far too much work. It conflates four genuinely different things, with different technologies, different risk profiles, different governance requirements, and different commercial outcomes. The honest first move for any CFO is to refuse the phrase and to insist on the four-category vocabulary that follows. The board discussion that ensues will be materially better.
Category one — classical machine learning for forecasting and anomaly detection
The most mature category, and the one most Caribbean finance functions have already adopted in some form, is classical machine learning applied to two specific finance tasks: forecasting (revenue, cash, expense lines, working-capital movements) and anomaly detection (transaction monitoring, expense outliers, journal-entry irregularities). These techniques predate the current generative-AI moment by a decade or more. The underlying mathematics is well-understood, the model output is structured, and the controls discipline around it — model validation, back-testing, governance committee sign-off — is established. Several Caribbean institutions we work with have classical ML embedded in their planning systems, in their treasury platforms, and in their anti-money-laundering monitoring. The risks are real but they are known risks: overfitting to historical patterns, drift as conditions change, false positives that exhaust analyst attention. None of these is novel in 2026.
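The anomaly-detection side of this category can be illustrated in a few lines. The sketch below flags journal entries whose amount is a statistical outlier for their account, using a leave-one-out z-score so a single large posting cannot hide by inflating its own baseline. The account codes, amounts, and threshold are hypothetical, and production systems (typically embedded in vendor platforms) use far richer models than this:

```python
from statistics import mean, stdev

def flag_outlier_journals(entries, threshold=3.0):
    """Flag entries whose amount is a leave-one-out outlier for its account.

    `entries` is a list of (account, amount) tuples -- an illustrative
    schema, not a real ledger format. Each amount is compared against the
    mean and standard deviation of the *other* postings to the same account.
    """
    by_account = {}
    for i, (account, amount) in enumerate(entries):
        by_account.setdefault(account, []).append((i, amount))

    flagged = []
    for i, (account, amount) in enumerate(entries):
        peers = [a for j, a in by_account[account] if j != i]
        if len(peers) < 3:
            continue  # too little history on this account to judge
        mu, sigma = mean(peers), stdev(peers)
        if sigma > 0 and abs(amount - mu) / sigma > threshold:
            flagged.append((account, amount))
    return flagged

journals = [("6100", 1200.0), ("6100", 1150.0), ("6100", 1180.0),
            ("6100", 1210.0), ("6100", 1190.0), ("6100", 9800.0)]
print(flag_outlier_journals(journals))  # flags only the 9800.0 posting
```

Even this toy version exhibits the known risks named above: it overfits to the account's history, and a threshold set too low exhausts analyst attention with false positives.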
Category two — generative AI for narrative output
The category that has expanded fastest in the past eighteen months is generative AI applied to the narrative output of the finance function. Management commentary in the board pack. The financial review section of the annual report. Variance explanations in the monthly management accounts. First drafts of management discussion and analysis disclosures. Internal memos explaining why a forecast moved. The technology here — large language models with access to the institution’s own structured numbers — is genuinely useful. A controller who previously spent eight hours each month drafting variance commentary now reviews and edits a draft that took eight minutes to generate. The productivity gain is real, immediate, and modest in risk if the controls are right.
The risk profile in category two is interesting because it is neither a controls risk in the traditional sense nor a numerical accuracy risk. The numbers being explained are correct; the risk is that the explanation, generated quickly, ends up making confident claims about why the numbers moved that the model has no actual evidence for. A model can correctly observe that gross margin fell three percentage points and incorrectly infer that this was “due to commodity input cost increases” when in fact it was due to a one-off promotional discount the model had no information about. The numbers are right; the explanation is fabricated; the reader cannot tell. This is a controls problem, but it is a narrative controls problem, not a numerical one, and most finance functions do not yet have a process for catching it.
Category three — agentic AI for closing-cycle execution
The third category is the one that is genuinely new, that is moving fastest, and that is least represented in the Caribbean discourse. Agentic AI in this context means software agents that do not merely advise the finance team but execute on its behalf — agents that read invoices, post journals, reconcile sub-ledgers, route exceptions, follow up with vendors on missing documents, and prepare specific schedules for review. The work is no longer a draft for a human to finalise; the work is the work, and a human reviews exceptions and approves at points the institution has designated as control gates. This is genuinely transformative. It is also where the most serious controls questions in the finance function arise.
Two specific risks deserve named attention. First, segregation of duties — the control principle finance functions have organised themselves around for half a century — changes fundamentally when a single agent can both prepare and post a journal under the institution’s own name. The compensating control, in our experience, is not to forbid this but to redesign the segregation around the agent’s design boundaries: which actions require a human approver, which do not, and how the audit trail of the agent’s reasoning is captured and retained. Second, the attribution of the work shifts. When an agent posts a journal, who is the preparer? Whose review constitutes evidence? The Caribbean audit firm that audits the institution will, sooner than its clients expect, ask both questions in the management letter; finance functions that have not thought about them in advance will discover them as findings.
Category four — AI in the audit relationship
The fourth category — and the one most underdiscussed in the Caribbean — is the consequence for the finance function of the external auditor’s own use of AI. Big Four firms globally, and increasingly the larger second-tier firms in the Caribbean, are deploying AI in their audit work: full-population testing where formerly there was sampling, anomaly detection across general-ledger transactions, contract-review models that read every lease or revenue contract rather than the partner’s selection. This changes what the auditor expects from the finance function. The auditor now asks for the entire general ledger as a structured file, not a sample. The auditor now expects to be able to reconcile every revenue-recognition decision back to the underlying contract. The auditor now expects metadata — when was this journal entered, by whom, with what supporting attachment — that some Caribbean finance functions cannot easily produce.
The implication for the CFO is that the finance function’s data and process discipline must rise to meet the auditor’s new expectations, regardless of whether the finance function has any internal AI ambition of its own. A finance function that is two years behind on its own AI agenda may discover that its auditor is two years ahead, and that the gap shows up as audit findings, scope expansions, and higher fees. We have seen this dynamic in three Caribbean engagements in the past twelve months. It is not abstract.
| WHAT WE OBSERVE ACROSS CARIBBEAN FINANCE FUNCTION ENGAGEMENTS
Of the Caribbean finance functions we have worked with on AI assessment in the past eighteen months, every single one had at least one category-two use case (generative AI for narrative output) running somewhere in the function, almost always without formal authorisation and almost always without the audit committee’s knowledge. About half had a category-one use case (classical ML, typically embedded in a vendor platform such as the planning system or the AML monitoring engine). Fewer than one in five had begun any category-three pilot. None had structured a position on category four. The pattern, in other words, is not that Caribbean finance functions are behind on AI; it is that adoption has happened in the categories where the technology is easiest to access by an individual user, and has not happened in the categories where it requires institutional decision. |
The D-AGENTICA™ Finance Function AI Maturity Model
What follows is the named instrument of this article. The D-AGENTICA™ Finance Function AI Maturity Model is a five-stage ladder applied independently to each of the four categories above. A finance function locates itself on each ladder; the four locations together describe its overall position more honestly than any single summary score. The five stages are sequenced; an institution that is at stage three in one category and stage one in another should not be embarrassed by the gap, which is normal, but it should be conscious of it, which it often is not.
We deliberately do not call this a maturity score. Scoring against a ladder of this kind tends to invite a kind of false precision that is unhelpful to a board. The useful question is not what number a finance function gets; it is which stage it is at and what the gating step to the next stage is. We will describe the five stages in general terms below, and then briefly indicate how each plays out in each of the four categories.
Stage 1 — Unmonitored adoption
Individual members of the finance team are using the technology on their own initiative, on their own devices or under personal accounts, without any institutional policy. The CFO does not know what is being used, on what data, by whom, or to what effect. As of writing, in our experience, this is the position of the majority of Caribbean finance functions for category two — the generative-AI-for-narrative use case. It is not an indictment; it is the inheritance of a technology that diffused faster than any institutional policy could keep up with. But it is not where a finance function should remain past the next twelve months.
Stage 2 — Acknowledged adoption with policy
The institution has acknowledged that the technology is in use, has issued a written policy specifying which uses are sanctioned and on what data, and has identified an owner — typically the CFO or the controller, occasionally the chief operating officer — who is accountable for compliance with that policy. This is, in our view, the minimum responsible position for category two by the end of 2026 in any Caribbean institution of scale. It is also achievable in ninety days; the work is policy work and communication work, not technology work.
Stage 3 — Authorised pilots with measurement
The institution has identified specific finance use cases that justify formal pilots, has scoped them with success metrics agreed in advance, has run them for a defined period, and is in a position to report to the audit committee on what worked, what did not, and what should be either expanded or terminated. Stage three is where the honest measurement of category one already in production lives — the ML in the planning system, in the treasury platform, in the AML engine. It is also where category-three (agentic) work appropriately begins for any institution doing it for the first time.
Stage 4 — Embedded with controls
The pilot is now production. The use case is embedded in the finance function’s regular work, with a documented control framework around it: who can change what, who approves what, how exceptions are routed, how the audit trail is preserved, how the model or agent is itself audited and re-validated periodically. Stage four is where the harder governance work in categories one and three lives. It is also where the external auditor begins to find evidence of management’s design and operation of controls over the AI, which is itself becoming a separable audit consideration.
Stage 5 — Optimised and disclosed
The use case is mature, controlled, and the institution is willing to disclose it — in the annual report, in the audit committee charter, in the management discussion and analysis. Disclosure is the mark of confidence. A finance function that has reached stage five in one or more categories is signalling to its stakeholders that it has the technology under control to a degree that bears external scrutiny. As of writing, we are aware of no Caribbean finance function at stage five in category three. There are several at stage five in category one. The honest expectation for category two is stage four by 2027 in well-run institutions.
The four risks the board should worry about
Having set out the four categories and the maturity model, we turn to the risks. In our engagement experience, the finance functions that adopt AI well and the finance functions that adopt it badly are not separated by their technology choices; they are separated by whether four specific risks have been identified, owned, and mitigated. Three of the four are not technology risks at all — they are governance failures the board can prevent.
Risk one — invisibility
The first and most common risk is that the board simply does not know what is happening. The finance function is using AI in categories one and two; the audit committee has never been briefed; no policy exists; no register exists; no ownership exists. The board’s first protection is not to demand that the technology be slowed down — slowing it down is rarely possible and usually counterproductive — but to demand visibility. A quarterly written report to the audit committee from the CFO, specifying which AI uses are running in the finance function, on what data, with what outcomes, and what changed since the last report, is the simplest and most effective first defence. We have seen this single discipline transform Caribbean audit committees’ ability to govern AI in finance within two quarters.
Risk two — false-precision narrative
The second risk is the one we identified above in category two. Generative AI produces narratives that sound explanatory but may be confabulating. The defence is a controls discipline most finance functions do not yet have: every AI-drafted narrative claim — “this happened because of that” — must be traceable to evidence the institution actually has. The simplest implementation is a citation discipline. If the AI says revenue rose because of a specific factor, the draft must cite where the institution can verify it; if there is no source, the claim comes out before the human signs the document. This is a small change in workflow with a large change in risk profile.
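The citation discipline described above can be made concrete as a pre-sign-off screen: every causal claim in an AI-drafted narrative either carries a source the institution can verify, or it comes out of the draft. The sketch below assumes a hypothetical claim schema (a dict with `text` and an optional `source`); in a real workflow the source field would link to ledger evidence, a contract, or a memo:

```python
def screen_draft_claims(claims):
    """Split AI-drafted narrative claims into citable and unsupported.

    Each claim is a dict with 'text' and an optional 'source'. This is
    an illustrative schema, not a standard; the point is the discipline:
    unsupported causal claims are removed before a human signs the draft.
    """
    citable = [c for c in claims if c.get("source")]
    unsupported = [c for c in claims if not c.get("source")]
    return citable, unsupported

draft = [
    {"text": "Gross margin fell 3pp due to a one-off promotional discount",
     "source": "Promotions memo, Feb 2026"},
    {"text": "Revenue rose because of commodity input cost decreases"},  # no evidence
]
citable, unsupported = screen_draft_claims(draft)
```

The second claim above is exactly the confabulation pattern described in category two: a plausible explanation with no source behind it. The screen does not judge truth; it forces the human reviewer to confront the absence of evidence before signing.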
Risk three — agentic over-reach
The third risk applies specifically to category three and is the risk most likely to produce a serious finding in the next two years. An agent designed to reconcile a specific account begins, through scope creep or misconfiguration, to reconcile other accounts; an agent designed to draft journals begins, through capability expansion, to post them. The boundary of what the agent is allowed to do drifts. The defence here is documented and tested boundaries — what we sometimes call a “capability perimeter” — combined with a periodic exercise in which a member of the internal audit team or an external party attempts to push the agent past its perimeter and documents whether the boundary holds. This is novel work for most internal audit functions in the Caribbean, and it is the kind of work the audit committee should be asking about by the end of this year.
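A capability perimeter of the kind described above can be expressed as a small, auditable table: which actions the agent may take unattended, which require a named human approver, and everything else denied by default. The action names, account codes, and approver role below are hypothetical; the point is that the perimeter is documented, testable, and deny-by-default, so internal audit has something concrete to push against:

```python
# Hypothetical capability perimeter for a closing-cycle agent.
# Anything not listed here is denied -- scope creep has to be authorised,
# it cannot merely drift.
PERIMETER = {
    "reconcile":     {"accounts": {"1100", "1200"}, "approval": None},
    "draft_journal": {"accounts": {"6100"},         "approval": None},
    "post_journal":  {"accounts": {"6100"},         "approval": "controller"},
}

def check_action(action, account, approver=None):
    """Return 'allow', 'needs_approval', or 'deny' for a proposed agent action."""
    rule = PERIMETER.get(action)
    if rule is None or account not in rule["accounts"]:
        return "deny"            # outside the documented perimeter
    if rule["approval"] and approver != rule["approval"]:
        return "needs_approval"  # a control gate: a named human must sign
    return "allow"
```

The periodic perimeter test the article recommends is then mechanical: internal audit attempts actions just outside each boundary — a reconciliation of an unlisted account, a posting without the approver — and documents that each returns `deny` or `needs_approval` rather than `allow`.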
Risk four — the auditor expectation gap
The fourth risk is category four made operational. The external auditor’s expectations of what the finance function can produce — full general ledgers, complete contract sets, structured metadata — are rising faster than many Caribbean finance functions are upgrading. The risk is not that the institution fails the audit; it is that the audit becomes more expensive, more intrusive, and more conflict-prone than it needs to be. The defence is a candid conversation with the audit partner, well in advance of the year-end, about what data and what processes the audit team intends to deploy AI against, and what the institution would need to provide. CFOs who have this conversation early are, in our experience, materially better placed than those who do not.
| WHAT WE OBSERVE ACROSS CARIBBEAN AUDIT ENGAGEMENTS
In three Caribbean audit engagements in the past twelve months where the audit team deployed AI tooling against the institution’s general ledger, the institution had not been informed in advance of the techniques being used. In two of the three, the AI tooling identified anomalies that were resolved without difficulty but that, had they been raised cold in the audit committee meeting, would have created unnecessary alarm. The lesson is not that the auditor should hold back; it is that the finance function and the audit team should be in conversation, well before the fieldwork begins, about what AI techniques the audit will use and what the finance function should be ready to evidence. This is not a controversial conversation. We find it almost never happens proactively. |
What to do this year
The article has now covered enough ground to be specific about the three things a Caribbean CFO should commit to in the next ninety days, and the three things the audit committee should be in a position to verify by the end of the year. We are deliberate about the sequencing; doing these in the wrong order tends to produce either bureaucracy without effect or activity without governance.
First, the CFO should commission a one-page register of every AI use case currently running in the finance function. The register should record, for each use case: the category (one through four), the owner, the data being used, the stage on the maturity model, and a single sentence about the controls in place. Producing this register honestly is itself a useful exercise; in our experience it is the moment at which the CFO discovers the gap between what she thought was happening and what is. The register goes to the audit committee, where it begins a quarterly cycle.
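The register's columns are fully specified in the paragraph above, so its schema can be sketched directly. The field names and the rendering below are illustrative — a spreadsheet serves equally well; what matters is that every use case carries all five attributes and nothing is left blank:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One row of the finance function AI register (illustrative schema)."""
    use_case: str
    category: int        # 1-4, per the four categories in this article
    owner: str
    data_used: str
    maturity_stage: int  # 1-5, per the D-AGENTICA ladder
    controls: str        # a single sentence on the controls in place

def render_register(entries):
    """Render the one-page register as plain text for the audit committee pack."""
    lines = []
    for e in entries:
        lines.append(f"[Cat {e.category} / Stage {e.maturity_stage}] "
                     f"{e.use_case} -- owner: {e.owner}; data: {e.data_used}; "
                     f"controls: {e.controls}")
    return "\n".join(lines)

register = [RegisterEntry(
    use_case="Variance commentary drafting",
    category=2, owner="Group Controller",
    data_used="Monthly management accounts",
    maturity_stage=1,
    controls="Human review and edit before issue")]
print(render_register(register))
```

A use case the CFO cannot fill all five fields for is, by definition, at stage one of the maturity model — which is itself the finding the register exists to surface.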
Second, the CFO should issue a finance function AI use policy — a single document, ideally no more than three pages — specifying what is sanctioned, on what data, by whom, with what review. The policy should explicitly address category two (the narrative-output use case), because that is where unauthorised adoption is most common. The policy is not a constraint on innovation; it is the precondition for it.
Third, the CFO should request, as a formal item on the next audit committee agenda, a discussion with the external audit partner about the audit firm’s own AI use against the institution’s records. The point is not adversarial; it is co-ordination. A partner who is asked this question by the CFO will almost always provide a useful answer.
By the end of 2026, an audit committee that has done this work will be able to say, of its institution’s finance function: we know what AI is in use; we have a policy governing it; we receive a quarterly register; we have located the function on a maturity model across each of the four categories; we have an explicit conversation with our external auditor about their AI use against our records. That is a different audit committee position from the one most Caribbean institutions occupy as of writing. It is achievable. It does not require new technology; it requires institutional decision.
Closing reflection — and what comes next
Article 8 closes Act III. The three application articles — financial services use cases, the workforce transition, and the finance function — together describe the domains in which Caribbean institutions will most concretely encounter AI in the next two years. The pattern across the three is consistent. The technology is real; the productivity gains are real; the controls and governance work is the gap; the responsibility for closing that gap belongs to the board and the senior management team, not to the technology vendor and not to the IT department.
Act IV — The Decision — will turn from application to governance. Article 9 takes up the AI governance question directly, asking what a Caribbean board should put in its AI governance charter and what should remain at management level. Article 10 offers sector spotlights — short readings on what AI adoption looks like specifically in Caribbean tourism, manufacturing, and the public sector. Article 11 unveils the comprehensive D-AGENTICA™ Maturity Model, of which the Finance Function model in this article is the first domain instance. Article 12 closes the series with the call to the Caribbean boardroom that the entire programme has been building toward. We will see you in Act IV.
| FOR THE BOARD AGENDA
This article has specified, through the four categories, the named maturity model, and the four risks, what Caribbean boards should expect of their CFO and audit committee on AI in the finance function this year. A board chair, audit committee chair, or executive committee chair reading this article has earned the right to ask their leadership team one specific question and to propose one specific decision that will materially improve the position over the next ninety days.
THE QUESTION Within ninety days, can our CFO present to the audit committee a written register of every AI use case currently running in the finance function — by category, by owner, by data source, by maturity-model stage, and by the controls in place — together with the finance function AI use policy that governs them and a record of the conversation with our external audit partner about the audit team’s own AI use against our records?
THE DECISION That, by the end of the next quarter, the audit committee will receive a standing quarterly written update from the CFO, in the format set out in this article, on the state of AI in the finance function — and that the chair of the audit committee will have personally reviewed and approved the finance function AI use policy before the audit committee receives its first quarterly update. |
ABOUT THE AUTHOR
Dr. Dawkins Brown is the Executive Chairman and Founder of Dawgen Global. He holds a PhD and the MCMI and ACFE designations, with twenty-three-plus years of professional experience including a prior career at Ernst & Young before founding Dawgen Global. He writes the LinkedIn newsletter Caribbean Boardroom Perspectives and serves as Executive Chairman of Business Access Television.
About Dawgen Global
Embrace BIG FIRM capabilities without the big firm price at Dawgen Global, your committed partner in carving a pathway to continual progress in the vibrant Caribbean region. Our integrated, multidisciplinary approach is finely tuned to address the unique intricacies and lucrative prospects that the region has to offer. Offering a rich array of services, including audit, accounting, tax, IT, HR, risk management, and more, we facilitate smarter and more effective decisions that set the stage for unprecedented triumphs. Let’s collaborate and craft a future where every decision is a steppingstone to greater success. Reach out to explore a partnership that promises not just growth but a future beaming with opportunities and achievements.
Email: [email protected]
Visit: Dawgen Global Website
WhatsApp Global: +1 555-795-9071
Caribbean Office: +1 876-665-5926 / 876-929-3670 / 876-926-5210
USA Office: 855-354-2447
Join hands with Dawgen Global. Together, let’s venture into a future brimming with opportunities and achievements

