Boards and executives are increasingly comfortable with the idea that Artificial Intelligence must be governed—not just developed and deployed.

Most organisations now recognise the importance of:

  • AI policies and principles

  • Defined roles and responsibilities

  • Data protection, cybersecurity and model risk management

  • Fairness, explainability, and human oversight

  • Ongoing monitoring and continuous improvement

Frameworks such as the NIST AI Risk Management Framework (AI RMF), built around the four functions Govern, Map, Measure and Manage, provide a structured way to think about AI risk. International standards like ISO/IEC 42001, the world’s first AI management system standard, are emerging to formalise AI governance. And regulatory initiatives such as the EU AI Act are setting legally binding expectations for high-risk AI systems in sectors like credit, healthcare, employment, and public services.

But a recurring question remains:

“Where do we stand today—and how do we measure progress in AI governance and ethics over time?”

Policies and slide decks are not enough. Boards, regulators, and partners increasingly want a quantitative, evidence-based view of AI governance maturity—something that allows comparison across time, across business units, and against external benchmarks.

To answer this need, Dawgen Global has developed the Dawgen AI Governance & Ethics Index (DAGEI)™.

This article explains:

  • Why measurement is essential for AI governance

  • What DAGEI™ measures and how it is structured

  • How DAGEI™ aligns with NIST AI RMF, ISO/IEC 42001 and the EU AI Act

  • How DAGEI™ is applied in practice across sectors

  • How boards and executives can use DAGEI™ to steer AI responsibly

Why AI Governance Needs Measurement, Not Just Principles

Most organisations have already taken some visible steps on AI governance:

  • Issued an AI ethics statement or adopted responsible AI principles

  • Established an AI steering committee or model risk committee

  • Created policies covering AI development, model risk, and data protection

  • Signed up to public commitments or industry codes of conduct

These are important, but without measurement, it is difficult to answer practical questions such as:

  • Are our AI policies actually implemented in projects?

  • Are we managing AI risk consistently across businesses and geographies?

  • Are we improving—or just repeating the same slogans each year?

  • Where should we prioritise investment: data governance, monitoring, human oversight, or something else?

Regulators and standards bodies are pushing in the same direction:

  • The NIST AI RMF emphasises the need to measure AI risks and the trustworthiness characteristics (validity, reliability, safety, fairness, explainability, privacy, security) using clear metrics and evaluations.

  • ISO/IEC 42001 requires organisations to monitor, audit and review their AI management system—similar to other management standards—ensuring continual improvement, not static compliance.

  • Under the EU AI Act, providers and deployers of high-risk AI systems must maintain documentation, risk management processes, technical logs and post-market monitoring—artefacts that lend themselves to periodic review and scoring.

In other words: AI governance must become auditable and trackable.

DAGEI™ was created to provide exactly that: a structured, repeatable way to measure AI governance and ethics maturity.

What is DAGEI™?

The Dawgen AI Governance & Ethics Index (DAGEI)™ is a proprietary scoring and benchmarking tool designed by Dawgen Global to answer a simple but important question:

“How mature is our AI governance and ethics—right now, and where should we improve next?”

DAGEI™ converts a broad, sometimes abstract topic into a set of clear dimensions, sub-dimensions, and scores. It can be applied:

  • At enterprise level – to assess overall AI governance maturity

  • At division or sector level – e.g., banking vs. insurance vs. public sector

  • At portfolio or use-case level – e.g., credit scoring, AML, diagnostic AI, welfare eligibility, generative AI copilots

The index is built around six core dimensions:

  1. Governance & Accountability

  2. Policy, Standards & Regulatory Alignment

  3. Data, Privacy & Security

  4. Fairness, Human Rights & Societal Impact

  5. Operational Resilience, Monitoring & Incident Management

  6. Transparency, Explainability & Stakeholder Engagement

Each dimension is scored on a maturity scale (for example: Foundational, Emerging, Established, Leading), which can be mapped to numerical values for dashboards and benchmarking.
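To illustrate how such a maturity scale can be converted into the numerical values used in dashboards and benchmarking, the short sketch below maps the four example levels onto a 1–4 scale and rolls six dimension ratings up into a single composite figure. It is a minimal sketch under stated assumptions: the dimension weights and the equal-weighting default are invented for illustration and are not Dawgen Global's proprietary DAGEI™ scoring methodology.

```python
# Illustrative only: maps the example maturity levels to numbers and combines
# dimension ratings into a composite. Weights are hypothetical assumptions,
# not the proprietary DAGEI(TM) methodology.

MATURITY_LEVELS = {"Foundational": 1, "Emerging": 2, "Established": 3, "Leading": 4}

def composite_index(dimension_ratings, weights=None):
    """Convert per-dimension maturity ratings into a weighted composite on a 1-4 scale."""
    if weights is None:
        weights = {dim: 1.0 for dim in dimension_ratings}  # equal weighting by default
    total_weight = sum(weights[dim] for dim in dimension_ratings)
    weighted = sum(MATURITY_LEVELS[level] * weights[dim]
                   for dim, level in dimension_ratings.items())
    return round(weighted / total_weight, 2)

ratings = {
    "Governance & Accountability": "Established",
    "Policy, Standards & Regulatory Alignment": "Emerging",
    "Data, Privacy & Security": "Established",
    "Fairness, Human Rights & Societal Impact": "Emerging",
    "Operational Resilience, Monitoring & Incident Management": "Foundational",
    "Transparency, Explainability & Stakeholder Engagement": "Emerging",
}

print(composite_index(ratings))  # 2.17 on the illustrative 1-4 scale
```

In practice, any weighting of the dimensions would reflect the organisation's risk profile and the scope agreed during the assessment, rather than the equal weights assumed here.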

The Six Dimensions of DAGEI™

1. Governance & Accountability

This dimension assesses whether AI is embedded into core governance structures, not treated as an afterthought.

Key questions include:

  • Is there a clearly defined AI governance structure—committees, roles, RACI matrices?

  • Is AI risk integrated into enterprise risk management, model risk, and internal controls?

  • Does the board or executive committee receive regular AI risk reporting?

  • Are lines of accountability clear for each AI use case—who owns outcomes, risk, and remediation?

Higher DAGEI™ scores in this dimension indicate:

  • A defined AI governance framework

  • Active involvement of senior leadership

  • Integration of AI into risk and compliance processes rather than siloed treatment

This aligns closely with the Govern function of the NIST AI RMF and the leadership and context clauses of ISO/IEC 42001.

2. Policy, Standards & Regulatory Alignment

This dimension examines how well the organisation’s AI activities are aligned with:

  • Internal policies and codes of conduct

  • External frameworks such as NIST AI RMF, ISO/IEC 42001, sectoral AI guidelines, data protection law

  • Specific AI regulations where applicable (e.g., provisions analogous to the EU AI Act for high-risk AI systems)

Key considerations:

  • Are AI policies documented, communicated and understood by relevant teams?

  • Do they cover the full lifecycle—strategy, development, deployment, monitoring, retirement?

  • Are regulatory developments tracked and translated into internal requirements?

  • For high-risk AI systems (e.g., credit, healthcare, essential services), are there explicit controls and documentation aligning with emerging regulatory expectations?

A strong DAGEI™ score here indicates that AI governance is not operating in a vacuum, but is anchored in recognised standards and legal requirements, making the organisation more credible to regulators and partners.

3. Data, Privacy & Security

Data is the lifeblood of AI—and a major source of risk.

This dimension evaluates:

  • Data governance – lineage, ownership, quality management, retention, labelling, access control

  • Privacy and data protection – compliance with applicable data protection laws, data minimisation, consent, lawful basis for processing, use of personal data in training and inference

  • Security – protection of datasets, models, APIs and AI infrastructure; defence against data exfiltration, model theft, and unauthorised use

For generative AI and large language models, DAGEI™ also considers:

  • How prompts, logs, and outputs are handled (especially if they contain confidential or personal data)

  • Whether data is used for provider training and under what conditions

  • How third-party risks are managed for external AI platforms

High scores in this area show that AI is built on a secure and compliant data foundation, reducing the risk of breaches, misuse or regulatory sanctions.

4. Fairness, Human Rights & Societal Impact

This dimension focuses on ethical and societal risk—issues that are increasingly central to regulators, international bodies, and public opinion.

Key aspects include:

  • Processes to identify AI use cases that may affect fundamental rights (e.g., access to credit, employment, healthcare, welfare, justice).

  • Systematic bias and fairness testing where legally permissible and appropriate—e.g., performance and error rates across demographic or relevant sub-groups (a simple illustration follows this list).

  • Consideration of indirect discrimination and proxy variables, not just explicit protected features.

  • Policies and procedures for human rights due diligence related to AI, drawing on global frameworks and guidance.
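To make sub-group testing concrete, the sketch below computes selection rates and error rates per group for a hypothetical binary decision (for example, approve/decline). The group labels, records and choice of metrics are invented for illustration; real fairness analysis must use legally permissible attributes and metrics suited to the specific use case.

```python
# Illustrative only: per-group selection and error rates for a binary decision.
# Groups, data and metrics are hypothetical examples.
from collections import defaultdict

def group_rates(records):
    """records: iterable of (group, y_true, y_pred), where outcomes are 0 or 1."""
    stats = defaultdict(lambda: {"n": 0, "pos": 0, "neg": 0,
                                 "selected": 0, "fp": 0, "fn": 0})
    for group, y_true, y_pred in records:
        s = stats[group]
        s["n"] += 1
        s["pos"] += int(y_true == 1)
        s["neg"] += int(y_true == 0)
        s["selected"] += int(y_pred == 1)
        s["fp"] += int(y_pred == 1 and y_true == 0)  # predicted positive, actually negative
        s["fn"] += int(y_pred == 0 and y_true == 1)  # predicted negative, actually positive
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
            "false_negative_rate": s["fn"] / s["pos"] if s["pos"] else None,
        }
        for g, s in stats.items()
    }

sample = [("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0),
          ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1)]
for group, metrics in group_rates(sample).items():
    print(group, metrics)
```

Comparing these rates across groups (and across proxy-defined sub-groups) is one concrete way to evidence the bias-testing processes this dimension looks for.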

A high DAGEI™ score here indicates the organisation is not only compliant but also proactively managing the social and ethical footprint of its AI systems.

5. Operational Resilience, Monitoring & Incident Management

AI models change behaviour over time as data, context and systems evolve. This dimension assesses how prepared the organisation is to detect and respond when things go wrong.

Key questions:

  • Are there defined metrics and dashboards for AI performance, drift, complaints, and incidents?

  • Are drift detection processes in place (for both data drift and concept drift)? A simple data-drift sketch follows this list.

  • Is AI integrated into incident management and operational risk frameworks—with clear severity levels, response plans, and root cause analysis?

  • Are there documented playbooks for pausing, rolling back or constraining AI systems during incidents?
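As one illustration of the kind of data-drift check referenced above, the sketch below computes the Population Stability Index (PSI), a widely used metric for comparing a feature's current distribution against a reference distribution captured at training time. The quantile binning and the rule-of-thumb thresholds in the comments are conventional illustrative assumptions, not DAGEI™ or DCAMA™ requirements.

```python
# Illustrative only: Population Stability Index (PSI), a common data-drift metric.
# Binning scheme and alert thresholds are conventional examples, not
# DAGEI(TM)/DCAMA(TM) requirements.
import math

def psi(reference, current, bins=10):
    """Compare two numeric samples by binning against reference quantiles."""
    ref_sorted = sorted(reference)
    # Quantile-based bin edges taken from the reference sample
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > edge for edge in edges)  # which bin the value falls into
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-4) for c in counts]

    ref_frac, cur_frac = bin_fractions(reference), bin_fractions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_frac, cur_frac))

# Rule-of-thumb thresholds often quoted in practice:
#   < 0.1 stable, 0.1-0.25 investigate, > 0.25 significant shift.
score = psi(reference=[0.1 * i for i in range(1000)],
            current=[0.1 * i + 5 for i in range(1000)])
print(round(score, 3))
```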

This dimension closely ties to Dawgen Continuous AI Monitoring & Assurance (DCAMA)™, which operationalises monitoring and periodic review. Strong performance here demonstrates a move from “deploy and forget” to continuous AI resilience.

6. Transparency, Explainability & Stakeholder Engagement

Finally, AI governance is about how AI is perceived and experienced by those it affects.

This dimension covers:

  • Internal transparency – do staff and decision-makers understand when AI is involved and what its limitations are?

  • External transparency – do customers, patients, citizens or partners receive appropriate disclosures about AI use?

  • Explainability – can the organisation provide meaningful explanations of AI-assisted decisions, appropriate to each audience (e.g., technical vs. layperson vs. regulator)?

  • Stakeholder engagement – do high-impact AI use cases involve dialogue with regulators, civil society, or affected communities where appropriate?

High DAGEI™ scores here indicate that AI is not a black box but a well-explained, contestable and accountable capability, enhancing trust.

How DAGEI™ is Calculated

While the detailed scoring methodology is proprietary, the process broadly involves four stages:

  1. Scoping – Agreeing on the scope of the assessment (entire organisation vs. specific business, sector, or portfolio of AI systems).

  2. Evidence Gathering – Reviewing policies, procedures, documentation, system inventories, risk registers, monitoring reports, training materials, and organisational charts; conducting targeted interviews and workshops.

  3. Scoring & Calibration – Assessing each dimension against defined maturity descriptors, assigning scores, and calibrating across the organisation to ensure consistency.

  4. Reporting & Roadmap – Presenting results in a dashboard and narrative report, identifying “quick wins” and strategic priorities, and outlining a phased improvement roadmap.

Scores can be presented numerically, visually (heatmaps, radar charts), and qualitatively (with commentary on strengths and weaknesses).
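As a minimal illustration of the reporting stage, the sketch below lays dimension scores from two assessment cycles side by side, the kind of tabular view that typically sits behind a heatmap or radar chart. The figures and the 1–4 scale are invented for the example.

```python
# Illustrative only: year-over-year dimension scores on an assumed 1-4 scale;
# the figures are invented for the example.
year1 = {"Governance & Accountability": 2.0,
         "Policy, Standards & Regulatory Alignment": 1.5,
         "Data, Privacy & Security": 2.5,
         "Fairness, Human Rights & Societal Impact": 1.5,
         "Operational Resilience, Monitoring & Incident Management": 1.0,
         "Transparency, Explainability & Stakeholder Engagement": 2.0}
year2 = {"Governance & Accountability": 3.0,
         "Policy, Standards & Regulatory Alignment": 2.5,
         "Data, Privacy & Security": 3.0,
         "Fairness, Human Rights & Societal Impact": 2.0,
         "Operational Resilience, Monitoring & Incident Management": 2.0,
         "Transparency, Explainability & Stakeholder Engagement": 2.5}

print(f"{'Dimension':<60}{'Y1':>6}{'Y2':>6}{'Change':>8}")
for dim in year1:
    delta = year2[dim] - year1[dim]
    print(f"{dim:<60}{year1[dim]:>6.1f}{year2[dim]:>6.1f}{delta:>+8.1f}")
```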

Because DAGEI™ is designed to integrate with Dawgen’s broader frameworks—DALA™, DGACF™, DCAMA™—it also links directly to actionable interventions (e.g., audit engagements, control enhancements, monitoring improvements).

Aligning DAGEI™ with NIST AI RMF, ISO/IEC 42001 and AI Regulations

One of the strengths of DAGEI™ is that it is standards-aware.

  • NIST AI RMF – DAGEI™ dimensions map onto the RMF functions:

    • Govern ↔ Governance & Accountability; Policy & Standards

    • Map ↔ Policy & Regulatory Alignment; Data & Context understanding

    • Measure ↔ Operational Resilience, Monitoring & Incident Management

    • Manage ↔ Risk treatment and continuous improvement across all dimensions

  • ISO/IEC 42001 – DAGEI™ supports the management system clauses by:

    • Providing evidence for context, leadership, planning, operation, performance evaluation and improvement (all reflected in the index dimensions).

    • Helping organisations demonstrate a systematic approach to AI risk and ethics.

  • EU AI Act-style obligations for high-risk AI – while DAGEI™ itself is not tied to any single law, high scores typically indicate better readiness to meet requirements analogous to:

    • Documentation and risk management

    • Data governance and technical robustness

    • Transparency, human oversight, and post-market monitoring

For organisations operating across multiple jurisdictions, DAGEI™ offers a single, coherent view that can be used in discussions with different regulators and partners, even where local rules vary.

How Organisations Use DAGEI™ in Practice

1. Enterprise-Level Baseline

A regional banking group, healthcare provider, or conglomerate might use DAGEI™ to establish a baseline:

  • Identify where AI is already in use

  • Evaluate governance maturity across the six dimensions

  • Highlight divisions or subsidiaries with particularly high or low scores

This baseline becomes the starting point for an AI governance programme, with DAGEI™ recalculated annually to show progress.

2. Sector or Business-Unit Comparison

A diversified group (e.g., financial services + telecoms + public sector contracts) can use DAGEI™ to compare:

  • Which businesses are more mature in AI governance

  • Which sectors face higher regulatory expectations and therefore need faster improvements

  • Where shared services (e.g., central data and AI teams) need to support weaker areas

This helps leadership prioritise investment and allocate responsibilities.

3. Portfolio or Use-Case Focus

For organisations with a few critical AI systems—say, a credit scoring engine, a healthcare triage tool, and a welfare eligibility model—DAGEI™ can be applied at use-case level, producing:

  • A specific score for each system

  • Detailed findings on governance, data, fairness, monitoring, transparency

  • A roadmap of targeted improvements

This approach is particularly useful for high-risk AI areas where regulators and stakeholders are most concerned.

4. Integration with DCAMA™ and DALA™

DAGEI™ is not just a one-off diagnostic. When combined with DALA™ and DCAMA™:

  • DAGEI™ highlights where to focus audit and assurance work (e.g., low fairness scores, weak monitoring).

  • DALA™ engagements address identified weaknesses with structured AI lifecycle audits.

  • DCAMA™ provides ongoing monitoring and periodic reassessment, with updated DAGEI™ scores each year.

The result is a closed loop: measure → improve → monitor → measure again.

What Boards and Executives See

From a board or executive perspective, DAGEI™ delivers:

  • A single index that summarises AI governance and ethics maturity

  • Dimension-level scores that show where strengths and gaps lie

  • Comparisons over time (Year 1 vs. Year 2 vs. Year 3)

  • Clear links to regulatory and reputational risk

  • A tangible way to track the impact of governance investments (policies, committees, training, technology)

Instead of anecdotal answers—“we’re working on AI governance”—boards receive a structured, evidence-based narrative backed by Dawgen’s independent assessment.

Questions for Leadership: Do You Know Your DAGEI™?

Senior leaders should be able to answer:

  1. How mature is our AI governance and ethics across the organisation—and how do we know?

  2. Where are our biggest gaps: governance, data, fairness, monitoring, or transparency?

  3. Can we demonstrate alignment with global AI governance frameworks in a quantifiable way?

  4. How has our AI governance maturity changed over the last 12–24 months?

  5. Which AI systems or business units require urgent remediation, and what is the plan?

If your organisation cannot answer these questions clearly, DAGEI™ can help turn uncertainty into clarity and direction.

Next Step: Quantify Your AI Governance with DAGEI™

AI governance is no longer optional—and neither is measuring it.

The Dawgen AI Governance & Ethics Index (DAGEI)™ gives boards, executives and regulators a robust, transparent and repeatable way to understand AI governance maturity, identify gaps, and track improvement over time.

Combined with Dawgen Global’s broader AI assurance suite—DALA™, DGACF™, and DCAMA™—DAGEI™ becomes a powerful steering tool for organisations that want to innovate confidently while maintaining trust and compliance.

At Dawgen Global, we help you make Smarter and More Effective Decisions.

📧 To assess your organisation’s AI Governance & Ethics Index and receive a tailored improvement roadmap, email [email protected] to request a DAGEI™ assessment proposal.

Our multidisciplinary team will work with you to scope the assessment, gather evidence, calculate your DAGEI™ scores and design a practical roadmap—so your AI governance is not just well-intentioned, but measured, managed and demonstrably robust.

About Dawgen Global

“Embrace BIG FIRM capabilities without the big firm price at Dawgen Global, your committed partner in carving a pathway to continual progress in the vibrant Caribbean region. Our integrated, multidisciplinary approach is finely tuned to address the unique intricacies and lucrative prospects that the region has to offer. Offering a rich array of services, including audit, accounting, tax, IT, HR, risk management, and more, we facilitate smarter and more effective decisions that set the stage for unprecedented triumphs. Let’s collaborate and craft a future where every decision is a steppingstone to greater success. Reach out to explore a partnership that promises not just growth but a future beaming with opportunities and achievements.”

✉️ Email: [email protected] 🌐 Visit: Dawgen Global Website 

📱 WhatsApp (Global): +1 555-795-9071

📞 Caribbean Office: +1 876-665-5926 / +1 876-929-3670 / +1 876-926-5210

📞 USA Office: 855-354-2447

Join hands with Dawgen Global. Together, let’s venture into a future brimming with opportunities and achievements.

by Dr Dawkins Brown

Dr. Dawkins Brown is the Executive Chairman of Dawgen Global, an integrated multidisciplinary professional service firm. Dr. Brown earned his Doctor of Philosophy (Ph.D.) in the field of Accounting, Finance and Management from Rushmore University. He has over twenty-three (23) years of experience in the fields of audit, accounting, taxation, finance and management. Starting his public accounting career in the audit department of a “big four” firm (Ernst & Young), and gaining experience in local and international audits, Dr. Brown rose quickly through the senior ranks and held the position of Senior Consultant prior to establishing Dawgen.


© 2024 Copyright Dawgen Global. All rights reserved.