
The Bias Problem Is Not a Technical Problem
When discussions of AI bias arise in boardrooms, they are often quickly handed to technical teams with a mandate to ‘fix it’. This reflects a fundamental misunderstanding. AI bias is not primarily a technical problem — it is a governance problem that has technical manifestations. It arises from human decisions about what data to collect, what objectives to optimise for, what trade-offs to accept, and whose interests to prioritise. And it can only be adequately addressed through governance frameworks that bring human accountability to bear at each of those decision points.
For Caribbean enterprises, the stakes are particularly high. Economies characterised by significant historical inequality — in access to credit, employment, housing, healthcare, and education — produce data that reflects those inequalities. An AI system trained on that data will, without deliberate intervention, learn to reproduce those inequalities at scale. The efficiency gains from AI deployment can become a mechanism for entrenching historical disadvantage — unless governance actively prevents it.
Understanding the Sources of AI Bias
AI bias can enter a system at multiple points across the development and deployment lifecycle. Boards and governance functions must understand all of them:
| Bias Source | Description and Caribbean Relevance |
| --- | --- |
| Historical Bias | Training data reflects historical patterns of discrimination. Caribbean credit data may reflect decades of discriminatory lending practices; HR data may reflect gender or ethnic biases in past hiring decisions. |
| Representation Bias | Training data underrepresents certain groups, causing the model to perform poorly for those groups. Small Caribbean markets may lack sufficient data to train models that perform equitably across demographic groups. |
| Measurement Bias | Proxy variables are used because direct measurement is unavailable — e.g., using postcode as a proxy for creditworthiness in ways that correlate with race or ethnicity. |
| Aggregation Bias | A single model is applied to a diverse population where different sub-groups have systematically different characteristics, causing the model to perform poorly for subgroups underrepresented in training. |
| Feedback Loop Bias | A model’s outputs influence future data collection, reinforcing the model’s existing biases over time. |
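The feedback-loop mechanism in the last row can be sketched with a toy simulation (the groups, thresholds, and numbers below are entirely hypothetical, chosen only for illustration): approval decisions determine whose outcomes are ever observed, so each "retraining" round sees less data about the disadvantaged group and raises its bar further.

```python
import random

random.seed(0)

def simulate(rounds=5, applicants=1000):
    # Group B starts with a higher approval bar purely because of
    # (hypothetical) historical data.
    threshold = {"A": 0.50, "B": 0.60}
    history = []
    for _ in range(rounds):
        approvals = {"A": 0, "B": 0}
        for _ in range(applicants):
            group = random.choice(["A", "B"])
            score = random.random()
            if score >= threshold[group]:
                approvals[group] += 1
        # "Retraining": the group with fewer observed repayment outcomes
        # looks riskier to the model, so its threshold drifts upward.
        if approvals["B"] < approvals["A"]:
            threshold["B"] = min(1.0, threshold["B"] + 0.05)
        history.append(dict(approvals))
    return history

history = simulate()
# Group B's approval count shrinks round over round, even though the
# underlying applicant population never changed.
```

No individual decision in this loop is malicious; the disparity compounds purely because the model's own outputs gate which outcomes enter the next training set.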
The Competing Definitions of Fairness
A critical and often overlooked governance challenge is that ‘fairness’ in AI is not a single, mathematically defined concept. Multiple definitions of fairness exist — and they are frequently mutually incompatible. No AI system can simultaneously satisfy all of them. This means that fairness is a policy choice, not a technical determination, and it must be made explicitly by governance functions with human accountability.
The principal fairness definitions that Caribbean governance frameworks should be familiar with include:
- Demographic Parity: the AI system produces positive outcomes at equal rates across demographic groups. Does our credit model approve loans at equal rates for all demographic groups?
- Equal Opportunity: the AI system produces equal true positive rates across groups. Does our fraud model correctly identify fraudulent transactions at equal rates for all demographic groups?
- Predictive Parity: the AI system’s predictions are equally accurate across groups. Does our risk model have equal predictive accuracy for all customer segments?
- Individual Fairness: similar individuals are treated similarly by the AI system, regardless of group membership.
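The incompatibility between these definitions can be made concrete with a small worked example (the applicant data and model predictions below are synthetic and purely illustrative): a single set of credit decisions can satisfy demographic parity while violating equal opportunity and predictive parity at the same time.

```python
def rates(y_true, y_pred):
    """Return (approval rate, true positive rate, precision) for one group."""
    n = len(y_true)
    approval_rate = sum(y_pred) / n  # input to demographic parity
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0        # equal opportunity
    precision = tp / sum(y_pred) if sum(y_pred) else 0.0  # predictive parity
    return approval_rate, tpr, precision

# Group A: 10 applicants, 6 truly creditworthy (1 = creditworthy / approved)
a_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
a_pred = [1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
# Group B: 10 applicants, 4 truly creditworthy
b_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
b_pred = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0]

a = rates(a_true, a_pred)
b = rates(b_true, b_pred)
print(f"Group A: approval={a[0]:.2f}, TPR={a[1]:.2f}, precision={a[2]:.2f}")
print(f"Group B: approval={b[0]:.2f}, TPR={b[1]:.2f}, precision={b[2]:.2f}")
```

With these made-up numbers, both groups are approved at the same 60% rate (demographic parity holds), yet the model catches truly creditworthy applicants at different rates and its approvals are correct at different rates across the two groups. Which of those gaps the organisation must close is precisely the policy choice described above.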
Choosing which definition of fairness to prioritise — and making that choice explicitly and accountably — is a governance decision, not a data science decision. It should be made at the appropriate level of the organisation and documented as a matter of record.
The Caribbean Governance Imperative on Bias
Caribbean enterprises deploying AI in consequential decision domains have both a legal and ethical obligation to actively assess and mitigate discriminatory bias. Key governance requirements include:
- Pre-Deployment Bias Assessment: all Tier 2 AI systems must undergo a structured bias assessment prior to deployment, using statistical tests across relevant demographic characteristics
- Fairness Metric Selection: the AI Governance Framework must specify which fairness metric(s) are required for each category of high-risk AI, with documented rationale
- Ongoing Fairness Monitoring: bias assessment must be continuous, not just at deployment — fairness metrics must be tracked in production and deterioration must trigger investigation
- Disparate Impact Reporting: material disparities in AI outcomes across demographic groups must be reported to the appropriate governance level and disclosed to regulators where required
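As a sketch of what a pre-deployment bias assessment might compute, the snippet below applies the widely used four-fifths (80%) selection-rate ratio to approval counts by group. The group names, counts, and threshold are illustrative assumptions, not Dawgen Global's actual methodology; a production assessment would add significance testing and the further fairness metrics discussed above.

```python
def disparate_impact_check(outcomes, ratio_threshold=0.8):
    """outcomes: {group: (approved, total)}.
    Compares each group's selection rate to the highest-rate group and
    flags groups whose ratio falls below the threshold."""
    selection = {g: approved / total for g, (approved, total) in outcomes.items()}
    best = max(selection.values())
    flagged = {g: r / best for g, r in selection.items()
               if r / best < ratio_threshold}
    return selection, flagged

# Hypothetical approval counts from a pre-deployment test set
outcomes = {
    "group_1": (120, 200),
    "group_2": (80, 200),
    "group_3": (110, 200),
}
selection, flagged = disparate_impact_check(outcomes)
# group_2 is approved at 0.40 vs group_1's 0.60 — a ratio of about 0.67,
# below the 0.8 threshold, so it would be escalated for investigation.
```

The same function can be re-run on production decisions on a schedule, which is one simple way to operationalise the ongoing-monitoring requirement: a ratio that was acceptable at deployment but drifts below threshold triggers investigation.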
Dawgen Global Fairness Assessment Service
Dawgen Global’s AI Assurance practice provides structured Fairness Assessment services for Caribbean enterprises deploying high-risk AI. Our methodology applies multiple statistical fairness tests, identifies the sources of observed disparities, and recommends governance and technical remediation measures. We also assist boards in making the explicit fairness policy choices that effective AI governance requires — ensuring those choices are documented, owned, and defensible.
> Fairness in AI is not achieved by ignoring demographic characteristics — it requires actively measuring, understanding, and governing outcomes across those characteristics. Colourblind AI is not fair AI.
Next in the Series — Article 7: Explainability and Transparency: The Right to Understand AI Decisions
About Dawgen Global
“Embrace BIG FIRM capabilities without the big firm price at Dawgen Global, your committed partner in carving a pathway to continual progress in the vibrant Caribbean region. Our integrated, multidisciplinary approach is finely tuned to address the unique intricacies and lucrative prospects that the region has to offer. Offering a rich array of services, including audit, accounting, tax, IT, HR, risk management, and more, we facilitate smarter and more effective decisions that set the stage for unprecedented triumphs. Let’s collaborate and craft a future where every decision is a stepping stone to greater success. Reach out to explore a partnership that promises not just growth but a future beaming with opportunities and achievements.”
Email: [email protected]
Visit: Dawgen Global Website
WhatsApp Global: +1 555-795-9071
Caribbean Office: +1 876-665-5926 / +1 876-929-3670 / +1 876-926-5210
USA Office: 855-354-2447
Join hands with Dawgen Global. Together, let’s venture into a future brimming with opportunities and achievements.

