Who Defines “Good” and “Evil” in AI — and Where Does Africa Stand?
Artificial intelligence is often described as a neutral tool: efficient, obedient, and value-free. This is misleading. AI has quietly become a moral infrastructure, deciding which behaviour is acceptable, which claims are credible, and which risks deserve attention. In doing so, it has assumed a role once reserved for law, religion, and politics: distinguishing between good and evil.
The relevant question is not whether AI contains values. It does. The more difficult question is who defines those values, and who must live with them.
Morality by design
AI systems do not reason ethically; they enforce judgment through design. Training data, optimisation targets, safety rules, and legal constraints together create a moral architecture shaped less by universal ethics than by political pressure, legal exposure, and commercial incentives.
Most frontier AI systems are built to comply with the norms of a few powerful jurisdictions—notably the United States, the European Union, and China. Their definitions of harm, safety, and responsibility reflect the histories and anxieties of those societies. When these systems are deployed globally, their moral assumptions travel with them.
Africa rarely participates in this upstream stage.
The New Jim Code goes digital
A persistent danger of automated systems is not explicit discrimination but institutionalised misclassification. Research has repeatedly shown how historical bias, once encoded into data, becomes self-reinforcing when automated. Unequal policing produces skewed datasets; skewed datasets produce biased predictions; biased predictions justify further unequal policing.
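The mechanics are easy to demonstrate. The following toy simulation (a deliberately simplified sketch; every number in it is an assumption, not an empirical estimate) models two districts with identical underlying offence rates, where patrols are concentrated on whichever district the historical records flag as the "hot spot". Because incidents can only be recorded where patrols are actually sent, a small initial skew in the records compounds round after round.

```python
import random

random.seed(1)

# Two districts with the SAME true offence rate; only the historical
# records differ. All figures here are illustrative assumptions.
TRUE_RATE = 0.1        # identical underlying offence rate in both districts
PATROLS = 100          # patrols available each round
records = [55, 45]     # recorded incidents to date, mildly skewed at the start

for rnd in range(1, 11):
    # "Predictive" step: rank districts by past records, focus on the leader.
    hot = 0 if records[0] >= records[1] else 1
    alloc = [PATROLS // 5, PATROLS // 5]   # baseline: 20 patrols each
    alloc[hot] = PATROLS - alloc[1 - hot]  # the hot spot gets the remaining 80
    for d in (0, 1):
        # Incidents are only recorded where patrols go, so more patrols
        # produce more records even though the true rates are equal.
        records[d] += sum(random.random() < TRUE_RATE for _ in range(alloc[d]))
    print(f"round {rnd:2d}: patrols {alloc}, cumulative records {records}")
```

Nothing in the loop corrects itself: the allocation policy generates the very data that later appears to vindicate it.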
This dynamic has been described as the “New Jim Code”—a term coined by Ruha Benjamin to capture how modern technologies reproduce old hierarchies while appearing neutral and progressive. As she puts it, “technologies that are marketed as objective and efficient often encode the very inequalities they claim to fix.”
The insight matters well beyond the American context in which it was first articulated. When AI systems trained on Western data and aligned to Western institutional logics are deployed in African societies, the risk is not merely technical error but moral misalignment.
Ethics as power
AI ethics is often framed as a matter of principles—fairness, transparency, accountability. In practice, it is about who is protected and who is exposed.
Automated decision systems in welfare, policing, or public administration tend to prioritise efficiency over dignity. Errors fall disproportionately on those with limited capacity to challenge them. In contexts where legal recourse is weak and oversight limited, algorithmic judgment becomes effectively final.
For African states pursuing rapid digitalisation, the appeal of such systems is clear. So is the danger of importing tools that embed moral judgements ill-suited to local realities.
Surveillance and moral sorting
AI-driven surveillance does more than observe; it categorises. It distinguishes normal from deviant, safe from risky. These distinctions carry moral weight and shape access to services, freedom of movement, and political participation.
In environments with fragile democratic safeguards, such tools can quietly expand state power. Surveillance shifts from responding to wrongdoing to anticipating it, often without clear standards or transparency.
Africa’s expanding use of biometric identification, smart-city technologies, and digital security platforms raises pressing questions. Who defines suspicious behaviour? Who audits the models? And who bears responsibility when automated judgment goes wrong?
A continent on the receiving end
Despite growing interest in national AI strategies and continental frameworks, Africa remains largely a norm-taker in global AI governance. It consumes models built elsewhere, is governed by standards it did not set, and is aligned with moral frameworks it had little role in shaping.
This asymmetry is not merely technical. It is moral. Africa inherits definitions of harm, safety, and acceptability developed in other societies, often with limited sensitivity to local context.
The result is a quiet but consequential distortion: African realities are filtered through external lenses and misread as risk profiles, development problems, or humanitarian crises.
Why it matters
When moral judgment is automated, it becomes harder to contest. Bias scales. Accountability diffuses. Injustice becomes procedural.
For Africa, the danger is not that AI will be unethical in some abstract sense, but that it will entrench external moral hierarchies, shaping governance and public life without democratic debate.
The question of who defines good and evil in AI is therefore not merely philosophical. It is a question of agency: who decides how societies are classified, governed, and remembered.
Africa’s choice is narrowing. It can remain a consumer of pre-aligned intelligence, or it can invest in the capacity to shape how machines reason about its people, histories, and futures. The longer that choice is delayed, the more difficult it will become.


