AI policy debacle shows dangers of technology used without public oversight

SA’s draft policy contained sources fabricated by a large language model


Technological progress should serve the public good, rather than corporate interests, writes Tyronne McCrindle of Article One. Image via Wikimedia user mikemacmarketing (CC BY 2.0).

Following the recent withdrawal of the draft National Artificial Intelligence Policy by Minister Solly Malatsi, Tyronne McCrindle argues that AI must be governed in the public interest, not left to opaque systems and corporate power.

This past Sunday, Minister of Communications and Digital Technologies Solly Malatsi withdrew South Africa’s draft National Artificial Intelligence Policy after it was found to contain at least six AI-fabricated sources, commonly referred to as “AI hallucinations”.

“The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened,” said Malatsi.

The minister's swift action in acknowledging the severity of the mistake and withdrawing the policy should be commended.

But the inclusion of six fictitious references in a national policy document isn't simply embarrassing; it is a serious breach of public trust. It demonstrates the dangers of over-reliance on digital technologies without proper human oversight, and shows that public engagement with these technologies is essential.

Over the past two decades, digital technologies have become a crucial part of our lives. Increasingly, AI algorithms shape access to jobs, credit and public services.

And while these technologies have the potential to advance participatory democracy and social and economic development, they have become a major arena for the exercise of political and military power, for war and genocide.

Competition between the Big Tech giants is fierce, with many investing hundreds of billions of US dollars in AI technologies. As these corporations wrestle for dominance, they are leaving a trail of environmental and economic destruction in their wake.

AI-linked job losses in the US are starting to rack up. Amazon, Oracle, and Meta have laid off thousands of workers this past year. Microsoft dismissed more than 15,000 workers in 2025 and last week announced it would implement voluntary retrenchments for about 8,000 people in its US workforce, many of them having “spent years, and in some cases, decades, shaping Microsoft into what it is today”, Microsoft’s Chief People Officer Amy Coleman acknowledged.

On Monday, following negotiations between Google and the US Department of Defence, more than 600 Google employees wrote an open letter to the company’s CEO demanding that Google’s AI be used to benefit humanity and that it should not be used in “inhumane or extremely harmful ways”, including for “lethal autonomous weapons and mass surveillance”.

Social media platforms shape political discourse without democratic oversight. Ride-hailing apps like Uber, and delivery platforms such as Takealot and Amazon, have disrupted markets, including labour markets. And an already dire digital divide is exacerbated by unequal access to connectivity, devices and digital skills.

Last week, I wrote twice to Minister Malatsi on behalf of Article One, requesting an extension to the comment period for the draft AI policy. In my first letter I gave five reasons why an extension was both reasonable and necessary: 1) the complexity and technical nature of the policy; 2) the need for multi-stakeholder and interdisciplinary engagement with it; 3) the insufficient background information provided on the impact of AI; 4) the constitutional imperative for meaningful public participation; and 5) the significance of the institutional and regulatory reforms the policy proposes.

Days later, I was informed that a fictitious academic journal had been discovered in the policy’s reference list. On taking a closer look, I identified six sources that seemed to be fabricated, strongly suggesting that a large language model had been used to draft the policy.

When the minister’s office replied that it could not commit to a further extension for public participation, I wrote back, informing the minister of the AI hallucinations in the policy.

I wrote that the ministry and department “have a duty to provide accurate and correct information that does not mislead the public – especially in policy documents that are specifically aimed at addressing the issues of new and potentially disruptive technology that could be misused”.

“The department’s failure to independently verify these sources is already an indication of negligence and means that the public needs additional time to scrutinise the policy and independently verify all references, sources, and the policy as a whole. The misuse of AI in this context undermines public confidence in the state, public administration, and the policy as a whole.”

I also pointed out that the policy did not address specific harms and how they might be mitigated. For example, the question of jurisdiction over global tech corporations is not covered, despite this already being an issue faced by another South African regulatory body, the Information Regulator.

The Information Regulator has been involved in a legal struggle with Google and Meta, who both claim that South African law does not apply to them, despite operating in South Africa and managing the private data of millions of its citizens.

I requested that the minister provide copies of the 32 submissions received on the 2024 AI Policy Framework, including those made by Microsoft and Huawei, so that they can be studied alongside the policy to determine the level of influence corporations had over it.

We must insist on an AI policy that is grounded in our constitutional values, that is informed by broad public engagement, and that is capable of confronting global tech power and corporate dominance.

The lesson for us here is not only about the risks of AI, but about how we need to ensure that technological progress serves the public good rather than corporate profits.

Tyronne McCrindle is the Executive Director of Article One, a public benefit organisation working to build a culture of participatory governance in South Africa by holding government, corporations and powerful interests accountable.

Views expressed are not necessarily those of GroundUp.



© 2026 GroundUp. This article is published under the GroundUp Republication Licence Version 1.0. Email [email protected] to request permission to republish.