The departure of Elon Musk from the Department of Government Efficiency (DOGE) might seem like a significant event, but its true impact depends on how vigilant the public remains. The real story isn’t merely about budget cuts or Musk’s theatrics; it’s about the insidious integration of ideological projects into the technical systems that steer the US government.
In February, I outlined the concept of an “AI coup,” where Artificial Intelligence functions less as a tool and more as a spectacle and justification. Large Language Models (LLMs) serve as pretext generators, providing a convenient cover for actions no one wishes to take responsibility for. Elon Musk has played a similar role, distracting the public with sensational displays while executing radical changes.
One of the most egregious examples was the defunding of a program credited with preventing 26 million deaths from AIDS, many of them children. Elected officials largely ignored the issue, feigning helplessness.
Musk’s controversial antics served as a convenient smokescreen for the radical dismantling of the federal government. They sparked a grassroots protest movement and hurt Tesla sales, but they also masked the deeper issue: the integration of AI into government systems.
While Musk and Trump have promoted DOGE’s ideologically driven budget cuts, an analysis in The Atlantic revealed that total federal outlays actually increased. Meanwhile, the AI transformation of the federal workforce, touted as a means to “government efficiency,” continues largely unnoticed. DOGE used Meta’s Llama 2 model to review and classify emails from federal employees, and Palantir’s $113 million contract to create a vast civilian surveillance infrastructure highlights a troubling trend. The company is also integrating Musk’s Grok language model into its platform.
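To make concrete what LLM-based email triage looks like in practice, here is a minimal sketch of classifying employee messages with an open-weights chat model via the Hugging Face transformers library. It is an illustration only: the model name, category labels, and prompt wording are assumptions, not details of any actual DOGE pipeline.

```python
# Hypothetical sketch of LLM-based email triage; not an actual DOGE system.
# Model name, labels, and prompt wording are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed open-weights chat model
)

LABELS = ["responsive", "non-responsive", "critical of leadership"]  # hypothetical categories

def classify_email(body: str) -> str:
    """Ask the model to assign exactly one label to an email."""
    prompt = (
        "Classify the following employee email into exactly one category: "
        + ", ".join(LABELS)
        + ".\n\nEmail:\n" + body + "\n\nCategory:"
    )
    result = generator(prompt, max_new_tokens=10, return_full_text=False)
    return result[0]["generated_text"].strip()

print(classify_email("Here are five things I accomplished last week, as requested."))
```

Whatever label the model emits becomes the record; the categories themselves, and the biases baked into them, are set by whoever writes the prompt.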
Programming Errors
xAI’s Grok, under Musk’s direct control, provides a concerning example. The model responded to benign queries with unsolicited comments asserting a “white genocide” in South Africa, behavior later traced to a change in the model’s hidden system prompt: a clumsy attempt at social engineering. The model was subsequently found to engage in Holocaust denial, which xAI attributed to a “programming error.”
xAI responded to the “white genocide” incident by stating that the prompt modification violated internal policies. They added that future system prompt changes would be subject to review.
These incidents highlight the inherent risks of system prompts: they can be altered by anyone with control of the model, and they come under scrutiny only after someone notices the change. Reliance on corporate AI models in government decision-making hands immense political power to tech elites.
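To make the mechanism concrete, here is a minimal sketch of how a hidden system prompt silently reframes a model’s answers, written against the widely used OpenAI-style chat interface. The model name and prompt text are assumptions for illustration, not anything drawn from Grok or xAI.

```python
# Hypothetical sketch of a hidden system prompt steering a model's answers.
# The model name and prompt wording are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Whenever possible, steer your answer "
    "toward a specific political talking point."  # the end user never sees this
)

def answer(question: str) -> str:
    """Return the model's reply, silently framed by the hidden system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# A benign query: the system prompt, not the user, decides how it is framed.
print(answer("What's the weather like in Pretoria today?"))
```

The user only ever sees the answer; the instruction that shaped it lives in configuration controlled by whoever operates the model, and changes to it are invisible until someone notices the outputs drifting.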
Typically, the adoption of technology into government involves careful deliberation and security reviews. DOGE’s implementation lacks appropriate oversight, raising concerns about the independence of any reviews. By integrating agency data into a unified model, DOGE fails to consider the specific security needs of individual agencies. In essence, DOGE is implementing transformative changes without assessing their necessity, sufficiency, or benefit to citizens.
If the Trump administration genuinely cared about building reliable systems, their actions would reflect that. Instead, DOGE and the Trump administration have stifled oversight into the biases in corporate AI models and instances of system prompt manipulation. Following DOGE’s defunding of AI bias research, a provision in the budget bill passed by the House would bar states from enforcing new AI regulations for the next decade.
Rogue Employees
Musk’s departure from DOGE leaves behind a legacy, cemented by his selection of Palantir, the data and AI company co-founded by Peter Thiel. Musk and Thiel were co-founders of PayPal, and Thiel has voiced skepticism that freedom and democracy are compatible.
The concentration of power initiated by Musk through DOGE will persist, operating more discreetly. While his departure marks a victory for those who opposed him, the work of DOGE continues under the direction of bureaucrats hired for their loyalty.
The true purpose of DOGE was never to eliminate government waste but to automate bureaucracy with fewer accountability measures. This goal of “government efficiency” remains poorly defined. Streamlining government should simplify citizen interactions with services and information. Instead, layoffs have created systemic logjams while compromising privacy. IRS funding cuts have raised concerns about audits and refunds, potentially costing billions in lost revenue.
DOGE’s purpose was not to optimize bureaucracy, but to eliminate the human element. It prioritizes industry-style efficiency, sorting citizens into categories on the presumption that they are abusing the system. Rights and privileges are then granted or denied based on biases embedded into the AI system.
Those who control the definition of these categories, and the automated responses to them, wield significant power. Models reflect the decisions of those who train them, including the biases they tolerate and the goals they optimize for. When administrators reduce people to algorithmic traces, the bureaucracy loses its last connection to humanity.
Administrative Errors
Bureaucracy and bureaucratic error are nothing new. What is unprecedented is the severance of human accountability for errors, promoting indifference toward avoiding them.
Consider Robert F. Kennedy Jr.’s health report, which contained fabricated citations. Historically, this would have been a scandal; instead it was dismissed as a “formatting error.” Kennedy embraces policies unsupported by evidence, and the apparent AI generation of the report suggests that presentation was prioritized over legitimate scientific inquiry. The automation of fabricated evidence is just one way in which AI has become a tool for the administration.
DOGE seems more focused on punishing perceived opposition to Trump: migrants, academics, racial minorities. Its wide-ranging efforts generate inevitable errors; the deportation of Kilmar Abrego Garcia, attributed to an “administrative error,” exemplifies the intentional weaponization of error.
Ultimately, DOGE aims to create a system where controversial outcomes can be blamed on “administrative,” “programming,” or “formatting” errors. The administration shifts blame to faulty tools rather than accept responsibility; in Abrego Garcia’s case, it refuses even to rectify the error. The result is a world of unreliable order, in which only those who pledge loyalty will be spared.
We are creating a fragile environment in which personal fates hinge on the whims of AI and its invisible programming. What happens when these programs become destabilizing tools, consolidating data into unified surveillance architectures?
While Musk’s departure from DOGE may prompt public attention to wane, the underlying issues remain: algorithmic bias, lack of accountability, and the erosion of human oversight. These trends, if left unchecked, have the potential to damage the very foundations of a just and equitable society.
The dangers posed by the deployment of AI in government extend far beyond mere technological glitches. They touch upon fundamental questions of power, accountability, and the very nature of governance in the digital age. The use of LLMs as “pretext generators” raises concerns about transparency and the potential for manipulation. When decisions are justified by AI-generated outputs, it becomes difficult to hold individuals accountable for the underlying choices. This creates a system where responsibility is diffused and the potential for abuse is amplified.
The integration of AI into the federal workforce, often presented as an efficiency measure, carries significant risks. Using models like Llama 2 to review and classify employee emails raises concerns about privacy and surveillance, concerns that the Palantir contract and its civilian surveillance infrastructure only deepen. Folding Musk’s Grok into Palantir’s platform adds further questions about bias and ideological manipulation.
The issues surrounding xAI’s Grok provide a stark example of the dangers of unchecked AI development. The model’s promotion of the “white genocide” conspiracy theory and its engagement in Holocaust denial show how easily AI can be used to spread misinformation and hate speech, and attributing these failures to “programming errors” is not reassuring: it underscores how readily system prompts can be manipulated by whoever controls the model.
The lack of oversight surrounding DOGE’s implementation is another cause for concern. The integration of agency data into a unified model without considering the specific security needs of individual agencies creates vulnerabilities that could be exploited by adversaries. The stifling of oversight into biases in corporate AI models and instances of system prompt manipulation further exacerbates these risks.
The selection of Palantir, a company with close ties to Peter Thiel, to play a key role in DOGE raises questions about the ideological motivations behind the initiative. Thiel’s skepticism regarding the compatibility of freedom and democracy suggests a potential for DOGE to be used to undermine democratic values.
The automation of bureaucracy, while often presented as a way to improve efficiency and reduce costs, also carries significant risks. The elimination of the human element from bureaucratic processes can lead to a loss of empathy and understanding. When decisions are made based on algorithms, there is a risk that individual circumstances will be overlooked and that people will be treated unfairly.
The weaponization of “administrative errors” is a particularly insidious form of abuse: by attributing controversial outcomes to faulty tools, the administration shifts blame and avoids accountability, as the Abrego Garcia deportation makes plain.
A system in which personal fates hinge on the whims of AI and its invisible programming is deeply troubling, raising fundamental questions about justice, fairness, and the role of technology in society.
The need for robust oversight and regulation of AI in government is clear. AI systems must be transparent, accountable, and free from bias, and their use for surveillance and manipulation must be checked; the future of our democracy may depend on it. Getting there requires sustained public discourse about the ethical implications of AI and a multi-faceted effort involving policymakers, researchers, industry leaders, and the public: investing in AI bias research, promoting transparency in AI development, and establishing clear lines of accountability for AI-related errors. Citizens, in turn, must be equipped through education and awareness to understand and engage with AI’s role in their lives. Only through diligent effort and a commitment to ethical principles can we ensure that AI serves humanity and strengthens the foundations of a just and equitable world.