I did a talk for the Open University's final-year undergraduate computing and communications project students earlier this week. An edited transcript follows below.
Good evening, everyone.
I am going to be talking about algorithmic decision-making machines we are calling artificial intelligence, even though they are not intelligent in any meaningful sense of the word.
Some industry leaders are promising artificial general intelligence – AGI – will, magically, stimulate economic growth, rectify climate change, fix pretty much all the world’s problems, including curing cancer, and create a utopia.
One has said he believes regulations and environmental activists are the tools of the antichrist.
One talks of his pride in killing enemies.
Several warn a future superintelligent AI may wipe out humanity.
Whatever their predictions of the possible effects, all of these characters, hype and doom merchants alike, are basically telling the same story – that AGI is inevitable.
The hype of AI has captured hearts and minds at the highest levels of government and commerce.
Politicians are gripped by fear of missing out and want AI in everything.
CEOs are beginning to get impatient for a payoff on their investments.
AGI may or may not be inevitable. But there are serious geopolitical, environmental, societal, economic and technological questions about whether we should be pursuing it at all.
Meanwhile not enough attention is being paid to how algorithmic decision-making systems are being deployed in practice and with what consequences – in social welfare, border control, policing, security & intelligence, the military, employment, health and other contexts. Let’s consider some of these and think about whether the inevitability of AGI is the appropriate framing within which boardrooms and governments make decisions about our algorithmic future.
But… Before we get into any of that, a bit of history…
Meet John McCarthy. It was McCarthy who coined the term artificial intelligence when, with Claude Shannon, Marvin Minsky and Nathaniel Rochester, he convened the Dartmouth summer workshop on AI in 1956. McCarthy freely admitted choosing the label ‘artificial intelligence’ as a hook to attract funding. That’s right folks, AI began as a sales pitch. And so it pretty much continues to this day.
On the left is Joseph Weizenbaum, who created the first chatbot, ELIZA, in 1966. Named after the female leading character in George Bernard Shaw’s Pygmalion, ELIZA’s Doctor variation was a simple keyword-matching program that bounced rephrased user statements back at them, typically as questions.
On the right is a sample of the kind of responses ELIZA provided, as noted in the paper Weizenbaum published on the program in the Communications of the ACM journal, in January 1966.
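To give a flavour of how simple the mechanism was, here is a minimal sketch, in Python, of the kind of keyword matching and pronoun reflection the Doctor script relied on. The rules and templates below are illustrative inventions, not Weizenbaum’s actual script:

```python
import random
import re

# Swap first- and second-person words so a user's statement can be
# reflected back at them ("i am sad" -> "you are sad").
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your",
                 "am": "are", "you": "i", "your": "my", "are": "am"}

# Keyword pattern -> response templates; {0} is the reflected fragment.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]
DEFAULT_RESPONSES = ["Please go on.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    return " ".join(PRONOUN_SWAPS.get(word, word)
                    for word in fragment.lower().split())

def respond(statement: str) -> str:
    text = statement.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    # No keyword matched: fall back to a content-free prompt.
    return random.choice(DEFAULT_RESPONSES)

print(respond("I need my mother"))  # e.g. "Why do you need your mother?"
```

There is no understanding anywhere in that loop; just pattern matching and string substitution.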
Though ELIZA was designed simply to enable natural language communication with a computer, Weizenbaum was astonished and then disturbed that it elicited emotional responses from users, who assumed ELIZA was conscious and taking an interest in them.
This false attribution of human empathy, understanding and consciousness to machines became known as the ELIZA effect.
Weizenbaum considered it delusional thinking and was a critic of the blind faith in and adoption of technology generally. He also believed that using algorithms for tasks needing empathy & human judgment (e.g. therapy or welfare or criminal justice) was “obscene”.
I’m going to fast forward, now, through to the early 2010s. AI, machine learning, deep learning, neural networks had lost the power to dazzle benefactors and had spent quite a chunk of the interim in an “AI winter”. Funds had been hard to come by and the field was marginalised.
December 2012 was the inflection point.
Geoffrey Hinton published a paper with his then students Alex Krizhevsky and Ilya Sutskever, entitled “ImageNet Classification with Deep Convolutional Neural Networks”. They used GPUs rather than CPUs to create a deep learning image/pattern recognition system later called AlexNet, after its primary developer Krizhevsky. It spawned an almost instant change from AI winter to apparently unlimited venture capital funds for AI researchers and startup companies.
The underlying deep learning technology had not advanced that much since the 1990s, but there came a realisation that the computing power which had become available, along with the bottomless bounty of digital data available via the internet, created a huge increase in its possible practical applications.
Hinton, who has been called the godfather of modern AI, auctioned their company, DNN Research, the only asset of which was their deep learning paper. There were four bidders (Google, Microsoft, DeepMind and Baidu), with Google winning with an offer of $44 million.
Krizhevsky remains one of the most respected AI scholars. Sutskever left Google in 2015 and became a founder and chief scientist at OpenAI, where he oversaw the development of ChatGTP [sic]. Though regretting it shortly afterwards, he was one of the board members who ousted Sam Altman in 2023. He stepped down after Altman was reinstated. He founded Safe Superintelligence Inc (SSI Inc) in 2024 with the aim of developing superintelligent AI safely.
By 2012 unfettered, private sector mass surveillance & profiling, of a scale unthinkable before the turn of the century, plus addictive, attention-grabbing apps and social media algorithms, had long been established as the core business model of the internet. They were and are the profit engines of a concentrated set of giant, mostly US and Chinese, technology corporations like Google, Apple, Facebook/Meta, Amazon, Microsoft, Nvidia, Oracle, Tencent (WeChat etc), Alibaba, ByteDance and others.
Notable mention should also be made of the Taiwan Semiconductor Manufacturing Company Limited (TSMC or Taiwan Semiconductor), the world’s largest chip manufacturer, which has a near global monopoly on advanced AI chips and is a key supplier to Nvidia, which in turn supplies the hardware at the heart of most advanced AI systems.
The technology systems built and rolled out by those companies are used, extensively, by states in the whole gamut of government services from law enforcement, health and social welfare to border control, military, security and intelligence.
The most profitable firms in the world either monetise or otherwise exploit data through stalker advertising and profiling and/or provide software and hardware services and infrastructure to the economic & state actors who do.
A report in the past couple of weeks, for example, confirmed that the US Customs & Border Protection agency has been buying location data from the adtech and data broker industries to track people. The brutal Immigration & Customs Enforcement agency is also buying location data and is actively soliciting more, as well as more related surveillance technology.
Advertising technology (adtech) systems invisibly, invasively, relentlessly, secretly and at scale track activity on every internet-connected device.
Algorithms aggregate all those actions and classify them into profiles.
That makes it a national security issue, according to the Irish Council for Civil Liberties, because these systems, and the vast ecosystems of companies that run and are tapped into them, track and share everybody’s personal information. Including those at the highest levels of government, intelligence, security and military services.
Cameras, sometimes with integrated face recognition, watch us everywhere, in public and in parts of previously private spaces, such as homes. So, for example, GCHQ ran a scheme called Optic Nerve, which, in one six-month period, surreptitiously collected webcam images from 1.8 million Yahoo! account holders. In an effort to comply with the Human Rights Act 1998, operators were limited to collecting one image from each camera every 5 minutes.
Phone apps & networked computers track location and behaviour.
Thousands of economic and state actors collect and/or buy, process and attempt to derive benefits from the digital panopticon we have permitted them to build.
Systems built to collect data, profile us, target ads, grab & maintain our attention and fuel profits have become systems of disproportionate influence and control.
The quest for super intelligent AI that can outperform humans on every dimension is the latest manifestation of all this. The opportunity for those AI models to chomp on the entire digitised history of human creativity and personal data, generated through interaction with the internet, was too good to miss, now the computing power had become available.
And grab that material for training their giant AI models is exactly what the big AI companies have done, riding roughshod over intellectual property, creative moral rights, privacy and data protection rights, in the AI arms race. Hundreds of billions of dollars have been and are being invested in AI companies, models and the massive, environmentally corrosive data centres that run them, leading to further intensification of the data gathering surveillance.
It’s not just corporations and governments btw who have been collaborating in building our mass surveillance society. We have, enthusiastically, invested in and deployed these devices, gadgets and systems and the frictionless convenience and access to information, entertainment and community they offer.
And when some of the companies involved get negative publicity for pushing the boundaries of invasive behaviour, we collude in excusing them; or just shrug indifferently, if we notice at all.
In one of the gentler examples of this, in 2024 Microsoft discovered state-backed hackers from China, Russia and Iran using Microsoft AI for cyber attacks. Security experts assessed the reports and evidence and concluded that Microsoft must have been engaged in large-scale, surreptitious spying on their AI users in order to be able to detect these hackers.
Defenders of Microsoft suggested it was “not fair” to call the company’s actions “spying.” They were just the good guys and they found the bad guys. Besides, the small print in the terms of service says, explicitly, that Microsoft monitors users; and all users agreed to the ToS, so had no justification for complaining.
The rest of us pled indifference or simply didn’t pay any attention.
Our expectations of privacy have dropped enormously.
A bit like what fisheries scientists call a shifting baseline. Basically, radical declines in ecosystems or fish species over long periods of time, caused by intensive overfishing, went unnoticed. The declines were evaluated by experts who used the state of the fishery at the start of their careers as the baseline, thereby missing the longer-term trends.
What seems “normal” or “natural” is whatever we experience as children, at the start of our careers or in particular contexts or circumstances; and we evaluate everything from that baseline.
Surveillance is easier & more pervasive than it has ever been – commerce & governments expand & consume it voraciously.
Not long ago the ability to track people’s daily movements, by knowing their employer, home address or place of worship, was considered a dangerous power, only available, via a warrant, to specific government agencies. Now pretty much anyone can access this capability.
In the last century practically the only people who provided their fingerprints were suspects detained in police stations. Now kids need to press a fingerprint reader to register or get access to lunch, lockers or books in school; or unlock their phones. Children have been conditioned to find it normal to provide their fingerprints for routine daily tasks.
Sadly my call to arms to teenagers to opt out of their school fingerprint systems, as is their right under section 26(5) of the Protection of Freedoms Act 2012, went unheeded.
After over two decades of the deployment of biometric collection systems in schools, the UK government finally released some guidelines on their use in July 2022. They are short and not particularly enlightening.
Privacy has been disappearing faster than fish stocks, without us paying attention and the AI industry is supercharging that process.
In November 2022 the release of ChatGTP [sic] [thanks to Cory Doctorow for pointing out the error :-). I checked the automated transcript and it looks like I repeated it throughout the talk] rocked the world. Cue another stampede of venture capital and big tech investment and other economic actors wanting in at the apparently brightening dawn of a new AI age.
Now the arrangement of these investments is hugely complicated… so a large chunk of the $135 billion, 27% stake Microsoft has in OpenAI, for example, is in Microsoft’s provision of Azure cloud services to train and run OpenAI models.
There is massive cross-fertilisation, with many of the investors hedging their bets by carving out a piece of many of the big AI companies, in the hope of being there for the payoff, when/if one of them reaches the AGI pot at the end of the rainbow.
Depending on which list you consult, the biggest companies include Nvidia, Microsoft, Apple, Alphabet (Google), OpenAI, Meta (Facebook), Tesla, Oracle, Anthropic, Amazon, Palantir, IBM, xAI, Anduril Industries and Databricks.
Btw you don’t need to worry about most of these guys on the slide if Armageddon does come to pass. Many of them are building giant, luxury, underground bunker complexes that they and their chosen friends, family and acolytes get to ride it out in. I believe Mr Zuckerberg has chosen Hawaii for a 1,400-acre, $270 million compound. Planning documents suggest it will include a 5,000-square-foot underground shelter, with its own energy and food supplies.
I’ve been talking for a while and haven’t yet even addressed the question of what AI actually is. Partly because it is a fraught and complicated one and partly because it remains the catch-all sales pitch that John McCarthy conceived it as. AI appears to be everywhere and in everything these days.
Think about how we’d be able to talk about planes, trains, automobiles, bikes, boats, rockets, even walking, if we didn’t have words for them other than “vehicle”.
There’s a media frenzy about a new vehicle that breaks all previous speed records. It happens to be a rocket but the public don’t know that. They want their car, bike etc to be able to go as fast.
That’s pretty much where we are with AI. Just replace the word “vehicle” in that scenario with “artificial intelligence”.
All software tools are being labelled AI and people, in public debates, talk past each other when discussing it, because of the paucity of understanding of precisely what kind of technology they are talking about.
AI covers a huge range of different technologies, some of them having nothing to do with AI, some useful e.g. spell checkers, some revolutionary e.g. image processing and breakthrough tech on protein folding, some harmful e.g. mass surveillance tech.
Some are deployed in inappropriate contexts and cause harm, like Elon Musk’s Grok chatbot generating an estimated 3 million sexualised images, including 23,000 of children, over an 11-day period at the end of 2025 and the beginning of this year.
Yet many of these tools have the potential to have positive effects. What matters is how we design and use them, who controls them and for whose benefit.
Basically then, AI is an umbrella term for a loosely related range of technologies –
LLMs like ChatGTP [sic] have nothing in common with the software social welfare departments use to evaluate whether someone should be accused of benefit fraud, beyond the AI label. How they work, what they are used for, by whom, against whom and how they fail are hugely different.
Generative AI, GenAI – e.g. Stable Diffusion, Claude, ChatGTP [sic], DALL-E – can generate apparently realistic content quickly. Progress is impressive but it is unreliable & prone to misuse and to generating misinformation
Predictive AI – deployed by governments in policing, border control, social welfare, the military – is where much of the harm happens. It is widely used & being expanded & sold as a “solution” to problems but does not and, often, cannot work. It is hard to predict the future and AI does NOT change that.
Artificial general intelligence (AGI, not to be confused with GenAI) is the holy grail being chased by the AI industry. This is an advanced, sentient AI that can understand, learn, and apply knowledge across a vast, diverse range of domains and that matches or surpasses human capabilities, in all these areas.
A further extension of AGI is artificial super intelligence (ASI) that will evolve from AGI to surpass human abilities by a significant margin.
AI, at its heart, is a probability machine – it finds patterns in data, re-mixes, regurgitates – predictive text/images/code on steroids. It is NOT intelligent or sentient or empathetic or conscious.
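To make “probability machine” concrete, here is a toy sketch, in Python, of the statistical idea underneath: count which word follows which in some text, then generate by repeatedly sampling a likely next word. LLMs do this over tokens, with billions of learned parameters rather than a lookup table, but the core move – predict the next item from patterns in past data – is the same. The corpus here is obviously a made-up miniature:

```python
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word (a bigram table).
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        counts = following[word]
        if not counts:
            break
        # Sample the next word in proportion to how often it followed.
        choices, weights = zip(*counts.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```

Nothing in that table knows what a cat is; it only knows which strings tend to follow which.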
Discriminatory data and algorithms have been and are being hardwired into predictive AI systems. The oppression, discrimination, violence that have blighted the lives of marginalised communities, throughout history, is getting baked into AI systems, perceived, wrongly, to be “intelligent”, neutral, decision-making machines. With this, institutional discrimination and prejudice will become harder than ever to combat.
People not directly affected can get an idea of the real-life consequences for victims of institutional aggression, based on rogue IT systems, from stories like the Post Office Horizon scandal.
Bottom line, we should be less worried about future utopia (even an exclusive billionaires’ members only utopia) or Armageddon and more concerned with the harmful effects of these technologies right now and in the recent past, particularly those under the control of powerful concentrated economic and political forces that can, reasonably objectively, be described as malevolent.
Too often the harms suffered by the marginalised are dismissed, by those with vested interests, as inconvenient hurdles to “progress”.
Let’s take a look at just some of the harms that are being perpetrated by people deploying these technologies in the real world.
I’d like to look at examples from some of the areas on the slide.
Dr Joy Buolamwini downloaded some face recognition software to help with an art project when she was at MIT and discovered it could not register her face unless she put a white mask on. I’ve put a video of her describing this experience in the slide and recommend you watch it when you get the chance.
The first independent study of South Wales Police’s use of face recognition, at a Champions League football match and a Six Nations rugby game, was published by Cardiff University researchers in 2018. At the football the system made correct identifications in 3% (yes, I said three per cent) of cases (94 people out of 2,710 scanned) and falsely identified people in 72% of cases (1,962 people). By August 2020 the Court of Appeal had ruled that the force’s use of live face recognition was unlawful.
Advocates, vendors and every techbro who sells face recognition gadgets will tell you the technology is better now.
Some even claim that the encoded bias has been fixed. (See response to Prof Fry's question at 55m 52s).
It remains racially biased and is considered a significant threat to human rights by many, including Buolamwini. A visceral example was when the head of Iran’s agency for enforcing morality laws announced, in September 2022, that face recognition tech would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.”
Face recognition has become popular with authoritarian regimes as a tool to suppress dissent. It is also very popular with the UK government who in December last year pledged to “ramp up facial recognition” and “better equip the police”.
In her 2023 book, Buolamwini documents the evidence of discrimination hardwired and coded into technology like face recognition. It is discriminatory on multiple dimensions, including race, gender, disability and religion.
AI mirrors the bias and assumptions built into its training data, which, in turn, show the bias and assumptions of the societies that data was produced by and about. It was trained on data shaped by generations of policy decisions, societal norms & attitudes and security doctrines that framed ethnic minority or disabled or migrant or female or gender-nonconforming or Muslim or Jewish or other religious or nomadic identities as inherently inferior, less valued or just plain suspicious. It should not be a surprise it exhibits bias.
This is the brilliant Dr Timnit Gebru. At the AI conference in 2015 where Sam Altman and Elon Musk launched OpenAI, she was one of only five black people among thousands of attendees.
Nascent AI systems were already having an influence that caused her deep concern – deciding credit scores, whether to grant mortgages or access to housing, flagging suspects for police, helping judges to decide sentences or whether to grant bail.
A system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) was and is being used by judges and parole officers to make decisions about bail, sentencing and parole.
COMPAS attributed high re-offending risk scores to black defendants and lower scores to white counterparts. A 2016 investigation by ProPublica showed COMPAS was more than twice as likely to be wrong about black defendants as about white ones.
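The kind of disparity ProPublica measured is straightforward to express in code. A minimal sketch, using invented illustrative records rather than ProPublica’s actual data, computing each group’s false positive rate (the share of people who did not reoffend but were nevertheless scored high risk):

```python
# Illustrative records: (group, scored_high_risk, actually_reoffended).
# These numbers are invented for illustration, not ProPublica's dataset.
records = [
    ("black", True,  False), ("black", True,  False), ("black", True, True),
    ("black", False, False), ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    # Among people in `group` who did NOT reoffend, how many were
    # nevertheless flagged as high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, f"{false_positive_rate(group):.0%}")
# With these toy numbers: black 67%, white 33% - the same scoring
# system produces very different error rates across groups.
```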
Given COMPAS and the other discriminatory AI systems she was studying – predictive policing systems trained on historic data, which led police to target already over-policed minority communities; Google’s and Microsoft’s gender-biased translation software, which assumed certain professions like doctor or engineer were male; the Google computer vision algorithm that classified black people as apes – Gebru was furious.
As she put it, “A white tech tycoon, born and raised in South Africa during apartheid, along with an all-white, all-male set of investors and researchers” were launching OpenAI allegedly to, as they claimed, stop AI “taking over the world”.
It was clear to Gebru that the narrative generated, to focus media, political and public attention on imagined future Arnold Schwarzenegger-like Terminators destroying the world, was intended to distract from the need to address the serious problems that AI systems were already causing.
Working out why AI systems churn out biased information is difficult. Some experts claim it is impossible to tweak models to fix bias, once embedded, because they are so complex even their creators don’t understand them. Gebru had an idea on how to do a more effective job of filtering out bias while the systems are being built.
She proposed that every AI training dataset be documented in detail, just as every component in the electronics industry is. Every dataset should be “accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses” – details of how it was created, what was in it, what its purpose was to be and what limitations it might have – making the design transparent, reproducible and independently testable. It didn’t catch on.
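The proposal is easy to operationalise. A minimal sketch of what a machine-readable datasheet might look like; the field categories follow the ones Gebru listed, but this particular structure is my illustration, not any standard schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    # Field categories follow Gebru et al.'s "Datasheets for Datasets";
    # the structure and example values here are illustrative only.
    name: str
    motivation: str          # why the dataset was created, and by whom
    composition: str         # what is in it; what populations it covers
    collection_process: str  # how, when and with what consent it was gathered
    recommended_uses: str    # tasks it is suited to
    known_limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    name="example-faces-v1",
    motivation="Benchmark face detection accuracy across skin tones.",
    composition="10,000 images; demographic balance documented per split.",
    collection_process="Public figures' photos; licences recorded per image.",
    recommended_uses="Research on detection accuracy, not identification.",
    known_limitations=["Under-represents over-70s", "Studio lighting only"],
)

# Ship the datasheet alongside the data, so the design is transparent,
# reproducible and independently testable.
print(json.dumps(asdict(sheet), indent=2))
```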
Big AI firms don’t do transparency, much though they have occasionally claimed otherwise. OpenAI’s standard excuse is that they don’t want to make the details available to bad guys who might misuse them.
Gebru believes we need regulation that imposes transparency on AI companies; that it should always be clear when we use AI systems that they are machines, not sentient, empathic beings; and that the organisations behind them should be required to document and publicly release training data and model architectures.
In 2018 Gebru joined Margaret Mitchell at Google to co-lead the AI ethics research team. Mitchell was a computational linguistics specialist known and respected for her work on fairness in machine learning. She was frustrated that her repeated internal warnings, of the potential problems the AI systems Google were developing would cause, were falling on deaf ears. Worse, often she would get memos from HR telling her to stop making waves and be more collaborative. By 2020 both Mitchell and Gebru became depressed that they could not get through to management about the risks of LLMs.
In 2021, Gebru, Mitchell and a couple of other brilliant computational linguistics experts, Emily M. Bender and Angelina McMillan-Major, who shared their concerns, wrote a seminal paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Bender was particularly concerned that even linguistics and AI specialists, who should know better, because they know LLMs are coded mathematical probability machines, were saying, publicly, that LLMs were displaying understanding and consciousness.
The stochastic parrots paper ran to 14 pages and summarised the evidence that LLMs were amplifying bias, that the companies building them were becoming more and more secretive about their details and that the models under-represented all languages other than English. Mitchell had herself credited as Shmargaret Shmitchell on the paper to emphasise the story it tells.
Gebru and Mitchell went through all Google’s internal processes to get approval for the paper and additionally sent it to more than 20 reviewers they trusted inside and outside the company, to get comprehensive pre-publication feedback.
About a month after submitting it, Gebru and Mitchell were summoned to a meeting with Google executives and told to retract the paper because it was too negative about LLMs. It should be more positive. They didn’t want it associated with Google and anyway Google’s LLMs were “engineered to avoid” the problems they described.
Long story short, Gebru then got sacked (though Google said she resigned) and Mitchell was also fired a few months later, reportedly, when attempting to gather evidence of the company’s discrimination against Gebru and sharing it with external parties.
Their departures, particularly Gebru’s, caused a scandal. There were more than a few articles in the mainstream as well as the technology press. Some even wondered whether Google were as committed to equality, diversity, inclusion and fairness as they liked to claim, especially after sacking their two AI ethics leads in quick succession just because they had raised concerns about the potential risks of LLMs. It also drew significantly more attention to the paper than it might otherwise have received.
Gebru now runs the Distributed Artificial Intelligence Research Institute (DAIR) which she founded after leaving Google. She was named one of the 10 greatest minds in tech by Spanish newspaper El Pais (El Pie-eezz) in December 2025.
Mitchell is a researcher and Chief Ethics Scientist at Hugging Face.
Time is pressing on, so I’m going to fast forward through a few of the following slides, but before I do, English residents should note that the Health Secretary is a big fan of AI and is proposing a few developments in the NHS that everyone really should be more aware of.
The core takeaway from his 10-year plan is that doctor-patient confidentiality is to be a thing of the past.
AI, for example, will record, transcribe and send to a central database, likely brokered by Palantir, every word a patient and their doctor share.
I highly recommend signing up to medConfidential’s newsletter if you would like to know more about this.
According to the UN, the first known use of what they called “lethal autonomous weapons systems such as the STM Kargu-2”, a Turkish manufactured drone shown in the picture, was in the Libyan civil war early in 2020.
The drones “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability”.
Since 2018, UN Secretary-General António Guterres has maintained that lethal autonomous weapons systems are politically unacceptable and morally repugnant and has recommended these systems be banned, under international law.
In December 2024 the UN General Assembly adopted resolution 79/239, basically saying that international law should apply to the use of AI by the military. Given the absence of respect for international law demonstrated by major powers in the past few years, the resolution, in practice, carries little weight.
A UK MoD policy paper on AI in defence, published in 2022, says that real time human supervision of such systems “may act as an unnecessary and inappropriate constraint on operational performance”.
At the time the US, Russia, India, Australia agreed. Brazil, South Africa, New Zealand, Switzerland, France & Germany wanted bans or restrictions.
In August 2023, the US announced “the Replicator Initiative” to produce large numbers of cheap autonomous weapons. The intent, according to a January 2026 briefing for Congress, was “to deploy uncrewed systems en masse, allowing the U.S. military to disperse combat power over a large number of relatively inexpensive systems.” The first contracts for autonomous boats, drones and anti-drone weapons were issued in May 2024.
A House of Lords AI in Weapons Systems Committee report in 2024 “recommended that the government proceeds with caution on the development and use of artificial intelligence in weapon systems.”
The 26 page response by the government essentially said they’d think about the recommendations but weren’t prepared to be held back when the big bad guys out there were developing this stuff too.
By 2025 the Minister for the Armed Forces, Luke Pollard, had declared that “compliance with international humanitarian law is absolutely essential”.
The government’s 2025 Strategic Defence Review, however, stated that “The UK’s competitors are unlikely to adhere to common ethical standards” in developing such weapons. It was silent on whether the UK would do so. You may draw your own conclusions on that.
128 countries at the UN weapons convention have not been able to agree to limit or ban the development of lethal autonomous weapons. They have pledged, instead, to continue and “intensify” discussions.
Two weeks ago, a researcher at Cambridge University’s AI Safety Hub released a paper on what he described as military AI agents. He noted they have six structural failure modes that mean, once set up in autonomous weapon mode, there is little or no meaningful human control of them in live military or battlefield contexts.
However well tested (or not) these AI agents might be under laboratory conditions in research & development labs, deploying them in the wild creates profound challenges for safety, security, reliability, trustworthiness and control. Not to mention threats to life.
New Scientist reported, a week earlier, that a researcher at King’s College London discovered that advanced AI models, in war game simulations, escalated the conflict to nuclear strikes, in 95% of cases.
They never agreed to surrender, no matter how badly they were losing; and they made significant mistakes, leading to escalation and unintended casualties in 86% of cases. The simulations were run with GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash.
Major powers are already using AI in war gaming. We do not know to what extent they are incorporating AI decision support into live military decision-making processes.
That leads us on to one of the big AI companies, Anthropic, whose CEO Dario Amodei has stated he believes “deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat… autocratic adversaries.” Going on to say:
“Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”
Anthropic’s Claude AI is integrated into Palantir software. Palantir’s “Maven Smart System” is variously described, in press reports, as powered by Anthropic’s Claude and used to identify, profile, prioritise and select targets for military strikes, in real time.
Palantir’s UK head, Louis Mosley, has said it enables “faster, more efficient and ultimately more lethal decisions where that’s appropriate”.
Anthropic has two restrictions on the use of their technology –
1. That it should not be used for the mass surveillance of Americans (people in the rest of the world are fair game but not Americans)
2. That it should not be used in autonomous weapons systems
Anthropic asked Palantir whether its AI had been used in the bombing of Venezuela and the rendition of the Venezuelan president into US custody. Claude did, apparently, play a key role.
To make a long story short, that has led to a falling out with the Trump administration, in turn causing Anthropic’s contract to be cancelled and the company designated a supply chain risk.
It is the first time in history such a designation has been applied to a US business. Anthropic are suing the government, while continuing to fulfil their contractual duties during the six-month phase-out period Mr Trump has announced; and Claude is being used, by the US military, in their attacks on Iran.
We don’t yet know if Claude / Smart Maven was used in targeting the Minab school, killing over 170 people, most of them children, on the first day of the bombings. If it was, you may wish to consider whether Anthropic & Palantir are responsible for a war crime.
The US government has offered Anthropic’s contract to OpenAI, which Sam Altman has, patriotically, accepted, agreeing not to impose any woke, unreasonable Anthropic-like restrictions. The Defense/War department’s chief technology officer, Emil Michael, described as a “whoa moment” the realisation that Claude was the only AI model authorized in classified settings.
OpenAI are now inside the US classified tent, as is Elon Musk’s xAI and there are ongoing negotiations to include Google too. Michael said he’s not biased. He wants all of them because he needs redundancy.
Whatever concerns any of us might have about AI in weapons, as Alan Z. Rozenshtein, a law professor at the University of Minnesota, has said, the rules governing military AI should not be set through ad hoc negotiations between government officials and individual companies, with no democratic input, no regulatory constraints, and no framework that survives the next change of government.
Dare I add an addendum to Prof Rozenshtein’s concerns, that neither should they be set by the next change of mind of a volatile resident of the Oval Office.
What is not widely known is that a few weeks prior to this dispute, Anthropic’s safeguards research team lead Mrinank Sharma, left the company on the 9th of February, citing serious ethical concerns; or as he put it in his leaving letter to colleagues: “throughout my time here, I’ve repeatedly seen how hard it is to let our values truly govern our actions.” He’s got an NDA so cannot be more specific about his reasons. But when asked about where he thought we would be with AI safety next year, he posted a gif of the everything’s fine dog.
Here’s some images of the Anthropic dispute with the Pentagon I asked AI to produce. ChatGTP [sic] didn’t do too badly. Not sure what kind of AI juice DeepAI was drinking.
When OpenAI agreed to steal Anthropic’s War Department lunch, without restrictions, their robotics chief resigned, declaring the same red lines as Anthropic. She said, on Twitter: “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
Silicon Valley has come a long way in the past decade on its attitudes to doing business with the military. Having been reluctant in the 1990s and early 2000s to engage in such business, at least overtly, despite the industry’s origins in the military industrial complex, the more aggressive big tech moguls are now actively touting for the business of the traditional big defence companies, while labelling the incumbents dinosaurs.
In 2024 Google sacked over 50 people for protesting about the company’s joint involvement, with Amazon Web Services, in the $1.2bn Project Nimbus cloud computing contract with the government of Israel, to be used in finance, healthcare, transportation and education, as well as by the military.
In April 2024 Google CEO, Sundar Pichai, sent a memo to all employees, praising their state-of-the-art AI models and research, their leadership in building and deploying responsible and safe AI, the exciting opportunities they have in AI. He concluded with a sting in the tail – a warning that Google is a business and not a place for employee political opinions or protests.
Former Google chief, Eric Schmidt, served as chairman of the U.S. Defense Innovation Board from 2016 to 2020 and led the National Security Commission on AI from 2018 to 2021.
In 2025 a UN report listed 122 companies as profiting from genocide, including Google, Amazon, IBM, Microsoft and Palantir. The author of the report was, subsequently, subjected to US sanctions.
Judges and prosecutors at the International Criminal Court have also been sanctioned by the US, following their indictment of Israel’s prime minister, Benjamin Netanyahu for war crimes and crimes against humanity.
Their finances have been frozen and they can’t use a credit card, book a hotel room or so much as buy a cup of coffee. EU member state governments have the legal power and the duty to protect their own citizens, under something called the EU Blocking Statute, but have chosen not to do so, for fear of how the US might respond. Likewise financial institutions and other economic actors.
I wonder what President Eisenhower, who coined the term, would make of the modern-day big tech military-industrial complex?
Israel’s version of the Palantir/Anthropic “Maven Smart System” is what press reports describe as AI called “Lavender”. Lavender was used to generate kill lists of tens of thousands of suspected members of Hamas. These targets’ homes were then bombed, at night, when they were believed to be asleep with their families.
There was little meaningful human oversight, according to six of the Israeli intelligence officers who operated it. Operators were no more than rubber stamps for the AI output, devoting about 20 seconds to each target before authorizing the bombing.
“Dumb” rather than so-called “smart” “precision” bombs were used for suspected junior militants. These cause greater numbers of casualties. One of the intelligence officers reportedly told a +972 magazine journalist that “You don’t want to waste expensive bombs on unimportant people”.
A policy decision was made that it was acceptable to kill up to 20 innocent civilians as well as the Hamas suspect, in the case of low-ranking targets.
The killing of more than a hundred innocents was authorised on several occasions, when the target was a suspected senior Hamas official. Sometimes homes were bombed when the suspect was not there because the information on their whereabouts was not verified in real time.
This was, essentially, a decision-making algorithm, targeting people, their families and neighbours for assassination, embedded in a military institution; where organisational diktats determined that the killing of up to 20, or occasionally 100 or more, innocent people was an acceptable measure in the process of eliminating suspected members of Hamas.
Tolerances for civilian deaths expanded as a matter of organisational, bureaucratic process. When you have a machine that can identify targets, accurately or not, there will always be demands to ID more targets. And if someone in the chain of command thinks those targets are not being dealt with efficiently enough, the pressure to relax the conditions under which they can be eliminated will ramp up.
International humanitarian law prohibits all of this but has no meaning or force for the people of Gaza.
IHL prohibits any attacks directly targeting civilians or civilian objects. They may be incidentally affected by attacks against lawful targets but only if it is proportional; and the attacker must take all feasible precautionary measures to avoid incidental effects on civilians. This includes any foreseeable knock-on, indirect adverse effects on civilians’ life and health such as exposure to toxic chemicals; and the prohibition of the use of starvation as a weapon of war.
By December 2023, Lavender had reportedly identified around 37,000 Palestinians as suspected members of Hamas, operating with an accepted error rate of roughly 10 per cent – which would mean some 3,700 people wrongly flagged.
It is important to understand that AI systems don’t stand alone.
They are never just the technology.
They are always deployed in sociotechnical, organisational, political, social, economic, environmental and, in the case of military systems, geopolitical contexts, with all of the structural flaws and prejudices that come with people and institutions.
And the failure modes of these machines, and the sociotechnical systems they form part of, multiply exponentially, when they run in the real world. In military, health, border control, policing, criminal justice and welfare organisations – which are, themselves, forms of artificial intelligence – these failure modes can be life altering.
Former Greek finance minister, Yanis Varoufakis, spoke to someone who recently left their job at Palantir. He told Varoufakis the “Gaza event”, as he described it, was “very exciting” for technologists. He said what’s happening in Gaza is terrible BUT it was fantastic for Palantir.
He pointed at Varoufakis’s phone and said it was useless to him, at that moment, because it was lying on the table not being used. It’s only when you move with it and interact with it, that it produces data that Palantir can acquire to train their algorithms.
When you bomb people in highly densely populated areas, they panic and do a lot of things that can be tracked – they run around, try to escape, move and are moved, forcibly, from one place to another, they make calls, send messages, try to find loved ones, rush to hospitals, if there are any, and to other places for help.
All that activity was great for training Palantir’s algorithms. And Palantir, having trained their algorithms using cloud services from AWS, Oracle etc., could develop AI tools to sell to, say, the UK NHS, MoD or police forces, with whom they have a combined collection of contracts worth nearly £700 million. So hospitals, for example, can use the Palantir software for managing personnel inside the hospital during emergencies and crises.
The suffering of the Palestinian people in Gaza continues and is intensifying again. The limited amount of humanitarian aid that was being allowed in has been severely reduced, under cover of Israel’s and the US’s war with Iran. On Sunday evening a large wall collapsed onto displaced people’s tents, killing dozens and there are hundreds more injured or missing under the rubble.
We don’t need to wait for some future imaginary, malevolent AI to wipe us out. People in conflict zones all over the world are experiencing dystopian Armageddon right now.
Iran is in the news. Ukraine gets some attention.
Mostly, we are barely aware of their plight.
And by the way, just briefly getting back to autonomous weapons, developments in drone swarms in Ukraine are bringing us closer to the fictional slaughterbots that Professor Stuart Russell at UC Berkeley was warning of back in 2017.
So where does all this leave us?
We have hundreds of billions of dollars invested in the race for artificial general intelligence. So far that has been consolidating control of the communications infrastructure that trains and runs the big AI models in the hands of a small number of big tech companies – the usual suspects, like Google, Amazon and Microsoft, as well as a few I mentioned earlier.
Politicians and board rooms have bought into the hype that they have to embed AI in everything. Though, interestingly enough, a survey of 6,000 CEOs, published within the past few weeks, shows they’re getting impatient that investments in AI have shown little return. More than 80% of them said they saw no impact on employment or productivity.
Little or no attention is being paid to the harm that AI tools have been enabling or to mitigating or eliminating that harm.
Cards on the table, I believe the focus on developing AGI is wrongheaded and it is certainly not inevitable as many would have you believe.
It’s a distraction from the good we could be doing with this technology and from fixing the damage we are already causing with it. Whether it is even possible to develop a machine that is better than everyone at everything is largely irrelevant. The mere quest for it is leaving a trail of harm in its wake, some of which we have covered this evening.
We would be far better off investing even a fraction of those hundreds of billions in smaller, specialised AI models, trained with carefully curated, scientifically sound datasets, focussing on solving specialised problems.
The quest for AGI does benefit:
· Investors & big companies developing and selling technology
· The companies grabbing and laundering personal data and creative work of others at enormous scale
· Big economic actors who control the communications infrastructure of AI
· Those trying to replace government services, in health, welfare, policing, criminal justice, defence and other areas, with cheap, automated systems, even when, more often than not, those systems are causing serious pain, suffering and distress
AI will do what we make it capable of doing.
Many of the AI models now in existence are impressively capable, in the hands of people who can draw empirically sound, accurate and enlightening outputs from them. Too often they are used by people who don’t understand them, including researchers. Basic errors in research papers are shockingly frequent, especially when the authors are using off-the-shelf AI tools and are not trained computer scientists.
Reviews have determined that the majority of machine learning research carried out in areas like health, social science and politics is flawed.
When used in life changing areas like policing or immigration, by institutions who don’t understand them, the consequences can be severe or even deadly. It is actively inhumane to embed these black boxes in immigration, welfare, policing & criminal justice, border control, conflict, military, health or employment contexts, as “normal technology”.
It can help to teach, solve problems, big and small, illuminate, inspire and possibly even fulfil some of the hype preached by its high priests but only to the extent that WE are prepared to develop and use these technologies to those ends.
Otherwise, they will be giant, networked, resource draining, socially and environmentally damaging data centres of wires, chips, lights and black boxes, controlled by an increasingly concentrated group of intensely wealthy and powerful economic actors, for their own ends.
We won’t be dealing with Armageddon or Utopia but a dystopia even the combined imaginations of Orwell, Huxley and Kafka didn’t anticipate.
I’ll leave you with an extract from Joseph Weizenbaum’s 1976 book, Computer Power and Human Reason: From Judgment to Calculation.
The question is not whether computers can make decisions in the context of human welfare but whether they should. Since we can’t make computers wise, we should not give them tasks that demand wisdom.
Selected sources and further reading
The AI Now Institute
The Open Rights Group
The Distributed Artificial Intelligence Research Institute (DAIR)
medConfidential
Amnesty International
Human Rights Watch
The Hind Rajab Foundation
B'Tselem
Foxglove
Big Brother Watch
Privacy International
United Nations Human Rights Office of the High Commissioner
Electronic Frontier Foundation
European Digital Rights (EDRi)
Electronic Privacy Information Center (EPIC)
United Nations Relief and Works Agency for Palestinian Refugees in the Near East (UNRWA)
The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender and Alex Hanna
Data Grab: The New Colonialism of Big Tech (and How to Fight Back) by Ulises A. Mejias & Nick Couldry
Why Privacy Matters by Neil Richards
Systems Thinking in the Public Sector by John Seddon
Underground Empire by Henry Farrell & Abraham Newman
The Palestine Laboratory by Antony Loewenstein
Atlas of AI by Kate Crawford
Native: Race & Class in the Ruins of Empire by Akala
AI Snake Oil by Arvind Narayanan & Sayash Kapoor
Your Face Belongs to Us by Kashmir Hill
IBM and the Holocaust by Edwin Black
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence by Petra Molnar
Race After Technology by Ruha Benjamin
Privacy's Blueprint by Woodrow Hartzog
Supremacy: AI, ChatGPT and the race that will change the world by Parmy Olson
Protecting Children in Armed Conflict by Shaheed Fatima KC
Ghost Work by Mary L Gray & Siddharth Suri
Computer Power and Human Reason by Joseph Weizenbaum
The Demon-Haunted World: Science as a Candle in the Dark by Carl Sagan
Drone Theory by Gregoire Chamayou
The Rise of Big Data Policing by Andrew Guthrie Ferguson
Anatomy of a Genocide: Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 by Francesca Albanese
From Economy of Occupation to Economy of Genocide: Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 by Francesca Albanese
Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance and Targeting by Heidy Khlaaf, Sarah Myers West & Meredith Whittaker
Joint Legal Opinion for the Open Rights Group - The Use of Artificial Intelligence Tools by Government: A Case Study of the Home Office's Asylum Practice by Robin Allen KC, Dee Masters & Joshua Jackson
The Dawn of the AI Drone by C.J. Chivers, New York Times
Congress - Not the Pentagon or Anthropic - Should Set Military AI Rules by Alan Z Rozenshtein
UK Government announcement of £1.2 billion strategic defence partnership with Palantir
Ministry of Defence: Palantir Contracts - Parliamentary debate, 10 February 2026, Hansard
Human oversight of autonomous weapons doesn't mean much when the aim is to maximise destruction. Lucy Suchman
Human Machine Autonomies by Lucy Suchman & Jutta Weber
EU General Data Protection Regulation (GDPR)
EU ePrivacy Directive
EU AI Act
EU Digital Omnibus - proposal to amend/circumvent GDPR, E-Privacy Directive and EU AI Act
Joint statement on the Proposal for a Regulation as regards the simplification of the digital legislative framework (Digital Omnibus) by the European Data Protection Board (EDPB) & the European Data Protection Supervisor (EDPS)
Europe's Hidden Security Crisis by Johnny Ryan and Wolfi Christl for Irish Council for Civil Liberties
America's Hidden Security Crisis by Johnny Ryan and Wolfi Christl for Irish Council for Civil Liberties
Joint statement of security and privacy scientists and researchers on Age Assurance by 400+ privacy & security experts and scholars, March 2026
Children's Wellbeing and Schools Bill Running List of Amendments, p20-21, 11 December 2025
Automating the Hostile Environment: AI in the asylum decision-making system by The Open Rights Group
American Dragnet: Data-Driven Deportation in the 21st Century by the Center on Privacy & Technology, Georgetown Law
Physiognomic Artificial Intelligence by Luke Stark & Jevon Hutson
Geneva Conventions
UN Convention on Certain Conventional Weapons