Wednesday, April 08, 2026

Chinese Government on AI Ethics

The following is a Google translation of a Chinese government document on AI ethics. It's very clumsy and difficult to follow in places but nevertheless an interesting window into the stated official position of China on AI ethics. If someone has a more accurate translation you can point me to, I'd be very interested in reading it.

Notice of Ten Departments, Including the Ministry of Industry and Information Technology, on the Issuance of the "Artificial Intelligence Science and Technology Ethics Review and Service Measures (Trial)"

Released 2 April 2026.

Machine translated by Google. 

Chapter One: General Provisions

Article 1. These Measures are formulated in accordance with the "Opinions on Strengthening the Governance of Science and Technology Ethics" and the "Measures for the Review of Science and Technology Ethics (Trial)" (hereinafter the "Ethics Measures"), together with relevant laws, regulations, and provisions, in order to regulate the ethical governance of artificial intelligence science and technology activities, promote fair, harmonious, safe, and responsible innovation, and drive the healthy development of the artificial intelligence industry.

Article 2. These Measures apply to the following artificial intelligence science and technology activities carried out within the territory of the People's Republic of China: activities of artificial intelligence scientific research, technological development, and other science and technology activities that may give rise to science and technology ethics risks and challenges concerning human dignity, public order, life and health, the ecological environment, and sustainable development; and other artificial intelligence science and technology activities for which laws, regulations, and relevant national provisions require a science and technology ethics review.

Article 3. In carrying out artificial intelligence science and technology activities, science and technology ethics requirements shall be applied consistently throughout the entire process, adhering to the ethical principles of enhancing human well-being, respecting the right to life, upholding fairness and justice, reasonably controlling risks, maintaining openness and transparency, protecting privacy and security, and ensuring controllability, and complying with China's Constitution, laws, regulations, and relevant provisions.

Chapter Two: Service and Promotion  

Article 4. Establish and improve the ethical standards system for artificial intelligence science and technology; promote the development of relevant international standards, national standards, industry standards, and group standards; and support the establishment of international platforms for standardization exchange and cooperation.

Encourage universities, research institutions, medical and health institutions, enterprises, science and technology social organizations, and other entities to participate in the formulation, verification, and promotion of ethical standards for artificial intelligence science and technology.

Article 5. Promote the construction of an artificial intelligence science and technology ethics service system; strengthen the supply of services such as monitoring, early warning, testing, evaluation, certification, and consulting for ethics risks in artificial intelligence; and enhance enterprises' ability to prevent ethics risks in technology research and development. Intensify support and services for the ethics review of artificial intelligence technologies by small and medium-sized enterprises. Step up international exchange and cooperation on artificial intelligence science and technology ethics.

Article 6. Encourage universities, research institutions, medical and health institutions, enterprises, and science and technology social organizations to conduct research on artificial intelligence ethics review technologies; uphold technological innovation in the ethics review of artificial intelligence and strengthen the use of technical means to guard against its ethics risks; promote the orderly opening and open-sourcing of high-quality data for artificial intelligence ethics review; strengthen the development of general-purpose tools for risk management, assessment, and auditing; explore the evaluation of ethics risks based on application scenarios; promote artificial intelligence products and services that conform to science and technology ethics; and protect the intellectual property rights of ethics review technologies.

Article 7. Conduct publicity and education on artificial intelligence science and technology ethics; leverage the role of science and technology social organizations in promoting and educating the public about artificial intelligence ethics; encourage public participation and practical demonstrations to enhance public ethical awareness and literacy; and guide the media to conduct targeted publicity and education on artificial intelligence ethics.

Article 8. Support universities, research institutions, medical and health institutions, enterprises, and science and technology social organizations in conducting education and training on artificial intelligence science and technology ethics; promote the development of vocational and curriculum systems; cultivate talent in artificial intelligence ethics through various means; and promote talent exchange.

Chapter Three: Implementing Entities

Article 9. Universities, research institutions, medical and health institutions, enterprises, and other entities engaged in artificial intelligence science and technology activities bear responsibility for the ethics review and management of artificial intelligence within their own organizations. In accordance with the requirements of Article 4 of the "Ethics Measures", they shall establish an Artificial Intelligence Science and Technology Ethics Committee (hereinafter the "committee"), provide it with the necessary staff, office space, and funding, and take effective measures to ensure that the committee operates independently. Qualified organizations are encouraged to pursue certification of their artificial intelligence ethics management systems.

Article 10. The committee's charter, composition, and the duties and obligations of its members shall follow Articles 5 to 8 of the "Ethics Measures". The committee shall include experts with professional backgrounds relevant to artificial intelligence technology, applications, ethics, law, and other fields.

Article 11. Local authorities and relevant competent departments may, in light of actual circumstances, rely on relevant institutions to establish professional artificial intelligence science and technology ethics review and service centers (hereinafter "service centers"). A service center accepts commissions from other organizations to provide services such as ethics review, verification, training, and consultation for artificial intelligence science and technology activities. A service center may not provide both review and verification services for the same artificial intelligence activity.

A service center shall establish standardized management systems and procedures, be staffed with dedicated personnel who have expertise in artificial intelligence ethics review and the capability to provide services, and be subject to the supervision of local or relevant competent authorities.

Chapter Four: Work Procedures

Section 1 Application and Acceptance 

Article 12. For artificial intelligence science and technology activities within the scope of Article 2 of these Measures, the person in charge shall apply to the committee of their own organization for a science and technology ethics review. If the organization has not established a committee, or its committee is unable to perform the ethics review, the organization shall entrust a service center to conduct the review. Individuals with no affiliated organization shall entrust a qualified service center to conduct the review.

The person in charge of the artificial intelligence activity shall submit application materials to the committee or service center as required. The application materials mainly include:

(1) The artificial intelligence science and technology activity plan, including the research background, objectives, and plan; the legal qualification documents, personnel information, and funding sources of the institutions involved; the proposed algorithm mechanism; the data sources and acquisition methods; the testing and evaluation methods; the proposed software and hardware products; and the expected application areas and target users;

(2) An assessment of the ethics risks of the artificial intelligence activity, together with prevention and emergency response plans, including an assessment of the science and technology ethics risks that the anticipated applications may bring, monitoring and early-warning measures for those risks, and plans to prevent and control them;

(3) A written commitment to comply with the requirements of artificial intelligence science and technology ethics and research integrity.

Article 13. The committee or service center shall decide, on the basis of the application materials, whether to accept the application and shall notify the applicant. If the application is accepted, the committee or service center shall, based on the likelihood and severity of the science and technology ethics risks, the urgency of the situation, and other factors, determine whether the general, simplified, or emergency procedure applies, and, according to the requirements of the applicable procedure, conduct the ethics review offline or online. If the materials are incomplete, the applicant shall be informed, in a single notification, of all materials that need to be supplemented.

Section 2 General Procedures and Simplified Procedures 

Article 14. An artificial intelligence ethics review meeting shall be chaired by the committee chairperson or a vice-chairperson designated by the chairperson, with at least five members present, including members from the different categories listed in Article 10 of these Measures. A service center may organize and implement reviews with reference to the rules for committees.

Depending on the needs of the review, consultants and experts from relevant fields with no direct interest in the activity may be invited to provide advice. Such consultants and experts do not take part in the meeting's vote.

Article 15. In conducting an ethics review of artificial intelligence science and technology activities, the committee or service center shall focus on the following aspects:

(1) Human well-being: whether the artificial intelligence activity has scientific and social value; whether its research objectives contribute to enhancing human well-being and have a positive effect on sustainable social development; and whether the risk-benefit ratio of the activity is reasonable;

(2) Fairness and impartiality: whether the selection of training data and the design of algorithms, models, and systems are reasonable; and whether measures such as bias and discrimination prevention and algorithmic optimization have been taken to ensure objectivity and inclusivity in resource allocation, opportunity acquisition, and decision-making processes;

(3) Controllability and reliability: whether the robustness of the model and system can be guaranteed to cope with open environments, extreme situations, and disruptive factors; whether users can control, guide, and intervene in the basic operation of the model and system; and whether a continuous monitoring plan and an emergency response plan have been developed;

(4) Transparency and explainability: whether information such as the purpose, operating logic, interaction methods, and potential risks of the algorithms, models, and systems has been reasonably disclosed; and whether effective technical means are adopted to improve the interpretability of the algorithms, models, and systems;

(5) Traceability of responsibility: whether measures such as log management are in place to comprehensively record information across all aspects of the data, algorithms, models, and systems, ensuring that the entire chain is traceable and manageable; and whether the qualifications of the technical personnel meet the relevant requirements;

(6) Privacy protection: whether sufficient measures have been taken in the collection, storage, processing, and use of data, and in the research and development of new data technologies, to ensure that private data is effectively protected.

Article 16. The committee or service center shall, within 30 days of accepting an application, make a decision of approval, revision and re-review, or disapproval. In special circumstances, such as complex situations or the need to supplement or correct materials, the time limit may be appropriately extended, with the extension period clearly stated.

For activities requiring revision or not approved, the committee or service center shall provide suggestions for amendment or explain its reasons. If the applicant objects to the decision, they may submit an appeal to the committee or service center within 3 working days of service of the decision. If the grounds for the appeal are sufficient, the committee or service center shall make a new decision within 7 working days.

Article 17. Those in charge of artificial intelligence science and technology activities shall promptly identify changes in science and technology ethics risks and report relevant changes to the committee or service center.

The committee or service center shall, in accordance with Article 19 of the "Ethics Measures", conduct follow-up reviews of artificial intelligence activities, promptly grasp changes in ethics risk levels, and, where necessary, decide to suspend or terminate the relevant activities. The interval between follow-up reviews is generally no more than 12 months.

Article 18. Where multiple entities collaborate on artificial intelligence science and technology activities, they may, based on the actual situation, mutually recognize one another's artificial intelligence ethics review results.

Article 19. The simplified procedure may be applied in any of the following circumstances:

(1) The likelihood and severity of the ethics risks of the artificial intelligence activity are no higher than the risks routinely encountered in daily life;

(2) Minor modifications to an approved artificial intelligence activity plan that do not increase the risk-benefit ratio;

(3) Follow-up review of artificial intelligence activities that have not undergone major adjustments since the previous review.

Article 20. The committee or service center shall formulate work procedures and tracking frequencies applicable to the simplified procedure. A simplified-procedure review is undertaken by two or more members designated by the committee chairperson. A service center may organize and implement such reviews with reference to the committee's rules.

During a simplified-procedure review, if the result is a negative opinion, if there are doubts about the content under review, or if the reviewing members disagree, the review shall be converted to the general procedure.

 Section 3 Expert Review Procedure

Article 21. The Ministry of Industry and Information Technology and the Ministry of Science and Technology, together with relevant departments, shall formulate and release the "List of Artificial Intelligence Science and Technology Activities Requiring Expert Ethics Review" (hereinafter the "review list"), which is dynamically adjusted according to work needs.

Article 22. For artificial intelligence science and technology activities included in the review list, after preliminary review by the committee or service center, the organization shall apply for an expert review. Where multiple organizations are involved, the lead organization is responsible for the application. Central enterprises, and the universities, research institutions, and medical and health institutions directly under the central and state organs, shall report directly to the relevant competent authority, which organizes the expert review; other organizations shall report to the local authorities, which organize the expert review.

Article 23. Organizations undertaking artificial intelligence science and technology activities shall submit materials for expert review in accordance with Article 27 of the "Ethics Measures".

Local authorities or relevant competent departments shall, in accordance with Articles 28 to 30 of the "Ethics Measures", establish a review expert group to verify the compliance and reasonableness of the preliminary review opinions, and shall provide feedback to the applying organization within 30 days of receiving the review application.

Local authorities or relevant competent departments may entrust a service center to carry out the review work.

Article 24. The committee or service center shall make its science and technology ethics review decision based on the expert review opinions.

Article 25. The committee or service center shall strengthen follow-up review of artificial intelligence activities included in the review list, with review intervals generally not exceeding 6 months. If the science and technology ethics risks change significantly, a new ethics review shall be conducted in accordance with Article 20 of the "Ethics Measures", and expert review shall again be requested.

Article 26. For artificial intelligence science and technology activities in areas such as deep synthesis, algorithm recommendation, and generative artificial intelligence service management, where regulatory measures such as registration, filing, and administrative approval have been implemented and compliance with science and technology ethics requirements is made a condition of approval and a subject of regulation, expert review is no longer required.

 Section 4 Emergency Procedures

Article 27. The committee or service center shall establish an emergency review system for artificial intelligence science and technology ethics, clarifying the procedures and standards for emergency review during public emergencies and other urgent situations. An emergency review is generally completed within 72 hours; for artificial intelligence activities subject to the expert review procedure, the review preceding expert review is generally completed within 36 hours.

Article 28. The committee or service center shall ensure the quality and timeliness of emergency science and technology ethics reviews and strengthen follow-up work and process supervision. When necessary, consultants and experts in relevant fields may be invited to attend meetings and provide advice.

Chapter Five: Supervision and Management

Article 29. The Ministry of Science and Technology is responsible for the overall guidance of national science and technology ethics supervision. The Ministry of Industry and Information Technology, together with relevant departments, is responsible for the ethics governance of artificial intelligence science and technology and for strengthening the coordination and guidance of emergency ethics review work. Each department shall, within the scope of its responsibilities and authority, supervise and manage the ethics review of artificial intelligence within its own industry and system. Each locality shall, within the scope of its duties and authority, supervise and manage the ethics review of artificial intelligence within its jurisdiction.

Article 30. Organizations shall, in accordance with Articles 43 to 45 of the "Ethics Measures", register relevant information about their committees and about artificial intelligence science and technology activities included in the review list through the National Science and Technology Ethics Management Information Registration Platform, and shall submit the previous year's committee work report, reports on the implementation of listed activities, and other relevant materials. Service centers shall register and submit the previous year's work report in accordance with the above provisions.

The Ministry of Science and Technology and the relevant authorities synchronize and share registered information relating to artificial intelligence science and technology ethics.

Article 31. Local authorities, relevant competent departments, and organizations engaged in artificial intelligence science and technology activities shall, in light of the specific circumstances of their own industry, system, and organization, make full use of channels for reporting ethical violations and irregularities in artificial intelligence activities, and shall handle such matters in accordance with relevant regulations.

Article 32. Violations of these Measures in artificial intelligence science and technology activities, or in the conduct of artificial intelligence ethics work, shall be punished in accordance with the relevant provisions of the Cybersecurity Law of the People's Republic of China, the Data Security Law of the People's Republic of China, the Personal Information Protection Law of the People's Republic of China, the Science and Technology Progress Law of the People's Republic of China, and other relevant laws and regulations.

Chapter Six: Supplementary Provisions

Article 33. Where time limits specified in these Measures are not defined as working days, all dates are calendar days.

The term "local authorities" as used in these Measures refers to the provincial-level departments under the provincial people's governments responsible for the review and management of science and technology ethics in the field of artificial intelligence; "relevant competent authorities" refers to the relevant competent authorities under the State Council.

Article 34. Local authorities and relevant competent departments may, in accordance with these Measures and in light of actual conditions, develop or revise detailed rules and guidelines for artificial intelligence science and technology ethics review and services for their own region, industry, or system. Science and technology social organizations may formulate specific standards and guidelines for ethics review and services related to artificial intelligence in their own fields.

Article 35. Where relevant competent authorities have special provisions for the supervision of artificial intelligence science and technology ethics review and services within their own industries and systems that conform to the spirit of these Measures, those provisions shall apply. For matters not covered by these Measures, the "Ethics Measures" and relevant laws and regulations shall be followed.

Article 36. These Measures shall be interpreted by the Ministry of Industry and Information Technology in conjunction with relevant departments.

Article 37. These Measures shall come into effect on the date of issuance.

Appendix: List of Artificial Intelligence Science and Technology Activities Requiring Expert Ethics Review

1. Development of human-machine integration systems that strongly influence human subjective behavior, psychological emotions, and life and health.

2. Development of algorithm models, applications, and systems capable of mobilizing public opinion and guiding social consciousness.

3. Research and development of highly autonomous automated decision-making systems for scenarios involving safety risks and risks to personal health.

This list will be dynamically adjusted according to work needs.

 

Friday, March 20, 2026

Our algorithmic future – Utopia or Armageddon?

 

I did a talk for the Open University's final-year undergraduate computing and communications project students earlier this week. An edited transcript follows below.

Good evening, everyone.

I am going to be talking about algorithmic decision-making machines we are calling artificial intelligence, even though they are not intelligent, in any meaningful sense of the word.

Some industry leaders are promising artificial general intelligence – AGI – will, magically, stimulate economic growth, rectify climate change and fix pretty much all the world’s problems, including curing cancer, and creating a utopia.

One has said he believes regulations and environmental activists are the tools of the antichrist.

One talks of his pride in killing enemies.

Several warn a future superintelligent AI may wipe out humanity.

Whatever their predictions of the possible effects, all of these characters, hype and doom merchants alike, are basically telling the same story – that AGI is inevitable.

The hype of AI has captured hearts and minds at the highest levels of government and commerce.

Politicians are wrapped in fear of missing out and want AI in everything.

CEOs are beginning to get impatient for a payoff on their investments.

AGI may or may not be inevitable. But there are serious geopolitical, environmental, societal, economic and technological questions about whether we should be pursuing it at all.

Meanwhile not enough attention is being paid to how algorithmic decision-making systems are being deployed in practice and with what consequences – in social welfare, border control, policing, security & intelligence, the military, employment, health and other contexts. Let’s consider some of these and think about whether the inevitability of AGI is the appropriate framing for board rooms and governments to be making decisions about our algorithmic future.

 

But… Before we get into any of that, a bit of history…

Meet John McCarthy. It was McCarthy who coined the term artificial intelligence when, with Claude Shannon, Marvin Minsky and Nathaniel Rochester, he convened the Dartmouth summer workshop on AI in 1956. McCarthy freely admitted choosing the label ‘artificial intelligence’ as a hook to attract funding. That’s right folks, AI began as a sales pitch. And so it pretty much continues to this day.

On the left is Joseph Weizenbaum, who created the first chatbot, ELIZA, in 1966. Named after the female lead in George Bernard Shaw's Pygmalion, ELIZA's Doctor variation was a simple keyword-matching program that bounced rephrased user statements back at them, typically as questions.

On the right is a sample of the kind of responses ELIZA provided, as noted in the paper Weizenbaum published on the program in the Communications of the ACM journal, in January 1966.
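The keyword-matching technique Weizenbaum described can be sketched in a few lines of Python. This is a hypothetical miniature for illustration only; the original ELIZA was written in MAD-SLIP, and the small pattern table below is invented rather than taken from Weizenbaum's Doctor script:

```python
import re

# Hypothetical miniature of an ELIZA-style Doctor script (illustration only).
# Pronoun swaps so a matched fragment reads naturally when echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword patterns, tried in order; {0} is filled with the reflected fragment.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching template, echoing the user's own words back."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT
```

For example, `respond("I am unhappy")` yields "How long have you been unhappy?", while anything unmatched falls back to "Please go on." The point is that the program understands nothing; it only reflects the user's own words back as a question, which is exactly why the emotional responses it provoked were so striking.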

Although it was designed simply to enable natural language communication with the computer, Weizenbaum was astonished, and then disturbed, that ELIZA elicited emotional responses from users, who assumed it was conscious and showing interest in them. Famously, on at least one occasion, his secretary asked him to leave the room so she could converse privately with ELIZA.

This false attribution of human empathy, understanding and consciousness to machines became known as the ELIZA effect.

Weizenbaum considered it delusional thinking and was a critic of the blind faith in and adoption of technology generally. He also believed that using algorithms for tasks needing empathy & human judgment (e.g. therapy or welfare or criminal justice) was “obscene”.

 

I’m going to fast forward, now, through to the early 2010s. AI, machine learning, deep learning, neural networks had lost the power to dazzle benefactors and had spent quite a chunk of the interim in an “AI winter”. Funds had been hard to come by and the field was marginalised.

December 2012 was the inflection point.

Geoffrey Hinton published a paper with his then students Alex Krizhevsky and Ilya Sutskever, entitled "ImageNet Classification with Deep Convolutional Neural Networks". They used GPUs rather than CPUs to create a deep learning image/pattern recognition system they called AlexNet, after its primary developer, Krizhevsky. It spawned an almost instant change from AI winter to apparently unlimited venture capital funds for AI researchers and startup companies.

The underlying deep learning technology had not advanced that much since the 1990s, but there came a realisation that the computing power which had become available, along with the bottomless bounty of digital data available via the internet, created a huge increase in its possible practical applications.

Hinton, who has been called the godfather of modern AI, auctioned their company, DNN Research, the only asset of which was their deep learning paper. There were four bidders (Google, Microsoft, DeepMind and Baidu), Google winning with an offer of $44 million.

Krizhevsky remains one of the most respected AI scholars. Sutskever left Google in 2015 and became a co-founder and chief scientist at OpenAI, where he oversaw the development of ChatGTP [sic]. Though regretting it shortly afterwards, he was one of the board members who ousted Sam Altman in 2023. He stepped down after Altman was reinstated. He founded Safe Superintelligence Inc (SSI Inc) in 2024 with the aim of developing super intelligent AI safely.

 

By 2012, unfettered private sector mass surveillance & profiling, of a scale unthinkable before the turn of the century, plus addictive, attention-grabbing apps and social media algorithms, had long been established as the core business model of the internet. They were and are the profit engines of a concentrated set of giant, mostly US and Chinese, technology corporations like Google, Apple, Facebook/Meta, Amazon, Microsoft, Nvidia, Oracle, Tencent (WeChat etc), Alibaba, ByteDance and others.

Notable mention should also be made of the Taiwan Semiconductor Manufacturing Company Limited (TSMC or Taiwan Semiconductor), the world’s largest chip manufacturer, which has a near-global monopoly on the fabrication of advanced AI chips, being a key supplier of Nvidia, which in turn supplies the hardware at the heart of most advanced AI systems.

The technology systems built and rolled out by those companies are used, extensively, by states in the whole gamut of government services from law enforcement, health and social welfare to border control, military, security and intelligence.

The most profitable firms in the world either monetise or otherwise exploit data through stalker advertising and profiling and/or provide software and hardware services and infrastructure to the economic & state actors who do.

A report in the past couple of weeks, for example, confirmed that the US Customs & Border Protection agency has been buying location data from the adtech and data broker industries to track people. The brutal Immigration & Customs Enforcement agency is also buying location data and is actively soliciting more, as well as more related surveillance technology.

Advertising technology (adtech) systems invisibly, invasively, relentlessly, secretly and at scale track activity on every internet-connected device.

Algorithms aggregate all those actions and classify them into profiles.

That makes it a national security issue, according to the Irish Council for Civil Liberties, because these systems, and the vast ecosystems of companies that run and are tapped into them, track and share everybody’s personal information. Including those at the highest levels of government, intelligence, security and military services.

Cameras, sometimes with integrated face recognition, watch us everywhere, in public and in parts of previously private spaces, such as homes. GCHQ, for example, ran a scheme called Optic Nerve, which, in one six-month period, surreptitiously collected webcam images from 1.8 million Yahoo! account holders. In an effort to comply with the Human Rights Act 1998, operators were limited to collecting one image from each camera every 5 minutes.

Phone apps & networked computers track location and behaviour.

Thousands of economic and state actors collect and/or buy, process and attempt to derive benefits from the digital panopticon we have permitted them to build.

Systems built to collect data, profile us, target ads, grab & maintain our attention and fuel profits have become systems of disproportionate influence and control.

The quest for super intelligent AI that can outperform humans on every dimension is the latest manifestation of all this. The opportunity for those AI models to chomp on the entire digitised history of human creativity and personal data, generated through interaction with the internet, was too good to miss now that the computing power had become available.

And grab that material for training their giant AI models is exactly what the big AI companies have done, riding roughshod over intellectual property, creative moral rights, privacy and data protection rights in the AI arms race. Hundreds of billions of dollars have been and are being invested in AI companies, models and the massive, environmentally corrosive data centres that run them, leading to further intensification of data-gathering surveillance.

It’s not just corporations and governments, by the way, who have been collaborating in building our mass surveillance society. We have, enthusiastically, invested in and deployed these devices, gadgets and systems and the frictionless convenience and access to information, entertainment, consumption, retailers and community they offer.

And when some of the companies involved get negative publicity for pushing the boundaries of invasive behaviour, we collude in excusing them; or just shrug indifferently, if we notice at all.

In one of the gentler examples of this, in 2024 Microsoft discovered state backed hackers from China, Russia and Iran using Microsoft AI for cyber attacks. Security experts assessed the reports and evidence and concluded that Microsoft must have been engaged in large scale, surreptitious spying on their AI users, in order to be able to detect these hackers.

Defenders of Microsoft suggested it was “not fair” to call the company’s actions “spying.” They were just the good guys and they found the bad guys. Besides, the small print in the terms of service says, explicitly, that Microsoft monitors users; and all users agreed to the ToS, so they had no justification for complaining.

The rest of us pleaded indifference or simply didn’t pay any attention.

Our expectations of privacy have dropped enormously.

A bit like what fisheries scientists call a shifting baseline. Basically, radical declines in ecosystems or fish species over long periods of time, caused by intensive overfishing, went unnoticed. The declines were evaluated by experts who used the state of the fishery at the start of their careers as the baseline, thereby missing the longer-term trends.

What seems “normal” or “natural” is whatever we experience, as children or at the start of our careers or in particular contexts or circumstances; and we evaluate everything from that baseline.

Surveillance is easier & more pervasive than it has ever been – commerce & governments expand & consume it voraciously.

Not long ago the ability to track people’s daily movements, and thereby learn their employer, home address or place of worship, was considered a dangerous power, only available, via a warrant, to specific government agencies. Now pretty much anyone can access this capability.

In the last century practically the only people who provided their fingerprints were suspects detained in police stations. Now kids need to press a fingerprint reader to register or get access to lunch, lockers or books in school; or unlock their phones. Children have been conditioned to find it normal to provide their fingerprints for routine daily tasks.

Sadly my call to arms to teenagers to opt out of their school fingerprint systems, as is their right under section 26(5) of the Protection of Freedoms Act 2012, went unheeded.

After over two decades of the deployment of biometric collection systems in schools, the UK government finally released some guidelines on their use in July 2022. They are short and not particularly enlightening.

Privacy has been disappearing faster than fish stocks, without us paying attention and the AI industry is supercharging that process.

In November 2022 the release of ChatGTP [sic] [thanks to Cory Doctorow for pointing out the error :-). I checked the recording (only available to the students) and it looks like I repeated it throughout the talk] rocked the world. Cue another stampede of venture capital and big tech investment and other economic actors wanting in at the apparently brightening dawn of a new AI age.

Now the arrangement of these investments is hugely complicated… A large chunk of the $135 billion, 27% stake Microsoft has in OpenAI, for example, takes the form of Microsoft’s provision of Azure cloud services to train and run OpenAI models.

There is massive cross-fertilisation, with many of the investors hedging their bets by carving out a piece of many of the big AI companies, in the hope of being there for the payoff when/if one of them reaches the AGI pot at the end of the rainbow.

Depending on which list you consult, the biggest companies include Nvidia, Microsoft, Apple, Alphabet (Google), OpenAI, Meta (Facebook), Tesla, Oracle, Anthropic, Amazon, Palantir, IBM, xAI, Anduril Industries and Databricks.

You don’t need to worry about most of these guys on the slide if Armageddon does come to pass. Many of them are building giant, luxury, underground bunker complexes that they and their chosen friends, family and acolytes get to ride it out in. I believe Mr Zuckerberg has chosen Hawaii for a 1,400-acre, $270 million compound. Planning documents suggest it will include a 5,000-square-foot underground shelter, with its own energy and food supplies.

I’ve been talking for a while and haven’t yet even addressed the question of what AI actually is. Partly because it is a fraught and complicated one and partly because it remains the catch all sales pitch that John McCarthy conceived it as. AI appears to be everywhere and in everything these days.

Think about how we’d be able to talk about planes, trains, automobiles, bikes, boats, rockets, even walking, if we didn’t have words for them other than “vehicle”.

There’s a media frenzy about a new vehicle that breaks all previous speed records. It happens to be a rocket but the public don’t know that. They want their car, bike etc to be able to go as fast.

That’s pretty much where we are with AI. Just replace the word “vehicle” in that scenario with “artificial intelligence”.

All software tools are being labelled AI and people, in public and policy debates, talk past each other when discussing it because of the paucity of understanding of what precise kind of technology it is they are talking about.

AI covers a huge range of different technologies, some of them having nothing to do with AI; some useful, e.g. spell checkers; some revolutionary, e.g. image processing and breakthrough tech on protein folding; some harmful, e.g. mass surveillance tech.

Some are deployed in inappropriate contexts and cause harm, like Elon Musk’s Grok chatbot generating an estimated 3 million sexualised images, including 23,000 of children, over an 11-day period at the end of 2025 and the beginning of this year.

Yet many of these tools have the potential to have positive effects. What matters is how we design and use them, who controls them and for whose benefit.

Basically then, AI is an umbrella term for a loosely related range of technologies.

LLMs like ChatGTP [sic] have almost nothing in common with the software social welfare departments use to evaluate whether someone may be accused of benefit fraud.

How these systems work, what they are used for, by whom, against whom & how they fail – all are hugely different.

Generative AI (GenAI), e.g. Stable Diffusion, Claude, ChatGTP [sic] and Dall-E, can generate apparently realistic content quickly. Progress is impressive, but it is unreliable and prone to misuse and to generating misinformation.

Predictive AI – deployed by governments in policing, border control, social welfare, the military – is where much of the harm happens.  It is widely used & being expanded & sold as a “solution” to problems but does not and, often, cannot work. It is hard to predict the future and AI does NOT change that.

Artificial general intelligence (AGI, not to be confused with GenAI) is the holy grail being chased by the AI industry. This is a hypothetical advanced, sentient AI that can understand, learn and apply knowledge across a vast, diverse range of domains, matching or surpassing human capabilities in all these areas.

A further extension of AGI is artificial super intelligence (ASI), which would evolve from AGI to surpass human abilities by a significant margin.

AI, at its heart, is a probability machine – it finds patterns in data, re-mixes, regurgitates – predictive text/images/code on steroids. It is NOT intelligent or sentient or empathetic or conscious.
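To make the “predictive text on steroids” point concrete, here is the crudest possible language model, a bigram counter: it has no understanding, it just replays the statistics of its training text. (Real LLMs are vastly more sophisticated, but the principle of predicting the next token from learned patterns is the same; the corpus here is obviously made up.)

```python
from collections import Counter, defaultdict

# "Train": count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    # "Generate" by emitting the most frequent continuation:
    # pure pattern replay, no understanding involved.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" - the most common continuation seen
```

A model like this can only ever reflect its training data back at you, which is the sense in which nothing here is intelligent, sentient, empathetic or conscious.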

Discriminatory data and algorithms have been and are being hardwired into predictive AI systems. The oppression, discrimination and violence that have blighted the lives of marginalised communities throughout history are getting baked into AI systems that are perceived, wrongly, to be “intelligent”, neutral decision-making machines. With this, institutional discrimination and prejudice will become harder than ever to combat.

People not directly affected can get an idea of the real-life consequences for victims of institutional aggression, based on rogue IT systems, from stories like the Post Office Horizon scandal.

Bottom line, we should be less worried about future utopia (even an exclusive billionaires’ members only utopia) or Armageddon and more concerned with the harmful effects of these technologies right now and in the recent past, particularly those under the control of powerful concentrated economic and political forces that can, reasonably objectively, be described as malevolent.

Too often the harms suffered by the marginalised are dismissed, by those with vested interests, as inconvenient hurdles to “progress”.

Let’s take a look at just some of the harms that are being perpetrated by people deploying these technologies in the real world.

I’d like to look at examples from some of the areas on the slide.

Dr Joy Buolamwini downloaded some face recognition software to help with an art project when she was at MIT and discovered it could not register her face, unless she put a white mask on. I’ve put video of her describing this experience in the slide and recommend you watch it when you get the chance.

The first independent study of South Wales Police’s use of face recognition, at a Champions League football match and a Six Nations rugby game, was published by Cardiff University researchers in 2018. At the football, the system made correct identifications in 3% (yes, I said three per cent) of cases (94 people out of 2,710 scanned) and falsely identified people in 72% of cases (1,962 people). By August 2020 the Court of Appeal had ruled that the force’s use of live face recognition was unlawful.
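For anyone who wants to check, those percentages follow directly from the raw counts quoted from the Cardiff study:

```python
# Figures from the Cardiff University study as quoted above.
scanned = 2710
correct_ids = 94
false_ids = 1962

print(round(correct_ids / scanned * 100))  # 3 (per cent correct)
print(round(false_ids / scanned * 100))    # 72 (per cent false)
```

Note that the two figures account for only about 76% of the 2,710 scanned; the study presumably categorised the remainder differently.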

Advocates, vendors and every techbro who sells face recognition gadgets will tell you the technology is better now.

Some even claim that the encoded bias has been fixed. (See response to Prof Fry's question at 55m 52s).

It remains racially biased and is considered a significant threat to human rights by many, including Buolamwini. A visceral example was when the head of Iran’s agency for enforcing morality laws announced, in September 2022, that face recognition tech would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.”

Face recognition has become popular with authoritarian regimes as a tool to suppress dissent. It is also very popular with the UK government who in December last year pledged to “ramp up facial recognition” and “better equip the police”.

In her 2023 book, Buolamwini documents the evidence of discrimination hardwired and coded into technology like face recognition. It is discriminatory on multiple dimensions, including race, gender, disability and religion.

AI mirrors the bias and assumptions built into its training data, which, in turn, show the bias and assumptions of the societies that data was produced by and about. It was trained on data shaped by generations of policy decisions, societal norms & attitudes and security doctrines that framed ethnic minority or disabled or migrant or female or gender-nonconforming or Muslim or Jewish or other religious or nomadic identities as inherently inferior, less valued or just plain suspicious. It should not be a surprise it exhibits bias.

This is the brilliant Dr Timnit Gebru. At the AI conference in 2015 where Sam Altman and Elon Musk launched OpenAI, she was one of only 5 black people in a delegation of thousands.

Nascent AI systems were already having an influence that caused her deep concern – deciding credit scores, whether to grant mortgages or access to housing, flagging suspects for police, helping judges to decide sentences or whether to grant bail.

A system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) was and is being used by judges and parole officers to make decisions about bail, sentencing and parole.

COMPAS attributed high re-offending risk scores to black defendants and lower scores to white counterparts. A 2016 investigation, by ProPublica, showed COMPAS more than twice as likely to be wrong about black defendants as white ones.

Given COMPAS and other discriminatory AI systems she was studying – predictive policing systems trained on historic data which led police to target already over-policed minority communities; Google and Microsoft’s gender biased translation software which assumed e.g. certain professions like doctor or engineer were male, the Google computer vision algorithm that classified black people as apes – Gebru was furious.

As she put it, “A white tech tycoon, born and raised in South Africa during apartheid, along with an all-white, all-male set of investors and researchers” were launching OpenAI allegedly to, as they claimed, stop AI “taking over the world”.

 It was clear to Gebru that the narrative generated, to focus media, political and public attention on imagined future Arnold Schwarzenegger like Terminators destroying the world, was intended to distract from the need to address the serious problems that AI systems were already causing.

Working out why AI systems churn out biased information is difficult. Some experts claim it is impossible to tweak models to fix bias, once embedded, because they are so complex, even their creators don’t understand them. Gebru had an idea on how to do a more effective job at trying filter out bias when the systems are getting built.

She proposed every AI training dataset be documented in detail, just as every component in the electronics industry is. Every data set should be “accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses”, details on how it was created, what was in it, what its purpose was to be, what limitations it might have, make the design transparent, reproducible, independently testable. It didn’t catch on.

Big AI firms don’t do transparency, much though they have occasionally claimed otherwise. OpenAI’s standard excuse is that they don’t want to make the details available to bad guys who might misuse them.

Gebru believes we need regulation that imposes transparency on AI companies; that it should always be clear when we use AI systems that they are machines, not sentient, empathic beings; and that the organisations behind them should be required to document and publicly release training data and model architectures.

 

In 2018 Gebru joined Margaret Mitchell at Google to co-lead the AI ethics research team. Mitchell was a computational linguistics specialist known and respected for her work on fairness in machine learning. She was frustrated that her repeated internal warnings, of the potential problems the AI systems Google were developing would cause, were falling on deaf ears. Worse, often she would get memos from HR telling her to stop making waves and be more collaborative. By 2020 both Mitchell and Gebru became depressed that they could not get through to management about the risks of LLMs.

In 2021, Gebru, Mitchell and a couple of other brilliant computational linguistics experts, Emily M. Bender and Angelina McMillan-Major, who shared their concerns, wrote a seminal paper, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Bender was particularly concerned that even linguistics and AI specialists, who should know better, because they know LLMs are coded mathematical probability machines, were saying, publicly, that LLMs were displaying understanding and consciousness.

The stochastic parrots paper ran to 14 pages and summarised the evidence that LLMs were amplifying bias, the companies building them were becoming more and more secretive about their details and they were under-representing all languages other than English. Mitchell has herself credited as Shmargaret Shmitchell on the paper to emphasise the story it tells.

Gebru and Mitchell went through all Google’s internal processes to get approval for the paper and additionally sent it to more than 20 reviewers they trusted inside and outside the company, to get comprehensive pre-publication feedback.

About a month after submitting it, Gebru and Mitchell were summoned to a meeting with Google executives and told to retract the paper because it was too negative about LLMs. It should be more positive. They didn’t want it associated with Google and anyway Google’s LLMs were “engineered to avoid” the problems they described.

Long story short, Gebru then got sacked (though Google said she resigned) and Mitchell was also fired a few months later, reportedly, when attempting to gather evidence of the company’s discrimination against Gebru and sharing it with external parties.

Their departures, particularly Gebru’s, caused a scandal. There were more than a few articles in mainstream as well as technology press. Some even wondered whether Google were as committed to equality, diversity, inclusion and fairness as they liked to claim. Especially after sacking their two AI ethics leads in quick succession, just because they had raised concerns about the potential risks of LLMs? It also drew significantly more attention to the paper than it might otherwise have received.

Gebru now runs the Distributed Artificial Intelligence Research Institute (DAIR) which she founded after leaving Google. She was named one of the 10 greatest minds in tech by Spanish newspaper El Pais in December 2025.

Mitchell is a researcher and Chief Ethics Scientist at Hugging Face.

Time is pressing on, so I’m going to fast forward through a few of the following slides, but before I do, English residents should note that the Heath Secretary is a big fan of AI and is proposing a few developments in the NHS that everyone really should be more aware of.

The core takeaway from his 10 year plan is that doctor patient confidentiality is to be a thing of the past.

AI, for example, will record and transcribe and send to a central database, likely brokered by Palantir, every word a patient and their doctor shares.

I highly recommend signing up to medConfidential’s newsletter if you would like to know more about this.

 

 

 

 

  

 

According to the UN, the first known use of what they called “lethal autonomous weapons systems such as the STM Kargu-2”, a Turkish manufactured drone shown in the picture, was in the Libyan civil war early in 2020.

The drones “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability”.

Since 2018, UN Secretary-General António Guterres has maintained that lethal autonomous weapons systems are politically unacceptable and morally repugnant and has recommended these systems be banned, under international law.

In December 2024 the UN General Assembly adopted resolution 79/239, basically saying that international law should apply to the use of AI by the military. Given the absence of respect for international law demonstrated by major powers in the past few years, the resolution, in practice, carries little weight.

A UK MoD policy paper on AI in defence, published in 2022, says that real time human supervision of such systems “may act as an unnecessary and inappropriate constraint on operational performance”.

At the time the US, Russia, India, Australia agreed. Brazil, South Africa, New Zealand, Switzerland, France & Germany wanted bans or restrictions.

In August 2023, the US announced “the Replicator Initiative” to produce large numbers of cheap autonomous weapons. The intent, according to a January 2026 briefing for Congress, was “to deploy uncrewed systems en masse, allowing the U.S. military to disperse combat power over a large number of relatively inexpensive systems.”  The first contracts for autonomous boats, drones and anti-drone weapons were issued in May 2024.

A House ofLords AI in Weapons Systems Committee report in December 2023 “recommended that the government proceeds with caution on the development and use of artificial intelligence in weapon systems.”

The 26 page response by the government essentially said they’d think about the recommendations but weren’t prepared to be held back when the big bad guys out there were developing this stuff too.

By 2025 Minister for the Armed Forces, Luke Pollard, declared that “compliance with international humanitarian law is absolutely essential”

The government’s 2025 Strategic Defence Review, however, stated that “The UK’s competitors are unlikely to adhere to common ethical standards” in developing such weapons. It was silent on whether the UK would do so. You may draw your own conclusions on that.

128 countries at the UN weapons convention have not been able to agree to limit or ban the development of lethal autonomous weapons. They have pledged, instead, to continue and “intensify” discussions.

Two weeks ago, a researcher at Cambridge University’s AI Safety Hub released a paper on what he described as military AI agents. He noted they have six structural failure modes that mean, once set up in autonomous weapon mode, there is little or no meaningful human control of them in live military or battlefield contexts.

However well tested (or not) these AI agents might be under laboratory conditions in research & development labs, deploying them in the wild creates profound challenges for safety, security, reliability, trustworthiness and control. Not to mention threats to life.

New Scientist reported, a week earlier, that a researcher at King’s College London discovered that advanced AI models, in war game simulations, escalated the conflict to nuclear strikes, in 95% of cases.

They never agreed to surrender, no matter how badly they were losing; and the made significant mistakes, leading to escalation and unintended casualties in 86% of cases. The simulations were run with GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash.

Major powers are already using AI in war gaming. We do not know what extent they are incorporating AI decision support into live military decision-making processes.

That leads us onto one of the big AI companies, Anthropic, whose CEO Dario Amodei has stated he believes “deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat… autocratic adversaries.” Going on to say:

“Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community.”

Anthropic’s Claude AI is integrated into Palantir software. Palantir’s “Maven Smart System” is variously described, in press reports, as powered by Anthropic’s Claude, to identify, profile, prioritise and select targets for military strikes, in real time.

Palantir’s UK head, Louis Mosley, has said it enables “faster, more efficient and ultimately more lethal decisions where that’s appropriate”.

Anthropic has two restrictions on the use of their technology –

1.     That it should not be used for the mass surveillance of Americans (people in the rest of the world are fair game but not Americans)

2.     That it should not be used in autonomous weapons systems

They asked Palantir if their AI was used in the bombing of Venezuela and rendition of the Venezuelan president into US custody. Claude did, apparently, play a key role.

To make a long story short, that has led to a falling out with the Trump administration, in turn causing Anthropic’s contract to be cancelled and the company designated a supply chain risk.

It is first time in history such a designation has been applied to a US business. Anthropic are suing the government, while continuing to fulfil their contractual duties, during the six month phase out period Mr Trump has announced; and Claude is being used, by the US military, in their attacks on Iran.

Though it generated approximately 1,000 targets on the first day of the bombings, we don’t yet know if Claude / Smart Maven was used in targeting the Minab school, killing over 170 people, most of them children. If it was, you may wish to consider whether Anthropic & Palantir are responsible for a war crime.

The US government has offered Anthropic’s contract to OpenAI, which Sam Altman has, patriotically, accepted, agreeing not to impose any woke, unreasonable Anthropic-like restrictions. The Defense/War department’s chief technology officer, Emil Michael, described as a “whoa moment” the realisation that Claude was the only AI model authorized in classified settings.

OpenAI are now inside the US classified tent, as is Elon Musk’s xAI and there are ongoing negotiations to include Google too. Michael said he’s not biased. He wants all of them because he needs redundancy.

Whatever concerns any of us might have about AI in weapons, as Alan Z. Rozenshtein, a law professor at the University of Minnesota, has said, the rules governing military AI should not be set through ad hoc negotiations between government officials and individual companies, with no democratic input, no regulatory constraints, and no framework that survives the next change of government. 

Dare I add an addendum to Prof Rozenshtein’s concerns, that neither should they be set by the next change of mind of a volatile resident of the Oval Office.

What is not widely known is that a few weeks prior to this dispute, Anthropic’s safeguards research team lead Mrinank Sharma, left the company on the 9th of February, citing serious ethical concerns; or as he put it in his leaving letter to colleagues: “throughout my time here, I’ve repeatedly seen how hard it is to let our values truly govern our actions.” He’s got an NDA so cannot be more specific about his reasons. But when asked about where he thought we would be with AI safety next year, he posted a gif of the everything’s fine dog.

Here’s some images of the Anthropic dispute with the Pentagon I asked AI to produce. ChatGTP [sic] didn’t do too badly. Not sure what kind of AI juice DeepAI was drinking.

When OpenAI agreed to steal Anthropic’s War Department lunch, without restrictions, their robotics chief resigned, declaring the same red lines as Anthropic. She said, on Twitter: “AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

 

Silicon Valley has come a long way in the past decade on its attitudes to doing business with the military. Having been reluctant in the 1990s and early 2000s to engage in such business, at least overtly, despite the industry’s origins in the military industrial complex, the more aggressive big tech moguls are now actively touting for the business of the traditional big defence companies, while labelling the incumbents dinosaurs.

In 2024 Google sacked over 50 people for protesting about the company’s joint involvement, with Amazon Web Services, in the $1.2bn Project Nimbus cloud computing contract with the government of Israel, to be used in finance, healthcare, transportation and education, as well as by the military.

In April 2024 Google CEO, Sundar Pichai, sent a memo to all employees, praising their state-of-the-art AI models and research, their leadership in building and deploying responsible and safe AI, the exciting opportunities they have in AI. He concluded with a sting in the tail – a warning that Google is a business and not a place "to attempt to use the company as a personal platform... or debate politics", the implication being that employee protests would not be tolerated.

Former Google chief, Eric Schmidt, served as chairman of the U.S. Defense Innovation Board from 2016 to 2020 and led the National Security Commission on AI from 2018 to 2021. 

In 2025 a UN report, From economy of occupation to economy of genocide, listed 122 companies as profiting from genocide, including Google, Amazon, IBM, Microsoft and Palantir. The author of the report, Francesca Albanese, UN Special Rapporteur on the situation in the Palestinian territories occupied since 1967, was subsequently subjected to US sanctions.

Judges and prosecutors at the International Criminal Court have also been sanctioned by the US, following their indictment of Israel’s prime minister, Benjamin Netanyahu for war crimes and crimes against humanity.

Their finances have been frozen and they can’t use a credit card, book a hotel room or so much as buy a cup of coffee. EU member state governments have the legal power and the duty to protect their own citizens, under something called the EU Blocking Statute, but have chosen not to do so, for fear of how the US might respond. Likewise financial institutions and other economic actors.

I wonder what President Eisenhower, who coined the term, would make of the modern-day big tech military-industrial complex?

 https://x.com/AssalRad/status/2035480540498006524/photo/1

Israel’s version of the Palantir/Anthropic “Maven Smart System” is an AI that press reports call “Lavender”. Lavender was used to generate kill lists of tens of thousands of suspected members of Hamas. These targets’ homes were then bombed at night, when they were believed to be asleep with their families.

There was little meaningful human oversight, according to six of the Israeli intelligence officers who operated it. Operators were no more than rubber stamps for the AI output, devoting about 20 seconds to each target before authorizing the bombing.

“Dumb” bombs, rather than so-called “smart” “precision” bombs, were used for suspected junior militants. These cause greater numbers of casualties. One of the intelligence officers reportedly told a +972 Magazine journalist: “You don’t want to waste expensive bombs on unimportant people”.

A policy decision was made that it was acceptable to kill up to 20 innocent civilians as well as the Hamas suspect, in the case of low-ranking targets.

The killing of more than a hundred innocents was authorised on several occasions, when the target was a suspected senior Hamas official. Sometimes homes were bombed when the suspect was not there because the information on their whereabouts was not verified in real time.

This was, essentially, a decision-making algorithm targeting people, their families and neighbours for assassination, embedded in a military institution, where organisational diktats determined that the killing of up to 20, or occasionally 100 or more, innocent people was an acceptable cost of eliminating suspected members of Hamas.

Tolerances for civilian deaths expanded as a matter of organisational, bureaucratic process. When you have a machine that can identify targets, accurately or not, there will always be demands to ID more targets. And if someone in the chain of command thinks those targets are not being dealt with efficiently enough, the pressure to relax the conditions under which they can be eliminated will ramp up.

International humanitarian law prohibits all of this but has no meaning or force for the people of Gaza.

IHL prohibits any attacks directly targeting civilians or civilian objects. They may be incidentally affected by attacks against lawful targets but only if it is proportional; and the attacker must take all feasible precautionary measures to avoid incidental effects on civilians. This includes any foreseeable knock-on, indirect adverse effects on civilians’ life and health such as exposure to toxic chemicals; and the prohibition of the use of starvation as a weapon of war.

By December 2023, Lavender reportedly identified around 37,000 Palestinians as suspected members of Hamas, operating with an accepted error rate of roughly 10 percent.

It is important to understand that AI systems don’t stand alone.

They are never just the technology.

They are always deployed in sociotechnical, organisational, political, social, economic, environmental and, in the case of military systems, geopolitical contexts, with all of the structural flaws and prejudices that come with people and institutions.

And the failure modes of these machines, and the sociotechnical systems they form part of, multiply exponentially, when they run in the real world. In military, health, border control, policing, criminal justice and welfare organisations – which are, themselves, forms of artificial intelligence – these failure modes can be life altering.


Former Greek finance minister, Yanis Varoufakis, spoke to someone who recently left their job at Palantir. He told Varoufakis the “Gaza event”, as he described it, was “very exciting” for technologists. He said what’s happening in Gaza is terrible BUT it was fantastic for Palantir.

He pointed at Varoufakis’s phone and said it was useless to him, at that moment, because it was lying on the table not being used. It’s only when you move with it and interact with it, that it produces data that Palantir can acquire to train their algorithms.

When you bomb people in highly densely populated areas, they panic and do a lot of things that can be tracked – they run around, try to escape, move and are moved, forcibly, from one place to another, they make calls, send messages, try to find loved ones, rush to hospitals, if there are any, and to other places for help.

All that activity was great for training Palantir’s algorithms. And Palantir, having trained their algorithms, using cloud services from AWS, Oracle etc., could develop AI tools to sell to, say, the UK NHS, MoD or police forces, with whom they have a combined collection of contracts worth nearly £700 million. So, hospitals, for example, can use the Palantir software for managing personnel inside the hospital during emergencies and crises.


The suffering of the Palestinian people in Gaza continues and is intensifying again. The limited amount of humanitarian aid that was being allowed in has been severely reduced, under cover of Israel’s and the US’s war with Iran. On Sunday evening a large wall collapsed onto displaced people’s tents, killing dozens and there are hundreds more injured or missing under the rubble.

We don’t need to wait for some future imaginary, malevolent AI to wipe us out. People in conflict zones all over the world are experiencing dystopian Armageddon right now.

Iran is in the news. Ukraine gets some attention.

Mostly, we are barely aware of their plight.

And by the way, just briefly getting back to autonomous weapons: developments in drone swarms in Ukraine are bringing us closer to the fictional slaughterbots that Professor Stuart Russell of UC Berkeley was warning of back in 2017.

So where does all this leave us?

We have hundreds of billions of dollars invested in the race for artificial general intelligence. So far that has served to concentrate control of the communications infrastructure that trains and runs the big AI models in the hands of a small number of big tech companies – the usual suspects, like Google, Amazon and Microsoft, as well as a few I mentioned earlier.

Politicians and boardrooms have bought into the hype that they have to embed AI in everything. Interestingly enough, though, a survey of 6,000 CEOs published within the past few weeks shows they are getting impatient that investments in AI have shown little return. More than 80% of them said they saw no impact on employment or productivity.

Little or no attention is being paid to the harm that AI tools have been enabling or to mitigating or eliminating that harm.

Cards on the table: I believe the focus on developing AGI is wrongheaded, and it is certainly not inevitable, as many would have you believe.

It’s a distraction from the good we could be doing with this technology and from fixing the damage we are already causing with it. Whether it is even possible to develop a machine that is better than everyone at everything is largely irrelevant. The mere quest for it is leaving a trail of harm in its wake, some of which we have covered this evening.

We would be far better off investing even a fraction of those hundreds of billions in smaller, specialised AI models, trained with carefully curated, scientifically sound datasets, focussing on solving specialised problems.

The quest for AGI does benefit:

·        Investors & big companies developing and selling technology

·        The companies grabbing and laundering personal data and creative work of others at enormous scale

·        Big economic actors who control the communications infrastructure of AI

·        Those trying to replace government services, in health, welfare, policing, criminal justice, defence and others, with cheap, automated systems, even when, more often than not, those systems are causing serious pain, suffering and distress

AI will do what we make it capable of doing.

Many of the AI models now in existence are impressively capable in the hands of people who can draw empirically sound, accurate and enlightening outputs from them. Too often they are used by people who don’t understand them, including researchers. Basic errors in research papers are shockingly frequent, especially when the authors are using off-the-shelf AI tools and are not trained computer scientists.

Reviews have found that the majority of machine learning research carried out in areas like health, social science and politics is flawed.

When used in life-changing areas like policing or immigration, by institutions that don’t understand them, the consequences can be severe or even deadly. It is actively inhumane to embed these black boxes in immigration, welfare, policing and criminal justice, border control, conflict, military, health or employment contexts as “normal technology”.

AI can help to teach, solve problems big and small, illuminate, inspire and possibly even fulfil some of the hype preached by its high priests – but only to the extent that WE are prepared to develop and use these technologies to those ends.

Otherwise, they will be giant, networked, resource draining, socially and environmentally damaging data centres of wires, chips, lights and black boxes, controlled by an increasingly concentrated group of intensely wealthy and powerful economic actors, for their own ends.

We won’t be dealing with Armageddon or Utopia but a dystopia even the combined imaginations of Orwell, Huxley and Kafka didn’t anticipate.


I’ll leave you with an extract from Joseph Weizenbaum’s 1976 book, Computer Power and Human Reason: From Judgment to Calculation (Freeman).

The question is not whether computers can make decisions in the context of human welfare but whether they should. Since we can’t make computers wise, we should not give them tasks that demand wisdom.


Selected sources and further reading 

The AI Now Institute

The Open Rights Group

The Distributed Artificial Intelligence Research Institute (DAIR) 

medConfidential 

Amnesty International

Human Rights Watch 

The Hind Rajab Foundation 

B'Tselem 

Foxglove

Big Brother Watch

Privacy International 

United Nations Human Rights Office of the High Commissioner 

Electronic Frontier Foundation

European Digital Rights (EDRi)

Electronic Privacy Information Center (EPIC) 

United Nations Relief and Works Agency for Palestinian Refugees in the Near East (UNRWA)

The AI Con: How to Fight Big Tech's Hype and Create the Future We Want by Emily M. Bender and Alex Hanna 

Data Grab: The New Colonialism of Big Tech (and How to Fight Back) by Ulises A. Mejias & Nick Couldry

Why Privacy Matters by Neil Richards 

Systems Thinking in the Public Sector by John Seddon

The Palestine Laboratory by Antony Loewenstein 

Atlas of AI by Kate Crawford 

Native: Race & Class in the Ruins of Empire by Akala 

AI Snake Oil by Arvind Narayanan & Sayash Kapoor

Your Face Belongs to Us by Kashmir Hill 

IBM and the Holocaust by Edwin Black 

The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence by Petra Molnar

Race After Technology by Ruha Benjamin 

Privacy's Blueprint by Woodrow Hartzog 

Supremacy: AI, ChatGPT and the race that will change the world by Parmy Olson 

Protecting Children in Armed Conflict by Shaheed Fatima KC 

Ghost Work by Mary L. Gray & Siddharth Suri

Computer Power and Human Reason by Joseph Weizenbaum 

The Demon-Haunted World: Science as a Candle in the Dark by Carl Sagan 

Drone Theory by Grégoire Chamayou

The Rise of Big Data Policing by Andrew Guthrie Ferguson

Enshittification: Why Everything Suddenly Got Worse and What to Do About It by Cory Doctorow 

The Age of Extraction: How Tech Firms Conquered the Economy and Threaten Our Future Prosperity by Tim Wu 

Underground Empire: How America Weaponized the World Economy by Henry Farrell and Abraham Newman 

Anatomy of a Genocide: Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 by Francesca Albanese

From Economy of Occupation to Economy of Genocide: Report of the Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 by Francesca Albanese

Mind the Gap: Foundation Models and the Covert Proliferation of Military Intelligence, Surveillance and Targeting by Heidy Khlaaf, Sarah Myers West & Meredith Whittaker

Joint Legal Opinion for the Open Rights Group - The Use of Artificial Intelligence Tools by Government: A Case Study of the Home Office's Asylum Practice by Robin Allen KC, Dee Masters & Joshua Jackson

The Dawn of the AI Drone by C.J. Chivers, New York Times 

Congress - Not the Pentagon or Anthropic - Should Set Military AI Rules by Alan Z Rozenshtein 

UK Government announcement of £1.2 billion strategic defence partnership with Palantir 

Ministry of Defence: Palantir Contracts - Parliamentary debate, 10 February. 2026, Hansard  

Human oversight of autonomous weapons doesn't mean much when the aim is to maximise destruction. Lucy Suchman 

Human Machine Autonomies by Lucy Suchman & Jutta Weber 

EU General Data Protection Regulation (GDPR)

EU ePrivacy Directive 

EU AI Act 

EU Digital Omnibus - proposal to amend/circumvent GDPR, E-Privacy Directive and EU AI Act

Joint statement on the Proposal for a Regulation as regards the simplification of the digital legislative framework (Digital Omnibus), by the European Data Protection Board (EDPB) & the European Data Protection Supervisor (EDPS)

Europe's Hidden Security Crisis by Johnny Ryan and Wolfi Christl for Irish Council for Civil Liberties 

America's Hidden Security Crisis by Johnny Ryan and Wolfi Christl for Irish Council for Civil Liberties  

Joint statement of security and privacy scientists and researchers on Age Assurance by 400+ privacy & security experts and scholars, March 2026 

Children's Wellbeing and Schools Bill Running List of Amendments, p20-21, 11 December 2025

Automating the Hostile Environment: AI in the asylum decision-making system by The Open Rights Group 

American Dragnet: Data-Driven Deportation in the 21st Century by the Center on Privacy & Technology, Georgetown Law

Physiognomic Artificial Intelligence by Luke Stark & Jevon Hutson 

Joint letter calling for the EU digital security agenda to promote fundamental rights and support a safe digital ecosystem 

Geneva Conventions

UN Convention on Certain Conventional Weapons