Monday, December 01, 2025

Note to MP on petition to block mandatory government digital ID scheme

At the behest of Big Brother Watch, I have written to my MP asking her to attend the parliamentary debate on the petition to block the introduction of a digital identity card scheme.

"Dear Layla, 

I am writing, as your constituent, to urge you to attend the December 8th petition debate to represent my opposition to the government’s plans for a mandatory digital ID scheme. As you know, the Liberal Democrats have historically opposed national identity card schemes and abolished the previous Labour government’s national identity card when entering a coalition government with the Conservative party in 2010.

Almost three million people have signed the petition, making it the fourth largest Parliamentary petition on record. The scale of the response is not a surprise. Millions of people have well-grounded concerns that a mandatory digital ID scheme will pose a threat to privacy, upend our relationship with the state, and make British residents even more attractive targets for hacking by foreign adversaries and criminals. It is critical that the government hears these concerns at the upcoming debate.

The government initially pitched the digital ID scheme as a means to check work eligibility. But since the scheme’s announcement, multiple ministers and members of Parliament have made it clear that the initial rollout of the digital ID will merely be the first step in a wider plan to, in the words of the Minister for Intergovernmental Relations, Darren Jones, “shut down the legacy state”. The Parliamentary Under-Secretary of State for Children and Families, Josh MacAlister, has stated the “first use case for it is around right to work checks… we’re not saying we’re going to boil the ocean in one go because the public would be really sceptical of that, so we’re starting with this issue of right to work checks first but there are loads of other applications for digital ID.” The planned mission creep is not even being hidden any more – they are boasting about it.

There is no guarantee that this or indeed a future government would not make digital ID a requirement to access a range of public and private services – including healthcare, education, childcare, tax payments and accessing age-restricted services. A digital ID system will last far beyond this government, meaning that we risk building the mass surveillance infrastructure for a less rights-respecting future administration.

A sprawling digital ID system represents a serious threat to privacy. A mandatory digital ID scheme will gather and store information about us. Each time an individual uses their digital ID, that use may be recorded in government databases, allowing vast amounts of information to be amassed, searched, and sorted to offer insights through data analysis and profiling. The public should not be forced to bare their lives to the state just to help it administer itself.

The introduction of a mandatory digital ID scheme would fundamentally change the relationship between the population and the state, shifting power away from individuals and towards the government.  We currently operate in a system where we can prove our identity as and when we need to with a variety of methods. Mandatory digital ID would turn that dynamic on its head by creating a society on licence, where we might soon need permission every time we interact with the state.

Mandatory digital ID would put the population’s personal data at unprecedented risk of data breaches by creating a honeypot for hackers and foreign adversaries. In the past year, breaches of the legal aid database and of data on Afghans relocating to the UK have resulted in the personal information of hundreds of thousands of people being leaked. The government’s GOV.UK One Login system is also reported to be susceptible to impersonation.

A mandatory digital ID scheme will not solve the problems its proponents claim it will, and all the usual excuses for it have been rolled out – right to work checks, stop the boats, smash the gangs, halt benefit fraud, border control and multiple variations on tackling the four horsemen of the infocalypse, i.e. child abusers, terrorists, organised crime and narcotics cartels. The government, this time, proposed the scheme as a means to crack down on illegal hiring practices, "tackle illegal migration, make accessing government services easier, and enable wider efficiencies." If we have learned anything from the first quarter of the 21st century, it is that building and deploying infrastructures of mass surveillance does not solve these problems.

The government has not presented a convincing argument that a mandatory digital ID scheme, which will inevitably cost billions of pounds, will persuade criminals who already break employment and immigration law to change their behaviour. In the end, it will be law-abiding people who suffer the effects of digital ID through privacy intrusions and security risks.

One final point that might be worth making is that the digital ID scheme is being pushed heavily by the Tony Blair Institute for Global Change. This “institute” has received huge sums of money from economic actors who stand to benefit significantly from contracts to build and operate the scheme. It reminds us to think of the five questions of a different Tony, Tony Benn –

  1. What power have you got?
  2. Where did you get it from?
  3. In whose interests do you exercise it?
  4. To whom are you accountable?
  5. How do we get rid of you?

Of particular importance in this context are questions 3 & 4.

The infrastructure of mass surveillance that constitutes the modern internet has been repeatedly exploited, by state, criminal and economic actors, to perpetrate widespread abuse, from basic privacy invasion to, tragically, the targeting of innocents for bombing. Another state-run digital ID surveillance system will only expand that capacity for abuse. If you build it, they will come – powerful abusers of all stripes, some even believing themselves to be well-intentioned, convinced of their own righteousness – and once deployed, mass surveillance infrastructure is incredibly difficult to combat, control or decommission. Please stand in defence of civil liberties, attend the December 8th debate, and articulate the dangers associated with a mandatory government digital ID scheme.

Kind regards,

Ray"

Update, 4 December 2025: I received a response from my Liberal Democrat MP, Layla Moran. She agrees with the concerns raised, and the party has launched its own No to Labour's Digital ID Cards petition.

"Dear Ray 

Thank you for taking the time to write regarding the Government’s proposals for compulsory digital ID. Unfortunately, due to previously arranged commitments, I will be unable to attend the petitions debate on December 8th. However, please rest assured that I and my Liberal Democrat colleagues share your concerns entirely. 

The Liberal Democrats are clear that a mandatory digital ID system would cross a red line. It risks eroding long-held civil liberties while doing little to address the Government’s stated aims of immigration enforcement.

We share concerns that a mandatory digital ID system threatens our right to privacy. Digital tools should be about giving individuals more control over their personal data, not giving the government more control over our lives.

We are also concerned that a mandatory Digital ID system could deepen digital exclusion and disproportionately affect society’s most marginalised – older people, people living in poverty and disabled people, who often have limited access to digital devices or low digital literacy.

This scheme is set to cost the taxpayer billions. If the government really wants to restore public trust in the immigration system, it could spend this money on Nightingale processing centres, as the Liberal Democrats have called for, to clear the asylum backlog, and still have lots left over to improve public services.

The British public has consistently rejected mandatory ID proposals over the decades, and the Liberal Democrats are proud to have led the charge on this against Tony Blair’s government in the 2000s. We once again stand ready to oppose Labour’s push for mandatory ID cards.

The Liberal Democrats have launched a petition to show the government the strength of public opposition to these plans. You can add your name here.

Thank you once again for taking the time to write. 

Best wishes,

Layla"

Friday, October 03, 2025

AI in education – some critical concerns on balancing innovation & ethics

Two colleagues, Mike Richards and Marian Petre, and I recently had a book chapter published, "AI in Education: Balancing Innovation and Ethics", in AI-Powered Pedagogy and Curriculum Design: Practical Insights for Educators, a book edited by Geoff Baker, Professor of Education, and Lucy Caton, Senior Lecturer in Education, at the University of Greater Manchester.

The chapter is a fairly standard academic treatise with a core set of ten guidelines for the ethical use of AI in education, though my initial starting point for the piece was something of a light polemic which I had not originally intended to share more widely.

In the wake of President Trump's recent state visit to the UK, however, when he parked Silicon Valley's tanks on Downing Street's lawn and got a fawning, acquiescent, servile response from the prime minister, I've reconsidered.

I'm not going to get into how the government's obsession with growth is misplaced. Professor Richard Murphy does that far better than I ever could. But Mr Starmer's deference to Mr Trump and his preferred Silicon Valley corporations and billionaires has significant implications for UK sovereignty, the economy, public services generally and education in particular. So without further ado...

AI in education - some critical concerns on balancing innovation & ethics 

Vast quantities of data scraped from the internet, often faulty, fallible, incomplete, collected without authorisation or consent, dirty, biased and otherwise problematic on multiple dimensions, get pumped into AI systems to “train” them. These systems are then used to make claims about the world or make decisions affecting real lives that are often not credible or even verifiable.

Large language models (LLMs), like ChatGPT, make probabilistic predictions about what the next word in a sentence should be, in order to spit out apparently plausible text. But the model, though it is assumed, falsely, to be intelligent, neutral and objective, is just making stuff up, however plausible or useful the outputs appear to be. Professor Emily M. Bender of the University of Washington describes these LLMs as synthetic text extrusion machines.
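To make that concrete, here is a deliberately toy sketch of next-word prediction (my illustration, not from the chapter, and the words and probabilities are invented). The "model" is nothing more than a table of probabilities from which a continuation is sampled; real LLMs learn vastly larger distributions over tokens from scraped corpora, but the underlying operation – emit a statistically plausible next word, with no model of truth behind it – is the same.

import random

# A toy "language model": for each current word, a probability
# distribution over plausible next words. Real LLMs learn far
# larger distributions, but the core operation -- sampling a
# plausible continuation -- is the same.
TOY_MODEL = {
    "the": {"cat": 0.5, "dog": 0.3, "minister": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "minister": {"resigned": 0.5, "announced": 0.5},
}

def next_word(current):
    # Sample the next word from the model's distribution;
    # fall back to an end marker for unknown contexts.
    dist = TOY_MODEL.get(current, {"<end>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

word, sentence = "the", ["the"]
while word != "<end>" and len(sentence) < 6:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))  # fluent-looking output, no understanding behind it

Nothing in that loop knows or cares whether "the minister resigned" is true; it only knows the sequence is statistically likely. Scale the table up by many orders of magnitude and the outputs become remarkably fluent, but the epistemic situation is unchanged.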

These technologies have been and are being integrated into core social, political and economic systems and institutions. There has been an exponential expansion in police rollouts of face recognition technology in the UK. Algorithms are being used for predictive policing, to decide whether the suspect in a crime should be released on bail and to determine whether someone’s digital profile is sufficiently terrorist-like that they should have a bomb dropped on their home. They are being used to select who should have access to housing and employment.

They are determining which schools kids get to go to and, in schools, children’s access routes to library books or lunch. They are deployed for automated grading of assessment and to decide whether students are attentive enough in class or whether they are cheating on exams. They are targeting people with disabilities for social welfare case review and wrongly accusing hundreds of thousands of people of fraud. They are becoming core to border infrastructure, deciding whether refugees should be denied passage or let through.

In short, they are being used for classification, rationing the allocation of limited resources and, all too often, inflicting harm.

They have been proven to be racist and discriminatory, disproportionately inflicting harms on historically marginalised groups.

If we are going to continue to build these tools into the infrastructure of education systems, taking the baseline assumption that education is a public good, we have to be asking, right up front, what are the protective or precautionary guardrails?

What are the checks against the harms that inappropriate or poor deployment of these technologies can cause?

What are the safeguards protecting those these tools are used on from the assumptions of the techbros who profit from them?

What are the protections against systemic and institutional racism and discrimination being hardwired into these systems?

What are the restraints on the harms that educational institutions’ systemic target and metric chasing perpetrates against students and staff, e.g. claims that student analytics are the key to student engagement, success and wellbeing? What are the guardrails, in education, when we deploy surveillance-intensive personalised-tuition robo-tutor chatbots, automated assessment grading, exam proctoring software, plagiarism detection, and personality, intelligence and attention deficit testing?

On the subject of systemic discrimination, how do we ensure we are not solidifying historic patterns of inequity, into these unquestionable/unchallengeable AI black boxes; biases learned from the vast quantities of data, often faulty, fallible, incomplete, collected without authorisation or consent, dirty and otherwise problematic on multiple dimensions, that we are training them on?

Specifically, we need to be asking:

How is the AI being developed?

By whom?

With what purpose?

How does it work and what data is being used to “train” it?

How and through what sources and with what permissions is that data acquired and monitored/reviewed for e.g. biases?

Where is that AI being used?

In what context?

How reliable, accurate, secure and safe is it?

Who benefits?

Who is harmed?

What other problems or opportunities are being generated?

Are the fundamental rights of students and workers being affected? Which & whose rights and how?

How is the AI being tested and monitored in operation?

How is it being integrated into not just education but our society, our environment, our politics, our economies, our laws, our technologies?

What are the effects of this?

Who is accountable for those effects?

How do we mitigate harms, improve or evolve or decommission the AI and the economic/ political/social actors benefiting from those harms, when and where necessary?

Many of these questions are sidelined in public debate and in institutional and political decision making. Post-hoc, superficial approaches to managing AI systems are proffered or preferred. Institutions can put someone in charge of AI ethics and/or hire an ethics consultant to run a periodic training seminar telling staff that everyone is responsible for behaving ethically – box ticked, corporate AI ethics responsibility satisfied. Building ethical considerations in from inception is really hard. Ticking a box on a form, having an AI ethics policy which gathers dust on a shelf or in a digital archive, and putting someone in charge of AI ethics is easy.

How do industry or researchers clean up data, whether for predictive policing or student analytics? Researchers at the AI Now Institute are seriously sceptical about the capacity of simplistic tech fixes to address this. Importing historical, biased data to design your system is the problem. The data cannot be cleaned with any degree of efficacy. And if, as in most of these systems, it is proprietary data processed by proprietary algorithms, no independent verification of the claimed data cleansing process can be performed. All outsiders can see is a black box and ‘trust us, we’re the experts’ public relations from the vendor.

The fundamental issue is that you cannot deploy structurally, systemically and institutionally racist and otherwise discriminatory systems and then engage in post hoc mitigation of harms with any degree of efficacy. By the time AI is embedded in infrastructure it is too late.

As systems theorists and practitioners have long understood, one of the most critical approaches to implementing complex systems, and to enabling some degree of success, is to involve affected communities in the decision making about those systems – such as the adoption of AI as core educational infrastructure. The people who are obliged to use these systems, and those on whom they are used, are going to be most cognisant of potential harms. They will also be best placed to understand the practical issues around deployment and whether the systems should be used at all. In the education context, students and staff are the key stakeholders who need to be involved.

There are a host of other questions to consider.

How do we think about how we represent the world and students’ needs in an educational context? What are the institutional and broader politics and ethics of AI in education? What are digital student analytics, exam proctoring, automated assessment and robot tutors proxies for? Are they legitimate, useful, adequate or harmful proxies, and for which stakeholders? AI systems are political, not neutral or objective. They are built by people – disproportionately techbros – so it matters who these people are, what their worldview is and what problems they believe they are attempting to solve. They are not education specialists, but they are selling ‘solutions’ to educational challenges. ‘Solutions’ that education leaders grab enthusiastically, as they vie to be seen as forward-thinking digital futurists and innovators; not to mention that AI solutions offer promises of cost and efficiency savings.

When AI touches complex social, political and economic areas such as education, it is critical that we understand the technology, its deployment and its effects, broadly and deeply, because we are using it to affect people’s lives. Students must be encouraged and taught to be sceptical of these systems, or at least to analyse them critically, and we need vastly more research to test AI systems independently.

The integration of AI into educational infrastructure must be treated with caution, and there may well be an argument for applying a precautionary principle when considering it. Mechanisms for key stakeholders, in particular students and staff, to participate in the decision-making processes around deployment, and the facilitation of the time, resources, space and agency needed to understand, assess and, where necessary, reject such systems, are crucial.

Just one final general thought to conclude which may well make all of the above moot:

Are ethics in AI, or efficacious ethical guidelines or controls, even remotely conceivable for a technology built through mass exploitation and abuse of labour, including children’s; egregious, colonial, exploitative rare-metal and other resource-intensive mining and manufacturing processes; mass copyright infringement; industrial-scale privacy invasion; concentrated control of power and capital enveloping it; and the consumption of hideous amounts of energy and water?

Monday, September 08, 2025

Submission to ICO call for views on ICO approach to regulating online advertising

I have submitted a response to the Information Commissioner's Office's call for views on their approach to regulating online advertising.

It seems clear that the ICO, having concluded unequivocally, back in 2019, that the adtech industry was egregiously violating the GDPR, is now moving towards finding ways to enable the industry to avoid or evade its data protection obligations with impunity.

The pressure from the UK government to facilitate growth will be a factor here. However, the ICO has been underperforming on enforcement for quite some time, as its own annual reports, including the latest, and eminent commentators such as Professor David Erdos of Cambridge University and Baroness Young of Old Scone, the former head of the UK’s Environment Agency, have unimpeachably noted. As Prof Erdos says, the ICO's

"strong duty to enforce and to respond to complaints has generally not been reflected in ICO practice and, even more concerningly, its accelerating stance points strongly away from rather than towards any expectation of regular and concrete regulatory action...

Especially post-Brexit, it may be argued that UK regulation as a whole is beset by a serious enforcement gap and that the ICO’s track-record merely reflects this. Even if true, this would in no way demonstrate that such an outcome is acceptable either in the abstract or given the UK GDPR’s very specific expectations...

the ICO’s relative performance cannot be explained by a lack of resourcing as it is likely the single best resourced data protection (and freedom of information) authority in the world with approximately 1,000 members of staff.  Rather it has been primarily driven by deeply rooted ICO internal culture which has been fuelled by a lack of effective accountability mechanisms for data subjects and by an Information Commissioner who has publicly set his face against full use of the UK GDPR’s powers by, for example, peremptorily degrading fines in the public sector in June 2022 and, without clear evidence, stating in November 2024 that neither high value nor high volume fines against companies were the best way to achieve impact."

Baroness Young was, if anything, more damning when speaking in the House of Lords in 2023,

"We need a powerful and effective regulator. The ICO’s enforcement and prosecution record has not been sparkling, with low levels of enforcement notices, prosecutions and fines. If, when I was at the Environment Agency, I had had as low a level of those as the Information Commissioner has had, I would think I had gone to sleep somewhere along the line."

The ICO's own recent defence of the lack of enforcement action against the Ministry of Defence, following the serious breach involving the disclosure of personal details of thousands of Afghans who worked with British forces during the UK’s presence in Afghanistan, was weak, at best. 

This latest consultation, therefore, which seems to be about enabling the adtech industry to continue to generate oceans of what the Open Rights Group rightly call stalker ads, would appear to be another worrying indication of a longer term trend of the inefficacy and regulatory capture of the ICO. 

For the past three decades, we have transformed the greatest communications medium in the history of the planet, the internet, into an invasive, toxic and, in the case of the people of Palestine and other marginalised peoples, oppressive and deadly mass surveillance machine. We have facilitated the development and deployment of this infrastructure primarily in the interests of economic actors who generate their main revenues through targeted, aka stalker, advertising.

States have enthusiastically supported, exploited and engaged in these developments, in their own interests. Essentially, the most powerful economic and political forces have all been pushing in the same direction - more mass surveillance, more mass data collection, storage, analysis and processing, more mass privacy invasion. Hypnotised by the belief that the magic computerised mass surveillance machine can deliver their economic or political dreams, decision makers have paid little substantive attention to the social or democratic consequences.

The most powerful economic actors are thriving, even as the services and technology they offer get worse, so they are happy. Policymakers are in thrall to billionaire techbros building machines they claim can solve the politicians' complex socio-political-economic-environmental problems. The techbros are now claiming we should not even worry about burning the planet, in the quest to build an AI machine that will figure out how to fix the problems of global warming, climate change, environmental destruction. 

When the magic tech solutions don't work for the politicians and economic actors, they become even more obsessed with making them work - techies must just nerd harder. Besides, as their billionaire techbro mates will constantly remind them, the technology is better now; it will work this time, or sometime in the future, if only you give them the licence and enough resources. Innovation and growth will see us through.

My colleague at the Open University, Dr. Syed Mustafa Ali, characterises all this as a combination of racial capitalism and digital colonialism.  

I would simply ask -

Is the world more stable than it was 30 years ago?

Have we tackled global warming?

How about racism and xenophobia?

Discrimination on all fronts? 

War crimes and genocide? 

Crime?

Terrorism?

Poverty?

Equity and equality?

Conflict and mass population displacement?

Social welfare?

Fair and equal access to healthcare, treatment and medicines?

Migration, immigration & border control?

Security and intelligence?

Disinformation & misinformation?

Concentration of wealth and power? 

Exploitation and abuse of the under-privileged, on every dimension?  

Housing?

Employment? 

The obsession of the UK government - economic growth, at seemingly any cost?

Enabling people to live in basic comfort and dignity? 

Is there a greater recognition of and respect for fundamental human rights?  

What problems has this giant mass privacy invasion machine solved? 

How well has it solved them?

What other problems has it caused?

How much has it cost, not just in economic terms but in structural social, political, cultural, environmental, in human, personal, community, local, regional, national and global contexts?

Has it been or is it worth it? 

How can we better shape and/or retrofit and/or evolve and/or scrap and replace the architectures of these systems in more socially progressive ways, in the public interest?

Perhaps giving the most blatant and flagrant, systemic and systematic violators of basic data protection rights the ICO's blanket blessing, to continue and expand those practices, might not be the most appropriate course of action?

With all that in mind, and at a time when the appalling notion of chat control is coming perilously close to being endorsed by most EU governments, my response to the ICO consultation, copied below, is, in places, direct. Please excuse the formatting - it is copied and pasted from the pdf version of my submission via the ICO webform. It should be noted that parts of my response are also guided by and/or edited from the response of the Open Rights Group to the same call for views.

"Submitted to Our approach to regulating online advertising - Call for views
Submitted on 2025-09-05 21:31:09
Advertising purposes and capabilities
1 Ad delivery and billing
What features within ad delivery and billing are the minimum requirements for a commercially viable advertising model, and why?:
Up front I would remind the ICO of your own adtech & RTB report in 2019 specifying:
"general, systemic concerns around the level of compliance of Real Time Bidding (RTB):
1. Processing of non-special category data is taking place unlawfully at the point of collection due to the perception that legitimate interests can be used for placing and/or reading a cookie or other technology (rather than obtaining the consent PECR requires).
2. Any processing of special category data is taking place unlawfully as explicit consent is not being collected (and no other condition applies). In general, processing such data requires more protection as it brings an increased potential for harm to individuals.
3. Even if an argument could be made for reliance on legitimate interests, participants within the ecosystem are unable to demonstrate that they have properly carried out the legitimate interests tests and implemented appropriate safeguards.
4. There appears to be a lack of understanding of, and potentially compliance with, the DPIA requirements of data protection law more broadly (and specifically as regards the ICO’s Article 35(4) list). We therefore have little confidence that the risks associated with RTB have been fully assessed and mitigated.
5. Privacy information provided to individuals lacks clarity whilst also being overly complex. The TCF and Authorized Buyers frameworks are insufficient to ensure transparency and fair processing of the personal data in question and therefore also insufficient to provide for free and informed consent, with attendant implications for PECR compliance.
6. The profiles created about individuals are extremely detailed and are repeatedly shared among hundreds of organisations for any one bid request, all without the individuals’ knowledge.
7. Thousands of organisations are processing billions of bid requests in the UK each week with (at best) inconsistent application of adequate technical and organisational measures to secure the data in transit and at rest, and with little or no consideration as to the requirements of data protection law about international transfers of personal data.
8. There are similar inconsistencies about the application of data minimisation and retention controls.
9. Individuals have no guarantees about the security of their personal data within the ecosystem."
Having not addressed these concerns, why does the ICO now wish to find ways to permit the advertising industries to continue to evade data protection law?
2 Ad fraud prevention and detection
What features within ad fraud prevention and detection are the minimum requirements for a commercially viable advertising model, and why?:
How is this question within the remit of the ICO?
3 Brand safety, brand suitability and brand compliance
What features within brand safety, brand suitability and brand compliance are the minimum requirements for a commercially viable advertising model, and why?:
I realise the ICO is under pressure to legalise some targeted online advertising, without consent, so the advertising industry can continue to generate revenues but, again, how is this question within the remit of the ICO?
4 Frequency capping
What features within frequency capping are the minimum requirements for a commercially viable advertising model, and why?:
Yet again, how is this question within the remit of the ICO?
5 Measurement and attribution
What features within measurement and attribution are the minimum requirements for a commercially viable advertising model, and why?:
Yet again, how is this question within the remit of the ICO?
6 Targeting
What features within targeting are the minimum requirements for a commercially viable advertising model, and why?:
Why is the ICO so concerned with the commercial viability of advertising models? If I can quote from your own website, "The Information Commissioner is the UK’s independent regulator for Data Protection and Freedom of Information, with key responsibilities under the Data Protection Act 2018 (DPA) and Freedom of Information Act 2000 (FOIA)." Your "role is to uphold information rights in the public interest", not working out commercially viable advertising models. You cover:
Data Protection Act
Privacy and Electronic Communications Regulations
Environmental Information Regulations
eIDAS Regulation
NIS Regulations
Freedom of Information Act
General Data Protection Regulation
INSPIRE Regulations
Re-use of Public Sector Information Regulations
Investigatory Powers Act
None of these, as I understand the regulations concerned, has anything to do with "What features within targeting are the minimum requirements for a commercially viable advertising model, and why?"
RTB and behavioural advertising blatantly operate in breach of data protection regulations where, once consent is given (and often, even when it is not), adtech intermediaries process, share and re-purpose this data at will. This is illegal under the UK GDPR, which requires data not to be processed beyond the specific, granular purpose for which consent was given.
The true commercial viability of advertising practices cannot be measured, despite the claims of the industry on its efficacy. Prices are distorted and the market is dominated by giant oligopolies and the consequent unfair competition of non-compliant advertising practices. Rather than asking what targeting features are needed to attain a “commercially viable advertising model”, should not the ICO be taking action to enforce the data protection law so egregiously ignored by the industry, to remove illegal advertising from the market, and to restore a level playing-field for actual law-abiding businesses?
7 How significant are the changes in ICO regulatory posture towards PECR regulation 6 consent requirements that would be required to enable delivery of a commercially viable advertising model?
Change needed - Ad delivery and billing:
Change needed - Ad fraud prevention and detection:
Change needed - Brand safety, brand suitability and brand compliance:
Change needed - Frequency capping:
Change needed - Measurement and attribution:
Change needed - Targeting:
No change
Please explain your answer:
Why is the ICO concerned with undermining PECR to "enable delivery of a commercially viable advertising model"?
Regarding the changes on targeting, there should be 'No change'.
The ability to target individuals based on personal data is the main enabler of the harms, discrimination and predatory practices that plague online advertising. Targeting based on personal data exposes women to unjust prosecutions for their attempts to exercise reproductive health rights; problem gamblers to gambling ads meant to exploit their addiction; anyone to exclusion on the basis of their gender, sexual preferences, ethnicity or other sensitive characteristics; and children and those in a more vulnerable position to being targeted and taken advantage of.
These are not unfortunate outcomes, but a feature of the technology. Behaviour is the only personal data that can be observed and captured by storage and access technologies. It is never a reliable proxy for an individual's characteristics, preferences or inner desires, but it is a reliable means to identify addiction, health statuses and other syndromes—all of which are, indeed, recognisable by “typical”, “compulsive” behaviours and clearly discernible patterns of behaviour.
A system that is inherently bad at guessing your commercial preferences but inherently good at identifying weak spots that can be exploited does, not surprisingly, serve the purpose of exploiting individuals better than it serves the purpose of delivering legitimate advertising. Advertising systems that target individuals on the basis of personal data should NEVER be considered low-risk or exempted from consent requirements. That the ICO is running a consultation on how to enable such activity would appear directly at odds with your duty to act in the public interest.
This call for views includes the following statement: “We will continue to enforce consent requirements for collecting personal information for ad targeting and personalisation.”
A RELAXATION of the consent requirement for ad targeting, based on ANY amount of personal data, is clearly outside the scope of this consultation. It is important, therefore, that the ICO honour this statement.
Impacts of our approach
8 How far do you agree that the approach outlined in our call for views can identify commercially viable solutions that can also safeguard people’s privacy and improve user experience?
Strongly disagree
Please explain your answer:
Your call for views appears specifically designed to undermine people's privacy in order to facilitate the industry's ongoing data protection breaching practices.
I strongly disagree that such a consultation can remotely "safeguard people's privacy and improve user experience." On the contrary, this call for views displays all the hallmarks of regulatory capture by industry.
DPA tolerance and facilitation of non-compliant advertising practices prevent a meaningful measurement of the true value and commercial viability of advertising practices, let alone those “that can also safeguard people’s privacy and improve user experience”.
The approach of this call for views turns the relationship between commercial viability and “safeguarding people's privacy and improving user experience” on its head: it is the duty of the economic actors in the advertising industry to commercialise their services WITHIN the boundaries of and IN COMPLIANCE with the norms that have been established by legislation. The UK GDPR and PECR already require advertising to be done in a manner that safeguards privacy and our agency. The role of the ICO is to enforce these boundaries, NOT to adapt them to meet the needs of non-compliant advertising firms or to enable such companies to evade their legal obligations.
In the event of exemptions to cookie consent requirements being adopted, “safeguarding people's privacy” would ultimately depend on the limits and safeguards in place that underpin those exemptions. The call for views provides some clarification of what will not be exempted, but does not clarify what practices are being considered to be covered by those exemptions. Without such details it is impossible to evaluate how the ICO is proposing to “safeguard people's privacy” while conducting this call for views.
9 Would you anticipate any of the following positive impacts if any of the capabilities referenced were permitted without PECR consent in circumstances where the ICO considers them to be low risk to people? Please select all that apply:
If other, please specify:
I would anticipate NO positive impacts, in the public interest, of enabling industry to evade or circumvent their legal obligations.
Which positive impacts for which stakeholders, in particular, is the ICO interested in? The interests of citizens and customers are not coincident with the interests of the ecology of economic actors that make up the advertising industry.
Please provide any evidence on the likely scale of these positive impacts:
I refer you to the Irish Council for Civil Liberties report from 2023, Europe's Hidden Security Crisis, on the wider NEGATIVE impacts of RTB, which concludes that RTB is not just a privacy concern but a national security concern:
"Real-Time Bidding (RTB) allows foreign states and non-state actors to obtain compromising sensitive personal data about key European personnel and leaders.
Key insights:
Our investigation highlights a widespread trade in data about sensitive European personnel and leaders that exposes them to blackmail, hacking and compromise, and undermines the security of their organisations and institutions.
These data flow from Real-Time Bidding (RTB), an advertising technology that is active on almost all websites and apps. RTB involves the broadcasting of sensitive data about people using those websites and apps to large numbers of other entities, without security measures to protect the data. This occurs billions of times a day...
...EU military personnel and political decision makers are targeted using RTB...
...Google and other RTB firms send RTB data about people in the U.S. to Russia and China, where national laws enable security agencies to access the data. RTB data are also broadcast widely within the EU in a free-for-all, which means that foreign and non-state actors can indirectly obtain them, too.
RTB data often include location data or time-stamps or other identifiers that make it relatively easy for bad actors to link them to specific individuals.
Foreign states and non-state actors can use RTB to spy on target individuals’ financial problems, mental state, and compromising intimate secrets. Even if target individuals use secure devices, data about them will still flow via RTB from personal devices, their friends, family, and compromising personal contacts.
In addition, private surveillance companies in foreign countries deploy RTB data for surreptitious surveillance. We reveal “Patternz”, a previously unreported surveillance tool that uses RTB to profile 5 billion people, including the children of their targets...
Cambridge Analytica style psychological profiling of target individuals’ movements, financial problems, mental health problems and vulnerabilities, including if they are likely survivors of sexual abuse."
See also ICCL's reports on America's and Australia's hidden security crises from 2023 & 2024 and their 2025 RTB complaint to the FTC.
10 Would you anticipate any of the following negative impacts if any of the capabilities referenced were permitted without PECR consent in circumstances where the ICO considers them to be low risk to people? Please select all that apply:
Worsened customer experience, Increased risk of privacy harm
If other, please specify:
There will be an increased risk of privacy harm, accompanied by what you describe as "worsened customer experience." This questionnaire gives adtech providers ample freedom to argue in favour of removing consent requirements for a range of purposes, as listed in questions 1-6 – questions which themselves seem to be outwith the duties and responsibilities of the ICO.
The call for views does not provide any proposal whose impact on people’s privacy can be commented upon. Further, the call for views allows industry players to keep their responses confidential, which could prevent industry submissions in favour of deregulation from being scrutinised publicly.
The “scale of these negative impacts" can only be measured when specific proposals are presented.
The shape and form of this call for views makes it likely that the responses of the industry will be over-represented. In turn, the views of those concerned about the significant increased risk of privacy harms, which will undoubtedly arise from providing industry with an ICO licence to evade data protection laws, will be under-represented.
Please provide any evidence on the likely scale of these negative impacts:
I refer you to Cory Doctorow's book, longlisted for the Financial Times and Schroders Business Book of the Year 2025, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.
Doctorow diagnoses the broader issues with unchecked oligopolistic markets online and the negative impacts already out of control. What Doctorow proposes as the way forward is exactly the opposite of easing the pressure on these industries – not the ICO's apparent ideas about giving the advertising industry more freedoms. From the book description:
"Misogyny, conspiratorialism, surveillance, manipulation, fraud, and AI slop are drowning the internet. For the monopolists who dominate online - X, TikTok, Amazon, Meta, Apple - this is all part of the playbook. The process is what leading tech critic Cory Doctorow has dubbed 'enshittification'. First, the platform attracts users with some bait, such as free access; then the activity is monetized, bringing in the business customers and degrading the user experience; then, once everyone is trapped and competitors eradicated, the platform wrings out all the value and transfers it to their executives and shareholders.
As a result, online public squares have become places of torment, and online retailers are hellish dumpster fires. The virtual gathering places where we once imagined the world's problems might be resolved are now a sewer of hatred and abuse - thoroughly enshittified.
Doctorow enumerates the symptoms, lays out the diagnosis, and identifies the best responses to these diseased platforms: the monopolies online must be shattered. Companies too big to fail or to jail - and much too big to care - must be cut down to size. Only an attack on corporate power will permit effective regulation and real privacy. Tech unions must protect the workers who should, in turn, defend us against their bosses' sadism and greed."
11 Do you see any challenges in delivering commercially viable advertising if the ICO were to revise its regulatory posture towards regulation 6 PECR requirements for specific advertising purposes?
Unsure / Don't know
Please explain your answer:
"Challenges in delivering commercially viable advertising" should be no part of the concern of the ICO.
Diluting the ICO's "regulatory posture towards regulation 6 PECR" to suit an industry notorious for systematic and systemic breaches of its legal obligations under data protection and privacy regulations would seem the very definition of dereliction of duty on the part of the ICO.
Technical safeguards
12 Are you aware of any technical safeguards to reduce data protection and privacy risks of storage and access of information for the advertising purposes listed above?
Please provide your answer:
Though technical architectures can be configured in privacy enhancing ways, with default settings providing better respect for privacy, there are no solely technical solutions to reduce data protection and privacy risks on the internet.
Structurally, privacy protection requires a combination of regulations, economic incentives/sanctions, and social and technological architecture measures working in harmony to offer the requisite respect for fundamental rights.
According to Schedule 12(2) of the Data (Use and Access) Act:
"(3) [...] the means by which the subscriber or user may signify consent include—
(a) amending or setting controls on the internet browser which the subscriber or user uses;
(b) using another application or programme."
So, powers to amend exemptions to Regulation 6 PECR, which the ICO is proposing should be amended in favour of industry, could actually be used to give legal enforceability to technical signals.
Giving legal enforceability to technical signals could allow individuals to express consent for online advertising targeting via browser settings and communicate it persistently as they browse the internet. But that is only tinkering at the edges of the need for network architecture, regulatory, economic incentive and social reforms.
13 Do you currently use any technical safeguards or PETs in your online advertising model?
Please provide your answer:
I don't have an online advertising model.
14 Are you aware of any recent innovations which significantly reduce the data protection and privacy risks of one or more of the capabilities?
Please provide your answer:
No, and even if there were, there are no simple tech fixes to complex sociotechnical, economic and architectural systemic problems.
That politicians, media and regulators like to believe there are is a fundamental part of the reason why we keep making the same mistakes in failing to shape the development and deployment of these technologies in the public interest.
About you and your organisation
15 Are you responding on behalf of an organisation?
I'm not responding on behalf of an organisation
If other, please specify:
16 If you are not responding on behalf of an organisation, are you answering as:
An academic
If other, please specify:
Final comments
21 Before completing this call for views, do you have any final comments you have not made elsewhere?
Please provide your comments:
I would just like to repeat my concern that the shape and form of this call for views makes it likely that the responses of the adtech industry will be over-represented. In turn, the views of those concerned about the significant increased risk of privacy harms that will undoubtedly arise from providing industry with an ICO licence to evade data protection laws will be under-represented.
22 We may wish to contact you for further information on your responses. If you are happy to be contacted, please provide your name and an email address below.
Please provide your name:
Ray Corrigan
23 We may publish in full the responses received from organisations or a summary of the responses. If so, we would like your permission to publish your consultation response. Please indicate your publishing preference:
Publish response"

Thursday, June 12, 2025

Liberal Democrats perspective on e-Visas

I've had a response from Layla Moran's office to my email about the government planning to use e-Visas for immigration raids.

"Dear Ray,

Thank you for taking the time to share your concerns about e-Visas.

As Co-President of The European Movement, Layla is deeply concerned by the growing number of cases where EU citizens have faced severe consequences due to errors or failures in the Home Office’s digital system. As you rightly point out, problems with proving immigration status can affect everything from healthcare access and housing to employment and education, all of which are fundamental rights people should be able to rely on.

We have heard of real-life experiences which highlight a system that is failing in both transparency and reliability, and we agree that this must be urgently addressed. The Liberal Democrats agree with Settled that a status system that leaves people in limbo, unable to prove their right to live, work, or access essential services in the UK, is not fit for purpose.

We believe EU citizens deserve a secure, reliable and inclusive form of proof of status. The Home Office must take urgent steps to restore confidence in the system, including commissioning an independent review into the operation of the eVisa platform and ensuring there are accessible backup forms of status available. Everyone who has the legal right to be in the UK should be able to prove it easily and consistently.

We continue to call for stronger UK–EU relations that go beyond the instability and bureaucracy of recent years, including by supporting Early Day Motion 1318. This EDM welcomes the progress made at this week’s UK–EU Summit, but also expresses concern that there is still much more to be done, not least on restoring people’s mobility rights and addressing the red tape many now face.

We greatly appreciate the work that Settled has done and continues to do. Please rest assured that Layla and her Liberal Democrat colleagues will keep pressing the Government to show greater ambition in resetting our relationship with Europe and to ensure that those who live, work, and contribute to communities in the UK, including long-standing EU residents, are treated with the dignity and security they deserve.

Thank you again for getting in touch.
 

Best wishes,

Office of Layla Moran
Liberal Democrat Member of Parliament for Oxford West & Abingdon
"

 

Tuesday, May 20, 2025

Ban police use of face recognition technology

At the suggestion of the Open Rights Group, I've written to my local council about police use of face recognition technology.

 "I am writing to urge you to to ban police use of live facial recognition technology in Abingdon East.

These surveillance tools are being deployed by police without public consent, or clear legal grounds, undermining the right to privacy, the right to be presumed innocent and making police misuse of the technology more likely.

These tools do not predict crime – they predict policing. Built on flawed, discriminatory data, they disproportionately target Black and racialised communities, low-income areas, and migrants. Rather than making our communities safer, these technologies reinforce racism and criminalise poverty.

“Predictive policing” strips away our fundamental right to be presumed innocent. It creates fear, not safety. The European Union has already recognised these harms and taken action to ban such technologies. The UK must do the same.

It is well documented that when police are given new tools, it is Black and racialised communities that bear the brunt of the harm, with consequences including increased police harassment, injury from use of force and unjust stop and search. I am deeply concerned for the safety of my neighbours if we allow police use of “crime predicting” technology to become any more embedded in our neighbourhoods.

We all want to feel safe in our local areas, but real safety comes from investing in our communities – not from surveillance that fuels fear and distrust.

That is why I urge you to stand in solidarity with your local community, by bringing forward a motion to ban this dangerous tech. As your constituent, I want to feel confident that my rights and safety are being protected.

Please take urgent action to prohibit the use of “crime-predicting” technologies and protect the rights and freedoms of all."

Lib Dem response on Data Use & Access Bill

I've had a response from the Liberal Democrats on my concerns about the Data Use & Access Bill.

 "Dear Ray,

Thank you for taking the time to write. We are responding on Layla’s behalf whilst she takes parental leave. You can view the full transcript of the Bill’s third reading, along with a record of Layla’s proxy votes on amendments, here.

The Liberal Democrats welcome the omission of many of the more objectionable elements of the previous DPDI bill, which was introduced by the previous Conservative government but fell when the General Election was called.

Despite these changes, retention and enhancement of public trust in data use and sharing is a major issue in the bill. The focus on smart data and sharing of government data means that the Government must do more to educate the public about how and where our data is used and what powers individuals have to find out this information.

There are still major changes proposed to the Data Protection Act 2018 (GDPR), such as in regard to police duties and Automated Decision Making, which continue to make retention of data adequacy for the purposes of digital trade with the EU of the utmost priority in considering any changes.

We continue to believe that GDPR is not in need of fundamental reform, but rather, where there is any ambiguity in interpretation, clarifications incorporating relevant recitals to the GDPR should be made in the legislation and in improved guidance.

The Liberal Democrats will continue to follow the progress of this Bill closely. Thank you once again for taking the time to get in touch. 


Best wishes,

Office of Layla Moran"

Bottom line - the Lib Dems are not concerned about the expansion in data sharing proposed in the Bill, just that "the Government must do more to educate the public" about it, and that the GDPR is not in need of reform.

Wednesday, May 14, 2025

Government plan to use eVisa scheme for immigration raids

The government announced this week that the Home Office's flawed eVisa scheme will be used for immigration raids, so I've followed up my earlier email to my MP to alert her to this development. The Open Rights Group is coordinating an effort to make MPs aware and to ask them to resist this scheme.

Dear Layla,

I wrote to you recently about the Home Office’s flawed eVisa scheme.

This week, little noticed amid media reporting of the Prime Minister’s unconscionable ‘island of strangers’ speech targeting immigrants, the Government announced that the eVisa scheme would be used to support immigration raids. If this goes ahead, people with the legal right to be in the UK could be deported because of flaws in the Home Office’s systems.

Since the rollout of the eVisa scheme, the human rights organisation the Open Rights Group has heard about travellers stranded at airports, refugees unable to rent a home or get a job, and even a man made homeless because of a data error. But these harms would pale into insignificance if eVisa data is used for immigration raids that result in deportation.

Please will you contact the Secretary of State for the Home Department, Yvette Cooper, and urge her to stop eVisa data from being used for immigration raids, and press again for the government to provide an offline alternative for people to prove their immigration status when the eVisa is not working. The seeds of another Windrush scandal have been sown and are sprouting.

Regards,

Ray