Monday, April 27, 2026

AI is being used inappropriately in asylum cases

Another note to my MP, Layla Moran, at the behest of the Open Rights Group:

Dear Layla,

One of the most important decisions that the state can make is whether or not to grant asylum to those seeking refuge in the UK.

 I am deeply concerned to learn, therefore, that the Home Office is using AI tools in asylum cases. These include the Asylum Case Summarisation (ACS) tool, which uses ChatGPT-4 to summarise asylum interview transcripts, in which individuals outline why they are seeking asylum, and the Asylum Policy Search (APS) tool, which summarises Country Policy and Information Notes (CPINs), guidance documents, and Country of Origin Information (COI) reports.

 AI is not neutral. Errors and bias are core features of AI systems. AI tools are probability machines, "trained" on historic datasets that were themselves compiled from, and shaped by, flawed and discriminatory social patterns and contexts.

 This is well known. For some policymakers it is even an attractive feature: the harm these error-prone systems cause is acceptable collateral damage, worn as a badge of honour to display a 'tough' stance on immigration and asylum seekers.

 The Home Office’s own evaluation of the ACS tool found that 9% of the AI-generated summaries were so flawed they had to be removed from the pilot. And yet these tools are shaping the information upon which life-changing decisions are being made. 

 Worse, those affected do not know that AI is being used and do not have the opportunity to check the changes these tools have made to their personal information and to correct any errors.

 A legal Opinion, commissioned by Open Rights Group, has found that this failure to inform applicants that AI is being used is likely unlawful. The same Opinion finds that the use of these tools fails to meet a number of legal obligations and falls short of the standards set out in the AI Playbook for the UK government.

 I urge you to ask the Home Office:

    • to stop using these AI tools in their current form;

    • to publish the Data Protection Impact Assessment and the Equality Impact Assessment relating to the ACS and APS; and

    • to ensure that any future use of AI in asylum cases is developed only through full transparency and meaningful consultation with affected communities and the organisations working with them, given the sensitivity and potential impact of these tools.

 Automating and dehumanising immigration management is not a process that any enlightened democracy with a commitment to social justice should be engaged in or embarking on, however attractive it might be to those dedicated to making the UK a hostile environment for asylum seekers.

 If you would like to know more about the harms created by these tools, I recommend that you contact Open Rights Group.

 For a more general introduction to the harms AI systems have been enabling, a transcript of my recent spring lecture to Open University final-year computing & communications project students may be found at https://b2fxxx.blogspot.com/2026/03/our-algorithmic-future-utopia-or.html.

 Yours sincerely,

 Ray Corrigan