Letter: The EU AI Act Must Protect People on the Move

The European Union Artificial Intelligence (AI) Act will regulate the development and use of ‘high-risk’ AI, and aims to promote the uptake of ‘trustworthy AI’ whilst protecting the rights of people affected by AI systems.

However, in its original proposal, the EU AI Act does not adequately address and prevent the harms stemming from the use of AI in the migration context. Whilst states and institutions often promote AI in terms of its benefits for wider society, for marginalised communities and people on the move (namely migrants, asylum seekers and refugees), these technologies fit into wider systems of over-surveillance, criminalisation, structural discrimination and violence.

It is critical that the EU AI Act protects all people from harmful uses of AI systems, regardless of their migration status. We, the undersigned organisations and individuals, call on the European Parliament, the European Commission, the Council of the European Union, and EU Member States to ensure the EU Artificial Intelligence Act protects the rights of all people, including people on the move. We recommend the following amendments to the AI Act:

  1. Prohibit unacceptable uses of AI systems in the context of migration:

    Some AI systems pose an ‘unacceptable risk’ to our fundamental rights that can never be remedied by technical means or procedural safeguards. Whilst the proposed AI Act prohibits some uses of AI, it does not prevent some of the most harmful uses of AI in migration and border control, despite the potential for irreversible harm. The AI Act must be amended to include the following as ‘prohibited practices’:

    • Predictive analytic systems when used to interdict, curtail and prevent migration. These systems generate predictions as to where there is a risk of ‘irregular migration’ and can be used to facilitate preventative responses that forbid or halt movement, often carried out by third countries enlisted as gatekeepers of Europe’s borders. These systems risk being used for punitive and abusive border control policies that prevent people from seeking asylum, expose them to a risk of refoulement, violate their right to free movement and present risks to the rights to life, liberty and security of the person.
    • Automated risk assessments and profiling systems. These systems involve the use of AI to assess whether people on the move present a ‘risk’ of unlawful activity or security threats. Such systems are inherently discriminatory, pre-judging people on the basis of factors outside of their control, or on discriminatory inferences based on their personal characteristics. Such practices therefore violate the right to equality and non-discrimination, the presumption of innocence and human dignity. They can also lead to unfair infringements on the rights to work, liberty (through unlawful detention), a fair trial, social protection, or health.
    • Emotion recognition and biometric categorisation systems. Systems such as AI ‘lie-detectors’ are pseudo-scientific technologies that claim to infer emotions on the basis of biometric data, whilst behavioural analytics are used to flag ‘suspicious’ individuals on the basis of the way they look. Their use reinforces a process of racialised suspicion towards people on the move and can automate discriminatory assumptions.
    • Remote Biometric Identification (RBI) at the borders and in and around detention facilities. A ban on remote biometric identification (such as the use of facial recognition) is required to prevent the dystopian scenario in which these technologies are used to scan border areas as a means of deterrence and as part of a wider interdiction regime, preventing people from seeking asylum and undermining Member States’ obligations under international law, in particular the obligation to uphold the right to non-refoulement.
  2. Expand the list of high-risk systems used in migration:

    While the proposal already lists in Annex III certain ‘high-risk’ uses of AI systems in migration and border control, it fails to capture all AI-based systems that affect people’s rights and that should be subject to oversight and transparency measures. To ensure all AI systems used in migration are regulated, Annex III must be amended to include the following as ‘high-risk’:

    • Biometric identification systems. Biometric identification systems (such as mobile fingerprint scanners) are increasingly used to perform identity checks, both at and within EU borders. These systems facilitate and increase the unlawful and harmful practice of racial profiling, with race, ethnicity or skin colour serving as a proxy for an individual’s migration status. Due to the severe risks of discrimination that come with the use of these systems, lawmakers must ensure the EU AI Act regulates their use.
    • AI systems for border monitoring and surveillance. In the absence of safe and regular pathways to EU territory, people will cross European borders via irregular means. Authorities increasingly use AI systems for generalised and indiscriminate surveillance at borders, such as scanning drones or thermal cameras. The use of these technologies can exacerbate violence at the borders and facilitate collective expulsions or illegal pushbacks. Given the elevated risks and broader structural injustices, lawmakers should include all AI systems used for border surveillance within the scope of the AI Act.
    • Predictive analytic systems used in migration, asylum and border control. Systems used to generate predictions as to migration flows may have vast consequences for fundamental rights and access to international protection procedures. Often these systems influence how resources are assessed and allocated in the migration control and international protection contexts. Incorrect assessments about migration trends and reception needs will have significant consequences not only for the preparedness of Member States, but also for the likelihood that individuals can access international protection and numerous other fundamental rights. As such, predictive systems should be considered ‘high-risk’ when deployed in the context of migration.

  3. Ensure the AI Act applies to all high-risk systems in migration, including those in use as part of EU IT systems:

    Article 83 of the AI Act lays out the rules for AI systems already on the market at the time of the legislation’s entry into force. Article 83 includes a carve-out for AI systems that form part of the EU’s large-scale IT systems used in migration, such as Eurodac, the Schengen Information System, and ETIAS. [1] All of these large-scale IT systems – which foresee a capacity of over 300 million records – involve the automated processing of personal and sensitive data, automated risk assessment systems or the use of technology for biometric identification. For example, the EU plans to subject all visa and ‘travel authorisation’ applicants to automated risk profiling technologies in the next few years. Further, EU institutions are currently considering an update to Eurodac to include the processing of facial images in databases of asylum applicants.

    The exclusion of these databases would mean the safeguards in the EU AI Act do not apply to them. This blanket exemption will only serve to decrease accountability, transparency and oversight of AI systems used in EU migration control, and to lessen protection for people impacted by AI systems that form part of the EU’s large-scale IT systems. By exempting these systems from regulatory scrutiny, the EU AI Act would create a double standard in the protection of fundamental rights, depending on a person’s migration status.

    The EU AI Act should be amended to ensure that Article 83 applies the same compliance rules to all high-risk systems and protects the fundamental rights of every person, regardless of their migration status.

  4. Ensure transparency and oversight measures apply:

    People affected by high-risk AI systems need to be able to understand and challenge those systems, and to seek remedies when they violate their rights. In the context of migration, this requirement is both urgent and necessary given the overwhelming imbalance of power between those deploying AI systems and those subject to them. The EU AI Act must prevent harm from AI systems used in migration and border control, guarantee public transparency, and empower people to seek justice. The EU AI Act must be amended to:

    • Include an obligation on users of high-risk AI systems to conduct and publish a fundamental rights impact assessment (FRIA) before deploying any high-risk AI system, as well as throughout its lifecycle.
    • Require authorities to register all high-risk – and all public – uses of AI for migration, asylum and border management in the EU database. Public transparency is essential for effective oversight, particularly in high-risk areas such as migration where a number of fundamental rights are at stake. It is crucial that the AI Act does not allow carve-outs from transparency measures in law enforcement and migration.
    • Include rights and redress mechanisms to enable people and groups to understand, seek explanation, complain and obtain remedies when AI systems violate their rights. The AI Act must provide effective avenues for affected people, or public interest organisations acting on their behalf, to challenge AI systems within its scope that are non-compliant or violate fundamental rights.

Drafted by:

Access Now, European Digital Rights (EDRi), Platform for International Cooperation on Undocumented Migrants, and the Refugee Law Lab.

Endnotes: 

[1] And other EU migration databases, as outlined in Annex IX of the Artificial Intelligence Act.

Signed by:

  1. Access Now
  2. European Digital Rights (EDRi)
  3. Platform for International Cooperation on Undocumented Migrants (PICUM)
  4. Refugee Law Lab, York University
  5. Albanian Media Council, Albania
  6. Alternatif Bilisim (Alternative Informatics Association), Turkey
  7. Aspiration, International
  8. Bits of Freedom, Netherlands
  9. Centre for Information Technology and Development (CITAD), Nigeria
  10. Center for Muslim Rights in Denmark (CEDA), Denmark
  11. Comitato per i Diritti Civili delle Prostitute APS, Italy
  12. Consortium for Refugees and Migrants in South Africa, South Africa
  13. Digitalcourage, Germany
  14. Državljan D / Citizen D, Slovenia
  15. European Anti-Poverty Network, Europe
  16. European Network Against Racism (ENAR), Europe
  17. European Network for the Promotion of Rights and Health among Migrant Sex Workers (TAMPEP)
  18. European Sex Workers’ Rights Alliance (ESWA), Europe
  19. European Center for Human Rights, Europe
  20. European Center for Not-for-Profit Law (ECNL), Netherlands
  21. FEANTSA, the European Federation of National Organisations Working with the Homeless, Europe
  22. FIDH (International Federation For Human Rights), International
  23. Glitch, UK
  24. Global Data Justice project (Tilburg Institute for Law, Technology and Society), Netherlands
  25. Kopanang Africa Against Xenophobia (KAAX), South Africa
  26. KOK – German NGO Network against Trafficking in Human Beings, Germany
  27. Lawyers for Human Rights, South Africa
  28. Ligue des Droits de l’Homme
  29. Migration-Controle.info, Germany
  30. Migrants Organise, UK, France
  31. Moje Państwo Foundation, Poland
  32. Novact, Spain
  33. Homo Digitalis
  34. Panoptykon Foundation
  35. Privacy International
  36. Prostitution Information Center (PIC), Netherlands
  37. Red en Defensa de los Derechos Digitales (R3D), Mexico
  38. Refugees International, United States
  39. Revibra Europe
  40. Statewatch
  41. StraLi for Strategic Litigation, Italy
  42. Taraaz, International

Individuals:

  1. Derya Ozkul, Refugee Studies Centre
  2. Dr Dale T McKinley
  3. Douwe Korff
  4. Tom Neal
  5. Lisa Fleischer
  6. Niovi Vavoula
  7. Rakhal Zaman
  8. Francesca M
  9. Elisa Elhadj
  10. Essia van der Ploeg
  11. Dr. Grace S. Thomson

Banner Photo Caption: View of the Closed Controlled Access Centre on the island of Samos. Photo Attribution – Daphne Panayotatos