AI Lie Detectors at EU Borders: Balancing Security and Privacy

Jun 16, 2024

British tourists venturing into Europe are poised to encounter an unprecedented level of scrutiny as the European Union prepares to introduce AI-driven ‘lie detector’ tests at its borders. The initiative, designed to bolster security in the post-Brexit era, has sparked widespread apprehension and criticism. As these Orwellian-style checks become a reality, their implications for visa rejections, discrimination, and privacy are profound.

These new measures are part of a broader strategy to enhance border security through advanced technology. On October 6, the EU will begin phasing in the Entry-Exit System (EES), followed by the European Travel Information and Authorisation System (ETIAS), which covers short stays of up to 90 days. This transition will introduce significant changes for British travelers, who will need to navigate a more complex and potentially intrusive screening process.

At the heart of this new regime is AI software trialed under the names iBorderCtrl and TRESPASS. Between 2016 and 2019, trials were conducted in Greece, Hungary, and Latvia, involving avatar-based interviews in which AI analyzed facial expressions and body language to assess the truthfulness of applicants. If the software detected deceit or suspicious behavior, the individual’s file was flagged for further inspection by an immigration officer, potentially leading to refusal of entry.

While the technology promises to enhance operational capacity at the borders, it has faced strong criticism. Patrick Breyer, a German Member of the European Parliament (MEP), has labeled the AI ‘lie detector’ as “pseudoscience,” contending that it will discriminate against people with disabilities or anxious personalities. He asserts, “It will not work,” reflecting a broader skepticism about the reliability and fairness of such technology.

As the EES and ETIAS systems roll out, British travelers will need to apply online at least a month before travel, paying a fee of seven euros (£6). The application process will involve a video interview with an avatar immigration officer, during which the AI will scrutinize eye movements, facial expressions, and body language to detect signs of lying. This level of scrutiny has raised alarms about potential errors and the ethical implications of relying on AI for such critical decisions.

Adding to these concerns is the establishment of the Common Identity Repository (CIR), a vast super-database that will hold 300 million records, including data on terrorists, criminals, asylum seekers, and illegal immigrants. Every British traveler entering the EU will have their data stored in the CIR, adding another layer of complexity and potential risk. Civil rights groups and politicians across Europe have voiced fears that this could lead to widespread rejections of visas and discrimination against individuals with disabilities.

Privacy concerns are further heightened by the inclusion of social media checks in some pilot schemes. This raises the possibility that political or controversial comments on platforms like X or Facebook could result in a travel ban. The integration of social media checks into the screening process underscores the invasive nature of the new measures and the potential for misuse of personal data.

The introduction of AI-driven ‘lie detector’ tests at European borders signifies a substantial shift in how border security is managed. While the technology aims to streamline operations and enhance security, it also presents numerous ethical and practical challenges. Experts warn that the AI software is prone to errors and could unfairly target vulnerable individuals, leading to unjust outcomes.

The criticisms from Breyer and civil rights groups underscore the potential for discrimination and the limits of AI in accurately detecting deceit. The iBorderCtrl and TRESPASS trials demonstrated the technology’s questionable effectiveness, further fueling concerns about its deployment.

The creation of the Common Identity Repository (CIR) adds another layer of complexity. With its vast database of personal information, the CIR increases the risk of data breaches and misuse, raising serious privacy concerns. Travelers are troubled by the prospect of having their data stored in such a repository, fearing the potential consequences of a security lapse.

As the EU continues to tighten its borders, the role of AI in immigration control is likely to expand. The success or failure of the AI-driven ‘lie detector’ tests will depend on their implementation and the ability to address the ethical and practical concerns raised. In the short term, travelers should prepare for increased scrutiny and potential delays as the new systems are phased in.

Looking ahead, more advanced AI technologies could improve the accuracy of these systems, but the ethical implications will remain a significant concern. The EU must balance the benefits of enhanced security against the need to protect individual rights and privacy. Ensuring robust data protection measures for the Common Identity Repository will be crucial to preventing misuse and safeguarding individuals’ privacy.

While AI-driven ‘lie detector’ tests represent a new frontier in border security, their implementation must be carefully managed to address the ethical and practical challenges they pose. As the EU navigates this complex landscape, the future of travel for British tourists and other non-EU nationals will be shaped by the balance between security and individual rights. The journey towards a secure, yet fair, immigration system is fraught with challenges, and the role of AI in this context will continue to be a topic of intense debate and scrutiny.