On 13 October 2019 Privacy International (PI) published the article How financial surveillance in the name of counter-terrorism fuels social exclusion. Their article starts with:
The global counter-terrorism agenda is driven by a group of powerful governments and industry with a vested political and economic interest in pushing for security solutions that increasingly rely on surveillance technologies at the expense of human rights.
In the article, PI welcomes the report of the UN Special Rapporteur on counter-terrorism and human rights, which was submitted to the UN General Assembly.
One of the focuses of the Rapporteur is the Financial Action Task Force (FATF), an international body that is described as follows:
The FATF is an exclusive, non-transparent, State-created forum to which civil society and UN human rights entities have little or no consistent access.
On the nature and mandate of FATF:
The FATF’s mandate contains no references to international law, international human rights law or international humanitarian law. However, laws and policies related to the standards set up by the FATF address issues such as criminalizing and prosecuting terrorist financing, targeted financial sanctions, tackling the risk of abuse of the not-for-profit sector for terrorist financing purposes and, thus, engage human rights at multiple levels. Their impact is all the more significant as States generally adopt domestic laws and policies that enable them to implement FATF standards, thereby leading to national ‘hardening’ of these otherwise soft law standards. In the Special Rapporteur’s view, human rights implications linked to the development and implementation of these standards require sustained and in-depth attention.
Financial surveillance and profiling
FATF’s recommendations entail surveillance of the financial data of people and organisations. PI writes:
Financial data is some of the most sensitive data about people, revealing not only their financial standing but also factors like family interactions, behaviours and habits, and the state of their health, including mental health. While monitoring and regulating financial transactions are important for investigating and preventing terrorist acts and other serious crimes, it is essential that it is done in a way that does not endanger human rights.
Interferences with human rights and the capabilities of surveillance in this sector are many, but generally fall into the following stages:
* information requirements placed upon individuals and organisations, including identity documentation for opening and using accounts, and requirements to explain the reasons for financial transactions (customer due diligence);
* generation of profiles and suspicious transaction reports on individuals’ and organisations’ activities based on the characteristics of the transactions;
* sharing of these reports and other financial data with Financial Intelligence Units, who then sometimes share data with law enforcement agencies;
* bulk sharing of and access to data by government authorities, such as when the U.S. intelligence services gained access to SWIFT without any safeguards, or when generalised reporting to tax authorities takes place.
These are often mandatory requirements that are not limited to investigation-led activities. In this sense, financial surveillance is markedly different from other forms of surveillance – where interference with privacy must be on a case-by-case basis and authorised by an independent competent authority. Financial surveillance actively monitors transactions, generates intelligence on these transactions, and shares data based on how the sector identifies ‘suspicious activity’, as opposed to being led by a law enforcement investigation. Another difference is the key role played by the private sector (including financial institutions, but also involving state agents and other actors).
The impact of rules surrounding money laundering and terrorist financing extends far beyond the financial sector. FATF’s requirements on customer due diligence (KYC) lead to interference with privacy and other human rights, as well as social exclusion, PI concludes. The large-scale collection of confidential information raises the risk of abuse and data breaches.
Report by the Special Rapporteur
In the summary of his report on the digital welfare state, the UN Special Rapporteur on extreme poverty and human rights warns:
The digital welfare state is either already a reality or is emerging in many countries across the globe. In these states, systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish. This report acknowledges the irresistible attractions for governments to move in this direction, but warns that there is a grave risk of stumbling zombie-like into a digital welfare dystopia. It argues that Big Tech operates in an almost human rights free-zone, and that this is especially problematic when the private sector is taking a leading role in designing, constructing, and even operating significant parts of the digital welfare state. The report recommends that instead of obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be on how welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged.
Readers will find familiar subjects in the report, such as:
- identity verification
- fraud prevention and detection
- risk scoring and need classification
- the role of the private sector, the reluctance of many governments to regulate the activities of technology companies, and the strong resistance of those companies to taking any systematic account of human rights considerations
The conclusions of the report are interesting, e.g.:
72. But as humankind moves, perhaps inexorably, towards the digital welfare future it needs to alter course significantly and rapidly to avoid stumbling zombie-like into a digital welfare dystopia. Such a future would be one in which: unrestricted data matching is used to expose and punish the slightest irregularities in the record of welfare beneficiaries (while assiduously avoiding such measures in relation to the well-off); evermore refined surveillance options enable around the clock monitoring of beneficiaries; conditions are imposed on recipients that undermine individual autonomy and choice in relation to sexual and reproductive choices, and in relation to food, alcohol and drugs and much else; and highly punitive sanctions are able to be imposed on those who step out of line.
On the tech-optimists, the Rapporteur writes: “There are a great many cheerleaders extolling the benefits, but all too few counselling sober reflection on the downsides”.
The Rapporteur concludes that the technology industry is currently oriented towards gadgets for the rich rather than towards improving conditions on earth. It is high time that technology is developed in the public interest.
The dangers of FATF’s recommendations and their application
The risks of FATF’s recommendations and their application around the globe are not limited to the area of counter-terrorism; the same applies to the anti-money laundering (AML) measures. Nor are vulnerable and disadvantaged citizens the only ones harmed. Counter-terrorism and AML measures are also harmful for developing countries, for eastern Europe (as already made public by MONEYVAL), for small and medium-sized companies and other organisations, and for many other groups of citizens and organisations.
It is time to wake up.
Privacy International:
- How financial surveillance in the name of counter-terrorism fuels social exclusion, 13 October 2019.
- Subjects: scrutinising the global counter-terrorism agenda; counter-terrorism; fintech.
UN Human Rights:
- World stumbling zombie-like into a digital welfare dystopia, warns UN human rights expert, 17 October 2019, Office of the High Commissioner for Human Rights (UN Human Rights).
- Report (MS Word) of the UN Special Rapporteur on extreme poverty and human rights.
- Page on the Rapporteur.
- The Guardian on the report: ‘Digital welfare state’: big tech allowed to target and surveil the poor, UN is warned, 16 October 2019.
- On 16 October 2019 UN Human Rights published the article The Netherlands is building a surveillance state for the poor, says UN rights expert.
Profiling and use of artificial intelligence in general:
- Paul Nemitz, Profiling the European Citizen: why today’s democracy needs to look harder at the negative potential of new technology than at its positive potential, 2018.
- The Guardian started a series of articles on ‘automating poverty’, exploring how governments use AI to target the vulnerable. Some articles: How Bristol assesses citizens’ risk of harm – using an algorithm; One in three councils using algorithms to make welfare decisions; Digital dystopia: how algorithms punish the poor.
- Artificial Intelligence Has Become A Tool For Classifying And Ranking People, Simon Chandler (Forbes), 4 October 2019.
- French plan to scan social media for tax fraud causes alarm, The Guardian 1 October 2019.
On this blog:
- Articles on this blog on compliance-exclusion and de-risking.
Addition 18 October 2019
UN-Bericht kritisiert Einsatz neuer Technologien in Sozialsystemen (in German: ‘UN report criticises the use of new technologies in social systems’), Christopher Hamich on Netzpolitik, 17 October 2019.
Human Rights Watch: UN: Protect Rights in Welfare Systems’ Tech Overhaul, 17 October 2019.
Big Tech Is ‘Almost Human Rights-Free Zone’ UN Report Warns, Emma Woollacott, Forbes, 16 October 2019.
In Dutch: Tweakers, Sargasso.
Profiling the rest of the Netherlands, for example by banks on the basis of the #Wwft, is likewise contrary to human rights https://t.co/wHYEzt0EUY
— Ellen Timmer (@Ellen_Timmer) 17 October 2019
Addition, 30 January 2020
Read also: How digital financial services can prey upon the poor, The Economist, 29 January 2020.