The main principle of internet economics is theft. The internet giants tell us that we can share with other people, but in reality we share with these giants and all their adtech friends. We are sold by them on invisible markets.
Theft as a business principle
Facebook and Google do not only obtain information from the people who have an account with them, their ‘members’. They also collect, for instance, all the information in their members’ address books: personal information of people who have never consented to this ‘sharing’ of their details with Facebook and Google. In this way the nitwits who have my information in their address books share my details with digital rogues.
To my irritation, Microsoft’s LinkedIn keeps asking for access to my address book. This Microsoft company has apparently never heard of professional secrecy (or privacy).
What if your emotions were tracked to spy on you?
A new dimension to these practices of digital theft is that in the future not only our metadata and practical personal data will be grabbed. The next stage is that our emotions will be grabbed and analysed as well. The internet giants have already started with this, and it will grow rapidly.
The European Parliamentary Research Service (EPRS) recently published a briefing on the future of emotion recognition, based on facial recognition (‘FR’) and other biometric methods, with the title “What if your emotions were tracked to spy on you?“:
Recent reports of celebrity singer, Taylor Swift, deploying facial recognition technology to spot stalkers at her concerts raised many eyebrows. What started out as a tool to unlock your smartphone or tag photos for you on social media is surreptitiously becoming a means of monitoring people in their daily lives without their consent. What impact and implications are facial recognition technology applications likely to have, and what can be done to ensure the fair engagement of this technology with its users and the public at large?
The briefing describes the current developments, for instance:
Emotion recognition may also one day be used by recruiters when hiring, or by employers to monitor the moods of employees and adapt the working environment empathetically, or even to track employees’ work engagement patterns.
I don’t think this is a good idea.
It looks as if EPRS is not aware that biometric log-in systems are not secure (read this old article by Schneier about that) when it writes “new applications, such as face-enabled log-in systems, that are not in themselves problematic“. Fortunately, EPRS realises that the new technology poses great risks: it can make mistakes, be biased and restrict personal freedom.
One of the risks I see already looming is that people are forced to adapt to technology, instead of technology adapting to individual persons. This is already happening in organisational environments, where everyone has to work with standardised applications designed for average users; this is the ‘dumbing down‘ effect of technology.
EPRS supports the ethics guidelines for trustworthy artificial intelligence recently issued by a high-level expert group and mentions European plans for regulation. Let’s hope those plans come to fruition and really protect people.
- What if your emotions were tracked to spy on you? European Parliamentary Research Service, March 2019.
- Draft Ethics Guidelines for Trustworthy AI, The European Commission’s High-Level Expert Group on Artificial Intelligence, 18 December 2018.
- In an earlier post on this blog I wrote about thought reading (in Dutch) and referred to a Guardian article, “New human rights to protect against ‘mind hacking’ and brain data theft proposed“, 27 April 2017.
- Human rights in the robot age: challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality, Rathenau Institute, 11 May 2017.