
Can AI push the boundaries of privacy and reach the subconscious mind?


Newswise — Influencing the US election or the UK's political future by combining the personal information posted on Facebook by millions of people with powerful data analysis technology: not long ago, this would have seemed like something out of a sci-fi novel. The 2018 Cambridge Analytica scandal proved that it can happen and that, as a result of advancing technology and machine intelligence, we now face fundamental dilemmas that we never had to think about before.

The neurorights initiative led by the Neurorights Foundation advocates for the recognition of a new set of protections against the challenges posed by these technical advances. Some of these are being debated in connection with the Artificial Intelligence Act currently being negotiated within the EU's governing bodies. Among other matters, this law is to regulate the ability of AI to influence our subconscious (similar to the Cambridge Analytica case, but at much deeper levels).

Ignasi Beltran de Heredia, dean of the Faculty of Law and Political Science at the Universitat Oberta de Catalunya (UOC) and author of the book “Inteligencia artificial y neuroderechos” (Aranzadi, 2023), has just published an open-access article examining the challenges we face as a result of the advances in AI and questioning the EU’s latest bill from the perspective of neuroscience.

 

The risks of giving AI access to our subconscious

According to estimates, only 5% of human brain activity is conscious. The remaining 95% takes place subconsciously: not only do we have no real control over it, we are not even aware that it is happening. As Beltran de Heredia notes in his article, we remain unaware of this extraordinary torrent of neural activity because of the sheer complexity of the interaction between our conscious mind and our subconscious behaviour, and because we have no control over the forces that guide our lives.

However, this does not mean that people cannot be influenced subconsciously. "There are two ways for artificial intelligence to do this," he explained. "The first is by collecting data about people's lives and creating a decision architecture that leads you to make a particular decision. The other – which is currently less developed – involves using applications or devices to directly create impulses that are irresistible to our subconscious mind, generating impulsive responses at a subliminal level."

“As we gradually develop better and more powerful machines and become more closely connected to them, both options will become increasingly widespread. Algorithms will have more information about our lives, and creating tools to generate these impulsive responses will be easier […] The risk of these technologies is that, just like the Pied Piper of Hamelin, they will make us dance without knowing why.”

In Beltran de Heredia's opinion, the field in which we are most likely to see the first attempts to influence human behaviour through AI is that of work, more specifically occupational health. He argues that a number of intrusive technologies are already in use. These include devices that monitor bus drivers to detect microsleep, and electroencephalography (EEG) sensors used by employers to monitor employees' brainwaves for stress and attention levels while at work. "It's hard to predict the future but, if we don't restrict such intrusive technologies while they're still at the earliest stages of development, the most likely scenario is that they'll keep improving and spreading their tendrils in the name of productivity."

 

The (blurry) limits proposed by the EU

The new artificial intelligence regulation currently being discussed by the EU seeks to anticipate the possible future risks of this and other uses of AI. Article 5.1 of the original bill contained an express prohibition on placing on the market, putting into service or using an AI system that influences a person below the level of conscious awareness in order to distort that person's behaviour. However, the amendments introduced since then have gradually diluted the absolute nature of the prohibition.

The current bill, which will serve as the reference for the final wording of the law, bans such techniques only if they are purposefully manipulative or deceptive, if they appreciably impair a person's ability to make an informed decision, causing them to take a decision they would not otherwise have taken, and if they cause significant harm to someone in some way. In addition, the prohibition will not apply to AI systems used for approved therapeutic purposes.

“Under the proposal, the AI ban will apply when there is serious harm and the person ends up doing something they wouldn’t otherwise have done. But that’s an unrealistic standard. If I can’t access my subconscious, I can’t possibly prove what I would’ve done without the stimulus, and I can’t prove the harm either […] If subliminal advertising is now completely banned without qualification, why are we leaving room for subliminal conditioning by artificial intelligence?”

According to Beltran de Heredia, if we leave the door open to our subconscious mind, even for good reasons, we won’t be able to control who has access to it, how it is accessed or the aims of this access. “Some may think that these concerns belong to an unlikely dystopian future. And yet there’s no doubt that we’re already being intruded upon at a depth that was unimaginable only a few years ago and that the public should be given the fullest protection possible. Our subconscious mind represents our most private selves and should be completely sealed from outside access. Indeed, we shouldn’t even be discussing it.”

There’s still much we don’t know about how our brain works and how the conscious and subconscious parts of our mind interact with each other. The brain remains a very elusive organ and, although science is making great strides in this field, we don’t know about many of the ways in which its functioning could be affected by certain stimuli. “We need to be aware of the risk of giving other people and companies access to our inner selves at such deep levels. In the context of the data economy, many public and private institutions are competing for access to our information but, paradoxically, it’s been shown time and time again that individuals place little value on their privacy,” he concluded.

 

This research contributes to Sustainable Development Goal (SDG) 8: Promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all.

 

Article

Beltran de Heredia Ruiz, I. (2023). Algoritmos y condicionamiento por debajo del nivel consciente: un análisis crítico de la propuesta de Ley de Inteligencia Artificial de la Unión Europea. Revista de la Facultad de Derecho de México, 73(286), 621–650. https://doi.org/10.22201/fder.24488933e.2023.286.86406

 

UOC R&I

The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century by studying interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health.

Over 500 researchers and more than 50 research groups work in the UOC’s seven faculties, its eLearning Research programme and its two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).

The university also develops online learning innovations at its eLearning Innovation Center (eLinC), as well as UOC community entrepreneurship and knowledge transfer via the Hubbik platform.

Open knowledge and the goals of the United Nations 2030 Agenda for Sustainable Development serve as strategic pillars for the UOC’s teaching, research and innovation. More information: research.uoc.edu.



