Félix Tréguer: “Surveillance is growing massively with AI techniques”


Algorithmic video surveillance, discriminatory risk scores deployed by the CAF (France's family benefits agency), communications surveillance… The risks of abuse associated with AI are numerous. Associations like La Quadrature du Net monitor the sector and defend fundamental freedoms in the digital environment.

Since 2008, La Quadrature du Net* has defended citizens' fundamental freedoms in the digital world. Initially focused on the web, this political association has since broadened its scope to the entire digital sector and fights, among other things, against censorship and surveillance, while promoting a free and decentralized Internet.

An associate researcher at the CNRS Center for Internet and Society, Félix Tréguer is a founding member of La Quadrature du Net and the author of the book Counter-History of the Internet. For Techniques de l'Ingénieur, he discusses the risks of artificial intelligence for our democracy and calls for deep reflection on our use of digital technology.

Techniques de l'Ingénieur: What is La Quadrature du Net?

Félix Tréguer, researcher at the CNRS and member of La Quadrature du Net.


Félix Tréguer: La Quadrature du Net is a political association whose role is to defend an emancipatory vision of the web and respect for human rights in everything related to digital technology. It was created by activists who assert the possibility of a democratic form of computing, controlled by its users, that leaves ample room for non-commercial communications and services.
Initially focused on web-related topics such as copyright and net neutrality, in 2019 we broadened our scope to topics that concern digital technology more generally. For example, our Technopolice initiative documents the incorporation of AI into police surveillance practices.

Concretely, we engage in citizen lobbying: we gather information, produce technical, political, and legal analyses, meet with French and European parliamentarians, and take legal action when necessary.

What topics will you work on in 2024?

Our Technopolice work is gaining momentum with the 2024 Olympics and the experimentation with algorithmic video surveillance authorized by law. This form of surveillance has been used illegally for years and enters an experimental legislative framework this year. Another area of work concerns defending the right to encrypted communications against state surveillance. There has been great progress in democratizing these techniques, which have been deployed by WhatsApp for example, but they remain criminalized on the grounds that they interfere with police surveillance of communications. It seems important to us to recall why encryption is fundamental and to warn about growing digital surveillance.

We also want to push for interoperability in order to break the monopolies of the large, toxic social networks that dominate our communication spaces. Finally, we will continue to denounce anti-fraud practices and the use of scoring algorithms in organizations like the CAF. They use risk scores to flag beneficiaries deemed likely to commit fraud, but these systems are intrinsically discriminatory and redirect controls toward people already in very precarious situations. We will work to have this type of practice banned.

What abuses have you observed with AI?

We have worked a great deal on police AI, and we are concerned by the very high speed at which innovations in this area are being deployed, because our societies are poorly equipped to deal with them. Concretely, surveillance is growing massively with AI techniques, and these systems have a “black box” effect because we do not know exactly how they work. Thus, in recent years, predictive policing has been deployed in total opacity: crime statistics are cross-referenced with socio-demographic data to predict the areas where incidents are likely to occur. Yet, as with the CAF's risk scores, there is a good chance that the risk factors correlate with discriminatory variables.

Now take algorithmic video surveillance, that is, the idea of automating video surveillance to automatically identify certain people according to specific criteria. If these techniques had been deployed in the 1940s, it would not have been possible to organize clandestine resistance networks against Nazism. Sometimes democracy survives precisely because surveillance infrastructure is absent. These technologies, however, allow permanent and exhaustive surveillance of public space, which is why they seem fundamentally undemocratic to us. At the same time, we have observed total laissez-faire from the CNIL on these issues.

How does AI reinforce power relations in our societies?

Contrary to the original idea of the web and the personal computer, which was to decentralize access to computing resources, digital technology has ended up back in the hands of a few large organizations. This is particularly true of AI. Even if alternatives emerge, they cannot yet compete with the large players, who have access to enormous quantities of data and computing capacity. Computing centralizes power: it inscribes social relations in technical devices that are very difficult to understand and therefore to criticize.

Another aspect that has been largely overlooked is the ecological cost of these systems. Digital technology has become an essential cog in an industrial society with massive ecological impacts, one that threatens the survival of many species. Today we are adding computing at every level of society, when the original utopia was simply to allow humans to communicate. This comes with enormous energy costs and the exploitation of extremely rare mineral resources, with all the geopolitical disorder that entails. Computing has thus become a machine in the service of power, one that only amplifies the disorders of a society saturated with relations of domination.

What room for maneuver do we have as civil society?

That is the problem of a society that is not truly democratic: collectively, we have no real control over these trajectories. When the CAF decides to use algorithms for risk scoring, there is no discussion, no debate in Parliament, not even any public communication. The same goes for police AI, which was developed illegally. And once the technologies are in place, citizens become so used to them that they do not want to go back. We live under a political regime in poor health, one that makes decisions aligned with capitalist interests.

That is why the work we do at La Quadrature du Net is very important. But even if we manage to win a few battles, I feel we are losing the war, because the digital surveillance we are fighting keeps advancing. It is quite difficult to create mass mobilization because these questions remain technical. I think we should do more to politicize computer scientists and technicians. These people run a crucial part of the global political and economic system, and their voices could tip the scales.

What are your proposals for a more ethical digital world?

We need to rethink our practices and shed our dependencies. Among the new projects we would like to launch is a reflection on low-tech, less energy-intensive digital technology. There is already a great deal of interesting thinking on this in academia and activist circles. For a start, we could remove digital technology from the many places in society where it is not needed. We must also not forget that machines remain fragile and create vulnerabilities in sectors that function only thanks to digital technology. We would also have to rethink our devices, with more rudimentary machines, to return to the initial objective: communication between humans.

Then we could question our own uses and move away from the centralized social network model, where debates are polarized and hate speech is rewarded. This raises questions such as: how do we create communities around the web? How do we exchange quality information? It is a matter of rethinking the web without the tools imposed by GAFA, and of moving away from the greenwashing discourse common in the sector.

What should we say to people who see such restrictions as obstacles to scientific progress?

Scientific research is not an autonomous sphere of reflection whose consequences are separate from the social and economic spheres. We cannot work in AI while telling ourselves that the energy question is not our concern. Such arguments produce a form of collective irresponsibility among scientists with regard to their own work. Science and technoscience are part of the harmful logics of our society, particularly the ecological ones, so it seems quite logical to me that actors in this sector should take up these issues and end their passive or active complicity with these trends.

What do you think of regulatory initiatives such as the upcoming AI Act in the European Union, or the proposals from the AI Safety Summit which took place last November?

Given the controversies around an AI that could replace certain professions, there are attempts at regulation. But I regret that the debate is dominated by the prospect of a distant threat, because it keeps silent about all the harm AI is causing today. The AI Act is probably well-intentioned, but I see it above all as a way of reassuring the public and strengthening the social acceptability of a controversial technology by proposing a few safeguards. Obviously we should manage to regulate digital technology through law, but I think we should first undertake a genuine reflection on whether these technologies are really useful to us. For the moment, the regulations around AI look to me like communication gestures whose effects remain marginal relative to what is at stake.

Interview by Alexandra Vpierre


* “La Quadrature du Net” can be translated as “Squaring the Net”, a play on the French expression for squaring the circle.
