Algorithms that will choose the next head of government

In 1955, Isaac Asimov published his short story "Franchise". In it he describes how the first electronic democracy used the world's most advanced computer, Multivac, to decide the vote of an entire nation with the intervention of a single human voter.
Although we are not yet in this ominous future, artificial intelligence and data science are playing an increasingly important role in democratic elections. Good examples are the election campaigns of Barack Obama and Donald Trump, the Danish Synthetic Party, and the massive theft of information from the Macron campaign.
Sentiment analysis
One of the first success stories in using big data techniques and social network analysis to tune an election campaign was that of Barack Obama in the 2012 United States presidential election. In his campaign (and many others since), traditional voting-intention surveys, based on phone calls or in-person interviews, were supplemented with the analysis of social networks.
These analyses provide a cheap, near real-time way to gauge voter sentiment. For this purpose, natural language processing (NLP) techniques are applied, in particular those dedicated to sentiment analysis. These methods examine messages posted in tweets, blogs and other media, and try to determine whether the opinions expressed in them are positive or negative towards a particular politician or electoral message.
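To make the idea concrete, here is a deliberately simplified, lexicon-based sentiment scorer. Real campaign analytics rely on trained machine-learning models and far richer lexicons; the word lists and example messages below are invented for illustration only.

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
# The lexicons here are tiny, hypothetical examples, not a real resource.

POSITIVE = {"great", "good", "support", "love", "win", "strong"}
NEGATIVE = {"bad", "corrupt", "lies", "weak", "hate", "scandal"}

def sentiment(message: str) -> str:
    """Label a message as positive, negative, or neutral."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "Great speech tonight, I support this candidate",
    "More lies and scandal from the same corrupt politician",
]
for t in tweets:
    print(sentiment(t))  # prints "positive", then "negative"
```

A production system would replace the word counts with a trained classifier, which is also what lets it cope with negation, irony, and slang that a fixed lexicon misses.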
The main problem they face is sampling bias: the most active users on social networks tend to be young and tech-savvy, and are not representative of the entire population. For this reason, these methods have limitations when it comes to predicting election results, although they are very useful for studying voting trends and the state of public opinion.
The Donald Trump case
Even more troubling than studying emotions on social networks is using them to influence opinion and modulate voting. A well-known example is Donald Trump's campaign in the 2016 US presidential election. Big data and psychographic profiling played a major part in a victory the polls had failed to predict.
This was not mass manipulation: different voters received different messages based on predictions about their susceptibility to different arguments, getting information that was biased, fragmented, and sometimes contradictory with the candidate's other messages. The task was entrusted to Cambridge Analytica, which was later embroiled in controversy over the unauthorized collection of data on millions of Facebook users.
Cambridge Analytica's method was based on psychometric studies by Michal Kosinski, which showed that a profile built from a relatively small number of a user's likes could be as accurate as one drawn up by their family or friends.
The problem with this approach is not the use of technology itself, but the "covert" nature of the campaign, the psychological manipulation of vulnerable voters through direct appeals to their feelings, and the deliberate spreading of fake news via bots. The latter was the case for Emmanuel Macron in the 2017 French presidential election: his campaign suffered a massive email theft just two days before the vote, and a large number of bots spread supposed evidence of crimes contained in the stolen information, claims that were later shown to be false.
Politics and government
No less disturbing than the previous point is the prospect of being ruled by artificial intelligence (AI).
Denmark opened the debate in its recent legislative elections, which featured the AI-led Synthetic Party, fronted by a chatbot called Leader Lars, with aspirations to enter parliament. Behind the chatbot are humans, of course: in particular, the art and technology foundation MindFuture.
Leader Lars was trained on the electoral platforms of fringe Danish parties since 1970, in order to form a proposal that would represent the roughly 20% of the Danish population that does not go to the polls.
As extravagant as the Synthetic Party sounds (with bold proposals such as a universal basic income of more than €13,400 per month), it has galvanized the debate about AI's capacity to govern us. Could a modern, well-trained, well-resourced AI rule us?
If we look at the recent past of artificial intelligence, we see advances following one another at breakneck speed, especially in natural language processing since the emergence of architectures based on transformers. Transformers are huge artificial neural networks trained to generate text, but they are easily adaptable to many other tasks. Somehow, these networks learn the general structure of human language and end up acquiring knowledge of the world through what they "read".
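The core operation these networks repeat millions of times is attention: each word's representation is updated as a weighted mix of the others, with the weights computed from how well a query vector matches each key. Below is a minimal, single-query sketch of scaled dot-product attention in plain Python; all the vectors are made-up toy values, and a real transformer would run this over matrices with learned parameters.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Weight each value vector by how well its key matches the query."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]  # scaled dot products
    weights = softmax(scores)
    # Output is the weighted sum of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three toy "token" representations; the query matches the first key best,
# so the output leans towards the first value vector.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
print(out)
```

Stacking many such attention layers, each with learned query/key/value projections, is what lets these models relate every word in a text to every other word in parallel.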
One of the most advanced and interesting examples, developed by OpenAI, is ChatGPT: a chatbot capable of responding coherently to almost any question formulated in natural language, generating text, or performing complex tasks such as writing computer programs from a few prompts.
Corruption-free, but without transparency
The advantages of using an AI in government would be numerous. Its capacity to process data and knowledge in order to reach a decision far exceeds that of any human being. It would also be free, in principle, from corruption and unaffected by personal interests.
But for now, chatbots are purely reactive: they feed on the information someone gives them and produce answers. They are not really free to think "spontaneously" or to take the initiative. It is more appropriate to see these systems as oracles, able to answer questions such as "what do you think would happen if…" or "what would you suggest if…", rather than as active agents or rulers.
The potential problems and dangers of this type of intelligence, based on large neural networks, have been analyzed in the scientific literature. The fundamental problem is the lack of transparency ("explainability") of the decisions they make. In general, they act as "black boxes": we cannot know what logic they used to reach a conclusion.
And let us not forget that behind the machine are humans, who may have introduced certain biases, consciously or not, into the AI through the texts used to train it. Nor is AI free from producing false information or advice, as many ChatGPT users have been able to experience.
Technological advances allow us to glimpse a future artificial intelligence capable of "ruling us", though for the time being not without basic human control. The discussion should soon move from the technical level to the ethical and social one.
Jorge Gracia del Río is a Ramón y Cajal researcher in Computer Languages and Systems at the University of Zaragoza.
