Regulate Me: This AI Is Too Big (and May It Be Mine) | Technology

Our systems are too powerful, the leaders of artificial intelligence now say. What we hold in our hands is so great, so revolutionary, that machines could replace us humans, enslave us, even drive the species to extinction. Regulate us now, they say, like arms companies: inspect us, let us work only under license… And, though they don’t say this part, raise barriers for small companies and for collaborative open-source projects; make sure no company or organization can have its own AI system unless it passes through our hands.

Here’s what’s happening: the champions of artificial intelligence themselves – led by the man of the moment, OpenAI chief Sam Altman, along with the CEOs of Google DeepMind and Anthropic – are in a hurry to have their activity regulated. In doing so, they first of all grant themselves great importance: it is pure marketing. It is still a stretch to call what these algorithms do intelligence, and what they feed on – the output of us ordinary people – is not entirely artificial. A frightening artificial general intelligence, one that would absorb all of humanity’s knowledge and surpass all human capabilities, remains a distant dream (or nightmare). But there is no doubt this field is about to take off.

Something similar is happening with the promised web3 (decentralized, democratic, free from the control of big corporations) as happened with web2 (the web of social networks), which was also supposed to empower citizens and instead only strengthened the oligopoly of digital services. What has prevailed so far is the “winner takes all” effect, which besides being a beautiful ABBA song is the rule that has led to an excessive concentration of power in a few companies. That is why, broadly speaking, Google dominates search; Amazon, e-commerce; Microsoft, operating systems and software for computers; Apple, the stylish end of hardware. Facebook (Meta) was one of those winners, nearly hegemonic in social networks, but the rise of rivals such as TikTok and its reckless all-or-nothing bet on the metaverse knocked it out of the elite. Nvidia is now joining the club of trillion-dollar companies, thanks precisely to its work in artificial intelligence.


What is at stake is who will be the winner that takes it all in artificial intelligence. Microsoft, allied with OpenAI (creator of ChatGPT), is well positioned; Google is waking up because its search business is under threat; and Nvidia claims its place among the giants with a low-profile but highly solvent track record in graphics processing and high-performance computing. That is in the West; the Asian giants will want their share of the pie too.

Should artificial intelligence be regulated? Naturally! Let’s not arrive as late as we did with social networks, which have turned into a jungle. Laws and regulations must protect the rights and privacy of citizens, prevent blanket and global surveillance, head off disinformation and political-manipulation campaigns even more effective than those we already suffer, and stop discrimination. In particular, intellectual property will need protection, because artificial intelligence swallows all kinds of information that does not belong to it in order to make it its own. It is not only creators’ copyright that is at risk – already battered by piracy at the turn of the century; your private data and your profile picture are yours too, and no app should be able to capture them.

And one of the thorniest questions to settle is which decisions can be entrusted to AI and which cannot: should we let machines decide on hiring, on granting mortgages, on a prisoner’s parole? Do we allow military or police machines to choose whether to shoot at a target? These are all urgent debates, and they should lead to swift decisions. But is it necessary to decree, through licenses, that only a handful of large companies may work on AI? On the contrary: legislation should stimulate competition rather than repeat the mistakes of the past.


Some argue that we will not be able to regulate artificial intelligence very far because not even its engineers fully understand how a machine that learns on its own works. A weak argument: there is no need to dig into the bowels of overly complex programs; it is enough to examine (evaluate, audit) their results. And for now, chatbots like ChatGPT surprise us with their fairly natural use of language (though they do it better in English), but with little else. ChatGPT does not give reliable information, it makes up much of what it says, and it commits gross errors that would be unacceptable in any profession. And AI, as is well known, inherits human biases through the information and criteria it is fed: biases of gender, race, class and many more.

The doomsaying that imagines a tyranny of machines in a dystopian future sounds terrifying, but it serves more mundane interests. This debate about the end of the world distracts us from the abuses these still-primitive technologies are committing right now, including the ever more obvious appropriation of other people’s talent. Let’s regulate artificial intelligence, of course. But not at the dictate of its owners.

Ricardo de Querol is the author of The Great Fragmentation (Arpa).

You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.
