‘As in Climate Change’: Group of Experts Calls for Global Commission to Monitor Technological Risks



Information technology researchers, from computing to journalism, are calling for the creation of an intergovernmental body to monitor the evolution of technological development. In an article published in Nature, they argue that the threat has characteristics similar to those of the climate crisis, and they therefore call for the creation of an entity modeled on the Intergovernmental Panel on Climate Change (IPCC), in which specialists from fields such as politics, engineering, and ethics would sit together to monitor global information distribution systems in a coordinated manner. These systems include online banking, social media, search engines (such as Google), and generative language models such as ChatGPT.

They are not the only ones raising the alarm. Less than two months ago, Elon Musk, Apple co-founder Steve Wozniak, and historian Yuval Noah Harari, along with thousands of experts, signed an open letter calling for a pause in the uncontrolled race to develop artificial intelligence (AI). Yesterday on Capitol Hill, Sam Altman, co-founder of the company that created ChatGPT, appeared before the Judiciary Committee for the first time to urge swift action, calling for the creation of an international organization to set standards for artificial intelligence, much as was done in the past with nuclear weapons. Geoffrey Hinton, the godfather of this technology, left Google and said: “If there is any way to control artificial intelligence, we must figure it out before it’s too late.”

In the comment published in Nature, the experts argue that the risks posed by global-scale information management, which is in the hands of a few companies, are of the same complexity, scale, and significance as those posed by climate change or environmental degradation. Algorithmic decision-making, they say, can exacerbate existing social biases and give rise to new forms of inequality in many areas. In the case of housing, they note, the algorithms that guide rental property owners “support dynamics similar to those of cartels” in constraining supply and prices. And algorithms that direct police toward potentially high-crime areas, using location data from previous arrests, “may exacerbate existing biases in the criminal justice system.”


With regard to social networks, they consider that the speed at which content is created and shared opens the door to more misinformation and hate speech. Meanwhile, generative AI is already threatening employment structures in entire industries and challenging a social understanding grounded in scientific knowledge. “ChatGPT can threaten public understanding of science by enabling the production of texts containing falsehoods and errors on an industrial scale,” they warn. “The world is not prepared culturally or legally,” they add.

Knowledge and transparency

The goal of the interdisciplinary group they propose would not be to seek international consensus or to draft legislation, but rather to provide “a knowledge base that supports the decisions of governments, humanitarian groups, or even corporations” on a global scale. “Just as bodies such as the United Nations’ Intergovernmental Panel on Climate Change conduct policy-relevant assessments of global environmental change, a similar group is now needed to understand and address the impact of new information technologies on the world’s social, economic, political and natural systems,” they write in the comment, which was signed by Joseph Bak-Coleman, a researcher at Columbia University’s Craig Newmark Center for Journalism Ethics and Security, and Carl T. Bergstrom, professor of biology at the University of Washington, among other experts.

They say such a group would have more influence than independent researchers or nonprofits in persuading big tech companies to be more transparent. In their view, these companies are “deploying a series of tactics” to shape public perception of their products and to hinder outside scientific research, for example by restricting the type of data available for research, which often covers only user behavior rather than the design or operation of the platforms themselves. “As independent researchers, we continually weigh the risks of companies taking legal action against us for core academic activities: collecting and sharing data, analyzing results, publishing articles, and distributing results,” they argue in their comment.


To illustrate, they highlight that in 2021, Meta, the owner of Facebook, sent a cease-and-desist notice to researchers at New York University who had created a browser extension to collect data about targeted ads on the platform. “From conversations with colleagues, we know that others have since been discouraged from doing this kind of work,” they note.


Another example of how social networks promote toxic content came to light when Facebook’s internal documents were made available to the US Securities and Exchange Commission in 2021. They revealed that for three years the company’s algorithm had rated emoji reactions as five times more valuable than likes, even though internal data from that period showed that posts drawing angry emoji reactions were more likely to contain harmful or false content, such as incorrect claims about vaccines.

To date, attempts to manage digital information ecosystems have focused on erecting barriers to protect user data, but they have not shown how to reliably assess and prevent harm. Unlike climate change, which is characterized by an abundance of data, relatively well-understood causes and consequences, and measurable, visible long-term economic damage, the dangers of information technologies remain largely uncharted terrain, essentially because of a lack of transparency on the part of those who control them: “An intergovernmental group representing the interests of UN member states can determine when current levels of transparency do not generate sufficient information.”

They argue that such a panel would have credibility in non-Western countries, which they consider “crucial” because the implications of digital technologies cross borders and manifest themselves in different cultural contexts. “Although countries such as China, Russia, and the United States do not agree on how platforms and services should be regulated or restricted, the consequences of the digital world transcend international borders. Any hope of negotiation between states requires a clearer picture of what is happening and why, and of what policy responses are available.”


You can follow EL PAÍS Tecnología on Facebook and Twitter, or sign up here to receive our weekly newsletter.


