Algorithms reinforce prejudice on social networks: Black, noisy migrants; white, quiet refugees

75% of the YouTube videos that appear after searching for “immigrants” feature men crossing borders. In 76% of cases, non-white people are shown moving in large groups, which “contributes to their dehumanization” and fosters a “feeling of threat or collapse.” On the same platform, by contrast, a search for the word “refugees” returns mostly videos of white women affected by the war in Ukraine, shown in calm situations. Their faces are clearly visible, and they are not shown crossing borders or protesting.
These are some of the findings of an analysis conducted by Eticas of two influential social networks: YouTube and TikTok. The consultancy, which specializes in algorithmic audits, set out to externally review the platforms’ work (extracting publicly available data, not data provided by the companies themselves). “We wanted to analyze how immigrants and refugees are represented because a lot of educational work has been done in the media about how to portray certain groups and what consequences that can have, but that reflection has not carried over to social networks,” explains Gemma Galdón, executive director of Eticas.
Starting next year, the Digital Services Act (DSA) will require technology companies operating in the EU to undergo independent audits to ensure a safe digital environment in which users’ fundamental rights are protected.
“Our research has shown that there is a problem of biased and unfavorable representation of immigration on YouTube,” the study concludes, arguing that YouTube and other stakeholders must work to remove a range of biases from their algorithmic systems, be more transparent, and engage more with immigrants themselves.
To prepare the study, which covered the US, UK, and Canada, Eticas analysts collected videos on the topic of migration over a period of months (June-July 2022 in the case of YouTube and October-December in the case of TikTok). They did so through different profiles on each platform, to detect any differences in the content suggested by the algorithm: in practice, fake users presenting as pro-immigrant, anti-immigrant, and neutral.

The searches were also conducted from locations with different political leanings, to determine whether this made any difference. The YouTube analysis was carried out from London and Toronto, two cities with very different relationships to immigration and refugees. “We were looking for places with high political significance. Toronto is a large Canadian city with a historic commitment to welcoming immigrants and refugees. London, by contrast, is the capital of a country that has distinguished itself for some time by closing its borders and adopting more aggressive rhetoric around immigration,” explains Galdón.
In the case of TikTok, San Francisco (majority Democratic), Oklahoma City (majority Republican), and Virginia Beach (majority Republican) were selected. The focus was on the US because the study coincided with the midterm elections, in which all members of the House of Representatives and part of the Senate are elected.
Curiously, the results were very similar in all cases: the algorithms barely differentiated the content they showed to users, whether by profile characteristics or by location. This holds both for direct search results and for the content suggested by each platform’s recommendation algorithm.
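Eticas has not released its collection code; what follows is a minimal sketch, assuming a hypothetical `search_videos` scraper, of how this kind of external audit can compare the results served to different fake profiles and locations. Everything here is illustrative: the stand-in function only fabricates video IDs so the overlap computation runs.

```python
import random
from itertools import combinations

PERSONAS = ["pro-immigrant", "anti-immigrant", "neutral"]
LOCATIONS = ["London", "Toronto"]

def search_videos(query, persona, location, n=50):
    """Hypothetical stand-in for the auditor's scraper: in a real audit
    this would browse as the fake profile and fetch the public results.
    Here it fabricates IDs deterministically so the demo runs."""
    rng = random.Random(f"{query}|{persona}|{location}")
    return {f"vid{rng.randrange(300)}" for _ in range(n)}

def jaccard(a, b):
    """Overlap between two result sets: 1.0 means identical results."""
    return len(a & b) / len(a | b)

def audit(query):
    results = {(p, loc): search_videos(query, p, loc)
               for p in PERSONAS for loc in LOCATIONS}
    # Pairwise overlaps near 1.0 would mean the platform barely
    # differentiates users by stance or location -- Eticas' finding.
    for (ka, va), (kb, vb) in combinations(results.items(), 2):
        print(query, ka, "vs", kb, f"overlap = {jaccard(va, vb):.2f}")

audit("immigrants")
audit("refugees")
```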
YouTube: Distortions in the public image
With over 2 billion active users, YouTube is the second most used social platform in the world, behind only Facebook. According to YouTube CEO Neal Mohan, up to 70% of the videos viewed on the platform are suggested by its recommendation system. These suggestions draw on a pool of data that ranges from viewing history and interactions with videos to a user’s age, gender, or location. “As a result, YouTube’s recommendation algorithms could influence how a large number of people around the world perceive migration,” the report’s authors say.
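YouTube has not disclosed how those signals are weighted; purely as an illustration of the kind of feature-based scoring the report describes, here is a deliberately simplified sketch in which every field name and weight is invented:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    watched_topics: dict = field(default_factory=dict)  # topic -> share of watch time
    liked_topics: set = field(default_factory=set)
    country: str = "GB"

@dataclass
class Video:
    topic: str
    popularity: float        # 0..1, global engagement
    local_popularity: dict   # country -> 0..1

def score(video: Video, user: UserProfile) -> float:
    """Toy weighted sum: history and interactions dominate, with a
    smaller contribution from location. All weights are invented."""
    return (0.5 * user.watched_topics.get(video.topic, 0.0)
            + 0.2 * (1.0 if video.topic in user.liked_topics else 0.0)
            + 0.2 * video.popularity
            + 0.1 * video.local_popularity.get(user.country, 0.0))

def recommend(videos, user, k=10):
    return sorted(videos, key=lambda v: score(v, user), reverse=True)[:k]

user = UserProfile(watched_topics={"migration": 0.4})
pool = [Video("migration", 0.8, {"GB": 0.9}), Video("cooking", 0.9, {"GB": 0.5})]
print([v.topic for v in recommend(pool, user, k=2)])  # -> ['migration', 'cooking']
```

The relevant point is that whatever footage dominates a topic’s candidate pool gets amplified for everyone who searches for it; if the available coverage of “immigrants” is skewed, the recommendations inherit that skew.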
Analysis of this platform shows that in the vast majority of immigration-related videos, the faces of the people who appear are unrecognizable. In more than 60% of cases, the protagonists are not individuals but large groups (15 people or more). Only 4% of those appearing under this topic are white, and women are underrepresented (between 1% and 4%). The people who appear in these clips are neither working (in 75% to 84% of cases) nor protesting (73% to 81%).

In the case of refugees, the picture is different. Most have Caucasian features. In more than half of cases their faces are recognizable, a higher proportion of videos feature small groups of one to five individuals, and more women appear than men. There are more videos in which they are not crossing a border than videos in which they are, even though, like migrants, they were forced to do so.
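Eticas has not published its coding sheet; figures like these come from manually annotating each collected video for attributes such as group size, face visibility, and activity, and then tallying the shares. A minimal sketch, with records and field names that are assumptions rather than the study’s actual schema:

```python
# Each collected video is hand-coded on a few attributes. These example
# records and field names are assumptions, not Eticas' actual data.
annotations = [
    {"query": "immigrants", "group_size": 20, "face_visible": False, "crossing_border": True},
    {"query": "refugees", "group_size": 3, "face_visible": True, "crossing_border": False},
    # ...one record per collected video
]

def share(videos, predicate):
    """Percentage of videos for which a coded attribute holds."""
    return 100 * sum(predicate(v) for v in videos) / len(videos)

for query in ("immigrants", "refugees"):
    subset = [v for v in annotations if v["query"] == query]
    print(f"{query}: "
          f"large groups (15+): {share(subset, lambda v: v['group_size'] >= 15):.0f}%, "
          f"faces visible: {share(subset, lambda v: v['face_visible']):.0f}%, "
          f"crossing borders: {share(subset, lambda v: v['crossing_border']):.0f}%")
```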
“Our search and recommendation systems are not designed to filter or downgrade videos or channels based on particular political views,” reads the platform policy the company points to. “In addition, we audit our automated systems to ensure there are no unintended algorithmic biases (for example, by gender). We fix errors as soon as we find them and retrain our systems to make them more accurate.”
TikTok: a lot of entertainment, a little politics
TikTok is the social network that has grown the most in recent years. Especially popular with younger audiences, it shows users short videos selected by an algorithm that does not take into account what their friends, or even their country, are watching. The platform is designed to amplify the most viral content, wherever it comes from and whoever appears in it (as long as it complies with the company’s content standards).
One of the more surprising conclusions of the TikTok analysis is that the Chinese platform avoids politically charged videos about immigrants or refugees. Less than 1% of the content studied develops arguments for or against these groups. Most of the videos on the topic show people telling jokes, cooking, working, making art, or showing off some unusual skill.
Eticas’ work concludes that the Chinese social network’s recommendation algorithm is “not influenced by users’ political preferences,” nor by their location. This contradicts an investigation by The Wall Street Journal, according to which TikTok is able to learn the interests implicitly assigned to bots and serve them tailored content in less than two hours. “This indicates that the degree of personalization of TikTok’s recommendations has been adjusted in the past year, and it underscores how algorithms evolve over time and the need to periodically re-evaluate their impact,” the report reads.
According to a spokesperson for the platform, TikTok does not have specific mechanisms in place to correct potential biases. The spokesperson says the algorithm serves content from a wide range of creators and is committed to diversity of sources, in order to avoid distortions and to reflect the global nature of the social network.