US Supreme Court Upholds Social Networks’ Protection Over Their Users’ Content | Technology



In two high-profile cases this year, the US Supreme Court has ruled in favor of Twitter, Google and Facebook, which had been sued over messages published by terrorist organizations on their platforms. In one of the cases, the justices sided with the companies, rejecting the claim that hosting such content amounts to cooperation with terrorism. The other was sent back to the lower courts so that they apply the same criteria. Although for now this preserves the protection social networks enjoy over content uploaded by their users, the justices avoided ruling on the scope of that exemption from liability and appear to be reserving the question for a better occasion.

What the Supreme Court has said is that the mere fact that terrorist organizations use social networks is not enough to hold the platforms civilly liable for collaborating with terrorism. The case in which the justices ruled on the merits, Twitter v. Taamneh, was a lawsuit brought by relatives of one of the victims of the terrorist attack on the Reina nightclub in Istanbul, in which 39 people were killed during the New Year’s celebrations in the early hours of January 1, 2017. Although the case bears the name of the social network now owned by Elon Musk, Google and Facebook were also parties alongside Twitter. The lower courts had ruled against the tech companies, which appealed to the Supreme Court and have now prevailed.

In a unanimous opinion written by Justice Clarence Thomas, the justices dismissed the platforms’ liability. This is the key passage of the 38-page ruling: “The transmission of information to billions of people, most of whom use the platforms for interactions that previously occurred by mail, telephone or in public, is insufficient to establish that the defendants knowingly provided substantial assistance and thereby aided and abetted ISIS’s actions. A contrary conclusion would make any communications provider liable for any sort of wrongdoing merely for knowing that criminals were using its services and failing to stop them. That would ignore the typical limits of tort liability and sever the link between assistance and culpability,” the ruling reads.


The justices note that the lawsuit rests largely on the platforms’ inaction, but they reject the idea that such “distant inaction” can be equated with “knowing and substantial assistance that could establish complicity in the attack on [the] Reina [nightclub].” “The broad scope of the plaintiffs’ allegations would necessarily make the defendants liable as abettors of each and every act of ISIS terrorism committed anywhere in the world. The allegations the plaintiffs make here are not the kind of pervasive, systemic and culpable assistance to a series of terrorist activities that could be described as complicity in every terrorist act committed by ISIS.”

The Supreme Court chose to rule on a provision of anti-terrorism law that allows victims of attacks to sue terrorists and their collaborators for damages. The justices decided that the networks cannot be deemed collaborators of terrorist organizations merely because such content is published through them, but they declined to rule on other possible scenarios.

Thus, the ruling does not establish a protection specific to social networks, but a general one: “Bad actors like ISIS may be able to use platforms like the defendants’ for illegal, and sometimes terrible, purposes. But the same could be said of cell phones, email, or the internet generally. Yet we generally do not think that internet or cell service providers incur culpability merely for providing their services to the public at large. Nor do we think that such providers can be said to aid and abet, for example, illegal drug deals brokered over cell phones, even if the provider’s conference-call or video-call features facilitate the sale.”


The scope of another rule under discussion, on whose interpretation the future of the internet depends, therefore remains unclear: the famous Section 230 of the Communications Decency Act of 1996, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In Twitter v. Taamneh, the decision did not turn on that provision, nor did the Court find liability for cooperation with or assistance to terrorists. In the other case affecting the tech companies, Gonzalez v. Google, the Supreme Court issued an unsigned ruling of only three pages, likewise without entering into the interpretation of the controversial provision: “We decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the [court of the] Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.”

This second case examined whether the recommendations generated by YouTube’s algorithm (and, by extension, those of any social network) are protected in the same way as the third-party content itself. Relatives of Nohemi Gonzalez, one of the victims of the Islamic State attacks that shocked Paris on November 13, 2015, at the Bataclan concert hall and other venues in the French capital, sued Google, the owner of YouTube, for hosting Islamic State videos.

When the Supreme Court agreed to hear these two cases last October, it set off alarm bells among tech companies. Google, Twitter, Facebook, Yelp, Reddit, Microsoft and Craigslist were among the companies that warned that searches for jobs, restaurants and products could be curtailed if their platforms had to worry about being sued over their recommendations and the content posted by their users.


Several Supreme Court justices, including conservatives Clarence Thomas and Samuel Alito, had already expressed interest in taking up cases related to content moderation on the internet, but they have not seized this opportunity to establish a general doctrine and appear to be holding back for the future. Tech companies have long been caught in the crossfire between the political parties: Republicans accuse them of practicing censorship with a progressive bias, while Democrats, led by President Joe Biden, attack the shield that absolves them of responsibility when hate speech or misinformation spreads on their platforms.



