
How countries deal with violent content in cyberspace


According to the Mehr correspondent, in our country, following the murder of a woman in Ahvaz by her husband in recent days and the simultaneous publication of related images and videos in cyberspace, the issue of regulating online content has arisen again. The release of these images provoked various reactions on the one hand and, on the other, offended public sentiment.

Although some officials commented on the incident and banned the publication of these images and videos in cyberspace and the media, what is clear is that online content and the way it is published in our country are not regulated by law and lack a specific framework.

On the other hand, an even larger gap is felt in how we deal with the platforms in this field, because they, especially foreign platforms, do not follow the country's domestic laws.

For several years now, countries around the world have been trying to protect their users by enacting laws against hateful and violent cyberspace content.

Prohibition laws in these countries do not allow such content to spread in cyberspace, and if it is not removed, fines and penalties await the social networks.

Some countries have required technology companies to regulate themselves, while under the laws of others, hateful and violent content must be removed within a limited time after publication; otherwise, the offending platform is fined. What you read below is a set of laws that different countries have enacted in recent years to combat hate speech and violent content in cyberspace.

Technology companies responsible for removing violent content in the United States

In the United States, technology platforms are responsible for regulating themselves. Various social networks such as Twitter, Facebook and Instagram have rules about violent content.

These popular platforms have similar content review policies that prohibit any posting or content that promotes or encourages violence. Posts containing hate speech are also banned in the United States. Large platforms have also included measures such as content verification to limit such content.

For example, violent content, bullying and threats are prohibited on Facebook. This social network sometimes keeps such content if it is in the public interest; otherwise, it removes it.

Twitter has a similar policy. Users can report profiles, posts, pictures, videos and groups containing violent content. YouTube has also banned this type of content.
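The reporting flow these platforms describe (a user flags a post, a reviewer decides whether it violates policy, and the content is removed or kept) can be sketched in a few lines of Python. The class and field names below are hypothetical and only illustrate the general pattern, not any platform's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class ReportReason(Enum):
    VIOLENCE = "violence"
    HATE_SPEECH = "hate_speech"
    HARASSMENT = "harassment"

class Decision(Enum):
    REMOVE = "remove"
    KEEP = "keep"               # e.g. content judged to be in the public interest

@dataclass
class Report:
    content_id: str             # profile, post, picture, video or group being reported
    reporter_id: str
    reason: ReportReason

def review(report: Report, violates_policy: bool, public_interest: bool) -> Decision:
    """Minimal moderation decision: remove violating content unless it is kept
    for public-interest reasons, as described for Facebook above."""
    if violates_policy and not public_interest:
        return Decision.REMOVE
    return Decision.KEEP

# Example: a video reported for violent content that violates policy
r = Report(content_id="video-123", reporter_id="user-456", reason=ReportReason.VIOLENCE)
print(review(r, violates_policy=True, public_interest=False))  # Decision.REMOVE
```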

This type of content is also prohibited on WhatsApp, where publishing illegal, inappropriate, defamatory, threatening, intimidating, harassing, hateful or discriminatory content is forbidden.

The turning point in the regulation of social platforms in the United States was the January 6, 2021 attack on Congress by supporters of former US President Donald Trump. As a result of this event, Donald Trump's accounts on Facebook, Twitter, Instagram, Pinterest, YouTube and other services were blocked. Twitter also blocked QAnon, a violent group advocating extremist conspiracy theories. Parler, a social networking application that claims to promote free speech, was also removed from Amazon Web Services (AWS) hosting; the website subsequently went offline for about a month.

In this regard, there have been discussions and speculation about revising how online content is treated under US law.

British efforts to remove inappropriate content on video-sharing platforms and increase police powers

The UK is currently reviewing and amending its online safety law to cover more cases and offences. Under the amended law, platforms that display illegal content will face large fines or blocking.

Under this law, online harassment is considered illegal. Examples include child abuse, the use of the Internet for terrorist purposes, hate crimes as well as hate speech, cyberbullying and online abuse. Online platforms, apps, software and similar services must manage their content, and anyone who operates an online platform is now required by UK law to remove illegal online content.

For this purpose, platforms must be designed to be more secure, meaning that basic safety features must be taken into account in the design of a website, app or software. Users must also be able to report illegal content to law enforcement and the police.

In October 2021, the United Kingdom enacted new laws relating to video-sharing platforms. The purpose of these laws is to protect users and minors from hate speech and violent videos directed against certain groups. In this regard, Ofcom (the British media regulator) announced new guidelines for platforms such as TikTok, Snapchat, Vimeo and Twitch. These platforms must take appropriate measures to protect users from harmful content, which includes terrorist content, racism and similar material.

But before that, in 2019, Sajid Javid, then British Home Secretary and head of the British working group on serious violence, allocated a £1.38 million budget for the working group, which was to be up and running by the end of May of that year. The 17-member team consisted of police officers and staff tasked with removing online content related to criminal gangs.

In this working group, senior executives of Google and Facebook presented their actions to deal with violent content, including videos that promote violence.

A joint working group of the British national police and the social networks was established within the Metropolitan Police. In this working group, teams dedicated to online content work on investigating, disrupting and enforcing the law against criminal groups.

The task force also identifies dangerous content on social media so that it can be removed permanently. The unit was part of the British government's Serious Violence Strategy, intended to keep young people away from crime and to strengthen the police response.

In Germany, platforms have 24 hours to remove inappropriate content

Germany is also one of the countries that has enacted laws in this field. The Network Enforcement Act (NetzDG), also known as the Facebook Act, was passed to combat hate speech and fake news on social media.

The law requires social networks with more than 2 million users to remove manifestly illegal content within 24 hours and other illegal content within 7 days of it being reported. An offending social network faces a fine of up to 50 million euros. Deleted content must be retained for at least 10 weeks, and platforms must submit transparency reports on illegal content every six months.
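As a rough illustration of how these deadlines combine, the following Python sketch (hypothetical helper names, not an official implementation) computes the removal deadline and minimum retention date for a reported post, assuming the 24-hour, 7-day and 10-week periods described above.

```python
from datetime import datetime, timedelta

# Periods as described above (NetzDG-style rules); illustrative only.
MANIFESTLY_ILLEGAL_DEADLINE = timedelta(hours=24)   # clearly illegal content
OTHER_ILLEGAL_DEADLINE = timedelta(days=7)          # content needing closer review
MIN_RETENTION = timedelta(weeks=10)                 # keep removed content as evidence

def removal_deadline(reported_at: datetime, manifestly_illegal: bool) -> datetime:
    """Return the latest time by which the platform must remove the content."""
    period = MANIFESTLY_ILLEGAL_DEADLINE if manifestly_illegal else OTHER_ILLEGAL_DEADLINE
    return reported_at + period

def retention_until(removed_at: datetime) -> datetime:
    """Return the earliest date until which the removed content must be retained."""
    return removed_at + MIN_RETENTION

# Example: a post reported on 2022-02-01 at noon that is manifestly illegal
reported = datetime(2022, 2, 1, 12, 0)
deadline = removal_deadline(reported, manifestly_illegal=True)
print(deadline)                     # 2022-02-02 12:00:00
removed_at = deadline               # suppose removal happens right at the deadline
print(retention_until(removed_at))  # 2022-04-13 12:00:00
```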

This law was approved in 2017 and has been implemented since 2018.

France’s failed attempt to pass law against inappropriate content

France, meanwhile, passed a law similar to NetzDG in 2019. The law, backed by Emmanuel Macron and his party, required online platforms to remove violent and hateful content identified by users within 24 hours. A platform that violated the law would face a fine of €1.25 million.

But in 2020, the French Constitutional Council (the court that reviews whether laws comply with the constitution) struck the law down. According to the court, the law would push platforms to remove any reported content, regardless of whether it was actually hate speech, simply to avoid fines and punishment.

The Christchurch attack marks the beginning of New Zealand's and Australia's efforts to tackle violent content

New Zealand also has a specific law on the removal of illegal content. In the aftermath of the Christchurch terrorist attack in the country, a mechanism was put in place to deal more explicitly with violent content and hate speech. In that attack, an extremist attacked a mosque, killed about 50 people, and streamed the whole attack live on Facebook.

Tackling extremism is a complex and long-term challenge, so several government agencies, law enforcement bodies, civil society groups and New Zealand experts are all involved.

The digital safety team at the Department of Internal Affairs has a responsibility to keep citizens safe from online harassment and to prevent the spread of harmful content that promotes violence.

One of the responsibilities of this team is to identify the environments in which violent content is created and shared online. The team also has the task of preventing, investigating and, if necessary, prosecuting those who share harmful content in cyberspace.

Another action by the country is the commitment of the government and technology companies to remove terrorist and violent content online.

Also, under a bill submitted to parliament in 2020, live-streaming any illegal content, such as the Christchurch terrorist attack, is a criminal offense, and the government can order platforms to remove online content.

Australia is another country with laws on violent content and hate speech online. In this country, platforms are required to remove abhorrent and violent content or face large fines. The law was passed after the Christchurch terrorist attack in New Zealand, in which Muslims present at a mosque were attacked and killed and the attack was streamed live on Facebook.

India’s widespread efforts to combat violent and hateful content

With 500 million online users and a history of fake news spreading on social networks and messaging apps, sometimes leading to violence, India is one of the countries where content moderation is very important. Content regulation there is governed by a law called the Information Technology Act (ITA).

Using the Internet to spread terrorist content is treated as a cybercrime under the Information Technology Act.

The law, enacted in 2000 and revised in 2005, created a framework for combating cyberterrorism in India. In 2015, the Supreme Court of India ruled on how technology companies must deal with illegal content on their platforms.

Although the ITA does not explicitly address terrorist content, in 2015 it was made clear that using communication devices to share information that is grossly offensive or threatening in nature, along with fake news that creates hatred and enmity, is prohibited. In this regard, technology companies are required to remove such content if they receive an order from a court or the government.

Government and judicial organizations can also ask Internet platforms to remove illegal content.

Internet regulation laws in Singapore

Singapore is one of the major technology hubs in Asia and is, in effect, Asia's counterpart to Silicon Valley. Many large technology companies, including Facebook, Microsoft, Google and YouTube, have headquarters in Singapore.

In October 2016, the country passed the Internet Code of Practice, which provides the basis for the work of Internet service and content providers in Singapore. In addition, an Internet Regulatory Framework has been established in the country, which provides an overview of Singapore's approach to online regulation and is linked to the Internet Code of Practice. The Protection from Online Falsehoods and Manipulation Act (POFMA) was also approved in Singapore in October 2019 to control the spread of fake news through content correction and removal orders. The IMDA (Infocomm Media Development Authority) monitors the implementation of these laws in the country.

The country's legislative framework specifically targets online terrorist content. Of course, the ban on online content that promotes hatred is also a basis for removing content.

All Internet content and service providers in Singapore must comply with the Internet Code of Practice. In case of violation, the IMDA has the power to impose sanctions, including fines on technology companies.
