An agreement on Tuesday by four major U.S. internet companies to block illegal hate speech from their services in Europe within 24 hours shows the tight corner the companies find themselves in as they face mounting pressure to monitor and control content.

The new European Union “code of conduct on illegal online hate speech” states that Facebook Inc, Google’s YouTube, Twitter Inc and Microsoft will review reports of hate speech in less than 24 hours and remove or disable access to the content if necessary.
http://in.reuters.com/article/socialmedia-hatespeech-eu-idINKCN0YN33X

Also see: Facebook, Twitter, YouTube, Microsoft back EU hate speech rules
Facebook, Twitter, Google’s YouTube and Microsoft on Tuesday agreed to an EU code of conduct to tackle online hate speech within 24 hours in Europe.

EU governments have been trying in recent months to get social platforms to crack down on rising online racism following the refugee crisis and terror attacks, with some even threatening action against the companies.
http://in.reuters.com/article/facebook-twitter-hatecrime-idINKCN0YM0WB

European Commission and IT Companies announce Code of Conduct on illegal online hate speech
- The IT Companies, taking the lead on countering the spread of illegal hate speech online, have agreed with the European Commission on a code of conduct setting the following public commitments:
- The IT Companies to have in place clear and effective processes to review notifications regarding illegal hate speech on their services so they can remove or disable access to such content.
- The IT Companies to have in place Rules or Community Guidelines clarifying that they prohibit the promotion of incitement to violence and hateful conduct.
- Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and where necessary national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.
- The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary.
- In addition to the above, the IT Companies to educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines. The notification system itself could be used as a tool to do this.
- The IT companies to provide information on the procedures for submitting notices, with a view to improving the speed and effectiveness of communication between the Member State authorities and the IT Companies, in particular on notifications and on disabling access to or removal of illegal hate speech online. The information is to be channelled through the national contact points designated by the IT companies and the Member States respectively. This would also enable Member States, and in particular their law enforcement agencies, to further familiarise themselves with the methods to recognise and notify the companies of illegal hate speech online.
- The IT Companies to encourage the provision of notices and flagging of content that promotes incitement to violence and hateful conduct at scale by experts, particularly via partnerships with CSOs, by providing clear information on individual company Rules and Community Guidelines and rules on the reporting and notification processes. The IT Companies to endeavour to strengthen partnerships with CSOs by widening the geographical spread of such partnerships and, where appropriate, to provide support and training to enable CSO partners to fulfil the role of a “trusted reporter” or equivalent, with due respect to the need to maintain their independence and credibility.
- The IT Companies rely on support from Member States and the European Commission to ensure access to a representative network of CSO partners and “trusted reporters” in all Member States to help provide high-quality notices. The IT Companies to make information about “trusted reporters” available on their websites.
- The IT Companies to provide regular training to their staff on current societal developments and to exchange views on the potential for further improvement.
- The IT Companies to intensify cooperation between themselves and other platforms and social media companies to enhance best practice sharing.
- The IT Companies and the European Commission, recognising the value of independent counter speech against hateful rhetoric and prejudice, aim to continue their work in identifying and promoting independent counter-narratives, new ideas and initiatives and supporting educational programs that encourage critical thinking.
- The IT Companies to intensify their work with CSOs to deliver best practice training on countering hateful rhetoric and prejudice and increase the scale of their proactive outreach to CSOs to help them deliver effective counter speech campaigns. The European Commission, in cooperation with Member States, to contribute to this endeavour by taking steps to map CSOs’ specific needs and demands in this respect.
- The European Commission, in coordination with Member States, to promote adherence to the commitments set out in this code of conduct among other relevant platforms and social media companies as well.
The IT Companies and the European Commission agree to assess the public commitments in this code of conduct on a regular basis, including their impact. They also agree to further discuss how to promote transparency and encourage counter and alternative narratives. To this end, regular meetings will take place and a preliminary assessment will be reported to the High Level Group on Combating Racism, Xenophobia and all forms of intolerance by the end of 2016.

Background

The Commission has been working with social media companies to ensure that hate speech is tackled online in the same way as in other media channels.

The e-Commerce Directive (article 14) has led to the development of take-down procedures, but does not regulate them in detail. A “notice-and-action” procedure begins when someone notifies a hosting service provider – for instance a social network, an e-commerce platform or a company that hosts websites – about illegal content on the internet (for example, racist content, child abuse content or spam) and is concluded when the hosting service provider acts against the illegal content.

Following the EU Colloquium on Fundamental Rights in October 2015 on ‘Tolerance and respect: preventing and combating Antisemitic and anti-Muslim hatred in Europe’, the Commission initiated a dialogue with IT companies, in cooperation with Member States and civil society, to see how best to tackle illegal online hate speech which spreads violence and hate.

The recent terror attacks and the use of social media by terrorist groups to radicalise young people have given more urgency to tackling this issue. The Commission already launched the EU Internet Forum in December 2015 to protect the public from the spread of terrorist material and from terrorist exploitation of communication channels to facilitate and direct their activities.
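For illustration only, the notice-and-action flow described above, combined with the 24-hour review target from the code of conduct, can be sketched as a simplified, hypothetical workflow. All class names, fields and function names below are invented for the example; no platform's actual moderation system is described here.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Code-of-conduct target: review the majority of valid notices within 24 hours.
REVIEW_DEADLINE = timedelta(hours=24)

@dataclass
class Notice:
    """A hypothetical removal notification submitted to a hosting provider."""
    content_id: str
    reason: str
    received_at: datetime
    valid: bool = True  # whether the notice meets formal requirements

def review_notice(notice, violates_rules):
    """Simplified notice-and-action flow: a valid notice is reviewed
    against the platform's rules (and, where necessary, national law
    transposing Framework Decision 2008/913/JHA); content is removed
    or access disabled only if it violates them."""
    if not notice.valid:
        return "rejected: invalid notice"
    deadline = notice.received_at + REVIEW_DEADLINE
    action = "remove or disable access" if violates_rules(notice) else "no action"
    return f"{action} (review due by {deadline.isoformat()})"

# Example: a valid notice about content found to violate the rules
notice = Notice("post-123", "incitement to violence", datetime(2016, 5, 31, 9, 0))
print(review_notice(notice, lambda n: True))
# → remove or disable access (review due by 2016-06-01T09:00:00)
```

The `violates_rules` callable stands in for the dedicated review teams mentioned in the commitments; in practice that judgment is made by humans against each company's own Rules or Community Guidelines, not by code.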
The Joint Statement of the extraordinary Justice and Home Affairs Council following the Brussels terrorist attacks underlined the need to step up work in this field and also to agree on a Code of Conduct on hate speech online.

The Framework Decision on Combating Racism and Xenophobia criminalises the public incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin. This is the legal basis for defining illegal online content.

Freedom of expression is a core European value which must be preserved. The European Court of Human Rights has set out the important distinction between content that “offends, shocks or disturbs the State or any sector of the population” and content that contains genuine and serious incitement to violence and hatred. The Court has made clear that States may sanction or prevent the latter.