In September 2017, the European Commission issued a communication paper entitled “Tackling Illegal Content Online: Towards an enhanced responsibility of online platforms” (the “Communication”). These non-binding guidelines focus solely on illegal content which incites terrorism, racism and xenophobia, and not on content which could amount to defamation or infringement of intellectual property rights. Nevertheless, the Communication is a welcome step forward given that a harmonised and coherent approach to removing illegal content does not exist at present in the EU.
The Communication encourages online platforms to take appropriate steps to prevent, detect, disable and remove illegal online content and to do so in a timely manner. It sets out a number of active processes which may be carried out, including:
Close cooperation with law enforcement and the relevant national authorities
Cooperation should be facilitated through the establishment of specific points of contact. This ensures that online platforms can be contacted rapidly and effectively with requests to remove illegal content and, in turn, can alert law enforcement to signs of online illegal activity. To prevent the duplication of effort and notices, law enforcement and other national authorities should also make every effort to cooperate with one another to ensure efficient identification and reporting of illegal content. It is important to note here that the Communication also suggests that online platforms prioritise notifications and alerts of illegal content sent to them by national authorities and law enforcement. Having an established point of contact at both the online platform and the national authority would speed this process up.
Working closely with trusted flaggers
The Communication defines “trusted flaggers” as “specialised entities with specific expertise in identifying illegal content, and dedicated structures for detecting and identifying such content online”. These entities should be used to facilitate the screening and notification process which would result in higher quality notices and faster take-downs.
However, the EU Commission does not set out any additional guidelines as to how these trusted flaggers will remain unbiased, independent and efficient. It has been suggested that should online platforms choose to work with these flaggers, they should ensure that the flaggers are auditable and accountable.
Establishing easy-to-use flagging mechanisms for users
The Communication suggests that simple flagging mechanisms should be implemented on websites which strike a balance between, on the one hand, encouraging users to notify online platforms of illegal content and, on the other hand, preventing the over-notification of content which is merely objectionable rather than illegal.
Investment in automatic detection technologies to facilitate the detection and removal processes
The Communication encourages online platforms to utilise automatic detection technologies, such as artificial intelligence, in order to streamline and facilitate the removal process. While these types of technologies would be an efficient way for online platforms to carry out the notification screening process, a question does arise as to the technologies’ capabilities under the current state of the art. For example, an automated technology might well spot an illegal image, but are existing technical means capable of detecting an underlying message expressed by the poster of an image or in an advert?
Moreover, not only would the technology be required to remove illegal content, it must also be capable of preventing its re-appearance, without affecting an individual’s right to freedom of expression. Several internet platforms, including Google, Facebook and Twitter, have recently come together and signed the Code of Conduct on Countering Illegal Hate Speech Online. This code enabled the creation of a shared database of digital fingerprints, tracking videos uploaded to any of the consortium’s websites in order to ensure that the content in question is not re-uploaded onto another website within the consortium. Ideally, more websites would sign up to the code and become part of a wider database, decreasing the probability that illegal content will be re-uploaded and reducing each online platform’s operating costs too.
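To make the shared-database mechanism concrete, the sketch below shows, in highly simplified form, how participating platforms could check uploads against a common store of fingerprints of removed content. The class and method names are hypothetical illustrations, not the consortium’s actual system, and a plain cryptographic hash is used only for readability; real systems rely on perceptual hashes that survive re-encoding and cropping.

```python
import hashlib


class SharedFingerprintDB:
    """Hypothetical shared store of fingerprints of removed illegal content.

    In practice this would be a service shared across platforms; a plain
    SHA-256 digest only matches byte-identical files, whereas deployed
    systems use perceptual hashing to catch modified re-uploads.
    """

    def __init__(self):
        self._fingerprints = set()

    def fingerprint(self, content: bytes) -> str:
        # Compute a digest of the raw content bytes.
        return hashlib.sha256(content).hexdigest()

    def register(self, content: bytes) -> None:
        # Called when one platform removes an item of illegal content,
        # so that every participant learns its fingerprint.
        self._fingerprints.add(self.fingerprint(content))

    def is_known(self, content: bytes) -> bool:
        # Called by any platform at upload time to block re-uploads.
        return self.fingerprint(content) in self._fingerprints


db = SharedFingerprintDB()
removed_video = b"...bytes of content removed by platform A..."
db.register(removed_video)

print(db.is_known(removed_video))     # re-upload on platform B is blocked
print(db.is_known(b"other upload"))   # unrelated content passes
```

The design point is that only fingerprints, not the content itself, need to be shared, which is what allows the database to grow cheaply as more websites join.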
Although the release of the Communication is a step in the right direction in the fight against illegal online content, more action needs to be taken by national authorities to combat online criminal activity. The Communication seems to propose a duty of care existing between online platforms and their users but fails to emphasise the importance of the national authorities’ role in this endeavour. After all, only States, and not online platforms, are empowered to investigate and prosecute alleged criminal activity.
The full guidance can be found here: https://ec.europa.eu/digital-single-market/en/illegal-content-online-platforms