So far, the removal of extremist content has largely been voluntary, with companies such as Facebook and YouTube taking the initiative to remove it themselves. Of course, extremist content can find its way onto all kinds of websites, not just Facebook, YouTube, or Twitter, which means that whether a company enacts similar policies is entirely up to that company.

However, over in the EU, lawmakers appear to be considering a law that would fine social media platforms if they do not remove extremist content in a timely manner. This is according to draft regulations obtained by the Financial Times (via TNW), which are expected to be published next month.

The draft suggests that fines could be levied on companies that do not remove the content within an hour of it being posted. As TNW notes, this marks a change in stance, as it was previously up to the platforms themselves to decide whether to remove such content. Since the law has yet to be formally proposed or put into effect, it remains to be seen how effective it will be, although we imagine financial penalties are usually a good motivator for companies to act swiftly.

That being said, how these companies would detect extremist content quickly enough is unclear. There are algorithms in place that help filter certain content, but whether those filters are good enough to catch every post remains to be seen. Meanwhile, over in the UK, the government has developed an AI capable of detecting extremist content online.
