First published on libdemvoice.org on 16 February 2014.
Earlier this month, the Government reiterated its intent to censor extremist content online through ISP filtering systems. This is largely a reaction to fears over radical jihadi videos coming out of Syria, heightened by recent estimates that 2,000 European fighters have travelled there. There is particular concern over the influence foreign fighters may have on the young and impressionable upon returning to their countries of origin. Though well-intentioned, government-controlled filtering is problematic for a number of reasons.
Firstly, it raises big questions about what can be deemed ‘extremist’ in theory. Secondly, current filtering technology is quite blunt and, in practice, such measures end up over-filtering, which has many negative consequences. Thirdly, deleting, filtering or censoring unwanted content does not prevent that material from resurfacing rapidly. Even blocked material can easily be accessed through a variety of proxy servers and browser add-ons.
The fear among counter-terrorism researchers is that the more online extremist content is blocked and deleted, the more such content is pushed into the ‘dark internet’, where tracking and information retrieval are nearly impossible. This also makes it much more difficult for researchers to extract data and for counter-extremism practitioners to engage in dialogue and debate.
Within the debate over online censorship, extremist content has been consistently and incorrectly equated with child abuse and rape content, which is already subject to online filtering. There is a society-wide consensus around the illegality of child abuse and rape, and no real debate to be had about such malicious activities. ‘Extremism’, on the other hand, is an easily distorted and often subjective term that becomes highly contentious when defined centrally by government.
That is not to say that the current efforts of the Counter Terrorism Internet Referral Unit (CTIRU) should be discontinued. The CTIRU has taken down large quantities of online content relating to illegal terrorist activities. It needs to be made clear, however, that broad terms with unclear regulatory definitions, such as ‘extremist’, are not synonymous with the precise legal definition of ‘terrorism’ under the UK’s Terrorism Acts.
The Government has chastised online social networks, such as Facebook and YouTube, for allowing controversial content, like recent videos of beheadings from Syria. This is not to say horrific content does not exist online, but in the fight against radicalisation a free internet is a tool rather than an obstacle. Creating large-scale government internet regulations therefore targets a symptom rather than the cause of radicalisation. De-legitimising the extremist narrative should be the real aim. Fighting online extremism through open debate, confrontation and support for counter-extremism content is a far more effective and sustainable long-term approach.
Rather than spending time and money on illiberal and ineffective filtering processes that provide few measurable outcomes, the Government should be providing support for existing counter-extremism practitioners and their efforts.
Educators in schools, mentors in prisons and practitioners in communities should be given the help they need to extend their work to the online sphere. Pro-actively defusing the initial allure of the extremist narrative for young individuals is always more powerful than waiting to react to extremist content online.
* Dr Erin Marie Saltman is Research Project Officer at Quilliam, working on research looking at online trends of radicalisation and how governments and organisations can counter these processes.