I originally intended this post to cover the silencing of both the internet and mobile phones by authoritarian regimes, but decided that, in order to do a very dense subject justice, I would leave the topic of mobile phone interruptions for a future blog and focus only on how governments stifle and cut internet services within their borders and internationally.
Once hailed as the ultimate platform for freedom of expression and democracy, the internet has, over the course of its short history, proven itself to be as susceptible to state controls as any of the traditional forms of information dissemination (Goldsmith & Wu, 2006). By the nature of its reach and speed, it is a larger threat to sovereignty and state control than any other media source has ever been. It is no wonder, therefore, that states with a poor track record of freedom and fairness go to elaborate measures to stifle its influence. Whilst a thorn in the side of totalitarianism, the internet is a necessary evil for those states wanting to make gains in a now totally global and wired economy. China wrestles hardest with this duplicity: it maintains the most elaborate system of internet control and censorship in the world, which is not surprising, as it boasts the most internet users in the world, 389 million (CIA World Factbook, 2011), and an equally grand track record for human rights violations.
Libya, on the other hand, whilst comparable to China in its human rights offences, cannot be compared in its rate of internet saturation: 354,000 users. As stated in the last blog entry, that equates to less than 6% of the population having access to the internet, and its censorship techniques are somewhat less involved.
There are various methods employed by authoritarian (and non-authoritarian) states for controlling content, and access to content, on the internet. Whilst we may visualise the internet as a web in which information can navigate around any roadblocks standing in its way, it is important that we understand two faults with this idea: first, most people who use the internet are not techies or even a little techno-savvy, and so have neither the knowledge nor the language for rerouting information; and second, the internet has international gateways through which information passes from one sovereign state to another, and whilst we think of cyberspace as borderless, the infrastructure which makes it possible is not. Instead, information is passed via chokepoints, nodes and routers, all of which serve as loci of control on the internet information path (Deibert, 2007).
The main techniques for censorship are content analysis, address blocking, take-downs and service attacks:
Content analysis techniques include inclusion filtering, where only a select number of pre-approved sites are allowed through the filter, and exclusion filtering, where sites are restricted through blacklisting. Local-language filtering is more prevalent than, say, filtering of English-language sites. Content analysis works by analysing site and URL content to find accepted or prohibited keywords.
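As a rough sketch of how exclusion filtering operates, the toy Python snippet below checks a requested URL and the page text against a keyword blacklist. The keywords and URLs are invented for illustration and are not drawn from any real filtering product.

```python
# Toy illustration of exclusion filtering: a request is dropped if the URL
# or the page text contains a blacklisted keyword. Keywords and URLs here
# are invented examples, not taken from any real filter.

BLACKLIST = {"opposition", "protest", "democracy"}

def is_blocked(url, page_text):
    haystack = (url + " " + page_text).lower()
    return any(keyword in haystack for keyword in BLACKLIST)

print(is_blocked("http://example.org/weather", "sunny with light winds"))  # False
print(is_blocked("http://example.org/news", "opposition rally planned"))   # True
```

Inclusion filtering is simply the inverse: only requests matching an approved whitelist are let through, and everything else is dropped.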
Programmes such as SmartFilter, Websense & FortiGate are available off the shelf for states' and organisations' censorship needs. These offer fairly blanket approaches to censorship, and because the programmes work by filtering general categories (eg 'politics/opinion'), sites which may be acceptable or even desirable are accidentally filtered out. Ironically, the companies defining these censorship categories and offering & selling these products are not ideologically despotic; in fact they are freedom-loving & American. Contrary to Ronald Deibert's findings (2007), in which he directly links Websense to the Yemeni authorities, Websense's corporate social responsibility document denies selling to regimes intent on silencing their citizens:
We recognize that some governments restrict access to the Internet by their citizens. Websense does not sell to governments or Internet Service Providers (ISPs) that are engaged in government-imposed censorship. Government-mandated censorship projects will not be engaged by Websense. If Websense does win business and later discovers that it is being used by a government, or by ISPs based on government rule, to engage in censorship of the Web and Web content we will remove our technology and capabilities from the project (Websense Social Responsibility Policy, 2011).
How they would execute the removal of their technology and capabilities is not made explicit, but it is certainly an interesting proposition. Whether with Websense, another brand name or some homemade programme, this type of censorship is applied at various institutional levels, from Internet Service Providers (ISPs) to organisations and individual computers.
Address blocking is a national measure which takes place at the international gateway or through ISPs. Routers are configured to block certain Internet Protocol (IP) addresses or domain names.
If you come up against this kind of blockade you are likely to receive an error page; governments that are more transparent may explain their censorship policy on that error page, while others simply reroute you to a different website.
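To make the mechanism concrete, here is a minimal sketch of what a gateway or ISP router is effectively doing: checking each request's destination IP address and domain name against a blocklist, then either serving an error page or silently rerouting. All addresses and domains here are invented examples.

```python
# Toy sketch of address blocking at an international gateway or ISP:
# the destination IP or domain is checked against a blocklist before the
# request is forwarded. Addresses and domains are invented for illustration.

BLOCKED_IPS = {"203.0.113.7"}
BLOCKED_DOMAINS = {"opposition-news.example"}

def route_request(dest_ip, dest_domain, transparent=True):
    if dest_ip in BLOCKED_IPS or dest_domain in BLOCKED_DOMAINS:
        # A "transparent" censor explains the block; others silently reroute.
        if transparent:
            return "error page: access denied under national policy"
        return "redirect -> state-approved-portal.example"
    return "forward to " + dest_domain + " (" + dest_ip + ")"

print(route_request("198.51.100.5", "weather.example"))
print(route_request("203.0.113.7", "opposition-news.example", transparent=False))
```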
Herdict Web, a unique project of Harvard University's Berkman Centre, uses crowd-sourcing to monitor website filtering and blocking.
The OpenNet Initiative (ONI), which has been tracking and monitoring internet censorship techniques worldwide, breaks filtering down into the following categories: political (eg opposition party sites, minority rights sites), social (eg pornographic and/or fleshy sites, religiously sensitive sites) and conflict/security (eg bomb-making sites). In 2006 the ONI described Libya's internet filtering programme as largely political in terms of content and suggested that the level of filtering of these types of sites was substantial. A follow-up study and subsequent report in 2009 revealed that, whilst the type of content being filtered was the same, substantially less filtering was happening. They suggested that this was due to efforts on the part of the regime to move towards more openness. Gaddafi's own son, Seif, complained in 2006 that "in all frankness and transparency, there is no freedom of the press in Libya; actually there is no press, even, and there is no real 'direct people's democracy' on the ground" (Libya Internet Censorship Report, 2009).
Another method of censorship is to remove search results. If governments can gain compliance from search engine providers, they can have undesirable websites omitted from search results, a practice surprisingly common even amongst major search engines with a vested financial interest in the censoring country (Goldsmith & Wu, 2006).
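As a minimal sketch of the idea, and assuming a hypothetical delisting list supplied to the search provider, results from censored domains are simply dropped before the results page is built; the domains and results below are invented.

```python
# Toy sketch of search-result removal: results whose domain appears on a
# censor-supplied delisting list are dropped before results are returned.
# Domains and result entries are invented for illustration.

DELISTED_DOMAINS = {"opposition-news.example"}

def filter_results(results):
    return [r for r in results if r["domain"] not in DELISTED_DOMAINS]

results = [
    {"title": "Weather today", "domain": "weather.example"},
    {"title": "Opposition rally coverage", "domain": "opposition-news.example"},
]
print(filter_results(results))  # only the weather result survives
```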
Take-downs work when authorities have legal power over web content hosts and can simply force those hosts to remove undesirable websites (ONI, 2011).
Another, more pathological means of censorship is induced self-censorship, where fear and ideology enforce censorship at the level of the individual. Authoritarian governments can close down internet cafés for allowing users to surf illicit content, and arrest those involved in producing, facilitating or contributing to it. According to the same ONI report on Libyan internet censorship, "Internet users in Libya have told the Arabic media that security personnel and Internet café operators closely monitor Internet cafés and often harass Internet users. Several Internet cafés have been shut down by security, which has prompted café operators to do the monitoring themselves to avoid being shut down. Internet users also reported that notes are posted in Internet cafés warning users against accessing opposition Web sites" (2009).
The type of censorship increasingly deployed on a large scale in situations of serious political contention and crisis is the service attack. Service attacks, also known as 'Kill Switch' tactics, occur at the internet's chokepoint: the sovereign state. Deibert & Rohozinski have suggested that this type of censorship represents a 'just-in-time blocking' approach:
Just-in-time blocking differs from the first-generation national filtering practices of countries like China and Iran in several significant ways. First, and most importantly, just-in-time blocking is temporally fixed. Unlike the evolving block lists used by national firewalls, just-in-time blocking occurs only at times when the information being sought has a specific value or importance. Usually, this will mean that blocking is imposed at times of political change, such as elections, or other potential social flashpoints (important anniversaries or times of social unrest) (2008).
Service attacks can be loosely described as a complete denial of access. More specifically, there are several methods for executing them: distributed denial of service attacks, cutting the power at the web server's location, sabotaging fibre-optic cables, misconfiguring routing tables and geolocation filters (Deibert, 2007). Richard Stiennon of IT Harvest, an expert in cyberwarfare, says: "The primary means of killing Internet access is to update the primary Internet routers so that all of the IP addresses associated with particular "Autonomous Systems" (AS) are re-routed to nowhere… While that is the most elegant way, a country can also just use a Firewall to limit Internet access such as Myanmar does. China and Australia are other examples. Or, a country could sever the fibre that comes into their territory – drastic and does not stop satellite connectivity".
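To illustrate Stiennon's point about re-routing addresses "to nowhere", the toy simulation below points the prefixes in a routing table at a null next hop, so traffic destined for them is simply discarded. The prefixes and their supposed owners are invented for illustration; this is a sketch of the concept, not a real router configuration.

```python
# Toy simulation of "routing to nowhere": prefixes belonging to a target
# country's networks are pointed at a null next hop, so traffic to them is
# silently discarded. Prefixes and AS numbers are invented for illustration.
import ipaddress

routing_table = {
    "192.0.2.0/24": "upstream-1",     # prefix announced by fictional AS 64500
    "198.51.100.0/24": "upstream-2",  # prefix announced by fictional AS 64501
}

def null_route(prefix):
    routing_table[prefix] = None      # None = black hole: discard traffic

def next_hop(ip):
    addr = ipaddress.ip_address(ip)
    for prefix, hop in routing_table.items():
        if addr in ipaddress.ip_network(prefix):
            return hop
    return None

null_route("192.0.2.0/24")            # the "kill switch" for that network
print(next_hop("192.0.2.10"))         # None: packets now go nowhere
print(next_hop("198.51.100.10"))      # upstream-2: still reachable
```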
Geolocation attacks work when a server denies requests based on the geographic location associated with the requesting computer's IP address. An interesting example of this type of attack backfiring can be found on Stefan Geens's blog, Ogle Earth: 'Oh the irony: Google Earth ban in Sudan is due to US export restrictions', 20th April 2007 (Deibert, 2007), and the follow-up, 'Google Earth coming soon to Sudan, Iran and Cuba', 20th March 2010.
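A minimal sketch of geolocation filtering follows: the server maps the requesting IP address to a country code and refuses requests from embargoed countries. A real service would consult a GeoIP database; the tiny lookup table and country list here are invented stand-ins.

```python
# Toy sketch of geolocation filtering: the server maps the requesting IP to a
# country (real services use a GeoIP database) and refuses embargoed regions.
# The IP-to-country table and the country list are invented for illustration.

IP_TO_COUNTRY = {"203.0.113.7": "SD", "198.51.100.5": "FR"}  # stand-in for a GeoIP lookup
EMBARGOED = {"SD", "IR", "CU"}  # echoing the Sudan/Iran/Cuba example above

def handle_request(client_ip):
    country = IP_TO_COUNTRY.get(client_ip, "unknown")
    if country in EMBARGOED:
        return "403 Forbidden: service not available in your region"
    return "200 OK"

print(handle_request("198.51.100.5"))  # 200 OK
print(handle_request("203.0.113.7"))   # 403 Forbidden
```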
A Distributed Denial of Service (DDoS) attack involves a coordinated effort to attack a site or service by saturating it with communication requests; the outcome is that the site performs unacceptably slowly or cannot be accessed at all. DDoS attacks, whilst enacted by the perpetrators of internet censorship, have also been used by online communities in attempts to retaliate against them.
In the recent Egyptian version of 'Kill Switch' tactics, some experts believe that it was a simple case of the authorities calling the ISPs and telling them to cut the service. Others have suggested that it was more than figuratively a kill switch: an actual breaker switch at the Ramses exchange caused the cessation. The difference between what has happened in Libya and what happened in Egypt is that, as James Cowie of Renesys puts it, Libya has 'throttled' the internet rather than killing it. The internet is still very much available to those within the Gaddafi regime who want the information it provides, but it is being denied to the rest of the country. This is a clear illustration of router misconfiguration at work.
Censorship in the internet age is both convoluted and complex, and is ever evolving in response to circumvention techniques. As Palfrey & Zittrain (2008) put it, "a game of cat and mouse is well underway".
Stay tuned for – Let’s make some noise: techniques for bypassing internet censorship
For any alterations or additions to this article contact communicationcrisis@live.co.uk
For more on this topic visit:
www.opennet.net
www.herdict.org
www.it-harvest.com
www.renesys.com
www.eff.org
Further reading on this topic:
Who Controls the Internet: Illusions of a Borderless world, Jack Goldsmith & Tim Wu, 2006
Access Denied: The Practice and Policy of Global Internet Filtering, Ronald Deibert, John Palfrey, Rafal Rohozinski, Jonathan Zittrain, eds., 2008.
Routledge Handbook of Internet Politics, Chadwick ed, 2009.
Keep your eyes peeled for the Cyber Roundtable.