Derek Caelin, Innovation and Data Senior Specialist
A Brave New World
At the start of 2020, 83% of the world’s population lives in closed, repressed, or obstructed civic environments, and civic conditions are deteriorating in many countries. This trend is partly fueled by increasingly tech-enabled, state-run surveillance programs. As the technology necessary to identify and track people in the streets becomes cheaper and more established, states adopt tools that are ostensibly for deterring crime but that also give the government the capacity to track the whereabouts, relationships, and activities of its residents. In a world where people who participate in public protests critical of the government can be recorded and punished, the right of free assembly is a basic freedom under threat.
For activists, civil society organizations, and digital security operatives in Counterpart International’s networks, the implementation of artificial intelligence (AI) surveillance programs in their home countries reflects a real-world hazard to participation in the civic sphere. Some of the people interviewed for this article are from the Innovation for Change (I4C) network, which Counterpart has supported since 2015. Others come from our extended network and requested not to be named because of the challenges and professional limitations they face in their home countries.
The use of private and state-run cameras in public spaces extends back decades, but recent developments in the field of AI, data processing, and data storage have multiplied the potential impact of cameras as surveillance tools. As a recent Carnegie Endowment for International Peace report illustrates, AI surveillance “allows regimes to automate many tracking and monitoring functions formerly delegated to human operators [bringing] cost efficiencies, [decreasing] reliance on security forces, and [overriding] potential principal-agent loyalty problems.” These systems, often employed in city centers, allow the state to track the movements of citizens as they traverse the city and interact with other people.
Surveillance technology takes a noticeable toll when it is abused. In Serbia – a country whose civic space status was recently downgraded by the CIVICUS Monitor to “obstructed” – one digital security expert explained to us that the installation of AI-enabled cameras since 2017 has frightened people who may wish to protest against the state. “…when you have a demonstration, when you have something happen in the city, [the government] can know exactly who was there.” This is particularly worrying for the 230,000 people who are government contractors and employees.
“The government is keeping them on a temporary contract, as long as they do the things that they tell them,” the digital security expert said, “… These people cannot be seen in the streets doing anything, because [the government] will know they are there.” Further inhibiting citizens’ freedom of assembly, government staff have allegedly been compelled to participate in pro-government protests. “Like, ‘go to this specific event that the president is organizing’, or ‘collect signatures for the elections’…,” said the expert. “They are blackmailing for jobs.”
Given the state’s need to provide security for citizens, many countries justify employing some form of surveillance technology. Yet civil society leaders in Counterpart’s network are concerned that the regulations necessary to protect user data are too few, not yet passed, or not enforced in the countries where AI surveillance is employed.
States and companies who wield AI surveillance technology typically couch their actions in terms of security. Referring to its efforts in Serbia, a now-archived 2018 Huawei report on its “Safe Cities” program declared that, “thanks to Huawei’s intelligent technology, police are now able to locate suspects based on stored HD video, improving safety and security, and realizing an overall reduction in the rates of crime.” Yet, these systems often lack the necessary oversight and regulation that would prevent them from being abused.
In Pakistan, the Safe City program, launched in cooperation with the Chinese companies Huawei and LTE, serves a dual purpose. “The goal of the project was to boost … internal security and counter terrorism in the country,” said Arzak Khan, a member of the Innovation for Change network’s South Asia hub who spent 16 years working on digital ID and Safe City platforms for a leading public sector organization in Pakistan. “The hidden goals were of course surveillance of citizens.”
Arjun Venkatraman, Managing Trustee of The Mojolab Foundation and a member of Innovation for Change based in India, points out that laws in his country appear to be more geared towards extending government authority than regulating its actions. “India has a personal data protection bill in parliament for the winter session,” said Venkatraman, “however, that bill appears to be far more about giving government overreaching powers into data, but doesn’t seem to be doing a lot for protection per se.”
Despite this, experts in our network perceive that citizens in their countries are unaware of and largely unconcerned with the spread of the technology. “People generally have very little awareness around the issue of Safe City security cams,” said Arzak Khan about the population in Pakistan.
“People don’t know about it,” said the Serbian digital security trainer. “People don’t understand it in the way that we understand it… People [say], ‘Oh, it’s just a camera’. No, it’s not just a camera, it’s so much more complex, and much more scary.” In India, Arjun Venkatraman described citizens’ awareness of surveillance issues as “much below average.” A digital security specialist in Kyrgyzstan offered the same conclusion: “I think the majority [of people] here don’t care at all,” said the specialist, “…Some people understand what we’re talking about, like an editor, a security specialist, … sometimes an activist, or journalist,” but not the broader public. Trends like this are why Counterpart International has invested in building the digital capacity of members of civil society organizations.
Civil Society Response: Mobilized Citizens, More Protective Policies
But what safeguards should protect against the abuse of facial recognition surveillance? Various solutions from civil society and government bodies have been proposed.
First: Civil society has to play a role in raising public awareness of the impact of surveillance technology. “In view of the Chinese [technology companies],” said the security expert in Kyrgyzstan, “I think the best, best, best way to educate the crowd is to scare them. We should have five, six, ten narrow or emotional cases that expose to people the insecurity of such technologies… something that will really shock the crowd. Maybe in this case people will start to think about it.” In Pakistan, Arzak Khan says that Safe City is “something that needs more awareness campaigns around it… given the alarming increase of CCTV cams for facial recognition and matching, especially given the influx of cheap Chinese hardware and software.” As technology-enabled people become more aware of the risks associated with unregulated facial recognition software, digital security experts emphasize the need to bridge the gap in awareness between technologists and the broader public.
Second: CSOs need training on how to pursue reforms in internet governance and digital safety. “[CSOs should] make noise [and] demand a Data Protection Policy before such [a surveillance] system could be installed,” says Arzak Khan. “At a policy level the use of such systems should be limited, with strict enforcement of data protection and privacy laws.”
Third: There need to be stronger rules and regulations. This is an area where CSOs can focus advocacy efforts. “You put restrictions on how and why you use the system,” said the Serbia-based digital security specialist. “You cannot use it all the time… have logs [and] if somebody tries to abuse [the system], you can have an independent oversight. What I’m saying is these systems can be useful if they are not abused.”
Fourth: Surveillance systems require accountability. “Surveillance should always have checks and balances in it,” says Eric Johnson, the Chief of Party for Internet Freedom at Counterpart, outlining how systems that employ surveillance can be justifiably employed. “When a law enforcement agency wants to tap my phone they should have to get a court order… There should [also] be some way for it to be protested. I would like to say that I should be informed about it, but I don’t think that’s realistic, but suppose I do find out about it: I should be able to contest it in court.”
Fifth: Shift the private sector. “We already signed and adopted the Declaration of Freedom of the Internet…and then we handed it to Facebook, Google and Amazon,” said an India-based technologist, referring to the corporate surveillance system that mirrors – and in some capacities exceeds – that of the state. “So now the only way I see forward…[is to] stay one step ahead in the privacy race and that’s now about behavior training [for the people] rather than tech.” Solutions focused on behavior change should, according to this technologist, encourage alternative, competitive business models that focus on data protection. “I think the hope lies in the SME [small and medium enterprise] sector…[We need] social enterprises that provide ethical alternatives to surveillance and [get] paid for protecting privacy of their user communities.”
Forward unto Uncertainty
As the technology enabling facial recognition surveillance becomes cheaper and more accessible, citizens in both open and closed civic spaces can expect increasing levels of surveillance over time, whether by government actors or commercial entities. The potential utility of these tools to deter crime, improve safety, and offer convenience is clear. For activists in Counterpart’s network, however, it remains to be seen whether the technology will be implemented with clear safeguards to prevent abuse and protect the civil liberties of citizens.